\section{Introduction}
An important problem in signal processing, particularly in the fields of compressed sensing and sparse coding, is to find the ``sparsest'' solution of a linear system $\Phi x =b$, where $\Phi \in \mathbb{R}^{m \times n}$ with $m<n$; that is, a solution with as many null coordinates as possible. This problem has applications in several areas, such as medical imaging, error correction, digital cameras and wireless communication \cite{qaisar_2013}. Sparsity may be measured by the $\ell_{0}$ pseudo-norm $||x||_{0}$, which counts the number of nonzero coordinates. The problem of interest is then: \begin{equation} \label{eq:compressed_sensing} \tag{$P_{0}$}
\argmin_{
\begin{array}{ll}
\Phi x=b \\
x \in \mathbb{R}^{n}
\end{array}
} \:\: ||x||_{0}.
\end{equation}
This problem is NP-hard in general \cite{foucart_2013}. A usual alternative is to replace the $\ell_{0}$ pseudo-norm by a weighted $\ell_{1}$ norm: \begin{equation} \label{eq:l1_pesos} \tag{$P_{1}W$}
x^{w} \in \argmin_{
\begin{array}{ll}
\Phi x=b \\
x \in \mathbb{R}^{n}
\end{array}
} \:\: \sum \limits_{i=1}^{n} w_{i} |x_{i}|.
\end{equation}
Problem \eqref{eq:l1_pesos} is convex and so it may be solved efficiently, although it is not always equivalent to \eqref{eq:compressed_sensing}. Note that $\ell_{1}$ minimization is obtained by using unit weights in \eqref{eq:l1_pesos}. For this particular case there are important results about its equivalence with \eqref{eq:compressed_sensing}, mainly due to Donoho \cite{donoho_2006} and Cand\`es, Romberg and Tao \cite{candes_2004}. For the general case, the task is to choose ``useful weights'' for \eqref{eq:l1_pesos}, defined as those that make $x^{w}$ be a solution of \eqref{eq:compressed_sensing}. Cand\`es, Wakin and Boyd (CWB) proposed an iterative algorithm, known as Re-Weighted $\ell_{1}$ (RW$\ell_{1}$), to estimate useful weights \cite{candes_2008}. The algorithm updates weights as follows: \begin{equation}
w_{i}^{k+1} = \frac{1}{|x_{i}^{k}|+ \epsilon_{k}}, \: \forall \: k \geq 0;
\end{equation} for some $\epsilon_{k}>0$ and with: \begin{equation} \label{eq:pesos_cwb}
x^{k} \in \argmin_{
\begin{array}{ll}
\Phi x=b \\
x \in \mathbb{R}^{n}
\end{array}
} \: \sum \limits_{i=1}^{n} w_{i}^{k} |x_{i}|, \: \forall \: k \geq 0.
\end{equation}
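For illustration, the CWB loop can be sketched in a few lines of Python (this sketch is ours, not from \cite{candes_2008}; the helper name \texttt{solve\_weighted\_l1} is hypothetical, and casts \eqref{eq:l1_pesos} as a linear program in $(x,t)$ with $|x_{i}| \leq t_{i}$):
\begin{verbatim}
# Sketch (ours) of the CWB re-weighted l1 loop. The weighted l1
# subproblem (P1W) is cast as an LP in (x, t): minimize sum(w*t)
# subject to Phi x = b and -t <= x <= t.
import numpy as np
from scipy.optimize import linprog

def solve_weighted_l1(Phi, b, w):              # hypothetical helper
    m, n = Phi.shape
    c = np.concatenate([np.zeros(n), w])       # objective: sum w_i t_i
    A_eq = np.hstack([Phi, np.zeros((m, n))])  # Phi x = b
    A_ub = np.block([[ np.eye(n), -np.eye(n)],   # x - t <= 0
                     [-np.eye(n), -np.eye(n)]])  # -x - t <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq,
                  b_eq=b, bounds=[(None, None)] * n + [(0, None)] * n,
                  method="highs")
    return res.x[:n]

def rwl1_cwb(Phi, b, iters=4, eps=0.1):
    w = np.ones(Phi.shape[1])
    for _ in range(iters):
        x = solve_weighted_l1(Phi, b, w)
        w = 1.0 / (np.abs(x) + eps)            # CWB weight update
    return x
\end{verbatim}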
In this work we propose a new methodology to estimate weights, based on the theory of Lagrange duality. Using this methodology, together with an algorithm for estimating solutions from a dual problem, we obtain a new RW$\ell_{1}$ algorithm. The methodology is also applied to a noisy linear system, obtaining in this case a Re-Weighted LASSO algorithm (RW-LASSO).
The rest of the paper is organized as follows: Section \ref{sec:oracle} introduces the proposed methodology in the oracle case, in which a solution of \eqref{eq:compressed_sensing} is known. Here an oracle dual problem is obtained. Section \ref{sec:dual_soluciones} describes some solutions of this dual problem. In Section \ref{sec:rwl1_subgradiente} a new RW$\ell_{1}$ algorithm is obtained by applying the proposed methodology with the subgradient algorithm. Section \ref{sec:oracle_sin} extends the methodology and the RW$\ell_{1}$ subgradient algorithm to the non-oracle case, in which no solution of \eqref{eq:compressed_sensing} is known. Section \ref{sec:ruido} generalizes the methodology to the case in which the linear system is affected by noise. Here a RW-LASSO algorithm is obtained. Section \ref{sec:resultados} analyzes the performance of the proposed RW$\ell_{1}$ algorithm in the noiseless case, and of the RW-LASSO algorithm in the noisy case, both applied to random linear systems. Section \ref{sec:conclusiones} gives the final conclusions.
\section{Methodology with Oracle} \label{sec:oracle}
The proposed methodology is introduced in the ideal case in which a solution $x^{*}$ of \eqref{eq:compressed_sensing} is known. Consider the ideal primal problem defined as: \begin{equation} \label{eq:primal_ideal} \tag{$P$}
\argmin_{
\begin{array}{ll}
\Phi x=b \\
|x_{i}| \leq |x^{*}_{i}|,\: \forall i
\end{array}
} \:\: 0.
\end{equation} This is a convex problem, so it can be solved efficiently. Also, any solution of \eqref{eq:primal_ideal} is a solution of \eqref{eq:compressed_sensing}. Of course \eqref{eq:primal_ideal} is ideal, since $x^{*}$ is assumed to be known, so it has no practical value. Consider the Lagrange relaxation obtained by relaxing only the constraints involving $x^{*}$. The associated Lagrangian is: \begin{equation}
L(x,w) = \sum \limits_{i=1}^{n} w_{i} \left( |x_{i}| - |x^{*}_{i}| \right) = \sum \limits_{i=1}^{n} w_{i} |x_{i}| - \sum \limits_{i=1}^{n} w_{i} |x^{*}_{i}|,
\end{equation} where $w_{i} \geq 0$ are the Lagrange multipliers. The dual function is then: \begin{equation}
d(w) := \min_{
\begin{array}{ll}
\Phi x=b \\
x \in \mathbb{R}^{n}
\end{array}
} \:\: L \left( x, w \right) = \left( \min_{
\begin{array}{ll}
\Phi x=b \\
x \in \mathbb{R}^{n}
\end{array}
} \:\: \sum \limits_{i=1}^{n} w_{i} |x_{i}| \right) - \sum \limits_{i=1}^{n} w_{i} |x^{*}_{i}|.
\end{equation} This dual function involves a Weighted $\ell_{1}$ problem, in which weights are Lagrange multipliers. This is the key idea behind the proposed methodology: identify weights of \eqref{eq:l1_pesos} as Lagrange multipliers. The problem is then in the context of Lagrange duality. In particular, weights may be estimated by any algorithm to estimate multipliers. Equivalently, weights may be estimated as solutions of the dual problem, given by: \begin{equation} \label{eq:dual_ideal_relajado} \tag{$D$}
\argmax_{
\begin{array}{ll}
w \geq 0 \\
w \in \mathbb{R}^{n}
\end{array}
} \: d(w).
\end{equation} This maximization problem is always concave, so it may be solved efficiently. One drawback is that the dual function is usually non-differentiable, so, for example, gradient-based algorithms must be replaced by subgradient methods.
\section{Solutions of the Dual Problem} \label{sec:dual_soluciones}
The interest now is to find ``useful solutions'' of the dual \eqref{eq:dual_ideal_relajado}; that is, $w \geq 0$ such that $x^{w}$ is a solution of \eqref{eq:compressed_sensing}. This section shows that such solutions always exist, although not every solution of \eqref{eq:dual_ideal_relajado} has this property.
\begin{proposition}
Primal problem \eqref{eq:primal_ideal} satisfies strong duality: $d^{*}=f^{*}$.
\end{proposition}
\begin{proof}
The primal optimal value is clearly $f^{*}=0$. By weak duality: $d^{*} \leq f^{*}=0$. So it suffices to show that $d(w)=0$, for some $w \geq 0$. Taking $w=\vec{0}$: \begin{equation}
d(\vec{0}) = \left( \min_{
\begin{array}{ll}
\Phi x=b \\
x \in \mathbb{R}^{n}
\end{array}
} \:\: \sum \limits_{i=1}^{n} 0 |x_{i}| \right) - \sum \limits_{i=1}^{n} 0 |x^{*}_{i}|=0.
\end{equation}
\qed
\end{proof}
The proof also shows that $w=\vec{0}$ is a solution of \eqref{eq:dual_ideal_relajado}. Clearly $w = \vec{0}$ is not necessarily a useful solution, since $x^{\vec{0}}$ could be any solution of the linear system: $$x^{\vec{0}} \in \argmin_{ \begin{array}{ll}
\Phi x=b \\
x \in \mathbb{R}^{n}
\end{array}
} \:\: \sum \limits_{i=1}^{n} 0 |x_{i}| = \{ x \: / \: \Phi x=b \}.$$
A consequence of strong duality is that the set of Lagrange multipliers coincides with the set of dual solutions. Therefore, useful weights may be estimated as dual solutions. The following result shows that the dual problem always admits useful weights as solutions.
\begin{proposition}
Let $\hat{w} \geq 0$ such that: $\hat{w}_{i}=0 \Leftrightarrow x_{i}^{*} \neq 0$. Then every solution $x^{\hat{w}}$ of the problem \eqref{eq:l1_pesos} associated to $\hat{w}$, is a solution of \eqref{eq:compressed_sensing}.
\end{proposition}
\begin{proof}
Let $I = \{ i \: / \: x_{i}^{*}=0 \}$. By definition of $\hat{w}$ and $x^{\hat{w}}$, and using that $\Phi x^{*} = b$: \begin{equation} 0 \leq \sum \limits_{i \in I}^{} \hat{w}_{i}|x^{\hat{w}}_{i}| = \sum \limits_{i=1}^{n} \hat{w}_{i}|x^{\hat{w}}_{i}| \leq \sum \limits_{i=1}^{n} \hat{w}_{i}|x^{*}_{i}| = \sum \limits_{i \in I}^{} \hat{w}_{i}|x^{*}_{i}| = 0. \end{equation} This implies: $\hat{w}_{i}|x^{\hat{w}}_{i}| = 0, \: \forall \: i \in I$. Since $\hat{w}_{i} > 0, \: \forall \: i \in I$, we must have $x^{\hat{w}}_{i} = 0, \: \forall \: i \in I$, so $||x^{\hat{w}}||_{0} \leq ||x^{*}||_{0}$. Since $\Phi x^{\hat{w}}=b$ by definition, $x^{\hat{w}}$ solves \eqref{eq:compressed_sensing}.
\qed
\end{proof}
\section{RW$\ell_{1}$ with Projected Subgradient Algorithm} \label{sec:rwl1_subgradiente}
In this section we give an implementation of the proposed methodology, using the projected subgradient algorithm to estimate solutions of the dual problem. This algorithm may be thought of as a (sub)gradient ``ascent'' with a projection onto the dual feasible set. More specifically, starting at $w^{0} \geq 0$, the update is: \begin{equation}
\left\lbrace \begin{array}{ll}
w^{k+1} = w^{k} + \alpha_{k} g^{k}\\
w^{k+1} = \max \{ 0,w^{k+1} \}
\end{array} \right., \forall \: k \geq 0;
\end{equation} where $g^{k} \in \partial d(w^{k})$ is a subgradient of the dual function at $w^{k}$, and $\alpha_{k} >0$ is the stepsize. Although this is not strictly an ascent method, it is always possible to choose the stepsize so as to decrease the distance from $w^{k}$ to the dual solution set. One way to do this is to update the stepsize as \cite{bertsekas_1999}: \begin{equation}
\alpha_{k} = \frac{d^{*}-d(w^{k})}{||g^{k}||^{2}_{2}} \geq 0, \; \forall \: k \geq 0.
\end{equation}
Applying \cite[Example 3.1.2]{bertsekas_2015} to \eqref{eq:primal_ideal}, it can be seen that a subgradient $g^{k} \in \partial d(w^{k})$ can be obtained by solving a Weighted $\ell_{1}$ problem: \begin{equation}
x^{k} \in \argmin_{
\begin{array}{ll}
\Phi x=b \\
x \in \mathbb{R}^{n}
\end{array}
} \:\: \sum \limits_{i=1}^{n} w_{i}^{k} |x_{i}| \Rightarrow g(x^{k}) \in \partial d(w^{k}), \: \forall \: k \geq 0.
\end{equation}
Note that the stepsize can now be written as: \begin{equation} \label{eq:paso_optimo_ideal}
\alpha_{k} = \frac{d^{*}-d(w^{k})}{||g^{k}||^{2}_{2}} = \frac{0-L(x^{k},w^{k})}{||g(x^{k})||_{2}^{2}} = - \frac{ \sum \limits_{i=1}^{n} w_{i}^{k} \left( |x_{i}^{k}| - |x_{i}^{*}| \right) }{\sum \limits_{i=1}^{n} \left( |x_{i}^{k}| - |x_{i}^{*}| \right)^{2}}, \: \forall \: k \geq 0.
\end{equation}
Algorithm \ref{alg:dual_ideal} shows a pseudocode of the proposed RW$\ell_{1}$ subgradient algorithm. \begin{algorithm}[H]
\caption{RW$\ell_{1}$ with projected subgradient (with oracle and noise-free)} \label{alg:dual_ideal}
\begin{algorithmic}[1]
\REQUIRE $\Phi \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^{m}$, $w^{0} \geq 0$, $\text{RWIter} \geq 0$
\STATE $x^{0} \in \argmin_{
\begin{array}{ll}
\Phi x=b \\
x \in \mathbb{R}^{n}
\end{array}
} \:\: \sum \limits_{i=1}^{n} w^{0}_{i}|x_{i}|$ $\:\:$ \COMMENT{\eqref{eq:l1_pesos}}
\STATE
\STATE $k=0$
\WHILE{$k < \text{RWIter}$}
\STATE $g_{i}^{k}=g_{i}(x^{k})=|x_{i}^{k}|-|x_{i}^{*}|$ \COMMENT{subgradient at $w^{k}$}
\STATE
\STATE Choose $\alpha_{k}$ using \eqref{eq:paso_optimo_ideal}
\STATE
\STATE $w_{i}^{k+1} = w_{i}^{k} + \alpha_{k} g_{i}^{k}$
\STATE $w_{i}^{k+1} = \max \left( 0, w_{i}^{k+1} \right)$
\STATE
\STATE $x^{k+1} \in \argmin_{
\begin{array}{ll}
\Phi x=b \\
x \in \mathbb{R}^{n}
\end{array}
} \:\: \sum \limits_{i=1}^{n} w_{i}^{k+1} |x_{i}|$ $\:\:$ \COMMENT{\eqref{eq:l1_pesos} with warm restart $x^{k}$}
\STATE
\STATE $k = k+1$
\STATE
\ENDWHILE
\RETURN $x^{k}$
\end{algorithmic}
\end{algorithm}
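For concreteness, the core update of Algorithm \ref{alg:dual_ideal} can be sketched in Python (a sketch of ours; \texttt{oracle\_step} is a hypothetical name, and the Weighted $\ell_{1}$ solves are omitted):
\begin{verbatim}
# Sketch (ours) of one oracle step: subgradient g_i = |x_i| - |x*_i|,
# Polyak stepsize alpha = (d* - d(w)) / ||g||^2 with d* = 0 and
# d(w) = L(x, w) = sum_i w_i (|x_i| - |x*_i|) = w . g.
import numpy as np

def oracle_step(w, x, x_star):
    g = np.abs(x) - np.abs(x_star)         # subgradient of d at w
    alpha = -np.dot(w, g) / np.dot(g, g)   # (0 - L(x, w)) / ||g||^2
    return np.maximum(0.0, w + alpha * g)  # projected ascent update
\end{verbatim}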
\section{Methodology and Algorithm without Oracle} \label{sec:oracle_sin}
The proposed methodology is now extended to the practical case, in which no solution of \eqref{eq:compressed_sensing} is known. A simple way of doing this is to replace $x^{*}$ in the ideal constraints by its best known estimate $x^{k}$, ``amplified'' by some $\epsilon_{k}>0$: \begin{equation}
g_{i}^{k}(x)= |x_{i}| - \left( 1 + \epsilon_{k} \right) |x_{i}^{k}|, \: \forall \: k \geq 0;
\end{equation} where $x^{k}$ is calculated in the same way as in the oracle case: $$x^{k} \in \argmin_{
\begin{array}{ll}
\Phi x=b \\
x \in \mathbb{R}^{n}
\end{array}
} \:\: \sum \limits_{i=1}^{n} w_{i}^{k} |x_{i}|, \: \forall \: k \geq 0.$$
This gives specific constraints $g^{k}(\cdot)$ for each step $k$, and their respective primal problem: \begin{equation} \label{eq:primal_no_ideal} \tag{$P^{k}$}
\argmin_{
\begin{array}{ll}
\Phi x=b \\
|x_{i}| \leq \left( 1 + \epsilon_{k} \right) |x_{i}^{k}|, \: \forall i \\
x \in \mathbb{R}^{n}
\end{array}
} \:\: 0.
\end{equation}
Since $x^{k}$ is always feasible for \eqref{eq:primal_no_ideal}, this problem has optimal value $f^{k}=0$. By relaxing its non-ideal constraints, a dual problem may be obtained. The Lagrangian and the dual function are, respectively: \begin{equation}
L^{k}(x,w) = \sum \limits_{i=1}^{n} w_{i} |x_{i}| - \sum \limits_{i=1}^{n} w_{i} \left( 1 + \epsilon_{k} \right) |x^{k}_{i}|,
\end{equation} \begin{equation}
d^{k}(w) = \left( \min_{
\begin{array}{ll}
\Phi x=b \\
x \in \mathbb{R}^{n}
\end{array}
} \:\: \sum \limits_{i=1}^{n} w_{i} |x_{i}| \right) - \sum \limits_{i=1}^{n} w_{i} \left( 1 + \epsilon_{k} \right) |x^{k}_{i}|.
\end{equation} As in the oracle case, each dual function involves a Weighted $\ell_{1}$ problem, with weights as Lagrange multipliers. This allows us to extend the methodology, by estimating weights of \eqref{eq:primal_no_ideal} as Lagrange multipliers, or by solving its dual problem: \begin{equation} \label{eq:dual_no_ideal} \tag{$D^{k}$}
\argmax_{
\begin{array}{ll}
w \geq 0 \\
w \in \mathbb{R}^{n}
\end{array}
} \: d^{k}(w).
\end{equation}
Solutions of \eqref{eq:dual_no_ideal} may be analyzed in a similar way as those of \eqref{eq:dual_ideal_relajado}. In particular, it can easily be seen that \eqref{eq:primal_no_ideal} satisfies strong duality, with optimal values $f^{k}=d^{k}=0$. Knowing the optimal value $d^{k}$ of \eqref{eq:dual_no_ideal} is very useful for computing the stepsize of the subgradient algorithm when applied to \eqref{eq:dual_no_ideal}: \begin{equation} \label{eq:paso_optimo_noideal}
\alpha_{k} = \frac{d^{k}-d^{k}(w^{k})}{||g^{k}(x^{k})||_{2}^{2}} = \frac{0-L^{k}(x^{k},w^{k})}{||g^{k}(x^{k})||_{2}^{2}} = \frac{1}{\epsilon_{k}} \frac{ \|W^{k} x^{k} \|_{1} }{\|x^{k}\|_{2}^{2}} \geq 0.
\end{equation} Algorithm \ref{alg:dual_no_ideal} shows the pseudo-code of the non-oracle RW$\ell_{1}$ method, obtained by combining the proposed methodology with the projected subgradient algorithm.
\begin{algorithm}[H]
\caption{RW$\ell_{1}$ with projected subgradient (without oracle and noise free)} \label{alg:dual_no_ideal}
\begin{algorithmic}[1]
\REQUIRE $\Phi \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^{m}$, $w^{0} \geq 0$, $\text{RWIter} \geq 0$
\STATE
\STATE $x^{0} \in \argmin_{
\begin{array}{ll}
\Phi x=b \\
x \in \mathbb{R}^{n}
\end{array}
} \:\: \sum \limits_{i=1}^{n} w^{0}_{i}|x_{i}|$ $\:\:$ \COMMENT{\eqref{eq:l1_pesos}}
\STATE
\STATE $k=0$
\WHILE{$k < \text{RWIter}$}
\STATE $g_{i}^{k}=g_{i}(x^{k})=|x_{i}^{k}|-\left( 1 + \epsilon_{k} \right) |x_{i}^{k}| = - \epsilon_{k} |x_{i}^{k}|$ \COMMENT{subgradient of $d^{k}$ at $w^{k}$}
\STATE
\STATE Choose $\alpha_{k}$ using \eqref{eq:paso_optimo_noideal}
\STATE
\STATE $w_{i}^{k+1} = \max \left( 0, w_{i}^{k} + \alpha_{k} g_{i}^{k} \right)$
\STATE
\STATE $x^{k+1} \in \argmin_{
\begin{array}{ll}
\Phi x=b \\
x \in \mathbb{R}^{n}
\end{array}
} \:\: \sum \limits_{i=1}^{n} w_{i}^{k+1} |x_{i}|$ $\:\:$ \COMMENT{\eqref{eq:l1_pesos} with warm restart $x^{k}$}
\STATE
\STATE $k = k+1$
\STATE
\ENDWHILE
\RETURN $x^{k}$
\end{algorithmic}
\end{algorithm}
At each step of Algorithm \ref{alg:dual_no_ideal}, and before the projection, the update is: $$w_{i}^{k+1} = w_{i}^{k} + \alpha_{k} g_{i}^{k}(x^{k}) = w_{i}^{k} - \frac{ \|W^{k} x^{k} \|_{1} }{\|x^{k}\|_{2}^{2}} |x_{i}^{k}|, \: \forall \: k \geq 0;$$ so Algorithm \ref{alg:dual_no_ideal} is independent of $\epsilon_{k}>0$. We take $\epsilon_{k}=1, \: \forall \: k \geq 0$.
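A minimal Python sketch of this $\epsilon$-free iteration (ours, reusing the hypothetical \texttt{solve\_weighted\_l1} helper from the earlier sketch):
\begin{verbatim}
# Sketch (ours) of Algorithm 2 with the epsilon-free update:
# w <- max(0, w - (||W x||_1 / ||x||_2^2) |x|), then re-solve (P1W).
import numpy as np

def rwl1_subgradient(Phi, b, iters=4):
    w = np.ones(Phi.shape[1])
    x = solve_weighted_l1(Phi, b, w)   # helper from the earlier sketch
    for _ in range(iters):
        step = np.dot(w, np.abs(x)) / np.dot(x, x)  # ||Wx||_1/||x||_2^2
        w = np.maximum(0.0, w - step * np.abs(x))   # projected update
        x = solve_weighted_l1(Phi, b, w)
    return x
\end{verbatim}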
\section{Problem with Noise} \label{sec:ruido}
In this section we consider the case in which the linear system is affected by noise. That is: $b = \Phi x^{*} + z$, where $z$ represents the noise. The problem of interest is now: \begin{equation} \label{eq:compressed_sensing_ruido} \tag{$P_{0}^{\eta}$}
\argmin_{
\begin{array}{ll}
\frac{1}{2} ||\Phi x-b||^{2}_{2} \leq \frac{\eta^{2}}{2}\\
x \in \mathbb{R}^{n}
\end{array}
} \: ||x||_{0}.
\end{equation} This problem is also NP-hard in general, for any level of noise $\eta \geq 0$ \cite{foucart_2013}. Replacing the $\ell_{0}$ pseudo-norm by a weighted $\ell_{1}$ norm, we obtain a convex alternative: \begin{equation} \label{eq:l1_pesos_ruido} \tag{$P_{1}^{\eta}W$}
\argmin_{
\begin{array}{ll}
\frac{1}{2} ||\Phi x-b||^{2}_{2} \leq \frac{\eta^{2}}{2}
\end{array}
} \: \sum \limits_{i=1}^{n} w_{i} |x_{i}|.
\end{equation} The proposed methodology is the same as in the noiseless case. Now the oracle primal problem is:
\begin{equation}
\argmin_{
\begin{array}{ll}
\frac{1}{2} ||\Phi x-b||^{2}_{2} \leq \frac{\eta^{2}}{2}\\
|x_{i}| \leq |x^{*}_{i}|, \: \forall \: i
\end{array}
} \: 0.
\end{equation} The Lagrangian obtained by relaxing the ideal constraints is the same as for the noiseless case. The dual function is now: \begin{equation}
d(w) = \left( \min_{
\begin{array}{ll}
\frac{1}{2} ||\Phi x-b||^{2}_{2} \leq \frac{\eta^{2}}{2}
\end{array}
} \: \sum \limits_{i=1}^{n} w_{i} |x_{i}| \right) - \sum \limits_{i=1}^{n} w_{i} |x_{i}^{*}|.
\end{equation} This is a Weighted $\ell_{1}$ problem with a quadratic constraint. As in the noiseless case, weights can be identified with Lagrange multipliers, so the methodology and the RW$\ell_{1}$ subgradient algorithm are the same as for the noiseless case, but with \eqref{eq:l1_pesos} replaced by \eqref{eq:l1_pesos_ruido}. Going a step further, if the quadratic constraint is also relaxed, a new dual function may be obtained: \begin{equation}
d(w,\lambda) = \left( \min_{
\begin{array}{ll}
x \in \mathbb{R}^{n}
\end{array}
} \: \frac{\lambda}{2}\|\Phi x-b\|_{2}^{2} + \sum \limits_{i=1}^{n} w_{i} |x_{i}| \right) - \left( \frac{\lambda}{2} \eta^{2} + \sum \limits_{i=1}^{n} w_{i} |x_{i}^{*}| \right).
\end{equation} This involves the well-known Weighted LASSO problem, a simple generalization of the LASSO problem introduced by Tibshirani in statistics \cite{tibshirani_1996}. Chen, Donoho and Saunders introduced the same LASSO problem in the context of signal representation, under the name Basis Pursuit Denoising \cite{chen_2001}. Note that useful weights for \eqref{eq:l1_pesos_ruido} can still be estimated as part of the Lagrange multipliers, which are now $w \in \mathbb{R}^{n}_{+}$ and $\lambda \in \mathbb{R}_{+}$. When combined with the projected subgradient algorithm, this gives a RW-LASSO algorithm, in which at each step a Weighted-LASSO problem must be solved instead of \eqref{eq:l1_pesos_ruido}: \begin{equation}
x^{k} \in \argmin_{x \in \mathbb{R}^{n}} \:\: \frac{\lambda^{k}}{2}||\Phi x-b||^{2}_{2} + \sum \limits_{i=1}^{n} w_{i}^{k} |x_{i}|, \: \forall \: k \geq 0.
\end{equation}
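Each Weighted-LASSO subproblem can be solved, for example, by proximal gradient descent (ISTA, or its accelerated variant FISTA), since the proximal operator of $\sum_{i} w_{i}|x_{i}|$ is coordinate-wise soft-thresholding. A minimal ISTA sketch (ours, not necessarily the implementation used later):
\begin{verbatim}
# Sketch (ours) of the Weighted-LASSO subproblem via ISTA. The smooth
# part (lam/2)||Phi x - b||^2 has Lipschitz-gradient constant
# L = lam * ||Phi||_2^2; the prox step is weighted soft-thresholding.
import numpy as np

def weighted_lasso(Phi, b, w, lam, iters=500):
    L = lam * np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        grad = lam * Phi.T @ (Phi @ x - b)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)
    return x
\end{verbatim}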
Algorithm \ref{alg:dual_no_ideal_ruido} shows a pseudocode for the proposed subgradient RW-LASSO algorithm.
\begin{algorithm}[H]
\caption{RW-LASSO with subgradient (without oracle and with noise)}
\label{alg:dual_no_ideal_ruido}
\begin{algorithmic}[1]
\REQUIRE $\Phi \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^{m}$, $w^{0} \geq 0$, $\lambda^{0} \in \mathbb{R}$, $\eta \geq 0$, $\text{RWIter} \geq 0$
\STATE $x^{0} \in \argmin_{x \in \mathbb{R}^{n}} \:\: \frac{\lambda^{0}}{2} \|\Phi x-b\|^{2}_{2} + \sum \limits_{i=1}^{n} w_{i}^{0} |x_{i}|$ \COMMENT{Weighted-LASSO}
\STATE
\STATE $k=0$
\WHILE{$k < \text{RWIter}$}
\STATE $g_{i}^{k}=g_{i}^{k}(x^{k})=|x_{i}^{k}|-\left( 1 + \epsilon_{k} \right) |x_{i}^{k}|$
\STATE $w_{i}^{k+1} = \max \left( 0, w_{i}^{k} + \alpha_{k} g_{i}^{k} \right)$
\STATE
\STATE $g^{k}_{\lambda} = g_{\lambda}(x^{k})=\frac{1}{2} \left( \|\Phi x^{k}-b\|^{2}_{2} - \eta^{2} \right)$
\STATE $\lambda^{k+1} = \max \left( 0, \lambda^{k} + \alpha_{k} g^{k}_{\lambda} \right)$
\STATE
\STATE $x^{k+1} \in \argmin_{x \in \mathbb{R}^{n}} \:\: \frac{\lambda^{k+1}}{2}||\Phi x-b||^{2}_{2} + \sum \limits_{i=1}^{n} w_{i}^{k+1} |x_{i}|$ \COMMENT{with warm restart $x^{k}$}
\STATE
\STATE $k = k+1$
\STATE
\ENDWHILE
\RETURN $x^{k}$
\end{algorithmic}
\end{algorithm}
The CWB RW$\ell_{1}$ algorithm can also be extended to the noisy model, by updating weights as in the noiseless case, but taking \cite{candes_2008}: \begin{equation}
x^{k} \in \argmin_{
\begin{array}{ll}
\frac{1}{2} ||\Phi x-b||^{2}_{2} \leq \frac{\eta^{2}}{2}
\end{array}
} \: \sum \limits_{i=1}^{n} w_{i}^{k} |x_{i}|, \: \forall \: k \geq 0.
\end{equation}
\section{Experimental Results} \label{sec:resultados}
\subsection{Results for the Noise-free Setting}
This section analyzes the performance of the proposed RW$\ell_{1}$ subgradient algorithm applied to random linear systems, taking the method by CWB as reference. For a given level of sparsity $s$, a random linear system $\Phi x=b$ is generated, with a solution $x^{*}$ such that $\|x^{*}\|_{0} \leq s$. The experimental setting is based on \cite{candes_2008} (a generator sketch follows the list):
\begin{enumerate}
\item Generate $\Phi \in \mathbb{R}^{m \times n}$, with $n=256$, $m=100$ and Gaussian independent entries: $$\Phi_{ij} \sim N \left( 0,\sigma=\frac{1}{\sqrt{m}} \right), \: \forall \: i,j.$$ Note that in particular $\Phi$ will have normalized columns (in expected value).
\item Select randomly a set $I_{s} \subset \{ 1,\ldots,n \}$ of $s$ indexes, representing the coordinates of $x^{*}$ where non-null values are allowed.
\item Generate the values of $x^{*}_{i}, \: i \in I_{s}$, with independent Gaussian distribution: $$x^{*}_{i} \sim N \left( 0, \sigma = \frac{1}{\sqrt{s}} \right), \forall \: i \in I_{s}.$$ Note that in particular $x^{*}$ will be normalized in expected value.
\item Generate the independent term: $b=\Phi x^{*} \in \mathbb{R}^{m}$.
\end{enumerate}
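A minimal sketch of this problem generator, under the conventions above (our own Python code, with illustrative names):
\begin{verbatim}
# Sketch (ours) of the random test-problem generator described above.
import numpy as np

def make_problem(n=256, m=100, s=30, seed=0):
    rng = np.random.default_rng(seed)
    Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
    x_star = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)   # index set I_s
    x_star[support] = rng.normal(0.0, 1.0 / np.sqrt(s), size=s)
    return Phi, Phi @ x_star, x_star
\end{verbatim}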
For both RW algorithms, the proposed one and the method by CWB, we use $w^{0}=\vec{1}$. For CWB we take $\epsilon_{k}=0.1, \: \forall \: k \geq 0$. Following \cite{candes_2008}, we say $x^{*}$ was recovered if: \begin{equation}
\|x^{\text{RWIter}}-x^{*}\|_{\infty} \leq 1 \times 10^{-3}.
\end{equation}
For each level of sparsity $s \in [15,55]$, a recovery rate is calculated as the percentage of recoveries over $N_{p}=300$ random problems. Figure \ref{fig:performance_sin_ruido} shows the results for different numbers of RW iterations. Results for $\ell_{1}$ minimization are also shown for reference. Considering only one RW iteration, the proposed algorithm is slightly better than CWB. This difference disappears for two or more RW iterations, where both algorithms show the same performance, with the additional benefit of the interpretability of the weights in the proposed methodology.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{imagenes/rwl1_juntos.png}
\caption{Recovery rate of RW$\ell_{1}$ algorithms.} \label{fig:performance_sin_ruido}
\end{figure}
\subsection{Results for the Noisy Setting}
Following \cite{candes_2008}, random problems with noise are generated with $n=256$ and $m=128$. $\Phi$ and $x^{*}$ are generated in the same way as in the noiseless case. The noise $z$ in $b$ has independent Gaussian coordinates, chosen so that $x^{*}$ is feasible with high probability. For this we take $z_{i}=\sigma v_{i}$, with $v_{i} \sim N(0,1)$ independent, so: \begin{equation}
\|z\|_{2}^{2} = \sigma^{2} \|v\|_{2}^{2} = \sigma^{2} \left( \sum \limits_{i=1}^{m} v_{i}^{2} \right) \sim \sigma^{2} \chi_{m}^{2}.
\end{equation} Taking for example $\eta^{2} = \sigma^{2} \left( m + 2 \sqrt{2m} \right)$, we have: \begin{equation}
P \left( \|\Phi x^{*}-b\|_{2}^{2} \leq \eta^{2} \right) = 1 - P \left( \chi^{2}_{128} \geq 160 \right) \simeq 0.971.
\end{equation}
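This probability can be checked numerically; a one-line sketch, assuming SciPy is available:
\begin{verbatim}
# Feasibility probability P(chi2_m <= m + 2*sqrt(2m)) for m = 128.
from scipy.stats import chi2
m = 128
print(chi2.cdf(m + 2 * (2 * m) ** 0.5, df=m))  # ~0.971
\end{verbatim}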
We use $w^{0}=\vec{1}$ for both algorithms. For subgradient RW-LASSO we take $\lambda^{0} = \frac{n}{\|\tilde{x}\|_{1}}$, where $\tilde{x}$ is the minimum $\ell_{2}$-norm solution of $\Phi x =b$. The FISTA algorithm is used for solving each Weighted-LASSO problem \cite{beck_2009}. Performance is measured by the improvement with respect to a solution $x_{\ell_{1}}^{\eta}$ of noisy $\ell_{1}$ minimization: \begin{equation}
a=100 \times \left( 1-\frac{||x^{RW}-x^{*}||_{2}}{||x_{\ell_{1}}^{\eta}-x^{*}||_{2}} \right) \%.
\end{equation}
Figure \ref{fig:performance_ruido} shows the performance with noisy measurements for both RW methods: the proposed RW-LASSO algorithm and the CWB RW$\ell_{1}$ algorithm. Results correspond to $N_{p}=300$ tests on random problems with fixed sparsity $s=38$. The mean improvement $\bar{x}$ is also shown (vertical red line), together with $\pm$ one standard deviation $\bar{\sigma}$ (vertical violet and green lines). The CWB RW$\ell_{1}$ algorithm shows a mean improvement of $21\%$ with respect to $\ell_{1}$ minimization. For subgradient RW-LASSO this improvement is $32\%$, significantly higher than that of CWB.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{imagenes/histograma_ruido.png}
\caption{Performance of RW algorithms with respect to $\ell_{1}$ minimization (with noise).} \label{fig:performance_ruido}
\end{figure}
We also considered the RW-LASSO algorithm with weights updated as in CWB, but the performance was very poor. The reason may be that $\lambda^{k}$ remains fixed at $\lambda^{0}$, as there is no obvious rule for updating it.
\section{Conclusions} \label{sec:conclusiones}
In this paper the important problem of finding sparse solutions of a linear system was considered. A usual alternative to this NP-hard problem is the Weighted $\ell_{1}$ problem, where the choice of weights is crucial. A new methodology for estimating weights was proposed, based on identifying weights as solutions of a Lagrange dual problem. It was shown that this problem always admits ``useful'' solutions. The proposed methodology was then applied using the projected subgradient algorithm, obtaining a RW$\ell_{1}$ algorithm, alternative to the classical one, due to CWB. This new algorithm was tested on random problems in the noiseless case, obtaining almost the same performance as that of CWB, but allowing an interpretation of weights. The proposed methodology was then extended to the noisy case. Here a RW-LASSO algorithm was obtained, by introducing a new Lagrange multiplier. This last algorithm showed a considerable improvement in performance, with respect to the RW$\ell_{1}$ algorithm proposed by CWB.
\subsubsection*{Acknowledgment.}
This work was supported by a grant from Comisi\'on Acad\'emica de Posgrado (CAP), Universidad de la Rep\'ublica, Uruguay.
\bibliographystyle{splncs04}
\section{Introduction}
The AdS/CFT correspondence shows that gravity theory in anti-de Sitter space is dual to a conformal field theory on its boundary, which implies a striking relation between gauge theory and gravity~\cite{Maldacena:1997re}. As it is a weak-strong duality, we are able to investigate various strong coupling effects in field theory from the gravity side. A systematic rule of correspondence between the two sides has been proposed, the so-called holographic dictionary~\cite{Gubser:1998bc,Witten:1998qj}. In 2006, Ryu and Takayanagi found that entanglement entropy in the boundary field theory is dual to the minimal area of a boundary-anchored surface in the bulk~\cite{Ryu:2006bv, Ryu:2006ef}. This work illuminates the relation between quantum information theory and gravity, and implies that we can investigate quantum gravity effects via a quantum information approach.
In recent years, there has been growing interest in another field theory quantity called circuit complexity. In quantum computation theory, circuit complexity is defined as the minimal number of simple gates required to build a typical state from a reference state within a small tolerance $\epsilon$. In order to interpret the growth of the size of the Einstein-Rosen bridge (ERB) behind the horizon, the authors of Ref.~\cite{Susskind:2014rva} proposed that the ERB size is dual to the quantum complexity on the boundary, which is called the CV duality. Later, another proposal stated that complexity is dual to the on-shell action of the WdW patch~\cite{Brown:2015bva,Brown:2015lvg}, which is called the CA duality. By calculating the action growth~\cite{Brown:2015bva,Brown:2015lvg,Lehner:2016vdi}, they find that the action obeys a bound called the Lloyd bound~\cite{Lloyd:2000}, which is obtained in quantum computation using the energy-time uncertainty principle. There are many works concerning this bound in various backgrounds \cite{Cai:2016xho, Cai:2017sjv,Swingle:2017zcd}. Moreover, a recent study \cite{HosseiniMansoori:2017tsm} shows the connection between the butterfly velocity and the complexity growth rate.
In the context of string theory, when the low-energy limit is taken, a scalar field called the dilaton appears in the action. The dilaton couples to other fields in various nontrivial ways; as an example, Ref.~\cite{Garfinkle:1990qj} investigates the case in which the dilaton couples to the Maxwell field, acting like the coupling constant of the Maxwell action. The appearance of the dilaton field changes the spacetime structure drastically and gives us more interesting black hole solutions. Ref.~\cite{Garfinkle:1990qj} obtains a dilaton black hole solution in asymptotically flat spacetime, and Ref.~\cite{Gao:2004tu} generalizes this to asymptotically dS and AdS spacetimes by introducing a Liouville-type potential. Much interest has been paid to the thermodynamic behavior of these black holes \cite{Sheykhi:2009pf,Li:2017kkj}, but a detailed analysis of their complexity behavior is yet to be done.
Despite the success of the AdS/CFT correspondence, there are also many investigations beyond AdS. In AdS, the space and time coordinates scale isotropically, while in order to describe some condensed matter systems an anisotropic scaling is needed. Refs.~\cite{Kachru:2008yh,Taylor:2008tg,Tarrio:2011de,Dong:2012se} construct many backgrounds which support the scaling:
\begin{equation}
\begin{split}
t \to \lambda^{z} t \\
x \to \lambda x
\end{split}
\end{equation}
Among them, Ref.~\cite{Taylor:2008tg} finds that the Einstein-Maxwell-Dilaton system has such a non-trivial black hole solution. While there are many investigations of the thermodynamic and hydrodynamic properties of this solution \cite{Liu:2014dva,Pang:2009ad}, its complexity behavior is still under investigation.
In this work, we use the CA proposal and investigate the action growth of various black hole solutions in Einstein-Maxwell-Dilaton theory. In Section 2, we investigate a dilaton black hole in an AdS vacuum; although an electromagnetic field is present, the Penrose diagram resembles the Schwarzschild case. While Ref.~\cite{Cai:2017sjv} investigates the late-time behavior of this black hole, we investigate its full-time evolution and show that the growth rate approaches the late-time bound from above, as in Ref.~\cite{Carmi:2017jqz}. In Section 3, we go beyond the AdS case and investigate the Lifshitz-type black hole \cite{Taylor:2008tg}; we calculate the on-shell action in the WdW patch of this black hole and find that it violates the Lloyd bound even in the late-time limit. In Section 4, we discuss our results and give some interpretation.
\section{Full time evolution of action in charged dilaton black hole in AdS space}
\subsection{Charged dilaton black hole}
We consider the Einstein-Maxwell-Dilaton theory with the action
\begin{equation}
S=\frac{1}{16 \pi} \int d^{4}x \sqrt{-g}(R-2(\partial \phi)^{2}-V(\phi)-e^{-2\phi}F^{2})
\label{bulka}
\end{equation}
where the Liouville-type potential is
\begin{equation}
V(\phi)=-\frac{4}{l^{2}}-\frac{1}{l^{2}}[e^{2(\phi-\phi_{0})}+e^{-2(\phi-\phi_{0})}]
\end{equation}
When $\phi=\phi_{0}$, it reproduces the usual cosmological constant term in AdS space. Varying the action, we obtain the equations of motion:
\begin{equation}
R_{\mu\nu}=2\partial_{\mu} \phi \partial_{\nu} \phi+\frac{1}{2} g_{\mu\nu} V+2 e^{-2\phi}(F_{\mu\alpha} F^{\alpha}_{\nu}-\frac{1}{4} g_{\mu\nu} F^{2})
\end{equation}
\begin{equation}
\partial_{\mu}(\sqrt{-g} e^{-2\phi} F^{\mu\nu})=0
\end{equation}
\begin{equation}
\partial^{2}\phi=\frac{1}{4} \frac{dV}{d\phi}-\frac{1}{2}e^{-2\phi} F^{2}
\end{equation}
There exists a static spherically symmetric black hole solution
\begin{equation}
ds^{2}=-f(r) dt^{2}+\frac{dr^{2}}{f(r)}+r(r-2D) d \Omega^{2}
\end{equation}
where $f(r)=1-\frac{2M}{r}+\frac{r(r-2D)}{l^{2}}$.
The electromagnetic field and the dilaton $\phi$ can be obtained via the equations of motion: $F_{tr}=\frac{Q e^{2\phi}}{r(r-2D)}$, $e^{2\phi}=e^{2\phi_{0}}(1-\frac{2D}{r})$, where $\phi_{0}$ is an integration constant and $D$ is the conserved dilaton charge, given by $D=\frac{Q^{2} e^{2\phi_{0}}}{2M}$. \\
Unlike the usual RN case, although the system has an electromagnetic field, the introduction of the dilaton coupling gives rise to a new curvature singularity located at $r=2D$ (between the inner and outer horizons of the RN counterpart). This seems natural because of the instability of the inner horizon of the RN black hole. Therefore, the Penrose diagram looks like that of the AdS-Schwarzschild black hole rather than the AdS-RN black hole. Ref.~\cite{Cai:2017sjv} investigated the late-time behavior of the complexity growth, with the result
\begin{equation}
\frac{d C}{dt}=2M-\mu Q- D
\end{equation}
and Ref.~\cite{Cai:2017sjv} proposed that the existence of the dilaton slows down the rate of complexity growth. A recent study \cite{Carmi:2017jqz} shows that in the AdS-Schwarzschild case the bound proposed in \cite{Cai:2017sjv} is approached from above, and is therefore violated at early times. In the next subsection, we will show that this early-time violation also occurs in the present situation.
\begin{figure}
\includegraphics[width=0.4\textwidth]{penrose1.png}
\includegraphics[width=0.4\textwidth]{penrose2.png}\\
\caption{WdW patch before (left) and after (right) the critical time. We assume the boundary times satisfy $t_{L}=t_{R}$; at the right (left) boundary, bulk time flows in the same (opposite) direction as boundary time. In calculating the bulk contribution to the total action, we partition the spacetime into three regions.}\label{fig:WDWpatch of the charged dilaton black hole}
\end{figure}
\subsection{General time dependence of the Action}
Ref.~\cite{Lehner:2016vdi} gives a method to calculate the action in the presence of null boundaries; the expression is as follows:
\begin{equation}
\begin{split}
I = & \frac{1}{16 \pi G_N} \int_\mathcal{M} d^{4} x \sqrt{-g} \left(\mathcal R -2 \left(\partial \phi \right)^{2}-V (\phi)-e^{-2\phi}F^{2}\right) \\
&\quad+ \frac{1}{8\pi G_N} \int_{\mathcal{B}} d^3 x \sqrt{|h|} K + \frac{1}{8\pi G_N} \int_\Sigma d^{2}x \sqrt{\sigma} \eta
\\
&\quad -\frac{1}{8\pi G_N} \int_{\mathcal{B}'}
d\lambda\, d^{2} \theta \sqrt{\gamma} \kappa
+\frac{1}{8\pi G_N} \int_{\Sigma'} d^{2} x \sqrt{\sigma} a\,.
\end{split}
\end{equation}
Here we follow the conventions of \cite{Carmi:2017jqz,Carmi:2016wjl}. The terms in the expression above are, respectively, the bulk term, the GHY boundary term, the Hayward joint term \cite{Hayward:1993my}, the null boundary term and the null joint term. We choose affine parametrization and set $\kappa=0$ in the following, so the contribution of the null boundaries vanishes.
We consider the full time evolution of the action for this black hole (we set $G=1$ in this section for simplicity). We assume the boundary times satisfy $t_{L}=t_{R}=\frac{t}{2}$; the time dependence then has two stages. At first, the past null boundaries intersect the past singularity, and we have a past GHY boundary term; after an amount of time, which we call the critical time, the two past null boundaries intersect each other, and a null joint term replaces the GHY boundary term. The complexity behavior is different before and after the critical time, so we first determine the critical time $t_{c}$:\\
\begin{equation}
\frac{t_{c}}{2}-r^{*}(\infty)=t-r^{*}(0)
\end{equation}
\begin{equation}
-\frac{t_{c}}{2}+r^{*}(\infty)=t+r^{*}(0)
\end{equation}
The critical time is
\begin{equation}
t_{c}=2(r^{*}(\infty)-r^{*}(0))
\end{equation}
When $t< t_{c}$, the action contains three parts:
\begin{equation}
S=S_{bulk}+S_{GHY}+S_{joint}
\end{equation}
The bulk contribution is the Einstein-Hilbert action plus the matter field action ($\ref{bulka}$); the surface terms come from the past and future singularity surfaces and from the cutoff surface at $r_{max}$; and the joint terms are the null-spacelike and null-timelike joints, which occur at the singularity and cutoff surfaces, respectively.
The bulk term is
\begin{equation}
S=\frac{1}{16 \pi} \int d^{4}x [4r(r-2D)+(r-2D)^{2}+r^{2}-\frac{2Q^{2}l^{2} e^{2\phi_{0}}}{r^{2}}]
\end{equation}
It is convenient to divide the integration region into three portions; the results are as follows:
\begin{equation}
\begin{aligned}
S_{1}=-\frac{1}{4l^{2}} \int _{2D}^{rh} dr [ &4r(r-2D)+(r-2D)^{2}+r^{2}\\
&-\frac{2Q^{2}l^{2} e^{2\phi_{0}}}{r^{2}}](\frac{t}{2}+r^{*}_{\infty}-r^{*}(r))
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
S_{2}=-\frac{1}{2l^{2}} \int _{rh}^{rmax} dr [& 4r(r-2D)+(r-2D)^{2}+r^{2}\\
&-\frac{2Q^{2}l^{2} e^{2\phi_{0}}}{r^{2}}](r^{*}_{\infty}-r^{*}(r))
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
S_{3}=-\frac{1}{4l^{2}} \int _{2D}^{rh} dr [ &4r(r-2D)+(r-2D)^{2}+r^{2}\\
&-\frac{2Q^{2}l^{2} e^{2\phi_{0}}}{r^{2}}](-\frac{t}{2}+r^{*}_{\infty}-r^{*}(r))
\end{aligned}
\end{equation}
We see that the time dependence cancels and the bulk term is independent of time.
Then we calculate the surface terms. We choose
\begin{equation}
n_{\alpha}=-| f(r)|^{-1/2} \partial_{\alpha}r
\end{equation}
and the extrinsic curvature is
\begin{equation}
K=\nabla_{\alpha} n^{\alpha}=\frac{1}{U^{2}} \frac{d}{dr}(U^{2} n^{r})
\end{equation}
where $U^{2}=r(r-2D)$.
The GHY contributions are then:
\begin{equation}
I_{future}=\frac{1}{2} |f|^{1/2} \frac{d}{dr}(r(r-2D)|f|^{1/2})(\frac{t}{2}+r^{*}_{\infty}-r^{*}(r))|_{r=2D}
\end{equation}
\begin{equation}
I_{past}=\frac{1}{2}|f|^{1/2} \frac{d}{dr}(r(r-2D)|f|^{1/2}) (-\frac{t}{2}+r^{*}_{\infty}-r^{*}(r))|_{r=2D}
\end{equation}
\begin{equation}
I_{cutoff}=|f|^{1/2} \frac{d}{dr}(r(r-2D) |f|^{1/2})(r^{*}_{\infty}-r^{*}(r)) |_{r=r_{max}}
\end{equation}
We see the cancellation between the surface terms at the past and future singularities, and from \cite{Chapman:2016hwi} we know that the joint terms are independent of time. Combining the above results, we see that the action is constant until $t=t_{c}$. \\
When $t>t_{c}$, a null joint forms at $r=r_{m}$ and there is no surface term from the past singularity. $r_{m}$ is obtained from the equation
\begin{equation}
\frac{t-t_{c}}{2}+r^{*}(r_{m})-r^{*}(0)=0
\end{equation}
The null joint term depends on time implicitly through $r_{m}$. \\
The bulk action is
\begin{equation}
\begin{aligned}
I_{bulk}=I_{bulk}^{0}-\frac{1}{4L^{2}}\int _{2D}^{rm} dr [& 4r(r-2D)+(r-2D)^{2}+r^{2}\\
&-\frac{2Q^{2}l^{2} e^{2\phi_{0}}}{r^{2}}](\frac{t}{2}-r^{*}_{\infty}+r^{*}(r))
\end{aligned}
\end{equation}
So the change of the bulk action compared with the $t<t_{c}$ case is
\begin{equation}
\begin{aligned}
\delta I_{bulk}=-\frac{1}{4L^{2}} \int _{2D}^{rm} dr [ &4r(r-2D)+(r-2D)^{2}+r^{2}\\
&-\frac{2Q^{2}l^{2} e^{2\phi_{0}}}{r^{2}}](\frac{\delta t}{2}+r^{*}(r)-r^{*}(0))
\label{bulka2}
\end{aligned}
\end{equation}
where $\delta t=t-t_{c}$.
Because of the absence of the surface term at the past singularity, the surface contribution also depends on $t$:
\begin{equation}
I_{surf}=I_{0}-I_{past}
\end{equation}
so
\begin{equation}
\delta I_{surf}=\frac{1}{2}\frac{d}{dr}(r(r-2D)|f|^{1/2}) |f|^{1/2} (\frac{\delta t}{2}+r^{*}(r)-r^{*}(0))|_{r=2D}
\label{surf2}
\end{equation}
For the null joint term, we take
\begin{align}
k_{R}=-\alpha dt + \alpha \frac{dr}{f(r)} \\
k_{L}=\alpha dt + \alpha \frac{dr}{f(r)}
\end{align}
\begin{equation}
a=log(-\frac{1}{2}k_{R} \cdot k_{L})=-log(\frac{|f|}{\alpha^{2}})
\end{equation}
so the joint term reads
\begin{equation}
\begin{split}
&I_{jnt}=\frac{1}{8\pi } \int_{\Sigma'} d^{2}x \sqrt{\sigma} a \\
&\quad =-\frac{r_{m}(r_{m}-2D)}{2} log\frac{|f(r_{m})|}{\alpha^{2}}\,.
\end{split}
\label{jnt2}
\end{equation}
Combining the above results ($\ref{bulka2}$), ($\ref{surf2}$) and ($\ref{jnt2}$) and taking the derivative with respect to $t$, recalling that $\frac{dr_{m}}{dt}=-\frac{1}{2}f(r_{m})$, we finally get the action growth rate at time $t>t_{c}$:
\begin{equation}
\frac{dI_{tot}}{dt}=2M-{\mu_{m}}Q-D+\frac{1}{2}(r_{m}-D)f(r_{m}) log \frac{|f(r_{m})|}{\alpha^{2}}
\end{equation}
where $\mu_{m}=\frac{Qe^{2\phi_{0}}}{r_{m}}$.
In the late-time limit $r_{m} \to r_{+}$, we see that $\mu_{m}$ becomes the chemical potential and the last term vanishes, so we recover the late-time result of \cite{Cai:2017sjv}. The rate of growth of the action for $t>t_{c}$ is plotted in Fig.~2.
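For reference, the rate can be evaluated numerically as a function of the joint location $r_{m}$, through which all of the time dependence enters; a small sketch of ours, with illustrative parameter names:
\begin{verbatim}
# Sketch (ours): dI/dt for t > t_c as a function of r_m, using
# f(r) = 1 - 2M/r + r(r - 2D)/l^2 and mu_m = Q exp(2 phi0)/r_m;
# alpha is the normalization constant of the null generators.
import numpy as np

def rate(r_m, M, D, Q, phi0=0.0, l=1.0, alpha=1.0):
    f = 1 - 2 * M / r_m + r_m * (r_m - 2 * D) / l ** 2
    mu_m = Q * np.exp(2 * phi0) / r_m
    return (2 * M - mu_m * Q - D
            + 0.5 * (r_m - D) * f * np.log(abs(f) / alpha ** 2))
\end{verbatim}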
\begin{figure}
\includegraphics[width=0.45\textwidth]{cgrowth.pdf}\\
\caption{There are two parameters, $y=\frac{\mu Q}{2M}$ and $z=r_{+}/L$. For this picture we fix $z=1$; the green line corresponds to $y=0.1$, the blue line to $y=0.2$, and the red line to $y=0.3$. We find that the complexity growth rate approaches the late-time bound from above.}
\label{fig:complexity growth rate in different chemical potential}
\end{figure}
\section{Action growth in Lifshitz-Like black brane}
\subsection{Lifshitz black brane in EMD theory}
In many condensed matter systems, anisotropic scaling symmetry is expected near the critical point. Gravitational systems with the same scaling behavior have been constructed in various ways: Ref.~\cite{Kachru:2008yh} adds higher-order form fields, while Ref.~\cite{Taylor:2008tg} realizes Lifshitz spacetime using a massive vector field and Einstein-Maxwell-Dilaton theory. Here we choose the solution constructed in Ref.~\cite{Taylor:2008tg} by coupling the dilaton field to an Abelian Maxwell field. The thermodynamic behavior of Lifshitz spacetime is investigated in Ref.~\cite{Liu:2014dva}.
We consider the following action in (d+2)-dimensional spacetime
\begin{equation}
\begin{aligned}
I=\frac1{16\pi G_{d+2}}\int d^{d+2} x\sqrt{-g}[&R-2\Lambda-\frac12 \partial_\mu\phi\partial^\mu\phi \\
&-\frac14 e^{\lambda\phi}F_{\mu\nu}F^{\mu\nu}].
\end{aligned}
\end{equation}
where $\Lambda$ is the cosmological constant and the matter fields are a massless scalar and an Abelian gauge field. The equations of motion are:
\begin{equation}
\label{2eq2}
\partial_{\mu}(\sqrt{-g}e^{\lambda\phi}F^{\mu\nu})=0,
\end{equation}
\begin{equation}
\partial_{\mu}(\sqrt{-g}\partial^{\mu}\phi)-\frac{\lambda}{4}\sqrt{-g}e^{\lambda\phi}F_{\mu\nu}F^{\mu\nu}=0,
\end{equation}
\begin{equation}
\label{2eq4} R_{\mu\nu}=\frac{2}{d}\Lambda
g_{\mu\nu}+\frac{1}{2}\partial_{\mu}\phi\partial_{\nu}\phi+\frac{1}{2}e^{\lambda\phi}F_{\mu\rho}{F_{\nu}}^{\rho}
-\frac{1}{4d}g_{\mu\nu}e^{\lambda\phi}F_{\rho\sigma}F^{\rho\sigma}.
\end{equation}
which have the following asymptotically Lifshitz black hole solution:
\begin{eqnarray}
\label{2eq11}
&ds^{2}=L^{2}(-r^{2z}f(r)dt^{2}+\frac{dr^{2}}{r^{2}f(r)}+r^{2}\sum\limits^{d}_{i=1}dx^{2}_{i}),~~~
\\
&f(r)=1-\frac{r^{z+d}_{+}}{r^{z+d}}\\
&F_{rt}=qe^{-\lambda\phi}r^{z-d-1},~~~e^{\lambda\phi}=r^{\lambda\sqrt{2(z-1)d}},\nonumber\\
&\lambda^{2}=\frac{2d}{z-1},~~~q^{2}=2L^{2}(z-1)(z+d),\nonumber\\
&\Lambda=-\frac{(z+d-1)(z+d)}{2L^{2}}.
\label{eom}
\end{eqnarray}
We can obtain the temperature via the Euclidean path integral:
\begin{align}
T_{H}=\frac{(z+d)r^{z}_{+}}{4\pi},
\end{align}
and the black hole entropy
\begin{equation}
S_{BH}=\frac{\Omega_{d}L^{d}}{4G_{d+2}}(\frac{4\pi}{z+d})^{\frac{d}{z}}T^{\frac{d}{z}}
\end{equation}
where $\Omega_{d}$ denotes the volume of the $d$-dimensional spatial coordinates.
\subsection{Late time behavior of the Lifshitz black brane}
The volume contribution is
\begin{equation}
S_{V}=\int_{V}(R-2\Lambda-\frac12\partial_\mu\phi\partial^\mu\phi-\frac14e^{\lambda\phi}F_{\mu\nu}F^{\mu\nu})\sqrt{-g}d^{d+2}x
\end{equation}
We will use the null coordinates $u$ and $v$, defined by
\begin{equation}
du := dt + \frac1{r^{z+1} f} dr,~~~~
dv := dt - \frac1{r^{z+1} f} dr
\end{equation}
Integrating these relations yields the ``infalling'' null coordinate $u
= t + r^*(r)$ and the ``outgoing'' null coordinate $v = t - r^*(r)$,
where $r^*(r) := \int \frac1{r^{z+1} f} dr$. The metric becomes
\begin{equation}
ds^2 = L^2[-r^{2z}fdu^2 + 2r^{z-1}dudr + r^2 \sum\limits^{d}_{i=1}dx^{2}_{i}]
\end{equation}
or
\begin{equation}
ds^2 =L^2[-r^{2z}f dv^2 - 2r^{z-1}dvdr + r^2 \sum\limits^{d}_{i=1}dx^{2}_{i}]
\end{equation}
when expressed in terms of the null coordinates. For the three choices
$(t,r)$, $(u,r)$, and $(v,r)$ we have
\begin{equation}
\int \sqrt{-g} d^{d+2} x = \Omega_{d}L^{d+2}\int r^{d+z-1} dr dw,
\end{equation}
where $w = \{ t, u, v \}$.
\begin{figure}
\includegraphics[width=0.55\textwidth]{wdw.pdf}\\
\caption{WdW patch of Lifshitz black hole}
\end{figure}
As shown in Fig.~3, the region $V_1$ is bounded by the null surfaces $u=u_0$, $u=u_0+\delta t$, $v=v_0+\delta t$, and the surface $r=\epsilon$. The volume integral is best performed in the $(u,r)$ coordinate system, in which the surface $v=v_0+\delta t$ is described by $r=\rho_0(u)$, with $r^*(\rho_0)=\frac12(v_0+\delta t-u)$. Making use of the equations of motion, we get
\begin{equation}
\begin{aligned}
R-2\Lambda-\frac12 \partial_\mu\phi\partial^\mu\phi-\frac14 e^{\lambda\phi}F_{\mu\nu}F^{\mu\nu}&=\frac{4\Lambda}{d}-\frac1{2d} e^{\lambda\phi}F_{\mu\nu}F^{\mu\nu} \\
&=\frac{-2(z+d)}{L^2}
\end{aligned}
\end{equation}
with this we have
\begin{equation}
\begin{aligned}
S_{V_1} &= -2(d+z)L^d\Omega_d \int^{u_0+ \delta t}_{u_0} du
\int_{\epsilon}^{{\rho_0}(u)} r^{d+z-1}dr\\
&= -2L^d\Omega_d \int^{u_0+ \delta t}_{u_0} du
[{\rho_0}^{d+z}(u)]
\end{aligned}
\end{equation}
where we neglect the $\epsilon^{d+z}$ term in the integral as $\epsilon \to 0$.
The region $V_2$ is bounded by the null surfaces $u=u_0$, $u=u_1$,
$v=v_0$, and $v=v_0+\delta t$. In this case, the volume integral is most easily
performed in the $(v,r)$ coordinates, in which the surfaces $u=u_{0,1}$ are
described by $r=\rho_{0,1}(v)$, with $r^*(\rho_{0,1}) = \frac{1}{2}(v-u_{0,1})$. Then we have
\begin{equation}
\begin{aligned}
S_{V_2} &= -2(d+z)L^d\Omega_d \int^{v_0+ \delta t}_{v_0} dv
\int_{{\rho_1}(v)}^{{\rho_0}(v)} r^{d+z-1}dr\\
&= -2L^d\Omega_d \int^{v_0+ \delta t}_{v_0} dv
[{\rho_0}^{d+z}(v)-{\rho_1}^{d+z}(v)].
\end{aligned}
\end{equation}
Using the change of variables $u = u_0+v_0 + \delta t - v$, we can see that the terms involving ${\rho_0}(u)$ and
${\rho_0}(v)$ cancel out. We are left with
\begin{equation}
S_{V_1} - S_{V_2} = -2L^d\Omega_d \int^{v_0+ \delta t}_{v_0} dv
[{\rho_1}^{d+z}(v)].
\end{equation}
with the function $\rho_1(v)$ varying from $r_{B}$ to $r_{B}+O(\delta t)$, and hence the volume contribution to $\delta S$ is simply
\begin{equation}
\begin{aligned}
S_{V_1} - S_{V_2} =-2L^d\Omega_d {r_B}^{d+z}\delta t
\label{dS:volume}
\end{aligned}
\end{equation}
The surface contribution to $\delta S$ is given by
$-2 \int_{S} K\, d\Sigma$, where $S$ is the boundary segment given by the spacelike
hypersurface $r = \epsilon$. The (future-directed) unit normal is given by
$n_\alpha = \frac{L}{r}|f|^{-1/2} \partial_\alpha r$. The extrinsic curvature is then
\begin{equation}
K = \nabla_\alpha n^\alpha = -\frac{1}{L^{d+2} r^{z+d-1}} \frac{d}{dr}
\Bigl(L^{d+1} r^{z+d} |f|^{1/2} \Bigr),
\end{equation}
and the volume element:
\begin{equation}
d\Sigma = \Omega_{d} L^{d+1}|f|^{1/2} r^{z+d} dt
\end{equation}
Letting $r = \epsilon
\ll r_{+}$ and then approximating $f \simeq -(r_{+}/r)^{z+d}$; $K\simeq -\frac{z+d}{2L}(r_{+}/r)^{\frac{z+d}2}$; $d\Sigma \simeq \Omega_{d} L^{d+1}(r_+r)^{\frac{z+d}2} dt$,
we find that
\begin{equation}
-2 \int_{S} K d\Sigma = (z+d) \Omega_{d}L^{d}{r_+}^{z+d} \delta t.
\label{dS:surface}
\end{equation}
It is finite and independent of $\epsilon$.
We then calculate the joint terms at $B$ and $B'$. The null joint rule states that \cite{Lehner:2016vdi}
\begin{equation}
a = \ln\bigl( -{\textstyle \frac{1}{2}} k \cdot \bar{k} \bigr),
\end{equation}
We choose the vectors $k^\alpha$ and $\bar{k}^\alpha$ to be
affinely parametrized; they read
\begin{equation}
\begin{aligned}
k_\alpha = -c\partial_\alpha v = -c\partial_\alpha(t - r^*), \qquad
\bar{k}_\alpha= \bar{c}\partial_\alpha u
= \bar{c}\partial_\alpha(t+r^*)\,
\end{aligned}
\end{equation}
where $c$ and $\bar{c}$ are arbitrary (positive) constants. With these
choices, we have $k \cdot \bar{k} = 2c\bar{c}/(f L^2 r^{2z})$, and then
\begin{equation}
a = \ln\bigl[-c\bar{c}/(f L^2 r^{2z})\bigr],
\end{equation}
With the above expression, we find that
\begin{align}
2 \oint_{B'} a\, dS - 2 \oint_{B} a\, dS
= 2\Omega_d \bigl[ h(r_{B'}) - h(r_{B})\bigr],
\end{align}
where $h(r) := r^d L^d \ln[-c\bar{c}/(f L^2 r^{2z})]$.
Then we perform a Taylor expansion of
$h(r)$ about $r = r_B$. Because the displacement is in a direction of
increasing $v$, we have that $du = 0$, $dv = \delta t$, and
$dr = -\frac{1}{2} f r^{z+1}\delta t$. This gives us
\begin{equation}
\begin{aligned}
&h(r_{B'}) - h(r_{B}) = -\frac{1}{2} f r^{z+1}\frac{dh}{dr}\bigg|_{r=r_{B}} \delta t\\
&= -\frac{1}{2} L^d r^{z+d}\biggl[ -r\frac{df}{dr} -2z f+ fd
\ln\biggl(\frac{c\bar{c}}{-fL^2 r^{2z}} \biggr) \biggr]\bigg|_{r=r_{B}} \delta t,
\end{aligned}
\end{equation}
and then
\begin{equation}
\begin{aligned}
&2 \oint_{B'} a dS - 2 \oint_{B} a dS \\
&= -\Omega_{d} L^d r^{z+d}\biggl[ -r\frac{df}{dr} -2z f+ fd
\ln\biggl(\frac{c\bar{c}}{-fL^2 r^{2z}} \biggr) \biggr]\bigg|_{r=r_{B}} \delta t
\end{aligned}
\end{equation}
Making use of the explicit expression for $f$ and taking the late-time limit $r_{B} \to r_{+}$, the log term vanishes, and the result is
\begin{equation}
2 \oint_{B'} a dS - 2 \oint_{B} a dS
=(z+d)L^d\Omega_d {r_{+}}^{d+z}\delta t
\label{dS:joint}
\end{equation}
Combining Eqs.~(\ref{dS:volume}), (\ref{dS:surface}), and
(\ref{dS:joint}), we have
\begin{equation}
\delta S = (2z+2d-2)L^d\Omega_d {r_+}^{d+z}\delta t
\end{equation}
in the late-time limit.
From Ref.~\cite{Pang:2009ad}, we know the mass of the Lifshitz spacetime:
\begin{equation}
M=\frac{r^{z+d}_{+}dL^{d}\Omega_{d}}{16\pi G_{d+2}},
\label{m}
\end{equation}
This mass is the Komar integral with the zero-temperature background subtracted to remove the infinite volume divergence \cite{Taylor:2008tg}. So we have
\begin{equation}
\frac{dS}{dt} = 32\pi G_{d+2}\frac{z+d-1}{d}M
\end{equation}
and in a more usual convention,
\begin{equation}
\frac{dI}{dt} = 2\frac{z+d-1}{d}M
\end{equation}
We can see that, when $z=1$,
\begin{equation}
\frac{dI}{dt} = 2M
\end{equation}
which is just the result of Ref.~\cite{Lehner:2016vdi}.
When $z>1$ we have $\frac{dI}{dt} >2M$, so the bound of \cite{Brown:2015bva,Brown:2015lvg} is violated even in the late-time limit.
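The size of the violation follows directly from the rate above; a trivial numerical check (ours, taking $d=2$ for the four-dimensional brane):
\begin{verbatim}
# Late-time rate relative to the 2M bound: (dI/dt)/(2M) = (z+d-1)/d,
# which exceeds 1 whenever z > 1 (here d = 2 brane dimensions).
for z in (1, 2, 3):
    d = 2
    print(z, (z + d - 1) / d)  # 1.0, 1.5, 2.0
\end{verbatim}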
\subsection{General time behavior of the Lifshitz black brane}
In this subsection, we investigate the full time evolution of the action growth and see whether the late-time result is approached from below or above. We divide the Wheeler-DeWitt patch into three parts as in Fig.~1. When $t<t_c$, the action is composed of the following three parts:
\begin{equation}
I_{tot}=I_{bulk}+I_{GHY}+I_{joint}
\end{equation}
where $I_{bulk}$ is the bulk term, $I_{GHY}$ comprises the GHY surface terms at $r=\epsilon\to 0$ and $r=r_{max}\to \infty$, and the joint terms are those formed by null and spacelike (timelike) surfaces at $r=\epsilon$ ($r=r_{max}$).
For $t<t_c$, we have
\begin{equation}
I_{bulk}^1=-\int^{r_+}_{\epsilon_0}\frac{(z+d)L^d\Omega_d}{8\pi G}(\frac{t}2+r^*(\infty)-r^*(r))r^{z+d-1}dr
\end{equation}
\begin{equation}
I_{bulk}^2=-\int^{r_{max}}_{r_+}\frac{(z+d)L^d\Omega_d}{8\pi G}2(r^*(\infty)-r^*(r))r^{z+d-1}dr
\end{equation}
\begin{equation}
I_{bulk}^3=-\int^{r_+}_{\epsilon_0}\frac{(z+d)L^d\Omega_d}{8\pi G}(-\frac{t}2+r^*(\infty)-r^*(r))r^{z+d-1}dr
\end{equation}
We see total cancellation, so the bulk term is independent of time.
The GHY terms are calculated as
\begin{equation}
I_{surf}^{future}=\frac{r^{z+d}L^d\Omega_d}{8\pi G}(\frac{t}2+r^*(\infty)-r^*(r))[(z+d)|f|+\frac{r}{2}\partial_{r}|f|]\Big|_{r=\epsilon_0}
\end{equation}
\begin{equation}
I_{surf}^{past}=\frac{r^{z+d}L^d\Omega_d}{8\pi G}(-\frac{t}2+r^*(\infty)-r^*(r))[(z+d)|f|+\frac{r}{2}\partial_{r}|f|]\Big|_{r=\epsilon_0}
\end{equation}
\begin{equation}
I_{surf}^{UV cutoff}=\frac{r^{z+d}L^d\Omega_d}{8\pi G}(2(r^*(\infty)-r^*(r))[(z+d)|f|+\frac{r}{2}\partial_{r}|f|]\Big|_{r=r_{max}}
\end{equation}
It can be seen that the total boundary term is also independent of time.
Joint terms where the null boundaries intersect the past and future singularity surfaces vanish, and the joint terms where the null boundaries intersect the UV cutoff are independent of time \cite{Chapman:2016hwi}, so they do not contribute to the complexity growth.
We conclude that $\frac{dI}{dt}=0$ when $t<t_c$.
When $t>t_c$, the only difference is that the intersections of the null boundaries with the past singularity surface are replaced by the intersection between the two past null boundaries.
We calculate in the same way
\begin{equation}
\label{feww}
I_{bulk}^1=-\int^{r_+}_{\epsilon_0}\frac{(z+d)L^d\Omega_d}{8\pi G}(\frac{t}2+r^*(\infty)-r^*(r))r^{z+d-1}dr
\end{equation}
\begin{equation}
I_{bulk}^2=-\int^{r_{max}}_{r_+}\frac{(z+d)L^d\Omega_d}{8\pi G}2(r^*(\infty)-r^*(r))r^{z+d-1}dr
\end{equation}
\begin{equation}
I_{bulk}^3=-\int^{r_+}_{r_m}\frac{(z+d)L^d\Omega_d}{8\pi G}(-\frac{t}2+r^*(\infty)-r^*(r))r^{z+d-1}dr
\end{equation}
Combining them all, we get
\begin{equation}
I_{bulk}-I_{bulk}^0=-\int^{r_m}_0\frac{(z+d)L^d\Omega_d}{4\pi G}(\frac{t}2-r^*(\infty)+r^*(r))r^{z+d-1}dr
\label{delta1}
\end{equation}
where we have included the factor of two coming from the two sides of the WdW patch.
For the GHY boundary terms, in the absence of the surface term at the past singularity, the contribution also depends on time:
\begin{equation}
\begin{aligned}
I_{surf}^{past}=\frac{r^{z+d}L^d\Omega_d}{8\pi G}(&-\frac{t}2+r^*(\infty)\\
&-r^*(r))[(z+d)|f|+\frac{r}{2} \partial_{r} |f|]\Big|_{r=\epsilon_0}
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
I_{surf}-I_{surf}^{0}&=-2I_{past} \\
&=-\frac{r^{z+d}L^d\Omega_d}{4\pi G}(-\frac{t}2+r^*(\infty)-r^*(r))\\
&\times[(z+d)|f|+\frac{r}{2}\partial_{r} |f|]\Big|_{r=\epsilon_0}
\label{delta2}
\end{aligned}
\end{equation}
where the factor of two comes from the two sides of the WdW patch.
Joint terms where the null boundaries intersect the past and future singularity surfaces vanish, and the joint term where the null boundaries intersect the UV cutoff is independent of time. So we only need to consider the intersection of the two past null boundaries at $r=r_{m}$, where $r_{m}$ satisfies the equation:
\begin{equation}
\frac{t}{2}+r^*(r_m)-r^*(\infty)=0
\end{equation}
For $\delta t=t-t_c$, we can rewrite the above equation as
\begin{equation}
\frac{\delta t}2=r^*(0)-r^*(r_{m})
\end{equation}
Varying it with respect to $t$, we get
\begin{equation}
\frac{dr_m}{dt}=-\frac12 r^{z+1}_m f(r_m)
\label{drdt}
\end{equation}
We choose the normal vectors of null boundaries to be
\begin{equation}
k_L=\alpha(dt+\frac{dr}{r^{z+1}f})
\end{equation}
\begin{equation}
k_R=-\alpha(dt-\frac{dr}{r^{z+1}f})
\end{equation}
The joint term is
\begin{equation}
\begin{aligned}
I_{joint}=\frac1{8 \pi G}\oint a ds&=\frac1{8 \pi G}\oint\log(-\frac12{k_L}\cdot{k_R})ds\\
&=\frac{L^d\Omega_d r^d}{8\pi G}\log(\frac{-\alpha^2}{L^2 r^{2z} f})|_{r=r_m}
\label{delta3}
\end{aligned}
\end{equation}
Combining equations ($\ref{delta1}$), ($\ref{delta2}$), ($\ref{drdt}$) and ($\ref{delta3}$), we get
\begin{equation}
\begin{split}
\frac{dI}{dt}&=\frac{dI_{bulk}}{dt}+\frac{dI_{surf}}{dt}+\frac{dI_{joint}}{dt}\\
&=-\frac{L^d\Omega_d}{8\pi G}r_m^{z+d}+\frac{(z+d)r_+^{z+d}L^d\Omega_d}{16\pi G}\\
&+\frac{L^d\Omega_d}{8\pi G}[-\frac{d}2 f(r_m) r_m^{z+d}\log(\frac{-\alpha^2}{L^2 r_m^{2z} f(r_m)})+z r_m^{d+z}+\frac{d-z}2 r_+^{z+d})]\\
&=\frac{L^d\Omega_d}{8\pi G}r_m^{z+d} (z-1)+\frac{d r_+^{z+d}L^d\Omega_d}{8\pi G}\\
&-\frac{d L^d\Omega_d}{16\pi G}[ f(r_m) r_m^{z+d}\log(\frac{-\alpha^2}{L^2 r_m^{2z} f(r_m)})]
\label{final}
\end{split}
\end{equation}
In the late-time limit, $r_m\rightarrow r_+$ and $f(r_m)\rightarrow 0$, so the last term of (\ref{final}) is zero. The complexity growth rate then simplifies to
\begin{equation}
\frac{dI}{dt}=\frac{(d+z-1) r_+^{z+d}L^d\Omega_d}{8\pi G}
\end{equation}
which recovers the late-time result.
To see whether the limit is approached from above or below, we plot the relation between the complexity growth rate and time in four dimensions for various $z$ in Fig.~4.
\begin{figure}
\includegraphics[width=0.55\textwidth]{cgrowth2.pdf}\\
\caption{The relation between the complexity growth rate and boundary time. We choose the normalization factor $\alpha=L r_{+}^{z}$; the green/blue/red lines correspond respectively to $z=3$/$z=2$/$z=1$. We find that the rate approaches the late-time bound from above, and for $z$ sufficiently large the complexity growth rate experiences a decreasing period at early times.}
\label{fig:complexity growth rate in different z}
\end{figure}
\section{Conclusion and discussion}
\label{Con}
In this paper, we investigate the effect of dilaton that is coupled to Maxwell field on the holographic complexity growth . In Section two, we investigate the black hole solution which is asymptotically AdS . In this case we find the presence of dilaton charge can slow down the computation rate, and when we take into account the full time evolution, we find the complexity growth rate approaches the late time bound from above like Ref \cite{Carmi:2017jqz}. In Section Three, we investigate the case which is asymptotically Lifshitz, in this case, dilaton is essential to support anisotropic vacuum structure. We find that in this case, the Lloyd bound is violated even in the late time limit. We investigate the full time evolution of the holographic complexity growth, it approaches the late time bound from above , moreover, for z sufficiently large , it exhibit an interesting behavior in time which is above critical time but much earlier than late time.
Although the above two cases are both solution in EMD theory, the roles dilaton play are very different. For the AdS dilaton example, dilaton charge and electro charge are free parameters and appear in the black hole solutions. While in Lifshitz case, dilaton and maxwell field are added to maintain the anisotropic scaling background. It doesn't contribute to the black hole solution, which means that the black hole only has one free parameter M. Holographically, eternal AdS black hole is dual to thermofield double state $| TFD \rangle$, in Schwartzchild case,
\begin{equation}
| TFD \rangle =\sum_{n} e^{-\beta E_{n}} | n_{L} \rangle |n_{R} \rangle
\end{equation}
In RN case
\begin{equation}
| TFD \rangle =\sum_{n} e^{-\beta (E_{n}-\mu Q)} | n_{L} \rangle |n_{R} \rangle
\end{equation}
what appears on the exponential depends on the thermodynamic relation. In Lifshitz case, because the charge is fixed cosntant. The first law of thermodynamics is just $dM=T dS$. So from boundary respect, there is no chemical potential term in thermofield double state. In other words, Ref.\cite{Brown:2015lvg} gives the complexity growth inequality
\begin{equation}
\frac{dC}{dt} \leqslant \int_{gs}^{S} T dS
\end{equation}
so from thermodynamical relation $dM=T dS$, we conclude that the complexity growth rate will only depend on the mass despite the presence of matter field in our Lifshitz case.
Ref.\cite{Yang:2016awy} proves that the action growth obeys the bound 2M under the following two conditions : (1) the matter field locates outside the outmost killing horizon (2) the strong energy condition is obeyed. In our two cases, the matter field extends into the killing horizon,so it doesn't satisfy the condition required in \cite{Yang:2016awy}. In AdS dilaton case, the bound $2M$ is satisfied ,while in Lifshitz case, it is violated. So the requirements in \cite{Yang:2016awy} are too restrictive, our calculation shows that weaker condition is needed to prove the late time bound from the bulk side and distinguish these two cases.
Recent study \cite{Swingle:2017zcd} investigate the action growth of the hyper-scaling violation background in EMD theory.They found that it depends on two parameters, the "dynamical exponent" $z$ and hyperscaling violation exponent $\theta$. They showed when $\theta=0$, their results match our Lifshitz result.
There are several interesting directions to pursue, the reason of the violation of the Lloyd bound is still unclear.The EMD theory is not expected to be a UV complete theory of quantum gravity, so it would be interesting to take into account the stringy effect near the singularity and recalculate the action growth.Moreover, for full time evolution of complexity growth in Lifshitz black brane, for z sufficiently large, such as $z=3$, the complexity growth rate exhibits new behaviors that are not found in \cite{Carmi:2017jqz}, it is interesting to investigate what it means physically when z becomes large. \footnote{Recently this problem is investigated in Ref.\cite{Alishahiha:2018tep} by adding a counter term to remove the normalization ambiguity in the joint term. There is no strong motivation to add this term to remove the ambiguity unless field theory suggests us to do that. However as Ref. \cite{Jefferson:2017sdb} suggests , this normalization ambiguity is somehow related to the choice of reference state in the field theory construction of complexity}Although CA conjecture proposed in \cite{Brown:2015bva,Brown:2015lvg} passes many nontrivial tests including switchback effect \cite{Stanford:2014jda}, it is still unproved.Recent studies \cite{Carmi:2017jqz} implies that although CA conjecture captures some essence in complexity, it needs to be revised. Our work's result on Lifshitz black hole also gives the evidence that the revision is needed in CA duality.
Apart from CA duality, there are also other proposals that relate complexity to other bulk quantities, such as \cite{Susskind:2014rva,Couch:2016exn,Alishahiha:2015rta,Momeni:2016ira,Momeni:2016ekm}, it is interesting to calculate complexity growth rate in these proposals and investigate if our results still hold. We will discuss the thermodynamic volume proposal \cite{Couch:2016exn} in the following
The Ref. \cite{Couch:2016exn} related the complexity on the boundary to the spacetime volume of WdW patch and gave the proposal complexity-volume duality 2.0. It is
\begin{equation}
C \sim \frac{1}{\hbar} P (spacetime volume)
\end{equation}
In our Lifshitz case, because of following relation
\begin{equation}
\begin{aligned}
R-2\Lambda-\frac12 \partial_\mu\phi\partial^\mu\phi-\frac14 e^{\lambda\phi}F_{\mu\nu}F^{\mu\nu}=\frac{-2(z+d)}{L^2}
\end{aligned}
\end{equation}
the bulk action contribution is proportional to the spacetime volume
\begin{equation}
I_{bulk}=-\frac{z+d}{8 \pi G L^{2}} \int d^{d+2} x \sqrt{-g}
\end{equation}
so there is no essential difference between the volume and bulk action of WdW patch.
We calculate the complexity growth rate using CV 2.0 proposal explicitly.
In extended phase space, the pressure is identified as the cosmological constant
\begin{equation}
P=-\frac{\Lambda}{8\pi G}=\frac{(z+d)(z+d-1)}{16 \pi G L^{2}}
\end{equation}
The spacetime volume inside the WdW patch is calculated from \ref{feww} . And when $t>t_{c}$, the time dependence of the volume is
\begin{equation}
\frac{dV}{dt}=\frac{L^{d+2}\Omega_{d} r_{m}^{z+d}}{z+d}
\end{equation}
After taking the late time limit ( $r_{m} \to r_{+}$). The complexity growth rate is
\begin{equation}
\label{dcdt}
\frac{dC}{dt}=\frac{(z+d-1)M}{d}
\end{equation}
We see in the complexity-volume 2.0 proposal, the result is half of the CA proposal, the Lloyd bound 2M is satisfied when z is not large enough, but as in Ref.\cite{Couch:2016exn} the Lloyd bound can be altered up to a pre-factor
\begin{equation}
\frac{dC}{dt} \leqslant \frac{\alpha E}{ \pi \hbar}
\end{equation}
the Ref.\cite{Couch:2016exn} shows that in AdS-RN case, this bound is satisfied for $\alpha=1$. In CV 2.0 proposal if we set $\alpha=1$ to be the Lloyd bound, this bound is violated in the Lifshitz case at late time because of the anisotropic scaling z as in the CA proposal.
Moreover, as conjectured in Ref. \cite{Couch:2016exn}, in various cases ,the late time value of the time derivative of spacetime volume of the WdW patch reduces to the thermodynamical volume( or the difference between two thermodynamic volume in the case of two horizons). In Lifshitz case, this late time limit of spacetime volume derivative reads
\begin{equation}
V=\frac{L^{d+2}\Omega_{d} r_{+}^{z+d}}{z+d}
\end{equation}
when $z=1$ in four dimension, it reduces to the familiar thermodynamic volume of Schwartzchild case $\frac{4 \pi r^{3}_{+}}{3}$. While when $z \neq 1$, because the profile of matter field depends on the cosmological constant nontrivially, it is hard to use Iyer-Wald formalism \cite{Iyer:1994ys} to derive the explicit expression of the thermodynamic volume. So in the further work , it will be interesting to derive the thermodynamic volume in Lifshitz case and see if it is the same as our late time limit result. It will provide a convincing evidence for the conjecture in Ref.\cite{Couch:2016exn} that thermodynamic volume is essential for complexity growth.
Although there have been many discussions on the complexity in holographic side, a concrete definition of complexity in quantum field theory still lacks. Recently, there are many works concerning this question. \cite{Jefferson:2017sdb,Chapman:2017rqy} propose some definitions about complexity in field theory side using the Finsler geometry and Fubini-Study metric. \cite{Caputa:2017yrh} discretizes the Euclidean path integral and defines the "path integral complexity" in terms of Liouville action. For concrete calculation in field theory side, see also\cite{Kim:2017qrq,Yang:2017nfn, Hashimoto:2017fga}.
Once a concrete and calculable definition of complexity is given, the problem of CA duality conjecture will be clarified to large extent. Moreover, apart from the problems from CA conjecture, the assumptions made in the process to derive the Lloyd bound should also be checked \cite{Cottrell:2017ayj}.
\begin{acknowledgments}
We want to thank Prof. Ronggen Cai for useful advice and encouragement during this work . And we also want to thank Run-Qiu Yang, Shao-Jiang Wang for useful help and discussions during this work.
\end{acknowledgments}
\bibliographystyle{utphys}
\section{Introduction}
The AdS/CFT correspondence shows that gravity theory in anti-de Sitter space is dual to conformal field theory on its boundary, which implies a striking relation between gauge theory and gravity theory~\cite{Maldacena:1997re}. As it is a weak-strong duality, we are able to investigate various strong-coupling effects in field theory from the gravity side. A systematic rule of correspondence between the two sides has been proposed, the so-called holographic dictionary~\cite{Gubser:1998bc,Witten:1998qj}. In 2006, Ryu and Takayanagi found that entanglement entropy in the boundary field theory is dual to the minimal area of a boundary-anchored surface in the bulk~\cite{Ryu:2006bv, Ryu:2006ef}. This work illuminates the relation between quantum information theory and gravity theory and implies that we can investigate quantum gravity effects via a quantum information approach.
In recent years, there has been growing interest in another field theory quantity called circuit complexity. In quantum computation theory, circuit complexity is defined to be the minimal number of simple gates required to build a typical state from a reference state within a small tolerance $\epsilon$. In order to interpret the growth of the size of the Einstein-Rosen bridge behind the horizon, the authors of Ref.~\cite{Susskind:2014rva} proposed that the ERB size is dual to the quantum complexity on the boundary, which is called CV duality. Later, another proposal stated that complexity is dual to the on-shell action of the WdW patch~\cite{Brown:2015bva,Brown:2015lvg}, which is called CA duality. By calculating the action growth~\cite{Brown:2015bva,Brown:2015lvg,Lehner:2016vdi}, they found that the action obeys a bound called the Lloyd bound~\cite{Lloyd:2000}, which is obtained in quantum computation using the energy-time uncertainty principle. There are many works concerning this bound in various backgrounds \cite{Cai:2016xho, Cai:2017sjv,Swingle:2017zcd}. Moreover, a recent study \cite{HosseiniMansoori:2017tsm} shows the connection between the butterfly velocity and the complexity growth rate.
In the context of string theory, when the low energy limit is taken, a scalar field called the dilaton occurs in the action. The dilaton couples to other fields in various nontrivial ways; as an example, \cite{Garfinkle:1990qj} investigates the case where the dilaton couples to the Maxwell field, acting like the coupling constant of the Maxwell action. The appearance of the dilaton field changes the spacetime structure drastically and gives us more interesting black hole solutions. \cite{Garfinkle:1990qj} obtains a dilaton black hole solution in asymptotically flat spacetime, and \cite{Gao:2004tu} generalizes this to asymptotically dS and AdS spacetimes by introducing a Liouville-type potential. Much interest has been paid to the thermodynamic behavior of these black holes \cite{Sheykhi:2009pf,Li:2017kkj}, but a detailed analysis of their complexity behavior is still yet to be done.
Despite the success of the AdS/CFT correspondence, there are also many investigations beyond AdS. In AdS, the space and time coordinates scale isotropically, while in order to describe some condensed matter systems, an anisotropic scaling is needed. Refs.~\cite{Kachru:2008yh,Taylor:2008tg,Tarrio:2011de,Dong:2012se} come up with many backgrounds which support the scaling
\begin{equation}
\begin{split}
t \to \lambda^{z} t \\
x \to \lambda x
\end{split}
\end{equation}
Among them, Ref.~\cite{Taylor:2008tg} finds that the Einstein-Maxwell-Dilaton system has such a non-trivial black hole solution. While there are many investigations of the thermodynamic and hydrodynamic properties of this solution \cite{Liu:2014dva,Pang:2009ad}, its complexity behavior is still under investigation.
In this work, we use the CA proposal and investigate the action growth of various black hole solutions in Einstein-Maxwell-Dilaton theory. In Section 2, we investigate a dilaton black hole in an AdS vacuum; although an electromagnetic field is present, the Penrose diagram resembles the Schwarzschild case. While Ref.~\cite{Cai:2017sjv} investigates the late-time behavior of this black hole, we investigate its full-time evolution and show that it approaches the late-time bound from above, as in Ref.~\cite{Carmi:2017jqz}. In Section 3, we go beyond the AdS case and investigate the Lifshitz-type black hole \cite{Taylor:2008tg}; we calculate the on-shell action in the WdW patch of this black hole and find that it violates the Lloyd bound even in the late time limit. In Section 4, we discuss our results and give some interpretation.
\section{Full time evolution of action in charged dilaton black hole in AdS space}
\subsection{Charged dilaton black hole}
We consider the Einstein-Maxwell-Dilaton theory with the action
\begin{equation}
S=\frac{1}{16 \pi} \int d^{4}x \sqrt{-g}(R-2(\partial \phi)^{2}-V(\phi)-e^{-2\phi}F^{2})
\label{bulka}
\end{equation}
where the Liouville-type potential is
\begin{equation}
V(\phi)=-\frac{4}{l^{2}}-\frac{1}{l^{2}}[e^{2(\phi-\phi_{0})}+e^{-2(\phi-\phi_{0})}]
\end{equation}
When $\phi=\phi_{0}$, it reproduces the usual cosmological constant term in AdS space. Varying the action, we get the equations of motion:
\begin{equation}
R_{\mu\nu}=2\partial_{\mu} \phi \partial_{\nu} \phi+\frac{1}{2} g_{\mu\nu} V+2 e^{-2\phi}(F_{\mu\alpha} F^{\alpha}_{\nu}-\frac{1}{4} g_{\mu\nu} F^{2})
\end{equation}
\begin{equation}
\partial_{\mu}(\sqrt{-g} e^{-2\phi} F^{\mu\nu})=0
\end{equation}
\begin{equation}
\partial^{2}\phi=\frac{1}{4} \frac{dV}{d\phi}-\frac{1}{2}e^{-2\phi} F^{2}
\end{equation}
There exists a static spherically symmetric black hole solution
\begin{equation}
ds^{2}=-f(r) dt^{2}+\frac{dr^{2}}{f(r)}+r(r-2D) d \Omega^{2}
\end{equation}
where $f(r)=1-\frac{2M}{r}+\frac{r(r-2D)}{l^{2}}$.
The electromagnetic field and dilaton $\phi$ can be obtained via the equations of motion: $F_{tr}=\frac{Q e^{2\phi}}{r(r-2D)}$, $e^{2\phi}=e^{2\phi_{0}}(1-\frac{2D}{r})$, where $\phi_{0}$ is an integration constant. $D$ is the conserved dilaton charge, $D=\frac{Q^{2} e^{2\phi_{0}}}{2M}$. \\
Unlike the usual RN case, although the system carries an electromagnetic field, because of the dilaton coupling there is a new curvature singularity located at $r=2D$ (between the inner and outer horizons). This seems natural because of the instability of the inner horizon of the RN black hole. Therefore, the Penrose diagram looks like the AdS-Schwarzschild black hole rather than the AdS-RN black hole. \cite{Cai:2017sjv} investigated the late time behavior of the complexity growth, and the result is
\begin{equation}
\frac{d C}{dt}=2M-\mu Q- D
\end{equation}
and \cite{Cai:2017sjv} proposed that the existence of the dilaton will slow down the rate of complexity growth. A recent study \cite{Carmi:2017jqz} shows that in the AdS-Schwarzschild case the bound proposed in \cite{Cai:2017sjv} is approached from above, and is therefore violated at early times. In the next section, we will show that this early-time violation also occurs in the present situation.
\begin{figure}
\includegraphics[width=0.4\textwidth]{penrose1.png}
\includegraphics[width=0.4\textwidth]{penrose2.png}\\
\caption{WdW patch before (left) and after (right) the critical time. We assume the boundary times satisfy $t_{L}=t_{R}$; at the right (left) boundary, bulk time flows in the same (opposite) direction as the boundary time. In calculating the bulk contribution to the total action, we partition the spacetime into three regions.}\label{fig:WDWpatch of the charged dilaton black hole}
\end{figure}
\subsection{General time dependence of the Action}
Ref.~\cite{Lehner:2016vdi} gives a method to calculate the action in the presence of null boundaries; the expression is as follows
\begin{equation}
\begin{split}
I = & \frac{1}{16 \pi G_N} \int_\mathcal{M} d^{4} x \sqrt{-g} \left(\mathcal R -2 \left(\partial \phi \right)^{2}-V (\phi)-e^{-2\phi}F^{2}\right) \\
&\quad+ \frac{1}{8\pi G_N} \int_{\mathcal{B}} d^3 x \sqrt{|h|} K + \frac{1}{8\pi G_N} \int_\Sigma d^{2}x \sqrt{\sigma} \eta
\\
&\quad -\frac{1}{8\pi G_N} \int_{\mathcal{B}'}
d\lambda\, d^{2} \theta \sqrt{\gamma} \kappa
+\frac{1}{8\pi G_N} \int_{\Sigma'} d^{2} x \sqrt{\sigma} a\,.
\end{split}
\end{equation}
Here, we follow the conventions of \cite{Carmi:2017jqz,Carmi:2016wjl}. The terms in the expression above are, respectively, the bulk term, the GHY boundary term, the Hayward joint term \cite{Hayward:1993my}, the null boundary term and the null joint term. We choose affine parametrization and set $\kappa=0$ in the following, so the contribution of the null boundaries vanishes.
We consider the full time evolution of the action in this black hole (we set $G=1$ in this section for simplicity). We assume the boundary times obey $t_{L}=t_{R}=\frac{t}{2}$; the time dependence has two stages. First, the past null boundary intersects the past singularity, and we have the past GHY boundary term; after an amount of time, which we call the critical time, the two past null boundaries intersect each other, and a null joint term replaces the GHY boundary term. The complexity behavior is different below and above the critical time. So, we first determine the critical time $t_{c}$\\
\begin{equation}
\frac{t_{c}}{2}-r^{*}(\infty)=t-r^{*}(0)
\end{equation}
\begin{equation}
-\frac{t_{c}}{2}+r^{*}(\infty)=t+r^{*}(0)
\end{equation}
the critical time is
\begin{equation}
t_{c}=2(r^{*}(\infty)-r^{*}(0))
\end{equation}
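As a sanity check, the critical time can be evaluated numerically once the parameters are fixed. The following minimal sketch is illustrative only (the values of $M$, $D$, $l$ are assumptions, with $G=1$): it computes $r^{*}$ by quadrature, treating the simple pole of $1/f$ at the outer horizon as a Cauchy principal value, the prescription that renders the combination $r^{*}(\infty)-r^{*}(2D)$ finite.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative (assumed) parameters, G = 1 units.
M, D, l = 1.0, 0.2, 1.0
f = lambda r: 1.0 - 2.0*M/r + r*(r - 2.0*D)/l**2

r_s = 2.0*D                              # curvature singularity
r_plus = brentq(f, r_s + 1e-9, 10.0*M)   # outer horizon, f(r_plus) = 0

def g(r):
    # write 1/f = g(r)/(r - r_plus), with g smooth across the horizon
    if abs(r - r_plus) < 1e-9:
        eps = 1e-6                       # removable point: g = 1/f'(r_plus)
        return 2.0*eps/(f(r_plus + eps) - f(r_plus - eps))
    return (r - r_plus)/f(r)

r_max = 200.0                            # numerical proxy for r -> infinity
pv, _ = quad(g, r_s, r_max, weight='cauchy', wvar=r_plus, limit=400)
t_c = 2.0*pv   # t_c = 2 (r*(infinity) - r*(singularity)); text writes r*(0)
print("r_+ =", round(r_plus, 4), "  t_c ~", round(t_c, 4))
\end{verbatim}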
When $t< t_{c}$, the contribution contains three parts:
\begin{equation}
S=S_{bulk}+S_{GHY}+S_{joint}
\end{equation}
The bulk contribution is the Einstein-Hilbert action plus the matter field action, which is ($\ref{bulka}$); the surface terms come from the past and future singularity surfaces and the cutoff surface at $r_{max}$. The joint terms are null-spacelike/null-timelike joints, which occur at the singularity/cutoff surface respectively.
The bulk term is
\begin{equation}
S=\frac{1}{16 \pi} \int d^{4}x [4r(r-2D)+(r-2D)^{2}+r^{2}-\frac{2Q^{2}l^{2} e^{2\phi_{0}}}{r^{2}}]
\end{equation}
It is convenient to divide the integral region into three portions and the result is as follows
\begin{equation}
\begin{aligned}
S_{1}=-\frac{1}{4l^{2}} \int _{2D}^{rh} dr [ &4r(r-2D)+(r-2D)^{2}+r^{2}\\
&-\frac{2Q^{2}l^{2} e^{2\phi_{0}}}{r^{2}}](\frac{t}{2}+r^{*}_{\infty}-r^{*}(r))
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
S_{2}=-\frac{1}{2l^{2}} \int _{rh}^{rmax} dr [& 4r(r-2D)+(r-2D)^{2}+r^{2}\\
&-\frac{2Q^{2}l^{2} e^{2\phi_{0}}}{r^{2}}](r^{*}_{\infty}-r^{*}(r))
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
S_{3}=-\frac{1}{4l^{2}} \int _{2D}^{rh} dr [ &4r(r-2D)+(r-2D)^{2}+r^{2}\\
&-\frac{2Q^{2}l^{2} e^{2\phi_{0}}}{r^{2}}](-\frac{t}{2}+r^{*}_{\infty}-r^{*}(r))
\end{aligned}
\end{equation}
We see that the time dependences cancel each other, so the bulk term is independent of time.
Then, we calculate the surface term at singularity. We choose
\begin{equation}
n_{\alpha}=-| f(r)|^{-1/2} \partial_{\alpha}r
\end{equation}
and the extrinsic curvature is
\begin{equation}
K=\nabla_{\alpha} n^{\alpha}=\frac{1}{U^{2}} \frac{d}{dr}(U^{2} n^{r})
\end{equation}
where $U^{2}=r(r-2D)$
So the GHY action is
\begin{equation}
I_{future}=\frac{1}{2} |f|^{1/2} \frac{d}{dr}(r(r-2D)|f|^{1/2})(\frac{t}{2}+r^{*}_{\infty}-r^{*}(r))|_{r=2D}
\end{equation}
\begin{equation}
I_{past}=\frac{1}{2}|f|^{1/2} \frac{d}{dr}(r(r-2D)|f|^{1/2}) (-\frac{t}{2}+r^{*}_{\infty}-r^{*}(r))|_{r=2D}
\end{equation}
\begin{equation}
I_{cutoff}=|f|^{1/2} \frac{d}{dr}(r(r-2D) |f|^{1/2})(r^{*}_{\infty}-r^{*}(r)) |_{r=r_{max}}
\end{equation}
We see the cancellation between the surface terms at the past and future singularities, and from \cite{Chapman:2016hwi} we know that the joint terms are independent of time. Combining the above results, we see the action is constant until $t=t_{c}$. \\
When $t>t_{c}$, a null joint forms at $r=r_{m}$ and there is no surface term from the past singularity. $r_{m}$ is obtained from the equation
\begin{equation}
\frac{t-t_{c}}{2}+r^{*}(r_{m})-r^{*}(0)=0
\end{equation}
The null joint term depends on time implicitly through $r_{m}$. \\
The bulk action is
\begin{equation}
\begin{aligned}
I_{bulk}=I_{bulk}^{0}-\frac{1}{4L^{2}}\int _{2D}^{rm} dr [& 4r(r-2D)+(r-2D)^{2}+r^{2}\\
&-\frac{2Q^{2}l^{2} e^{2\phi_{0}}}{r^{2}}](\frac{t}{2}-r^{*}_{\infty}+r^{*}(r))
\end{aligned}
\end{equation}
So the change of the bulk action compared with the $t<t_{c}$ case is
\begin{equation}
\begin{aligned}
\delta I_{bulk}=-\frac{1}{4L^{2}} \int _{2D}^{rm} dr [ &4r(r-2D)+(r-2D)^{2}+r^{2}\\
&-\frac{2Q^{2}l^{2} e^{2\phi_{0}}}{r^{2}}](\frac{\delta t}{2}+r^{*}(r)-r^{*}(0))
\label{bulka2}
\end{aligned}
\end{equation}
where $\delta t=t-t_{c}$. Because the surface term at the past singularity is absent, the surface contribution also depends on $t$:
\begin{equation}
I_{surf}=I_{0}-I_{past}
\end{equation}
so
\begin{equation}
\delta I_{surf}=\frac{1}{2}\frac{d}{dr}(r(r-2D)|f|^{1/2}) |f|^{1/2} (\frac{\delta t}{2}+r^{*}(r)-r^{*}(0))|_{r=2D}
\label{surf2}
\end{equation}
For the null joint term,
\begin{align}
k_{R}=-\alpha dt + \alpha \frac{dr}{f(r)} \\
k_{L}=\alpha dt + \alpha \frac{dr}{f(r)}
\end{align}
\begin{equation}
a=\log(-\frac{1}{2}k_{R} \cdot k_{L})=-\log\left(\frac{|f|}{\alpha^{2}}\right)
\end{equation}
so the joint term reads
\begin{equation}
\begin{split}
&I_{jnt}=\frac{1}{8\pi } \int_{\Sigma'} d^{2}x \sqrt{\sigma} a \\
&\quad =-\frac{r_{m}(r_{m}-2D)}{2} \log\frac{|f(r_{m})|}{\alpha^{2}}\,.
\end{split}
\label{jnt2}
\end{equation}
Combining the above results ($\ref{bulka2}$, $\ref{surf2}$, $\ref{jnt2}$) and taking the derivative with respect to $t$, recalling that $\frac{dr_{m}}{dt}=-\frac{1}{2}f(r_{m})$, we finally get the action growth rate at times $t>t_{c}$
\begin{equation}
\frac{dI_{tot}}{dt}=2M-{\mu_{m}}Q-D+\frac{1}{2}(r_{m}-D)f(r_{m}) \log \frac{|f(r_{m})|}{\alpha^{2}}
\end{equation}
where $\mu_{m}=\frac{Qe^{2\phi_{0}}}{r_{m}}$.
In the late time limit $r_{m} \to r_{+}$, we see that $\mu_{m}$ becomes the chemical potential and the last term vanishes, so we recover the late time result of \cite{Cai:2017sjv}. The rate of growth of the action for $t>t_{c}$ is plotted in Fig.~2.
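To make the behavior in Fig.~2 reproducible, the growth rate can be evaluated parametrically in $r_{m}$; the sketch below is illustrative only (the values of $M$, $D$, $l$, $\alpha$, and the choice $\phi_{0}=0$ are assumptions), with time measured from the critical time.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

M, D, l, alpha = 1.0, 0.2, 1.0, 1.0      # illustrative assumptions
Q = np.sqrt(2.0*M*D)                     # from D = Q^2 e^{2 phi_0}/(2M)
f = lambda r: 1.0 - 2.0*M/r + r*(r - 2.0*D)/l**2
r_plus = brentq(f, 2.0*D + 1e-9, 10.0*M)

def delta_t(rm):
    # delta t = 2 (r*(2D) - r*(rm)); the range stays inside the horizon
    val, _ = quad(lambda r: 1.0/f(r), 2.0*D, rm, limit=200)
    return -2.0*val

def dIdt(rm):
    mu_m = Q/rm                          # mu_m = Q e^{2 phi_0}/r_m, phi_0 = 0
    return (2.0*M - mu_m*Q - D
            + 0.5*(rm - D)*f(rm)*np.log(abs(f(rm))/alpha**2))

for rm in np.linspace(1.05*2.0*D, 0.999*r_plus, 5):
    print("t - t_c =", round(delta_t(rm), 3), "  dI/dt =", round(dIdt(rm), 4))
\end{verbatim}
As $r_{m}\to r_{+}$ the logarithmic term is suppressed by $f(r_{m})\to 0$, and the printed values should approach $2M-\mu Q-D$ from above.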
\begin{figure}
\includegraphics[width=0.45\textwidth]{cgrowth.pdf}\\
\caption{There are two parameters, $y=\frac{\mu Q}{2M}$ and $z=r_{+}/L$. For this picture, we fix $z=1$; the green line corresponds to $y=0.1$, the blue line to $y=0.2$, and the red line to $y=0.3$. We find the complexity growth rate approaches the late time bound from above.}
\label{fig:complexity growth rate in different chemical potential}
\end{figure}
\section{Action growth in Lifshitz-Like black brane}
\subsection{Lifshitz black brane in EMD theory}
In many condensed matter systems, near the critical point, an anisotropic scaling symmetry is expected. Gravitational systems which have the same scaling behavior have been constructed in various situations. Ref.~\cite{Kachru:2008yh} adds higher-order form fields, while Ref.~\cite{Taylor:2008tg} realizes Lifshitz spacetime using a massive vector field and Einstein-Maxwell-Dilaton theory. Here we choose the solution constructed in Ref.~\cite{Taylor:2008tg} by coupling the dilaton field to an Abelian Maxwell field. The thermodynamic behavior of Lifshitz spacetime is investigated in Ref.~\cite{Liu:2014dva}.
We consider the following action in (d+2)-dimensional spacetime
\begin{equation}
\begin{aligned}
I=\frac1{16\pi G_{d+2}}\int d^{d+2} x\sqrt{-g}[&R-2\Lambda-\frac12 \partial_\mu\phi\partial^\mu\phi \\
&-\frac14 e^{\lambda\phi}F_{\mu\nu}F^{\mu\nu}].
\end{aligned}
\end{equation}
where $\Lambda$ is the cosmological constant and the matter fields are a massless scalar and an abelian gauge field. We get the equations of motion:
\begin{equation}
\label{2eq2}
\partial_{\mu}(\sqrt{-g}e^{\lambda\phi}F^{\mu\nu})=0,
\end{equation}
\begin{equation}
\partial_{\mu}(\sqrt{-g}\partial^{\mu}\phi)-\frac{\lambda}{4}\sqrt{-g}e^{\lambda\phi}F_{\mu\nu}F^{\mu\nu}=0,
\end{equation}
\begin{equation}
\label{2eq4} R_{\mu\nu}=\frac{2}{d}\Lambda
g_{\mu\nu}+\frac{1}{2}\partial_{\mu}\phi\partial_{\nu}\phi+\frac{1}{2}e^{\lambda\phi}F_{\mu\rho}{F_{\nu}}^{\rho}
-\frac{1}{4d}g_{\mu\nu}e^{\lambda\phi}F_{\rho\sigma}F^{\rho\sigma}.
\end{equation}
These have the following asymptotically Lifshitz black hole solution:
\begin{eqnarray}
\label{2eq11}
&ds^{2}=L^{2}(-r^{2z}f(r)dt^{2}+\frac{dr^{2}}{r^{2}f(r)}+r^{2}\sum\limits^{d}_{i=1}dx^{2}_{i}),~~~
\\
&f(r)=1-\frac{r^{z+d}_{+}}{r^{z+d}}\\
&F_{rt}=qe^{-\lambda\phi}r^{z-d-1},~~~e^{\lambda\phi}=r^{\lambda\sqrt{2(z-1)d}},\nonumber\\
&\lambda^{2}=\frac{2d}{z-1},~~~q^{2}=2L^{2}(z-1)(z+d),\nonumber\\
&\Lambda=-\frac{(z+d-1)(z+d)}{2L^{2}}.
\label{eom}
\end{eqnarray}
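As a quick consistency check (a sketch, not part of the original derivation), the Maxwell equation (\ref{2eq2}) can be verified symbolically: for the metric above, $\sqrt{-g}\,e^{\lambda\phi}F^{rt}$ is independent of $r$ for any blackening factor $f(r)$ and any dilaton profile, so (\ref{2eq2}) holds identically.
\begin{verbatim}
import sympy as sp

r, z, d, q, L = sp.symbols('r z d q L', positive=True)
f = sp.Function('f')(r)                 # any blackening factor
E = sp.Function('E')(r)                 # stands for e^{lambda phi(r)}

sqrtg = L**(d + 2) * r**(z + d - 1)     # sqrt(-g) for the metric above
F_rt = q * r**(z - d - 1) / E           # F_{rt} = q e^{-lambda phi} r^{z-d-1}
F_up = (r**2 * f / L**2) * (-1/(L**2 * r**(2*z) * f)) * F_rt   # F^{rt}

expr = sp.simplify(sqrtg * E * F_up)
print(expr)                             # -> -q*L**(d - 2), independent of r
print(sp.diff(expr, r))                 # -> 0, i.e. Eq. (2eq2) holds
\end{verbatim}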
We can obtain the temperature via the Euclidean path integral
\begin{align}
T_{H}=\frac{(z+d)r^{z}_{+}}{4\pi},
\end{align}
and the black hole entropy
\begin{equation}
S_{BH}=\frac{\Omega_{d}L^{d}}{4G_{d+2}}(\frac{4\pi}{z+d})^{\frac{d}{z}}T^{\frac{d}{z}}
\end{equation}
where $\Omega_{d}$ denotes the volume of the $d$-dimensional spatial coordinates.
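As a one-line consistency check, the $T^{d/z}$ scaling follows by inverting the temperature relation:
\begin{equation*}
r_{+}=\Big(\frac{4\pi T_{H}}{z+d}\Big)^{1/z}
\qquad\Longrightarrow\qquad
S_{BH}=\frac{\Omega_{d}L^{d}}{4G_{d+2}}\,r_{+}^{d}
=\frac{\Omega_{d}L^{d}}{4G_{d+2}}\Big(\frac{4\pi}{z+d}\Big)^{\frac{d}{z}}T_{H}^{\frac{d}{z}}.
\end{equation*}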
\subsection{Late time behavior of the Lifshitz black brane}
The volume contribution is
\begin{equation}
S_{V}=\int_{V}(R-2\Lambda-\frac12\partial_\mu\phi\partial^\mu\phi-\frac14e^{\lambda\phi}F_{\mu\nu}F^{\mu\nu})\sqrt{-g}d^{d+2}x
\end{equation}
We will use the null coordinates $u$ and $v$, defined by
\begin{equation}
du := dt + \frac1{r^{z+1} f} dr,~~~~
dv := dt - \frac1{r^{z+1} f} dr
\end{equation}
Integrating these relations yields the ``infalling" null coordinate $u
= t + r^*(r)$ and the ``outgoing" null coordinate $v = t - r^*(r)$,
where $r^*(r) := \int \frac1{r^{z+1} f} dr$. The metric becomes
\begin{equation}
ds^2 = L^2[-r^{2z}fdu^2 + 2r^{z-1}dudr + r^2 \sum\limits^{d}_{i=1}dx^{2}_{i}]
\end{equation}
or
\begin{equation}
ds^2 =L^2[-r^{2z}f dv^2 - 2r^{z-1}dvdr + r^2 \sum\limits^{d}_{i=1}dx^{2}_{i}]
\end{equation}
when expressed in terms of the null coordinates. For the three choices
$(t,r)$, $(u,r)$, and $(v,r)$ we have
\begin{equation}
\int \sqrt{-g} d^{d+2} x = \Omega_{d}L^{d+2}\int r^{d+z-1} dr dw,
\end{equation}
where $w = \{ t, u, v \}$.
\begin{figure}
\includegraphics[width=0.55\textwidth]{wdw.pdf}\\
\caption{WdW patch of Lifshitz black hole}
\end{figure}
As shown in Fig.~3, the region $V_1$ is bounded by the null surfaces $u=u_0$, $u=u_0+\delta t$, $v=v_0+\delta t$, and $r=\epsilon$. The volume integral is best performed in the $(u,r)$ coordinate system, in which the surface $v=v_0+\delta t$ is described by $r=\rho_0(u)$, with $r^*(\rho_0)=\frac12(v_0+\delta t-u)$. Making use of the equations of motion, we get
\begin{equation}
\begin{aligned}
R-2\Lambda-\frac12 \partial_\mu\phi\partial^\mu\phi-\frac14 e^{\lambda\phi}F_{\mu\nu}F^{\mu\nu}&=\frac{4\Lambda}{d}-\frac1{2d} e^{\lambda\phi}F_{\mu\nu}F^{\mu\nu} \\
&=\frac{-2(z+d)}{L^2}
\end{aligned}
\end{equation}
with this we have
\begin{equation}
\begin{aligned}
S_{V_1} &= -2(d+z)L^d\Omega_d \int^{u_0+ \delta t}_{u_0} du
\int_{\epsilon}^{{\rho_0}(u)} r^{d+z-1}dr\\
&= -2L^d\Omega_d \int^{u_0+ \delta t}_{u_0} du
[{\rho_0}^{d+z}(u)]
\end{aligned}
\end{equation}
where we neglect the $\epsilon^{d+z}$ term in the integral as $\epsilon \to 0$. The region $V_2$ is bounded by the null surfaces $u=u_0$, $u=u_1$,
$v=v_0$, and $v=v_0+\delta t$. In this case, the volume integral is most easily
performed in the $(v,r)$ coordinates, in which the surfaces $u=u_{0,1}$ are
described by $r=\rho_{0,1}(v)$, with $r^*(\rho_{0,1}) = \frac{1}{2}(v-u_{0,1})$. Then we have
\begin{equation}
\begin{aligned}
S_{V_2} &= -2(d+z)L^d\Omega_d \int^{v_0+ \delta t}_{v_0} dv
\int_{{\rho_1}(v)}^{{\rho_0}(v)} r^{d+z-1}dr\\
&= -2L^d\Omega_d \int^{v_0+ \delta t}_{v_0} dv
[{\rho_0}^{d+z}(v)-{\rho_1}^{d+z}(v)].
\end{aligned}
\end{equation}
Using the change of variables $u = u_0+v_0 + \delta t - v$, we see that the terms involving ${\rho_0}(u)$ and
${\rho}_0(v)$ cancel out. We are left with
\begin{equation}
S_{V_1} - S_{V_2} = -2L^d\Omega_d \int^{v_0+ \delta t}_{v_0} dv
[{\rho_1}^{d+z}(v)].
\end{equation}
With the function $\rho_1(v)$ varying from $r_{B}$ to $r_{B}+O(\delta t)$, the volume contribution to $\delta S$ is simply
\begin{equation}
\begin{aligned}
S_{V_1} - S_{V_2} =-2L^d\Omega_d {r_B}^{d+z}\delta t
\label{dS:volume}
\end{aligned}
\end{equation}
The surface contribution to $\delta S$ is given by
$-2 \int_{S} K\, d\Sigma$, where $S$ is the boundary segment given by the spacelike
hypersurface $r = \epsilon$. The (future-directed) unit normal is given by
$n_\alpha = \frac{L}{r}|f|^{-1/2} \partial_\alpha r$. The extrinsic curvature is then
\begin{equation}
K = \nabla_\alpha n^\alpha = -\frac{1}{L^{d+2} r^{z+d-1}} \frac{d}{dr}
\Bigl(L^{d+1} r^{z+d} |f|^{1/2} \Bigr),
\end{equation}
and the volume element:
\begin{equation}
d\Sigma = \Omega_{d} L^{d+1}|f|^{1/2} r^{z+d} dt
\end{equation}
Letting $r = \epsilon
\ll r_{+}$ and then approximating $f \simeq -(r_{+}/r)^{z+d}$; $K\simeq -\frac{z+d}{2L}(r_{+}/r)^{\frac{z+d}2}$; $d\Sigma \simeq \Omega_{d} L^{d+1}(r_+r)^{\frac{z+d}2} dt$,
we find that
\begin{equation}
-2 \int_{S} K d\Sigma = (z+d) \Omega_{d}L^{d}{r_+}^{z+d} \delta t.
\label{dS:surface}
\end{equation}
It is finite and independent of $\epsilon$.
We then calculate the joint terms at $B$ and $B'$. The null joint rule states that \cite{Lehner:2016vdi}
\begin{equation}
a = \ln\bigl( -{\textstyle \frac{1}{2}} k \cdot \bar{k} \bigr),
\end{equation}
We choose the vectors $k^\alpha$ and $\bar{k}^\alpha$ to be
affinely parametrized and read
\begin{equation}
\begin{aligned}
k_\alpha = -c\partial_\alpha v = -c\partial_\alpha(t - r^*), \qquad
\bar{k}_\alpha= \bar{c}\partial_\alpha u
= \bar{c}\partial_\alpha(t+r^*)\,
\end{aligned}
\end{equation}
where $c$ and $\bar{c}$ are arbitrary (positive) constants. With these
choices, we have $k \cdot \bar{k} = 2c\bar{c}/(f L^2 r^{2z})$, and then
\begin{equation}
a = \ln\bigl[-c\bar{c}/(f L^2 r^{2z})\bigr],
\end{equation}
With the above expression, we find that
\begin{align}
2 \oint_{B'} a\, dS - 2 \oint_{B} a\, dS
= 2\Omega_d \bigl[ h(r_{B'}) - h(r_{B})\bigr],
\end{align}
where $h(r) := r^d L^d \ln[-c\bar{c}/(f L^2 r^{2z})]$.
Then we perform a Taylor expansion of
$h(r)$ about $r = r_B$. Because the displacement is in a direction of
increasing $v$, we have that $du = 0$, $dv = \delta t$, and
$dr = -\frac{1}{2} f r^{z+1}\delta t$. This gives us
\begin{equation}
\begin{aligned}
&h(r_{B'}) - h(r_{B}) = -\frac{1}{2} f r^{z+1}\frac{dh}{dr}\bigg|_{r=r_{B}} \delta t\\
&= -\frac{1}{2} L^d r^{z+d}\biggl[ -r\frac{df}{dr} -2z f+ fd
\ln\biggl(\frac{c\bar{c}}{-fL^2 r^{2z}} \biggr) \biggr]\bigg|_{r=r_{B}} \delta t,
\end{aligned}
\end{equation}
and then
\begin{equation}
\begin{aligned}
&2 \oint_{B'} a dS - 2 \oint_{B} a dS \\
&= -\Omega_{d} L^d r^{z+d}\biggl[ -r\frac{df}{dr} -2z f+ fd
\ln\biggl(\frac{c\bar{c}}{-fL^2 r^{2z}} \biggr) \biggr]\bigg|_{r=r_{B}} \delta t
\end{aligned}
\end{equation}
Making use of the explicit expression for
$f$ and taking the late time limit $r_{B} \to r_{+}$, the log term vanishes, and the result is
\begin{equation}
2 \oint_{B'} a dS - 2 \oint_{B} a dS
=(z+d)L^d\Omega_d {r_{+}}^{d+z}\delta t
\label{dS:joint}
\end{equation}
Combining Eqs.~(\ref{dS:volume}), (\ref{dS:surface}), and
(\ref{dS:joint}), we have
\begin{equation}
\delta S = (2z+2d-2)L^d\Omega_d {r_+}^{d+z}\delta t
\end{equation}
in late time limit.
From Ref.~\cite{Pang:2009ad}, we know the mass of the Lifshitz spacetime; it is
\begin{equation}
M=\frac{r^{z+d}_{+}dL^{d}\Omega_{d}}{16\pi G_{d+2}},
\label{m}
\end{equation}
This mass is the Komar integral with the zero temperature background subtracted to remove the infinite volume divergence \cite{Taylor:2008tg}. So we have
\begin{equation}
\frac{dS}{dt} = 32\pi G_{d+2}\frac{z+d-1}{d}M
\end{equation}
and in a more usual convention,
\begin{equation}
\frac{dI}{dt} = 2\frac{z+d-1}{d}M
\end{equation}
We can see that, when $z=1$,
\begin{equation}
\frac{dI}{dt} = 2M
\end{equation}
which is just the result of Ref.~\cite{Lehner:2016vdi}.
When $z>1$, we have $\frac{dI}{dt} >2M$: the bound of \cite{Brown:2015bva,Brown:2015lvg} is violated even in the late time limit. For example, for $z=2$ and $d=2$ the late time rate is $\frac{dI}{dt}=3M$.
\subsection{General time behavior of the Lifshitz black brane}
In this subsection, we investigate the full time evolution of the action growth and see whether the late time result is approached from below or above. We divide the Wheeler-DeWitt patch into three parts as in Fig.~1. When $t<t_c$, the action is composed of the following three parts.
\begin{equation}
I_{tot}=I_{bulk}+I_{GHY}+I_{joint}
\end{equation}
where $I_{bulk}$ denotes the bulk terms and $I_{GHY}$ the GHY surface terms at $r=\epsilon\to 0$ and $r=r_{max}\to \infty$; the joint terms are the joints formed by the null and spacelike (timelike) surfaces at $r=\epsilon$ ($r=r_{max}$).
For $t<t_c$, we have
\begin{equation}
I_{bulk}^1=-\int^{r_+}_{\epsilon_0}\frac{(z+d)L^d\Omega_d}{8\pi G}(\frac{t}2+r^*(\infty)-r^*(r))r^{z+d-1}dr
\end{equation}
\begin{equation}
I_{bulk}^2=-\int^{r_{max}}_{r_+}\frac{(z+d)L^d\Omega_d}{8\pi G}2(r^*(\infty)-r^*(r))r^{z+d-1}dr
\end{equation}
\begin{equation}
I_{bulk}^3=-\int^{r_+}_{\epsilon_0}\frac{(z+d)L^d\Omega_d}{8\pi G}(-\frac{t}2+r^*(\infty)-r^*(r))r^{z+d-1}dr
\end{equation}
We see a total cancellation, so the bulk term is independent of time.
The GHY terms are calculated as
\begin{equation}
I_{surf}^{future}=\frac{r^{z+d}L^d\Omega_d}{8\pi G}(\frac{t}2+r^*(\infty)-r^*(r))[(z+d)|f|+\frac{r}{2}\partial_{r}|f|]\Big|_{r=\epsilon_0}
\end{equation}
\begin{equation}
I_{surf}^{past}=\frac{r^{z+d}L^d\Omega_d}{8\pi G}(-\frac{t}2+r^*(\infty)-r^*(r))[(z+d)|f|+\frac{r}{2}\partial_{r}|f|]\Big|_{r=\epsilon_0}
\end{equation}
\begin{equation}
I_{surf}^{UV cutoff}=\frac{r^{z+d}L^d\Omega_d}{8\pi G}(2(r^*(\infty)-r^*(r))[(z+d)|f|+\frac{r}{2}\partial_{r}|f|]\Big|_{r=r_{max}}
\end{equation}
It can be seen that the total boundary term is also independent of time.
Joint terms from null boundaries intersecting the past and future singularity surfaces vanish, and joint terms from null boundaries intersecting the UV cutoff are independent of time \cite{Chapman:2016hwi}. So they make no contribution to the complexity growth.
So we conclude that $\frac{dI}{dt}=0$ when $t<t_c$.
When $t>t_c$,
the only difference is that the intersections of the null boundaries with the past singularity surface change to intersections between the two null boundaries.
We calculate in the same way:
\begin{equation}
\label{feww}
I_{bulk}^1=-\int^{r_+}_{\epsilon_0}\frac{(z+d)L^d\Omega_d}{8\pi G}(\frac{t}2+r^*(\infty)-r^*(r))r^{z+d-1}dr
\end{equation}
\begin{equation}
I_{bulk}^2=-\int^{r_{max}}_{r_+}\frac{(z+d)L^d\Omega_d}{8\pi G}2(r^*(\infty)-r^*(r))r^{z+d-1}dr
\end{equation}
\begin{equation}
I_{bulk}^3=-\int^{r_+}_{r_m}\frac{(z+d)L^d\Omega_d}{8\pi G}(-\frac{t}2+r^*(\infty)-r^*(r))r^{z+d-1}dr
\end{equation}
and combining them all we get
\begin{equation}
I_{bulk}-I_{bulk}^0=-\int^{r_m}_0\frac{(z+d)L^d\Omega_d}{4\pi G}(\frac{t}2-r^*(\infty)+r^*(r))r^{z+d-1}dr
\label{delta1}
\end{equation}
where we have included the factor of two coming from the two sides of the WdW patch.
For the GHY boundary terms, in the absence of the past surface at the singularity, the contribution also depends on time:
\begin{equation}
\begin{aligned}
I_{surf}^{past}=\frac{r^{z+d}L^d\Omega_d}{8\pi G}(&-\frac{t}2+r^*(\infty)\\
&-r^*(r))[(z+d)|f|+\frac{r}{2} \partial_{r} |f|]\Big|_{r=\epsilon_0}
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
I_{surf}-I_{surf}^{0}&=-2I_{past} \\
&=-\frac{r^{z+d}L^d\Omega_d}{4\pi G}(-\frac{t}2+r^*(\infty)-r^*(r))\\
&\times[(z+d)|f|+\frac{r}{2}\partial_{r} |f|]\Big|_{r=\epsilon_0}
\label{delta2}
\end{aligned}
\end{equation}
where the factor of two comes from the two sides of the WdW patch.
Joint terms from null boundaries intersecting the past and future singularity surfaces vanish, and the joint term from null boundaries intersecting the UV cutoff is independent of time. So we only need to consider the intersection of the two past null boundaries at $r=r_{m}$, where $r_{m}$ satisfies
\begin{equation}
\frac{t}{2}+r^*(r_m)-r^*(\infty)=0
\end{equation}
For $\delta t=t-t_c$, we can rewrite the above equation as
\begin{equation}
\frac{\delta t}2=r^*(0)-r^*(r_{m})
\end{equation}
Varying it with respect to $t$, we get
\begin{equation}
\frac{dr_m}{dt}=-\frac12 r^{z+1}_m f(r_m)
\label{drdt}
\end{equation}
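Equation (\ref{drdt}) determines $r_{m}(t)$ only implicitly; a minimal numerical sketch (illustrative assumptions: $z=2$, $d=2$, $r_{+}=1$, and a small initial value standing in for $r_{m}\to0$ just after the critical time) integrates it directly:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

z, d, r_plus = 2, 2, 1.0                 # illustrative assumptions
f = lambda r: 1.0 - (r_plus/r)**(z + d)  # f < 0 inside the horizon

def rhs(t, y):
    return [-0.5*y[0]**(z + 1)*f(y[0])]  # dr_m/dt from Eq. (drdt)

sol = solve_ivp(rhs, (0.0, 20.0), [0.05], dense_output=True, rtol=1e-8)
for t in (0.0, 1.0, 5.0, 20.0):
    print("t - t_c =", t, "  r_m =", round(float(sol.sol(t)[0]), 6))
# r_m increases monotonically toward r_+ at late times
\end{verbatim}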
We choose the normal vectors of null boundaries to be
\begin{equation}
k_L=\alpha(dt+\frac{dr}{r^{z+1}f})
\end{equation}
\begin{equation}
k_R=-\alpha(dt-\frac{dr}{r^{z+1}f})
\end{equation}
The joint term is
\begin{equation}
\begin{aligned}
I_{joint}=\frac1{8 \pi G}\oint a ds&=\frac1{8 \pi G}\oint\log(-\frac12{k_L}\cdot{k_R})ds\\
&=\frac{L^d\Omega_d r^d}{8\pi G}\log(\frac{-\alpha^2}{L^2 r^{2z} f})|_{r=r_m}
\label{delta3}
\end{aligned}
\end{equation}
Combining equations ($\ref{delta1}$), ($\ref{delta2}$), ($\ref{drdt}$) and ($\ref{delta3}$), we get
\begin{equation}
\begin{split}
\frac{dI}{dt}&=\frac{dI_{bulk}}{dt}+\frac{dI_{surf}}{dt}+\frac{dI_{joint}}{dt}\\
&=-\frac{L^d\Omega_d}{8\pi G}r_m^{z+d}+\frac{(z+d)r_+^{z+d}L^d\Omega_d}{16\pi G}\\
&+\frac{L^d\Omega_d}{8\pi G}[-\frac{d}2 f(r_m) r_m^{z+d}\log(\frac{-\alpha^2}{L^2 r_m^{2z} f(r_m)})+z r_m^{d+z}+\frac{d-z}2 r_+^{z+d}]\\
&=\frac{L^d\Omega_d}{8\pi G}r_m^{z+d} (z-1)+\frac{d r_+^{z+d}L^d\Omega_d}{8\pi G}\\
&-\frac{d L^d\Omega_d}{16\pi G}[ f(r_m) r_m^{z+d}\log(\frac{-\alpha^2}{L^2 r_m^{2z} f(r_m)})]
\label{final}
\end{split}
\end{equation}
In the late time limit, $r_m\rightarrow r_+$ and $f(r_m)\rightarrow 0$, so the last term of (\ref{final}) vanishes. The complexity growth rate then simplifies to
\begin{equation}
\frac{dI}{dt}=\frac{(d+z-1) r_+^{z+d}L^d\Omega_d}{8\pi G}
\end{equation}
which recovers the late time result.
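For reference, the curve of Fig.~4 can be approximated by combining (\ref{drdt}) with (\ref{final}); the sketch below uses assumed illustrative values $z=2$, $d=2$, $r_{+}=L=1$, $\alpha=Lr_{+}^{z}$, and units in which $\frac{L^{d}\Omega_{d}}{8\pi G}=1$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

z, d, r_plus, L = 2, 2, 1.0, 1.0        # illustrative assumptions
alpha = L * r_plus**z                   # normalization as in Fig. 4
f = lambda r: 1.0 - (r_plus/r)**(z + d)

def dIdt(rm):                           # Eq. (final), L^d Omega_d/(8 pi G) = 1
    logterm = np.log(alpha**2 / (-L**2 * rm**(2*z) * f(rm)))   # f(rm) < 0
    return (rm**(z + d)*(z - 1) + d*r_plus**(z + d)
            - 0.5*d*f(rm)*rm**(z + d)*logterm)

sol = solve_ivp(lambda t, y: [-0.5*y[0]**(z + 1)*f(y[0])],
                (0.0, 20.0), [0.05], dense_output=True, rtol=1e-8)
late = (z + d - 1)*r_plus**(z + d)      # late-time value in these units
for t in (0.5, 1.0, 2.0, 5.0, 20.0):
    rm = sol.sol(t)[0]
    print("t - t_c =", t, "  dI/dt =", round(float(dIdt(rm)), 4),
          "  late-time value =", late)
\end{verbatim}
The printed values should approach the late time value $(z+d-1)r_{+}^{z+d}$ from above, matching Fig.~4.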
To see whether the limit is approached from above or below, we plot the relation between the complexity growth rate and time in four dimensions for various $z$ in Fig.~4.
\begin{figure}
\includegraphics[width=0.55\textwidth]{cgrowth2.pdf}\\
\caption{The relation between the complexity growth rate and boundary time. We choose the normalization factor $\alpha=L r_{+}^{z}$; the green/blue/red lines correspond respectively to $z=3$/$z=2$/$z=1$. We find the rate approaches the late time bound from above, and for $z$ sufficiently large the complexity growth rate experiences a decreasing period at early times.}
\label{fig:complexity growth rate in different z}
\end{figure}
\section{Conclusion and discussion}
\label{Con}
In this paper, we investigate the effect of a dilaton coupled to the Maxwell field on the holographic complexity growth. In Section 2, we investigate the black hole solution which is asymptotically AdS. In this case we find the presence of dilaton charge can slow down the computation rate, and when we take into account the full time evolution, we find the complexity growth rate approaches the late time bound from above, as in Ref.~\cite{Carmi:2017jqz}. In Section 3, we investigate the case which is asymptotically Lifshitz; here the dilaton is essential to support the anisotropic vacuum structure. We find that in this case the Lloyd bound is violated even in the late time limit. We investigate the full time evolution of the holographic complexity growth: it approaches the late time bound from above; moreover, for $z$ sufficiently large, it exhibits an interesting behavior at times above the critical time but much earlier than late time.
Although the above two cases are both solutions of EMD theory, the roles the dilaton plays are very different. In the AdS dilaton example, the dilaton charge and electric charge are free parameters and appear in the black hole solution. In the Lifshitz case, the dilaton and Maxwell field are added to maintain the anisotropic scaling background; they do not contribute extra parameters to the black hole solution, which means that the black hole has only one free parameter, $M$. Holographically, the eternal AdS black hole is dual to the thermofield double state $| TFD \rangle$; in the Schwarzschild case,
\begin{equation}
| TFD \rangle =\sum_{n} e^{-\beta E_{n}} | n_{L} \rangle |n_{R} \rangle
\end{equation}
In the RN case,
\begin{equation}
| TFD \rangle =\sum_{n} e^{-\beta (E_{n}-\mu Q)} | n_{L} \rangle |n_{R} \rangle
\end{equation}
What appears in the exponential depends on the thermodynamic relation. In the Lifshitz case, the charge is a fixed constant, so the first law of thermodynamics is just $dM=T dS$. From the boundary perspective, there is therefore no chemical potential term in the thermofield double state. In other words, Ref.~\cite{Brown:2015lvg} gives the complexity growth inequality
\begin{equation}
\frac{dC}{dt} \leqslant \int_{gs}^{S} T dS
\end{equation}
so from the thermodynamical relation $dM=T dS$, we conclude that the complexity growth rate will depend only on the mass, despite the presence of matter fields, in our Lifshitz case.
Ref.~\cite{Yang:2016awy} proves that the action growth obeys the bound $2M$ under the following two conditions: (1) the matter field is located outside the outermost Killing horizon; (2) the strong energy condition is obeyed. In our two cases, the matter field extends inside the Killing horizon, so it does not satisfy the condition required in \cite{Yang:2016awy}. In the AdS dilaton case the bound $2M$ is satisfied, while in the Lifshitz case it is violated. So the requirements in \cite{Yang:2016awy} are too restrictive; our calculation shows that a weaker condition is needed to prove the late time bound from the bulk side and to distinguish these two cases.
A recent study \cite{Swingle:2017zcd} investigates the action growth of hyperscaling-violating backgrounds in EMD theory. They found that it depends on two parameters, the dynamical exponent $z$ and the hyperscaling violation exponent $\theta$. They showed that when $\theta=0$ their results match our Lifshitz result.
There are several interesting directions to pursue. The reason for the violation of the Lloyd bound is still unclear. The EMD theory is not expected to be a UV complete theory of quantum gravity, so it would be interesting to take into account stringy effects near the singularity and recalculate the action growth. Moreover, for the full time evolution of complexity growth in the Lifshitz black brane with $z$ sufficiently large, such as $z=3$, the complexity growth rate exhibits new behaviors not found in \cite{Carmi:2017jqz}; it is interesting to investigate what this means physically when $z$ becomes large.\footnote{Recently this problem was investigated in Ref.~\cite{Alishahiha:2018tep} by adding a counterterm to remove the normalization ambiguity in the joint term. There is no strong motivation to add this term unless the field theory suggests we do so. However, as Ref.~\cite{Jefferson:2017sdb} suggests, this normalization ambiguity is somehow related to the choice of reference state in the field theory construction of complexity.} Although the CA conjecture proposed in \cite{Brown:2015bva,Brown:2015lvg} passes many nontrivial tests, including the switchback effect \cite{Stanford:2014jda}, it is still unproved. Recent studies \cite{Carmi:2017jqz} imply that although the CA conjecture captures some essence of complexity, it needs to be revised. Our result on the Lifshitz black hole also gives evidence that a revision of CA duality is needed.
Apart from CA duality, there are also other proposals that relate complexity to other bulk quantities, such as \cite{Susskind:2014rva,Couch:2016exn,Alishahiha:2015rta,Momeni:2016ira,Momeni:2016ekm}; it is interesting to calculate the complexity growth rate in these proposals and investigate whether our results still hold. We discuss the thermodynamic volume proposal \cite{Couch:2016exn} in the following.
Ref.~\cite{Couch:2016exn} relates the complexity on the boundary to the spacetime volume of the WdW patch and gives the complexity-volume duality 2.0 proposal:
\begin{equation}
C \sim \frac{1}{\hbar} \, P \, (\text{spacetime volume})
\end{equation}
In our Lifshitz case, because of the following relation
\begin{equation}
\begin{aligned}
R-2\Lambda-\frac12 \partial_\mu\phi\partial^\mu\phi-\frac14 e^{\lambda\phi}F_{\mu\nu}F^{\mu\nu}=\frac{-2(z+d)}{L^2}
\end{aligned}
\end{equation}
the bulk action contribution is proportional to the spacetime volume
\begin{equation}
I_{bulk}=-\frac{z+d}{8 \pi G L^{2}} \int d^{d+2} x \sqrt{-g}
\end{equation}
so there is no essential difference between the volume and the bulk action of the WdW patch.
We now calculate the complexity growth rate using the CV 2.0 proposal explicitly.
In the extended phase space, the pressure is identified with the cosmological constant:
\begin{equation}
P=-\frac{\Lambda}{8\pi G}=\frac{(z+d)(z+d-1)}{16 \pi G L^{2}}
\end{equation}
The spacetime volume inside the WdW patch is calculated as in Eq.~(\ref{feww}). When $t>t_{c}$, the time dependence of the volume is
\begin{equation}
\frac{dV}{dt}=\frac{L^{d+2}\Omega_{d} r_{m}^{z+d}}{z+d}
\end{equation}
After taking the late time limit ($r_{m} \to r_{+}$), the complexity growth rate is
\begin{equation}
\label{dcdt}
\frac{dC}{dt}=\frac{(z+d-1)M}{d}
\end{equation}
We see that in the complexity-volume 2.0 proposal the result is half of that of the CA proposal, so the Lloyd bound $2M$ is satisfied when $z$ is not too large. But as in Ref.~\cite{Couch:2016exn}, the Lloyd bound can be altered up to a pre-factor
\begin{equation}
\frac{dC}{dt} \leqslant \frac{\alpha E}{ \pi \hbar}
\end{equation}
Ref.~\cite{Couch:2016exn} shows that in the AdS-RN case this bound is satisfied for $\alpha=1$. In the CV 2.0 proposal, if we set $\alpha=1$ in the Lloyd bound, the bound is violated in the Lifshitz case at late time, because of the anisotropic scaling $z$, as in the CA proposal.
Moreover, as conjectured in Ref.~\cite{Couch:2016exn}, in various cases the late time value of the time derivative of the spacetime volume of the WdW patch reduces to the thermodynamic volume (or the difference between two thermodynamic volumes in the case of two horizons). In the Lifshitz case, this late time limit of the spacetime volume derivative reads
\begin{equation}
V=\frac{L^{d+2}\Omega_{d} r_{+}^{z+d}}{z+d}
\end{equation}
When $z=1$ in four dimensions, it reduces to the familiar thermodynamic volume of the Schwarzschild case, $\frac{4 \pi r^{3}_{+}}{3}$. When $z \neq 1$, because the profile of the matter field depends nontrivially on the cosmological constant, it is hard to use the Iyer-Wald formalism \cite{Iyer:1994ys} to derive an explicit expression for the thermodynamic volume. In further work, it will be interesting to derive the thermodynamic volume in the Lifshitz case and see whether it agrees with our late time limit result. That would provide convincing evidence for the conjecture in Ref.~\cite{Couch:2016exn} that the thermodynamic volume is essential for complexity growth.
Although there have been many discussions of complexity on the holographic side, a concrete definition of complexity in quantum field theory is still lacking. Recently, there have been many works concerning this question. \cite{Jefferson:2017sdb,Chapman:2017rqy} propose definitions of complexity on the field theory side using Finsler geometry and the Fubini-Study metric. \cite{Caputa:2017yrh} discretizes the Euclidean path integral and defines the ``path integral complexity'' in terms of the Liouville action. For concrete calculations on the field theory side, see also \cite{Kim:2017qrq,Yang:2017nfn, Hashimoto:2017fga}.
Once a concrete and calculable definition of complexity is given, the problem of the CA duality conjecture will be clarified to a large extent. Moreover, apart from the problems of the CA conjecture, the assumptions made in the derivation of the Lloyd bound should also be checked \cite{Cottrell:2017ayj}.
\begin{acknowledgments}
We want to thank Prof. Ronggen Cai for useful advice and encouragement during this work. We also want to thank Run-Qiu Yang and Shao-Jiang Wang for useful help and discussions.
\end{acknowledgments}
\bibliographystyle{utphys}
\section{Settings and main results.}
Let $U: \mathbb R^{d}\rightarrow \mathbb R$ be a smooth function such that $U\geq 1$ and $\int e^{-U(x)}dx$ is finite. $U$ will represent the confinement potential for the Hamiltonian $H(x,y)=U(x)+\frac{1}{2}|y|^2$ defined on $\mathbb R^{2d}$. The associated Boltzmann-Gibbs (probability) measure is given by
$$
\text{d}\mu = \frac{1}{Z} \; e^{-H(x,y)}\text{d} x \text{d} y
$$
where $Z$ is the normalizing constant $\int e^{-H(x,y)}\text{d} x \text{d} y$.\\ The Langevin dynamics associated to this measure is a flow of probability measures $\text{d} \mu_t = f_t \, \text{d}\mu$ for $t\geq 0$, where $f_t$ solves (at least in a weak sense) the Langevin equation $$\partial_t f_t = L f_t \, ,$$ $L$ being given by
\begin{eqnarray}\label{EqGeneLangevin}
L &=& -y.\nabla_x +\left( \nabla U(x)-y\right).\nabla_y + \Delta_y \; .
\end{eqnarray}
We are thus interested in solutions belonging to $\mathbb L^1(\mu)$. Of course, the hypoelliptic regularity theorem ensures that $(t,x,y) \mapsto f_t(x,y)$ is smooth on $\mathbb R_+^*\times \mathbb R^{2d}$, whatever the regularity of $f_0$. It is then easy to see that mass and positivity are preserved so that if $f_0 \text{d} \mu$ is a probability measure so is $f_t \text{d} \mu$ for any $t\geq 0$. \\ The corresponding stochastic process is given by the S.D.E. $$
\left\{\begin{array}{l}
dx_t=y_tdt\\
dy_t=-y_tdt-\nabla U(x_t)dt+\sqrt{2}dW_t
\end{array}\right.
$$ where $(W_t)$ is a standard $d$-dimensional Wiener process. The infinitesimal generator of the process is thus $L^*= y.\nabla_x -\left( \nabla U(x)+y\right).\nabla_y + \Delta_y$. The law $\mu$ is the unique invariant (but not reversible) probability measure for the process, and $\text{d}\mu_t=f_t \text{d} \mu$ is the distribution of the process at time $t$. One can also write down the P.D.E. satisfied by $\mu_t$ (or its density w.r.t. the Lebesgue measure), which is usually called the kinetic Fokker-Planck equation. We denote by $P_t=e^{tL}$ the semi-group on $\mathbb L^1(\mu)$ with generator $(L,D(L))$, i.e. $f_t=P_tf_0$.
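For intuition, the dynamics can be simulated directly; the following minimal sketch is illustrative only (the potential $U(x)=(1+x^{2})^{2}$, the step size and the seed are assumptions, not taken from the text) and runs an Euler-Maruyama discretization of the S.D.E. above in dimension $d=1$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
gradU = lambda x: 4.0*x*(1.0 + x**2)     # U(x) = (1 + x^2)^2, so U >= 1

dt, n_steps = 1e-3, 200000
x, y = 0.0, 0.0
xs = np.empty(n_steps)
for k in range(n_steps):
    x += y*dt                            # dx = y dt
    y += -(y + gradU(x))*dt + np.sqrt(2.0*dt)*rng.standard_normal()
    xs[k] = x                            # dy = -(y + grad U) dt + sqrt(2) dW

# the long-time histogram of x should approximate e^{-U(x)}/Z
print("empirical E[x^2] over the second half:", np.mean(xs[n_steps//2:]**2))
\end{verbatim}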
\medskip
We are interested in the long time behavior of the Langevin diffusion. The usual ergodic theorem tells us that $\frac 1t \, \int_0^t \, \mu_s \, \text{d} s$ weakly converges to $\mu$ as $t$ grows to infinity. One can thus ask for the convergence of $f_t$ towards $1$ as $t$ goes to infinity. \\ This question has been investigated by many authors in recent years, both in the PDE community and in the probability community. One of the main differences is of course the way to look at this convergence: total variation distance, $\mathbb L^2(\mu)$ norm, $\mathbb H^1(\mu)$ semi-norm, relative entropy, Wasserstein distance. Another associated problem is to get some bounds on the rate of convergence, once convergence holds true. Let us review some results in this direction.
\medskip
More or less at the same time, both probabilists and PDE specialists have considered the problem of the speed of convergence to equilibrium. Talay \cite{Tal02} and Wu \cite{Wu01} built Lyapunov functions and, using the Meyn-Tweedie approach, established (non quantitative) exponential convergence to equilibrium (see also \cite{BCG08} for this approach for kinetic models) under quite general assumptions. Desvillettes and Villani \cite{DV01} used heavy Fourier machinery to establish sub-exponential entropic convergence. Then H\'erau and Nier \cite{HN04} carried out the spectral analysis of this equation and thus obtained an $\mathbb L^2$ exponential decay with quite sharp constants under general conditions. This settled the bases for Villani's theory of hypocoercivity \cite{Villani} for the $\mathbb L^2$ and the entropic convergence to equilibrium, when $\text{Hess}(U)$ is bounded in the entropic case; see also \cite{MMS15} for a version without regularity issues. Finally, and quite recently, coupling approaches, using synchronous coupling or coupling by reflection (see \cite{BGM10} or \cite{EGZ16,EGZ17}), have established exponential convergence to equilibrium in Wasserstein distance with sharp constants, once again when $\text{Hess}(U)$ is bounded.
\bigskip
\noindent As we will adopt the terminology and adapt the methodology of hypocoercivity as in Villani \cite{Villani}, let us describe a little bit further the formalism of this setting. Recall that the variance of a squared integrable function $g$ with respect to $\mu$ is defined by $$\text{Var}_\mu(g):=\int g^2\text{d}\mu-\left(\int g\text{d}\mu\right)^2=\int \left(g-\int g\text{d}\mu\right)^2\text{d}\mu$$ while the entropy is defined for positive functions by $$\text{Ent}_\mu(f):=\int f\ln f\text{d}\mu-\int f\text{d}\mu \ln \int f\text{d}\mu \; .$$ The law $\mu$ is said to satisfy a Poincar\'{e} inequality if there exists a positive constant $C_P$ such that for all smooth functions $g$
\[
\text{Var}_\mu(g) \leq C_P \int |\nabla g|^2 \text{d}\mu \, .
\]
Similarly, $\mu$ satisfies a logarithmic Sobolev (or log-Sobolev in short) inequality if there exists a constant $C_{LS}$ such that for all smooth functions $g$,
\[
\text{Ent}_\mu(g^2) \leq C_{LS} \int |\nabla g|^2 \text{d}\mu \, .
\]
The natural $\mathbb H^1_\mu$ semi-norm is defined as $||g||_{H^1_\mu}:=||\nabla g||_{\mathbb L^2_\mu}$. Exponential convergence of $P_tf_0$ to $1$ in $\mathbb H^1_\mu$ and variance was proved by Villani \cite{Villani} under two conditions:
\begin{enumerate}
\item[(1-var)]
\quad $ |\nabla^2 U|\leq c \; (1+|\nabla U|)$;
\item[(2-var)] \quad $e^{-U(x)}\text{d} x$ satisfies a Poincar\'{e} inequality.
\end{enumerate}
Remark that (2-var) is equivalent to the fact that $\mu$ satisfies a Poincar\'e inequality, thanks to the tensorization property of the latter, since the Gaussian measure satisfies a Poincar\'e inequality.
\medskip
\noindent For convergence in entropy, the assumptions made by Villani are much stronger:
\begin{enumerate}
\item[(1-ent)] \quad $\nabla^2 U$ is bounded; \item[(2-ent)] \quad $e^{-U(x)} \, \text{d} x$ satisfies a log-Sobolev inequality.
\end{enumerate}
Again, (2-ent) is equivalent to the fact that $\mu$ satisfies a log-Sobolev inequality, thanks to a similar tensorization argument.\\ When both these assumptions are satisfied, Villani showed that, for any initial probability density $f_0$ with finite moments of order 2, the entropy of $P_tf_0$ converges to $0$ exponentially fast (see Villani \cite{Villani} Theorem 39).
\medskip
\noindent Our main goal in this paper is to get rid of the boundedness assumption (1-ent) for $\nabla^2 U$, replacing it by
\begin{hyp}\label{HypB}
there exists $\eta \geq 0$ such that $U^{-2\eta}\nabla^2 U$ is bounded.
\end{hyp}
\noindent A typical situation where Assumption \ref{HypB} is satisfied is when both $U$ and $\nabla^2 U$ have polynomial growth at infinity, i.e. $U(x)\geq c_1 \, (1+|x|)^l$ and $|\nabla^2U|\leq c_2 \, (1+|x|)^j$ so that we may choose $\eta \geq \frac{j}{2l}$. In particular if $j=l-2\geq 0$ as it is the case for true polynomials of degree at least 2, we may choose $\eta = \frac12-\frac{1}{l}$.
\medskip
\noindent The counterpart is that we have to reinforce (2-ent) replacing it by the stronger
\begin{hyp}\label{HypU}
$\mu$ satisfies the following weighted log-Sobolev inequality: there exists $\rho>0$ s.t. for all smooth enough $g$ with $\int g^2 \, \text{d} \mu=1$:
\begin{equation}\label{eqlspoids}
\mbox{\rm Ent}_\mu(g^2) \leq \rho \int (H^{-2\eta}|\nabla_x g|^2 + |\nabla_y g|^2)\text{d} \mu.
\end{equation}
\end{hyp}
\noindent Once both Assumptions \ref{HypB} and \ref{HypU} are satisfied, we can prove exponential decay in entropy for the Langevin diffusion. Our approach is based on the multiplier method. More precisely, we will prove the following:
\begin{thm}\label{ThmHypocoPoids}
Under Assumptions \ref{HypB} and \ref{HypU}, let
\begin{eqnarray*}
\lambda & = & \left( \| H^{-2\eta}\nabla^2 U\|_{\infty} + 2\right)^2\, ,\\
\kappa & = & \frac1{1300\left( \eta+d\right)^4 } \, .
\end{eqnarray*}
Then for all initial probability density $f$,
$$
\mbox{\rm Ent}_\mu(P_t f) \leq \exp\left({-\frac{\kappa}{1+4\lambda\rho}\int_0^t (1-e^{-s})^2\text{d} s}\right) \mbox{\rm Ent}_\mu(f) \, .
$$
\end{thm}
\noindent Section 2 is devoted to the proof of this theorem, which contains Villani's result in the case $\eta=0$. The key idea is to use a twisted gradient depending on time, see Lemma \ref{LemGammaPoids}. An important aspect of our result is that the bounded Hessian condition in Villani's approach is relaxed to Assumption \ref{HypB}. This was in fact a major issue raised by Villani \cite{Villani} concerning the entropic convergence. Indeed, his $L^2$ multiplier method, at the basis of the entropic hypocoercivity, does not rely on a Poincar\'e inequality but on a Brascamp-Lieb inequality. It was thus thought that for the multiplier method to work at the level of the entropy, an entropic Brascamp-Lieb inequality was needed. However Bobkov-Ledoux \cite{BL00} proved that this inequality is false in general, and true only in very particular settings. Our strategy is then to show that what is needed is not an entropic Brascamp-Lieb inequality but a particular weighted logarithmic Sobolev inequality. Note also that a first attempt to remove the boundedness assumption for the Hessian is contained in \cite{BCG08} Theorem 6.10, but the statement therein is much weaker than the one of the present theorem and, most importantly, not at all quantitative.
\medskip
Next we shall show that, similarly to the non-weighted case studied in \cite{CG16} (see also \cite{BBCG,CGWW09}), the weighted log-Sobolev inequality in Assumption \ref{HypU} is equivalent to some Lyapunov type condition. \\ To this end we introduce the natural second order operator $$L_{\eta}:=H^{-2\eta} \Delta_x + \Delta_y - H^{-2\eta}\left(2\eta\frac{\nabla_x H}{H}+\nabla_x H\right).\nabla_x - \nabla_y H.\nabla_y \, ,$$ which is symmetric in $\mathbb L^2_\mu$ and satisfies
\begin{equation}\label{eqIPP}
\int \, f \, L_\eta g \, \text{d} \mu = - \, \int \, (H^{-2\eta} \, \nabla_x f.\nabla_x g+ \nabla_y f.\nabla_y g) \, \text{d} \mu \, .
\end{equation}
\begin{thm}\label{Thm-LyapunovCondition}
Assume that $U$ goes to infinity at infinity, that $|\nabla H|\ge h>0$ outside some large ball. Denote $A_r:=\{(x,y): H(x,y)\leq r\}$, and
\[
\theta(r)=\sup\limits_{z\in \partial A_r} \max\limits_{i,j=1,...,2d} |\frac{\partial^2 H}{\partial z_i\partial z_j} |
\]
Assume that $\theta(r)\leq ce^{C_0r}$ with some positive constants $C_0$ and $c$ for $r$ sufficiently large. Assume that there exists a Lyapunov function $W$ with $W(x,y)\ge w>0$ for all $(x,y)$ and some $\lambda,b>0$ satisfying
$$L_{\eta} W(x,y)\le -\lambda H(x,y)\, W(x,y)+b\, . $$
Then $\mu$ verifies a weighted logarithmic Sobolev inequality \eqref{eqlspoids}.
\end{thm}
Remark that the condition $\theta(r)\le ce^{{C_0}r}$ is trivially verified when both $U$ and $\mbox{\rm Hess}(U)$ have polynomial growth. Also, a Lyapunov function exists if $U$ satisfies the conditions in the following corollary:
\begin{cor}\label{Cor-Lyapunov}Assume that the following conditions hold outside a compact domain
\begin{enumerate}
\item $\Delta_x U\leq \kappa |\nabla_x U|^2$ for some $\kappa\in (0,1)$;
\item a growth condition: $|\nabla_x U|^2 \geq c U^{2\eta+1}$ for some positive constant $c$.
\end{enumerate}
Then $d\mu=\frac{1}{Z}e^{-H(x,y)}dxdy$ satisfies a weighted logarithmic Sobolev inequality.
Moreover, if we assume that $U^{-2\eta}\nabla^2 U$ is bounded, then we may apply Theorem \ref{ThmHypocoPoids}.
\end{cor}
Section 2 presents the proof of Theorem \ref{ThmHypocoPoids}, where the entropic multiplier method is introduced. Section 3 is devoted to the treatment of the weighted log-Sobolev inequality via Lyapunov conditions, i.e. to the proofs of Theorem \ref{Thm-LyapunovCondition} and Corollary \ref{Cor-Lyapunov}.
The final section discusses some additional points on weighted inequalities. Indeed, the proof of the weighted Poincar\'e inequality used by Villani relies solely on a Poincar\'e inequality for each marginal and adapts the usual tensorization argument, relying heavily on the orthogonality inherited from the $\mathbb L^2_\mu$ structure. In the entropic case however, starting from a log-Sobolev inequality for each marginal, we are only able to recover a weaker inequality for the product measure.
\bigskip
\section{Proof of Theorem \ref{ThmHypocoPoids}.}
This section is devoted to the proof of Theorem \ref{ThmHypocoPoids}. Actually we will prove a more general statement. Consider an admissible function $\Psi$, that is, $\Psi\in C^4$ with $\frac{1}{\Psi''}$ positive and concave, as in \cite{Mon15b}. \\ Theorem \ref{ThmHypocoPoids} corresponds to $$\Psi: \mathbb R^+\rightarrow \mathbb R, u \mapsto u\ln u+1-u \, ,$$ while the $\mathbb L^2_\mu$ case corresponds to $\Psi(u)=(u-1)^2$. We also define $\psi= \Psi''$.
We only consider the case where $f_0$ is bounded away from zero. Indeed, if it is not the case, writing $g_0 = (1-\delta) f_0 + \delta$ for some $\delta>0$, then we may prove Theorem \ref{ThmHypocoPoids} for $g_t = (1-\delta) f_t + \delta$ and let $\delta$ go to zero to recover the result for $f_t$.
\medskip
In this general framework we replace the weighted log-Sobolev inequality in Assumption \ref{HypU} by the following, satisfied for any bounded density of probability $f$,
\begin{equation}\label{assump2}
\int \, \Psi(f) \, \text{d} \mu \leq \rho \, \int \, \psi(f) \, \left(H^{-2\eta}|\nabla_x f|^2 + |\nabla_y f|^2 \right)\text{d} \mu.
\end{equation}
We shall obtain the analogue of Theorem \ref{ThmHypocoPoids}, replacing the entropy by $\int \Psi(f) \text{d} \mu$: if \eqref{assump2} and Assumption \ref{HypB} are satisfied, then for all initial probability density $f$,
\begin{equation}\label{convpsi}
\int \Psi(P_t f)\,\text{d} \mu \leq \exp\left({-\frac{\kappa}{1+4\lambda\rho}\int_0^t (1-e^{-s})^2\text{d} s}\right) \int \Psi(f)\,\text{d} \mu \, .
\end{equation}
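\noindent Both cases of interest are indeed admissible: for $\Psi(u)=u\ln u+1-u$ one has $\psi(u)=\Psi''(u)=1/u$, so that $1/\Psi''(u)=u$ is positive and concave, while for $\Psi(u)=(u-1)^2$, $1/\Psi''\equiv 1/2$ is a positive constant, hence trivially concave.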
\bigskip
\noindent The key point of the proof is to introduce a time {and space}-dependent twisted gradient. Consider $r\in\mathbb N$ and for $0 \leq i \leq r$, $x\mapsto b_i(x)\in\mathbb R^d$ a smooth vector field, $C_i = b_i.\nabla$, $Cf=(C_0 f,\dots,C_r f)$,
$t,x \mapsto M_t(x)$ a smooth function from $\mathbb R_+\times\mathbb R^{d}$ to $\mathcal M_{r\times r}^{sym+}(\mathbb R)$ the set of positive semi-definite symmetric real matrices of size $r$, and
\begin{eqnarray*}
F(t) &=& \int \psi(P_t f) \left( C P_t f\right)^T M_t C P_t f \text{d} \mu
\end{eqnarray*}
where $A^T$ stands for the transpose of the matrix $A$ and vectors are seen as 1-column matrices. The following result holds for any diffusion operator:
\begin{lem}\label{LemGammaPoids}
Let $L=L_s+L_a$, where $L_s=\frac12(L+L^*)$ and $L_a=\frac12(L-L^*)$ stand for the symmetric and antisymmetric part of $L$ in $\mathbb L^2_\mu$. Then
\begin{eqnarray*}
F'(t) &\leq& \int \psi(P_t f) \left( C P_t f\right)^T \Big(2 M_t \left[ C,L\right] + \left( (2L_s-L)M_t+\partial_t M_t\right) C \Big) P_t f \text{d} \mu.
\end{eqnarray*}
where $\left[ C_i,L\right] = C_i L-L C_i $ is the (generalized) Lie bracket of $C_i$ and $L$ and $\left[ C,L \right] = \left( \left[ C_0,L\right] ,\dots,\left[ C_r,L\right] \right)$.
\end{lem}
\begin{proof}
In the following we write $f$ for $P_t f$ and $M_t(x)=(m_{i,j}(t,x))_{0\leq i,j\leq r}$. First it holds
\begin{eqnarray*}
\partial_t\left( \int \psi(f) m_{i,j} C_i f C_j f \text{d} \mu \right) &=& \int \psi(f) \partial_t (m_{i,j}) C_i fC_j f + m_{i,j} \partial_t\left( \psi(f) C_i fC_j f \right) \text{d} \mu.
\end{eqnarray*}
This derivation is justified by the fact that $f_0$ is uniformly strictly positive and so is $f_t$ by hypoellipticity, together with the control of the growth of the derivatives of $f_t$, using Villani \cite[Sect. A.21]{Villani} or \cite{GW12}. Denote as usual by $\Gamma$ the carr\'e du champ operator, $2\, \Gamma(g,h)= L(gh)-gLh-hLg$.
Next, $\mu$ being invariant for $L$, and using the diffusion property, i.e. that the chain rule property $L\Phi(f_1,...,f_d)=\sum_1^d\partial_i\Phi(f)Lf_i+\sum_{i,j}\partial_{i,j}\Phi(f)\Gamma(f_i,f_j)$ holds for all nice $\Phi$ and $f$,
\begin{eqnarray*}
0 &=& \int L\left( m_{i,j} \psi(f) C_i fC_j f \right) \text{d} \mu\\
&=& \int L\left( m_{i,j} \right) \psi(f) C_i fC_j f \text{d} \mu + \int m_{i,j} L\left( \psi(f)C_i fC_j f \right) \text{d} \mu\\
& & + 2\int \Gamma\left( m_{i,j} , \psi(f) C_i fC_j f \right) \text{d} \mu\\
&=& \int (L-2L_s)\left( m_{i,j} \right) \psi(f) C_i fC_j f \text{d} \mu + \int m_{i,j} L\left( \psi(f) C_i fC_j f \right) \text{d} \mu \, .
\end{eqnarray*}
The case where $M$ is constant (and symmetric semi-definite positive) is already treated in \cite[Lemma 8]{Mon15b} where it is shown that
\begin{eqnarray*}
\sum_{i,j} m_{i,j} \Big(L\left( \psi(f) C_i fC_j f \right) - \partial_t\left( \psi(f) C_i fC_j f \right) \Big)&\geq& 2 \psi(f) \sum_{i,j} m_{i,j} \left( C_i f\right) \left[ L,C_j\right] f \, .
\end{eqnarray*}
The proof follows by taking the integral of both sides.
\end{proof}
\begin{proof}[Proof of Theorem \ref{ThmHypocoPoids}]
Now consider the case of the Langevin diffusion, namely $L$ is given by \eqref{EqGeneLangevin}. Note that
\[\left[ L,\nabla_y \right] = \nabla_x + \nabla_y\hspace{40pt} \left[ L,\nabla_x \right] = -\nabla^2 U(x).\nabla_y. \]
The operator $L$ is decomposed as $L=L_s + L_a$ where
\[L_s = - y.\nabla_y + \Delta_y\hspace{40pt}L_a =- y.\nabla_x + \nabla U(x) .\nabla_y .\]
Recalling that $H(x,y) = U(x) + \frac12 |y|^2$, we have $L_a H=0$ and more generally $L_a (g \circ H)=0$ for any smooth $g:\mathbb R\rightarrow \mathbb R$. In particular for $\eta >0$,
\begin{eqnarray*}
(2L_s-L)\left( H^{-\eta}\right)& =& L_s\left( H^{-\eta}\right)\\
& =& \eta(|y|^2-d)H^{-\eta-1} + \eta(\eta+1)|y|^2H^{-\eta-2} \\
& \leq & \eta(2\eta+d+4) H^{-\eta}.
\end{eqnarray*}
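\noindent This computation only uses $\nabla_y H = y$ together with $H\geq 1$ and $|y|^2\leq 2H$: indeed,
$$\nabla_y\left(H^{-\eta}\right)=-\eta\, H^{-\eta-1}\,y\,,\qquad \Delta_y\left(H^{-\eta}\right)=-\eta d\, H^{-\eta-1}+\eta(\eta+1)\,|y|^2\,H^{-\eta-2}\,,$$
and the final bound follows from $|y|^2 H^{-1}\leq 2$ and $H^{-1}\leq 1$.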
Let $a,b,c$ depend on $t$ and $H(x,y)$, and let $M=\begin{pmatrix}
a & b \\ b & c
\end{pmatrix}$ and $C=\nabla$, so that Lemma \ref{LemGammaPoids} reads
\begin{eqnarray*}
F'(t) &\leq& -2\int \psi(P_t f) \left( \nabla P_t f\right)^T N\nabla P_t f \text{d} \mu
\end{eqnarray*}
with
\begin{eqnarray*}
N &=& \begin{pmatrix}
b -\frac12 (L_s+\partial_t)a & & - a \nabla^2 U+b -\frac12 (L_s+\partial_t)b\\
& & \\
c -\frac12 (L_s+\partial_t)b & & - b \nabla^2 U+c -\frac12 (L_s+\partial_t)c
\end{pmatrix} .
\end{eqnarray*}
In the top left corner $b$ is good news since it gives some coercivity in the $x$ variable. Nevertheless as soon as $b\neq 0$, $b\nabla^2U$ in the bottom right corner is an annoying term that can only be controlled by the entropy production if it is bounded (which is where, in the previous studies, the assumption that $\nabla^2 U$ is bounded barged in).
\medskip
\noindent Writing $\alpha(t) = (1-e^{-t})$, set
\[c=2\varepsilon\alpha H^{-\eta} \hspace{25pt}b=\varepsilon^2\alpha^2H^{-2\eta}\hspace{25pt}a=\varepsilon^3\alpha^3H^{-3\eta}\]
for some $\varepsilon\in(0,1)$. In other words,
\begin{eqnarray*}
(\nabla f)^T M\nabla f &=& \varepsilon \alpha H^{-\eta} |\nabla_y f|^2 + \varepsilon \alpha H^{-\eta} |\nabla_y f + \varepsilon\alpha H^{-\eta} \nabla_x f|^2,
\end{eqnarray*}
so that, in particular, $M$ is positive semi-definite (positive definite for $t>0$); indeed, expanding the squares, the right hand side is exactly $a\,|\nabla_x f|^2+2b\,\nabla_x f\cdot\nabla_y f+c\,|\nabla_y f|^2$ for the above choice of $a$, $b$ and $c$. In that case we bound
\begin{eqnarray*}
b -\frac12 (L_s+\partial_t)a
& \geq & \varepsilon^2\alpha^2 H^{-2\eta} - \frac32\eta(6\eta+d+4)\varepsilon^3\alpha^3 H^{-3\eta} - \frac32\varepsilon^3\alpha^2 e^{-t} H^{-3\eta}\\
& \geq & \varepsilon^2\alpha^2 H^{-2\eta}\left( 1 -\left(\frac32\eta(6\eta+d+4)\alpha + \frac32 e^{-t}\right) \varepsilon\right)\\
& \geq & \varepsilon^2\alpha^2 H^{-2\eta}\left( 1- 9(\eta+d)^2\varepsilon\right)
\end{eqnarray*}
\begin{eqnarray*}
- b \nabla^2 U+c -\frac12 (L_s+\partial_t)c
& \geq & -\varepsilon^2 \alpha^2 \| H^{-2\eta}\nabla^2 U\|_{\infty} + 2\varepsilon\alpha H^{-\eta} -\eta(2\eta+d+4)\varepsilon\alpha H^{-\eta} - \varepsilon e^{-t}H^{-\eta} \\
& \geq & -\varepsilon^2 \alpha^2 \| H^{-2\eta}\nabla^2 U\|_{\infty} - \varepsilon H^{-\eta}\left( -2\alpha + \eta(2\eta+d+4)\alpha + e^{-t}\right) \\
& \geq & -\varepsilon^2\| H^{-2\eta}\nabla^2 U\|_{\infty} - 3\varepsilon\left( \eta+d\right)^2
\end{eqnarray*}
\begin{eqnarray*}
| b+c-a\nabla^2U-(L_s+\partial_t)b|
& \leq & |\varepsilon^2 \alpha^2 H^{-2\eta} + 2\varepsilon \alpha H^{-\eta} - 2 e^{-t} \varepsilon^2 \alpha H^{-2\eta}| \\
& & + |\varepsilon^3 \alpha^3 H^{-3\eta}\nabla^2 U | + 2\eta (4\eta+d+4) \varepsilon^2 \alpha^2 H^{-2\eta}\\
&\leq & \varepsilon\alpha H^{-\eta} \left( \varepsilon^2\| H^{-2\eta}\nabla^2 U\|_{\infty} + 2 + 8\varepsilon(\eta+d)^2\right)
\end{eqnarray*}
which implies for $\varepsilon = \frac14 \times \frac19 (\eta+d)^{-2}$ that
\begin{eqnarray*}
(\nabla f)^T N \nabla f & \geq & \frac14\varepsilon^2\alpha^2 H^{-2\eta} |\nabla_x f|^2
- A|\nabla_y f|^2
\end{eqnarray*}
with
\begin{eqnarray*}
A &=& \frac12 \left( \varepsilon^2\| H^{-2\eta}\nabla^2 U\|_{\infty} + 2 + \frac{2}{9}\right)^2 + \varepsilon^2\| H^{-2\eta}\nabla^2 U\|_{\infty} + \frac{1}{12} \\
& \leq & \left( \| H^{-2\eta}\nabla^2 U\|_{\infty} + 2\right)^2\ :=\ \lambda.
\end{eqnarray*}
Writing
\begin{eqnarray*}
G(t) &=& \frac{1}{2\lambda} F(t) + \int \Psi(P_t f)\text{d} \mu,
\end{eqnarray*}
we have obtained
\begin{eqnarray*}
G'(t) &\leq & - \int \Psi''(P_t f) \left( \frac{\alpha^2 \varepsilon^2}{4\lambda} H^{-2\eta}|\nabla_x P_t f|^2+ \left( 2 - \frac A\lambda \right)|\nabla_y P_t f|^2\right) \text{d} \mu \\
&\leq & - \frac{\alpha^2 \varepsilon^2}{4\lambda} \int \Psi''(P_t f) \left( H^{-2\eta}|\nabla_x P_t f|^2+ |\nabla_y P_t f|^2\right) \text{d} \mu.
\end{eqnarray*}
On the one hand,
\begin{eqnarray*}
F(t) & \leq & 3\varepsilon \alpha \int \Psi''(P_t f) \left( H^{-2\eta}|\nabla_x P_t f|^2+|\nabla_y P_t f|^2\right) \text{d} \mu,
\end{eqnarray*}
and on the other hand, using the inequality \eqref{assump2},
\begin{eqnarray*}
\int \Psi(P_t f)\text{d} \mu & \leq & \rho \int \Psi''(P_t f) \left( H^{-2\eta}|\nabla_x P_t f|^2+|\nabla_y P_t f|^2\right) \text{d} \mu,
\end{eqnarray*}
which implies
\begin{eqnarray*}
G'(t) & \leq & - \frac{\alpha^2 \varepsilon^2}{1+4\lambda\rho} G(t).
\end{eqnarray*}
Hence,
\[\text{Ent}_\mu(P_t f) \ \leq \ G(t) \ \leq \ G(0) \exp\left(- \frac{ \varepsilon^2}{1+4\lambda\rho}\int_0^t \alpha^2(s)\text{d} s\right),\]
and $G(0) = \text{Ent}_\mu(f)$ since $\alpha(0)=0$ entails $F(0)=0$. The proof is complete.
\end{proof}
\medskip
\section{Weighted Functional Inequalities with $\eta\geq 0$.}
We turn to the study of the functional inequality \eqref{assump2}. For simplicity we shall only consider the cases $\Psi(u)=(u-1)^2$ (Variance) and $\Psi(u)=u\ln u -u +1$ (Entropy). \\ \noindent Recall the definition of $L_\eta$, $$L_{\eta}:=H^{-2\eta} \Delta_x + \Delta_y - H^{-2\eta}\left(2\eta\frac{\nabla_x H}{H}+\nabla_x H\right).\nabla_x - \nabla_y H.\nabla_y \, ,$$ which satisfies
\begin{equation}\label{eqIPP'}
-\int \, f \, L_\eta f \, \text{d} \mu \, = \, \int \, (H^{-2\eta} \, |\nabla_x f|^2+ |\nabla_y f|^2) \, \text{d} \mu \, : = \mathcal E_\eta(f).
\end{equation}
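\noindent Identity \eqref{eqIPP'} is most easily seen from the divergence form of $L_\eta$: since $\nabla_x\left(H^{-2\eta}\right)=-2\eta\, H^{-2\eta}\,\frac{\nabla_x H}{H}$, one may rewrite
$$L_\eta f = e^{H}\,\nabla_x\cdot\left(H^{-2\eta}\,e^{-H}\,\nabla_x f\right) + e^{H}\,\nabla_y\cdot\left(e^{-H}\,\nabla_y f\right)\,,$$
and integrate by parts with respect to $\text{d}\mu=\frac 1Z\,e^{-H}\,\text{d} x\,\text{d} y$.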
Let us now state the main results of this section.
\begin{thm}\label{thmlyap-var}
The weighted Poincar\'e inequality $$\text{Var}_\mu(g) \leq \rho \, \int \, \left(H^{-2\eta}|\nabla_x g|^2+|\nabla_y g|^2\right) \, \text{d}\mu$$ is satisfied if and only if there exists a Lyapunov function, i.e. a smooth function $W$ such that $W(x,y)\geq w >0$ for all $(x,y)$, a constant $\lambda>0$ and a bounded open set $A$ such that $$L_\eta W \leq - \, \lambda \, W \, + \, \mathbf 1_{\bar A} \, .$$
\end{thm}
We then provide the equivalent result for the logarithmic Sobolev inequality.
\begin{thm}\label{thmlyap-ent}
Assume that $H$ goes to infinity at infinity and that there exists $a>0$ such that $e^{aH}\in \mathbb L^1(\mu)$.
\begin{enumerate}
\item If $\mu$ satisfies the weighted log-Sobolev inequality \eqref{eqlspoids}, then, there exists a Lyapunov function, i.e. a smooth function $W$ such that $W(x,y)\geq w >0$ for all $(x,y)$, two positive constants $\lambda$ and $b$ such that
\begin{equation}\label{eqlyapls}
L_\eta W \leq - \, \lambda \, H \, W \, + \, b \, .
\end{equation}
\item Conversely, assume that there exists a Lyapunov function satisfying \eqref{eqlyapls} and that $|\nabla H|(x,y) \geq c > 0$ for $|(x,y)|$ large enough. Define \[
\theta(r)=\sup\limits_{z\in \partial A_r} \max\limits_{i,j=1,...,2d} |\frac{\partial^2 H}{\partial z_i\partial z_j} |
\]
and assume that $\theta(r)\leq ce^{C_0r}$ with some positive constants $C_0$ and $c$ for $r$ sufficiently large. Then $\mu$ satisfies the weighted log-Sobolev inequality \eqref{eqlspoids}.
\end{enumerate}
\end{thm}
These theorems are the analogues, in the weighted situation we are looking at, of (part of) Theorem 1.1 and Theorem 1.2 in \cite{CG16}. Their proofs are very similar concerning part 1) of the previous theorem, and we shall only give some details in the entropic case. Let us begin with a simple and crucial lemma, at the basis of the use of Lyapunov type conditions. Note that it can also be proved via a large deviations argument.
\begin{lem}\label{lem52} For every continuous function $W\ge 1$ in the domain of $L_\eta$ such
that $-L_\eta W/W$ is $\mu$-a.e. lower bounded, for all $g $ in the domain of $L_\eta$,
\begin{equation}\label{lem52a}
\int -\frac{L_\eta W}{W} g^2 \, \text{d}\mu \le \int \left( H^{-2\eta} |\nabla_x g|^2 + |\nabla_y g|^2 \right) \, \, \text{d}\mu.\end{equation}
\end{lem}
\begin{proof}This follows from an integration by parts and the Cauchy-Schwarz inequality, in the form $2\frac{g}{W}\langle \nabla W,\nabla g\rangle \leq \frac{g^2}{W^2}|\nabla W|^2 + |\nabla g|^2$, applied separately to the $x$ and $y$ parts. Indeed,
\begin{eqnarray*}
\int -\frac{L_\eta W}{W} g^2 \,\text{d}\mu
&=& \int H^{-2\eta}\langle \nabla_x W,\nabla_x \frac{g^2}{W}\rangle + \langle \nabla_y W,\nabla_y \frac{g^2}{W}\rangle \text{d}\mu\\
&=&\int H^{-2\eta}\left(-\frac{g^2}{W^2}|\nabla_x W|^2 + 2\frac{g}{W}\langle \nabla_x W,\nabla_xg\rangle\right)\\&&\qquad\qquad\qquad+ \left(-\frac{g^2}{W^2}|\nabla_y W|^2 + 2\frac{g}{W}\langle \nabla_y W,\nabla_yg\rangle\right) \text{d}\mu \\
&\leq& \int \left( H^{-2\eta} |\nabla_x g|^2 + |\nabla_y g|^2 \right) \, \, \text{d}\mu
\end{eqnarray*}
\end{proof}
Let us now prove Theorem \ref{thmlyap-ent}.
\begin{proof}
For a given nice function $\phi$, introduce the operator $G_\eta$ via $G_\eta h=- \, L_\eta h +\phi h$. For any $h$ in the domain of $L_\eta$, $\int \, h \, G_\eta h \text{d} \mu = \mathcal E_\eta(h) + \int h^2 \, \phi \, \text{d}\mu$. Choosing $\phi = -c + \mathbf 1_A$ (for some set $A$ to be defined) in the variance case and $\phi = \rho(b - H)$ in the entropic case, one deduces that $G_\eta$ is continuous for the norms whose squares are respectively $\mathcal E_\eta(h) + \int_A h^2 \, \text{d}\mu$ and $\mathcal E_\eta(h) + \int h^2 \, \text{d}\mu$. If a weighted Poincar\'e inequality (resp. weighted log-Sobolev inequality) is satisfied, following the proof of Theorem 2.1 (resp. Proposition 3.1) in \cite{CG16}, we get that the form $\int h \, G_\eta h \, \text{d}\mu$ is also coercive, so that applying the Lax-Milgram theorem we get a solution to $G_\eta h= 1$, which furnishes the desired Lyapunov function (see \cite{CG16} for the details).
\medskip
For the converse, we revisit the proof of \cite{CG16} Proposition 3.5 in order to adapt it to our case. As usual, we will rather prove the (weighted) log-Sobolev inequality in its equivalent (weighted) Super Poincar\'e inequality form, i.e. there exist $c,\beta>0$ such that for all smooth $f$ and $s>0$,
$$\int f^2\text{d}\mu\le s\int (H^{-2\eta}|\nabla_x f|^2+|\nabla_y f|^2)\text{d}\mu+c\,e^{\beta/s}\left(\int|f|\text{d}\mu\right)^2.$$ Indeed, the latter implies a defective (weighted) log-Sobolev inequality and a weighted Poincar\'e inequality (choosing $s$ such that $c e^{\beta/s}=1$) and we obtain a tight (weighted) log-Sobolev inequality by using Rothaus lemma (see \cite{BGL14} p.239), which states that
\begin{equation}\label{eqrot}
\mbox{Ent}_\mu(f^2) \leq \mbox{Ent}_\mu(\tilde f^2) + 2 \mbox{Var}_\mu(f) \, ,
\end{equation}
where $\tilde f= f - \int f \, \text{d}\mu$. For all this we refer to \cite{CGWW09,CGW11,Wbook}.
\medskip
Recall $A_r=\{H \leq r\}$. For $r_0$ large enough and some $\lambda'<\lambda$ we have
$$L_{\eta}W \leq - \lambda' \, H \, W \, + \, b \, \mathbf 1_{A_{r_0}} \, ,$$
so that we may assume that
$$\frac{L_{\eta}W}{W}(x,y) \leq - \, \lambda \, H(x,y) \, + \, \frac{b}{w} \, \mathbf 1_{A_{r_0}} \, .$$
For $r>r_0$ we then have
\begin{eqnarray*}
\int \, f^2 \, \text{d}\mu
&\leq& \int_{A_r} f^2 \, \text{d}\mu \, + \, \int_{A^c_r} \frac{\lambda H}{\lambda r}f^2 \, \text{d}\mu \\
&\leq& \int_{A_r} f^2 \, \text{d}\mu \, + \, \int \frac{\lambda H}{\lambda r}f^2 \, \text{d}\mu \\
&\leq& \int_{A_r} f^2 \, \text{d}\mu \, + \, \frac{1}{\lambda \, r} \, \int \left(\frac{-L_{\eta} W}{W}\, +\, \frac{\, b \, \mathbf 1_{A_{r_0}} \,}{w}\right) \,f^2 \, \text{d}\mu\\
&\leq& \left(1+\frac{b}{\lambda r w}\right) \, \int_{A_r} f^2 \, \text{d}\mu \, + \, \frac{1}{\lambda \, r} \, \int \left( H^{-2\eta} |\nabla_x f|^2 + |\nabla_y f|^2 \right) \, \, \text{d}\mu \, .
\end{eqnarray*}
It remains to control the integral on $A_r$. It is in fact a simple consequence of Nash inequalities for the Lebesgue measure rewritten in their Super Poincar\'e form (cf. \cite[Prop. 3.8]{CGWW09}): there exists $c_d$ such that for all $r$ large enough, all smooth $f$ and $s>0$
\begin{eqnarray*}
\int_{A_r} f^2 \text{d} x \text{d} y &\le& s \, \int_{A_r} |\nabla f|^2 \text{d} x \text{d} y+c_d \theta^d(r)(1+s^{-2d})\left(\int|f| \text{d} x \text{d} y\right)^2 \\
&\leq & s\,\int_{A_r} |\nabla f|^2 \text{d} x \text{d} y+c_d c e^{2dC_0r}(1+s^{-2d})\left(\int|f| \text{d} x\text{d} y\right)^2 \, .
\end{eqnarray*}
Recall that $H\geq 1$. We thus have
\begin{eqnarray*}
\int_{A_r} \, f^2 \, \text{d}\mu &\leq& \frac{1}{eZ} \, \int_{A_r} f^2 \text{d} x \text{d} y \\ &\leq& \frac{r^{2\eta} \, e^r}{e} \, s \, \int \left( H^{-2\eta} |\nabla_x f|^2 + |\nabla_y f|^2 \right) \, \text{d}\mu \, + \, Z c_dc e^{2dC_0r}(1+s^{-2d})e^{2r}\left(\int_{A_r} |f|d\mu\right)^2 \, .
\end{eqnarray*}
Letting $u=se^{r-1} \, r^{2\eta}$ and $C'=Zcc_d$, and bounding the integrals on the right hand side by integrals over the whole space, we have thus obtained (for $r$ large enough)
\begin{eqnarray*}
\int_{A_r} \, f^2 \, \text{d}\mu &\leq& u \, \int \left( H^{-2\eta} |\nabla_x f|^2 + |\nabla_y f|^2 \right) \, \text{d}\mu \, + \,C' \, r^{4d\eta} \, (1+u^{-2d})e^{2(1+dC_0+d)r}\left(\int|f|d\mu\right)^2 \, .
\end{eqnarray*}
Denoting $c=1+\frac{b}{\lambda r_0 w}$, and $\beta_d=2+2d+2dC_0$, we thus have, for all $u>0$ and $r$ large enough
\begin{equation}\label{eqsuperP}
\int f^2 \text{d}\mu \, \leq \, \left(u \, c \, + \, \frac{1}{\lambda \, r}\right) \, \int \left( H^{-2\eta}|\nabla_x f|^2 + |\nabla_y f|^2 \right) \, \, \text{d}\mu \, + \, C' \, (1+u^{-2d}) \, r^{4d\eta} \, c \, e^{\beta_d r} \, \left(\int \, |f| \, d\mu\right)^2.
\end{equation}
Choosing $r\lambda =(uc)^{-1}$ and $s=2uc$, we have thus proved the existence of some $\beta'_d$ such that
$$\int f^2 \text{d}\mu \, \leq \, s \, \int \left( H^{-2\eta}|\nabla_x f|^2 + |\nabla_y f|^2 \right) \, \, \text{d}\mu \, + \, C'' \, e^{\beta'_d/s} \, \left(\int \, |f| \, d\mu\right)^2 \, ,$$ and the proof is complete.
\end{proof}
\begin{rmq}
For a general weighted logarithmic Sobolev inequality with the weighted energy $$\int \left( w_1|\nabla_x f|^2 + w_2|\nabla_y f|^2 \right)d\mu,$$ we can introduce the symmetric generator
$$L_{w_1,w_2}:=w_1 \Delta_x + w_2 \Delta_y - w_1\left(-\frac{\nabla_x w_1}{w_1}+\nabla_x H\right).\nabla_x - w_2\left(-\frac{\nabla_y w_2}{w_2}+\nabla_y H\right).\nabla_y.$$
If a Lyapunov function (as in Theorem \ref{Thm-LyapunovCondition} but for $L_{w_1,w_2}$) exists, then following the same lines, we can obtain (with the required additional assumptions on the weights) a weighted logarithmic Sobolev inequality. \hfill $\diamondsuit$
\end{rmq}
We now proceed to the
\begin{proof}[Proof of Corollary \ref{Cor-Lyapunov}]
Consider a smooth function $W(x,y)=e^{\alpha U(x)+\frac{\beta}{2}|y|^2}$ with two constants $\alpha, \beta\in (0,1)$ to be determined. Then for $|(x,y)|\geq R$,
\begin{eqnarray*}
\frac{L_\eta W}{W}
&=& \alpha H^{-2\eta}\left[ \Delta_x U +\left( \alpha - \frac{2\eta}{H}-1 \right)|\nabla_x U|^2\right] + \beta(d-(1-\beta)|y|^2)\\
&\le& \beta d- \alpha \left( 1 - \alpha - \kappa \right)|\nabla_x U|^2 H^{-2\eta}-\beta(1-\beta)|y|^2
\end{eqnarray*}
where we used the first condition in the assumption of the corollary.
To bound the last term by some $C -\lambda H$, we consider $\alpha \in (0,1-\kappa), \beta\in (0,1)$, and divide it into two cases. If $\frac{|y|^2}{2}\geq \frac{H}{2}$, then
\[
-\alpha\left( 1 - \alpha - \kappa \right)|\nabla_x U|^2 H^{-2\eta}-\beta(1-\beta)|y|^2
\le -\beta(1-\beta) H
\]
Otherwise, we have $U\geq \frac{H}{2}$. Combined with the second condition, it follows that
\[
-\frac{|\nabla_x U|^2}{H^{2\eta}}\leq -\frac{c U^{2\eta+1}}{2^{2\eta}U^{2\eta}}\leq -\frac{c}{2^{2\eta+1}}H
\]
which completes the proof of the Lyapunov condition. Since the second condition implies that $U$ goes to infinity at infinity and $|\nabla_x U|\geq u> 0$ outside a compact set, we get a weighted logarithmic Sobolev inequality for $\mu$ by the previous theorem.
\end{proof}
The next example, the simple polynomial case, will show the adequacy of our conditions for the weighted log-Sobolev inequality with Assumption \ref{HypB}.
\begin{ex}
Let us consider the example where $U(x)=|x|^l$ with $l>2$ for $|x|$ large enough, that is, $H(x,y)=|x|^l+\frac{|y|^2}{2}$. Then $\Delta_x U=(dl+l^2-2l)|x|^{l-2}$ and $|\nabla_x U|^2=l^2 |x|^{2l-2}$. The first condition is satisfied since $l> 2$, while for the second condition we need
$$\eta \le \frac{1}{2}-\frac{1}{l}\,.$$
Note that $|U^{-2\eta}\nabla^2 U|\sim |x|^{l-2-2l\eta}$ at infinity, so that to ensure that $U^{-2\eta}\nabla^2 U$ is bounded we have to choose $\eta=\frac{1}{2}-\frac{1}{l}$. With the case $l=2$ we recover Villani's result.
\end{ex}
Let us give another example, showing that the limiting growth we can handle for the potential $U$ is just below exponential growth.
\begin{ex}
Choose now $U(x)=e^{a|x|^b}$ with $a,b>0$ for $|x|$ large enough. Then $\Delta_xU\sim a^2b^2|x|^{2(b-1)}e^{a|x|^b}$ and $|\nabla_x U|^2\sim a^2b^2 e^{2a|x|^b}$. The first condition is thus satisfied, while the second one imposes once again that $2\eta+1\le 2$. Now, Assumption \ref{HypB} imposes that $2\eta>1$ if $b\ge 1$, which is incompatible with the previous constraint, and that $2\eta\ge 1$ if $b<1$, so that the choice $\eta=1/2$ is admissible.
\end{ex}
Let us end this section by a remark
\begin{rmq}
For the multiplier method in the variance case, Villani does not use the weight $H^{-2\eta}$ in the energy, but proves a rather stronger inequality with the weight $U(x)^{-2\eta}(1+|y|^2)^{-2\eta}$ in front of the derivative in $x$, as will be seen in the next section. The fact that he deals with the variance helps enough to prove such a weighted Poincar\'e inequality. We may also consider a weighted logarithmic Sobolev inequality with such a weight. However, via the Lyapunov condition approach, the condition on $\eta$ is then too strong to match with Assumption \ref{HypB}. It is thus crucial for Theorem~\ref{ThmHypocoPoids} to have a weighted inequality with weight $H^{-2\eta}$.
\end{rmq}
The next section presents an alternative approach to the problem alluded to in the previous remark: is it possible to obtain a ``tensorization-like'' proof of a weighted logarithmic Sobolev inequality, as in Villani's paper, thus giving an alternative to Lyapunov conditions?
\medskip
\section{Some further remarks on weighted inequalities.}\label{sec comments}
In this final section we shall try to understand whether it is possible to impose conditions on $U$ solely in order to get weighted inequalities. We shall use several times the following elementary inequalities, true for all $\eta \geq 0$, all $x$ and $y$ (recall that $U\geq 1$)
\begin{equation}\label{eqneqH}
U^{-\eta}(x) \, \left(1+\frac 12 \, |y|^2\right)^{-\eta} \, \leq \, H^{-\eta}(x,y) \, \leq \min \left(U^{-\eta}(x) \, , \, \left(1+\frac 12 \, |y|^2\right)^{-\eta}\right) \, .
\end{equation}
We shall use in the sequel the notations $U^{- 2\eta}(x)=\phi_1(x)$ and $\left(1+\frac 12 \, |y|^2\right)^{-2 \eta}=\phi_2(y)$.
\medskip
\subsection{The case of weighted Poincar\'{e} inequalities}
Assume that $\mu$ satisfies a weighted Poincar\'e inequality. If we choose an $f$ that only depends on $x$ and use that $H^{-2\eta}(x,y)\leq U^{-2\eta}(x)$ for all $y$, we immediately see that the first marginal of $\mu$, i.e. $\text{d}\mu_1(x) :=\frac{1}{Z_1}e^{-U(x)}\text{d} x $ also satisfies the weighted Poincar\'{e} inequality
\begin{equation}\label{Eq-weighted PI on x}
\text{Var}_{\mu_1}(f) \leq C \int U^{-2\eta}|\nabla f|^2\text{d}\mu_1 \, .
\end{equation}
Conversely we have,
\begin{thm}\label{thmwp} Write $\mu(dx,dy)=\mu_1(dx)\otimes \mu_2(dy)$. If $\mu_1(dx) =\frac{1}{Z_1} \, e^{-U(x)}\text{d} x$ satisfies the weighted Poincar\'{e} inequality \eqref{Eq-weighted PI on x} with constant $C_1$,
then $\mu$ satisfies the following weighted Poincar\'{e} inequality
\[
\text{Var}_{\mu}(h) \leq C' \int (H^{-2\eta}|\nabla_x h|^2 + |\nabla_y h|^2)\text{d}\mu
\]
with $$C'\leq \max \left(\left(2+\frac{4}{M_2}\right), \frac{4C_1}{M_2}\right) \quad \textrm{ where } M_2= \int \, \left(1+ \frac 12 |y|^2\right)^{-2\eta} \, \mu_2(dy) \, .$$
\end{thm}
\begin{proof}
A proof is given in Villani \cite{Villani} Theorem A.3. It uses extensively the spectral theory of the sum of operators. We shall give a more pedestrian (similar) proof.
\medskip
\noindent The first point is that, since we assumed that $U\geq 1$,
\begin{equation}\label{eqminH}
H^{-2\eta}(x,y) \geq \phi_1(x) \, \phi_2(y) := \, U^{-2\eta}(x) \; \left(1 + \frac 12 \, |y|^2\right)^{-2\eta} \, .
\end{equation}
Thus, if we decompose $\mu(dx,dy)=\mu_1(dx)\otimes \mu_2(dy)$ we have
\begin{eqnarray*}
\int \, H^{-2\eta} \, |\nabla_xh|^2 \, \mu(dx,dy) &\geq& \int \, \phi_1(x) \, \phi_2(y) \, |\nabla_xh|^2 \, \mu_1(dx)\otimes \mu_2(dy) \\ &\geq& \frac{1}{C_1} \, \int \, \phi_2(y) \, \left(h(x,y)-\int h(u,y)\mu_1(du)\right)^2 \, \mu(dx,dy) \, .
\end{eqnarray*}
Now write,
\begin{eqnarray*}
h(x,y)-\int h(u,y)\mu_1(du) &=& \left(h(x,y)-\int h(u,y)\mu_1(du) -\int h(x,v)\mu_2(dv)+ \int\int h \text{d}\mu_1 \text{d}\mu_2\right) + \\ &+&\left(\int h(x,v) \mu_2(dv) -\int\int h \text{d}\mu_1 \text{d}\mu_2\right)\\ &=& g_1(x,y) + g_2(x)
\end{eqnarray*}
\noindent and use $$(a+b)^2 \geq \frac 12 \, b^2 \, - \, a^2 \, .$$ This yields, since $\phi_2(y) \leq 1$,
\begin{equation*}
\int \, H^{-2\eta} \, |\nabla_xh|^2 \, \mu(dx,dy) \geq \frac{1}{2C_1} \, \left(\int \phi_2 \, d\mu_2\right) \left(\int g_2^2(x) \mu_1(dx)\right) - \, \frac{1}{C_1} \, \int\int g_1^2(x,y) \, \mu_1(dx)\mu_2(dy) \, .
\end{equation*}
Notice that for all $y$, $$\int g_1^2(x,y) \, \mu_1(dx)= \text{Var}_{\mu_1}\left(h(.,y)- \int h(.,v) \, \mu_2(dv)\right)$$ so that $$\int g_1^2(x,y) \, \mu_1(dx) \leq \int \, \left(h(x,y)- \int h(x,v) \, \mu_2(dv)\right)^2 \, \mu_1(dx) \, .$$
We can thus integrate this inequality w.r.t. $\mu_2$, use Fubini's theorem, then for each fixed $x$ use the usual Poincar\'e inequality for the standard gaussian measure $\mu_2$ and finally integrate with respect to $\mu_1$. This yields
\begin{eqnarray*}
\int\int g_1^2(x,y) \, \mu_1(dx)\mu_2(dy) &\leq& \int\int \, \left(h(x,y)- \int h(x,v) \, \mu_2(dv)\right)^2 \, \mu(dx,dy) \\ &\leq& \int\int |\nabla_y h|^2(x,y) \, \mu(dx,dy) \, .
\end{eqnarray*}
Gathering all this we have obtained
\begin{equation}\label{eqpwinter}
\int g_2^2(x) \mu_1(dx) \leq \frac{2 \, C_1}{M_2} \, \int H^{-2\eta}|\nabla_x h|^2 \, \text{d}\mu + \frac{2}{M_2} \, \int |\nabla_y h|^2 \, \text{d}\mu \, .
\end{equation}
Finally
\begin{eqnarray*}
\text{Var}_{\mu}(h) &=& \int \, \left(h(x,y)-\int h(x,v)\mu_2(dv)+\int h(x,v)\mu_2(dv)-\int \int h\text{d} \mu\right)^2 \, \mu(dx,dy) \\ &\leq& 2 \,\int\int \, \left(h(x,y)- \int h(x,v) \, \mu_2(dv)\right)^2 \, \mu(dx,dy) + \, 2 \, \int g_2^2(x) \mu_1(dx) \\ &\leq& 2 \, \int\int |\nabla_y h|^2(x,y) \, \mu(dx,dy) + \, 2 \, \int g_2^2(x) \mu_1(dx) \, ,
\end{eqnarray*}
and the result follows from \eqref{eqpwinter}.
\end{proof}
\noindent As a conclusion the weighted Poincar\'e inequality on $\mathbb R^{2d}$ reduces to a weighted Poincar\'{e} inequality on $\mathbb R^d$ (up to some constant). One should think that the previous result is a kind of weighted tensorization property. This is not the case, due to the fact that the weight in front of $\nabla_x$ depends on both variables $x$ and $y$.\\ \noindent There are many ways to obtain such an inequality. Of course, since it is stronger than the usual Poincar\'e inequality, our result is weaker than the one of Villani (but with a simpler proof and explicit bounds for the constants), and we will only describe a typical situation where this inequality can be obtained. \\ As we have seen in the previous section, this weighted Poincar\'e inequality is equivalent to the existence of some Lyapunov function for $L_{1,\eta}$, which is built similarly to $L_\eta$ replacing $H$ by $U$. We can also obtain a slightly different condition. Introduce the probability measure $\mu_1^\phi(dx) = \frac{\phi_1(x)}{M_1} \, \mu_1(dx)$, where $M_1=\int \phi_1 \, \text{d}\mu_1$, and the $\mu_1^\phi$ symmetric operator $$G_1^\phi = \Delta_x - \left(1+\frac{2\eta}{U}\right) \nabla U. \nabla \, .$$ Assume that we can find a Lyapunov function $W\geq 1$ such that $$\frac{G_1^\phi W(x)}{W(x)} \leq - a \, U^{2\eta}(x)$$ for $|x|$ larger than some $R>0$. If $h$ is compactly supported in $|x|>R$, we may write $$\int h^2 \, d\mu_1 \leq - \frac{M_1}{a} \, \int \frac{G_1^\phi W}{W} \, h^2 \, \text{d} \mu_1^\phi \leq \frac{M_1}{a} \int |\nabla h|^2 \, \text{d} \mu_1^\phi = \frac{1}{a} \, \int |\nabla h|^2 \, U^{-2\eta} \, \text{d} \mu_1$$ according to the computations in \cite{BBCG} p.64. Following the method introduced in \cite{BBCG} we then obtain that $\mu_1$ satisfies the desired weighted Poincar\'e inequality. According to \cite{CG16} Theorem 4.4, the existence of such a Lyapunov function is linked to the fact that $\mu_1$ satisfies some $F$-Sobolev inequality, with $F=\ln_+^{2\eta}$. This is for instance the case when $U(x)=1+|x|^\alpha$ and $\eta=1-\alpha^{-1}$.
\bigskip
\subsection{The case of weighted log-Sobolev inequalities}
\noindent We look now at the similar weighted logarithmic Sobolev inequality, namely,
\begin{equation*}
\mbox{Ent}_\mu(f^2)\leq \rho \int (H^{-2\eta}|\nabla_x f|^2 + |\nabla_y f|^2)\text{d} \mu.
\end{equation*}
\noindent As in the $L^2$ setting, it implies a weighted log-Sobolev inequality for $\mu_1$ on $\mathbb R^d$, i.e.
\begin{equation}\label{eqls1}
\mbox{Ent}_{\mu_1}(f^2)\le C \, \int U^{-2\eta}|\nabla_x f|^2\text{d}\mu_1 \, .
\end{equation}
Since the standard gaussian measure $\mu_2$ satisfies a log-Sobolev inequality too (with optimal constant $2$), one should expect to obtain the analogue of Theorem \ref{thmwp}. This is not so easy (actually we did not succeed in proving such a result) and certainly explains the limitation of Villani's approach, since this property reduces to the well-known tensorization property of the logarithmic Sobolev inequality only in the case $\eta=0$. The best we are able to do is to prove that, in this situation
\medskip
\begin{thm}\label{thmtensorls}
Write $\mu(dx,dy)=\mu_1(dx)\otimes \mu_2(dy)$. If $\mu_1(dx) =\frac{1}{Z_1} \, e^{-U(x)}\text{d} x$ satisfies the weighted log-Sobolev inequality \eqref{eqls1}, then $\mu$ satisfies \eqref{assump2} with an admissible function $u\mapsto \Psi(u)$ behaving like $u \, \ln^{\frac 12}(u)$ at infinity.
\end{thm}
Combined with the results of Section 2, which deal with the decay of more general functionals than the variance or entropy, we are thus able to prove under such conditions an exponential decay for $\Psi$ behaving like $u \, \ln^{\frac 12}(u)$ at infinity.
\begin{proof}
The first step of the proof is the following
\begin{lem}\label{lemphils}
Define the probability measure $\mu_2^\phi(dy) = \frac{\phi_2(y)}{M_2} \, \mu_2(dy)$. Then $\mu_2^\phi$ satisfies a log-Sobolev inequality.
\end{lem}
\noindent An immediate consequence is the following inequality for $\mu^\phi(dx,dy)=\mu_1(dx) \otimes \mu_2^\phi(dy)$,
\begin{equation}\label{eqtens}
\text{Ent}_{\mu^\phi}(h^2) \leq C \, \int (\phi_1 \, |\nabla_x h|^2 + |\nabla_y h|^2)\text{d}\mu^\phi \, ,
\end{equation}
which follows from the tensorization property of the log-Sobolev inequality.
\begin{proof}[Proof of Lemma \ref{lemphils}]
Write $$\mu_2^\phi(dy) = Z^\phi \, e^{- \, \left(\frac{|y|^2}{2} + 2\eta \, \ln(1+|y|^2/2)\right)} \text{d} y= Z^\phi \, e^{-V_2(y)} \text{d} y \, .$$ A simple calculation shows that $$Hess V_2(y)= \left(1 + \frac{2\eta}{1+|y|^2/2}\right) \, Id \; - \; \frac{2\eta}{(1+|y|^2/2)^2} \, M(y)$$ where $M_{i,j}(y)=y_i y_j$. Hence, $$Hess V_2(y) \, \geq \, \left(1 + \frac{2\eta}{1+|y|^2/2} - \frac{2 \eta d \, |y|^2}{(1+|y|^2/2)^2}\right) \, Id$$ in the sense of quadratic forms. Hence for $|y|$ large enough (of order $c \sqrt d$), the potential $V_2$ is uniformly convex. This proves the Lemma (combining the Bakry-Emery criterion and the Holley-Stroock perturbation argument).
\end{proof}
As we recalled, the weighted log-Sobolev inequality is equivalent to a (weighted) super Poincar\'e inequality, for all smooth $h$ and all $s>0$,
\begin{equation}\label{eqsuperdef1}
\int h^2 \text{d}\mu^\phi \, \leq \, s\int (\phi_1 \, |\nabla_x h|^2+|\nabla_y h|^2) \text{d}\mu^\phi \, + \, c\, e^{\beta/s}\left(\int|h| \, \text{d}\mu^\phi\right)^2 \, .
\end{equation}
Since $\phi_2 \leq 1$, it follows
\begin{equation}\label{eqsuperdef2}
\int h^2 \text{d}\mu^\phi \, \leq \, \frac s{M_2} \int (H^{-2\eta} \, |\nabla_x h|^2+|\nabla_y h|^2) \text{d}\mu \, + \, \frac c{M_2^2} \, e^{\beta/s}\left(\int|h| \, \text{d}\mu\right)^2 \, .
\end{equation}
For $R>1$, introduce the 1-Lipschitz function $$\varphi(r)=(r-R) \, \mathbf 1_{R<r<R+1} + \mathbf 1_{R+1\leq r} \, .$$ One can write
\begin{eqnarray*}
\int h^2 \text{d}\mu &\leq& \int_{|y|\leq R+1} \, h^2 \, \text{d}\mu + \int \, h^2 \, \varphi^2(|y|) \, \text{d}\mu \\ &\leq&
\frac{M_2}{\phi_2(R+1)} \, \int_{|y|\leq R+1} h^2 \text{d}\mu^\phi \, + \, \int \, h^2 \, \varphi^2(|y|) \, \text{d}\mu \\ &\leq& \frac{M_2}{\phi_2(R+1)} \, \int h^2 \text{d}\mu^\phi \, + \, \int \, h^2 \, \varphi^2(|y|) \, \text{d}\mu \, .
\end{eqnarray*}
The first term in the sum will be controlled thanks to \eqref{eqsuperdef2}. In order to control the second term, we introduce, once again, some Lyapunov function. \\
\noindent Denote by $G$ the Ornstein-Uhlenbeck operator $G=\Delta_y - y.\nabla_y$ and consider $W(y)=e^{|y|^2/4}$. A simple calculation shows that $$\frac{GW}{W} \leq \frac 14 \, (2d-|y|^2)$$ for $|y|>\sqrt{2d}$. Hence if $R>\sqrt{2d}$, we get for $|y|>R$, $$1 \leq 4 \, \left(\frac{-GW}{W}\right) \, \frac{1}{|y|^2-2d} \leq 4 \, \left(\frac{-GW}{W}\right) \, \frac{1}{R^2-2d}$$ and finally
\begin{equation}\label{eqlyap}
\int \, h^2 \, \varphi^2(|y|) \, \text{d}\mu \leq \frac{4}{R^2-2d} \, \int \, \left(\frac{-GW}{W}\right) \, h^2 \, \varphi^2(|y|) \, \text{d}\mu \, .
\end{equation}
\noindent Integrating by parts, and after some easy manipulations (see \cite{BBCG} for the details), we will thus obtain, for well chosen constants $C,C'$, all $s>0$ and large enough $R$,
\begin{equation}\label{eqlsbeta}
\int h^2 \text{d}\mu \leq C \, (sR^2+R^{-2}) \int (\phi_1 \, |\nabla_x h|^2+|\nabla_y h|^2) \text{d}\mu^\phi \, + \, C' \, R^2 \, e^{\beta/s}\left(\int|h| \, \text{d}\mu\right)^2 \, .
\end{equation}
Choosing $u=s^{\frac 12}$ and $R=s^{-1/4}$, we obtain a super Poincar\'e inequality
\begin{equation}\label{eqlsbeta2}
\int h^2 \text{d}\mu \leq C \, u \int (\phi_1 \, |\nabla_x h|^2+|\nabla_y h|^2) \text{d}\mu^\phi \, + \, C' \, e^{\beta'/u^2}\left(\int|h| \, \text{d}\mu\right)^2 \, ,
\end{equation}
which furnishes an $F=\ln_+^{\frac 12}$-Sobolev inequality, i.e. if $\int h^2 \, \text{d} \mu=1$,
\begin{equation*}
\int h^2 \, \ln_+^{\frac 12} h^2 \, \text{d}\mu \leq C \, \int (\phi_1 \, |\nabla_x h|^2+|\nabla_y h|^2) \text{d}\mu^\phi \, .
\end{equation*}
Notice that, since $\phi_2\leq 1$, the previous inequality is stronger than
\begin{equation}\label{eqlsbeta3}
\int h^2 \, \ln_+^{\frac 12} h^2 \, \text{d}\mu \leq C \, \int (H^{-2\eta} \, |\nabla_x h|^2+|\nabla_y h|^2) \text{d}\mu \, .
\end{equation}
It remains to link \eqref{eqlsbeta3} to \eqref{assump2}. Actually, as explained in \cite{BCR06} Section 7, one can replace $\ln_+$ by smooth functions $F$ with a similar behaviour at infinity (and satisfying $F(1)=0$). So we choose $\psi(u)= \frac{\ln^{\frac 12}(e+u)}{u}$ and $\Psi''=\psi$ with $\Psi(1)=0$. $\Psi(u)$ behaves like $F(u)= u \ln^{\frac 12}(e+u)$ at infinity. Applying \eqref{eqlsbeta3} with $\Psi$ instead of $u \ln_+^{\frac 12}(u)$ (modifying the constant) and $h^2=f$ we have (the value of $C$ varies from one line to the other)
\begin{eqnarray*}
\int \, \Psi(f) \, \text{d}\mu \, &\leq& \, C \, \int \frac 1f (H^{-2\eta} \, |\nabla_x f|^2+|\nabla_y f|^2) \text{d}\mu \, \\ &\leq& C \, \int \frac{\ln^{\frac 12}(e+f)}{f} (H^{-2\eta} \, |\nabla_x f|^2+|\nabla_y f|^2) \text{d}\mu \\ &\leq& C \, \int \, \psi(f) \, (H^{-2\eta} \, |\nabla_x f|^2+|\nabla_y f|^2) \text{d}\mu \, ,
\end{eqnarray*}
completing the proof.
\end{proof}
{\bf Acknowledgments.}\\ The project has benefitted from the support of ANR STAB (Stabilit\'e du comportement asymptotique d'EDP, de processus stochastiques et de leurs
discr\'etisations : 12-BS01-0019), and ANR EFI.
\bibliographystyle{plain}
\section{Introduction}
Cosmological observations provide overwhelming evidence for Dark Energy (DE)
\cite{Suzuki:2011hu,Anderson:2012sa,Parkinson:2012vd,Ade:2013zuv}.
This of course is a subject to several assumptions such as
that Einstein's General Relativity provides an accurate description of gravity on cosmological scales, and in addition that the Friedmann--Lema\^itre--Robertson--Walker models adequately describe our Universe \cite{Clifton:2011jh,Bolejko:2011jc,Buchert:2011sx}.
Although the simplest DE candidate, a cosmological constant $\Lambda$, is perfectly consistent with current cosmological observations, there is presently no satisfactory explanation for the tiny DE density (over $120$ orders of magnitude smaller than the Planck density) which, nevertheless, appears to account for about $70 \%$ of the total energy density of the Universe at the present time. Hence, despite the simplicity of the cosmological constant, dynamical DE models are arguably better motivated from a theoretical point of view (see, for example, \cite{Copeland:2006wr,Li:2011sd} and references therein).
While DE might explain the observed dynamics of the Universe on cosmological scales, a nonrelativistic dark matter (DM) component is required in the standard cosmological model to account for the observed clustering of large scale structures. The simplest model which incorporates the DM and DE components is the so-called $\Lambda$CDM model, in which the DE and DM roles are played by a cosmological constant $\Lambda$ and Cold Dark Matter (CDM) particles with a negligible free streaming length. The energy components of this model can either be taken as a DE fluid with $p_{\rm DE}=-\rho_{\rm DE}=-\rho_\Lambda$ and a DM fluid with $p_{\rm DM}=0$ or a single Unified DE (UDE) component with $p_{DE}=-\rho_\Lambda$ and arbitrary $\rho > \rho_\Lambda$ (here $\rho$ and $p$ represent the density and pressure, respectively). Hence, the $\Lambda$CDM scenario can be regarded as the simplest example of a UDE realization where the role of DM and DE are played by the same dark fluid \cite{Avelino:2003cf}. Various other interesting candidates for the unification of DM and DE have been proposed in the literature, including the Chaplygin gas and its variations \cite{Kamenshchik:2001cp,Bilic:2001cg,Bento:2002ps}, tachyon field models \cite{Gibbons:2002md,Padmanabhan:2002cp,Bagla:2002yn,Chimento:2003ta,Calcagni:2006ge,Avelino:2010qn,Avelino:2011ey}, and a large variety of Interacting Dark Energy (IDE) models \cite{Bassett:2002fe,Farrar:2003uw,Gumjudpai:2005ry,Boehmer:2008av,Clemson:2011an,Avelino:2012tc}.
In this paper we shall focus on UDE models in which the UDE fluid is described by a perfect fluid with an isentropic Equation of State (EoS) $p=p(\rho) = w(\rho) \rho$, where $w(\rho)$ is the EoS parameter of the UDE fluid, but most of our results will also apply to other IDE models in the strong coupling regime. Despite the very different parameterizations available for $w(\rho)$, all UDE models are characterized by an EoS satisfying two important properties: i) if the density is much larger than the average density of the Universe at the present time, then $w \sim 0$ (and $c_s \sim 0$, where $c_s$ is the sound speed); ii) if the density is close to the current average density of the Universe then the EoS parameter of the UDE fluid is close to $-1$. A representative example of a family of isentropic UDE models is the Generalized Chaplygin Gas (GCG), characterized by the EoS $p=-A/\rho^{\alpha}$ where $A > 0$ and $0 \le \alpha \le 1$ are constants.
Isentropic UDE models have been claimed to be essentially ruled out due to the late time oscillations of the matter power spectrum inconsistent with observations, except for a small region of parameter space close to the standard $\Lambda$CDM model \cite{Sandvik:2002jz} (for $\alpha < 0$ linear theory would instead predict an exponential blowup of the matter power spectrum). Although the inclusion of baryons in the analysis does lead to less stringent bounds on the GCG parameter $\alpha$ \cite{Beca:2003an}, linear isentropic UDE models have been shown to be tightly constrained by cosmological observations \cite{Alcaniz:2002yt,Dev:2002qa,Makler:2002jv,Bento:2003we,Amendola:2003bz,Cunha:2003vg,Dev:2003cx,Bertolami:2004ic,Biesiada:2004td,Wu:2006pe,Wu:2007bv,Gorini:2007ta,Lu:2008zzb,Fabris:2010yh,Xu:2010zzb,Lu:2010zzj,Fabris:2011wk,Xu:2012ca,Wang:2013qy} (see also \cite{Reis:2003mw,Zimdahl:2005ir,Bilic:2006cp,Bertacca:2010ct} for a discussion of nonisentropic UDE models).
The effect of small scale nonlinearities has been recognized as having a strong potential impact on the large scale evolution of the Universe, in particular in the context of UDE scenarios \cite{Avelino:2003ig,Bilic:2003cv,Beca:2005gc,Avelino:2007tu,Avelino:2008cu,Roy:2009cj,DelPopolo:2013bpa}. However, it has been argued that, in the case of the Chaplygin gas, nonlinear effects would be too small to significantly affect the above linear results \cite{Bilic:2003cv} (see also \cite{Avelino:2003ig}). This conclusion relied on the assumption of a constant spectral index of scalar gaussian fluctuations as well as an EoS for the Chaplygin gas whose form remains unchanged both at large densities and small scales. These are very strong assumptions, given the relatively weak constraints on the scalar spectral index on nonlinear scales (wavenumbers $k \gtrsim 3 \, {\rm Mpc}^{-1}$) and the expectation that the isentropic perfect fluid approximation might break down at sufficiently large densities or small scales.
In this paper we relax these assumptions and model the effect of the small scale nonlinearities on the background evolution of the Universe using a single parameter $\epsilon$, representing the fraction of the UDE density which has become nonlinear due to the gravitational collapse into UDE halos. We show that, for $\epsilon$ close to unity, the linear theory results no longer hold and that the backreaction of the small scale structures on the large scale evolution of the Universe may render the Chaplygin gas model virtually indistinguishable from the $\Lambda$CDM scenario for all possible values of the GCG parameter $\alpha$.
In this paper we shall use units where $8 \pi G/3=1$.
\section{Homogeneous Unified Dark Energy Models\label{homog}}
In this section we shall consider a perfectly homogeneous and isotropic universe made up mainly of two components: a baryonic component of density $\rho_b$ and negligible pressure, and a UDE fluid component of density $\rho$ and pressure $p$. Energy-momentum conservation implies
\begin{equation}
(\ln \rho)'+3(1+w)=0\,, \label{enec}
\end{equation}
where $w=p/\rho$ is the EoS parameter of the UDE fluid and a prime represents a derivative with respect to $\ln a$. By setting $(\ln \rho)'=0$ one obtains $w=-1$, usually signaling a minimum density in a cosmological context. The complete solution to this equation can be written in the general form
\begin{equation}
\rho=\rho_0 \exp \left(3 \int_{\ln a}^{\ln a_0} (1+w(x)) dx\right)\,,
\label{rhoev}
\end{equation}
where the subscript $0$ refers to the present time $t_0$. For simplicity we shall take $a_0=1$, so that $\ln a_0=0$.
We shall consider the GCG, characterized by the EoS parameter
\begin{equation}
w \equiv \frac{p}{\rho}=-\frac{A}{\rho^{1+\alpha}}\,,
\end{equation}
where $A$ is a positive constant and $0 \le \alpha \le 1$, as a simple representative example of a family of UDE fluids; for $\alpha=0$ the GCG model is completely equivalent to $\Lambda$CDM. However, the best motivated model from the GCG family is characterized by $\alpha=1$, due to an interesting connection to string theory \cite{Ogawa:2000gj}.
The evolution of the (homogeneous) GCG density $\rho$ as a function of the scale factor $a$ is given by
\begin{equation}
\rho=\rho_0 \left[(1-{\bar A})a^{-3(1+\alpha)}+{\bar A}\right]^{\frac{1}{1+\alpha}}\,,
\end{equation}
where $\bar A=A/\rho_0^{1+\alpha}$ and
\begin{equation}
\rho_{\rm min}=\rho_0 {\bar A}^{\frac{1}{1+\alpha}}\,,
\end{equation}
is the minimum density, for which $w=-1$.
In a flat Universe the Hubble parameter $H \equiv {\dot a}/a$ and the deceleration parameter are respectively given by
\begin{equation}
H^2=\rho + \rho_b\,,
\end{equation}
and
\begin{equation}
q \equiv - (\ln H)' -1= \frac{1}{2} \left(\Omega\left(1+3w\right)+\Omega_b\right)\,,
\end{equation}
where a dot represents a derivative with respect to the physical time, $\Omega=\rho/\rho_c$, $\Omega_b=\rho_b/\rho_c$ and $\rho_c$ is the critical density (note that, in a flat universe, $\Omega+\Omega_b=1$).
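\noindent The expression for $q$ follows directly from the Raychaudhuri equation which, in the units adopted here ($8\pi G/3=1$), reads $\ddot a/a=-\left(\rho_{\rm tot}+3p_{\rm tot}\right)/2$; with $\rho_{\rm tot}=\rho+\rho_b$ and $p_{\rm tot}=w\rho$ (the baryons being pressureless), one obtains
\begin{equation}
q=-\frac{\ddot a \, a}{\dot a^2}=\frac{\rho\left(1+3w\right)+\rho_b}{2\left(\rho+\rho_b\right)}=\frac{1}{2}\left(\Omega\left(1+3w\right)+\Omega_b\right)\,.
\end{equation}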
The transition from a decelerating to an accelerating regime occurs when $q=0$. For $\Omega_b=0$ the transition would occur at the scale factor
\begin{equation}
a_{q=0}=\left(\frac{1-{\bar A}}{2{\bar A}}\right)^{\frac{1}{3(1+\alpha)}}\,.
\end{equation}
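\noindent This follows from setting $1+3w=0$, i.e. $\rho^{1+\alpha}=3A=3{\bar A}\,\rho_0^{1+\alpha}$, in the solution for $\rho(a)$:
\begin{equation}
(1-{\bar A})\,a^{-3(1+\alpha)}+{\bar A}=3{\bar A} \quad \Longrightarrow \quad a^{-3(1+\alpha)}=\frac{2{\bar A}}{1-{\bar A}}\,.
\end{equation}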
\section{Backreaction effects \label{inhomog}}
In this section we shall study the backreaction of small scale nonlinearities on the large scale evolution of the Universe in UDE scenarios, using the GCG as representative family of UDE models. In order to make the problem more tractable, we shall assume that the distribution of the GCG component in a large comoving volume $V$ of the Universe is essentially composed of two types of regions: i) collapsed regions `+' with a GCG characteristic density much greater than the average GCG density $\rho$ and consequently with a very small pressure and volume (which, for simplicity, we shall assume to be zero in the estimation of the EoS parameter of the GCG); ii) underdense regions `-' with densities smaller than $\rho$ which occupy most of the volume of the Universe.
The average fraction of the total GCG energy $E$ which is incorporated into collapsed objects (with total energy $E_+$) in a comoving region of the Universe of comoving volume $V$ will be parametrized by
\begin{equation}
\epsilon=E_+/E\,,
\end{equation}
which quantifies the level of small scale clustering.
The contribution of the collapsed regions to the average density of the Universe is then given by
\begin{equation}
\rho_+=E_+/V=\epsilon \rho\,.
\end{equation}
On the other hand, the average density in underdense regions is just
\begin{equation}
\rho_-=\frac{E_-}{V}=\frac{E-E_+}{V}=(1-\epsilon)\rho\,.
\end{equation}
Eq. (\ref{enec}) remains valid in the presence of small scale nonlinearities in the GCG component. However, in this case, the contribution to the pressure comes solely from the underdense regions so that
\begin{equation}
w=\frac{p_-}{\rho}=\frac{\rho_-}{\rho}\frac{p_-}{\rho_-}=(1-\epsilon)w_-\,,
\end{equation}
where $w_-=p_-/\rho_-$ can be identified as the effective DE EoS parameter.
\begin{figure}[t]
\centering
\includegraphics[width=9.0cm]{fig1}
\caption{Evolution of the parameter $\epsilon$ with the scale factor $a$ for four different models: $\epsilon_i=0$, $\alpha=1$ (dotted line), $\epsilon_i=0.5$, $\alpha=1$ (dot-dashed line), $\epsilon_i=0.9$, $\alpha=1$ (dashed line) and $\epsilon_i=[1+{\bar A}a_i^3/(1-{\bar A})]^{-1}$, $\alpha=0$ (solid line). Note that the solid line corresponds to the $\Lambda$CDM case. \label{fig-1}}
\end{figure}
Having added a new parameter $\epsilon$ characterizing the level of small scale clustering in UDE models, one also needs to specify its evolution. Here we shall consider a simple model where $E_+$ remains fixed. Given that $E \propto \rho a^3$ one has
\begin{equation}
\epsilon=\frac{\epsilon_0 \rho_0}{\rho a^3}\,.
\end{equation}
At early times $\rho \propto a^{-3}$ and $\epsilon$ becomes a constant ($\epsilon \to \epsilon_i$). The generalization to an arbitrary evolution of $E_+$ is straightforward (in more realistic models $E_+$ is expected to increase with $a$).
In this paper we shall assume that $\Omega_{b0}=0.0487$ and $H_0=67.3 \, {\rm km \, s^ {-1} \, Mpc^ {-1}}$, in agreement with the latest Planck results \cite{Ade:2013zuv}. We shall also use the estimate of the matter density parameter obtained by the Planck collaboration, $\Omega_{m0}=0.315$, to fix the baryonic and GCG fractional densities at early times (into the matter dominated era) to be, respectively, equal to
\begin{equation}
\Omega_{bi}=\frac{\Omega_{b0}}{\Omega_{m0}}=0.155\,, \qquad \Omega_{i}=1-\Omega_{bi}\,.
\end{equation}
By choosing these values for the cosmological parameters, we ensure that at recombination our GCG model is fully consistent with the Planck Cosmic Microwave Background (CMB) constraints (note that at early times the GCG behaves as CDM).
Fig. 1 shows the evolution of the parameter $\epsilon$ for four different models: $\epsilon_i=0$, $\alpha=1$ (dotted line), $\epsilon_i=0.5$, $\alpha=1$ (dot-dashed line), $\epsilon_i=0.9$, $\alpha=1$ (dashed line) and $\epsilon_i=[1+{\bar A}a_i^3/(1-{\bar A})]^{-1}$, $\alpha=0$ (solid line). Although in the latter case, with $\alpha=0$, the actual value of $\epsilon_i$ is not relevant for the evolution of the average GCG density $\rho$, since it corresponds to the $\Lambda$CDM limit of the GCG scenario, it has been chosen to account for a simple decomposition of the UDE fluid into matter and a cosmological constant. As previously mentioned, the present time is achieved when $\Omega_b=0.0487$, which may happen at different values of $\epsilon_i$ in different models (as expected, these differences completely vanish in the $\epsilon \to 1$ limit).
\begin{figure}
\centering
\includegraphics[width=9.0cm]{fig2}
\caption{Evolution of the effective DE EoS parameter $w_-$ with the scale factor $a$ for the models considered in Fig. 1. \label{fig-2}}
\end{figure}
The evolution of $\epsilon=E_+/E$ with the scale factor $a$ in Fig. 1 shows that $\epsilon$ tends to a constant value $\epsilon_i$ for $a \ll 1$, evolving rapidly towards zero for $a \gg 1$. This is the expected behavior since $E$ is roughly a constant during the matter dominated era and grows proportionally to $a^3$ in the DE dominated era. Thus, for fixed $E_+$, the asymptotic evolution of $\epsilon$ with the scale factor is just $\epsilon={\rm constant}$ into the matter dominated era and $\epsilon \propto a^{-3}$ into the DE dominated era.
Fig. 2 shows the evolution of the effective DE EoS parameter $w_-$ with $a$ for the models considered in Fig. 1. Except for the $\alpha=0$ case (the $\Lambda$CDM limit of the GCG), the value of $w_-$ smoothly interpolates from $w_-=0$ into the matter dominated era to $w_-=-1$ into the DE era, with the scale factor at the transition being controlled by the parameter $\epsilon_i$. As expected, Fig. 2 shows that as the amount of small scale clustering is increased (by increasing $\epsilon_i$) the transition from a cosmological constant to a CDM behaviour occurs at larger and larger redshifts.
Although the evolution of the DE EoS parameter $w_-$ cannot in general be found analytically, a simple fit to a numerical solution can nevertheless be found in the constant $E_+$ case studied in the present paper. The transition from $w_-=0$ to $w_-=-1$ occurs for $\rho_- \sim \rho_{\rm min}$. Taking into account that during the $w_-=0$ phase
\begin{equation}
\rho_-=(1-\epsilon)\rho \propto (1-\epsilon_i)a^{-3}\,,
\end{equation}
one finds that the scale factor $a_{\rm tr}$ at the transition between the two phases is roughly proportional to $(1-\epsilon_i)^{1/3}$. Taking into account that, if $\epsilon_i = 0$, the transition from $w_-=0$ to $w_-=-1$ occurs for a scale factor approximately equal to $[(1-{\bar A})/{\bar A}]^{1/(3(1+\alpha))}$ one finds that
\begin{equation}
a_{\rm tr} = (1-\epsilon_i)^{1/3} [(1-{\bar A})/{\bar A}]^{\frac{1}{3(1+\alpha)}}\,.
\end{equation}
Note that $a_{\rm tr} \to 0$ for $\epsilon_i \to 1$, as expected in the $\Lambda$CDM limit of the Chaplygin gas. A fit which takes the above scaling into account is given by
\begin{equation}
w_-=-\frac{{\bar A}}{Ba^{-3(1+\alpha)}+{\bar A}}\,,
\end{equation}
with $B=(1-{\bar A})(1-\epsilon_i)^{1+\alpha}$. For $\epsilon_i \sim 0$ one has $B \sim 1-{\bar A}$ and the background result is recovered. On the other hand, $B=0$ in the $\epsilon_i \to 1$ limit, so that $w_-=-1$ for all values of the scale factor $a$, as expected of a cosmological constant.
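As a simple consistency check of this fit, note that at the transition one has $B a_{\rm tr}^{-3(1+\alpha)}={\bar A}$, so that
\begin{equation*}
w_-(a_{\rm tr})=-\frac{{\bar A}}{{\bar A}+{\bar A}}=-\frac12\,,
\end{equation*}
i.e. $a_{\rm tr}$ marks the point halfway between the CDM ($w_-=0$) and cosmological constant ($w_-=-1$) regimes.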
\begin{figure}
\centering
\includegraphics[width=9.0cm]{fig3}
\caption{The lines show the correspondence between the values of $w$ and $\epsilon_i$, for standard quintessence and GCG models with the same angular diameter distance to the last scattering surface, and sound horizon as constrained by Planck. The assumed values of $\alpha$ of the GCG model are, from top to bottom, $\alpha=0,0.2,0.4,0.6,0.8,1$, respectively. \label{fig-3}}
\end{figure}
\section{Cosmological observations\label{angdis}}
In order to have agreement with the Planck results \cite{Ade:2013zuv}, one needs to ensure that the angular diameter distance to the last scattering surface
\begin{equation}
d_\theta(z_{\rm rec})=\frac{1}{1+z_{\rm rec}}\int_0^{z_{\rm rec}} \frac{dz}{H(z)}\,,
\end{equation}
and the sound horizon are compatible with the latest CMB observations. Here $z=1/a-1$ is the redshift and the subscript `rec' represents recombination. In the following, we shall find the standard quintessence model with constant $w$ (and $\Omega_{b0}=0.0487$, $\Omega_{m0}=0.315$ and $H_0=67.3 \, {\rm km \, s^{-1} \, Mpc^{-1}}$) which has the same distance to the last scattering surface as that obtained for the GCG model, parametrised by $\alpha$ and $\epsilon_i$, with the corresponding choices for the cosmological parameters.
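For concreteness, the expansion rate entering this comparison for the constant-$w$ quintessence model may be taken to have the standard flat-universe form,
\begin{equation*}
H^2(z)=H_0^2\left[\Omega_{m0}(1+z)^3+\left(1-\Omega_{m0}\right)(1+z)^{3(1+w)}\right],
\end{equation*}
supplemented by a radiation contribution, which becomes relevant close to recombination.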
The lines in Fig. 3 show the correspondence between the values of $w$ and $\epsilon_i$, for standard quintessence and GCG models with the same angular diameter distance to the last scattering surface. The values of $\alpha$ corresponding to the different lines are $\alpha=0,0.2,0.4,0.6,0.8,1$ (from top to bottom, respectively). It is clear that the $\Lambda$CDM limit is recovered both for $\alpha=0$ (irrespective of the value of $\epsilon_i$) and for $\epsilon_i \to 1$ (irrespective of the value of $\alpha$). This result implies that, independently of the value of $\alpha$, GCG models can be made compatible with current observational constraints as long as the level of nonlinear clustering is high enough. For $\epsilon > 0.9$, the corresponding value of $w$ is well within the latest observational uncertainties \cite{Ade:2013zuv} for all $0 \le \alpha \le 1$.
The values of $\alpha$ and $\epsilon$ may also be constrained using supernova data. The likelihood
\[ {\cal L} = {\rm e}^{-\chi^2/2} \]
is defined in terms of the chi-squared statistic
\begin{equation}
\chi^2 = (\vec{M} - \vec{D})^T C^{-1} (\vec{M} - \vec{D}),
\end{equation}
where $\vec{M}$ is the distance modulus
\begin{equation}
M_i = 5 \log_{10} \left[ (1+z_i) \int\limits_0^{z_i} \frac{ {\rm d} \tilde{z} } { H(\tilde{z}) } \right] +25,
\end{equation}
and $\vec{D}$ and $C$ are respectively the distance modulus and covariance matrix of
the Union2.1 data set \cite{Suzuki:2011hu}.
The 68\% and 95\% confidence regions are presented in Fig.~\ref{fsn}.
As seen, once the nonlinear clustering effects are included, i.e. when $\epsilon \to 1$,
the Chaplygin Gas ($\alpha =1$) is consistent with the supernova constraints.
Without taking the nonlinear clustering effects into account
($\epsilon = 0$), the Chaplygin Gas model would be ruled out.
In the next section we show that these nonlinear clustering effects also improve
the status of the Chaplygin model with regard to the growth of cosmic structures.
\begin{figure}
\begin{center}
\includegraphics[width=9.0cm]{fig4}
\caption{68\% and 95\% confidence regions based on the supernova observations.
The colours correspond to the likelihood (${\cal L}/{\cal L}_{max}$).}
\label{fsn}
\end{center}
\end{figure}
\section{Evolution of density perturbations\label{denper}}
Although a detailed study of the evolution of density perturbations is outside the scope of this paper, we shall attempt to describe its main features. At late times a large sound speed prevents the growth of density perturbations of the `-' component of the GCG. This happens, at different times on different scales, when the comoving sound horizon ($\sim c_{s-}/(Ha)$) multiplied by the comoving wavenumber ($k$) becomes of order unity. At early times, in the matter dominated era, $H \sim H_0 a^{-3/2}$ and $c_{s-}^2=- \alpha w_- \sim \alpha a^{3(\alpha+1)}/(1-\epsilon_i)^{1+\alpha}$. Hence, a fluctuation of wavenumber $k$ will stop growing roughly at the scale factor
\begin{equation}
a_k \sim \alpha^{-\frac{1}{4+3 \alpha}} (1-\epsilon_i)^{\frac{1+\alpha}{4+3 \alpha}} \left(\frac{k}{H_0}\right)^ {-\frac{2}{4+3 \alpha}}\,.
\end{equation}
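This estimate follows from the horizon-crossing condition: using the early-time scalings above,
\begin{equation*}
\frac{k\, c_{s-}}{Ha} \sim \frac{k}{H_0}\, \alpha^{1/2}\, (1-\epsilon_i)^{-\frac{1+\alpha}{2}}\, a^{\frac{4+3\alpha}{2}} \sim 1\,,
\end{equation*}
and solving for the scale factor reproduces the expression for $a_k$ given above.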
On the other hand, in the matter dominated era, the linear evolution of density perturbations of the `+' GCG component is described by the equation (see, for example, \cite{Beca:2003an})
\begin{equation}
\delta_+'' + (2+\zeta)\delta_+' - \frac32\left[\Omega_+ \delta_+ + (1+3 c_{s-}^ 2)\Omega_- \delta_- + \Omega_b \delta_b \right]=0\,, \label{denev}
\end{equation}
with $\zeta=H'/H=-3/2$. For the sake of simplicity, we shall assume that $\delta_+=\delta_b$ and absorb the baryons in the `+' component of the GCG, so that $\Omega_+ \sim \epsilon_i$ at early times.
If $c_{s-} \ll 1$ and $a \ll a_k$ one obtains the standard result for the linear growing mode of DM perturbations in the matter dominated era, which is $\delta_+ \propto \delta_- \propto a$. On the other hand, for $a \gg a_k$ the fluctuations in the `-' component of the GCG will be negligible ($\delta_- \sim 0$) and Eq. (\ref{denev}) becomes
\begin{equation}
\delta_+''+\frac12 \delta_+' -\frac32 \epsilon_i \delta_+=0\,.
\end{equation}
In this case, the growing mode solution is given by
\begin{equation}
\delta_+ \propto a^{1-\frac{3(1-\epsilon_i)}{5}}\,,
\end{equation}
for $1-\epsilon_i \ll 1$.
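Indeed, substituting $\delta_+\propto a^p$ in the above equation (with primes denoting derivatives with respect to $\ln a$, as implied by $\zeta=H'/H=-3/2$) gives
\begin{equation*}
p^2+\frac12\, p-\frac32\, \epsilon_i=0 \quad \Longrightarrow \quad p=\frac{-1+\sqrt{1+24\epsilon_i}}{4}\,,
\end{equation*}
which equals unity for $\epsilon_i=1$ and reduces to the exponent given above when $1-\epsilon_i \ll 1$.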
This results in a growth factor that is suppressed, with respect to the standard case, by the factor
\begin{equation}
f=a_k^{\frac{3(1-\epsilon_i)}{5}} \sim 1+\frac35\,(1-\epsilon_i)\ln a_k\,,
\end{equation}
where the approximation is valid if $|f-1| \ll1$. For $\epsilon_i > 0.9$ this condition is satisfied for $k \lsim 0.3 \, {\rm Mpc}^{-1}$ (and even on smaller nonlinear scales). This implies that the late time oscillations of the matter power spectrum which plague linear GCG models can be avoided if the level of nonlinear clustering is sufficiently high, thus rendering the model consistent with observations.
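As a rough numerical illustration (taking $c=1$, so that $H_0 \approx 2.2\times 10^{-4}\, {\rm Mpc}^{-1}$), for $\alpha=1$, $\epsilon_i=0.95$ and $k=0.3\, {\rm Mpc}^{-1}$ one finds $a_k \sim (1-\epsilon_i)^{2/7}(k/H_0)^{-2/7}\approx 0.05$, and hence $|f-1|\approx \frac35\,(1-\epsilon_i)\,|\ln a_k| \approx 0.09 \ll 1$.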
\section{Conclusions \label{conc}}
In this paper we have parametrised the effect of UDE nonlinear clustering on the dynamics of the Universe. We have shown that cosmological scenarios in which the DM and DE roles are played by a single UDE fluid may be reconciled with the latest observational results, provided there is a high level of nonlinear clustering of the UDE component. Although we have focused on the GCG as a concrete example, our main results are expected to hold in general for UDE models.
\begin{acknowledgments}
We thank Vasco Ferreira for useful comments. The work of P.P.A. was supported by Fundação para a Ciência e a Tecnologia (FCT) through the Investigador FCT contract No. IF/00863/2012 and POPH/FSE (EC) by FEDER funding through the program Programa Operacional de Factores de Competitividade - COMPETE, and by Grant No. PTDC/FIS/111725/2009 (FCT). G.F.L. acknowledges support through an Australian Research Council Discovery Project (DP130100117).
\end{acknowledgments}
\section*{Introduction} %
\label{sec:introduction} The first instance of a Rota-Baxter operator appeared in the context of associative algebras in 1960, in a paper by Baxter \cite{Baxter}, as a tool to study fluctuation theory in probability.
Since then, these operators have been widely used in many branches of mathematics and mathematical physics.
Almost forty years later, Kupershmidt \cite{K} introduced
$\O$-operators on Lie algebras as a kind of generalization of classical $r$-matrices, thus opening the way to broad applications of $\O$-operators in integrable systems. Given a Lie algebra $(E,[\cdot,\cdot])$ and a representation $\Phi$ of $E$ on a vector space $V$, an $\O$-operator on $E$ with respect to $\Phi$ is a linear map $T:V \to E$ such that
$[T(x),T(y)]=T\big(\Phi(T(x))(y)- \Phi(T(y))(x)\big)$.
When $\Phi$ is the adjoint representation of $E$, $T$ is a Rota-Baxter operator (of weight zero). $\O$-operators are also called relative Rota-Baxter operators or generalized Rota-Baxter operators.
In recent years Rota-Baxter and $\O$-operators, in different algebraic and geometric settings, have attracted great interest from the mathematical and physical communities.
In \cite{TBGS2019}, a homotopy version of $\O$-operators on symmetric graded Lie algebras was introduced. This was the first step towards the definition of an $\O$-operator on a Lie $\infty$-algebra with respect to a representation on a graded vector space that was given in \cite{LST2021}.
The current paper also deals with $\O$-operators on Lie $\infty$-algebras, but with a different approach which uses Lie $\infty$-actions instead of representations of Lie $\infty$-algebras. Our definition is therefore different from the one given in \cite{LST2021} but there is a relationship between them.
There are two equivalent definitions of a Lie $\infty$-algebra structure on a graded vector space $E$, both given by collections of $n$-ary brackets which are either symmetric or skew-symmetric, depending on the definition we are considering, and must satisfy a set of generalized Jacobi identities. One goes from one to the other by shifting the degree of $E$ and applying a \emph{d\'ecalage} isomorphism. We use the definition in its symmetric version, where the brackets have degree $+1$. Equivalently, this structure can be defined by a degree $+1$ coderivation $M_E$ of $\bar{S}(E)$, the reduced symmetric algebra of $E$,
such that the commutator $[M_E,M_E]_{c}$ vanishes.
Representations of Lie $\infty$-algebras on graded vector spaces were introduced in \cite{LM}. In \cite{LST2021}, the authors consider a representation $\Phi$ of a Lie $\infty$-algebra $E$ on a graded vector space $V$ and define an $\O$-operator (homotopy relative Rota-Baxter operator) on $E$ with respect to $\Phi$ as a degree zero element $T$ of $\textrm{Hom}(\bar{S}(V), E)$
satisfying a family of suitable identities. Inspired by the notion of an action of a Lie $\infty$-algebra on a graded manifold \cite{MZ}, we define an action of a Lie $\infty$-algebra $(E,M_E)$ on a Lie $\infty$-algebra $(V,M_V)$ as a Lie $\infty$-morphism $\Phi$ between $E$ and $\Coder(\bar{S}(V))[1]$, the symmetric DGLA of coderivations of $\bar{S}(V)$. An $\O$-operator on $E$ with respect to the action $\Phi$ is a comorphism between $\bar{S}(V)$ and $\bar{S}(E)$ that intertwines the coderivation $M_E$ and a degree $+ 1$ coderivation of $\bar{S}(V)$ built from $M_V$ and $\Phi$, which turns out to be a Lie $\infty$-algebra structure on $V$ too.
As we said before, the two $\O$-operator definitions, ours and the one in \cite{LST2021}, are different. However, since there is a close connection between Lie $\infty$-actions and representations of Lie $\infty$-algebras, the two definitions can be related.
On the one hand, any representation of $(E,M_E)$ on a complex $(V,\d )$ can be seen as a Lie $\infty$-action of $(E,M_E)$ on $(V,D )$, with $D$ the coderivation given by the differential $\d$, and for this very ``simple'' Lie $\infty$-algebra structure on $V$ our $\O$-operator definition recovers the one given in \cite{LST2021}. On the other hand, any action $\Phi$ of $(E,M_E)$ on $(V,M_V)$ yields a representation $\rho$ on the graded vector space $\bar{S}(V)$ and an $\O$-operator with respect to the action $\Phi$ is not the same as an $\O$-operator with respect to the representation $\rho$. However, there is a way to relate the two concepts.
A well-known construction of Voronov \cite{V05} defines a Lie $\infty$-algebra structure on an abelian Lie subalgebra $\mathfrak{h}$ of $\Coder(\bar{S}(E\oplus V))$ and we show that $\O$-operators with respect to the action $\Phi$ are Maurer-Cartan elements of $\mathfrak{h}$.
In general, deformations of structures and morphisms are governed by DGLA's or, more generally, by Lie $\infty$-algebras. We do not intend to deeply study deformations of $\O$-operators on Lie $\infty$-algebras with respect to Lie $\infty$-actions. Still, we prove that deformations of an $\O$-operator are controlled by the twisting of a Lie $\infty$-algebra, constructed out of a graded Lie subalgebra of $\Coder(\bar{S}(E\oplus V))$.
The paper is organized in four sections. In Section \ref{section1} we collect some basic results on graded vector spaces, graded symmetric algebras and Lie $\infty$-algebras that will be needed along the paper. In Section~\ref{section2}, after recalling the definition of a representation of a Lie $\infty$-algebra on a complex $(V,\d)$ \cite{LM}, we introduce the notion of action of a Lie $\infty$-algebra on another Lie $\infty$-algebra (Lie $\infty$-action) and we prove that a Lie $\infty$-action of $E$ on $V$ induces a Lie $\infty$-algebra structure on $E\oplus V$. We pay special attention to the adjoint action of a Lie $\infty$-algebra. In Section \ref{section3} we introduce the {main notion of the paper --} $\O$-operator on a Lie $\infty$-algebra $E$ with respect to an action of $E$ on another Lie $\infty$-algebra, and we give the explicit relation between these operators and $\O$-operators on $E$ with respect to a representation on a graded vector space introduced in \cite{LST2021}.
Given an $\O$-operator $T$ on $E$ with respect to a Lie $\infty$-action $\Phi$ on $V$, we show that $V$ inherits a new Lie $\infty$-algebra structure given by a degree $+ 1$ coderivation which is the sum of the initial one on $V$ with a degree $+ 1$ coderivation obtained out of $\Phi$ and $T$.
We prove that symmetric and invertible comorphisms $T:\bar{S}(E^*) \to \bar{S}(E)$ are $\O$-operators with respect to the coadjoint action if and only if a certain element of $\bar{S}(E^*)$, which is defined using the inverse of $T$, is a cocycle for the Lie $\infty$-algebra cohomology of $E$. Section \ref{section3} ends with the characterization of $\O$-operators as Maurer-Cartan elements of a Lie $\infty$-algebra obtained by Voronov's higher derived brackets construction. {The main result in} Section \ref{section4} shows that Maurer-Cartan elements of a {graded Lie subalgebra of} $\Coder(\bar{S}(E\oplus V))$ encode a Lie $\infty$-algebra on $E$ and an action of $E$ on $V$. Moreover, we obtain the Lie $\infty$-algebra that controls the deformation of $\O$-operators with respect to a fixed action.
\section{Lie \texorpdfstring{${{\infty}}$}--algebras} \label{section1}
We begin by reviewing some concepts about graded vector spaces, graded symmetric algebras and Lie $\infty$-algebras.
\subsection{ Graded vector spaces and graded symmetric algebras}
We will work with $\mathbb Z$-graded vector spaces with finite dimension over a field $\mathbb K=\mathbb R$ or $\mathbb C$.
Let $E=\oplus_{i\in\mathbb Z}E_i$ be a finite dimensional graded vector space. We call $E_i$ the homogeneous component of $E$ of degree $i$. An element $x$ of $E_i$ is said to be homogeneous with degree $|x|=i$.
For each $k\in\mathbb Z$, one may shift all the degrees by $k$ and obtain a new grading on $E$. This new graded vector space is denoted by $E[k]$ and is defined by $E[k]_i=E_{i+k}$.
A morphism $\Phi:E\to V$ between two graded vector spaces is a degree preserving linear map, i.e. a collection of linear maps $\Phi_i:E_i\to V_i$, $i\in\mathbb Z$. We call $\Phi:E\to V$ a (homogeneous) morphism of degree $k$, for some $k\in\mathbb Z$, and we write $|\Phi|=k$, if it is a morphism between $E$ and $V[k]$.
{This way we have a natural grading in the vector space of linear maps between graded vector spaces:
$$
{\mathrm{Hom}}(E,V)=\oplus_{i\in \mathbb Z}{\mathrm{Hom}}_i(E,V).
$$
In particular, ${\mathrm{Hom}}(E,E)=\End(E)=\oplus_{i\in \mathbb Z}\End_i(E) $.}
The dual $E^*$ of $E$ is naturally a graded vector space whose component of degree $i$ is, for all $ i \in {\mathbb Z}$, the dual $(E_{-i})^* $ of $E_{-i}$. In equation: $(E^*)_i = (E_{-i})^*$.
Given two graded vector spaces $E$ and $V$, their direct sum $E\oplus V$ is a vector space with grading
$$
(E\oplus V)_i= E_i\oplus V_i
$$
and their usual tensor product comes equipped with the grading $$(E\otimes V)_i= \oplus_{j+k=i} \, E_j\otimes V_k.$$
We will adopt the Koszul sign convention, for homogeneous {linear maps} $f:E\to V$ and $g:F\to W$ the tensor product $f\otimes g:E\otimes F\to V\otimes W$ is the morphism of degree $|f|+|g|$ given by
\begin{equation*}
(f\otimes g)(x\otimes y)=(-1)^{|x||g|} f(x)\otimes g(y),
\end{equation*}
for all homogeneous $x\in E$ and $y\in F$.
For each $k\in\mathbb N_0$, let $T^k(E)=\otimes^k E$, with $T^0(E)=\mathbb K$, and let $T(E)=\oplus_{k} T^k(E)$ be the tensor algebra over $E$. The {\textbf{graded symmetric algebra over $E$}} is the quotient
$$
S(E)=T(E)/\eval{x\otimes y- (-1)^{|x||y|}y\otimes x}.
$$
The symmetric algebra $S(E)= \oplus_{k\geq0}S^k(E)$ is a graded commutative algebra, whose product we denote by $ \odot $. { For $x=x_1 \odot \ldots \odot x_k \in S^k(E)$, we {set} $|x|= \sum_{i=1}^k |x_i|$.}
For $n\geq 1$, let $S_n$ be the permutation group of order $n$. For any homogeneous elements $x_1,\ldots,x_n\in E$ and $\sigma \in S_n$, the {Koszul sign} is the element in $\{-1,1\}$ defined by
$$
x_{\sigma(1)} \odot \ldots \odot x_{\sigma(n)}=\epsilon(\sigma) \, x_1 \odot \ldots \odot x_n.
$$
{As usual, writing $\epsilon(\sigma)$ is an abuse of notation because the Koszul sign also depends on the $x_i$.}
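For instance, for the transposition $\sigma=(1\,2)\in S_2$ one has $x_2\odot x_1=(-1)^{|x_1||x_2|}\, x_1\odot x_2$, so that $\epsilon(\sigma)=(-1)^{|x_1||x_2|}$.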
An element $\sigma$ of $S_{n}$ is called an $(i,n-i)$-unshuffle if $\sigma(1)<\ldots< \sigma(i)$ and $\sigma(i+1)<\ldots < \sigma(n)$.
The set of $(i,n-i)$-unshuffles is denoted by $Sh(i,n-i)$. { Similarly, $Sh(k_1,\ldots, k_j)$ is the set of $(k_1,\ldots, k_j)$-unshuffles, i.e., elements of $S_n$ with $k_1+\ldots+ k_j=n$ such that the order is preserved within each block of length $k_i$, $1\leq i\leq j$.}
The reduced
symmetric algebra $\bar S(E)=\oplus_{k\geq 1}S^k(E)$ has a natural { coassociative and } cocommutative coalgebra structure given by the coproduct $\Delta:\bar S(E)\to \bar S(E)\otimes \bar S(E)$,
\begin{equation*}
\Delta(x)=0, \,\, x \in E;
\end{equation*}
\begin{equation*}\label{def:coproduct:S(E)}
\Delta(x_1\odot\ldots \odot x_n)=\sum_{i=1}^{n-1}\sum_{\sigma\in Sh(i,n-i)}\epsilon(\sigma) \, \left(x_{\sigma(1)}\odot\ldots \odot x_{\sigma(i)}\right)\otimes \left(x_{\sigma(i+1)}\odot\ldots\odot x_{\sigma(n)}\right),
\end{equation*}
for $x_1,\dots,x_n\in E$.
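For instance, for $n=2$ one simply has $\Delta(x_1\odot x_2)=x_1\otimes x_2 + (-1)^{|x_1||x_2|}\, x_2\otimes x_1$.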
We will mainly use Sweedler notation: given $x \in \bar S(E)$, $$\Delta^{(1)}(x)=\Delta(x)=x_{(1)}\otimes x_{(2)},$$
and {the coassociativity yields} $$\Delta^{(n)}(x)=(\mathrm{id}\otimes\Delta^{(n-1)})\Delta(x)=x_{(1)}\otimes \ldots\otimes x_{(n+1)},\quad n\geq 2.$$
Notice that
$$\Delta^{(n)}(x)=0, \,\,\,\, x \in S^{\leq n}(E).$$
{The cocommutativity of the coproduct is expressed, for homogeneous elements of $\bar S(E)$, as
$$x_{(1)}\otimes x_{(2)}=(-1)^{|x_{(1)}||x_{(2)}|}x_{(2)}\otimes x_{(1)}.$$
}
Let $V$ be another graded vector space. A
linear map $f:\bar S(E)\to V$ is given by a collection of maps $f_k:S^k(E)\to V$, $k\geq 1$, and is usually denoted by $f=\sum_kf_k$.
\begin{rem}
Every linear map ${f}:S^k(E)\to V$ corresponds to a graded symmetric $k$-linear map $f \in {\mathrm{Hom}}(\otimes^k E,V)$ through the quotient map $p_k:\otimes^k E \to S^k(E)$ i.e., $f\equiv{f} \smalcirc p_k$. In the sequel, we shall often write $${f}(x_1 \odot \ldots \odot x_k)=f(x_1, \ldots, x_k), \quad x_i \in E.$$
\end{rem}
A {\bf coalgebra morphism} {(or {\bf comorphism}) between the coalgebras $(\bar S(E), \Delta_E)$ and $(\bar S(V), \Delta_V)$ is a morphism $F:\bar S(E) \to \bar S(V)$ of graded vector spaces} such that
$$
(F\otimes F)\smalcirc \Delta_E=\Delta_V\smalcirc F.
$$
There is a one-to-one correspondence between coalgebra morphisms $F:\bar S(E)\to \bar S(V)$ and {degree preserving} linear maps $f:\bar S(E)\to V$. {Each $f$ determines $F$ by
\begin{equation*}
F(x)=\sum_{k\geq1}\frac{1}{k!}f(x_{(1)})\odot\ldots\odot f(x_{(k)}), \, x \in \bar S(E),
\end{equation*}
and $f=p_V \smalcirc F$, with $p_V:\bar S(V)\to V$ the projection map.}
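In low orders, this correspondence reads $F(x)=f_1(x)$ and $F(x_1\odot x_2)=f_2(x_1,x_2)+f_1(x_1)\odot f_1(x_2)$, for $x,x_1,x_2\in E$.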
A {degree $k$} {\bf coderivation} of $\bar S(E)$, for some $k \in \mathbb Z$, is a linear map $Q:\bar S(E)\to \bar S(E)$ of degree $k$ such that
$$
\Delta\smalcirc Q=(Q\otimes \mathrm{id} + \mathrm{id}\otimes Q)\smalcirc\Delta.
$$
We also have a one-to-one correspondence between coderivations of $\bar S(E)$ and linear maps $q=\sum_{i}q_i:\bar S(E)\to E$:
\begin{prop} \label{prop:isomorphism:families:coderivations}
Let $E$ be a graded vector space and $p_E:\bar S(E)\to E$ the projection map.
For every linear map $q=\sum_iq_i: \bar S(E)\to E$, the linear map
$\displaystyle Q: \bar S(E)\to \bar S(E )$
given by
\begin{equation} \label{eq:coderivation}
Q(x_1\odot \ldots \odot x_n)=\sum_{i=1}^n \, \sum_{\sigma \in Sh(i,n-i)}\epsilon(\sigma)q_i\left(x_{\sigma(1)}, \ldots, x_{\sigma(i)}\right)\odot x_{\sigma(i+1)}\odot \ldots \odot x_{\sigma(n)},
\end{equation}
is the unique coderivation of $\bar S(E )$ such that $p_E\smalcirc Q =q$.
\end{prop}
{In Sweedler notation, Equation \eqref{eq:coderivation} is written as:
$$Q(x)= q(x_{(1)}) \odot x_{(2)} + q(x), \,\,\, x \in \bar S(E ).$$
}
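For instance, for $x_1\odot x_2\in S^2(E)$, Equation \eqref{eq:coderivation} gives
$$Q(x_1\odot x_2)=q_2(x_1,x_2)+q_1(x_1)\odot x_2+(-1)^{|x_1||x_2|}\, q_1(x_2)\odot x_1.$$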
When $E$ is a finite dimensional graded vector space, we may identify $S(E^*)$ with $(SE)^*$. Koszul sign conventions yield, for each homogeneous elements $f,g\in E^*$,
\begin{equation*}
(f\odot g)(x\odot y)=(-1)^{|x||g|}f(x)\, g(y) + f(y)\, g(x), \quad x,y\in E.
\end{equation*}
\subsection{Lie \texorpdfstring{$\infty$}--algebras}
We briefly recall the definition of Lie $\infty$-algebra \cite{LS}, some basic examples and related concepts.
We will consider the symmetric approach to Lie $\infty$-algebras.
\begin{defn}
A \textbf{symmetric Lie $\infty$-algebra} (or a \textbf{Lie$[1]$ $\infty$-algebra}) is a graded vector space $E=\oplus_{i\in\mathbb Z} E_i$ together with a family of degree $+1$ linear maps $l_k: S^k(E)\to E$, $k\geq 1$, satisfying
\begin{equation}\label{eq:def:symm:L:infty:algebra}
\sum_{i+j=n+1}\sum_{\sigma\in Sh(i,j-1)}\epsilon(\sigma) \, l_j\left(l_i\left(x_{\sigma(1)},\ldots, x_{\sigma(i)}\right),x_{\sigma(i+1)}\ldots, x_{\sigma(n)}\right)=0,
\end{equation}
for all $n\in\mathbb N$ and all homogeneous elements $x_1,\ldots,x_n\in E$.
\end{defn}
The \emph{d\'ecalage} isomorphism \cite{V05}
establishes a one to one correspondence between skew-symmetric Lie $\infty$-algebra structures $\set{l'_k}_{k \in \mathbb N}$ on $E$ and symmetric Lie $\infty$-algebra structures $\set{l_k}_{k \in \mathbb N}$ on $E[1]$:
{$$l_k(x_1, \ldots, x_k)= (-1)^{(k-1)|x_1|+(k-2)|x_2|+ \ldots + |x_{k-1}|}l'_k(x_1, \ldots, x_k).$$}
{In the sequel, we frequently write Lie $\infty$-algebra, omitting the term symmetric.}
\begin{ex}[Symmetric graded Lie algebra]
A symmetric graded Lie algebra is a symmetric Lie $\infty$-algebra $E=\oplus_{i\in\mathbb Z}E_i$ such that $l_n = 0$ for $n \neq 2$. Then the degree $0$ bilinear map on $E[-1]$ defined by
\begin{equation}
\label{eq:decalage}
[\![x,y]\!] := (-1)^{i} l_2(x,y), \hbox{ for all $x \in E_i,y\in E_j $,}
\end{equation}
is a graded Lie bracket.
In particular, if $E=E_{-1}$ is concentrated on degree $-1$, we get a Lie algebra structure.
\end{ex}
\begin{ex}[Symmetric DGLA algebra]
A symmetric differential graded Lie algebra (DGLA) is a symmetric Lie $\infty$-algebra $E=\oplus_{i\in\mathbb Z}E_i$ such that $l_n=0$ for $n \neq 1$ and $n \neq 2$.
Then, from \eqref{eq:def:symm:L:infty:algebra}, we have that $d:=l_1$ is a degree $+1$ linear map $d:E\to E$ squaring to zero and satisfying the following compatibility condition with the bracket $\brr{\cdot , \cdot }:=l_2(\cdot , \cdot)$:
\begin{equation*}
\label{eq:gradedLie}
\left\{\begin{array}{l}
d\brr{x,y} + \brr{d(x),y} + (-1)^{|x|}\brr{x, d(y)}=0,\\
\brr{\brr{x,y},z} + (-1)^{|y||z|}\brr{\brr{x,z},y} + (-1)^{|x|}\brr{x,\brr{y,z}}=0,
\end{array}\right.
\end{equation*}
{Applying the \emph{d\'ecalage} isomorphism, $(E[-1], d, [\![\cdot, \cdot]\!])$ is a (skew-symmetric) DGLA, with $[\![\cdot, \cdot]\!]$ given by \eqref{eq:decalage}.}
\end{ex}
\begin{ex} \label{example:DGLA_Coder}
\label{ex:DGLA:End:E}
Let $(E=\oplus_{i\in\mathbb Z}E_i, \d)$ be a cochain complex.
Then ${\End (E)[1]}$
has a natural symmetric DGLA structure with $l_1=\partial, \;\;l_2=\brr{\cdot , \cdot}$ given by:
\begin{equation*}
\left\{\begin{array}{l}
\partial\phi=-\d \smalcirc \phi + (-1)^{|\phi|+1} \phi\smalcirc \d,\\
\brr{\phi,\psi}=(-1)^{|\phi|+1}\left(\phi\smalcirc \psi - (-1)^{(|\phi|+1)(|\psi|+1)} \psi\smalcirc\phi \right),
\end{array}\right.
\end{equation*}
for $\phi, \psi$ homogeneous elements of $\End(E)[1]$. {In other words, $ \partial\phi=- [ \d, \phi]_{c}$ and $\brr{\phi,\psi}=(-1)^{\deg(\phi)}[ \phi, \psi]_{c}$, with $[\cdot, \cdot]_{c}$ the graded commutator on $\End(E)$ and $\deg(\phi)$ the degree of $\phi$ in $\End(E)$.}
\end{ex}
The symmetric Lie bracket $\brr{\cdot ,\cdot}$ on $\End(\bar S(E))[1]$ preserves $\Coder(\bar S(E))[1]$, the space of coderivations of $\bar S(E)$, {so that $(\Coder(\bar S(E))[1], \partial, \brr{\cdot ,\cdot})$ is a symmetric DGLA.}
The isomorphism between ${\mathrm{Hom}}(\bar S(E),E)$ and $\Coder(\bar S(E))$ given by Proposition \ref{prop:isomorphism:families:coderivations}
induces a Lie bracket on ${\mathrm{Hom}}(\bar S(E),E)$, known as the Richardson-Nijenhuis bracket:
$$
\brr{f,g}_{_{RN}}(x)=f(G(x))-(-1)^{|f||g|}g(F(x)),\quad x\in \bar S(E),
$$
for each $f,g\in {\mathrm{Hom}}(\bar S(E),E)$, where $F$ and $G$ denote the coderivations defined by $f$ and $g$, respectively. {In other words, $[F,G]_{c}$ is the (unique) coderivation of $\bar S(E)$ determined by $\brr{f,g}_{_{RN}} \in {\mathrm{Hom}}(\bar S(E),E)$.
Degree $+1$ elements $l:=\sum_k l_k$ of ${\mathrm{Hom}}(\bar S(E),E)$ satisfying $[l,l]_{_{RN}}=0$ define a Lie $\infty$-algebra structure on $E$.} This way we have an alternative definition of Lie $\infty$-algebra \cite{LS}:
\begin{prop}
A Lie $\infty$-algebra is a graded vector space $E$ equipped with a degree $+1$ coderivation $M_E$ of $\bar S(E)$ such that \begin{equation*}\brr{M_E,M_E}_{c}=2M_E^2=0.\end{equation*}
\end{prop}
The dual of the coderivation $M_E$ yields a differential $\d_*$ on $\bar{S}(E^*)$.
The \textbf{cohomology of the Lie $\infty$-algebra} $\left( E, M_E \equiv \set{l_k}_{k\in\mathbb N}
\right)$ is the cohomology defined by the differential $\d_*$.
A {\bf Maurer-Cartan element} of a Lie $\infty$-algebra $(E,\set{l_k}_{k\in\mathbb N})$ is a degree zero element $z$ of $E$ such that
\begin{equation} \label{def:MC:element}
\sum_{k \geq 1}\frac{1}{k!}\, l_{k}(z, \ldots,z) =0.
\end{equation}
The set of Maurer-Cartan elements of $E$ is denoted by $\textrm{MC}(E)$.
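For instance, when $(E,l_1,l_2)$ is a symmetric DGLA, Equation \eqref{def:MC:element} reduces to the familiar Maurer-Cartan equation $l_1(z)+\frac12\, l_2(z,z)=0$.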
Let $z$ be a Maurer-Cartan element of $(E,\set{l_k}_{k\in\mathbb N})$ and set, for $k\geq 1$,
\begin{equation} \label{def:twisting:MC}
l_k^z(x_1, \ldots, x_k):= \sum_{i\geq 0}\frac1{i!}\, l_{k+i}(z, \ldots, z, x_1, \ldots, x_k).
\end{equation}
Then, $(E,\set{l_k^z}_{k\in\mathbb N})$ is a Lie $\infty$-algebra, called the \emph{twisting of} $E$ by $z$ \cite{G}.
For filtered, or even weakly filtered Lie $\infty$-algebras, the convergence of the infinite sums defining Maurer-Cartan elements and twisted Lie $\infty$-algebras (Equations (\ref{def:MC:element}) and (\ref{def:twisting:MC})) is guaranteed (see \cite{G,FZ,LST2021}).
For a symmetric graded Lie algebra $(E,l_2)$, the twisting by $z \in\textrm{MC}(E)$ is the symmetric DGLA $(E, l_1^z=l_2(z, \cdot),l_2^z=l_2)$.
\subsection{Lie \texorpdfstring{$\infty$}--morphisms}
A morphism of Lie $\infty$-algebras is a morphism between symmetric coalgebras that is compatible with the Lie $\infty$-structures.
\begin{defn}\label{defn:Lie:infty:morphism}
Let $(E,\set{l_k}_{k\in\mathbb N})$ and $(V,\set{m_k}_{k\in\mathbb N})$ be Lie $\infty$-algebras. A \textbf{Lie ${\infty}$-morphism } $\displaystyle \Phi:E \rightarrow V$
is given by a collection of degree zero linear maps:
$$
\Phi_k:S^k(E)\to V,\quad k\geq 1,
$$
such that, for each $n\geq 1$,
\begin{align}\label{eq:def:Lie:infty:morphism}
&\sum_{\begin{array}{c} \scriptstyle{k+l=n}\\ \scriptstyle{\sigma\in Sh(k,l)}\\ \scriptstyle{l\geq 0, \, k\geq 1}\end{array}}\!\!\!\!\!\!\! \varepsilon(\sigma) \Phi_{1+l}\Big(l_k(x_{\sigma(1)},\ldots, x_{\sigma(k)}), x_{\sigma(k+1)}, \ldots, x_{\sigma(n)}\Big) \\
&=\!\!\!\!\!\!\!\!\!\!\! \sum_{\begin{array}{c}\scriptstyle{k_1+\ldots+ k_j=n} \\ \scriptstyle{\sigma\in Sh(k_1,\ldots, k_j)}\end{array}} \!\!\!\!\!\!\! \frac{\varepsilon(\sigma)}{j!}\,m_j\Big(\Phi_{k_1}(x_{\sigma(1)}, \ldots, x_{\sigma(k_1)}),
\Phi_{k_2}(x_{\sigma(k_1+1)}, \ldots, x_{\sigma(k_1+k_2)}),\ldots,\nonumber\\
&\hspace{6cm} \Phi_{k_j}(x_{\sigma(k_1+\ldots+k_{j-1}+1)}, \ldots, x_{\sigma(n)})\Big).\nonumber
\end{align}
\end{defn}
\noindent{If $\Phi_k=0$ for $k\neq 1$, then $\Phi$ is called a \textbf{{strict} Lie ${\infty}$-morphism.}}
A \textbf{curved Lie $\infty$-morphism} $E\to V$, with $V$ a weakly filtered Lie $\infty$-algebra, is a degree zero linear map $\Phi:S(E)\to V$ satisfying, for $n\geq 0,$ an adapted version of \eqref{eq:def:Lie:infty:morphism} where the indices $k_1,\ldots, k_j$ on the right hand side of the equation run from $0$ to $n$. The zero component $\Phi_0:\mathbb R\to V_0$ gives rise to an element $\Phi_0(1)\in V_0$, which by abuse of notation we denote by $\Phi_0$. The curved adaptation of \eqref{eq:def:Lie:infty:morphism}, for $n=0$, then reads $0=\sum_{j\geq 1}\frac{1}{j!}\,m_j(\Phi_0,\ldots,\Phi_0)$. In other words, $\Phi_0$ is a Maurer-Cartan element of $V$ \cite{MZ}.
\
Considering the coalgebra morphism $\Phi: \bar S(E)\to \bar S(V)$ defined by
the collection of degree zero linear maps $$
\Phi_k:S^k(E)\to V,\quad k\geq 1,
$$
we see that Equation \eqref{eq:def:Lie:infty:morphism} is equivalent to
$\Phi$ preserving the Lie $\infty$-algebra structures:
$$\Phi \smalcirc M_E=M_V\smalcirc \Phi.$$
\section{Representations of Lie \texorpdfstring{$\infty$}--algebras} \label{section2}
A complex $(V,\d)$ induces a natural symmetric DGLA structure in $\End(V)[1]$, see Example \ref{ex:DGLA:End:E}.
\begin{defn}
A \textbf{representation} of a Lie $\infty$-algebra $(E,\set{l_k}_{k\in\mathbb N})$ on a complex $(V,\d)$ is a Lie $\infty$-morphism $$\Phi:(E,\set{ l_k}_{k\in\mathbb N})\rightarrow (\End(V)[1],\partial, \brr{\cdot , \cdot}),$$
{i.e., $\Phi \smalcirc M_E= M_{\End(V)[1]}\smalcirc \Phi$, where $M_E$ is the coderivation determined by $\sum_k l_k$ and $ M_{\End(V)[1]}$ is the coderivation determined by $\partial+ \brr{\cdot , \cdot}$.}
\end{defn}
Equivalently, a representation of $E$ is defined by a collection of degree $+1$ maps
$$\Phi_k:S^k(E)\to \End(V),\quad k\geq 1,$$
such that, for each $n\geq 1$,
\begin{align} \label{eq:def:representation}
\lefteqn{\sum_{\begin{array}{c} \scriptstyle{i=1}\\ \scriptstyle{\sigma\in Sh(i,n-i)}\end{array}}^{\scriptstyle{n}}\!\!\!\!\!\!\!\!\!\! \varepsilon(\sigma)\Phi_{n-i+1}\left(l_i\left(x_{\sigma(1)}, \ldots, {x_{\sigma(i)}}\right), {x_{\sigma(i+1)}}, \ldots, {x_{\sigma(n)}}\right)=} \\
&= & \partial \Phi_n(x_1,\ldots, x_n)+ \frac{1}{2} \!\!\!\!\!\!\!\!\!\!\sum_{\begin{array}{c} \scriptstyle{j=1}\\ \scriptstyle{\sigma\in Sh(j,n-j)}\end{array}}^{\scriptstyle{n-1}} \!\!\!\!\!\!\!\!\!\!\varepsilon(\sigma)\brr{\Phi_j({x_{\sigma(1)}}, \ldots, {x_{\sigma(j)}}) , \Phi_{n-j}({x_{\sigma(j+1)}}, \ldots, {x_{\sigma(n)}}) }. \nonumber
\end{align}
\begin{rem}
A representation on a complex $(V,\d)$ can be seen as a curved Lie $\infty$-morphism $\Phi:E \to \End (V)[1]$, with $\Phi=\sum_{k\geq 0}\Phi_k$ and $\Phi_0=\d$. In fact, the first term on the right-hand side of Equations \eqref{eq:def:representation} is given by
$$\partial \Phi_n(x_1,\ldots, x_n)=\brr{\Phi_0, \Phi_n(x_1,\ldots, x_n)},$$
and we have a curved Lie $\infty$-morphism $$\Phi:(E,\set{ l_k}_{k\in\mathbb N})\rightarrow (\End(V)[1],\brr{\cdot, \cdot})$$
between the Lie $\infty$-algebra $E$ and the symmetric graded Lie algebra $(\End(V)[1],\brr{\cdot, \cdot})$ (see \cite{MZ}, Lemma 2.5).
{This is why sometimes a representation of a Lie $\infty$-algebra $E$ on a complex $(V,\d)$ is called a representation on the graded vector space $V$ (compatible with the differential $\d$ of $V$). }
\end{rem}
Any representation {$\Phi:E\to \End(V)[1]$} of a Lie $\infty$-algebra $E$ on a complex $(V,\d)$ has a dual one.
Let $$
{}^*:\End(V)\to \End(V^*)
$$
be the Lie $\infty$-morphism given by
\begin{equation}\label{eq:dual:map}
\eval{f^*(\alpha),v}=-(-1)^{|\alpha||f|}\eval{\alpha, f(v)},\quad f\in\End(V), \alpha\in V^*, v\in V. \end{equation}
The \textbf{dual representation} ${} ^*\Phi:E\to \End(V^*)[1]$ is obtained by composition of $\Phi$ with this Lie $\infty$-morphism. It is a representation on the complex $(V^*,\d^*)$
and is given by
\begin{equation}\label{eq:dual:representation}
\eval{{}^*\Phi(e)(\alpha), v}=-(-1)^{(|e|+1)|\alpha|}\eval{\alpha,\Phi(e)(v)}, \quad e\in \bar S(E),\, \alpha\in V^*, \, v\in V. \end{equation}
\begin{rem} \label{rem:lie:infty:algebra:E+V}
Given a representation $\Phi:E\rightarrow \End(V)[1]$ on a complex $(V,\d)$, defined by the collection of degree $+1$ linear maps $\Phi_k:S^k(E)\to \End(V)$, $k\geq 1$, one may consider the collection of degree $+1$ maps
$\phi_k:S^{k}(E)\otimes V\to V$, $k\geq 0$, where $\phi_0=\d:V\to V$ {and $\phi_k(x,v)=(\Phi_k(x))(v), \, k\geq1$.}
{The embedding $\bar S(E)\oplus \big(S(E)\otimes V\big) \hookrightarrow \bar S(E\oplus V)$ provides a collection of maps}
\begin{equation*}
\tilde\Phi_k: S^{k}(E\oplus V)\to E\oplus V, \,\,\, k\geq 1,
\end{equation*}
given by
\begin{align*}
&\tilde\Phi_k\left((x_1, v_1), \ldots, (x_{k}, v_{k}) \right) \\& =\bigg(l_{k}(x_1, \ldots, x_{k}), \sum_{i=1}^{k}(-1)^{|x_i|(|x_{i+1}|+\ldots+|x_{k}|)}\phi_{k-1}(x_1,\ldots, \widehat{x_i},\ldots, x_{k},v_i)\bigg),
\end{align*}
and we may
express Equations \eqref{eq:def:representation} as
\begin{equation}\label{eq:def:representation:LModule}
\tilde\Phi_\bullet\left(\tilde\Phi_\bullet(x_{(1)})\odot x_{(2)} \right) + \tilde\Phi_1\tilde\Phi_\bullet(x)=0, \quad x\in \bar S(E\oplus V).
\end{equation}
Equation \eqref{eq:def:representation:LModule} means that $\tilde{\Phi} $ equips $E \oplus V$ with a Lie $\infty$-algebra structure.
\end{rem}
Now suppose the graded vector space $V$ has a Lie $\infty$-algebra structure $\set{ m_k}_{k\in\mathbb N}$ given by a coderivation $M_V$ of $\bar S(V)$. By the construction in Example~\ref{example:DGLA_Coder}, the
coderivation $M_V$ of $\bar S(V)$ defines
a symmetric DGLA structure in $\Coder(\bar S(V))[1]$:
$$
\partial_{M_V}Q=- M_V\smalcirc Q + (-1)^{\deg(Q)} Q\smalcirc M_V,
$$
$$
\brr{Q,P}=(-1)^{\deg(Q)}\bigg(Q\smalcirc P-(-1)^{\deg(Q)\deg(P)}P\smalcirc Q\bigg),
$$
where $\deg(Q)$ and $\deg(P)$ are the degrees of $Q$ and $P$ in $\Coder (\bar S(V))$.
{Generalizing the notion of an action of a graded Lie algebra on another graded Lie algebra,} we have the following definition of an action of a Lie $\infty$-algebra on another Lie $\infty$-algebra:
\begin{defn}
An \textbf{action of the Lie $\infty$-algebra} $(E,M_E\equiv\set{ l_k}_{k\in\mathbb N})$ on the Lie $\infty$-algebra $(V,M_V\equiv\set{ m_k}_{k\in\mathbb N})$, or a \textbf{Lie $\infty$-action} of $E$ on $V$, is a Lie $\infty$-morphism
$$
\Phi: (E,\set{ l_k}_{k\in\mathbb N}) \to (\Coder (\bar S(V))[1], \partial_{M_V}, \brr{\cdot,\cdot}).
$$
\end{defn}
\begin{rem} \label{rem:maps:definition:action}
{Being a Lie $\infty$-morphism,} an action $$
\Phi: E \to \Coder (\bar S(V))[1]
$$ is univocally defined by a collection of {degree $+1$} linear maps
$$
\Phi_k: S^k(E)\to \Coder {(\bar S(V)), \quad k \geq 1.}
$$
By the isomorphism provided in Proposition \ref{prop:isomorphism:families:coderivations}, {and since each $\Phi_k(x), \,\, x \in S^k(E)$, is a coderivation of $\bar S(V)$,} we see that an action is
completely defined by a collection of linear maps
\begin{equation}\label{eq:linear:maps:representation}
\Phi_{k,i}: S^k(E)\otimes S^i(V)\to V, \quad i,k\geq 1.
\end{equation}
{We will denote the coderivation $\Phi_k(x)$ simply by $\Phi_x$.}
\end{rem}
\begin{rem}
If we define $\Phi_{0}:=M_{V}$, then
an action is equivalent to a curved Lie $\infty$-morphism between $E$ and the graded Lie algebra $\Coder (\bar S(V))$ (compatible with the Lie $\infty$-structure in $V$) \cite{MZ}. {In this case, $\Phi=\sum_{k\geq 0}\Phi_k$ is called a {\bf{curved Lie $\infty$-action}}}.
\end{rem}
{There is a close relationship between representations and actions on Lie $\infty$-algebras.}
First notice that each linear map $\ell:V\to V$ induces a (co)derivation of $\bar S(V)$. Hence we may see $\End (V)[1]$ as a Lie $\infty$-subalgebra of $\Coder(\bar S(V))[1]$. Therefore,
given a representation $\Phi:E\to \End(V)[1]$ of the Lie $\infty$-algebra $E$ on the complex $(V,\d)$, we have a natural action of $E$ on the Lie $\infty$-algebra $(V,M_{V})$, where $M_{V}$ is the coderivation defined by the map $\d:V\to V$.
In this case, we say {\bf{the action is induced by a representation}.}
Moreover, for each action $\Phi:E\to \Coder{(\bar S(V))}[1]$ of $E$ on the Lie $\infty$-algebra $(V, M_{V}\equiv\set{ m_k}_{k\in\mathbb N})$, we have a representation of $E$ on $V$ given by
the collection of maps $\Phi_{k,1}:S^k(E)\otimes V \to V$, $k\geq 1$, or equivalently, $\Phi_{k,1}\equiv \rho_k:S^k(E)\to \End(V)$, $k\geq 1$. The morphism $\rho=\sum_k \rho_k$ is a representation of the Lie $\infty$-algebra $E$ on the complex $(V, \d=m_1)$, called the \textbf{linear representation defined by $\Phi$}.
Finally one should notice that, given a Lie $\infty$-algebra $(V,M_V)$, the graded vector space $\Coder (\bar S(V))[1]$ is a Lie $\infty$-subalgebra of $\End(\bar S(V))[1]$. Therefore, any action $\Phi:E\to \Coder(\bar S(V))[1]$ of the Lie $\infty$-algebra $E$ on $(V,M_V)$ yields a representation of $E$ on the graded vector space $\bar S(V)$. We call it the \textbf{representation induced by the action $\Phi$}. The coderivation $M_V$ defines a (co)derivation of $\bar S(\bar S(V))$ and the representation is compatible with this (co)derivation.
\begin{rem} In \cite{MZ}, the authors define an action of a finite dimensional Lie $\infty$-algebra $E$ on a graded manifold $\mathcal{M}$ as a Lie $\infty$-morphism $\Phi:E\to \mathfrak{X}({\mathcal{M}})[1]$. As the authors point out, when $\mathcal{M}$ is the graded manifold defined by a finite dimensional Lie $\infty$-algebra, we have an action of a Lie $\infty$-algebra on another Lie $\infty$-algebra. The definition presented here is a particular case of theirs because we are only considering coderivations of $\bar S(V)$, i.e. coderivations of $S(V)$ vanishing on the field $S^{0}(V)$. This restrictive case reduces to the usual Lie algebra action on another Lie algebra (and its semi-direct product) while the definition given in \cite{MZ} gives rise to general Lie algebra extensions. For our purposes, this definition is more adequate. \end{rem}
Next, with the identification $S^n(E\oplus V)\simeq \oplus_{k=0}^n S^{n-k}(E)\otimes S^{k}(V)$, we see that the action $\Phi$ determines a coderivation of $\bar S(E\oplus V)$. Together with $M_E$ and $M_V$ we have a Lie $\infty$-algebra structure on $E\oplus V$.
The next proposition can be deduced from \cite{MZ}.
\begin{prop}\label{prop:lie:infty:algebra:E+V} Let $(E, M_E\equiv\set{ l_k}_{k\in\mathbb N})$ and $(V,M_V\equiv\set{ m_k}_{k\in\mathbb N})$ be Lie $\infty$-algebras.
An action
$$\Phi: E \to \Coder (\bar S(V))[1]$$
defines a Lie $\infty$-algebra structure on $E\oplus V$.
\end{prop}
\begin{proof}
We consider the brackets $\{\mathfrak{l}_n\}_{n \in \mathbb N}$ on $E\oplus V$ given by:
\begin{align*}
& \mathfrak{l}_n(x_1, \ldots, x_n)=l_n(x_1, \ldots, x_n), \quad x_i \in E\\
& \mathfrak{l}_n(v_1, \ldots, v_n)=m_n(v_1, \ldots, v_n), \quad v_i \in V \\
& \mathfrak{l}_{k+n}(x_1, \ldots, x_k,v_1, \ldots, v_n )= \Phi_{k,n}(x_1, \ldots, x_k,v_1, \ldots, v_n ),
\end{align*}
with $\Phi_{k,n}:S^k(E)\otimes S^n(V) \to V$ the collection of linear maps defining $\Phi$ (see Remark~\ref{rem:maps:definition:action}).
{The collection of linear maps $\Phi_{k,n}$ defines a coderivation of $\bar S(E\oplus V)$, $$\Upsilon:\bar S(E\oplus V)\to \big(\bar S(E)\otimes \bar S(V)\big)\oplus \bar S(V)\subset \bar S(E\oplus V)$$ related to the
action $\Phi$ by
$$
\Upsilon(x\otimes v)=\Phi_x(v), \quad x\in E,\, v\in \bar S(V)$$
and $$
\Upsilon(x\otimes v)=\Phi_x(v) + (-1)^{|x_{(1)}|}x_{(1)}\otimes\Phi_{x_{(2)}}(v), \quad x\in S^{\geq 2}(E),\, v\in \bar S(V).
$$}
The degree $+1$ coderivation of $\bar S(E\oplus V)$ determined by $\{\mathfrak{l}_n\}_{n \in \mathbb N}$ is $$M_{E\oplus V}=M_E+\Upsilon+M_V.$$
Let us prove that $M_{E\oplus V}^2=0$. For $ x \in \bar S(E)$ and $v \in \bar S(V)$,
$$M_{E\oplus V}^2(x)=M_E^2(x)=0 \,\,\,\,\textrm{and}\,\,\,\, M_{E\oplus V}^2(v)=M_V^2(v)=0$$
while, for mixed terms, we have $$M_{E\oplus V}(x\otimes v)= M_E(x)\otimes v+(-1)^{|x|}x \otimes M_V(v)+(-1)^{|x_{(1)}|}x_{(1)}\otimes \Phi_{x_{(2)}}(v)+\Phi_x(v)$$
and
$$ \mathfrak{l}(M_{E\oplus V}(x\otimes v))=(\Phi_{M_E(x)})_{\bullet}\, (v)+ (-1)^{|x|}(\Phi_x)_{\bullet} \, (M_V(v))+(-1)^{|x_{(1)}|}(\Phi_{x_{(1)}})_{\bullet}( \Phi_{x_{(2)}}(v))+m_{\bullet}(\Phi_x(v)).$$
Since $\Phi$ is a Lie $\infty$-morphism, we have $$\Phi_{M_E(x)}=-M_V\smalcirc \Phi_x-(-1)^{|x|}\Phi_x \smalcirc M_V+ \frac12\brr{\Phi_{x_{(1)}},\Phi_{x_{(2)}}},$$
which implies $ M_{E\oplus V}^2=0$.
\end{proof}
The Lie $\infty$-algebra structure in $E\oplus V$ presented in Remark~\ref{rem:lie:infty:algebra:E+V} is a particular case of Proposition~\ref{prop:lie:infty:algebra:E+V}, with $M_V=\d$.
\
\paragraph{\textbf{Adjoint representation and adjoint action}}
An important example of a representation is given by a Lie $\infty$-algebra structure.
Let $\left( E, M_E \equiv \set{l_k}_{k\in\mathbb N} \right)$ be a Lie $\infty$-algebra; thus $(E,l_1)$ is a complex.
The collection of degree $+1$ maps
\begin{equation*} \label{eq:adjoint:representation:algebra}
\begin{array}{rrcl}
\ad_k:& S^k(E) &\to& \End(E) \\
& \;x_1\odot\ldots \odot x_k & \mapsto & \ad_{x_1 \odot \dots \odot x_k} := l_{k+1}\left( x_1,\ldots, x_k, \, \, \, -\,\, \right)
\end{array}, \quad k\geq 1,
\end{equation*}
satisfies Equations \eqref{eq:def:representation}. {(Note that Equations \eqref{eq:def:representation} are equivalent to Equations \eqref{eq:def:symm:L:infty:algebra})}. So, this collection of maps defines a representation $\ad=\sum_k \ad_ k$ of the Lie $\infty$-algebra $E$ on $(E,l_1)$.
\begin{defn}
The representation $\ad$ is called the \textbf{adjoint representation} of the Lie $\infty$-algebra $\left( E, M_E \equiv \set{l_k}_{k\in\mathbb N} \right)$.
\end{defn}
Moreover, notice that for each $x\in S^i(E)$, {$i\geq 1$}, we may consider the degree $|x|+1$ coderivation $\ad^D_x$ of $\bar S(E)$ defined by the family of linear maps
\begin{equation*}
\begin{array}{rrcl}
{(\ad_x)_k}:& S^k(E) &\to& E \\
& e & \mapsto & l_{i+k}(x,e), \quad {k\geq 1}.
\end{array}
\end{equation*}
So, we have a collection of degree $+1$ linear maps
\begin{equation} \label{eq:coderivation_ad}
\begin{array}{rrcl}
{\ad_i}:& S^i(E) &\to& \Coder(\bar S(E)) \\
& x & \mapsto & \ad^D_x
\end{array}, \quad {i\geq 1,}
\end{equation}
and we set ${\bf ad}=\sum_i \ad_i$.
\begin{prop}
{The collection of degree $+1$ linear maps given by \eqref{eq:coderivation_ad} defines a Lie $\infty$-morphism
$${\bf ad}: (E, \set{l_k}_{k\in\mathbb N}) \to \left(\Coder (\bar S(E))[1], \partial_{M_E}, \brr{\cdot , \cdot} \right)$$
}
from the Lie $\infty$-algebra $E$ to the symmetric DGLA $\Coder (\bar S(E))[1]$.
\end{prop}
\begin{proof}
For each $x\in S^i(E)$,
let $\ad_x={\sum_k(\ad_x)_k}$ and set $l=\sum_k l_k$.
If $x\in \oplus_{i\geq 2}S^i(E)$ {and $e \in \bar S(E)$, we have
\begin{align*}
M_E(x\odot e)=& M_E(x) \odot e+ (-1)^{|x|}x \odot M_E(e)+ (-1)^{|e||x_{(2)}|}l(x_{(1)}, e)\odot x_{(2)} \\+
& l (x, e_{(1)})\odot e_{(2)}+ (-1)^{|e_{(1)}||x_{(2)}|}l(x_{(1)}, e_{(1)})\odot x_{(2)}\odot e_{(2)} + l(x, e)
\end{align*}
and so,
\begin{align*}
\ad_x (M_E(e))&=l(x,M_E(e))\\
&= (-1)^{|x|}\underbrace{l(M_E(x\odot e))}_{=0 \,\, \textrm{by} \,\, \eqref{eq:def:symm:L:infty:algebra}}- (-1)^{|x|} l (M_E(x), e)- (-1)^{|x|}l(\ad_x^D(e))\\
& \quad - (-1)^{|x_{(1)}|+|x_{(1)}||x_{(2)}| }l(x_{(2)},\ad_{x_{(1)}}^D(e))\\
&= \big(- (-1)^{|x|}\ad_{M_E(x)}- (-1)^{|x|}l \smalcirc \ad^D_{x}-(-1)^{|x_{(2)}|}\ad_{x_{(1)}}\smalcirc \ad^D_{x_{(2)}}\big)(e),
\end{align*}
which is equivalent to
\begin{equation*}
\ad_{M_E(x)}=-l\smalcirc \ad^D_x-(-1)^{|x|}\ad_x \smalcirc M_E - (-1)^{|x_{(2)}|}\ad_{x_{(1)}}\smalcirc \ad^D_{x_{(2)}}
\end{equation*}
or to
\begin{equation} \label{eq:ad:ME}
\ad_{M_E(x)}=-[l, \ad_x]_{_{RN}}-\frac12 (-1)^{|x_{(1)}|}[\ad_{x_{(1)}}, \ad_{x_{(2)}}]_{_{RN}}.
\end{equation}
Note that the coderivation defined by the right-hand side of \eqref{eq:ad:ME} is
\begin{equation*}
[M_E, \ad^D_x]+\frac12 [\ad^D_{x_{(1)}}, \ad^D_{x_{(2)}}]
= \partial_{M_E}(\ad^D_x) + \frac12 [\ad^D_{x_{(1)}}, \ad^D_{x_{(2)}}].
\end{equation*}
If $x\in E$, a similar computation gives
\begin{equation} \label{eq:ad:l1}
\ad_{l_1(x)}=-l \smalcirc \ad^{D}_{x} - (-1)^{|x|}\ad_x\smalcirc M_E = -[l, \ad_x]_{_{RN}}.
\end{equation}
}
Equations \eqref{eq:ad:ME} and \eqref{eq:ad:l1} mean that the map {${\bf ad}: E \to \Coder (\bar S(E))[1]$} is a Lie $\infty$-morphism.
\end{proof}
\begin{defn} The linear map {${\bf ad}: E \to \Coder(\bar S(E))[1]$} is an action of the Lie $\infty$-algebra $E$ on itself, called the \textbf{adjoint action of $E$}.
\end{defn}
\section{\texorpdfstring{$\O$}--operators on a Lie \texorpdfstring{$\infty$}--algebra} \label{section3}
In this section we define $\O$-operators on a Lie $\infty$-algebra $E$ with respect to an action of $E$ on a Lie $\infty$-algebra $V$. This is the main notion of the paper.
\subsection{\texorpdfstring{$\O$}--operators with respect to a Lie \texorpdfstring{$\infty$}--action}
{Let $(E,M_E\equiv\set{l_k}_{k\geq 1})$ and $(V,M_V\equiv\set{m_k}_{k\geq 1})$ be Lie $\infty$-algebras and $\Phi:E\to \Coder(\bar S(V))[1]$ a Lie $\infty$-action of $E$ on $V$.
Remember we are using Sweedler's notation: for each $v\in \bar S(V)$,
$$\Delta(v)=v_{(1)}\otimes v_{(2)}$$ and
$$\Delta^{2}(v)=(\mathrm{id} \otimes \Delta)\Delta(v)=( \Delta\otimes\mathrm{id})\Delta(v)=v_{(1)}\otimes v_{(2)}\otimes v_{(3)}.$$
Each degree zero linear map $T:\bar S(V)\to \bar S(E)$ defines a degree $+1$ linear map $\displaystyle \Phi^T: \bar S(V)\to \bar S(V)$ given by
\begin{align*}
\Phi^T(v)&=0,\quad v\in V,\\
\Phi^T(v)&=\Phi_{T(v_{(1)})}\, v_{(2)},\quad v\in S^{\geq 2}(V).
\end{align*}
}
\begin{lem}\label{Lemma:deformed:coderivation:by:T}
The linear map $\displaystyle \Phi^T: \bar S(V)\to \bar S(V)$ is a degree $+1$ coderivation of $\bar S(V)$ and is defined by the collection of linear maps
$\sum\Phi_{\bullet,\bullet}(T\otimes\mathrm{id})\Delta$.
\end{lem}
\begin{proof}
For the linear map $\displaystyle \Phi^T: \bar S(V)\to \bar S(V)$ to be a coderivation it must satisfy:
$$
\Delta\Phi^T(v)=\left(\Phi^T\otimes \mathrm{id} + \mathrm{id}\otimes \Phi^T\right)\Delta(v),\quad v\in \bar S(V).
$$
This equation is trivially satisfied for $v\in V$.
For each $v=v_1\odot v_2\in S^2(V)$ we have
$\Phi^T(v)\in V$ and consequently, $\Delta\Phi^T(v)=0$. On the other hand, since ${\Phi^T}_{|V}=0$, we see that
$$\left(\Phi^T\otimes \mathrm{id} + \mathrm{id}\otimes \Phi^T\right)\Delta(v)=0$$
and the equation is satisfied in $S^2(V)$.
Now let $v\in S^{\geq 3}(V)$, then
\begin{align*}
\Delta\Phi^T(v)&=\Delta \Phi_{T(v_{(1)})}v_{(2)}\\
&= \left(\Phi_{T(v_{(1)})}\otimes\mathrm{id} + \mathrm{id}\otimes \Phi_{T(v_{(1)})}\right)\Delta (v_{(2)}).
\end{align*}
The coassociativity of $\Delta$ ensures that
\begin{align*}
\Delta\Phi^T(v)&= \Phi_{T(v_{(1)})}v_{(2)}\otimes v_{(3)} + (-1)^{(|v_{(1)}|+1)|v_{(2)}|}v_{(2)}\otimes \Phi_{T(v_{(1)})}v_{(3)}\\
&=\Phi_{T(v_{(1)})}v_{(2)}\otimes v_{(3)} + (-1)^{|v_{(1)}|} v_{(1)}\otimes \Phi_{T(v_{(2)})}v_{(3)}\\
&= \left(\Phi^T\otimes \mathrm{id} + \mathrm{id}\otimes \Phi^T\right)\Delta(v).
\end{align*}
\end{proof}
\begin{defn}
Let $(E, M_E \equiv \set{l_k}_{k\geq 1})$ and $(V, M_V \equiv \set{m_k}_{k\geq 1})$ be
Lie $\infty$-algebras and $\Phi:E\to \Coder (\bar S(V))[1]$ an action.
An \textbf{$\O$-operator} {on $E$ with respect to the action $\Phi$} is a (degree $0$) morphism of coalgebras $T:\bar S(V) \to \bar S(E)$ such that
\begin{equation}\label{def:O:operator}
{M_E\smalcirc T=T\smalcirc\left(\Phi^T+M_V\right).}
\end{equation}
\end{defn}
\begin{defn} A \textbf{Rota-Baxter operator (of weight $1$)} on a Lie $\infty$-algebra $( E, M_E \equiv \set{l_k}_{k\geq 1})$ is an $\O$-operator with respect to the adjoint action.
\end{defn}
An $\O$-operator $T:\bar S(V)\to \bar S(E)$ with respect to an action $\Phi:E\to \Coder(\bar S(V))[1]$ of $(E,M_E\equiv \set{l_k}_{k\geq 1})$ on $(V,M_V\equiv\set{m_k}_{k\geq 1})$ is defined by a linear map ${t=\sum_{i}t_{i}}:\bar S(V)\to E$
satisfying:
\begin{itemize}
\item[(i)] $\displaystyle l_1( t_1(v))= t_1(m_1(v)), \quad v\in V$
\item[(ii)] $l(T(v))=t\left(\Phi_{T(v_{(1)})}v_{(2)} + m(v_{(1)})\odot v_{(2)} + m(v)\right), \quad v\in \oplus_{i\geq 2}S^i(V).$
\end{itemize}
Being a comorphism, the $\O$-operator $T$ is given, for each $v \in {S^n(V)}$, $n\geq 1$, by
\begin{equation*}
T(v)= \sum_{k_1+ \ldots +k_r=n}\frac{1}{r!}\, t_{k_1}(v_{(k_{1})})\odot \ldots \odot t_{k_r}(v_{(k_r)}),
\end{equation*}
so, detailing (ii) for $v=v_1\odot v_2$, we get
\begin{align*}
l_1\Big(t_2(v_1, \,v_2)\Big) + l_2\Big(t_1(v_1),\, t_1(v_2)\Big) &=
\,t_1\left(\Phi_{t_1(v_1)}v_2 + (-1)^{|v_1|\,|v_2|}\Phi_{t_1(v_2)}v_1 + m_2(v_1,v_2)\right)\\
&+ t_2\Big(m_1(v_1), v_2)\Big) + (-1)^{|v_1|}t_2\Big(v_1,m_1(v_2)\Big).
\end{align*}
{Generally, for every $v=v_1\odot \ldots \odot v_n\in S^n(V)$, $n\geq 3$, we have
{\small
\begin{eqnarray*} \label{eq:expression:T}
\lefteqn{\sum_{\begin{array}{c} \scriptstyle{k_1+\ldots +k_i=n}\\ \scriptstyle{\sigma \in Sh(k_1, \ldots, k_i)}\end{array}}
\frac{\epsilon(\sigma)}{i!}\, l_i\bigg(t_{k_{1}}(v_{\sigma(1)}, \dots, v_{\sigma(k_1)}), \ldots, t_{k_{i}}(v_{\sigma(k_1+ \ldots + k_{i-1}+1)}, \dots, v_{\sigma({n})})\bigg) }\\
&=&\!\!\!\!\!\!\!\!\!\!\sum_{\begin{array}{c} \scriptstyle{k_1+\ldots +k_{i+2}=n}\\
\scriptstyle{\sigma \in Sh(k_1, \ldots, k_{i+2})}\end{array}}
\!\!\!\!\!\!\!\!\!\!\frac{\epsilon(\sigma)}{i!}\, t_{1+k_{i+2}}\bigg(
\Phi_{i,{k_{i+1}}}
\bigg(
t_{k_{1}}(v_{\sigma(1)} \dots, v_{\sigma(k_1)})\odot \ldots \odot t_{k_{i}}(v_{\sigma(k_1+ \ldots + k_{i-1}+1)}, \dots,
v_{\sigma({ k_1+\ldots + k_i})}), \nonumber \\
& &\qquad v_{\sigma(k_1+ \ldots + k_{i}+1)}
\odot \dots\odot v_{\sigma({k_1+\ldots + k_{i+1}})} \bigg), v_{\sigma(k_1+ \ldots + k_{i+1}+1)}\odot \dots \odot v_{\sigma (n)}
\bigg) \\
&&+ {\sum_{i=1}^{n}}\,\sum_{\sigma \in Sh(i,n-i)} \epsilon(\sigma)\, {t_{n-i+1}}\, \big(m_i(v_{\sigma(1)}, \dots, v_{\sigma(i)}), v_{\sigma(i+1)}, \ldots, v_{\sigma(n)}\big). \nonumber
\end{eqnarray*}}}
\begin{rem}
When $M_V=0$
we are considering $V$ simply as a graded vector space, with no Lie $\infty$-algebra structure attached, and an $\O$-operator must satisfy
$$
{M_E\smalcirc T=T\smalcirc\Phi^T.}
$$
In this case, the terms of the above equations
involving the brackets $m_i$ on $V$ vanish.
\end{rem}
\begin{rem}
When $(E, \brr{\cdot,\cdot}_E)$ and $(V, \brr{\cdot,\cdot}_V)$ are Lie algebras, for degree reasons, a morphism $T=t_{1}$ must be a strict morphism. Moreover, our definition coincides with the usual definition of $\O$-operator (of weight $1$) between Lie algebras \cite{K} :
$$
\brr{t_{1}(v),t_{1}(w)}_{E}=t_{1}\left(\Phi_{t_{1}(v)}w - \Phi_{t_{1}(w)}v+\brr{v,w}_{V}\right), \quad v,w\in V.
$$
\end{rem}
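In particular, for the adjoint action of a Lie algebra $E$ on itself, where $\Phi_{x}v=\brr{x,v}_{E}$, the identity above becomes the classical Rota-Baxter identity of weight $1$:
$$
\brr{t_{1}(v),t_{1}(w)}_{E}=t_{1}\left(\brr{t_{1}(v),w}_{E} + \brr{v,t_{1}(w)}_{E} + \brr{v,w}_{E}\right), \quad v,w\in E.
$$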
\begin{rem}
When $(V, \d)$ is just a complex and the action $\Phi:E\to\Coder (\bar S(V)) [1]$ is induced by a representation
$\rho:E\to \End(V)[1]$ we have that $\Phi(x)$ is the {(co)derivation} defined by $\rho(x)$. In this case, $\O$-operators with respect to $\Phi$ coincide with $\O$-operators with respect to $\rho$
(or relative Rota-Baxter operators) given in
\cite{LST2021}.
\end{rem}
In \cite{LST2021} the authors define $\O$-operators with respect to representations of Lie $\infty$-algebras. Any action induces a representation, and $\O$-operators with respect to an action are related to $\O$-operators with respect to the induced representation. We shall see that this relation is given by the comorphism $$I=\sum_{n\geq 1} {\mathrm{i}}_n:\bar S(V)\to \bar S(\bar S(V)),$$ defined by
the family of inclusion maps
$\displaystyle \mathrm{i}_n:S^n(V)\hookrightarrow \bar{S}(V)$, $n\geq 1$.
Notice that any coderivation $D$ of $\bar S(V)$ induces a (co)derivation $D^{\d}$ of $\bar S(\bar S(V))$. The comorphism $I$ preserves these coderivations:
\begin{lem}\label{lemma:I:Lie:morphism}
Let $V$ be a graded vector space and $D$ a coderivation of $\bar S(V)$.
The map $I:\bar S(V)\to \bar S(\bar S(V)) $ satisfies
$$
I\smalcirc D=D^{\d}\smalcirc I.
$$
\end{lem}
\begin{proof}
We will denote by $\cdot$ the symmetric product in $\bar S(\bar S(V))$, to distinguish from the symmetric product $\odot$ in $\bar S(V)$.
Let $v\in S^n(V)$, $n\geq 1$, and denote by
$\set{m_k}_{k\geq 1}$ the family of linear maps defining the coderivation $D$.
For $v\in V$, we immediately have $D^{\d}\smalcirc I(v)= D\smalcirc I(v)=I\smalcirc D(v)$.
For $v\in S^{\geq 2}(V)$ we have
\begin{eqnarray*}
\lefteqn{ D^{\d}\smalcirc I(v)= D^{\d}\bigg(\sum_{k=1}^n \frac{1}{k!}v_{(1)}\cdot \ldots\cdot v_{(k)}\bigg)}\\
&=&\sum_{k=1}^n \frac{1}{k!}\bigg(D(v_{(1)})\cdot v_{(2)} \cdot \ldots\cdot v_{(k)}+\ldots + (-1)^{|D|(|v_{(1)}|+\ldots+|v_{(k-1)}|)}v_{(1)} \cdot \ldots\cdot v_{(k-1)} \cdot D(v_{(k)})\bigg)\\
&=&\sum_{k=1}^n \frac{1}{(k-1)!}D(v_{(1)})\cdot v_{(2)} \cdot \ldots\cdot v_{(k)}\\
&=&D(v_{(1)}) \cdot I(v_{(2)}).
\end{eqnarray*}
On the other hand
\begin{align*}
I\smalcirc D(v) &=I(m_{\bullet}(v_{(1)})\odot v_{(2)})\\
&=m_{\bullet}(v_{(1)})\cdot I(v_{(2)}) + (m_{\bullet}(v_{(1)})\odot v_{(2)})\cdot I(v_{(3)}) \\
&=D(v_{(1)}) \cdot I(v_{(2)})
\end{align*}
and the result follows.
\end{proof}
\begin{rem}
In particular, if $D$ defines a Lie $\infty$-algebra structure on $V$, then $D^{\d}$ defines a Lie $\infty$-algebra structure on $\bar S(V)$ and $I$ is a Lie $\infty$-morphism.
\end{rem}
\begin{prop}
Let $\Phi:E\to \Coder (\bar S(V))[1]$ be an action of the Lie $\infty$-algebra $\left( E, M_E \equiv \set{l_k}_{k\geq 1} \right)$ on the Lie $\infty$-algebra $\left(V, M_V \equiv \set{m_k}_{k\geq 1} \right)$ and $\tilde T:\bar S(\bar S(V))\to E$ be an $\O$-operator with respect to the induced representation $\rho:E\to \End(\bar S(V))[1]$. Then
$T=\tilde T \smalcirc I$ is an $\O$-operator with respect to the action $\Phi$.
\end{prop}
\begin{proof}
For each $x\in \bar S(E)$, let us denote by
$$\Phi_x^{\d}:=\Phi(x)^{\d}=\rho(x)^{\d},
$$ the (co)derivation of $\bar S(\bar S(V))$ defined by $\rho(x)$.
Let $\tilde T$ be an $\O$-operator with respect to the induced representation.
This means that
$$
M_E\smalcirc \tilde T(w)=\tilde T\Big(\Phi_{\tilde T(w_{(1)})}^{\d} w_{(2)} + {M_V}^{\d}(w)\Big), \quad w\in \bar S(\bar S(V)).
$$
Then,
for each $w=I(v)$, $v\in\bar S(V)$, we have:
\begin{align*}
M_E\smalcirc \tilde T(I(v))&=
\tilde T\Big(\Phi_{\tilde T(I(v)_{(1)})}^{\d} I(v)_{(2)} + {M_V}^{\d}\smalcirc I(v)\Big).
\end{align*}
Using the fact that $I$ is a comorphism and Lemma \ref{lemma:I:Lie:morphism}, we rewrite the last equation as
\begin{align*}
M_E\smalcirc T(v)&=
\tilde T\Big(\Phi_{\tilde T(I(v_{(1)}))}^{\d} I(v_{(2)}) + {M_V}^{\d}\smalcirc I(v)\Big)\\
&=\tilde T \Big(I\smalcirc\Phi_{T(v_{(1)})} v_{(2)} + I\smalcirc {M_V}(v)\Big) \\
&=T \Big(\Phi_{T(v_{(1)})} v_{(2)} + {M_V}(v)\Big).
\end{align*}
Taking this equation into account, together with the fact that $T$ is a comorphism (being the composition of two comorphisms), the result follows.
\end{proof}
\begin{prop}
Let $T$ be an $\O$-operator on $\left( E, M_E \equiv \set{l_k}_{k\geq 1} \right)$ with respect to a Lie $\infty$-action $\Phi:E\to \Coder (\bar S(V))[1]$ on $\left( V, M_V \equiv \set{m_k}_{k\geq 1} \right)$. Then, $V$ has a new Lie $\infty$-algebra structure {$$M_{V^{T}}= \Phi^T + M_V$$} and $T:(V, M_{V^{T}})\to (E, M_E)$ is a Lie $\infty$-morphism.
\end{prop}
\begin{proof}
By Lemma \ref{Lemma:deformed:coderivation:by:T} we know $\Phi^T$ is a degree $+1$ coderivation of $\bar S(V)$ hence so is $M_{V^T}$.
Since $\Phi$ is an action, {so that $\Phi \smalcirc M_E= M_{\textrm{Coder}(\bar S(V))[1]} \smalcirc \Phi$,} and $T$ is a comorphism, we have, for each $v\in \bar S(V)$,
\begin{eqnarray}
\label{equation:first:O:operators}
\Phi_{M_{E}T(v_{(1)})}v_{(2)}&=& -M_{V} \Phi_{T(v_{(1)})}v_{(2)} - (-1)^{|v_{(1)}|} \Phi_{T(v_{(1)})} M_{V}( v_{(2)}) \\
&&\quad + (-1)^{|v_{(1)}|+1} \Phi_{T(v_{(1)})}\Phi_{T(v_{(2)})} v_{(3)}.\nonumber
\end{eqnarray}
On the other hand, $T$ is an $\O$-operator: $$M_{E}\smalcirc T(v)=T\smalcirc \Phi_{T(v_{(1)})}v_{(2)} + T\smalcirc M_{V}(v)$$ and this yields
\begin{eqnarray}\label{equation:second:O:operators}
\Phi_{M_{E}T(v_{(1)})}v_{(2)}=\Phi_{T\Phi_{T(v_{(1)})}v_{(2)} } v_{(3)} + \Phi_{TM_{V}(v_{(1)})}v_{(2)}.
\end{eqnarray}
Moreover,
due to the fact that both $\Phi^T$ and $M_V$ are coderivations and $M_V^2=0$, we have
\begin{align*}
M_{V^{T}}^{2} (v)&= (\Phi^T)^2(v)+\Phi^T\smalcirc M_V(v) + M_V\smalcirc \Phi^T(v)\\
&= \Phi_{T(\Phi_{T(v_{(1)})}v_{(2)})} v_{(3)} + (-1)^{|v_{(1)}|} \Phi_{T(v_{(1)})}\Phi_{T(v_{(2)})} v_{(3)} \\
&\quad +
\Phi_{TM_{V}(v_{(1)})}v_{(2)} + (-1)^{|v_{(1)}|}
\Phi_{T(v_{(1))}}M_{V}(v_{(2)} ) + M_{V}(\Phi_{T(v_{(1)})}v_{(2)}).
\end{align*}
Taking into account Equations (\ref{equation:first:O:operators}) and (\ref{equation:second:O:operators}) we conclude $\displaystyle M_{V^T}^2=0$. Therefore,
$M_{V^{T}}$ defines a Lie $\infty$-algebra structure on $V$ and Equation (\ref{def:O:operator}) means that $T: \bar S(V) \to \bar S(E)$ is a Lie $\infty$-morphism between the Lie $\infty$-algebras $(V, M_{V^{T}})$ and $(E, M_E)$.
\end{proof}
The brackets of the Lie $\infty$-algebra structure on $V$ defined by the coderivation $M_{V^{T}}$ are given by
$$m_1^T(v)= m_1(v)$$
and, for $n\geq 2$,
\begin{align*}
&m_n^T(v_1, \ldots, v_n)= m_n(v_1, \ldots v_n)+ \, \sum_{\begin{array}{c} \scriptstyle{k_1+\ldots +k_i=j}\\\scriptstyle{1\leq j \leq n-1}\end{array}}\sum_{\sigma \in Sh(k_1, \ldots, k_i,n-j)} \epsilon(\sigma)\, \frac1{n!}\\
&\Phi_{i,n-{j}}\left( t_{k_1}(v_{\sigma(1)}, \ldots, v_{\sigma(k_1)})\odot \ldots\odot t_{k_i}(v_{\sigma(k_1 + \dots + k_{i-1}+1)}, \dots , v_{\sigma(j)}), v_{\sigma(j+1)}\odot \dots\odot v_{\sigma(n)} \right),
\end{align*}
with $\Phi_{i,n-{j}}\, , i\geq 1,$ the linear maps determined by the action $\Phi$ (see \eqref{eq:linear:maps:representation}).
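For instance, unwinding this formula for $n=2$ (our own illustrative computation: only $j=1$, $i=1$, $k_{1}=1$ contribute, and $Sh(1,1)$ consists of two shuffles) gives
$$
m_2^T(v_1,v_2)= m_2(v_1,v_2)+\frac{1}{2}\Big(\Phi_{1,1}\big(t_1(v_1), v_2\big)+(-1)^{|v_1||v_2|}\,\Phi_{1,1}\big(t_1(v_2), v_1\big)\Big).
$$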
\
\paragraph{\textbf{$\O$-operators for the coadjoint representation}}
Let $(E, M_E\equiv\set{ l_k}_{k\geq 1})$ be a finite dimensional Lie $\infty$-algebra.
Next, we consider the dual of the adjoint representation of $E$ (see \eqref{eq:dual:representation}), called the coadjoint representation.
\begin{defn}
The \textbf{coadjoint representation} of $E$,
$\displaystyle
\ad^{*}:E \to \End (E^{*})[1]
$, is defined by
\begin{equation*}
\eval{\ad^{*}_{x}(\alpha),v}=-(-1)^{| \alpha |(|x|+1)}\eval{\alpha, \ad_{x}v},\quad v\in E,\, x\in \bar S(E),\, \alpha\in E^{*}.
\end{equation*}
\end{defn}
Notice that $E^*$ is equipped with the differential $l_{1}^{*}$ (see \eqref{eq:dual:map}).
\
An $\O$-operator on $E$ with respect to the coadjoint representation $\ad^*:E\to \End{(E^*)}[1]$ is a coalgebra morphism $T:\bar S (E^{*})\to \bar S(E)$ given by a collection of maps $t=\sum_{i} t_{i}:\bar S(E^{*})\to E$ satisfying
{
\begin{eqnarray}\label{equation:Ooperator:coadjoint}
l(T(\alpha))&=&\!\!\!\!\!\!\!\!\!\!\!\!\sum_{\begin{array}{c} \scriptstyle{1\leq i\leq n-1}\\\scriptstyle{\sigma\in Sh(i,n-i)}\end{array}}\!\!\!\!\!\! \varepsilon(\sigma)\,t_{n-i+1}(\ad^{*}_{T( \alpha_{\sigma(1)}\odot \ldots\odot\alpha_{\sigma(i)})} \alpha_{\sigma(i+1)}, \alpha_{\sigma(i+2)}, \ldots,
\alpha_{\sigma(n)}) \nonumber \\
&&+ \sum_{{i=1}}^{n} (-1)^{|\alpha_{1}| + \dots + |\alpha_{i-1}|}t_n(\alpha_{1},\ldots, l_{1}^{*}\alpha_{i}, \ldots,\alpha_{n}),
\end{eqnarray}
}
for all $\alpha=\alpha_{1}\odot\ldots\odot \alpha_{n} \in S^{n}(E^{*})$, $n\geq 1$.
We say that $T$ is {\bf symmetric} if $$\eval{\beta, t_{n}(\alpha_{1},\ldots,\alpha_{n})}=(-1)^{|\alpha||\beta|+|\alpha_{n}|(|\alpha_{1}|+\ldots +|\alpha_{n-1}|)}\eval{\alpha_{n}, t_{n}(\alpha_{1},\ldots,\alpha_{n-1},\beta)}, $$ for all
$\alpha_1,\ldots, \alpha_n, \beta\in E^{*}$ and $n\geq 1$.
When $T$ is invertible, its inverse $T^{-1}:\bar S (E)\to \bar S(E^{*})$, given by $t^{-1}=\sum_{n}t^{-1}_{n}$, is also symmetric:
$$\eval{ t^{-1}_{n}(x_{1},\ldots, x_{n}), y}=(-1)^{|y||x_{n}|}\eval{t^{-1}_{n}(x_{1},\ldots, x_{n-1},y), x_{n}},$$
for every $ x_1,\ldots x_n, y\in E$, $n\geq 1$.
One should notice that $t^{-1}_{n}$ is \textbf{not} the inverse map of $t_{n}$; it simply denotes the $n$-th component of the inverse $T^{-1}$ of $T$.
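For concreteness, the first components of $T^{-1}$ can be obtained recursively from $T^{-1}\smalcirc T=\mathrm{id}$; up to Koszul signs (this is a sketch of the standard inversion of comorphisms, added here for illustration),
$$
t^{-1}_{1}=(t_{1})^{-1}, \qquad t^{-1}_{2}(x_{1},x_{2})=-(t_{1})^{-1}\, t_{2}\big((t_{1})^{-1}x_{1},(t_{1})^{-1}x_{2}\big),
$$
and, in general, $t^{-1}_{n}$ equals $-(t_{1})^{-1}$ applied to a sum of compositions of the components $t_{k}$, $k\leq n$, with lower components of $t^{-1}$.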
For each $n\geq 1$, let
$\omega^{(n)}\in \otimes^n E^*$ be defined by $\omega^{(1)}=0$ and
$$\eval{\omega^{(n)}, x_{1}\otimes \ldots \otimes x_{n}}=\eval{ t^{-1}_{n-1}(x_{1}, \ldots, x_{n-1}), x_{n}}, \quad x_1,\dots, x_n\in E.$$
The symmetry of $T^{-1}$ guarantees that
$\omega=\sum_{n\geq 1} \omega^{(n)}$ is an element of
$\bar S(E^{*})$.
\begin{prop}
Let $T:\bar S(E^{*})\to \bar S(E)$ be an invertible symmetric comorphism. The linear map $T$ is an $\O$-operator with respect to the coadjoint representation if and only if $\omega\in \oplus_{n\geq 2} S^{n}(E^{*})$, given by
$$
\eval{\omega, x_{1} \odot\ldots\odot x_{k+1}}=\eval{t_{k}^{-1}(x_{1},\ldots, x_{k}), x_{k+1}}, \quad x_{1}, \ldots, x_{k+1}\in E,\, k\geq 1,
$$
is a cocycle for the Lie $\infty$-algebra cohomology.
\end{prop}
\begin{proof}
When $T$ is invertible, Equation (\ref{equation:Ooperator:coadjoint}) is equivalent to the equations
$$t_1^{-1}l_{1}(x)=l_{1}^{*}t_1^{-1}(x), \quad x\in E,$$
and
{
$$t^{-1}M_{E}(x)=\ad^{*}_{x_{(1)}} t^{-1}(x_{(2)}) + l_{1}^{*}t_n^{-1}(x),\quad x\in S^{n}(E), n\geq 2.$$
}
Let $x=x_1\odot\ldots\odot x_n\in S^n(E)$, $n\geq 1$, and $y\in E$, such that $|y|=|x|+1$. We have:
{\begin{align*}
\eval{\omega, M_{E}(x\odot y)} &= \eval{ t^{-1}(M_{E} (x)),y} +(-1)^{|x|} \eval{t^{-1}(x_{1},\ldots, x_{n}), l_{1}(y) }\\
&\quad +
(-1)^{|x_{(1)}|} \eval{t^{-1}(x_{(1)}), \ad_{x_{(2)}} y } \\
&=\eval{ t^{-1}(M_{E} (x)),y} -\eval{l_{1}^{*}t^{-1}(x), y } -
\eval{\ad^*_{x_{(1)}}t^{-1}(x_{(2)}), y }\\
\end{align*}
}
and the result follows.
\end{proof}
\subsection{\texorpdfstring{$\O$}{O}-operators as Maurer-Cartan elements}
Let $\displaystyle (E,M_E\equiv\set{l_k}_{k\geq 1})$ and \linebreak
$\displaystyle (V, M_V\equiv\set{m_k}_{k\geq 1})$ be Lie $\infty$-algebras.
The graded vector space of linear maps between $\bar S(V)$ and $E$ will be denoted by $\mathfrak{h}:= {\mathrm{Hom}} (\bar S(V), E)$.
It can be identified with the space of coalgebra morphisms between $\bar S(V)$ and $\bar S(E)$.
On the other hand, since
$$S^n(E\oplus V)\simeq \oplus_{k=0}^n \left({ S^{n-k}(E)\otimes S^{k}(V)}\right),\quad n\geq 1,$$ the space $\mathfrak{h}$ can be seen as a subspace of $\Coder (\bar S(E\oplus V))$, the space of coderivations of
$\bar S(E\oplus V)$. Its elements define coderivations that only act on elements of $\bar S(V)$; in particular, they are $S(E)$-linear.
The space $S(E\oplus V)$ has a natural $S(E)$-bimodule structure. With the above identification we have:
$$
e\cdot (x\otimes v)=(e\odot x)\otimes v=(-1)^{|e|(|x|+|v|)}(x\otimes v)\cdot e,
$$
for $e\in S(E)$, $x\otimes v\in S(E\oplus V)\simeq S(E)\otimes S(V)$.
Let $t:\bar S(V)\to E$ be an element of $\mathfrak{h}$ defined by the collection of maps
$t_k: S^{k}(V)\to E$, $k\geq 1$. Let us denote by $T: \bar S(V)\to \bar S(E)$ the coalgebra morphism and by $\mathfrak{t}$ the coderivation of $\bar S(E\oplus V)$ defined by $t$.
Notice that $$\mathfrak{t}(v)=t_1(v), \quad v\in V$$
and
$$ \mathfrak{t}(v)=t(v_{(1)})\otimes v_{(2)} + t(v), \quad v\in S^{\geq 2}(V),$$
and also, for $x \in \bar S (E)$, $$\mathfrak{t}(x \otimes v)=(-1)^{|x||t|}x \cdot \mathfrak{t}(v), \quad v \in \bar S(V).$$
\begin{prop} \label{prop:h:abelian:subalgebra}
The space $\mathfrak{h}$ is an abelian Lie subalgebra of $\Coder (\bar S(E \oplus V))$.
\end{prop}
\begin{proof}
Let $t=\sum_{i} t_{i} :\bar S(V)\to E$ and $w=\sum_{i} w_{i} :\bar S(V)\to E$ be elements of $\mathfrak{h}$. Denote by
$\mathfrak{t}$ and $\mathfrak{w}$
the coderivations of $\bar S(E\oplus V)$ defined by $t$ and $w$, respectively.
{Let $v \in \bar S(V)$.} The Lie bracket of $\mathfrak{t}$ and $\mathfrak{w}$ is given by:
\begin{align*}
\brr{\mathfrak{t}, \mathfrak{w}}_{c}(v)&=\mathfrak{t}\smalcirc ({w}(v_{(1)})\otimes v_{(2)})-(-1)^{|t||w|}\mathfrak{w}\smalcirc ({t}(v_{(1)})\otimes v_{(2)})\\
&=(-1)^{|t|(|w|+|v_{(1)}|)}{w}(v_{(1)})\cdot \mathfrak{t}(v_{(2)})-(-1)^{|t||w|}(-1)^{|w|(|t|+|v_{(1)}|)}{ t}(v_{(1)})\cdot \mathfrak{w}(v_{(2)})\\
&=\Big((-1)^{|t|(|w|+|v_{(1)}|)}{ w}(v_{(1)})\cdot {t}(v_{(2)})-(-1)^{|t||w|}(-1)^{|w|(|t|+|v_{(1)}|)}{t}(v_{(1)})\cdot {w}(v_{(2)})\Big)\otimes v_{(3)}\\
& \quad + (-1)^{|t|(|w|+|v_{(1)}|)}{ w}(v_{(1)})\cdot {t}(v_{(2)})-(-1)^{|t||w|}(-1)^{|w|(|t|+|v_{(1)}|)}{t}(v_{(1)})\cdot {w}(v_{(2)})\\
&=\Big((-1)^{|t|(|w|+|v_{(1)}|)}{w}(v_{(1)})\odot {t}(v_{(2)})-(-1)^{|t|(|w|+|v_{(2)}|)+{|v_{(1)}||v_{(2)}|}}{w}(v_{(2)})\odot {t}(v_{(1)})\Big)\otimes v_{(3)}\\
&\quad +(-1)^{|t|(|w|+|v_{(1)}|)}{w}(v_{(1)})\odot {t}(v_{(2)})-(-1)^{|t|(|w|+|v_{(2)}|)+{|v_{(1)}||v_{(2)}|}}{w}(v_{(2)})\odot {t}(v_{(1)}),
\end{align*}
{where we used the fact that $\mathfrak{t}$ and $\mathfrak{w}$ are $\bar S(E)$-linear.}
Because of the cocommutativity of the coproduct, the last expression vanishes.
\end{proof}
Now, let $\Phi:E\to \Coder (\bar S(V))[1]$ be an action of the Lie $\infty$-algebra $E$ on the Lie $\infty$-algebra $V$. {By Proposition~\ref{prop:lie:infty:algebra:E+V}, $\Phi$ induces a coderivation $\Upsilon$ of $\bar S(E\oplus V)$ and $M_{E \oplus V}= M_E + \Upsilon+ M_V$ is a Lie $\infty$-algebra structure on $E\oplus V$. Let $\mathcal{P}:\Coder (\bar S(E\oplus V))\to \mathfrak{h}$ be the projection onto $\mathfrak{h}$.}
Then we have:
\begin{prop} \label{prop:Vdata:h}
The quadruple $\displaystyle \left(\Coder (\bar S(E\oplus V)), \mathfrak{h}, \mathcal{P}, M_{E \oplus V}\right)$ is a $V$-data and $\mathfrak{h}$ has a Lie $\infty$-algebra structure.
\end{prop}
\begin{proof}
We already know that $\Coder (\bar S(E\oplus V))$, equipped with the commutator, is a graded Lie algebra and $\mathfrak{h}$ is an abelian Lie subalgebra.
Let $p:\bar S(E\oplus V)\to E$ be the projection and $i: \bar S(V)\to \bar S(E\oplus V)$ the inclusion.
Notice that, for each $Q\in\Coder (\bar S(E\oplus V))$ we have $\displaystyle \mathcal{P}(Q)=p\smalcirc Q\smalcirc i$ so
$$\ker \mathcal{P}=\set{Q\in\Coder (\bar S(E\oplus V)): Q\smalcirc i \mbox{ is a coderivation of } \bar S(V)}$$
is clearly a Lie subalgebra of $\Coder (\bar S(E\oplus V))$:
\begin{align*}
\mathcal{P}(\brr{Q,P}_{c})&= p\smalcirc \brr{Q,P}_{c}\smalcirc i=p\smalcirc QP\smalcirc i - (-1)^{|Q||P|}p\smalcirc PQ\smalcirc i\\
&=p\smalcirc Q\smalcirc i\smalcirc P\smalcirc i - (-1)^{|Q||P|}p\smalcirc P\smalcirc i\smalcirc Q\smalcirc i=0, \quad P,Q\in\ker \mathcal{P}.
\end{align*}
Moreover
$$M_{E \oplus V} \smalcirc i =M_{V}, \mbox{ so } M_{E \oplus V}\in (\ker\mathcal{P})_{1}$$
and, since $M_{E \oplus V}$ defines a Lie $\infty$-structure in $E\oplus V$, we have:
$$
\brr{M_{E \oplus V},M_{E \oplus V}}_{c}=0.
$$
Voronov's construction \cite{V05} guarantees that $\mathfrak{h}$ inherits a (symmetric) Lie $\infty$-structure given by:
\begin{align*}
\partial_{k}(t_{1},\ldots,t_{k})=\mathcal{P}\big([[\ldots\brr{M_{E \oplus V}, t_{1}}_{_{RN}}\ldots]_{_{RN}}, t_{k}]_{_{RN}}\big), \quad t_{1},\ldots,t_{k}\in\mathfrak{h}, \, k\geq 1.
\end{align*}
\end{proof}
\begin{rem}
A proof similar to the one in \cite{LST2021} shows that, with the above structure, $\mathfrak{h}$ is a filtered Lie $\infty$-algebra.
\end{rem}
\begin{lem}\label{lemma:bracket:voronov}
Let
$t:\bar S(V)\to E$ be a degree zero element of $\mathfrak{h}$.
For each $v\in \bar S(V)$,
\begin{align*}
\partial_{1}t(v)=l_{1}t(v)-t\smalcirc M_{V}(v)
\end{align*}
and
\begin{align*}
\partial_{k}(t,\ldots,t)(v)&=l_{k}\big(t(v_{(1)}),\ldots, t(v_{(k)})\big)-k \, t \left(\Phi_{t(v_{(1)})
\odot\ldots\odot t(v_{(k-1)})} v_{(k)}\right), \quad k\geq 2.
\end{align*}
\begin{proof}
Let $p:{\bar S(E\oplus V)}\to E$ be the projection map and $\mathfrak{t}$ the coderivation of $\bar S(E\oplus V)$ defined by $t$.
Notice that
\begin{align*}
p\smalcirc \mathfrak{t}&= t \\
p\smalcirc \mathfrak{t}^{k}&= 0, \quad k\geq 2.
\end{align*}
Consequently, for $k=1$ we have
\begin{align*}
\partial_{1}t(v)=p\smalcirc M_{E \oplus V}\smalcirc {\mathfrak{t}}(v)- p\smalcirc \mathfrak{t}\smalcirc M_{V}(v)=l_{1}t(v)-t\smalcirc M_{V}(v),
\,\,{v \in \bar S(V)}
\end{align*}
and, for $k\geq 2$,
\begin{align*}
\partial_{k}(t,\dots,t)&=p\smalcirc M_{E \oplus V}\smalcirc \mathfrak{t}^{k} - k \, p\smalcirc \mathfrak{t}\smalcirc M_{E \oplus V}\smalcirc \mathfrak{t}^{k-1}\\
&=l\smalcirc \mathfrak{t}^{k} - k\,t\smalcirc M_{E \oplus V}\smalcirc \mathfrak{t}^{k-1}
\end{align*}
and the result follows.
\end{proof}
\end{lem}
\begin{rem}
Notice that
$\displaystyle \partial_{k}(t,\dots,t)(v)=0$, for $v\in S^{<k}(V)$, as a consequence of $\mathfrak{t}^{k}(v)=0$ and $\Phi\smalcirc \mathfrak{t}^{k-1}(v) \in \bar S(E)$.
\end{rem}
The next proposition realizes $\O$-operators as Maurer-Cartan elements of this Lie $\infty$-algebra $\mathfrak{h}$.
\begin{prop} \label{prop:1-1correspondence}
$\O$-operators {on $E$ with respect to an action $\Phi$} are
Maurer-Cartan elements of $\mathfrak{h}$.
\end{prop}
\begin{proof}
Let $t:\bar S(V)\to E$ be a degree zero element of $\mathfrak{h}$.
The Maurer-Cartan equation yields
$$
\partial_{1}t + \frac{1}{2}\partial_{2}(t,t) + \dots+ \frac{1}{k!} \partial_{k}(t, \ldots,t) + \ldots =0.
$$
Using Lemma \ref{lemma:bracket:voronov} we have, for each $v\in S^{k}(V)$,
\begin{align*}
\partial_{1}t (v)+& \frac{1}{2}\partial_{2}(t,t) (v)+ \dots +\frac{1}{k!} \partial_{k}(t, \ldots,t) (v) =\\
&= l_{1}t(v)-t\smalcirc M_{V}(v) \\
& +\frac{1}{2}l_{2}\big(t(v_{(1)}), t(v_{(2)})\big)- t \left(\Phi_{t(v_{(1)})} v_{(2)}\right)+ \ldots +\\
& +\frac{1}{k!}l_{k}\big(t(v_{(1)}),\ldots, t(v_{(k)})\big)-\frac{1}{(k-1)!} \, t \left(\Phi_{t(v_{(1)})
\odot\ldots\odot t(v_{(k-1)})} v_{(k)}\right).\\
\end{align*}
Let $T:\bar S(V)\to \bar S(E)$ be the morphism of coalgebras defined by $t:\bar S(V)\to E$.
The Maurer-Cartan equation can be written as
$$
l\smalcirc T(v) - t\smalcirc M_{V}(v) - t\,\Phi_{T(v_{(1)})}v_{(2)}=0,
$$
which is equivalent to $T$ being an $\O$-operator (see Equation (\ref{def:O:operator})).
\end{proof}
\section{Deformation of \texorpdfstring{$\O$}{O}-operators} \label{section4}
{We prove that each Maurer-Cartan element of a special graded Lie subalgebra of $\Coder(\bar S(E\oplus V))$ encodes a Lie $\infty$-algebra structure on $E$ and a curved Lie $\infty$-action of $E$ on $V$. We study deformations of $\O$-operators.}
\subsection{Maurer-Cartan elements of $\Coder(\bar{S}(E\oplus V))$}
Let $E$ and $V$ be two graded vector spaces and consider the graded Lie algebra $\mathfrak{L}:=( \Coder(\bar S(E\oplus V)), \brr{\cdot,\cdot}_{c})$.
Since
$\bar S(E\oplus V)\simeq \bar S(E) \oplus {(\bar S(E) \otimes \bar S(V))\oplus \bar S(V)}$,
the space $M:=\Coder(\bar S(E))$ of coderivations of $\bar S(E)$ can be seen as a graded Lie subalgebra of $\mathfrak{L}$. Also, the space $R$ of coderivations {defined by linear maps of the space ${\mathrm{Hom}}((\bar S(E) \otimes \bar S(V))\oplus \bar S(V),V)$} can be embedded in $\mathfrak{L}$.
We will use the identifications $M\equiv {\mathrm{Hom}}(\bar S(E),E)$ and $R\equiv {\mathrm{Hom}}((\bar S(E) \otimes \bar S(V))\oplus \bar S(V),V)$.
Given $\rho\in R$, we will denote by $\rho_0$ the restriction of the linear map $\rho$ to $\bar S(V)$ and by $\rho_x$ the linear map obtained by restriction of $\rho$ to $\set{x}\otimes \bar S(V)$, with $x \in \bar{S}(E)$.
We set $\mathfrak{L}':=M\oplus R$.
\begin{prop} \label{prop:L':Lie:subalgebra}
The space $\mathfrak{L}'$ is a graded Lie subalgebra of $\mathfrak{L}=\Coder (\bar S(E \oplus V))$.
\end{prop}
\begin{proof}
Given $m\oplus \rho, m'\oplus \rho' \in \mathfrak{L}'$, let us see that
$$\brr{m\oplus \rho,m'\oplus \rho'}_{_{RN}}= \brr{m,m'}_{_{RN}} \oplus (\brr{m,\rho'}_{_{RN}} + \brr{\rho, m'}_{_{RN}} + \brr{\rho, \rho'}_{_{RN}})$$ is an element of $\mathfrak{L}'$. It is obvious that $\brr{m,m'}_{_{RN}}\in {\mathrm{Hom}}(\bar S(E),E)$. Consider $m^D$ and $\rho^D$ the coderivations of $\bar S(E\oplus V)$ defined by the morphisms $m$ and $\rho$, respectively. For $x \in \bar S(E)$ and $v \in \bar S(V)$ we have,
\begin{align*}
\brr{m,\rho'}_{_{RN}}(x)&=\brr{m,\rho'}_{_{RN}}(v)=0\\
\brr{m,\rho'}_{_{RN}}(x \otimes v)&= \left( m\smalcirc \rho'^D - (-1)^{|m||\rho'|} \rho' \smalcirc m^D \right)(x \odot v)\\
& = - (-1)^{|m||\rho'|} \rho'_{m^D(x)}(v) \in V
\end{align*}
and
\begin{align*}
\brr{\rho,\rho'}_{_{RN}}(x)&=0\\
\brr{\rho,\rho'}_{_{RN}}(v)&= \rho \smalcirc \rho'^D (v)- (-1)^{|\rho||\rho'|} \rho' \smalcirc \rho^D (v) \in V \\
\brr{\rho,\rho'}_{_{RN}}(x \otimes v)&=
\underbrace{(-1)^{|x||\rho'|}\rho_x(\rho_0'^D(v)) + (-1)^{|x_{(1)}||\rho'|}\rho_{x_{(1)}}(\rho_{x_{(2)}}'^D(v)) + \rho_0(\rho_x'^D(v))}_{\in V}\\
&- (-1)^{|\rho||\rho'|}\big( \underbrace{(-1)^{|x||\rho|}\rho'_x(\rho_0^D(v))+ (-1)^{|x_{(1)}||\rho|}\rho'_{x_{(1)}}(\rho_{x_{(2)}}^D(v)) + \rho'_0(\rho_x^D(v)) }_{\in V} \big),
\end{align*}
which proves that $\brr{m,\rho'}_{_{RN}} + \brr{\rho, m'}_{_{RN}}+\brr{\rho, \rho'}_{_{RN}} \in {\mathrm{Hom}}((\bar S(E) \otimes \bar S(V))\oplus \bar S(V),V)$.
\end{proof}
\
The next theorem shows that an element $m \oplus \rho \in \mathfrak{L}'$ which is a Maurer-Cartan element of $\mathfrak{L}=\Coder (\bar S(E \oplus V))$ encodes a Lie $\infty$-algebra structure on $E$ and an action of $E$ on the Lie $\infty$-algebra $V$.
\begin{thm} \label{prop:MC:L'+h}
Let $E$ and $V$ be two graded vector spaces and $m \oplus \rho \in \mathfrak{L}'=M\oplus R$. Then,
$m \oplus \rho$ is a Maurer-Cartan element of $\mathfrak{L}'$ if and only if
$m^D$ defines a Lie $\infty$-structure on $E$ and $\rho$ is a curved Lie $\infty$-action of $E$ on $V$.
\end{thm}
\begin{proof}
We have
\begin{equation} \label{eq:MC_equival_action}
\brr{m\oplus \rho,m\oplus \rho}_{_{RN}}=0 \Leftrightarrow
\begin{cases}\brr{m,m}_{_{RN}}=0\\ 2 \brr{m,\rho}_{_{RN}} + \brr{\rho, \rho}_{_{RN}}=0.
\end{cases}
\end{equation}
Similar computations to those in the proof of Proposition~\ref{prop:L':Lie:subalgebra} give, for all $v \in \bar S(V)$ and $x \in \bar S(E)$,
\begin{eqnarray*}
\lefteqn{\begin{cases}
\left( 2 \brr{m,\rho}_{_{RN}} + \brr{\rho, \rho}_{_{RN}}\right)(v)= 0 \\
\left( 2 \brr{m,\rho}_{_{RN}} + \brr{\rho, \rho}_{_{RN}}\right)(x\otimes v)=0
\end{cases}}\\
& \Leftrightarrow
\begin{cases}
\rho_0 \smalcirc \rho_0^D(v)=0\\
\rho_{m^D(x)}(v) = \left( -\brr{\rho_0, \rho_x}_{_{RN}}
- \frac{(-1)^{|x_{(1)}|} }{2}\brr{\rho_{x_{(1)}}, \rho_{x_{(2)}}}_{_{RN}}\right)(v).
\end{cases}
\end{eqnarray*}
Since $m \oplus \rho$ is a degree $+1$ element of $\mathfrak{L}'$, the right-hand side of \eqref{eq:MC_equival_action} means that $m^D$ defines a Lie $\infty$-algebra structure on $E$ and $\rho= \sum_{k\geq 0} \rho_k$ is a curved Lie $\infty$-action of $E$ on $V$.
Notice that $\rho_0^D:\bar S(V) \to \bar S(V)$ equips $V$ with a Lie $\infty$-structure.
Conversely, if $(E,m^D)$ is a Lie $\infty$-algebra and $\rho$ is a curved Lie $\infty$-action of $E$ on $V$, the degree $+1$ element $m \oplus \rho$ of $\mathfrak{L}'$ is a Maurer-Cartan element of $\mathfrak{L}'$.
\end{proof}
The next proposition gives the Lie $\infty$-algebra that controls the deformations of the actions of $E$ on $V$ \cite{G}.
\begin{prop} \label{prop:Lie:infty:controls:MC}
Let $m \oplus \rho$ be a Maurer-Cartan element of $\mathfrak{L}'$ and $m' \oplus \rho'$ a degree $+1$ element of $\mathfrak{L}'$. Then, $m \oplus \rho+ m' \oplus \rho'$ is a Maurer-Cartan element of $\mathfrak{L}'$ if and only if $m' \oplus \rho'$ is a Maurer-Cartan element of $\mathfrak{L}'\,^{m \oplus \rho}$.
Here, $\mathfrak{L}'\,^{m \oplus \rho}$ denotes the DGLA which is the twisting of $\mathfrak{L}'$ by $m \oplus \rho$.
\end{prop}
\subsection{Deformation of \texorpdfstring{$\O$}{O}-operators}
Let $\mathfrak{h}$ be the abelian Lie subalgebra of $\mathfrak{L}=\Coder(\bar S(E\oplus V))$ considered in Proposition~\ref{prop:h:abelian:subalgebra} and $\mathcal{P}:\mathfrak{L} \to \mathfrak{h}$ the projection onto $\mathfrak{h}$. Let $\Delta \in \mathfrak{L}'$ be a Maurer-Cartan element of $\mathfrak{L}$. Then, $\displaystyle \left(\mathfrak{L}, \mathfrak{h}, \mathcal{P}, \Delta \right)$ is a $V$-data and $\mathfrak{h}$ has a Lie $\infty$-structure given by the brackets:
$$\partial_{k}(a_{1},\ldots,a_{k})=\mathcal{P}([[\ldots\brr{\Delta, a_{1}}_{_{RN}}\ldots]_{_{RN}}, a_{k}]_{_{RN}}), \quad k\geq 1.$$
We denote by $\mathfrak{h}_\Delta$ the Lie $\infty$-algebra $\mathfrak{h}$ equipped with the above structure.
The $V$-data $\displaystyle \left(\mathfrak{L}, \mathfrak{h}, \mathcal{P}, \Delta \right)$ also provides a Lie $\infty$-algebra structure on $\mathfrak{L}[1] \oplus \mathfrak{h}$, that we denote by $(\mathfrak{L}[1] \oplus \mathfrak{h})_\Delta$, with brackets \cite{V05}:
\begin{equation} \label{eq:bracket:q:Delta}
\begin{cases}
q_1^\Delta((x,a_1))=(-\brr{\Delta, x}_{_{RN}}, \mathcal{P}(x+ \brr{\Delta, a_1}_{_{RN}})) \vspace{.2cm}\\
q_2^\Delta(x,x')= (-1)^{deg(x)}\brr{x,x'}_{_{RN}}\vspace{.2cm} \\
q_k^\Delta(x, a_1, \ldots, a_{k-1})=\mathcal{P}([ \ldots [[x, a_1]_{_{RN}}, a_2]_{_{RN}}\ldots a_{k-1}]_{_{RN}}),\,\,\,k\geq2,\vspace{.2cm} \\
q_k^\Delta(a_1, \ldots, a_{k})= \partial_k(a_1, \ldots, a_{k}),\,\,\,k\geq 1,
\end{cases}
\end{equation}
where $x,x' \in \mathfrak{L}[1]$ and $a_1, \ldots,a_{k} \in \mathfrak{h}$. Here, $deg(x)$ is the degree of $x$ in $\mathfrak{L}$.
\
Moreover, since $\mathfrak{L}'$ is a Lie subalgebra of $\mathfrak{L}$ satisfying $\brr{\Delta, \mathfrak{L}'}\subset \mathfrak{L}'$, the brackets $\set{q_k^\Delta}_{k \in \mathbb N}$ restricted to $\mathfrak{L}'[1] \oplus \mathfrak{h}$ define a Lie $\infty$-algebra structure on $\mathfrak{L}'[1] \oplus \mathfrak{h}$, that we denote by $(\mathfrak{L}'[1] \oplus \mathfrak{h})_\Delta$. Notice that the restrictions of the brackets $\set{q_k^\Delta}$ to $\mathfrak{L}'[1] \oplus \mathfrak{h}$ are given by the same expressions as in \eqref{eq:bracket:q:Delta} except for $k=1$:
$$q_1^\Delta((x,a_1))=(-\brr{\Delta, x}_{_{RN}}, \mathcal{P}(\brr{\Delta, a_1}_{_{RN}}))=(-\brr{\Delta, x}_{_{RN}}, \partial_1(a_1)),$$
because $\mathcal{P}(\mathfrak{L}')=0$.
Of course, $\mathfrak{h}_\Delta$ is a Lie $\infty$-subalgebra of $(\mathfrak{L}'[1]\oplus \mathfrak{h})_\Delta$.
\begin{rem}
The brackets \eqref{eq:bracket:q:Delta} that define the Lie $\infty$-algebra structure of $(\mathfrak{L}[1] \oplus \mathfrak{h})_\Delta$ coincide with those of $\mathfrak{h}_\Delta$ for $x=x'=0$. So, an easy computation yields
$$t \in \textrm{MC}(\mathfrak{h}_\Delta) \, \Leftrightarrow \, (0,t) \in \textrm{MC}(\mathfrak{L}'[1] \oplus \mathfrak{h})_\Delta.$$
\end{rem}
Theorem 3 in \cite{FZ} yields:
\begin{prop} \label{prop_FZ}
Consider the $V$-data $\displaystyle \left(\mathfrak{L}, \mathfrak{h}, \mathcal{P}, \Delta \right)$, with $\Delta\in \emph{MC}(\mathfrak{L}')$ and let $t$ be a degree zero element of $\mathfrak{h}$. Then,
$$t \in \emph{MC}(\mathfrak{h}_{\Delta})
\, \Leftrightarrow \,
(\Delta,t)\in \emph{MC}(\mathfrak{L}[1]\oplus \mathfrak{h})_{\Delta}.$$
\end{prop}
Recall that, given an element $t \in \mathfrak{h}={\mathrm{Hom}}(\bar S(V), E)$, the corresponding morphism of coalgebras $T: \bar S(V) \to \bar S(E)$ is an $\O$-operator if and only if $t$ is a Maurer-Cartan element of $\mathfrak{h}_\Delta$ (Proposition~\ref{prop:1-1correspondence}). Moreover, given a Maurer-Cartan element $m \oplus \rho$ of $\mathfrak{L'}$, we know from Theorem \ref{prop:MC:L'+h} that $(E, m^D)$ is a Lie $\infty$-algebra and $\rho$ is a curved Lie $\infty$-action of $E$ on $V$. So,
an $\O$-operator can be seen as a Maurer-Cartan element of the Lie $\infty$-algebra $(\mathfrak{L}'[1] \oplus \mathfrak{h})_\Delta$:
\begin{prop} \label{prop:T:MC}
Let $E$ and $V$ be two graded vector spaces. Consider a morphism of coalgebras $T: \bar S(V) \to \bar S(E)$ defined by $t \in \mathrm{Hom}(\bar S(V), E)$, and the $V$-data $\displaystyle \left(\mathfrak{L}, \mathfrak{h}, \mathcal{P}, \Delta \right)$, with $\Delta:=m \oplus \rho \in \emph{MC}(\mathfrak{L'})$. Then,
$T$ is an $\O$-operator on $E$ with respect to the curved Lie $\infty$-action $\rho$ if and only if $(\Delta, t)$ is a Maurer-Cartan element of $(\mathfrak{L}'[1] \oplus \mathfrak{h})_\Delta$.
\end{prop}
\begin{cor}
If $T$ is an $\O$-operator on the Lie $\infty$-algebra $(E, m^D)$ with respect to the curved Lie $\infty$-action $\rho$ of $E$ on $V$, then $((\mathfrak{L}'[1] \oplus \mathfrak{h})_{m \oplus \rho})^{(m \oplus \rho,t)}$ is a Lie $\infty$-algebra.
\end{cor}
As a consequence of Theorem 3 in \cite{FZ}, we obtain the Lie $\infty$-algebra that controls the deformation of $\O$-operators on $E$ with respect to a fixed curved Lie $\infty$-action on $V$:
\begin{cor} Let $E$ and $V$ be two graded vector spaces and consider the $V$-data $(\mathfrak{L}, \mathfrak{h}, \mathcal{P}, \Delta:=m\oplus \rho)$.
Let $T$ be an $\O$-operator on $(E,m^D)$ with respect to the curved Lie $\infty$-action $\rho$ and $T':\bar S(V) \to \bar S(E)$ a (degree zero) morphism of coalgebras defined by $t' \in \mathrm{Hom}(\bar S(V), E)$. Then, $T+T'$ is an $\O$-operator on $E$ with respect to the curved Lie $\infty$-action $\rho$ if and only if $(\Delta, t')$ is a Maurer-Cartan element of $(\mathfrak{L}'[1]\oplus \mathfrak{h})_{\Delta}^{(\Delta, t)}$.
\end{cor}
\begin{proof}
Let $t \in \mathfrak{h}$ be the morphism defined by $T$. Then \cite{FZ},
$$(\Delta, t+t')\in \textrm{MC}(\mathfrak{L}'[1]\oplus \mathfrak{h})_{\Delta} \, \Leftrightarrow (\Delta,t')\in \textrm{MC}(\mathfrak{L}'[1]\oplus \mathfrak{h})_{\Delta}^{(\Delta, t)}.
$$
\end{proof}
\section{Introduction}
One of the ideas that has grown out of the attempts to understand the fascinating properties of copper oxides is the conviction that doping of Mott insulators reveals a latent
tendency for superconducting pairing already hidden in the parent system. Whether this idea is relevant to the cuprates remains controversial. What we have at the moment is a theoretical demonstration of its validity for one-dimensional models such as models of ladders \cite{fisher}. It has been rigorously demonstrated that if the undoped ladder is a Mott insulator, then under doping it gives rise to superconducting quasi-long-range order. For a 3D array of ladders (LA) this would give rise to full-fledged superconductivity. Naturally, it would be interesting to find or to custom-make materials fitting the model description of LAs in order to check the validity of the theory. One example of a real material considered as a close approximation of the LA ideal is the so-called "telephone number" compound Sr$_{14-x}$Ca$_x$ Cu$_{24}$ O$_{41}$. Although this material possesses a lot of interesting properties, it appears to be too complicated due to the fact that its structure contains not just CuO ladders, but also chains \cite{girsh}.
In this paper we suggest that one can search for realizations of LA not just among CuO-based systems, such as the telephone number compound, but among nearly compensated metals such as ferropnictides. The Fermi surface of such metals consists of small electron and hole pockets so that the Fermi energy is much smaller than the
bandwidth, $\epsilon_F \ll W$. Systems of that kind can be considered as strongly correlated in the sense that, as was demonstrated
in Ref.\cite{chubukov08},
the interactions undergo strong renormalization, and at energies close to $\epsilon_F$ the effective Hamiltonian is substantially different from the one given by the band structure theory. This feature raises a red flag for numerical studies of finite systems, meaning that size effects in these systems must be really severe. Fortunately, it turns out that instead of complicating matters the renormalization simplifies the interaction pattern. Namely, interactions in the broad conduction and valence bands drive the system towards higher symmetry while simultaneously increasing their own strength. The resulting effective theory in the region of energies $\sim \epsilon_F$ is described by a highly symmetric Hamiltonian where instabilities in several different channels (Spin Density Wave, superconductivity and
Charge Density Wave)
compete with each other.
It is believed that finite $\epsilon_F$ breaks the symmetry and favors some particular type of order
\cite{chub}.
There is, however, a possibility that at zero doping no order is chosen, like in spin liquid or band insulator (after all in the pnictides there are two nonequivalent sites in the elementary cell). Such possibility is easily realized for the undoped 1D ladder which then becomes superconducting under doping \cite{fisher}.
We suggest that a "custom made" material for a doped spin liquid would be a "striped" ferropnictide, that is, a ferropnictide subject to a one-dimensional periodic potential of a moderate amplitude $\epsilon_F < U \ll W$ (further in the text we provide more detailed criteria). One can imagine such a potential being produced when a pnictide film is grown on a suitable substrate, or occurring naturally. Recently performed experiments indicate that the latter possibility is realized in the ferropnictide Ca(Fe$_{1-x}$Co$_{x}$)$_2$ As$_2$,
which naturally develops a strong in-plane anisotropy (stripe ordering?) and becomes rather one-dimensional as a result \cite{tiang}. Theoretically, a one-dimensional version of the pnictide Hamiltonian has recently been considered by Berg {\it et al.}\cite{berg}. These authors applied the DMRG method to the four-chain lattice Hamiltonian. Here we assume a much smaller degree of one-dimensionality, which occurs for a moderate potential modulation not affecting the states with energies larger than $U \ll W$. This allows us to assume that the renormalization process has taken its toll and that for energies smaller than $\epsilon_F$ one can use the simplified Hamiltonian with enlarged symmetry. The advantages of our model are numerous. First, we have a one-dimensional model where non-perturbative methods can be applied. Second, the situation we consider has a good chance to describe reality. Third, the stripe modulation may enhance the pairing strength, as it probably does for the cuprates \cite{tranquada},\cite{2DHub}. Fourth, by considering a symmetric model we simplify the discussion.
The corresponding Hamiltonian density has the form considered in \cite{kim}:
\begin{eqnarray}
&& {\cal H} = - c^+_{\sigma}\partial_x^2 c_{\sigma} + f^+_{\sigma}\partial_x^2f_{\sigma} - u_0(n_c -n_f)^2 + \label{kim}
\\
u_2\Big(c^+_{\uparrow}c^+_{\downarrow}f_{\downarrow}f_{\uparrow} + h.c.\Big) - k^2_F(n_c - n_f) - \mu(n_c + n_f) \nonumber
\end{eqnarray}
where $ \mu$ measures a deviation from a perfect nesting.
It is worth noticing that the Coulomb interaction, i.e., the $(n_c + n_f)^2$ term, renormalizes to zero \cite{AAC}. It is also possible to show that the disparity between the electron and hole masses only affects the coefficients in the two-dimensional RG equations without changing the low-energy Hamiltonian (\ref{kim}).
Assuming that the interactions are weak we linearize the spectrum close to the Fermi points:
\begin{eqnarray}
c = R\mbox{e}^{-\mbox{i} k_F x} + L\mbox{e}^{\mbox{i} k_F x}, ~~ f = r\mbox{e}^{\mbox{i} k_F x} + l\mbox{e}^{-\mbox{i} k_F x},
\end{eqnarray}
where $k_F$ is the Fermi momentum at $\mu =0$.
The kinetic energy acquires the standard form (we neglect the difference in the Fermi velocities of electrons and holes):
\begin{eqnarray}
T = \mbox{i} v(- R^+_{\sigma}\partial_x R_{\sigma} + L^+_{\sigma}\partial_x L_{\sigma}) + \mbox{i} v(- r^+_{\sigma}\partial_x r_{\sigma} + l^+_{\sigma}\partial_x l_{\sigma})
\end{eqnarray}
The Umklapp interaction becomes
\begin{eqnarray}
&& V_2 = u_2\Big\{\Big[L_{\uparrow}L_{\downarrow}r^+_{\downarrow}r^+_{\uparrow} + R_{\uparrow}R_{\downarrow}l^+_{\downarrow}l^+_{\uparrow} + \nonumber\\
&& \Big(R_{\uparrow}L_{\downarrow} + L_{\uparrow}R_{\downarrow} \Big)\Big(l^+_{\downarrow}r^+_{\uparrow} + r^+_{\downarrow}l^+_{\uparrow} \Big)\Big] + h.c.\Big\} \label{V2}
\end{eqnarray}
The densities are
\begin{eqnarray}
&& n_c = (R^+_{\sigma}R_{\sigma} + L^+_{\sigma}L_{\sigma}) + \mbox{e}^{-2\mbox{i} k_F x}L^+_{\sigma}R_{\sigma} + \mbox{e}^{2\mbox{i} k_F x}R^+_{\sigma}L_{\sigma}\nonumber\\
&& n_f = (r^+_{\sigma}r_{\sigma} + l^+_{\sigma}l_{\sigma}) + \mbox{e}^{-2k_F\mbox{i} x}r^+_{\sigma}l_{\sigma} + \mbox{e}^{2\mbox{i} k_F x}l^+_{\sigma}r_{\sigma}
\end{eqnarray}
The bosonization rules are standard:
\begin{eqnarray}
L_{\sigma} = \frac{\xi^{(1)}_{\sigma}}{\sqrt{\pi a_0}}\mbox{e}^{\mbox{i}\sqrt{4\pi}\varphi^{(1)}_{\sigma}}, ~~ R_{\sigma} = \frac{\xi^{(1)}_{\sigma}}{\sqrt{\pi a_0}}\mbox{e}^{-\mbox{i}\sqrt{4\pi}\bar\varphi^{(1)}_{\sigma}} \label{bos}
\end{eqnarray}
and the same for $r,l$ with index 1 being replaced with 2. Here $\varphi,\bar\varphi$ are chiral bosonic fields and $\xi$ are coordinate independent Klein factors:
\begin{eqnarray}
\{ \xi_{\sigma}^a,\xi_{\sigma'}^b\} = \delta_{\sigma\sigma'}\delta_{ab}
\end{eqnarray}
The Klein factors constitute the Clifford algebra and are Dirac $\gamma$-matrices of the O(6) group.
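As an illustration (our own sketch, not part of the derivation), one can realize four such anticommuting Klein factors explicitly as $\gamma$-matrices built from tensor products of Pauli matrices; the snippet below uses the normalization $\{\gamma_a,\gamma_b\}=2\delta_{ab}$, which may differ from the convention above by a factor of 2:
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

# four mutually anticommuting 4x4 gamma-matrices, one per Klein
# factor; two more can be added to generate the full O(6) algebra
gammas = [np.kron(sx, I2), np.kron(sy, I2),
          np.kron(sz, sx), np.kron(sz, sy)]

for a, ga in enumerate(gammas):
    for b, gb in enumerate(gammas):
        anti = ga @ gb + gb @ ga
        assert np.allclose(anti, 2.0 * (a == b) * np.eye(4))
\end{verbatim}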
Substituting (\ref{bos}) into (\ref{V2}) and introducing new fields
\begin{eqnarray}
&& \phi_c^{(+)} = \frac{1}{2}\Big[\phi_{\uparrow}^{(1)} + \phi_{\downarrow}^{(1)} + \phi_{\uparrow}^{(2)} + \phi_{\downarrow}^{(2)} \Big]\nonumber\\
&& \phi_c^{(-)} = \frac{1}{2}\Big[\phi_{\uparrow}^{(1)} + \phi_{\downarrow}^{(1)} - \phi_{\uparrow}^{(2)} - \phi_{\downarrow}^{(2)} \Big]\nonumber\\
&& \phi_s^{(+)} = \frac{1}{2}\Big[\phi_{\uparrow}^{(1)} - \phi_{\downarrow}^{(1)} + \phi_{\uparrow}^{(2)} - \phi_{\downarrow}^{(2)} \Big]\nonumber\\
&& \phi_s^{(-)} = \frac{1}{2}\Big[\phi_{\uparrow}^{(1)} - \phi_{\downarrow}^{(1)} - \phi_{\uparrow}^{(2)} + \phi_{\downarrow}^{(2)} \Big]
\end{eqnarray}
we get
\begin{eqnarray}
&& V = 4u_2\frac{\xi_{\uparrow}^{(1)}\xi_{\downarrow}^{(1)}\xi_{\downarrow}^{(2)}\xi_{\uparrow}^{(2)}}{(\pi a_0)^2}\times\\
&& \cos[\sqrt{4\pi}\theta_c^{(-)}]\Big\{\cos[\sqrt{4\pi}\phi_c^{(+)}] + \cos[\sqrt{4\pi}\phi_s^{(+)}] + \cos[\sqrt{4\pi}\phi_s^{(-)}] \Big\},\nonumber
\end{eqnarray}
where $\phi = \varphi + \bar\varphi, ~~ \theta = \varphi - \bar\varphi$.
This interaction can be refermionized:
\begin{eqnarray}
V = u_2\bar\psi_0\psi_0\sum_{a=1}^3(\bar\psi_a\psi_a)
\end{eqnarray}
where $\bar\psi_a = (L^+,R^+)_a, \psi^T_a = (R,L)_a$.
The new fermions transform in the vector representation of the SO(8) group and should not be confused with the original ones.
The density-density interaction gives rise to the term
\begin{eqnarray}
V_0 = - u_0\sum_{a\neq b}(\bar\psi_a\psi_a)(\bar\psi_b\psi_b)
\end{eqnarray}
As a result we get the Gross-Neveu model with $Z_2\times$O(6) symmetry:
\begin{eqnarray}
&& {\cal H} = \mbox{i} v\sum_{a=1}^3(- R^+_a\partial_x R_a + L^+_a\partial_xL_a) + \nonumber\\
&& \mbox{i} v (- R^+_0\partial_x R_0 + L^+_0\partial_xL_0) + \nonumber\\
&& -u_0\sum_{a\neq b}(R^+_aL_a + L^+_aR_a)(R^+_bL_b + L^+_bR_b) - \nonumber\\
&& u_0' (R^+_aL_a + L^+_aR_a)^2 + \nonumber\\
&& u_2(R^+_0L_0 + L^+_0R_0)\sum_{a=1}^3(R^+_aL_a + L^+_aR_a) \nonumber\\
&& - \mu(R_1^+R_1 + L^+_1L_1) -h(R^+_2R_2 + L^+_2L_2) \label{GN}
\end{eqnarray}
(here the index 0 corresponds to $(c,-)$, 1 to $(c,+)$, 2 and 3 to $(s,\pm)$). The RG equations can be extracted from \cite{fisher}:
\begin{eqnarray}
&& \dot u_0 = - 4u_0^2 - 2u_2^2, ~~ \dot u_2 = -(u_0' + 5u_0)u_2, \nonumber\\
&& \dot u_0' = - 4u_0^2 - 2u_2^2 \label{RG}
\end{eqnarray}
Let us consider the undoped ($\mu =0$) case first. Analysis of Eqs.(\ref{RG}) shows that the interactions scale to strong coupling under rather general conditions. At strong coupling the RG trajectories asymptotically restore the SO(8) symmetry $u_0 = u_0' = |u_2|$. The spectrum is gapful and at the SO(8) symmetric point is known exactly (one can find a detailed discussion in \cite{fisher}). The fact that the spectrum is gapful is an indication that the undoped state is not a superconductor. In the ground state the fields $\phi_c^{(+)},\phi_s^{(\pm)}$ freeze at $\Phi =0$, and the field $\theta_c^{(-)}$ freezes at $0$ or $\sqrt{\pi}/2$ depending on the sign of $u_2$.
There is no {\it local} order parameter corresponding to these field configurations. This situation is known as a $d$-Mott insulator \cite{fisher}. It is interesting that the coherent spin excitations (vector particles) are emitted at the Neel wave vector connecting the centers of the electron and hole pockets. The corresponding operator is the staggered spin current:
\begin{eqnarray}
&& {\bf N} = \mbox{i} [c^+\vec\sigma f - f^+\vec\sigma c],\label{N}\\
&& N^+ \approx \frac{4\mbox{i}\xi^{(1)}_{\uparrow}\xi^{(2)}_{\downarrow}}{\pi a_0}\times\nonumber\\
&& \cos[\sqrt{\pi}(\phi_c^{(+)} + \theta_c^{(-)})]\cos[\sqrt{\pi}(\phi_s^{(-)} + \theta_s^{(+)})]\nonumber
\end{eqnarray}
At finite doping the chemical potential always exceeds the charge gap.
Then the field $\phi_c^{(+)}$ becomes massless, but all other fields remain massive with doping-dependent, diminished spectral gaps. The correlation functions in this case have been calculated in \cite{konik}. Quasi-long-range
superconducting
order emerges with the order parameters
\begin{eqnarray}
&& \Delta_c = c_{\uparrow}c_{\downarrow} \approx L_{\uparrow}R_{\downarrow} + R_{\uparrow}L_{\downarrow} = \nonumber\\
&& \frac{2\xi^{(1)}_{\uparrow}\xi^{(1)}_{\downarrow} }{(\pi a_0)}\mbox{e}^{\mbox{i}\sqrt{\pi}[\theta_c^{(+)} + \theta_c^{(-)}]}\cos\{\sqrt\pi[\phi_s^{(+)} + \phi_s^{(-)}]\}\label{SC1}\\
&& \Delta_f = f_{\uparrow}f_{\downarrow} \approx l_{\uparrow}r_{\downarrow} + r_{\uparrow}l_{\downarrow} = \nonumber\\
&& \frac{2\xi^{(2)}_{\uparrow}\xi^{(2)}_{\downarrow} }{(\pi a_0)}\mbox{e}^{\mbox{i}\sqrt{\pi}[\theta_c^{(+)} - \theta_c^{(-)}]}\cos\{\sqrt\pi[\phi_s^{(+)} - \phi_s^{(-)}]\} \label{SC2}
\end{eqnarray}
The relation between the signs of the amplitudes is determined by the sign of $u_2$ (minus for $u_2 >0$). The phase of the OPs is $\theta_c^{(+)}$. The scaling dimensions are $d =1/(4K)$, where $K$ is the Luttinger liquid parameter in the charge sector. In the regime where the forward scattering is weak, $K \approx 1$. The pairing susceptibility is strongly divergent:
\begin{equation}
\chi_P \sim T^{-2 +2d} = T^{-3/2}.
\end{equation}
and hence the resulting superconductivity is of a strongly non-BCS nature.
Now we would like to discuss some aspects of the stripe formation. To simplify the calculations we model the stripe potential as a periodic sum of parabolic wells:
\begin{eqnarray}
&& U(y) = \sum_{n=-\infty}^{\infty} u(y-nb), \\
u(y) = U_0[2\pi y/b]^2\,\theta(b/2-|y|).\nonumber
\end{eqnarray}
In the tight binding approximation the lowest band has the wave functions
\begin{eqnarray}
&& \psi(q,y) = \frac{1}{\sqrt{\pi \xi N}}\sum_n\exp[-(y-nb)^2/2\xi^2]\mbox{e}^{\mbox{i} qn}, \nonumber\\
&& \xi^{-2} = (2\pi/b)\sqrt{U_0m} \label{psi}
\end{eqnarray}
($N$ being the number of stripes) with the transverse dispersion
\begin{eqnarray}
&&\epsilon(q) = 2t\cos (qb), \nonumber\\
&& t = \frac{\pi}{2}\sqrt{U_0/mb^2}\exp\Big[-(\pi/2)\sqrt{U_0mb^2}\Big]. \label{t}
\end{eqnarray}
For our model calculations to be valid we need this transverse dispersion to be somewhat smaller than the 1D gaps. The corrections in $(t/M)$ can be taken into account in the spirit of \cite{rrts}, where a very similar model of 2-leg ladders coupled by weak transverse tunneling was considered. Then, with the expression (\ref{t}) for the transverse tunneling available, we can estimate the transition temperature. From (\ref{SC1},\ref{SC2}) we conclude that the order parameter amplitude is $\sim M/W$ ($M$ being the gap scale in the doped regime). The Josephson coupling between the stripes is $J \sim t^2/M$. The pairing susceptibility is $\chi_P \sim \rho(\epsilon_F)(M/T)^{3/2}$. The criterion for the transition is $J\chi_P \sim 1$, which gives
\begin{eqnarray}
T_c \sim M\Big[(t^2/W)\rho(\epsilon_F) M\Big]^{2/3}
\end{eqnarray}
This 3D transition temperature is significantly smaller than the energy gap $M$. In that sense a hypothetical pnictide stripe superconductor would be similar to the underdoped cuprates, where the spin gap greatly exceeds the transition temperature.
Naturally, the wave function (\ref{psi}) enters the momentum dependence of various correlation functions through the form factor. Thus the correlation functions of various densities (such as, for instance, (\ref{N})) at low energies will have a dynamical susceptibility displaying 1D dispersion:
\begin{eqnarray}
&& \langle\la {\bf N}(-\omega,-{\bf q}){\bf N}(\omega,{\bf q}')\rangle\ra = \label{fact}\\
&& g(q_{\perp})g(q'_{\perp})\sum_m\delta(q_{\perp}-q'_{\perp} - 2\pi m/b)D(\omega,q_{\parallel})\nonumber\\
&& g(q) \approx \frac{1}{\pi \xi}\int \mbox{d} y \mbox{e}^{\mbox{i} qy} \exp[-y^2/\xi^2] = \mbox{e}^{- (q\xi/2)^2}\nonumber
\end{eqnarray}
The transverse wave vector at which the intensity drops sharply is
\begin{equation}
q_{max} \approx (2\pi/b)[U_0mb^2/\pi^2]^{1/4}
\end{equation}
Due to a rapid exponential dependence of the transverse tunneling amplitude (\ref{t}) it is conceivable to have $q_{max} < 2\pi/b$ and still have $t < M$. We mention this because
the factorization (\ref{fact}) in combination with 1D dispersion of gapped magnetic excitations has been observed in FeTe$_{0.6}$Se$_{0.4}$ \cite{tranq1}. The ARPES data presented in \cite{tranq1} also show that the low-lying quasiparticle excitations are anisotropic which is consistent with the anisotropy of the magnetic excitation spectrum.
We conclude the paper with a brief discussion of the salient features the experimentalists have to look for. Primarily these are spectral gaps; since model (\ref{GN}) has a complicated spectrum, the gaps are likely to be different in different dynamical response functions.
Although in this picture we considered a situation in which there is one electron and one hole band at the chemical potential, the number can differ for different pnictides. For instance, the STM measurements of the only known striped pnictide Ca(Fe$_{0.97}$Co$_{0.03}$)$_2$ As$_2$ \cite{tiang} found only one hole band (though strongly one-dimensional) and no evidence for electron bands. A possible explanation is that one electron and one hole band got paired, producing a gap and leaving one unpaired band behind. The evidence in favor of such an explanation comes from the fact that the observed density of states is strongly energy dependent on the scale $|E| < 0.1$eV and exhibits a mixed metallic and pseudogap-like shape. In any case, a detailed explanation of these experiments is probably premature and is not a purpose of this paper, whose primary goal is to attract attention to the subject of striped pnictides.
I am grateful to Maxim Khodas and John Tranquada for fruitful discussions and to Andrey Chubukov for reading the manuscript and making valuable remarks.
This research was supported by US DOE, Office of Basic Energy Science as a part of CES. The Center for Emerging Superconductivity is a DOE
Energy Frontier Research Center.
\section{Introduction}
The rotation-powered pulsars (RPPs) are known as rapidly spinning and strongly magnetized
neutron stars that are radiating at the expense of their rotational energy. The X-ray
emission of RPPs may contain both thermal and non-thermal components. The thermal emission
might be further divided into non-pulsed and pulsed components. The non-pulsed component,
originates from the cooling of the neutron star,
is from the whole pulsar surface with a characteristic temperature of about 0.1 keV,
while the pulsed component comes from hot spots
($\sim$ 1.0 keV) on the pulsar surface, which are heated by the bombardment of
relativistic particles streaming back to the surface from the pulsar magnetosphere (Kundt
\& Schaaf 1993, Zavlin et al. 1995, Gil \& Krawczyk 1996). The non-thermal pulsar emission
is from the pulsar magnetosphere, and it might also contain pulsed (e.g., Cheng \& Zhang
1999; Zhang \& Harding 2000) and non-pulsed components (e.g., Tennant et al. 2001; Becker
et al. 2004). In some cases, a pulsar wind nebula (PWN) is found to surround a RPP. The
X-ray emission of the PWN is non-pulsed and often dominates the non-thermal emission of
the system.
A lot of effort has been devoted to the statistical studies of pulsar X-ray emission
properties, with particular emphasis on the efficiency of conversion of the pulsar spin
down power $\dot{E}$ into X-ray luminosity. By using the {\sl Einstein} data, Seward \&
Wang (1988) found that $L_{\rm X}\propto \dot{E}^{1.39}$, where $L_{\rm X}$ is the
0.2-4.0 keV X-ray luminosity of the pulsar (plus PWN). Becker \& Tr{\"u}mper (1997)
obtained $L_{\rm X}\simeq 10^{-3}\dot{E}$ using a sample of 27 pulsars observed by {\sl
ROSAT}, where $L_{\rm X}$ is the total X-ray (0.1-2.4 keV) luminosity of the pulsar plus
PWN. However, in these two works, the thermal emission may contribute significantly to
the total X-ray luminosity given the adopted soft X-ray band. Saito (1998) analyzed 16
RPPs observed by {\sl ASCA} (2-10 keV) and found $L_{\rm X}\propto \dot{E}^{3/2}$.
Possenti et al. (2002) reported $L_{\rm X}\propto \dot{E}^{1.34}$ using 39 pulsars
observed by {\sl ASCA, RXTE, BeppoSAX, Chandra} and {\sl XMM-Newton}. The X-ray
luminosities in Saito (1998) and Possenti et al. (2002) also include the total emission
due to the pulsars plus PWNe, given the limited spatial resolutions of {\sl ASCA}, {\sl
RXTE}, and {\sl BeppoSAX}. Cheng et al. (2004) divided the total X-ray emission into
pulsed and non-pulsed components, and found that the X-ray luminosity of the pulsed one
follows $L_{\rm X,pul}\propto\dot{E}^{1.2}$, which agrees with the model prediction
$L_{\rm X}\propto\dot{E}^{1.15}$ by Cheng \& Zhang (1999). For non-pulsed emission, they
got $L_{\rm X,npul}\propto\dot{E}^{1.4}$, where they supposed that the non-pulsed
component comes mainly from PWNe and the contribution of the non-pulsed component from
pulsars is negligible. It is worth noting that the scatter in the relation is large,
as pointed out by Possenti et al. (2002), who performed a study including estimates of the observational
errors and showed that a simple linear fit is statistically unacceptable.
All these previous works suffer from the low spatial resolution of the detectors, making
it difficult to resolve the emission of the pulsars from that of their surrounding PWNe.
It is the purpose of our current work to resolve and to analyze the pulsar and the PWN emission
separately. Thanks to the high spatial resolution observations performed with {\sl
Chandra} and {\sl XMM-Newton}, we have been able to satisfactorily investigate 27 pulsars
and 24 PWNe, for which we have determined the non-thermal X-ray fluxes and spectra in
the 2-10 keV band. Then we have carried out separate statistical studies of RPPs and PWNe,
and tested the consistency of their emission properties with current models. The
organization of this paper is as follows: the sample and the data processing are presented
in section 2; the statistical analyses of the X-ray spectral properties of RPPs and PWNe are
given in section 3; we discuss the physical implications of our results in section 4 and
summarize our work in section 5.
\section{Sample and Data processing}
We collect pulsar and PWN samples from the observations by {\sl Chandra} and {\sl
XMM-Newton}, which both have high spatial resolutions, i.e., $\sim1\arcsec$ and
$\sim6\arcsec$, respectively. We take the {\sl Chandra} data directly from the
literature, and if there are no published results, we analyze the data in this paper.
The {\sl XMM-Newton} data are adopted only if there are no relevant data from {\sl
Chandra} for the same source. All the {\sl XMM-Newton} results are taken from the
literature. In total we obtain the X-ray spectra of 27 RPPs and 24 PWNe. In our samples,
millisecond pulsars (MSPs) are not included. It is generally believed that MSPs have
undergone an accretion-driven spin-up phase, and they are usually old and regarded as a
significantly different class. A similar study of the MSPs is also limited by the scarce data
available. Therefore we do not analyze MSPs here, although we discuss them when comparing
our analysis with the previous work including MSPs.
In our samples, there are 15, out of 27, spectra of pulsars obtained from the
archived {\sl Chandra} data. We select only the pulsars detected by the Advanced CCD
Imaging Spectrometer (ACIS) in the Timed Exposure (TE) Mode, in which a pulsar is able
to be resolved spatially from its surrounding PWN. We calibrate the data with CIAO
(ver 3.4) and CALDB (ver 3.3.0). We first reprocess the Level 1 data
for the correction of the charge transfer inefficiency (CTI) effects, then
clean the background and remove the afterglow. Time intervals
with anomalous background rates associated with particle flare events are further
rejected from the Level 2 data. The pulsar positions are then obtained with the {\it
celldetect} tool in CIAO. Finally, the spectra are extracted from the Level 2 data and then fit
with {\sl XSPEC}. We use both the power-law (PL) and the power-law+blackbody (PL+BB)
models to fit the pulsar spectra. If the resulted spectral indices are consistent
within errors in both models, then the results from the PL model are used, otherwise
those from the PL+BB model are used. In our spectral analysis, we show errors at the
90\% confidence level.
Pileup occurs when more than one photon is collected in one pixel within a CCD
readout frame, since those photons can only be recorded as a single photon event whose
energy is the sum of the individual photon energies. Therefore pileup may affect the
results of spectral analysis. According to section 6.14.2 in the {\sl Chandra} Proposers'
Observatory Guide v.7\footnote{http://cxc.harvard.edu/proposer/POG/}, the effect of
pileup can be omitted if the pileup fraction is $\le$10\%. However, pileup does affect
the spectral analysis even if the pileup fraction is $\le$10\%. For example, since the
pileup fraction of PSR J1930+1852 is estimated to be only 6\%, its spectral index is
reported to be $1.09^{+0.08}_{-0.09}$ without pileup correction (Lu et al. 2002),
whereas the spectral index is $1.35^{+0.06}_{-0.10}$ after pileup correction (Camilo
et al. 2002). In our spectral analysis, we first estimate the pileup fraction using
PIMMS\footnote{http://cxc.harvard.edu/toolkit/pimms.jsp}, and then add a pileup model
in the spectral fitting of the pulsars if the pileup fraction is higher than 3\%.
Totally, there are 8 pulsars in which the pileup model is included in the spectral
fitting, i.e., PSRs J0205+6449, J0537$-$6910, B0540$-$69 (and its PWN), B0833$-$45
(Vela), J1747$-$2958, J1846$-$0258, J1930+1852 and B1951+32.
The absorption column density ($N_{\rm H}$) is obtained in several ways. (1) For 6
pulsars (PSRs J0205+6449, J0537$-$6910, J1747$-$2958, J1846-0258, J1930+1852 and
B1951+32) with bright PWNe, $N_{\rm H}$ are obtained from the spectral fitting of
their PWNe, and then fixed when fitting the spectra of the pulsars. (2)
PSRs B0540$-$69 and J1124$-$5916 are embedded in SNRs 0540$-$69.3 and G292.0+1.8,
respectively, and their PWN spectra below 2.5 keV are strongly affected by the SNRs.
At the same time, constraining $N_{\rm H}$ with emission above 2.5 keV is difficult
because of the small absorption in this high energy range. Therefore their $N_{\rm H}$
are obtained by fitting the pulsar spectra, for which the contamination of the SNR emission
is negligible, and then $N_{\rm H}$ are fixed in the spectral fitting of their PWNe.
(3) The PWNe associated with PSRs B0355+54, J1617$-$5055, B1823$-$13, B1929+10 and
J2229+6114 are not bright, and $N_{\rm H}$ is determined by jointly fitting the
spectra of both the pulsar and its PWN. (4) PSRs J0633+1746 and B0833$-$45 have been
studied extensively, and their $N_{\rm H}$ values used in our spectral fitting are
taken from Caraveo et al. (2004) and Pavlov et al. (2001).
The luminosity uncertainty is crucial in our analysis of correlations, and
should be considered carefully. Since the X-ray luminosity is given by $L_{\rm X}=4\pi
d^2f_{\rm X}$, where $d$ is the pulsar distance and $f_{\rm X}$ is the 2-10 keV X-ray
flux, the $L_{\rm X}$ uncertainty should be derived from the uncertainties of both
$f_{\rm X}$ and $d$. The uncertainty of $f_{\rm X}$ is derived from those of the
normalization and the photon index in the spectral fitting. For the fluxes taken from
the literature, their uncertainties are extrapolated from the published ones by the
ratios of the fluxes in 2-10 keV to those in the corresponding published energy ranges.
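For an unabsorbed power-law photon spectrum $N(E)\propto E^{-\Gamma}$ this band conversion takes a simple closed form (our own illustrative rewriting, valid for $\Gamma\neq2$):
\begin{equation*}
\frac{f_{2-10}}{f_{E_1-E_2}}=\frac{10^{\,2-\Gamma}-2^{\,2-\Gamma}}{E_2^{\,2-\Gamma}-E_1^{\,2-\Gamma}},
\end{equation*}
where $E_1$ and $E_2$ (in keV) bound the published band; for $\Gamma=2$ the ratio reduces to $\ln(10/2)/\ln(E_2/E_1)$.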
The distances are usually not well constrained, thus the distance uncertainty
may dominate the luminosity uncertainty. There are several cases in our
samples: (1) the distances of 7 pulsars are derived from the radio dispersion
measures, and their errors are conservatively taken to be 40\%, as estimated by Cordes
\& Lazio (2001); (2) the distances of 14 pulsars are obtained via
their associated SNRs, and some of them are shown with published distance errors in
literatures, while for the others without published errors a conservative error of
50\% is taken; (3) PSRs J0537$-$6910 and B0540$-$69 are both located in the Large
Magellanic Cloud (LMC), whose distance is taken as 50 kpc, and an
error of 10 kpc is adopted as a conservative estimate (Bradley 2007).
We summarize the properties of all the 27 RPPs and 24 PWNe in Tables 1 and
2, respectively. In Table 2 only 22 PWNe are associated with the RPPs listed in
Table 1. The other two pulsars are excluded from Table 1: (1) Camilo et al. (2004)
suggested that PSR J1016$-$5857 is too faint to be resolved from its background PWN in
the {\it Chandra} Observation; (2) Hessels et al. (2004) found that the spectrum of
PSR J2021+3651 can be fit by a BB model and is thus dominated by thermal components.
On the other hand, 5 out of the 27 RPP samples in Table 1 are not listed in Table 2
for PWNe, because the following 5 pulsars have no PWN reported: B0628$-$28,
B0656+14, B0823+26, B0950+08 and B1055$-$52 ({\" O}gelman \& Tepedelenlio{\v g}lu
2004, Becker et al. 2004, De Luca et al. 2005).
\section{Analyses and Results}
The pulsar photon indices are distributed in a range of $1\la\Gamma_{\rm psr}\la3$ (as
shown in Fig \ref{fig:Lxvs.Gamma}). It should be noted that a significant fraction,
$\sim15\%$ (4 out of 27), of the sources have soft spectra of $\Gamma_{\rm psr}>2$, which
may raise problems for current models as discussed later in \S 4. The photon indices of
the PWNe span a narrower range (Fig \ref{fig:Lxvs.Gamma}). As discussed below (\S 4),
this is consistent with the pulsar wind model.
We investigate below the correlations between the X-ray emission properties of RPPs
and PWNe, and between the emission properties and the pulsar rotational parameters.
The rotation parameters include the period $P$, the period derivatives $\dot{P}$, and
some derived parameters, e.g., the magnetic field
$B=3.3\times10^{19}(P\dot{P})^{1/2}$G, the characteristic age $\tau=P/2\dot{P}$, and
the spin-down power $\dot{E}=4\pi^2I\dot{P}/P^3$, where a typical moment of inertia
$I=10^{45}\rm \;g\;cm^{2}$ is assumed. We have taken the values of $P$ and $\dot{P}$
from the pulsar catalog by Manchester et al. (2005)\footnote{See
http://www.atnf.csiro.au/research/pulsar/psrcat/}.
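As a minimal illustration of these definitions (our own sketch; the values of $P$ and $\dot{P}$ below are placeholders, not catalog entries):
\begin{verbatim}
import numpy as np

I = 1.0e45                 # moment of inertia [g cm^2]
P, Pdot = 0.1, 1.0e-13     # placeholder spin parameters [s, s/s]

B    = 3.3e19 * np.sqrt(P * Pdot)         # surface field [G]
tau  = P / (2.0 * Pdot) / 3.15576e7       # characteristic age [yr]
Edot = 4.0 * np.pi**2 * I * Pdot / P**3   # spin-down power [erg/s]

print(f"B = {B:.2e} G, tau = {tau:.2e} yr, Edot = {Edot:.2e} erg/s")
\end{verbatim}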
In order to evaluate the significance level of the correlations of the two parameters
concerned, we calculate the widely used Pearson correlation coefficient ($r$), the
Spearman rank correlation coefficient ($r_{\rm s}$), and the two-sided significance level
($p_{\rm s}$) of the Spearman rank test. The results are listed in Table 3.
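These statistics are standard; the following minimal sketch shows how they can be computed with \texttt{scipy.stats} (the arrays are placeholders for the logarithms of the tabulated quantities, not our actual data):
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr, spearmanr

# placeholder data: log10(Edot) and log10(Lx) for a few pulsars
log_edot = np.array([35.1, 36.2, 36.9, 37.5, 38.6])
log_lx   = np.array([30.9, 31.8, 32.4, 33.0, 34.5])

r,   p_r = pearsonr(log_edot, log_lx)
r_s, p_s = spearmanr(log_edot, log_lx)   # p_s is two-sided
print(f"r = {r:.2f}, r_s = {r_s:.2f}, p_s = {p_s:.3g}")
\end{verbatim}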
In addition to the correlation tests, we also perform a linear fit using the least
squares method (LSM) to the relevant relations of the parameter pairs. Since the fitting
results are usually dominated by a few data points with much smaller observational errors
than the others, we also perform a linear fit without the observational errors for comparison.
The fitting results are all listed in Table 3, and shown in the relevant figures. In the
following we present the results in detail.
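Before turning to the individual relations, here is a minimal sketch of the two fitting variants in log space (our own illustration; \texttt{y\_err} stands for the propagated luminosity uncertainties in dex):
\begin{verbatim}
import numpy as np

def linfit(x, y, y_err=None):
    """Least-squares fit of y = a + b*x; weighted if y_err given."""
    w = np.ones_like(y) if y_err is None else 1.0 / y_err**2
    A = np.vstack([np.ones_like(x), x]).T
    cov = np.linalg.inv(A.T @ (w[:, None] * A))
    a, b = cov @ (A.T @ (w * y))
    chi2_red = np.sum(w * (y - a - b*x)**2) / (len(x) - 2)
    return a, b, chi2_red

x = np.array([35.1, 36.2, 36.9, 37.5, 38.6])  # log10(Edot)
y = np.array([30.9, 31.8, 32.4, 33.0, 34.5])  # log10(Lx)
e = np.array([0.3, 0.2, 0.4, 0.2, 0.3])       # errors [dex]

print(linfit(x, y, e))  # fit with observational errors
print(linfit(x, y))     # fit without errors
\end{verbatim}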
\subsection{Correlations between the RPP emission properties and the pulsar
rotational parameters}
We study the RPP emission first. Strong correlations appear between the X-ray
luminosities of pulsars ($L_{\rm X, psr}$) and the pulsar rotational parameters (see
Table 3 and Figs \ref{fig_le} and \ref{fig_Lx_psr}).
First, $L_{\rm X,psr}$ is negatively correlated with $\tau$ and positively correlated
with $\dot{E}$, which are supported by the Spearman tests: $r_{\rm s} = -0.81$ and
$p_{\rm s}<0.0001$ between $L_{\rm X,psr}$ and $\tau$; and $r_{\rm s}=0.82$ and
$p_{\rm s}<0.0001$ between $L_{\rm X,psr}$ and $\dot{E}$.
We also note that there are some hints that the correlation holds for $L_{\rm X,psr}$ vs
$P$ and $\dot{P}$ separately, with the relevant Spearman rank correlation coefficients of
$-0.66$ and $0.69$ respectively and both significance levels $<0.001$.
The correlations of $L_{\rm X,psr}$ with $P$ and $\dot{P}$ disappear when the
sample includes both the MSPs and the normal RPPs, just as shown by Possenti et al.
(2002).
Although the Pearson and Spearman correlation coefficients support the existence of a
correlation between $L_{\rm X,psr}$ and $\dot{E}$ (or $\tau$), a simple linear fit to the
logarithm of the data points with the observational errors included does not produce a
statistically acceptable model. In fact, it results (here $L_{\rm X,psr}$ and $\dot{E}$
are in units of erg s$^{-1}$ and $\tau$ in years; see also Figs 2 and 3).
\[
L_{\rm X,psr}=10^{-0.8\pm1.3}\dot{E}^{0.92\pm0.04} \qquad (\chi^2 = 2.6)
\]
\[
L_{\rm X,psr} = 10^{38.1\pm0.3}\tau^{-1.19\pm0.05} \qquad (\chi^2 = 4.9)
\]
Here and elsewhere in this paper, the uncertainties on the linear fits are reported at
68\% confidence level. Previous authors (notably Possenti et al. 2002) also noticed that
the large scatter in the plot prevents an acceptable fit of the data with a
simple power law dependence of $L_{\rm X,psr}$ on $\dot{E}$. Hence this relation must
only be seen as an empirical average trend, not suitable for predicting the luminosity
of any specific source.
We have also explored a linear fit which does not account for the uncertainties on the values of $L_{\rm X,psr}$. This yields
\[
L^{*}_{\rm X,psr}=10^{-4.2\pm3.7}\dot{E}^{1.0\pm0.1} \]
\[ L^{*}_{\rm X,psr} = 10^{38.9\pm0.9}\tau^{-1.4\pm0.2}
\]
(see Figs \ref{fig_le} and~\ref{fig_Lx_psr}). A comparison between the current $L_{\rm
X,psr}$ versus $\dot{E}$ relation and the previous studies is shown in Fig \ref{fig_le}.
It can be seen that the relation we obtain above is close to the one between the pulsed
X-ray emission and the spin down power in Cheng et al. (2004), which indicates that most
of the non-thermal X-ray emission from a pulsar is pulsed.
As already done by Possenti et al. (2002) using a sample including also the MSPs (but not
disentangling PWN from RPP emission), we also try to fit $L_{\rm X}$ with
$aP^{b}\dot{P}^c$. This gives the relation
\[
L_{\rm X,psr}=10^{40\pm1}P^{-3.4\pm0.3}\dot{P}^{0.77\pm0.07} \qquad (\chi^2=2.5).
\]
The nominal result of this (still statistically unacceptable) fit would suggest a
preferred dependence of $L_{\rm X,psr}$ on $\dot{E}/P$: however we note that, accounting
for the uncertainties on the parameters, the simpler dependence on $\dot{E}$ (recovered
in the work of Possenti et al. 2002) is also viable.
We also study the pulsar spectral properties, and check if there is any correlation
between the pulsar spectral index $\Gamma_{\rm psr}$ and the pulsar rotational
parameters. Inspection of Fig.~\ref{fig_Gamma_psr} may indicate the occurrence of a
positive correlation of $\Gamma_{\rm psr}$ with $P$ and $\tau$ and of a negative
correlation of $\Gamma_{\rm psr}$ with $\dot{P}$ and $\dot{E}$. However, a numerical
test indicates that all these correlations are too weak (the Spearman coefficients
$|r_{\rm s}|$ are all $\lesssim0.60$, see Table 3) for drawing any firm conclusion with
the available data.
\subsection{Correlations between the PWN emission properties
and the pulsar rotational parameters}
We study here the correlations between the PWN X-ray
luminosity $L_{\rm X,pwn}$ and the pulsar rotational parameters. As shown in Table 3 and
Fig.~\ref{fig_Lx_pwn}, a weak positive correlation between $L_{\rm X,pwn}$ and $\dot{P}$ has
been detected.
Table 3 and Figs~\ref{fig_le} and~\ref{fig_Lx_pwn} also show that $L_{\rm X,pwn}$ is
strongly correlated with $\dot{E}$ and $\tau$.
The linear fits with the observational errors included yield
\[
L_{\rm X,pwn} = 10^{-19.6\pm3.0}\dot{E}^{1.45\pm0.08} (\chi^2=2.7)
\]
\[
L_{\rm X,pwn} = 10^{42.4\pm0.5}\tau^{-2.1\pm0.1} (\chi^2=5.0)
\]
Again, $L_{\rm X,pwn}$ and $\dot{E}$ are in units of erg~s$^{-1}$. As for the emission
from RPPs, the adopted linear model in the logarithm of the data does not provide a
statistically acceptable fit.
Trying a linear fit which does not account for the uncertainties on the values of $L_{\rm X,pwn}$, we obtain
\[
L^{*}_{\rm X,pwn} = 10^{-14.9\pm6.0}\dot{E}^{1.3\pm0.2}
\]
\[
L^{*}_{\rm X,pwn} = 10^{40.5\pm1.1}\tau^{-1.7\pm0.3}
\]
We note that the slope of the relation $L^{*}_{\rm X,pwn}\propto \dot{E}^{1.3}$ is
somewhat different from that of the pulsar, $L^{*}_{\rm X,psr}\propto\dot{E}$. The same
holds true comparing $L_{\rm X,pwn}\propto \dot{E}^{1.45}$ with $L_{\rm
X,psr}\propto\dot{E}^{0.92}$. It is worth noting that, as seen in Fig.~\ref{fig_le}, the
scatter in the $L_{\rm X,psr}$ versus $\dot{E}$ relation and that for the PWNe are
comparable, and both are large.
As seen in Fig~\ref{fig_Gamma_pwn} and Table 3, there is no evidence for strong
correlations between $\Gamma_{\rm pwn}$ and the pulsar rotational parameters. The
Spearman rank test also supports this visual impression. However, in Figs
\ref{fig:Lxvs.Gamma} and \ref{fig_Gpwn_lx_edot} we see obvious positive correlations
between the photon index $\Gamma_{\rm pwn}$ and the X-ray luminosity $L_{\rm X,pwn}$
or the X-ray conversion efficiency $L_{\rm X,pwn}/\dot{E}$. We will discuss the physical
implication in \S 4.
\subsection{Correlations between non-thermal emission properties of RPPs and PWNe}
For those samples with both the RPP and the PWN non-thermal X-ray emission measured, we
test the correlations between them. As shown in Fig \ref{fig:pulvsPWN}, a strong
correlation between the X-ray luminosity of RPPs and that of PWNe appears in our samples,
while no correlation is seen between their photon indices.
The correlation test between two luminosities gives $r_{\rm s}=0.91$ and $p_{\rm
s}<10^{-4}$, and trying a linear fit to the relation leads to $L_{\rm
X,pwn}=10^{-1.9\pm3.2}L_{\rm X,psr}^{1.1\pm0.1}$. Given the strong positive
correlations of $L_{\rm X,psr}$ and $L_{\rm X,pwn}$ versus $\dot{E}$ separately, a
strong correlation between the two luminosities might be naturally expected. However,
the slope of the relation between the two luminosities is somewhat different from that
expected from the previous two relations, $L_{\rm X,psr}\propto\dot{E}$ and $L_{\rm
X,pwn}\propto\dot{E}^{1.3}$. This may be because the samples used are different.
For example, those data points with $\dot{E}\la10^{34}\rm ergs\,s^{-1}$ are not
included for the relation of $L_{\rm X,pwn}$ versus $\dot{E}$ since no obvious PWNe
were detected, and they seem to result in a smaller slope for $L_{\rm X,psr}$ versus
$\dot{E}$ relation (Fig \ref{fig_le}). However, it should be noted that both $L_{\rm
X, psr}$ and $L_{\rm X, pwn}$ are actually modulated by the source distance.
The above correlation might be due to the effect of distance modulation.
In our samples, the X-ray luminosity ratio between PWNe and RPPs, as shown by
$f_{\rm X,pwn}/f_{\rm X,psr}$ in Fig. \ref{fig:flux_ratio}, varies in the range of
$0.1-30$, about 2 orders of magnitude, and PWNe are generally brighter than their related
RPPs, typically $L_{\rm X,pwn}/L_{\rm X,psr}\sim1-10$. Fig.~9 also shows that
a more energetic pulsar does not tend to channel a larger fraction
of $\dot{E}$ into PWN emission relative to the pulsar emission, or vice versa.
Gotthelf (2003) reported a linear relation between $\Gamma_{\rm psr}$ and $\Gamma_{\rm
pwn}$ for the Crab-like pulsars, but we find no correlation between them in our samples
(Fig. \ref{fig:pulvsPWN}). We also check the relation using the samples of Gotthelf
(2003), and a similar relation appears. This might suggest that the linear relation
exists only in the Crab-like pulsars, which are very young.
\section{Discussions}
\subsection{Non-thermal X-ray luminosities of RPPs and PWNe}
The detected X-ray emission from the RPPs and their PWNe is powered by the pulsar
rotation energy. In our sample, the conversion efficiency of $\dot{E}$ to the 2-10 keV
X-ray emission varies in the range of $(L_{\rm X,psr}+L_{\rm
X,pwn})/\dot{E}\sim10^{-6}-10^{-1}$, with the mean at $\sim10^{-3}$, so usually only a
small fraction of the spin-down power goes into the non-thermal X-ray emission. The
non-thermal X-ray emission in a source is usually dominated by the PWN rather than the
pulsar, and on average, $\langle L_{\rm X,pwn}/L_{\rm X,psr}\rangle\sim10$. This implies
that the relations of the pulsar X-ray luminosity and the spin-down power obtained in the
previous works using the low spatial resolution observations at $>2$~keV might be
dominated by the PWN emission.
In this work we can separately analyze the luminosity and the spin-down power
relations for the pulsars and their PWNe, thanks to the high spatial resolution of the
observations. A strong positive correlation between
$L_{\rm X,psr}$ and $\dot{E}$ is obtained in this paper,
similar to the previous results, as shown in Fig \ref{fig_le}. We compare our
results with the previous work in the following. However, one should keep in mind that
there are obvious differences in the analysis processes, i.e., we can separate the RPP
and the PWN emission and do not include the MSPs in the sample, while the previous
work did not separate the RPPs and the PWNe and included MSPs in the analysis.
The relations for the X-ray luminosity of the RPPs in 2-10 keV band are $L_{\rm
X,psr}\propto \dot{E}^{0.92\pm0.04}$ (uncertainties on $L_{\rm X,psr}$ included) and
$L^{*}_{\rm X,psr}\propto \dot{E}^{1.0\pm0.1}$ (not accounting for the uncertainties on
$L_{\rm X,psr}$), respectively. They are roughly in agreement with the scaling found by
Becker \& Tr\"umper (1997), who used the X-ray luminosity in the 0.1-2.4 keV band. The
$L_{\rm X}-\dot{E}$ relations obtained by Saito (1998) and Possenti et al. (2002) appear
steeper than both our derived relations for RPPs, but they are roughly consistent with
both $L_{\rm X,pwn}\propto \dot{E}^{1.45\pm0.08}$ and $L^{*}_{\rm X,pwn}\propto
\dot{E}^{1.3\pm0.2}$ (Fig \ref{fig_le}). This may suggest that their relations could
also be influenced by the PWN emission due to the lower spatial resolution. The relations
$L_{\rm X,pul} \propto\dot{E}^{1.2\pm0.08}$ and $L_{\rm X,npul} \propto
\dot{E}^{1.4\pm0.1}$, obtained by Cheng et al. (2004) with {\sl ASCA} data, are similar
to our results.
We have shown that the weak negative correlation of $L_{\rm X,psr}$ versus $P$ and the weak
positive correlation of $L_{\rm X,psr}$ versus $\dot{P}$ lead to a strong positive
correlation between $L_{\rm X,psr}$ and $\dot{E}$ in our MSP-excluding RPP samples. On
the other hand in the previous work including the MSPs and the normal RPPs, the
correlations between the X-ray luminosity (might include the PWN emission) and $P$ or
$\dot{P}$ disappear whereas a trend of $L_{\rm X}$ versus $\dot{E}$ is still there
(e.g., Possenti et al 2002). Moreover, the MSP samples alone also obey a similar
correlation (Possenti et al. 2002). All these factors together strongly suggest that the
X-ray luminosities of RPPs, including the MSPs, depend only on their spin-down
powers.
Although there is a strong correlation between $L_{\rm X,psr}$ and $\dot{E}$, the
scattering in this relation is large and the linear fit with the observational errors
included usually gives a statistically unacceptable result, as suggested by Possenti et
al. (2002). $L_{\rm X,psr}$ at given $\dot{E}$ may spread over 2-4 orders of magnitude,
as seen in Fig \ref{fig_le}. The uncertainties in the distance determination and the
moments of inertia are not expected to lead to such a large span, so other intrinsic
factors may be at work, e.g., the viewing angle effect. The scatter in the $L_{\rm
X,pwn}-\dot{E}$ relation is comparably large, which is somewhat strange, since the PWN
emission is less influenced by the viewing angles.
\subsection{Non-thermal X-ray spectra of the RPPs}
There are mainly two scenarios to produce the non-thermal X-rays in the magnetospheres of
pulsars. The outer gap scenario (e.g., Cheng et al. 1998; Wang et al. 1998; Cheng \&
Zhang 1999) produces a downward synchrotron-curvature cascade, where the secondary
electrons/positrons produce X-rays by synchrotron emission. Another scenario is the polar
gap scenario, e.g., Zhang \& Harding (2000) proposed the ``full polar cap cascade", where
the non-thermal X-rays are produced by resonant inverse Compton (IC) scattering off the
thermal X-ray photons. In both scenarios the $L_{\rm X, psr}\propto\dot{E}$ relation is
generally predicted, although the X-ray spectra are not easy to understand.
We note that a significant fraction of pulsars with very soft spectral indices,
$\Gamma_{\rm psr}\sim2-3$, may pose questions on current models. In a cascade, the
monoenergetic primary electrons emit monoenergetic curvature photons, which subsequently
turn into still monoenergetic pairs in a soft photon bath. The fast energy loss of the
secondary pairs in the magnetic field produces synchrotron emission, which has a photon
spectrum with a power law index $\Gamma_1=1.5$. If the cascade continues, the photons
produce next-generation pairs and then synchrotron photons with index
$\Gamma_2=1+\Gamma_1/2=1.75$; furthermore, $\Gamma_3=1+\Gamma_2/2=1.875$, and so on. Thus the
indices never exceed 2, in contrast with the observed soft spectra. Actually this
discussion could also work if IC rather than synchrotron emission is involved since the
index of synchrotron and IC emission is equal for the same energy distribution of pairs.
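The cascade recursion above, $\Gamma_{k+1}=1+\Gamma_k/2$ with $\Gamma_1=3/2$, can in fact be solved in closed form, which makes the bound explicit:
\[
\Gamma_k=2-2^{-k}<2 \quad \textrm{for all generations } k,
\]
so the photon index approaches, but never reaches, 2.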
As for the polar gap scenarios, the synchrotron emission at X-rays is weak because the
secondary pairs with small pitch angles produce synchrotron emission well above the
cyclotron frequency in the strong pulsar magnetic field, typically $\sim100$~keV. Zhang
\& Harding (2000) proposed that the X-ray emission is dominated by the low energy tail in
the resonant IC emission. However this tail may be hard with index $\Gamma<2$, as shown
in some Monte Carlo simulations (e.g., Fang \& Zhang 2006), although the cases might be
more complicated when more factors such as the viewing angle are taken into account.
Wang \& Zhao (2004) reported possible negative correlations between
$\Gamma_{\rm psr}$ and $\dot{\Omega}$ and between $\Gamma_{\rm psr}$ and ${\zeta}$,
where $\zeta$ is the generation order parameter characterizing a pulsar under the
scheme of cascade processes (Zhao et al. 1989; Lu et al. 1994; Wei et al. 1997). A
similar negative correlation between $\Gamma_{\rm psr}$ and ${\zeta}$ in anomalous
X-ray pulsars and soft gamma-ray repeaters has also been reported (Marsden \& White
2001; Lu et al. 2003), suggesting that a common mechanism may operate in both normal
and anomalous pulsars. These observational results seem in contrast with the predicted
positive correlation between $\Gamma_{\rm psr}$ and $\zeta$ by Lu et al. (1994). Here,
we also check the relation between $\Gamma_{\rm psr}$ and the generation order
parameter $\zeta_3= 1 + (0.6 - (11/14){\rm log}P + (2/7){\rm log}\dot{P}_{15})/1.3$
(Eq. 6 of Wang \& Zhao 2004) and list the correlation results in Table 3. It turns out
that although there may be some hints of such a negative correlation in our sample,
the correlation tests do not support it strongly, $r_{\rm s}=-0.5$. So the current
data are not good enough to test the theoretical predictions of Lu et al. (1994).
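For reference, the generation order parameter quoted above is straightforward to evaluate from $(P,\dot{P})$; a minimal sketch with illustrative, roughly Crab-like inputs (not values from Table 1) is:
\begin{verbatim}
import numpy as np

# Illustrative inputs, not taken from Table 1.
P       = 0.033            # period [s]
Pdot_15 = 420.0            # Pdot in units of 1e-15 s/s

zeta3 = 1.0 + (0.6 - (11.0/14.0)*np.log10(P)
                   + (2.0/7.0)*np.log10(Pdot_15)) / 1.3
print(zeta3)               # ~2.9
\end{verbatim}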
\subsection{X-ray spectra of PWNe}
In the standard Kennel \& Coroniti (1984ab) model for the Crab nebula, the young Crab
pulsar loses its rotational energy predominantly in the form of a highly relativistic
particle wind, which encounters the surrounding medium at a termination shock and
becomes visible through synchrotron emission downstream of the shock. In this context
the energy in the relativistic wind is transferred into post shock magnetic field and
accelerated particles with energy distribution of $N_e(E_e)\propto E_e^{-p}$. Chevalier
(2000) discussed the PWN spectra with emphasis on the cooling of the X-ray emitting
electrons, which leads to a steeper index $p+1$ for high energy and fast-cooling electrons,
and hence a spectral transition of synchrotron photons from $(p+1)/2$ in the slow-cooling
regime to $(p+2)/2$ in the fast-cooling regime at break frequency ($\nu_c$).
Fitting a single power law model to this intrinsically broken power law would result in a
spectral index always in the range of $(p+1)/2\leq\Gamma_{\rm pwn}\leq(p+2)/2$. An
electron index value $p\approx2.2$ is generally obtained in theoretical works on particle
acceleration in relativistic collisionless shocks, by both numerical calculations (e.g.,
Achterberg et al. 2001) and analytic analysis (e.g., Keshet \& Waxman 2005), and also
inferred from observation in other kinds of astrophysical relativistic shocks, e.g., GRB
afterglows (e.g., Freedman \& Waxman 2001). Our results show a narrow index range of
$1.5\la\Gamma_{\rm pwn}\la2.1$, except for one source with a somewhat higher value $\sim2.5$,
suggesting an electron index of $p\sim2.2$. This consistency with particle wind models
gives strong support to Fermi-shock acceleration in PWNe.
We show that the PWN spectral parameters are not strongly correlated with the pulsar
rotational parameters (Fig \ref{fig_Gamma_pwn}). Gotthelf (2003) reported the correlation
between $\Gamma_{\rm psr}$ and $\Gamma_{\rm pwn}$ for nine Crab-like pulsars. Our studies
show that such a correlation is probably not a common property for all RPPs. Therefore,
the electron spectrum and its evolution in a PWN are not determined by the central
pulsar, consistent with wind models where the emission comes from a relativistic shock
between wind and environment interaction.
The relation of $\Gamma_{\rm pwn}$ with PWN luminosity $L_{\rm X,pwn}$ and the
conversion efficiency $L_{\rm X,pwn}/\dot{E}$ (Figs~\ref{fig:Lxvs.Gamma} and
\ref{fig_Gpwn_lx_edot} and Table 3) could be understood qualitatively in the framework of
pulsar wind models taking into account the electron cooling effect on spectral profile
(e.g., Chevalier 2000). If the pulsar loses most of its rotation energy through particle
winds, then higher $\dot{E}$ corresponds to stronger cooling and hence lower spectral
break $\nu_c$, which further means a larger index $\Gamma_{\rm pwn}$ in a fixed
observational energy range. In the meantime, a higher $\dot{E}$ corresponds to a larger
$L_{\rm X,pwn}$, regardless of whether $\nu_c$ is below or above the observational range, and
corresponds to constant X-ray conversion efficiency for fast cooling regime ($\nu_c$
below observed range) or larger $L_{\rm X,pwn}/\dot{E}$ for slow cooling regime ($\nu_c$
above observed range). Therefore we have softer PWN spectra (larger $\Gamma_{\rm pwn}$)
for more luminous PWNe (larger $L_{\rm X,pwn}$) and higher energy conversion efficiency
($L_{\rm X,pwn}/\dot{E}$). This consistency supports the wind-shock model for PWNe. In
this context, the transition of $\Gamma_{\rm pwn}$ from high to low values in Fig.
\ref{fig:Lxvs.Gamma} suggests that the spectral break $\nu_c$ lies in the 2-10 keV band for
$L_{\rm X,pwn}\sim10^{33}\rm ergs\, s^{-1}$. This may constrain wind model
parameters.
\section{Conclusions}
In this work, using the available samples of 27 RPPs and 24 PWNe observed by {\sl
Chandra} and {\sl XMM-Newton}, we obtain the non-thermal X-ray spectral properties, i.e.,
luminosities and spectral indices, of RPPs and PWNe separately. We then analyze their
distribution and correlation with each other and with pulsar rotational parameters.
\begin{itemize}
\item As to the correlations we find: (1) $L_{\rm X,psr}$ and $L_{\rm X,pwn}$
display a strong correlation with both $\dot{E}$ and $\tau$; (2) $L_{\rm X,psr}$ also
shows a possible weaker correlation with $P$ and $\dot{P}$, whereas $L_{\rm X,pwn}$
manifests a similar weak correlation with $\dot{P}$ only; (3) $\Gamma_{\rm pwn}$ is
positively correlated with $L_{\rm X,pwn}$ and the efficiency of conversion of rotational
energy loss in X-ray luminosity $L_{\rm X,pwn}/\dot{E}$.
\item Trying to fit the logarithm of the data with a simple linear fit, we find: $L_{\rm
X,psr}=10^{-0.8\pm1.3}\dot{E}^{0.92\pm0.04}$ and $ L_{\rm X,pwn} =
10^{-19.6\pm3.0}\dot{E}^{1.45\pm0.08}$. However, both the fits are statistically
unacceptable. Not accounting for the uncertainties on the observed luminosity, the
aforementioned relations become $L^{*}_{\rm X,psr}=10^{-4.2\pm3.7}\dot{E}^{1.0\pm0.1} $ and
$L^{*}_{\rm X,pwn} = 10^{-14.9\pm6.0}\dot{E}^{1.3\pm0.2}$, respectively.
Since the scatter in the relation
for PWNe (whose emission should be less affected by viewing angle) is as large as
that for RPPs, the scatter in the relation is more probably intrinsic to the
sources.
\item The PWN X-ray luminosity is typically 1 to 10 times larger than that from the
underlying pulsar.
\item The pulsar photon index spans a range of $1\la\Gamma_{\rm
psr}\la3$. A significant fraction of RPPs with low $\dot{E}$ show soft spectra of $\Gamma_{\rm psr}>2$, which seems
inconsistent with current models and calls for further investigation of the non-thermal X-ray emission mechanisms of pulsars.
\item The PWN spectral properties are consistent with the particle wind model: the photon
index range $1.5\la\Gamma_{\rm pwn}\la2$ is
consistent with that expected from the shock-accelerated electrons of index $p\sim2$;
the correlations of $\Gamma_{\rm pwn}$ with
$L_{\rm X,pwn}$ and the conversion efficiency $L_{\rm X,pwn}/\dot{E}$ are
consistent with the wind
model; the absence of correlation between $\Gamma_{\rm pwn}$ and the pulsar rotational parameters
also implies that the cooling process is related not to the central pulsars but to the
interaction of the pulsar wind with its environment.
\end{itemize}
\section*{Acknowledgments}
We thank the referee for very thorough comments. We are grateful to S. N. Zhang, L. M.
Song, J. M. Wang, G. J. Qiao, B. Zhang and H. G. Wang for helpful discussion. We thank
Prof. Wielebinski and Dr. Jessner for critically reading the manuscript and giving many
valuable suggestions. XHL sincerely thanks Prof. Wielebinski for the financial support
during her stay at MPIfR, and thanks J.L. Han for warm hospitality during her stay at
NAOC. This work is supported by the National Science Foundation of China through grants
10573017, 10533020 and 10473010.
\section{Introduction}
Since its discovery~\cite{bib:discoverydzero,bib:discoverycdf}, the determination of the top quark mass \ensuremath{m_t}\xspace, a fundamental parameter of the standard model (SM), has been one of the main goals of the CERN Large Hadron Collider (LHC) and of the Fermilab Tevatron Collider. Indeed, \ensuremath{m_t}\xspace and masses of $W$ and Higgs bosons are related through radiative corrections that provide a consistency check of the SM~\cite{bib:lepewwg,bib:theory}. Furthermore, \ensuremath{m_t}\xspace dominantly affects the stability of the SM Higgs potential~\cite{bib:theory,bib:vstab1}.
With $\ensuremath{m_t}\xspace=173.34\pm0.76~\ensuremath{\textnormal{GeV}}\xspace$, a world-average combined precision of 0.44\% has been achieved~\cite{bib:combiworld}.
In the SM, the top quark decays to a $W$~boson and a $b$~quark nearly 100\% of the time.
Thus, $\ensuremath{t\bar{t}}\xspace$ events are classified according to $W$ boson decays as ``dileptonic''~(\ensuremath{\ell\ell}\xspace), ``lepton+jets'' (\ensuremath{\ell\!+\!{\rm jets}}\xspace), or ``all--jets''. Single top production contributes significantly at the LHC through the $qg\to q't\bar b$ process. In the following, I will present representative measurements in the three channels; a full listing of \ensuremath{m_t}\xspace results from the LHC and the Tevatron can be accessed through Refs.~\cite{bib:topresatlas,bib:toprescdf,bib:toprescms,bib:topresdzero}.
\section{Standard measurements of the top quark mass} \label{sec:standard}
The most precise single measurement of \ensuremath{m_t}\xspace in the \ensuremath{\ell\ell}\xspace channel is performed by the ATLAS Collaboration using 20.2~\ensuremath{{\rm fb}^{-1}}\xspace of $pp$ collisions at \ensuremath{\sqrt s=8~\TeV}\xspace~\cite{bib:a8_ll}. The selection requires two isolated leptons ($e$ or $\mu$) of opposite charge, missing transverse momentum \ensuremath{E\!\!\!\!/_T}\xspace due to neutrinos, and $\geq 2$ jets, at least one of which is identified as originating from a $b$ quark ($b$-tagged). A transverse momentum $p_{T,\ell b}>120~\ensuremath{\textnormal{GeV}}\xspace$ is required for the average of the two $\ell b$ systems to reduce the dominant uncertainty from the jet energy scale (JES). \ensuremath{m_t}\xspace is extracted with the ``template method'', which in this case fits the distribution in the average invariant mass of the $\ell b$ system to the expectations from Monte Carlo (MC) simulations for different \ensuremath{m_t}\xspace, shown in Fig.~\ref{fig:ll}~(a). The best fit to data is shown in Fig.~\ref{fig:ll}~(b), and results in $\ensuremath{m_t}\xspace=172.99\pm0.41\ensuremath{{\rm(stat)}}\xspace\pm0.74\ensuremath{{\rm(syst)}}\xspace$~GeV.
Tevatron's most precise single measurement in the \ensuremath{\ell\ell}\xspace channel of $m_t=173.32\pm1.36\ensuremath{{\rm(stat)}}\xspace\pm0.85\ensuremath{{\rm(syst)}}\xspace~\ensuremath{\textnormal{GeV}}\xspace$ is performed by the D0 Collaboration using 9.7~\ensuremath{{\rm fb}^{-1}}\xspace of $p\bar p$ collisions at \ensuremath{\sqrt s=1.96~\TeV}\xspace~\cite{bib:d0_ll}.
\begin{figure}[b]
\centering
\begin{overpic}[clip,height=4.5cm]{fig/a8_ll_template.png}
\put(20,62){\bf\sffamily{(a)}}
\end{overpic}
\qquad
\begin{overpic}[clip,height=4.5cm]{fig/a8_ll_result.png}
\put(20,48){\bf\sffamily{(b)}}
\end{overpic}
\caption{
\label{fig:ll}
{\bf(a)} Expected dependence of the \ensuremath{m_{\ell b}}\xspace distribution of processes involving top quarks on \ensuremath{m_t}\xspace from Monte Carlo simulations at \ensuremath{\sqrt s=8~\TeV}\xspace with the ATLAS detector~\cite{bib:a8_ll}.
{\bf(b)} The distribution in $\ensuremath{m_{\ell b}}\xspace$ in 20.3~\ensuremath{{\rm fb}^{-1}}\xspace of data at \ensuremath{\sqrt s=8~\TeV}\xspace with the ATLAS detector. The predictions correspond to the best-fit values.
}
\end{figure}
The most precise single measurement of \ensuremath{m_t}\xspace from the Tevatron is performed by the D0 Collaboration using 9.7~\ensuremath{{\rm fb}^{-1}}\xspace of data in the \ensuremath{\ell\!+\!{\rm jets}}\xspace channel~\cite{bib:d0_lj} with a ``matrix element~(ME) method''. This approach determines the probability of observing a given event under both the $\ensuremath{t\bar{t}}\xspace$ signal and background hypotheses, as a function of \ensuremath{m_t}\xspace. This probability is calculated {\em ab initio} using the respective MEs of the \ensuremath{t\bar{t}}\xspace signal and dominant $W$+jets background, taking into account effects from parton showering (PS), hadronisation, and finite detector resolution.
This selection requires the presence of one isolated lepton, \ensuremath{E\!\!\!\!/_T}\xspace, and exactly four jets with at least one $b$-tag. A new JES calibration from exclusive $\gamma+$jet, $Z+$jet, and dijet events is applied to account for differences in detector response to jets originating from a gluon, a $b$~quark, and $u,d,s,$ or $c$~quarks. The overall JES \ensuremath{k_{\rm JES}}\xspace is calibrated {\it in situ} by constraining the reconstructed invariant mass of the hadronically decaying $W$ boson to $\ensuremath{M_W}\xspace=80.4$~GeV. The likelihood over all candidate events is maximised in $(\ensuremath{m_t}\xspace,\ensuremath{k_{\rm JES}}\xspace)$ as shown in Fig.~\ref{fig:lj}~(a), and $\ensuremath{m_t}\xspace=174.98\pm0.58\ensuremath{{\rm(stat\!+\!JES)}}\xspace\pm0.49\ensuremath{{\rm(syst)}}\xspace~\ensuremath{\textnormal{GeV}}\xspace$ is obtained. The most precise \ensuremath{m_t}\xspace result from the CDF Collaboration in the \ensuremath{\ell\!+\!{\rm jets}}\xspace channel of $\ensuremath{m_t}\xspace=172.85\pm0.71\ensuremath{{\rm(stat\!+\!JES)}}\xspace\pm0.85\ensuremath{{\rm(syst)}}\xspace~\ensuremath{\textnormal{GeV}}\xspace$~\cite{bib:cdf_lj} is obtained with the template method.
The most precise single measurement of \ensuremath{m_t}\xspace from the LHC is performed by the CMS Collaboration using 19.7~\ensuremath{{\rm fb}^{-1}}\xspace of data at \ensuremath{\sqrt s=8~\TeV}\xspace in the \ensuremath{\ell\!+\!{\rm jets}}\xspace channel~\cite{bib:c8_lj}. The analysis uses a similar selection to the D0 result and applies the ``ideogram method'' to extract \ensuremath{m_t}\xspace. Similar to the ME method, this approach calculates the probability to observe a given event as a function of $(\ensuremath{m_t}\xspace,\ensuremath{k_{\rm JES}}\xspace)$. However, this probability is not calculated {\em ab initio}, but is obtained from MC simulations, in analogy to the template method. The final result of $\ensuremath{m_t}\xspace=172.35\pm0.16\ensuremath{{\rm(stat\!+\!JES)}}\xspace\pm0.48\ensuremath{{\rm(syst)}}\xspace$~GeV is shown in Fig.~\ref{fig:lj}~(b). The most precise \ensuremath{m_t}\xspace result from the ATLAS Collaboration is obtained with the template method using 4.7~\ensuremath{{\rm fb}^{-1}}\xspace of data at \ensuremath{\sqrt s=7~\TeV}\xspace and reads $\ensuremath{m_t}\xspace=172.33\pm0.75\ensuremath{{\rm(stat\!+\!JES)}}\xspace\pm1.02\ensuremath{{\rm(syst)}}\xspace~\ensuremath{\textnormal{GeV}}\xspace$~\cite{bib:a7_lj}.
\begin{figure}
\centering
\begin{overpic}[clip,height=4.5cm]{fig/d0_lj.png}
\end{overpic}
\qquad
\begin{overpic}[clip,height=4.5cm]{fig/c8_lj.png}
\put(80,62){\large\bf\sffamily{(b)}}
\end{overpic}
\caption{
\label{fig:lj}
{\bf(a)} The likelihood in $(\ensuremath{m_t}\xspace,\ensuremath{k_{\rm JES}}\xspace)$ in 9.7~\ensuremath{{\rm fb}^{-1}}\xspace of $p\bar p$ collisions at \ensuremath{\sqrt s=1.96~\TeV}\xspace recorded with the D0 detector~\cite{bib:d0_lj}. Fitted contours of equal probability are overlaid as solid lines. The maximum is marked with a cross.
{\bf(b)}~Same as (a), but in 19.5~\ensuremath{{\rm fb}^{-1}}\xspace of $pp$ collisions at \ensuremath{\sqrt s=8~\TeV}\xspace recorded with the CMS detector~\cite{bib:c8_lj}. The central result corresponds to ``Hybrid'', and \ensuremath{k_{\rm JES}}\xspace is denoted as ``JSF''.
}
\end{figure}
The all-jets channel is particularly challenging due to very high background from QCD multijets. Tevatron's most precise single \ensuremath{m_t}\xspace result in this channel comes from the CDF Collaboration using 9.3~\ensuremath{{\rm fb}^{-1}}\xspace of data~\cite{bib:cdf_jj}. A neural network and $b$-tagging enhance the signal-to-background ratio from $10^{-3}$ to about 1. The correct assignment of jets to partons is determined by minimising a $\chi^2$, which accounts for consistency of the two dijet systems with $m_W$, consistency of the two $jjb$ systems with each other, and consistency of the individual fitted jet momenta with measured ones, within experimental resolutions. The measured value is $\ensuremath{m_t}\xspace=175.07\pm1.19\ensuremath{{\rm(stat\!+\!JES)}}\xspace\pm1.55\ensuremath{{\rm(syst)}}\xspace~\ensuremath{\textnormal{GeV}}\xspace$. The most precise result in the all-jets channel at the LHC of $\ensuremath{m_t}\xspace=172.32\pm0.25\ensuremath{{\rm(stat\!+\!JES)}}\xspace\pm0.59\ensuremath{{\rm(syst)}}\xspace~\ensuremath{\textnormal{GeV}}\xspace$ comes from the CMS Collaboration~\cite{bib:c8_lj}.
An overview of recent \ensuremath{m_t}\xspace measurements at the LHC~\cite{bib:overview_LHC} is given in Fig.~\ref{fig:overview}. A combination of \ensuremath{m_t}\xspace measurements from Run~I and II of the Tevatron considering statistical and systematic correlations yields $\ensuremath{m_t}\xspace=174.30\pm0.35\ensuremath{{\rm(stat)}}\xspace\pm0.34\ensuremath{{\rm(syst)}}\xspace~\ensuremath{\textnormal{GeV}}\xspace$~\cite{bib:combo_Tevatron}.
\begin{figure}
\centering
\begin{overpic}[clip,height=10cm]{fig/overview_LHC.pdf}
\end{overpic}
\caption{
\label{fig:overview}
Overview of recent \ensuremath{m_t}\xspace measurements at the LHC~\cite{bib:overview_LHC}. References to the individual measurements are given at the bottom of the Figure.
}
\end{figure}
\section{Measurements of the top quark mass in the pole scheme} \label{sec:mtpole_tt}
The {\em standard} measurements of \ensuremath{m_t}\xspace from Sect.~\ref{sec:standard} are experimentally the most precise ones. However, they extract an \ensuremath{m_t}\xspace~{\em parameter} as implemented in MC generators, which is related to the pole mass scheme definition \ensuremath{m_t^{\rm pole}}\xspace in the SM Lagrangian within an uncertainty of $\leq$1~GeV~\cite{bib:alt}.
The first LHC result on \ensuremath{m_t}\xspace at \ensuremath{\sqrt s=13~\TeV}\xspace is an extraction of \ensuremath{m_t^{\rm pole}}\xspace from \ensuremath{\sigma_{t\bar t}}\xspace performed by CMS in the \ensuremath{\ell\!+\!{\rm jets}}\xspace channel using 2.3~\ensuremath{{\rm fb}^{-1}}\xspace of data~\cite{bib:mtpole_tt}. This analysis exploits the dependence of \ensuremath{\sigma_{t\bar t}}\xspace on \ensuremath{m_t^{\rm pole}}\xspace, which is now known with $\approx$3\% precision at NNLO with NNLL corrections~\cite{bib:xsec_tt_nnlo}. The input measurement of \ensuremath{\sigma_{t\bar t}}\xspace achieves a relative uncertainty of $\approx$4\% by constraining the dominant $W$+jets background through sidebands in low jet and $b$-tag multiplicities, and using the difference in $\ensuremath{{\rm d}}\sigma/\ensuremath{{\rm d}} m_{\ell b}$ dependence between signal and background. The final result is $\ensuremath{m_t^{\rm pole}}\xspace=173.3^{+2.3}_{-2.0}({\rm stat+syst})\,^{+1.6}_{-1.1}\ensuremath{{\rm(theo)}}\xspace~\ensuremath{\textnormal{GeV}}\xspace$.
The most precise \ensuremath{m_t^{\rm pole}}\xspace measurement is performed by the ATLAS Collaboration in the \ensuremath{\ell\!+\!{\rm jets}}\xspace channel using 4.6~\ensuremath{{\rm fb}^{-1}}\xspace of data at \ensuremath{\sqrt s=7~\TeV}\xspace~\cite{bib:mtpole_ttj}. The \ensuremath{m_t^{\rm pole}}\xspace is extracted from the production cross section of a \ensuremath{t\bar{t}}\xspace system in association with a jet \ensuremath{\sigma_{t\bar t+1~{\rm jet}}}\xspace, since the radiation rate of a high-\ensuremath{p_T}\xspace gluon off the \ensuremath{t\bar{t}}\xspace system is proportional to \ensuremath{m_t^{\rm pole}}\xspace. More precisely, the differential production cross section $\mathcal R(\ensuremath{m_t^{\rm pole}}\xspace,\rho_s)\equiv1/\sigma_{\ensuremath{t\bar{t}}\xspace+1\rm jet} \cdot \ensuremath{{\rm d}} \sigma_{\ensuremath{t\bar{t}}\xspace+1\rm jet}/\ensuremath{{\rm d}} \rho_s$ is compared to NLO calculations~\cite{bib:mtpole_ttj_xsec}, where $\rho_s \equiv 2m_0/\sqrt{s_{\ensuremath{t\bar{t}}\xspace+1{\rm jet}}}$, and the arbitrary constant $m_0$ is set to 170~GeV in this analysis. The selection is similar to other analyses in the \ensuremath{\ell\!+\!{\rm jets}}\xspace channel discussed in Sect.~\ref{sec:standard}, and the correct jet-parton assignment is determined through a $\chi^2$ kinematic fit. To reduce the total uncertainty, $\ensuremath{p_T}\xspace>50~\ensuremath{\textnormal{GeV}}\xspace$ is required for the extra jet. The distribution in $\rho_s$ is corrected for detector, PS, hadronisation effects, and the presence of background. The resulting distribution at parton level is given in Fig.~\ref{fig:mtpole}~(a). The final result reads $\ensuremath{m_t^{\rm pole}}\xspace=173.1\pm1.50\ensuremath{{\rm(stat)}}\xspace\pm1.43\ensuremath{{\rm(syst)}}\xspace^{+0.93}_{-0.49}\ensuremath{{\rm(theo)}}\xspace~\ensuremath{\textnormal{GeV}}\xspace$.
The second most precise \ensuremath{m_t^{\rm pole}}\xspace measurement is performed by the D0 Collaboration in the \ensuremath{\ell\!+\!{\rm jets}}\xspace channel using 9.7~\ensuremath{{\rm fb}^{-1}}\xspace of data~\cite{bib:mtpole_ttdiff}. This analysis extracts \ensuremath{m_t^{\rm pole}}\xspace by relating measured $\ensuremath{{\rm d}}\ensuremath{\sigma_{t\bar t}}\xspace/\ensuremath{{\rm d}} m_{\ensuremath{t\bar{t}}\xspace}(\ensuremath{m_t}\xspace)$ and $\ensuremath{{\rm d}}\ensuremath{\sigma_{t\bar t}}\xspace/\ensuremath{{\rm d}} p_{T,t/\bar t}(\ensuremath{m_t}\xspace)$ to recent NNLO and NLO calculations~\cite{bib:mtpole_ttdiff_xsec}. {\em Differential} cross sections allow for a more complete use of kinematic information, and thus a notably higher statistical precision than the \ensuremath{m_t^{\rm pole}}\xspace extraction from an inclusive \ensuremath{\sigma_{t\bar t}}\xspace measurement. The selection is similar to Ref.~\cite{bib:d0_lj}, and the correct jet-parton assignment is identified through a \ensuremath{\chi^{2}}\xspace kinematic fit. The resulting distributions are corrected for detector, PS, hadronisation effects, and the presence of background to obtain $\ensuremath{{\rm d}}\ensuremath{\sigma_{t\bar t}}\xspace/\ensuremath{{\rm d}} m_{\ensuremath{t\bar{t}}\xspace}(\ensuremath{m_t}\xspace)$ and $\ensuremath{{\rm d}}\ensuremath{\sigma_{t\bar t}}\xspace/\ensuremath{{\rm d}} p_{T,t/\bar t}(\ensuremath{m_t}\xspace)$, which are then directly compared to theory calculations to extract \ensuremath{m_t^{\rm pole}}\xspace. The final result reads $\ensuremath{m_t^{\rm pole}}\xspace=169.1\pm2.5({\rm stat+syst})\pm1.5\ensuremath{{\rm(theo)}}\xspace~\ensuremath{\textnormal{GeV}}\xspace$.
\begin{figure}
\centering
\begin{overpic}[clip,height=5.0cm]{fig/mtpole_ttj_rhos_unfold.png}
\put(35,86){\bf\sffamily{(a)}}
\end{overpic}
\qquad
\begin{overpic}[clip,height=5.0cm]{fig/d0_mtpole.png}
\put(24,80){\bf\sffamily{(b)}}
\end{overpic}
\caption{
\label{fig:mtpole}
{\bf(a)}~The distribution $\mathcal R\equiv1/\sigma_{\ensuremath{t\bar{t}}\xspace+1\rm jet} \cdot \ensuremath{{\rm d}} \sigma_{\ensuremath{t\bar{t}}\xspace+1\rm jet}/\ensuremath{{\rm d}} \rho_s$ in $pp$ collisions at \ensuremath{\sqrt s=7~\TeV}\xspace with the ATLAS detector~\cite{bib:mtpole_ttj}, compared to NLO predictions~\cite{bib:mtpole_ttj_xsec}.
{\bf(b)} The distribution of $\ensuremath{{\rm d}}\ensuremath{\sigma_{t\bar t}}\xspace/\ensuremath{{\rm d}} p_{T,t/\bar t}(\ensuremath{m_t}\xspace)$ in $p\bar p$ collisions at \ensuremath{\sqrt s=1.96~\TeV}\xspace with the D0 detector~\cite{bib:mtpole_ttdiff}, compared to NNLO predictions~\cite{bib:mtpole_ttdiff_xsec}.
Both distributions are shown at parton level, after corrections for detector, PS, and hadronisation effects.
}
\end{figure}
\section{Conclusions}
I presented recent measurements of the top quark mass, a fundamental parameter of the SM. The most precise single measurements at the LHC and the Tevatron of respectively $\ensuremath{m_t}\xspace=172.35\pm0.16\ensuremath{{\rm(stat\!+\!JES)}}\xspace\pm0.48\ensuremath{{\rm(syst)}}\xspace$~GeV and $\ensuremath{m_t}\xspace=174.98\pm0.58\ensuremath{{\rm(stat\!+\!JES)}}\xspace\pm0.49\ensuremath{{\rm(syst)}}\xspace~\ensuremath{\textnormal{GeV}}\xspace$ are performed by the CMS and D0 Collaborations in the \ensuremath{\ell\!+\!{\rm jets}}\xspace channel, corresponding to a relative precision of 0.30\% and 0.43\%. The precision of \ensuremath{m_t}\xspace measurements in the pole scheme is improved to 1.3\% due to the advent of new theory calculations and experimental approaches.
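The quoted relative precisions follow, assuming the statistical and systematic components are combined in quadrature:
\[
\frac{\sqrt{0.16^2+0.48^2}}{172.35}\approx 0.30\%,
\qquad
\frac{\sqrt{0.58^2+0.49^2}}{174.98}\approx 0.43\%.
\]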
I would like to thank my colleagues from the ATLAS, CDF, CMS, and D0 experiments for their help in preparing this article, the staffs at CERN and Fermilab together with their collaborating institutions, as well as the relevant funding agencies.
\newpage
\input{bib.tex}
\end{document}
\section{Introduction}
'A meme is an idea, behavior, or style that spreads from person to person within a culture often with the aim of conveying a particular phenomenon, theme, or meaning represented by the meme.' \cite{wiki}, \cite{mw} \\ \\
Memes are ubiquitous in today’s day and age; their language and ideologies are in constant flux. Memes come in almost every form of media, with new formats constantly evolving. Primarily they function as a medium for humor to be shared, utilizing cultural (especially subcultural) themes. However, they can also be manipulated to further political ideals, magnify echo chambers and antagonize minorities. Memes are their own form of communication and have truly captured this generation.
As AI grows in leaps and bounds it requires new and challenging tasks. The contemporary relevance of memes and the high level of understanding required to generate them motivate this project \cite{CS230}. \\ \\
We approach this task by considering only the image-with-caption class of meme, an example of which is shown in Fig.1. This reduces the problem greatly and allows for relatively simple collection of datasets. In this paper, we specifically refer to meme generation as the task of generating a humorous caption in a manner that is relevant to the initially provided image, which can be a meme template or otherwise. We apply an encoder-decoder image captioning system similar to the one outlined in \cite{showtell} \cite{tensorflow}, consisting of a CNN image embedding initial stage, followed by an LSTM RNN for language generation. We introduce and test several variants of the LSTM model and evaluate their outputs. \\ \\
Evaluation of generated meme quality is difficult to reliably automate. We evaluate and fine-tune our models using perplexity as a measure for the language modeling task, which is in high correlation with the BLEU score \cite{showtell}, and support our quantitative evaluations by utilizing human testers. Testers are asked to attempt to differentiate generated memes from real ones and/or rank generated memes on their hilarity \cite{CS230}, because at the end of the day their purpose is to be funny.
\begin{figure}[h]
\captionsetup{width=.8\linewidth}
\begin{center}
\includegraphics[scale=0.35]{Meme.eps}
\end{center}
\centering
\caption{\fontsize{11}{13}\selectfont A meme produced on \cite{memegenerator}, utilizing the popular Boromir format.}
\end{figure}
\section{Background/Related Work}
\subsection{Image Captioning Models}
The advent of sequence-to-sequence machine translation models \cite{sutskever_sequence_2014} established the idea of encoding information (such as a French sentence) and using it as an input to an RNN that generates language. It was not long before this encoder-decoder framework was extended to image captioning \cite{showtell}. \\ \\
Authors of \cite{showtell} introduce a widely recognized model for image captioning, which constitutes the backbone of our meme generation model. Accordingly, this model consists of an encoder-decoder scheme. The encoder is a deep convolutional neural network (CNN) that takes images as inputs and produces fixed-length vector embeddings which are then fed into the decoder. The implementation of the decoder architecture \cite{showtell} begins with a trainable fully connected layer that takes the image embeddings from the encoder and maps them to the word embedding space. The output of this fully connected layer constructs the initial state of the RNN network which is used for language modeling, i.e. caption generation. Authors choose to use a Long Short Term Memory (LSTM) network as a variant of RNNs due to their well established success on sequence tasks \cite{showtell}. \\ \\
This type of image captioning model has been further improved recently by using bi-directional LSTMs \cite{bi} and including attention mechanisms \cite{xu_show_2015}, discussed below. Although these models perform very well on metrics such as BLEU for factual descriptions of images, there has been little work on generating humorous captions. Models such as StyleNet \cite{Stylenet} have attempted to produce humorous captions using an encoder-decoder architecture with limited success.
Successful meme generation requires a diverse range of humorous captions for the same image which are related to concepts portrayed by the image, not necessarily the content of the image itself. To achieve this we make use of much of the previous work above while incorporating our own ideas.
\subsection{Recurrent Neural Networks (RNNs) for Language Modeling Tasks}
RNNs and their variants are known to produce state-of-the-art results in sequential NLP tasks such as language modeling and machine translation. The authors of \cite{mikolov} and \cite{karpathy} discuss the success of RNNs in sequential models, where the input data does not have a fixed size. Among different types of RNNs, LSTMs \cite{lstm} are known to provide very satisfying results due to the fact that they employ "gating mechanisms" to remember data from long periods of time. The LSTM cells that we also use in our model due to same motivations operate based on the following equations \cite{showtell}:
\begin{align*}
i_t &= \sigma (W_{ix}x_{t} + W_{im}m_{t-1}) \\
f_t &= \sigma (W_{fx}x_{t} + W_{fm}m_{t-1}) \\
o_t &= \sigma (W_{ox}x_{t} + W_{om}m_{t-1}) \\
c_t &= f_t \circ c_{t-1} + i_t \circ \tanh(W_{cx}x_{t} + W_{cm}m_{t-1}) \\
m_t &= o_t \circ c_t \\
p_{t+1} &= \textrm{softmax}(m_t)
\end{align*}
where $f$ is the forget gate, $i$ the input gate, $o$ is the output gate, $m$ is the memory output and $W$ are the trainable matrices. The word prediction is made through a softmax layer which outputs a probability distribution for each word in the vocabulary.
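As a concrete illustration, one step of these equations can be written as a minimal NumPy sketch (bias terms are omitted, following the equations above; the weight matrices in \texttt{W} are placeholders):
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, m_prev, c_prev, W):
    # W is a dict holding the trainable matrices W_ix, W_im, ...
    i = sigmoid(W['ix'] @ x_t + W['im'] @ m_prev)   # input gate
    f = sigmoid(W['fx'] @ x_t + W['fm'] @ m_prev)   # forget gate
    o = sigmoid(W['ox'] @ x_t + W['om'] @ m_prev)   # output gate
    c = f * c_prev + i * np.tanh(W['cx'] @ x_t + W['cm'] @ m_prev)
    m = o * c                                       # memory output
    return m, c   # p_{t+1} is the softmax over the vocabulary logits

d = 300
W = {k: np.random.randn(d, d) * 0.01 for k in
     ['ix', 'im', 'fx', 'fm', 'ox', 'om', 'cx', 'cm']}
m, c = lstm_step(np.zeros(d), np.zeros(d), np.zeros(d), W)
\end{verbatim}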
\subsection{Pretrained GloVe Vectors}
Using vector embeddings to represent words is a vital concept to capture semantic similarities in various NLP tasks. Therefore, we rely on vector embeddings that have been previously trained on a very large corpus to capture the semantic connections necessary for our text generation task. In this context, pretrained GloVe \cite{glove} word embeddings constitute the most suitable choice of word representation for our project, based on the fact that they have been trained on very large corpora, offering an option of words from Common Crawl with 42 billion tokens, a 1.9 million word uncased vocabulary and 300 dimensional embeddings \cite{gloveembed}. Our choice of these vector embeddings relies heavily on the words included in our meme caption dataset. Memes often include informal and slang words, and our analysis of the Common Crawl GloVe dictionary confirms that most of those words are available as pretrained GloVe embeddings in the crawl. Our incorporation of the GloVe vectors into the model will be explained in the following sections.
\subsection{Attention Mechanisms for RNNs}
Attention is one of the most revolutionary concepts introduced to deep learning with state-of-the-art results. In sequential NLP tasks such as language modeling/text generation and machine translation, attention offers a solution to the bottleneck problem which occurs from using fixed-length vectors for long input sequences \cite{bahdanau}. The general idea of attention is for the decoder to be able to pick up relevant embeddings in the encoder without running into a memory issue. Two of the most common variants of attention are introduced by Bahdanau et al. \cite{bahdanau} and Luong et al.\cite{attention}, the latter of which offers adjustments to the Bahdanau et al. attention model, and we choose to proceed with this attention mechanism in one of the variants of our model. \\
\section{Approach}
\subsection{Dataset}
The structure of our dataset has a significant influence on our model design and therefore shall be introduced before a thorough model description. Our dataset consists of approximately 400,000 image, label and caption triplets with 2600 unique image-label pairs, acquired from \cite{memegenerator} using a python script that we wrote. Labels are short descriptions referring to the image, i.e. the meme template, and are the same for identical images. Accordingly, each image-label pair is associated with several (roughly 160) different captions. A sample from our dataset is shown in table 1.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ c p{5cm} p{5cm} }
\toprule
Image & Label & Caption \\
\cmidrule(r){1-1}\cmidrule(lr){2-2}\cmidrule(l){3-3}
\raisebox{-\totalheight}{\includegraphics[scale=0.4]{success-kid2.eps}}
&
\begin{itemize}[topsep=0pt]
\item success kid
\end{itemize}
&
\begin{itemize}[topsep=0pt]
\item Didnt study for a test still get a higher grade than someone who did
\item Ate spaghetti with a white shirt on no stains
\item New neighbors Free Wifi \ldots
\end{itemize} \\
\cmidrule(r){1-1}\cmidrule(lr){2-2}\cmidrule(l){3-3}
\raisebox{-\totalheight}
{\includegraphics[scale=0.4]{awkward-seal2.eps}}
&
\begin{itemize}[topsep=0pt]
\item awkward seal
\end{itemize}
&
\begin{itemize}[topsep=0pt]
\item You laugh when your friend says something He was being serious
\item took a photo camera the wrong way
\item Goes to friends house Friend isn't there yet \ldots
\end{itemize}
\\ \bottomrule
\end{tabular}
\caption{Sample dataset}
\label{tbl:myLboro}
\end{center}
\end{table}
Before training our model with the captions, we performed a preprocessing on our dataset. Each word in the caption was lowercased to match the GloVe format, punctuation marks were assigned their own tokens, and start, end and unknown tokens were introduced into the vocabulary with randomly initialized word vectors. We performed a cut on the words that appear in the captions: every word that appears fewer than 3 times in the corpus is set to UNK. Captions with more than 2 UNKs are removed from the dataset.
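A minimal sketch of this preprocessing pipeline (the token names and the rare-word cut follow the description above; the sample caption list is a placeholder):
\begin{verbatim}
import re
from collections import Counter

captions = ["Ate spaghetti with a white shirt on, no stains!"]  # placeholder

def tokenize(caption):
    # lowercase to match GloVe; punctuation marks get their own tokens
    return re.findall(r"[a-z0-9']+|[.,!?;:]", caption.lower())

counts = Counter(w for c in captions for w in tokenize(c))

def preprocess(caption):
    toks = [w if counts[w] >= 3 else '<unk>' for w in tokenize(caption)]
    if toks.count('<unk>') > 2:
        return None                      # caption removed from dataset
    return ['<start>'] + toks + ['<end>']
\end{verbatim}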
\subsection{Model Variants}
\subsubsection{Encoder}
The motivation behind the encoder is to provide a meaningful initial state to the decoder to initiate the text generation process. To capture the image embeddings, we rely on the CNN model Inception-v3 \cite{inception-v3},\cite{showtell} pretrained on the ILSVRC-2012-CLS image classification dataset. We take the last hidden layer of the CNN as the encoder output. When a meme template is run through the Inception model, the output is a fixed length vector, image embedding, that captures the content of the image. Note that this model outputs a 2048 dimensional vector which results in a mismatch with our word embedding space that is 300 dimensional. Hence, we project the image embeddings into the word embedding space using a trainable fully connected layer.
In our project, we implement 3 different variants of the proposed encoder scheme. The first variant explained above just uses the meme templates and disregards the labels completely. Hence, the inputs to the decoder solely include encodings from the images. Only an image is required to generate a meme. This model and its results can be seen in \cite{CS230}.
The second variant of the encoder includes the meme labels. In this model, we first obtain the image embeddings by running the images though Inception as previously. Now, we also get the pretrained GloVe embedding for each word present in the meme label and compute their average. This averaged vector is concatenated to the image embedding vector and then fed into a trainable fully connected layer.
We average rather than concatenate as this keeps size constant. This was motivated by the assumption that the label words contain semantic encodings that can be mapped into word embedding space to deliver contextual meaning for the language generation that occurs in the decoder. The output of the fully connected layer is fed into the decoder as the initial state in the same manner as the first encoder scheme. Fig. 2 shows this architecture.
The third variant of our model again includes the meme labels, but makes a slight adjustment to the encoder to include attention mechanism. In this model variant, we obtain the image embeddings and put them through a fully connected layer, but this time we extend the encoder with an additional LSTM network before the decoder. This LSTM network takes the projected image embedding as the initial state and runs the GloVe embeddings of the labels through the LSTM. We perform attention on the encoder LSTM cells using Luong attention mechanism \cite{attention}. The output of this additional LSTM network serves as the initial state of the decoder.
In light of these encoder schemes, the equations that provide the initial state for the decoder in each model variant are shown below. Let $p \in \mathbb{R}^{2048} $ be the inception output corresponding to a meme template, and let $q \in \mathbb{R}^{300}$ be the decoder initial state. Also let $e_i \in \mathbb{R}^{300}$ represent the GloVe embeddings of the label words.
\begin{align*}
&(1)\quad q=W_1p+b_1 \\
&(2)\quad q=W_2(p || \frac{e_1+...+e_n}{n})+b_2 \\
&(3)\quad x_t=W_3p+b_3, \quad q=m_t
\end{align*}
where $x_t$ and $m_t$ follow the variable naming conventions of the LSTM equations in \cite{showtell}, introduced in section 2.2 of our paper.
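As an illustration, the second encoder variant (Eq. 2) amounts to the following sketch, where the trainable weights and the GloVe lookup table are assumed to be available:
\begin{verbatim}
import numpy as np

def encoder_variant2(p, label_words, glove, W2, b2):
    # p: 2048-d Inception embedding; glove: word -> 300-d vector
    e_avg  = np.mean([glove[w] for w in label_words], axis=0)
    concat = np.concatenate([p, e_avg])     # 2348-d vector
    return W2 @ concat + b2                 # 300-d decoder initial state

rng = np.random.default_rng(0)
glove = {'success': rng.normal(size=300), 'kid': rng.normal(size=300)}
W2, b2 = rng.normal(size=(300, 2348)) * 0.01, np.zeros(300)
q = encoder_variant2(rng.normal(size=2048), ['success', 'kid'],
                     glove, W2, b2)
\end{verbatim}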
\subsubsection{Decoder}
The decoder consists of a unidirectional LSTM network that operates according to the equations described in section 2.2. All LSTM cells share the same trainable variables. One of the modifications that we introduce to the original Show and Tell Model \cite{showtell} is the use of pretrained GloVe embeddings rather than randomly initialized word vectors. In addition, we still leave the word embedding vectors trainable based on the fact that we have a very large dataset with approximately 40k unique words and that the semantics of memes are very idiosyncratic so may not be entirely captured by the pretrained vectors. We find both qualitatively and quantitatively that allowing word vectors to be trainable improves our results.
In the previous section, we introduced 3 different variants to the encoder scheme. On the other hand, we use the same decoder construction for each of the encoders, except for the last model that incorporates an attention mechanism. Furthermore, we initially constructed each variant as a single-layered model, and then increased the number of layers to 2 and 3 to perform evaluations on the effect of model depth on the language modeling task.
\begin{figure}
\captionsetup{width=1.0\linewidth}
\begin{center}
\includegraphics[scale=0.4]{model2.eps}
\end{center}
\centering
\caption{\fontsize{10}{13}\selectfont A visualization of our second encoder-decoder model variant. We refer to GloVe word embeddings as $e_i$ and the probability distribution obtained at each LSTM time-step as $p_j$.}
\end{figure}
\subsubsection{Inference and Beam Search}
In the absence of label words that describe an image, inference, i.e.\ text generation, is initiated with the image embedding. After each time-step, corresponding to a single cell, a softmax probability distribution is computed over the words in the vocabulary. The output of an LSTM cell is fed sequentially into the next cell to generate the next word based on another softmax probability distribution. For original and humorous meme generation, greedy search is completely ineffective.
Furthermore, we found that inference based on standard beam search, in which we keep $k$ outputs in memory at each time-step, sequentially compute their "descendants" and then finally output the $k$ sentences with the overall highest total probability scores, gives adequate but non-optimal results. In order to generate the freshest memes and diverse outputs for the same template we implement a temperature function into the beam search algorithm. Instead of selecting the top k most probable words, the k words are sampled from a probability distribution of the top 100 words, where the temperature of the distribution is a hyperparameter. A probability distribution $p$ can be modified with temperature $T$ by the function $f$ shown below.
\begin{equation}
f(p)_i = \frac{p_i^{1/T}}{\Sigma_jp_j^{1/T}}
\end{equation}
where $T=1$ corresponds to unchanged probabilities, high $T$ leads to a very flat distribution (random pick) and low $T$ leads to argmax (greedy search).
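A sketch of this temperature-modified sampling step inside beam search (the top-100 restriction and the sampling of k candidates follow the description above):
\begin{verbatim}
import numpy as np

def apply_temperature(p, T):
    q = p ** (1.0 / T)          # T=1: unchanged; low T -> near-greedy
    return q / q.sum()

def sample_candidates(p, k, T, rng=None):
    rng = rng or np.random.default_rng()
    top = np.argsort(p)[-100:]              # top-100 words
    q   = apply_temperature(p[top], T)
    return rng.choice(top, size=k, replace=False, p=q)

p = np.random.dirichlet(np.ones(40000))     # toy vocabulary distribution
print(sample_candidates(p, k=3, T=0.7))
\end{verbatim}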
In the presence of label vectors that correspond to a meme template, i.e. using the second or third model variants for inference, the users can provide additional context for the text generation by providing labels of their choice, giving them an additional handle on the generated meme content.
\section{Experiments}
\subsection{Training}
For each model variant, we performed training using 1, 2 and 3 layered versions of the LSTM decoder network. In addition, we tested both Momentum and SGD optimizers. A thorough hyper-parameter search was done to find the best learning rate schedule, batch size and LSTM/attention unit size. The evaluation metric during this search was perplexity, introduced in section 4.2 and displayed for both models in fig.3. Final hyperparameter choices are also shown in fig.3. In practice, we observed no significant difference in the perplexity score or the output quality of the memes when we increased the LSTM decoder network layer depth from 1 to 2 and eventually 3.
\begin{figure}[h!]
\captionsetup{width=1.0\linewidth}
\begin{center}
\includegraphics[scale=0.5]{perp2.eps}
\end{center}
\centering
\caption{\fontsize{10}{13}\selectfont Evaluation perplexity during training for both model variants, together with the final hyperparameter choices.}
\end{figure}
\subsection{Results and Evaluation}
For the tuning of hyperparameters and to quantitatively evaluate our model, we built an evaluation set of 105 template-label-caption examples taken from the training set which have repeated formats. E.g.\ in Fig.1 the Boromir meme almost always starts the caption with `one does not simply', so the Boromir image + the label `boromir' + `one does not simply' would constitute one example in the eval set.
We calculate the perplexity scores of the model on this eval set. Perplexity (PP) is a measure of the inverse probabilities of predicting the next word in the example caption (C):
\begin{equation}
PP(C) = \sqrt[N]{\prod_{i=1}^{N}\frac{1}{P(w_i|w_1...w_{i-1})}}.
\end{equation}
$\left\{w_1 ...w_N \right\}$ are the words in caption C. The probabilities $P$ can be found from the cross-entropy loss calculated on the LSTM output. Low perplexity means the model frequently assigns high probabilities to the given evaluation example captions. This metric tells us how well the model is learning to caption images of different formats with the correct style. It is a limited metric for success, as it tells us nothing about whether the captions are humorous, original and varied. To address this, we additionally implement a function to check whether generated captions, or exceptionally similar captions, are in the training set. For our final test we show 20 different memes to 5 people from diverse backgrounds and note if they can differentiate them from real (training set) memes of the same format and from random text generated memes. The same people also ranked the memes being shown on how funny they found them on a scale of 0-10. The final results of these analyses are shown in fig.3 and table 2.
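As a concrete illustration, the following sketch (our own helper; in practice the per-word probabilities $P(w_i|w_1...w_{i-1})$ are read off the LSTM softmax outputs) evaluates the perplexity formula above in log space for numerical stability.
\begin{verbatim}
import numpy as np

def perplexity(word_probs):
    # PP(C) = (prod_i 1/P(w_i | w_1..w_{i-1}))^(1/N), computed via logs.
    logp = np.log(np.asarray(word_probs))
    return float(np.exp(-logp.mean()))

# Example: per-word probabilities for a 5-word caption.
print(perplexity([0.4, 0.6, 0.3, 0.5, 0.45]))  # ~2.3
\end{verbatim}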
Looking at our model variants' results in table 2, there are no significant differences between the two models that make use of the meme labels, both in terms of perplexity and human assessments. Fig.4 shows some example generated memes from the variants, where we can see both models generalize relatively well to unseen images. The average meme produced by both is difficult to differentiate from a real meme, and both variants scored close to the same hilarity rating as real memes, though this is a fairly subjective metric. The attention variant decreases the number of caption copies from the dataset compared to the GloVe average model, but reduces performance on the human-tested metrics. This might be expected as the attention variant places more emphasis on the labels, so varying the labels for new images could encourage more original memes. However, there are only 2600 unique labels in the training set, so it is difficult for the model to generalize to new labels, thus reducing performance.
Compared with the model variant that employs an image-only encoder \cite{CS230}, conditioning on the labels did allow for more varied meme content. We found that the given label did not provide a good handle on the content of the generated meme caption as can be seen in fig.4: the unseen generated memes are not related to the labels, only to the images. Again, this is to be expected considering there were few unique labels in the training set and these were only broadly related to the captions.
\begin{figure}
\captionsetup{width=1.0\linewidth}
\begin{center}
\includegraphics[scale=0.45]{Mememss.eps}
\end{center}
\centering
\caption{\fontsize{10}{13}\selectfont Original memes generated by both model variants. The input labels for the seen images are their labels from the training set while for the unseen image we use `AI is the new electricity' for all 4.}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{c|c c c c}
\hline
Model & \% in data & Perplexity & Hilarity & Differentiability \\
\hline\hline
\multicolumn{5}{c}{\textbf{Seen Images}} \\
\hline
Attention & 16 & 2.02 & 6.0 & 70\% \\
\hline
GloVe Averages & 18 & 2.28 & 6.8 & 63\% \\
\hline
\multicolumn{5}{c}{\textbf{Unseen Images}} \\
\hline
Attention & 18 & - & 5.5 & - \\
\hline
GloVe Averages & 26 & - & 6.9 & - \\
\end{tabular}
\caption{Real memes scored 7.0 on average in the hilarity test (0--10 scale). Differentiability shows how often our human testers were able to distinguish real memes from generated ones; if indistinguishable this score would be near 50\%. Since memes are generated by beam search, one has to manually select a caption from the $k$ generated, introducing a slight selection bias.}
\end{table}
\section{Conclusion}
In this paper, we have successfully demonstrated how to use a neural network model to generate memes given an input image. We suggested different variants of encoder schemes that can operate with or without labels, and reported a fine-tuned LSTM network for language modeling. The results indicate that the generated memes cannot, in general, be easily distinguished from naturally produced ones in human evaluations.
We acknowledge that one of the greatest challenges in our project and other language modeling tasks is to capture humor, which varies across people and cultures. In fact, this constitutes a research area of its own, as seen in publications such as \cite{humor}, and accordingly new research ideas on this problem should be incorporated into the meme generation project in the future. One example would be to train on a dataset that includes the break point in the text between the upper and lower captions of the image. These were chosen manually here and are important for the humor impact of the meme. If the model could learn the break points, this would be a major improvement and could fully automate meme generation.
Another avenue for future work would be to explore visual attention mechanisms that operate on the images and investigate their role in meme generation tasks, based on publications such as \cite{showattendtell}, \cite{textguided} and \cite{visualattn}.
Lastly we note that there was a bias in the dataset towards expletive, racist and sexist memes, so yet another possibility for future work would be to address this bias.
\section{Introduction}
\label{sec:Introduction}
Core-collapse supernovae (SNe) are believed to originate from evolved, massive progenitors (initial mass $\gtrsim 8$--10\,M$_{\odot}$) whose iron core undergoes gravitational collapse. Among them, Type II-Plateau (II-P) SNe show prominent hydrogen in their spectrum and a plateau in the optical light curves. Type IIb SNe have hydrogen in the spectrum initially, and a H-deficient spectrum at later times. Finally, Types Ib and Ic show no evidence for hydrogen at any time. The H-deficient/H-poor core-collapse SNe are thought to be produced by progenitors stripped of their hydrogen (SN~Ib) and possibly helium (SN~Ic) envelopes prior to exploding \citep[for a review, see][]{Filippenko1997}. Due to the stiff dependence of mass loss on luminosity/mass, a sequence of increasing main-sequence mass may be pictured going from progenitors of SNe~II-P, IIb, Ib, and Ic \citep[][]{Heger2003,Crowther2007,Georgy2009}. Rotation, metallicity, and binarity also affect the mass loss \citep[e.g.,][]{Pod1992,Meynet1994,Meynet2000}.
The viability of the standard explosion mechanism for stars of increasing mass is challenging, given that their higher mass cores are more bound, and their SN shocks subject to a very high accretion rate \citep[e.g.,][]{Burrows2007}. Binary-star evolution has been studied as a channel to circumvent this caveat \citep[e.g.,][]{Utrobin1994,Woosley1994,Fryer2007,Yoon2010,Smith2011}. The basic ingredient of this scenario is mass loss through transfer onto a companion. In this case, the mass-loss luminosity scaling does not apply, and much lower mass progenitors can explode as H-poor cores.
Recently, \citet{Dessart2011} published simulations of SN light curves resulting from explosions of SN~IIb/Ib/Ic progenitors. All SNe show a $\sim 10$-day-long post-breakout plateau with a luminosity of $(1-5)\times10^7$\,L$_{\odot}$. Analytical estimates for the early-time ($t\lesssim 1-2$\,d since explosion) post-breakout emission have been provided by \citet{Rabinak2011} and \citet{Nakar2010}.
In this Letter, we present the discovery of a type Ic SN, PTF\,10vgv, detected by the Palomar Transient Factory\footnote{http://www.astro.caltech.edu/ptf/} \citep[PTF;][]{Law2009,Rau2009} (\S 2). We report its spectral classification (\S 3) and the radio follow-up observations (\S 4). We constrain the radius of the stellar progenitor of this SN by comparing our tight pre-discovery upper-limits with the predictions of several models \citep{Dessart2011,Rabinak2011,Nakar2010} (\S 5).
\section{Discovery and $R$-band photometry}
\label{sec:Observations}
On 2010 September 14.1446 (UTC times are used throughout), we discovered a Type Ic SN, PTF\,10vgv, via the automated Oarical software \citep{Bloom2011}. The SN was visible at a magnitude of $R \approx 19.9$ (Table 1 and Figure \ref{zoom}) in an image (60\,s exposure) taken with the Palomar Oschin Schmidt 48-inch telescope (P48). It was not seen in previous images of the same field taken on 2010 September 12.4830, down to a limiting magnitude of $R > 20.2$. The SN J2000 position is $\alpha = 22^{\rm h}16^{\rm m}01.17^{\rm s}$, $\delta = +40^\circ52'03.3''$ \citep{ATEL2914}, at an angular distance of $\sim 5''$ from the galaxy SDSS\footnote{Sloan Digital Sky Survey \citep{York2000}.} J221601.54+405206.5.
\begin{figure}
\begin{center}
\includegraphics[height=6.cm]{Fig1.ps}
\caption{Discovery image of PTF\,10vgv (marked with a red arrow) in the $R$ band; the host galaxy is also visible. Circles of $5''$ radius mark the positions of the ten reference stars used for calibration of the P48 photometry (see text). \label{zoom}}
\end{center}
\end{figure}
P48 observations were obtained with the Mould-$R$ filter (Table 1 and Figure \ref{Fig2}). A high-quality image produced by stacking several images of the same field was used as a reference and subtracted from the individual images. Photometry was performed with an aperture of $2''$ radius relative to the $r$-band magnitudes of ten SDSS reference stars in the field (Figure \ref{zoom}), applying color corrections \citep{Corsi2011}. Aperture corrections were applied to account for systematic errors as well as errors introduced by the subtraction process \citep{Corsi2011}.
\begin{figure}
\begin{center}
\includegraphics[width=8.cm]{Fig2.ps}
\caption{\textit{Top:} P48 $R$-band light curve of PTF\,10vgv (black dots) corrected for Galactic extinction. P48 pre-discovery upper limits derived using 60\,s exposure images are plotted as black triangles. Deeper upper limits obtained by coadding the pre-explosion images are plotted as green triangles, with the green horizontal lines indicating the time range spanned by the coadded images. For comparison, we also plot the light curve of SN\,1994I (dashed line), rescaled to the redshift of PTF\,10vgv. \textit{Bottom:} Schematic representations of the bolometric light curves of models Bmi18mf3p79z=1 (dashed line) and Bmi25mf5p09z1 (dash-dotted line) of \citet[][]{Dessart2011} are compared with the PTF\,10vgv bolometric light curve (solid line). The black triangle and solid horizontal line indicate our deepest pre-explosion coadded upper limit (see upper panel), rescaled to account for the bolometric correction (and for Galactic extinction). See \S 5 for discussion.
\label{Fig2}}
\end{center}
\end{figure}
\begingroup
\renewcommand{\thefootnote}{\alph{footnote}}
\begin{longtable}{ccc}
\caption{P48 observations of PTF\,10vgv in $R$-band. This Table is published in its entirety in the electronic edition of this journal.\label{Tab1}}\\
\hline
Start time & Exposure & Mag\tablenotemark{a}\\
JD-2455453.6446 (d) & (s)& [mag]\\
\hline
-6.776 & 600 & $<21.2$\tablenotemark{b}\\
-6.776 & 60 & $<20.8$\tablenotemark{b}\\
-6.732 & 60 & $<21.1$\tablenotemark{b}\\
-5.732 & 60 & $<20.9$\tablenotemark{b}\\
-5.688 & 60 & $<20.8$\tablenotemark{b}\\
-3.811 & 60 & $<20.6$\tablenotemark{b}\\
-3.811 & 360 & $<20.8$\tablenotemark{b}\\
-3.766 & 60 & $<20.6$\tablenotemark{b}\\
-2.814 & 60 & $<20.8$\tablenotemark{b}\\
-2.768 & 60 & $<21.4$\tablenotemark{b}\\
-1.706 & 60 & $<20.4$\tablenotemark{b}\\
-1.662 & 60 & $<20.2$\tablenotemark{b}\\
0.000 & 60 & $19.897\pm0.079$\\
0.044 & 60 & $19.788\pm0.081$\\
0.996 & 60 & $18.489\pm0.053$\\
1.043 & 60 & $18.455\pm0.037$\\
1.994 & 60 & $17.742\pm0.032$\\
2.068 & 60 & $17.803\pm0.035$\\
3.019 & 60 & $17.283\pm0.033$\\
3.064 & 60 & $17.270\pm0.034$\\
4.075 & 60 & $16.904\pm0.024$\\
4.122 & 60 & $16.879\pm0.023$\\
5.073 & 60 & $16.676\pm0.025$\\
5.117 & 60 & $16.660\pm0.034$\\
6.065 & 60 & $16.496\pm0.026$\\
6.108 & 60 & $16.464\pm0.046$\\
7.078 & 60 & $16.360\pm0.031$\\
7.122 & 60 & $16.349\pm0.025$\\
8.058 & 60 & $16.297\pm0.025$\\
8.106 & 60 & $16.297\pm0.030$\\
9.148 & 60 & $16.224\pm0.054$\\
9.193 & 60 & $16.238\pm0.049$\\
10.149 & 60 & $16.230\pm0.039$\\
10.193 & 60 & $16.204\pm0.050$\\
11.060 & 60 & $16.222\pm0.045$\\
11.103 & 60 & $16.223\pm0.042$\\
\hline
\footnotetext[1]{Magnitudes are not corrected for Galactic extinction, and are calibrated to the SDSS $r$ (SDSS is estimated to be on the AB system within $\pm 0.01$\,mag in the $r$ and $i$ bands).}
\footnotetext[2]{$3\sigma$ upper limit computed by simulating stars at the position of PTF\,10vgv, to account for the presence of the underlying host galaxy.}
\end{longtable}
\endgroup
\renewcommand{\thefootnote}{\arabic{footnote}}
\section{Spectral classification}
\label{spettrale}
\begin{figure}
\begin{center}
\includegraphics[height=6.cm]{Fig3.ps}
\caption{Spectra of PTF\,10vgv from Lick/Kast (black and red lines), HET/LRS (green line; telluric absorption lines not removed), and P200/DBSP (orange and blue lines). The approximate epoch since the $R$-band peak is also indicated for each spectrum. For comparison, the spectrum of SN\,1994I around maximum light is also shown (magenta). All data are available in digital form from the Weizmann Institute of Science Experimental Astrophysics Spectroscopy System at http:$//$www.weizmann.ac.il$/$astrophysics$/$wiseass$/$. \label{spectra}}
\end{center}
\end{figure}
After rapidly identifying PTF\,10vgv, we triggered our follow-up programs \citep{Gal-Yam2011}.
On 2010 September 16 and October 1, we observed PTF\,10vgv with the dual-arm Kast spectrograph \citep{ms93} on the 3\,m Shane telescope at Lick Observatory (Figure \ref{spectra}). We used a $2''$ wide slit, a 600/4310 grism on the blue side, and a 300/7500 grating on the red side, yielding full width at half-maximum intensity (FWHM) resolutions of $\sim 4$\,\AA\ and $\sim 10$\,\AA, respectively. All observations were aligned along the parallactic angle to reduce differential light losses \citep{f82}. Respective exposure times and air masses were 1800\,s and 1.03 for the first epoch, and 2100\,s and 1.00 for the second epoch. The spectra were reduced using standard techniques \citep[e.g.,][]{fps+03} based on \texttt{IRAF} and \texttt{IDL} routines. Using the Kast spectra we derive a redshift of $z=0.0142\pm0.0002$ (using H$\beta$, \ion{O}{3}, H$\alpha$, \ion{N}{2}, and \ion{S}{2} lines) for PTF\,10vgv.
On 2010 September 27, in between the two epochs of the Kast observations, we observed PTF\,10vgv with the Low Resolution Spectrograph (LRS) mounted on the Hobby-Eberly Telescope (HET), using the gr300 grating and GG385 filter. We applied bias- and flat-field corrections using daytime calibration frames, and removed cosmic rays using the \texttt{IRAF} task ``L.A. Cosmic'' \citep{vandokkum01}. The spectrum was extracted and wavelength-calibrated using the ``apall'' and ``identify'' \texttt{IRAF} tasks, respectively, and had exposure time 450\,s at a mean airmass of 1.26.
On 2010 October 30 and December 07, we observed PTF\,10vgv with the Double Beam Spectrograph \citep[DBSP;][]{Oke1982} on the Palomar 200-inch telescope (P200; Figure \ref{spectra}). We used the 600/4000 and the 158/7500 gratings for the blue and red cameras, respectively, with a D55 dichroic, resulting in a spectral coverage of $\sim$ 3500--9500\,\AA. The spectra were reduced using a custom pipeline combining \texttt{IRAF} and \texttt{IDL} scripts. Respective exposure times and air masses were 600\,s and 1.1 for the first epoch, and 350\,s and 1.04 for the second epoch.
We measured the velocity of the \ion{Si}{2} absorption at 6355\,\AA, which traces reasonably closely the position of the photosphere \citep[e.g.,][]{Tanaka2008}, using the spectra of PTF\,10vgv taken on September 16, September 27, and October 1. The velocities are $16\times10^{3}$\,km\,s$^{-1}$, $9\times10^{3}$\,km\,s$^{-1}$, and $6\times10^{3}$\,km\,s$^{-1}$, respectively, for the three epochs. These are comparable to those of the ``normal'' SN\,Ic\,1994I at similar epochs \citep[][]{Sauer2006}, $\sim 0.7$ times those measured for SN\,2006aj \citep[associated with X-ray flash 060218;][]{Mazzali2006a}, and smaller than those of the gamma-ray burst (GRB)-associated SN\,1998bw \citep{Iwamoto1998} and SN\,2003dh (see Figure 5 in Corsi et al. 2011 and references therein). The broad-line SN\,Ic\,2002ap also showed higher velocities \citep[$\gtrsim 16\times10^{3}$\,km\,s$^{-1}$ at $\sim 1$ week after the explosion;][]{GalYam2002,Mazzali2002}.
We thus classify PTF\,10vgv as a normal Type Ic SN. A cursory examination of PTF\,10vgv spectra suggests that the blending of lines in this SN is stronger than in both SN\,2006aj and SN\,1994I, indicating that in PTF\,10vgv there may be significantly more mass at v$\approx2\times10^4$\,km\,s$^{-1}$. In Figure \ref{spectra}, we compare our spectra of PTF\,10vgv with the one of SN\,1994I around maximum light.
\section{Radio Follow-up Observations}
Starting on 2010 October 7.16, we observed the position of PTF\,10vgv (along with the necessary calibrators) with the Expanded Very Large Array \citep[EVLA;][]{Perley2009} in its C configuration, at 4.495\,GHz and 7.915\,GHz, for a total time of 30 min \citep{Corsi2010}. We detected no radio emission from the position of PTF\,10vgv, down to 3$\sigma$ limits of $120\,\mu$Jy at 4.495\,GHz and $102\,\mu$Jy at 7.915\,GHz. Based on this, we estimate the 5\,GHz spectral luminosity of PTF\,10vgv to be $\lesssim 5\times10^{26}$\,erg\,s$^{-1}$\,Hz$^{-1}$, or $\sim 100$ times below the radio luminosity of the GRB-associated SN\,1998bw \citep{Kulkarni1998} on a similar timescale. This supports the idea that PTF\,10vgv is a normal SN~Ic, rather than a GRB-associated SN. We reobserved PTF\,10vgv with the EVLA in its BnA configuration starting on 2011 May 12.52, for a total time of 1\,hr and at a central frequency of 8.46 GHz. No radio sources were detected in the error circle of PTF\,10vgv down to a 3$\sigma$ limit of $30\,\mu$Jy. EVLA data were reduced and imaged using the AIPS software package.
\section{Discussion}
The measured peak magnitude of PTF\,10vgv (see Table 1) corrected for Galactic extinction \citep[$A_R \approx 0.45$\,mag;][]{Schlegel1998} gives $M_R = -18.16\pm0.05$ mag. The peak absolute magnitude of SN\,1994I was $M_R = -17.99\pm0.48$ \citep[][]{Richmond1996}, while SN\,2006aj had $M_R = -18.81\pm0.06$ mag \citep{Mazzali2006a}. Since PTF\,10vgv is intermediate, in terms of $R$-band peak luminosity, between SN\,1994I and SN\,2006aj, we estimate its nickel mass $M_{^{56}\rm Ni, 10vgv}$ by interpolating between these two SNe \citep{Sauer2006,Mazzali2006a}, using the scaling $L_{\rm peak}\propto M_{\rm Ni} \tau_c^{-1}$ for the peak luminosity \citep[where $\tau_c$ is the light-curve peak width;][]{Arnett1982}, and considering that the PTF\,10vgv light curve is a factor of $\sim 1.25$ broader than that of SN\,1994I (while we take the same $\tau_c$ for PTF\,10vgv and SN\,2006aj). This yields $M_{^{56}\rm Ni, 10vgv} \approx 0.12$\,M$_{\odot}$.
The mass and kinetic energy of the SN ejecta scale as \citep{Arnett1982} $M_{\rm ej} \propto \tau^2_c {\rm v_{ph}}$ and $E_{\rm K} \propto \tau^2_c {\rm v^3_{ph}}~$, where ${\rm v_{ph}}$ is the photospheric velocity. Using these scalings, and considering that the photospheric velocities of PTF\,10vgv are comparable to those of SN\,1994I and $\sim 0.7$ times those of SN\,2006aj (\S \ref{spettrale}), we estimate the ejecta mass and kinetic energy of PTF\,10vgv interpolating between SN\,2006aj \citep{Mazzali2006a} and SN\,1994I \citep{Sauer2006}. We get $M_{\rm ej, 10vgv}=(1.5\pm0.3)$\,M$_{\odot}$ and $E_{\rm K, 10vgv}=(0.9\pm0.3)\times10^{51}$\,erg. This estimate may be refined through spectral modeling. Our spectral analysis suggests that a different mass-velocity distribution may be realized in PTF\,10vgv (\S \ref{spettrale}), which may lead to a larger $E_{\rm K}$ than estimated on the basis of the light-curve properties, since the latter are mostly determined by the opacity in the inner ejecta.
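The scaling arithmetic just described can be written compactly; the short sketch below (Python) is purely illustrative, with placeholder reference values rather than the actual numbers adopted from the modeling of SN\,1994I and SN\,2006aj in the works cited above.
\begin{verbatim}
# Arnett-type scalings: M_ej ~ tau_c^2 v_ph, E_K ~ tau_c^2 v_ph^3.
def scale_from_reference(m_ej_ref, e_k_ref, tau_ratio, v_ratio):
    m_ej = m_ej_ref * tau_ratio**2 * v_ratio
    e_k = e_k_ref * tau_ratio**2 * v_ratio**3
    return m_ej, e_k

# e.g., a light curve 1.25x broader than the reference, same v_ph
# (reference values here are placeholders, in Msun and 10^51 erg):
print(scale_from_reference(1.0, 1.0, tau_ratio=1.25, v_ratio=1.0))
\end{verbatim}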
Our pre-discovery upper limits can be used to constrain the radius of the stellar progenitor of PTF\,10vgv via comparison with model predictions \citep{Dessart2011,Rabinak2011,Nakar2010}. We apply a bolometric correction to our $R$-band data, computed assuming that the SN emits as a black body at temperature $T_{\rm phot}$ (and neglecting redshift corrections):
\begin{eqnarray}
\nonumber M_{\rm bol}-M_R = -2.5\,{\rm log}_{10}\left(\frac{4\pi\,(10\,{\rm pc})^2 F_0\int^{\nu_2}_{\nu_1}S(\nu)d\nu}{{\rm L}_{\odot}}\right)+M_{\rm bol, \odot}\\+2.5\,{\rm log}_{10}\left(\frac{\int^{\nu_2/kT}_{\nu_1/kT}S(x)x^3(e^{x}-1)^{-1}dx}{\pi^4/15}\right),~~~\label{bol}
\end{eqnarray}
where
$M_{\rm bol, \odot}=4.72$; $S(\nu)$ is the P48 Mould-$R$ filter transmission; $\nu_1-\nu_2=(4.1-5.3)\times10^{14}$\,Hz; and $F_{0}$ is the photometric zero-point flux ($F_0=3.631\times10^{-20}$\,erg\,cm$^{-2}$s$^{-1}$Hz$^{-1}$ for AB magnitudes). We conservatively maximize the bolometric correction setting $T_{\rm phot}\approx 10^4$\,K, the largest early-time ($t \lesssim 10$\,d since breakout) temperature predicted by the $^{56}$Ni-rich models of Dessart et al. (2011; see their Figure 2, bottom-left panel). In this way we get $M_{\rm bol}-M_R =-0.496$\,mag.
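For reference, the integral above can be evaluated numerically; the sketch below (Python) assumes a flat (top-hat) filter $S(\nu)=1$ over $(\nu_1,\nu_2)$ in place of the actual Mould-$R$ transmission curve, so it only approximates the $-0.496$\,mag quoted above.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def bol_corr(T=1.0e4, nu1=4.1e14, nu2=5.3e14):
    F0, pc, Lsun, Mbol_sun = 3.631e-20, 3.0857e18, 3.839e33, 4.72
    h_over_k = 4.7992e-11                    # h/k_B in s K
    x1, x2 = h_over_k*nu1/T, h_over_k*nu2/T  # dimensionless h nu/(k T)
    planck, _ = quad(lambda x: x**3/np.expm1(x), x1, x2)
    t1 = -2.5*np.log10(4*np.pi*(10*pc)**2*F0*(nu2-nu1)/Lsun) + Mbol_sun
    t2 = 2.5*np.log10(planck/(np.pi**4/15))
    return t1 + t2

print(bol_corr())   # ~ -0.4 mag under the top-hat approximation
\end{verbatim}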
The optical luminosity of core-collapse SNe after breakout depends on the ejecta composition (via the opacity parameter), the stellar radius, and the $E_{\rm K}/M_{\rm ej}$ ratio. A larger $E_{\rm K}/M_{\rm ej}$ ratio and a lower He fraction both increase the predicted luminosity, for a given stellar radius \citep[][Equations (25) and (29)]{Rabinak2011}.
In recent numerical simulations of core-collapse explosions of single and binary progenitors of SNe~IIb/Ib/Ic, \citet{Dessart2011} predicted the existence a $\sim 10$-day-long ($\sim 10$ times shorter than in SNe~II-P) post-breakout\footnote{The breakout of a shock through the stellar surface is predicted to be the first electromagnetic signal marking the birth of a SN \citep[e.g.,][]{Arnett1977,Falk1978,Klein1978,Chevalier1992,Waxman2007,Nakar2010,Rabinak2011}.} plateau, with a luminosity of $(1-5)\times10^7$\,L$_{\odot}$ ($\sim 10$ times smaller than in SNe~II-P). This plateau has the same origin as that observed in SNe~II-P\footnote{The plateau is associated with a cooling and recombination wave (CRW) propagating downward through the SN envelope, separating almost recombined outer layers from strongly ionized inner ones \citep[e.g.,][]{Nadyozhin2003}. During the plateau phase, the photosphere sits on the upper edge of the CRW front, whose downward speed is approximately equal to the outward expansion velocity, thus $R_{\rm phot}\approx$\,const. Since also $T_{\rm phot}\approx$\,const\,$\approx T_{\rm recomb}$ (where $T_{\rm recomb}$ is the recombination temperature), a plateau in the luminosity is expected.}, but in the case of SNe~IIb/Ib/Ic it is predicted to have a smaller duration and luminosity because of a more compact progenitor.
For PTF\,10vgv we can exclude the presence of a post-breakout plateau with luminosity greater than the one of the compact progenitor model Bmi25mf5p09z1 (Figure \ref{Fig2}, lower panel). We thus derive $R\lesssim 4.4\,{\rm R}_{\odot}$ for the radius of PTF\,10vgv progenitor. However, the stellar models analyzed by \citet{Dessart2011} have $E_{\rm K}/M_{\rm ej}$ lower than we derive here, and a high surface He fraction. So the bound on the progenitor radius derived from the comparison with these models is likely over-estimated.
Similar limits ($R\lesssim5\,{\rm R}_{\odot}$) can be derived using the predictions by Nakar \& Sari (2010; black line in their Figure 3). But this model is accurate only up to $\lesssim 11$\,hr after the explosion, since recombination is not treated.
Using $M_{\rm ej, 10vgv}=(1.5\pm0.3)$\,M$_{\odot}$ and $E_{\rm K, 10vgv}=(0.9\pm0.3)\times10^{51}$\,erg, as derived above, the tightest constraint, $ R \lesssim 0.7\,{\rm R}_{\odot}$, is obtained from the C/O model of \citet{Rabinak2011}, that accounts for the dependence of the opacity on the envelope composition. The same model, for an envelope composed of mostly He, gives us $ R \lesssim 1.3\,{\rm R}_{\odot}$. Thus, $ R \lesssim 1\,{\rm R}_{\odot}$ is a reasonable estimate \citep[considering that progenitors of type Ic SNe may contain a small fraction of He in the outer layers;][]{Georgy2009}.
Applying this same analytical model to the first clear detection of SN\,1994I (Sauer et al. 2006, Figure 8; Richmond et al. 1996, Figure 7), we get $R\lesssim 1/4\,R_{\odot}$,
considering that $M_{\rm ej,1994I}\approx M_{\rm ej, 10vgv}$, $E_{\rm K, 1994I}\approx E_{\rm K, 10vgv}$, and that the luminosity of SN\,1994I at the time of detection was $\approx 3$ times smaller than the one of PTF\,10vgv.
Our limits for PTF\,10vgv, $R\lesssim (1-5)\,{\rm R}_{\odot}$, are consistent with a small Wolf-Rayet star \citep[e.g.,][]{Crowther2007}, as expected for a highly stripped SN~Ic. Almost all Galactic WN stars with hydrogen (WNL; e.g., Hamann et al. 2006) have $R\gtrsim 5 R_{\odot}$, and all of those reported there have $R\gtrsim 2 R_{\odot}$. Our result thus favors a progenitor having no hydrogen at the surface (WNE, WC or WO, e.g., Sander et al. 2011), in agreement with the fact that Ic SNe progenitors are generally thought to be stripped of their H- (and He-) rich layers (e.g., Gal-Yam et al. 2005; Smartt 2009).
PTF\,10vgv provides the first constraint on the progenitor radius of a SN ever obtained from optical pre-explosion limits extending up to a week before discovery. Optical surveys with rapid cadence and relatively deep exposures (like PTF) should allow us to study many more objects in this manner.
\acknowledgments
We thank Boaz Katz and Eli Waxman for useful comments.
PTF is a collaboration of Caltech, LCOGT, the Weizmann Institute, LBNL,
Oxford, Columbia, IPAC, and UC Berkeley. Staff and computational resources
were provided by NERSC, supported by the DOE Office of Science. Lick
Observatory and the Kast spectrograph are operated by the University of California.
HET and its LRS are supported by UT Austin, the Pennsylvania State University,
Stanford, Ludwig-Maximilians-Universit\"{a}t M\"{u}nchen,
Georg-August-Universit\"{a}t G\"{o}ttingen, and the Instituto de Astronomia
de la Universidad Nacional Autonoma de Mexico. The EVLA
is operated by NRAO for the NSF, under cooperative agreement by
Associated Universities, Inc. We thank the staffs of the above observatories
for their assistance. A.G. and S.R.K. acknowledge support
from the BSF; A.G. further acknowledges support from the ISF,
FP7/IRG, Minerva, the Sieff Foundation, and the
German-Israeli Fund (GIF). A.V.F. and his group
at UC Berkeley acknowledge generous financial assistance from Gary
\& Cynthia Bengier, the Richard \& Rhoda Goldman Fund,
the TABASGO Foundation, and NSF grant AST-0908886.
A.C. acknowledges support from LIGO, which was constructed by Caltech
and MIT with funding from the NSF under cooperative agreement PHY-0757058, and partial support from NASA/\textit{Swift} grant NNH10ZDA001N.
\section{Introduction\label{intro}}
\begin{figure}[h]
\centering{}\includegraphics[width=0.65\textwidth]{figures/find-words-challenge-18__700}\caption{\label{fig: hidden-words}A children's puzzle where the goal is to
find six hidden words: Book, words, story, pages, read, novel. For
a machine this is far from child's play. Could this be solved by providing
a million similar examples to a deep-learning system? Does a human
need such training?}
\end{figure}
Once only known to a few outside of academia, machine-learning has
become ubiquitous in both popular media and in the industry. Superhuman
capabilities are now being gradually recorded in various fields: in
the game of GO, (\cite{silver2016mastering,silver2017mastering}),
in face verification (\cite{lu2015surpassing,qi2018face}), image
categorization (\cite{he2015delving}) and even in logical reasoning
in simple scenes (\cite{santoro2017simple,perez2017learning,perez2017film}).
Most current leading methods involve some variant of deep learning.
Consequently, they require large amounts of hand-labeled data (with
the exception of \cite{silver2017mastering}, which used self-play
to gain experience). This has ushered in a data-hungry era, with increasingly
large-scale datasets painstakingly labeled for object classification/detection/segmentation,
image annotation, visual question-answering, and pose estimation (\cite{russakovsky2015imagenet,lin2014microsoft,krishna2017visual,antol2015vqa,guler2018densepose})
to name a few. This is accompanied by a growing demand for computational
power.
We bring forward challenges in vision which do not seem to be solved
by current methods - and more importantly - by current popular methodologies,
meaning that neither additional data, nor added computational power
will be the drivers of the solution.
\subsection*{Related Work\label{sec:Related-Work}}
\textbf{Imbalanced or Small Data: }datasets tend to be naturally imbalanced,
and there is a long history of suggested remedies (\cite{lim2011transfer,zhu2014capturing,wang2017learning}).
Lack of training data has also been handled by attempting
to use web-scale data of lesser quality than hand-annotated datasets
\cite{sun2017revisiting}, or by simulating data [cite data for cars,
text recognition in the wild, captcha]. \textbf{Transfer Learning:
}reusing features of networks trained on large datasets is a useful starting
point (cf. \cite{sharif2014cnn}). \textbf{One-Shot-Learning}: attempting
to reduce the number of required training examples, in extreme cases
to one or even zero (\cite{snell2017prototypical}). \textbf{Deep-Learning
Failures}: recently, some simple cases where deep learning
fails to work as one would expect were introduced, along
with theoretical justifications (\cite{shalev2017failures}).
\section{Challenging Cases}
We present two examples and then discuss them. They have a few common
characteristics: humans are able to solve them on the first ``encounter''
- despite not having seen any such images before. Incidentally - but
not critically - the two examples are from the domain of visual text
recognition. Moreover, though humans know how to recognize text as
seen in regular textbooks, street-signs, etc, the text in these images
is either hidden, rendered, or distorted in an uncharacteristic manner.
\textbf{Children's games}: the first case is well exemplified by a
child's game, hidden word puzzles. The goal is to find hidden words
in an image. Fig. \ref{fig: hidden-words} shows an arbitrarily selected
example. For a human observer this is a solvable puzzle, though it
may take a few minutes to complete. We applied two state-of-the-art
methods for text recognition in the wild with available code (\cite{shi2017end})
or an online demo (\cite{zhou2017east}\footnote{\url{http://east.zxytim.com}})
on the image in Fig. \ref{fig: hidden-words}. As this did not work
immediately, we focused on the word ``NOVEL'' (the ``N'' is below
the forearm of the left person, ending with an ``L'' below his foot),
by cropping it and rotating it so the text is level, cropping more tightly,
and even cropping only the letter ``L''. See Table \ref{tab:crops}
for the corresponding sub-images (including the entire image at the
top row) and the results output by the two methods.
This is by no means a systematic test and some may even claim that
it isn't fair - and they would be right: these systems were not trained
on such images; \cite{shi2017end} was only trained on a photo-realistic
dataset of 8 million synthetic training images, and \cite{zhou2017east}
was only trained on tens of thousands of images from coco-text (\cite{veit2016coco}),
or used powerful pre-trained networks where training data was less
available.
\begin{table}
\begin{centering}
\begin{tabular}{ccccc}
Sub Image & \includegraphics[height=0.07\textheight]{figures/find-words-challenge-18__700} & \includegraphics[height=0.07\textheight]{figures/novel_straight} & \includegraphics[height=0.07\textheight]{figures/novel_straight_crop} & \includegraphics[height=0.07\textheight]{figures/L}\tabularnewline
\midrule
\cite{shi2017end} & ``sned'' & ``vvoz'' & ``novees'' & ``teg''\tabularnewline
\midrule
\cite{zhou2017east} & ``score'' & $\emptyset$ & $\emptyset$ & $\emptyset$\tabularnewline
\bottomrule
\end{tabular}
\par\end{centering}
\caption{\label{tab:crops}Text detected by two state-of-the-art scene-text
recognition methods applied to sub-images of a children's puzzle.
$\emptyset$ means no text was detected by the method (images scaled
to fit figure). }
\end{table}
\begin{figure}
\begin{centering}
\includegraphics[bb=0bp 15bp 1592bp 76bp,clip,width=0.75\textwidth]{figures/captcha}
\par\end{centering}
\caption{Variants of textual CAPTCHA. Captchas are becoming increasingly difficult
(reproduced from \cite{le2017using}) }
\end{figure}
\textbf{CAPTCHA}: a well-known mechanism to thwart automated misuse
of websites by distinguishing between humans and machines (\cite{von2003captcha}).
Textual captchas involve presenting an image of text which has to
be read and written by the user. We focus on this type of captcha,
though others exist (\cite{singh2014survey}). The introduction of
captchas immediately triggered the invention of new automatic ways
to break them (\cite{mori2003recognizing}), which eventually sparked
an ``arms race'' between increasingly complex captchas and correspondingly
powerful automated methods (\cite{chen2017survey}). This has led to a
state where, on one hand, the best textual captcha-solving
methods involve training DNNs on data with distortion characteristics similar
to the targeted types of captcha - though these systems still have
limited success rates (at times less than 50\%) - while on the other
hand the level of distortion has become such that humans have a hard time
solving some of them.
\section{Machines vs Humans as Supervised Learners}
One can rule out the suggested examples by saying that they are simply
out-of-sample datapoints from a statistical learner's perspective.
Yet it seems that with whatever supervision human beings receive,
they are usually able to solve them despite not being especially exposed
to this kind of stimulus. Moreover, precisely these kinds of images
are used routinely in human IQ testing, so they are a universally
accepted indicator for human performance. If these examples may seem
esoteric, we can revert to more common cases: as a child, how often
is one exposed to bounding boxes of objects? How often to delineations
of objects with precise segmentation masks? How often to pose-configurations,
facial and bodily key-points, and dense-meshes of 3D objects overlayed
on their field of view (\cite{guler2018densepose})? More critically,
for how many different object types does this happen (if any), for
how many different instances, with what level of precision of annotation,
and in how many modalities?
The granularity of visual supervision given to machines seems to be
much finer than that given to humans. As for the amount of directly
supervised data, it does not seem to really be the main limiting factor;
as already noted several times, performance either saturates with
training data (\cite{zhu2012we,zhu2016we}) or at best grows logarithmically
(\cite{sun2017revisiting,hestness2017deep}, increasing mAP from 53\%
to 58\% when growing from 10M to 300M examples), making ``more data
for better performance'' simply impractical as a solution - even for
those with the most resources. And this is for ``common'' problems,
such as object detection.
Humans who only ever read street-signs and textbooks are able to solve
captchas of various kinds without any special training on their first
encounter with them. The same is true for the ``picture puzzles''
mentioned above, as it is for other cases not mentioned here. We do
not claim that humans are not subject to supervised learning in their
early life, and in later stages. On the contrary, supervisory signals
arise from multiple sources: caretakers who provide supervisory signals
by teaching, ``internal supervision'' provided by innate biases
(\cite{ullman2012simple}) and finally rewards stemming from results
of behaviour, such as suffering pain from hitting an object. But any
such supervision is interspersed within a vast, continuous stream
of unsupervised data, most of which does not have an easily measurable
supervisory effect on the observer.
There is something fundamentally different about the way humans construct
or use internal representations, enabling them to reason about and
solve new pattern-recognition tasks. We hypothesize that these are
approached by generating procedures of a compositional nature when
presented with a novel - or known - task (as suggested by the Visual
Routines of \cite{ullman1984visual} or the Cognitive Programs of
\cite{tsotsos2014cognitive}). We intend to maintain a collection of
examples beyond the ones suggested above, to encourage the community
to attempt to solve them, not by learning from vast amounts of similar
examples, but by learning from related, simpler subtasks and learning
to reason and solve them by composing the appropriate solutions.
\bibliographystyle{IEEEtran}
\section{Introduction}
The problem of expressing the eigenvalues of a polynomial as certain functions of its coefficients is one of the oldest mathematical
problems. The question of the possibility, or impossibility, of expressing the eigenvalues of a polynomial through its coefficients by means of
radicals was exhaustively answered by E. Galois and N. H. Abel: a polynomial of degree higher than four, in general, does not admit a
presentation of its solutions via radicals \cite{History}. In spite of this rigorous mathematical theory, mathematicians continued to believe that the
eigenvalues of polynomials can be expressed analytically as certain functions of the coefficients \cite{Belard}, \cite{Lajtin}. Hermite
was the first to find a very elegant expression for the eigenvalues of the quintic equation in terms of modular functions \cite{Hermite}. The theory of elliptic
functions was originally related to the problem of finding the eigenvalues of the cubic polynomial. In fact, the Weierstrass elliptic functions
evaluated at the half-periods are equal to the eigenvalues of the cubic equation \cite{Weber}. It is clear, however, that for a search for analytical solutions of
polynomials of degree $n>5$ one needs mathematical tools beyond the elliptic functions. In this context, promising tools are the
theories of hyper-elliptic functions \cite{Mordell} and multi-complex algebras \cite{Lipatov}, \cite{Yamaleev1}.
The main purpose of the present paper is to construct evolution equations for the eigenvalues and coefficients of polynomials. We seek
evolution equations that keep the original polynomial within a certain class of polynomials. Therefore, first of all, we define a set of
invariants and classify the polynomials with respect to the obtained set of invariants. Polynomials from the same class possess
congruent sets of eigenvalues, i.e., within a given class the eigenvalues of one polynomial are obtained by simultaneous translations of the
eigenvalues of another polynomial. The algorithm for calculating the eigenvalues is based on an evolution process reducing the number of
coefficients of the initial polynomial. During the evolution, the initial polynomial is transformed into another polynomial that remains
within the given class. The evolution is directed in such a way that the final polynomial possesses one trivial solution.
The coefficients of the final polynomial are solutions of a Cauchy problem for ordinary differential equations, where the coefficients of the
initial polynomial serve as initial data. As soon as the Cauchy problem is resolved, the eigenvalues of the initial polynomial are found simply by a certain
set of translations of the eigenvalues of the final polynomial. It is shown that if the coefficients obey the equations for Weierstrass
hyper-elliptic functions, then the eigenvalues obey the equations for hyper-elliptic Jacobi functions.
The present method originated in the construction of a generalized electrodynamics of $n$-th order (see
Refs.\cite{Yamaleev2},\cite{Yamaleev3},\cite{Yamaleev4}). In this theory an important role is played by the fact that the mapping between inner- and
outer-momenta is built as a mapping between the coefficients and eigenvalues of the characteristic $n$-degree polynomial. The generalized dynamics
is based on dynamic equations of motion previously developed for the polynomials. The elements
of relativistic dynamics, closely related to the quadratic polynomial, served as a starting platform for the construction.
Besides the Introduction, the paper contains the following sections.
In Section 2, the equations of evolution for the coefficients of an $n$-degree polynomial are formulated. In Section 3, the algorithm for finding the
eigenvalues of the corresponding polynomial is built. In Section 4, some peculiarities of the cubic equation are explored. In Section 5, the
elements of relativistic dynamics based on the quadratic characteristic polynomial are presented. In Section 6, we give a sketch of
the generalized dynamics related to the $n$-degree characteristic polynomial.
\section{Evolution equations for eigenvalues and coefficients of $n$-degree polynomial }
If $F$ is a field and $q_1,...,q_n$ are algebraically independent over $F$, the polynomial
$$
p(X)=\prod_{i=1}^n(X-q_i)
$$
is referred to as the {\it generic polynomial} over $F$ of degree $n$. The polynomial equation of degree $n$ over the field $F$ is written in the form
$$
p(X):=X^{n}+\sum_{k=1}^{n-1}(-)^k(n-k+1)P_kX^{n-k}+(-)^{n}P^{2}=0, \eqno(2.1)
$$
where the coefficients $P_k\in F(q_1,...,q_n)$. In this paper we shall restrict our attention to polynomials with real coefficients and
with simple roots. Moreover, the last coefficient $P^2$ is essentially positive. The signs at the coefficients in Eq.(2.1) alternate from term
to term, which allows us to keep only positive signs in the Vi\`{e}ta formulae. The numerical factors at the coefficients are included purely for
convenience, and have no real bearing on the theory. The mapping from the set of eigenvalues onto the set of coefficients is given by the Vi\`{e}ta
formulae
$$
(a)~~nP_1=\sum_{i=1}^{n}q_i,~~(b)~~P^{2}=\prod_{i=1}^{n}q_i,~~(c)~~(n-k+1)P_k=\sum_{1\leq r_1<\ldots<r_k\leq n}\prod^k_{j=1}q_{r_j}, \eqno(2.2)
$$
here $(n-k+1)P_k$ is the {\it elementary symmetric polynomial of degree $k$} in the eigenvalues. The number of monomials in the $k$-th elementary
symmetric polynomial is equal to the binomial coefficient:
$$
C^k_n=\left( \begin{array}{c}
n\\
k
\end{array} \right)=
\frac{n!}{k!(n-k)!}. \eqno(2.3)
$$
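For illustration, the Vi\`{e}ta map (2.2) and the count (2.3) are easy to check numerically; the following short script (Python; the helper name is ours) is a minimal sketch.
\begin{verbatim}
from itertools import combinations
from math import prod, comb

def vieta(roots):
    # e_k(q) is the k-th elementary symmetric polynomial;
    # by (2.2c), (n-k+1) P_k = e_k, and P^2 = e_n.
    n = len(roots)
    e = [sum(prod(c) for c in combinations(roots, k))
         for k in range(1, n + 1)]
    P = [e[k-1]/(n-k+1) for k in range(1, n)]
    return P, e[-1]

print(vieta([1.0, 2.0, 4.0]))           # P_1 = 7/3, P_2 = 7, P^2 = 8
print([comb(3, k) for k in (1, 2, 3)])  # monomial counts C^k_n: 3, 3, 1
\end{verbatim}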
Since the roots of the generic polynomial $p(X)$ are algebraically independent, this polynomial is, in some sense, the most general polynomial
possible.
In $p(X)$ replace $X$ with $X=Y+P_1$. This transformation will eliminate the $(n-1)$-degree term, resulting in a polynomial of the form
$$
r(Y):=Y^{n}+\sum_{k=2}^{n-1}(-)^kR_kY^{n-k}+(-)^nR_0=0. \eqno(2.4)
$$
The polynomials $p(X)$ and $r(Y)$ have the same splitting field and hence the same Galois group. Let $E$ be the splitting field of $r(Y)$, let
$y_k,k=1,..,n$ be its roots in $E$ and $G=G_F(E)$ be its Galois group.
{\bf Lemma 2.1}
{\it The coefficients $R_k,~R_0$ of Eq.(2.4) are invariants with respect to simultaneous translations of the eigenvalues of Eq.(2.1)}
The {\bf Proof} of the statement follows from the formula $Y=X-P_1$, which in terms of the eigenvalues is expressed as follows
$$
y_k=q_k-\frac{1}{n}\sum_{i=1}^{n}q_i=\frac{1}{n}\sum_{i\neq k}^n (q_k-q_i). \eqno(2.5)
$$
It is seen that the eigenvalues of Eq.(2.4) are represented by differences between the eigenvalues of Eq.(2.1); hence, they are invariant with
respect to simultaneous translations. Since the coefficients $R_0,R_k,k=2,...,n-1,$ are sums of uniform monomials in $y_k,k=1,...,n$, they
share the same feature, namely, they are invariant with respect to simultaneous translations of the eigenvalues of Eq.(2.1), too.
{\bf End of Proof.}
The polynomial $r(Y)$ we denominate as {\it invariant polynomial} (with respect to translations of the roots of (2.1)).
The main task of this section is to introduce evolution equations for the coefficients of Eq.(2.1) which leave the coefficients $R_k$ of
Eq.(2.4) invariant. This result is given by the following
{\bf Theorem 2.2}
{\it Let $q_{k}, k=1,...,n$ be the set of eigenvalues of the polynomial equation of $n$-th degree (2.1). Let the differentials of all eigenvalues be
equal to each other
$$
dq_1=dq_2=\ldots=dq_k=\ldots=dq_n.\eqno(2.6)
$$
Then the differentials of the coefficients satisfy the following system of equations:}
$$
2P_{n-1}dP_1=dP^2,~\eqno(2.7)
$$
$$
dP_{n-k}=(k+2)P_{n-k-1}~dP_1,~~k=1,...,n-2; \eqno(2.8)
$$
{\bf Proof.}
Notice that from (2.2a) it follows that
$$
dq_k=dP_1,~k=1,...,n.
$$
Coefficients of the polynomial (2.1) are symmetric forms of the eigenvalues, where the $k$-th coefficient $(n-k+1)P_k$ consists of $C^{k}_n$
monomials of degree $k$. Thus, the derivative of this coefficient, $(n-k+1)dP_k$, contains $kC^{k}_n$ monomials and, since the derivatives of all
eigenvalues are equal to each other, the derivative of $(n-k+1)P_k$ is proportional to $dP_1$, i.e., equals $\lambda\, dP_1$, where the coefficient of
proportionality $\lambda$ consists of $kC^{k}_n$ symmetric monomials of degree $(k-1)$. On the other hand, a symmetric polynomial of degree $(k-1)$ can be
expressed only by $C^{k-1}_n$ symmetric monomials of degree $(k-1)$. This means the expression for $\lambda$ consists of
$kC^{k}_n/C^{k-1}_n=n-k+1$ copies of the same symmetric polynomial of degree $(k-1)$, which is equal to the $(k-1)$-th coefficient $(n-k+2)P_{k-1}$. The
result is expressed as follows
$$
(n-k+1)dP_k=(n-k+1)(n-k+2)P_{k-1}~dP_1,~~k=2,3,...,n-1.
$$
Differentiation of the last coefficient gives
$$
dP^2=2P_{n-1}dP_1, \eqno(2.9)
$$
which completes the system of differential equations for the coefficients of Eq.(2.1).
{\bf End of Proof.}
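For illustration, the statement of the theorem is easy to verify numerically; the following short script (Python; ours, not part of the original argument) shifts all eigenvalues by a common increment $h$, so that $dq_k=dP_1=h$, and compares finite differences of the coefficients with the right-hand sides of (2.7)-(2.8).
\begin{verbatim}
from itertools import combinations
from math import prod

def coeffs(q):  # [P_1, ..., P_{n-1}, P^2] of Eq.(2.1)
    n = len(q)
    e = [sum(prod(c) for c in combinations(q, k)) for k in range(1, n+1)]
    return [e[k-1]/(n-k+1) for k in range(1, n)] + [e[-1]]

q, h = [1.0, 2.0, 4.0, 7.0], 1e-6   # n = 4, arbitrary simple roots
c0 = coeffs(q)
c1 = coeffs([qi + h for qi in q])
d = [(a - b)/h for a, b in zip(c1, c0)]
print(d[1], 4*c0[0])   # dP_2/dP_1 = 4 P_1   (Eq. 2.8, k = n-2)
print(d[2], 3*c0[1])   # dP_3/dP_1 = 3 P_2   (Eq. 2.8, k = 1)
print(d[3], 2*c0[2])   # dP^2/dP_1 = 2 P_3   (Eq. 2.7)
\end{verbatim}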
The following Lemma demonstrates the important role of the invariant polynomial in the evolution process.
{\bf Lemma 2.3}
{\it The first integrals of evolution equations (2.8) are given by coefficients of invariant polynomial (2.4).}
{\bf Proof}
In fact, the roots of Eq.(2.4), $y_k,~k=1,...,n$, according to formulae (2.5), are first integrals of the evolution equations for the eigenvalues
(2.6). Equations (2.7)-(2.9) are consequences of Eqs.(2.6); hence the coefficients $R_k$, as algebraic functions of the solutions of the evolution
equations, are first integrals of Eqs.(2.7)-(2.9).
{\bf End of Proof}
Conversely, the use of the formula $Y=X-P_1$ in Eq.(2.4) transforms the invariant equation back into equation (2.1). Substitute $X-P_1$ instead of $Y$,
gather together the powers of $X$, and compare the coefficients of the obtained polynomial with the coefficients of Eq.(2.1). We shall see that the
coefficients $P_k,~k=1,...,n-1$ are now expressed as $k$-degree polynomials of $P_1$ with coefficients consisting of $R_k,~k=2,...,n-1$.
Notice especially that the invariant $R_0$ is defined by an $n$-degree polynomial of $P_1$ with coefficients built from the $R_k$. The first task is to
find an explicit form of this polynomial. Its general form can be presented as follows
$$
P_1^{n}+\sum_{k=2}^{n-1}P_1^{n-k}f_k(R_2,...,R_k)+R_0=P^2.
$$
Now the task is to find an explicit form of the functions $f_k$. With that purpose, explore first the case $P^2=0$. In this case one of the
solutions of Eq.(2.1) is equal to zero. The corresponding solution of the invariant polynomial is then
$$
y_1=-P_1(P^2=0)=-P_1(0).
$$
Hence $(-P_1(0))$ will satisfy (2.4). By replacing $Y$ with $(-P_1(0))$ in Eq.(2.4) we come to the following equation for $P_1$:
$$
P_1^{n}(0)+\sum_{k=2}^{n-1}R_k P_1^{n-k}(0)+R_0=0. \eqno(2.10)
$$
Notice that here the signs at all coefficients are positive. Secondly, suppose that we have changed the coefficients of Eq.(2.1) from the set
$\{~P_k,~P^2\neq 0~\}$ to the set $\{~\widetilde{P}_k,~P^2=0~\}$ obeying the evolution equations (2.8). This evolution provides the
polynomial with $P^2=0$ with the same invariants as the original one. Hence, when $P^2\neq 0$, the coefficients of Eq.(2.10) do not change,
but now this polynomial is equal to $P^2$:
$$
P_1^{n}+\sum_{k=2}^{n-1}R_k P_1^{n-k}+R_0=P^2. \eqno(2.11)
$$
Notice that the situation is somewhat similar to the conventional {\it Classical Invariant Theory of Polynomials} \cite{Olver}. While classical invariant
theory is a study of properties of a polynomial $p(x)$ that are unchanged under fractional linear transformations, within the framework of the
present approach we study properties of polynomials under translational transformations.
{\bf Theorem 2.4}
{\it Let coefficients of Eq.(2.1) obey evolution equations (2.7)-(2.9). Then }
$$
(k+1)P_{n-k}=\frac{1}{k!}\frac{d^k}{{dP_1}^k}P^2.\eqno(2.12)
$$
{\bf Proof}.
Differentiate equation (2.11), taking into account that $R_0$ and $R_k,k=2,...,n-1$ are constants. We get
$$
dP_1~(~nP_1^{n-1}+\sum_{k=2}^{n-2}(n-k)R_k P_1^{n-k-1}+R_{n-1}~)=dP^2. \eqno(2.13)
$$
Compare this equation with equation (2.9). It is seen that the expression inside the brackets at $dP_1$ in (2.13) is nothing else than the coefficient
$2P_{n-1}$ expressed as a polynomial of $P_1$ with invariant coefficients:
$$
2P_{n-1}=nP_1^{n-1}+\sum_{k=2}^{n-2}(n-k)R_k P_1^{n-k-1}+R_{n-1}. \eqno(2.14)
$$
Hence,
$$
2P_{n-1}=\frac{d}{{dP_1}}P^2.
$$
Next, differentiating (2.14) with respect to $P_1$, we obtain
$$
dP_{n-1}=3P_{n-2}~dP_1,\eqno(2.15)
$$
where $3P_{n-2}$ is in fact the next coefficient of Eq.(2.1):
$$
3P_{n-2}=~\frac{n(n-1)}{2}P_1^{n-2}+\sum_{k=2}^{n-3}\frac{(n-k)(n-k-1)}{2}R_k P_1^{n-k-2}+R_{n-2}.\eqno(2.16)
$$
Hence,
$$
3P_{n-2}=\frac{1}{2!}\frac{d^2}{{dP_1}^2}P^2.
$$
At the next step we shall obtain
$$
dP_{n-2}=4P_{n-3}~dP_1,\eqno(2.17)
$$
where the expression at the differential $dP_1$ is denoted by $4P_{n-3}$, because this expression is indeed the $(n-3)$-th coefficient of Eq.(2.1):
$$
4P_{n-3}=~\frac{n(n-1)(n-2)}{1\cdot 2\cdot 3}P_1^{n-3}+\sum_{k=2}^{n-4}\frac{(n-k)(n-k-1)(n-k-2)}{1\cdot 2\cdot 3}R_k P_1^{n-k-3}+R_{n-3}.
\eqno(2.18)
$$
Hence,
$$
4P_{n-3}=\frac{1}{3!}\frac{d^3}{{dP_1}^3}P^2.
$$
From these formulae one may easily establish by induction that the general formula for the $l$-th coefficient $P_{n-l}$ is
$$
(l+1)P_{n-l}= \left( \begin{array}{c}
n\\
l
\end{array} \right)
P_1^{n-l}+\sum_{k=2}^{n-l-1} \left( \begin{array}{c}
n-k\\
l
\end{array} \right)
R_k P_1^{n-k-l}+R_{n-l}=\frac{1}{l!}\frac{d^l}{{dP_1}^l}P^2. \eqno(2.19)
$$
{\bf End of Proof}.
This theorem has some interesting consequences.
{\bf Corollary 2.5}
{\it The following representation for polynomial $p(X)$ holds true}
$$
\exp(-X\frac{d}{dP_1})P^2=0. \eqno(2.20)
$$
{\bf Proof}
The Euler operator, the generator of translations, is
represented by the following expansion
$$
\exp(-X\frac{d}{dP_1})= 1-X \frac{d}{d P_1}+\frac{X^2}{2!}\frac{d^2}{d P_1^2}-\ldots+ \frac{(-X)^n}{n!}\frac{d^n}{d P_1^n}+\ldots~. \eqno(2.21)
$$
By differentiating Eq.(2.11) $n$ times we get
$$
\frac{d^n}{d P_1^n}P^2=n!. \eqno(2.22)
$$
Hence the sum in (2.21) terminates with this term. Thus,
$$
\exp(-X\frac{d}{dP_1})P^2=p(X). \eqno(2.23)
$$
Here the variable $X$ denotes one of the roots of Eq.(2.1); as such, let us take $q_n$.
The derivative with respect to $P_1$ due to (2.6) can be expressed by the following sum
$$
\frac{d}{dP_1}=\sum_{k=1}^n\frac{\partial q_k}{\partial P_1}\frac{\partial}{\partial q_k}= \sum_{k=1}^n\frac{\partial}{\partial q_k}.
\eqno(2.24)
$$
Then,
$$
\exp(-q_n\frac{d}{dP_1})P^2=\prod_{k=1}^{n-1}\exp(-q_n\frac{\partial}{\partial q_k})\exp(-q_n\frac{\partial}{\partial q_n})P^2.\eqno(2.25)
$$
Now, take into account the Vi\`{e}ta formula for $P^2$ given by (2.2). The Euler operator in (2.25) translates each root by $q_n$. The last
operator in (2.25) acts only upon the $n$-th root, resulting in $q_n-q_n=0$. Hence,
$$
\exp(-X\frac{d}{dP_1})P^2=p(X)=0.
$$
{\bf End of Proof}.
\section{ Algorithm of finding the eigenvalues of $n$-degree polynomials}
The main idea of the present algorithm is to reduce the problem of solving an
$n$-degree polynomial equation to the problem of solving an $(n-1)$-degree polynomial equation. The evolution from the $n$-degree polynomial to the
$(n-1)$-degree polynomial is performed in such a way that the coefficients $R_k$ remain invariant. Hence, the initial and final polynomials of this
evolution possess congruent sets of eigenvalues, so that the solutions of the former can be obtained from the solutions of the latter
simply by means of translations.
One way to reduce the degree of the polynomial is
by letting $P^2$ tend to zero. For that purpose we must use $P^2$ as
the evolution parameter of the evolution equations (2.8), with the final
goal of finding the coefficients of Eq.(2.1) at the point $P^2=0$.
This evolution leaves the coefficients of Eq.(2.4) invariant;
hence, from the solutions of the polynomial with $P^2=0$ we may come to
the solutions of the original equation simply by translating
the set of solutions.
Denote $x=P^2$. Re-write Eq.(2.8) with respect to $x$. We get
$$
2P_{n-1}{dP_1}=dx,
$$
$$
dP_{n-k}=(k+2)P_{n-k-1}dP_{1},~~k=1,...,n-3; \eqno(3.1)
$$
$$
dP_2=nP_1~dP_{1}.
$$
This is a well-known {\it Cauchy problem} with initial data $P_k(x=P^2),~~k=1,2,3,...,n-1$. The variable $x$ runs from $x=P^2$ to $x=0$. These
equations are usually solved by using the celebrated {\it Cauchy-Lipschitz} method of calculation \cite{Cauchy}. This procedure is carried out
by dividing the interval $(x_0,0)$ into $N$ parts:
$$
\Delta x_0=x_1-x_0,~\Delta x_i=x_{i+1}-x_i,~\Delta x_{N-1}=x_N-x_{N-1},\eqno(3.2)
$$
where $x_{i+1}<x_{i},~x_N=0.$
In this way the continuous evolution process is transformed into a discrete process consisting of $N$ steps. At the last step
we come to an $n$-degree polynomial free of the last coefficient:
$$
{\stackrel{(1)}{X^n}}+ \sum_{k=1}^{n-1}(-)^k(n-k+1){\stackrel{(1)}{P_k}} {\stackrel{(1)}{X^{n-k}}}=0.\eqno(3.4)
$$
One of the solutions is trivial, excluding this solution we come to the polynomial of $(n-1)$-degree:
$$
{\stackrel{(1)}{X^{n-1}}}+ \sum_{k=1}^{n-2}(-)^k(n-k+1){\stackrel{(1)}{P_k}} {\stackrel{(1)}{X^{n-1-k}}}+(-)^{n-1}2{\stackrel{(1)}{P_{n-1}}}=0. \eqno(3.5)
$$
Let us mention that Eq.(3.4) possesses the same invariants as the original one, i.e., Eq.(2.1). If one uses numerical methods of solution,
then the relationships for the invariants are satisfied within the given accuracy of the calculations. Suppose that the solutions ${\stackrel{(1)}{q}}_k,~k=1,...,n-1$
of the $(n-1)$-degree equation (3.5) are known. Complete this set of solutions by ${\stackrel{(1)}{q}}_n=0.$ Then the solutions of the
original equation may be found simply by the following translations
$$
q_k={\stackrel{(1)}{q_k}}+P_1-{\stackrel{(1)}{P_1}},~k=1,...,n.\eqno(3.6)
$$
If the solutions of the $(n-1)$-degree equation are still unknown, then one may apply this algorithm again in order to reduce the
problem of solving the $(n-1)$-degree equation to the problem of solving an $(n-2)$-degree polynomial equation. This process can be continued down
to a quadratic or linear equation. At each step of the iteration one finds information on the coefficient ${\stackrel{(r)}{P}}_1$. At the $r$-th
iteration one deals with an $(n-r)$-degree equation of the form
$$
{\stackrel{(r)}{X^{n-r}}}+ \sum_{k=1}^{n-r-1}(-)^k(n-k+1){\stackrel{(r)}{P_k}} {\stackrel{(r)}{X^{n-r-k}}}+(-)^{n-r}(r+1){\stackrel{(r)}{P_{n-r}}}=0.
\eqno(3.7)
$$
At this stage the norm of the coefficient ${\stackrel{(r)}{P}}_{n-r}$ fulfils the role of the evolution parameter of the next evolution process.
As soon as the solutions of the lowest-degree polynomial are found, the inverse process of iterations consists only of translations of the known set
of eigenvalues with known translation parameters. Let the last step of the iteration be a linear equation, from which we find only one solution,
$$
{\stackrel{(n-1)}{q_1}}=n{\stackrel{(n-1)}{P_1}}.
$$
Then the solutions of the original equation (2.1) are found as a result of the following set of translations
$$
q_1=n{\stackrel{(n-1)}{P_1}}+n\sum_{r=2}^n\frac{1}{r}(~{\stackrel{(n-r)}{P_1}}-{\stackrel{(n-r+1)}{P_1}}~),\eqno(3.8a)
$$
$$
q_s=\sum_{r=s}^n\frac{1}{r}(~{\stackrel{(n-r)}{P_1}}-{\stackrel{(n-r+1)}{P_1}}~),~s=2,3,...,n;\eqno(3.8b)
$$
where ${\stackrel{(0)}{P_1}}=P_1$.
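To make the procedure concrete, the following short sketch (Python; ours, for illustration only) carries out the scheme for a cubic: the system (3.1) is integrated by simple Euler (Cauchy-Lipschitz) steps from $x=P^2$ down to $x=0$, the trivial root is dropped, and the remaining roots are translated back via Eq.(3.6).
\begin{verbatim}
import numpy as np

def cubic_by_evolution(P1, P2c, Psq, N=20000):
    # cubic: X^3 - 3 P_1 X^2 + 2 P_2 X - P^2 = 0
    # system (3.1) for n = 3: dP_1 = dx/(2 P_2), dP_2 = 3 P_1 dP_1
    p1, p2 = P1, P2c
    dx = -Psq / N
    for _ in range(N):
        dp1 = dx / (2.0 * p2)
        p2 += 3.0 * p1 * dp1
        p1 += dp1
    # at x = 0: X (X^2 - 3 p1 X + 2 p2) = 0, one trivial root
    disc = np.sqrt((3*p1)**2 - 8*p2)
    reduced = [0.0, (3*p1 - disc)/2, (3*p1 + disc)/2]
    # Eq.(3.6): translate back by P_1 - P_1^{(1)}
    return sorted(r + (P1 - p1) for r in reduced)

# roots 1, 2, 4  =>  3 P_1 = 7, 2 P_2 = 14, P^2 = 8
print(cubic_by_evolution(P1=7/3, P2c=7.0, Psq=8.0))  # ~ [1.0, 2.0, 4.0]
\end{verbatim}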
\section{ Relativistic Lorentz-force equations and related quadratic polynomial }
The interrelation between the evolution of the eigenvalues and the coefficients of the polynomials on one side, and the dynamic equations for physical systems on the other, can
be observed already at the level of the quadratic polynomial. The aim of this section is to demonstrate how the quadratic polynomial is related to the
relativistic Lorentz-force equations. In the sequel, we shall use this example as a starting platform to pass to the case of higher degree
polynomials and the related generalized dynamics. Let us start with the generic quadratic polynomial
$$
X^2-2P_0~X+P^2=0, \eqno(4.1)
$$
with real coefficients $P_0^2\geq P^2$, and real eigenvalues $p^2_1,p^2_2$. This polynomial is closely related to relativistic dynamics
\cite{Yamaleev2}, \cite{Gal}. In order to demonstrate this fact let us start from the Lorentz-force equations for a charged particle inside external
electromagnetic fields ${{\mathbf E}},~{{\mathbf B}}$ written with respect to proper time $\tau$ {\footnote{
Hereafter for the sake of simplicity we omit all parameters like charge, mass, light-velocity and other parameters regularizing physical
dimensions taking them equal to unit.}}
$$
\frac{d{{\mathbf P}}}{d\tau}=({{\mathbf E}}~P_0+[{{\mathbf P}}\times {{\mathbf B}}]),~ \frac{dP_0}{d\tau}=({{\mathbf E}} \cdot {{\mathbf P}}), \eqno(4.2)
$$
$$
\frac{d{{\mathbf r}}}{d\tau}={{\mathbf P}},~~\frac{dt}{d\tau}={P_0}. \eqno(4.3)
$$
Consider the projection of equations (4.2) onto the direction of motion defined by the unit vector ${{\mathrm n}}={{\mathbf P}} /P$:
$$
\frac{d P}{d\tau}=({{\mathbf E}}~\cdot {{\mathrm n}})~P_0,~ \frac{dP_0}{d\tau}=({{\mathbf E}} \cdot {{\mathrm n}})~P, \eqno(4.4)
$$
Then one deals only with the lengths of the momenta $P_0,~P$. Simplify Eqs.(4.4) by introducing a new evolution parameter:
$$
\frac{dP}{ds}=P_0,~~\frac{dP_0}{ds}=P, \eqno(4.5)
$$
where
$$
\frac{ds}{d\tau}=({{\mathbf E}}\cdot {{\mathrm n}}). \eqno(4.6)
$$
The first constant of motion is easily found
$$
P_0^2-P^2=M^2. \eqno(4.7)
$$
Here $M^2$ is a constant of motion; conventionally, it is interpreted as the {\it square of the inertial mass} \cite{Adan}.
Inside a stationary potential field, when ${{\mathbf E}}=-{\nabla}{V(r)}$, the dynamic equations imply another integral of motion, the energy,
$$
{{\cal E}}_0=P_0+V(r). \eqno(4.8)
$$
In the rest state, where $P=0$, one obtains $P_0(P=0)=M$. Relativistic mechanics deals with two kinds of energy, namely,
$$
p^2_1=P_0-M,~~p^2_2=P_0+M.\eqno(4.9)
$$
From formulae (4.7) and (4.9) it follows
$$
P=p_1p_2,~P_0=\frac{1}{2}(p_2^{2}+p_1^{2}),~ M=\frac{1}{2}(p_2^{2}-p_1^{2}). \eqno(4.10)
$$
Notice that the first two formulae of (4.10) are the Vi\`{e}ta formulae for the quadratic polynomial equation (4.1). The substitution $X=Y+P_0$ in (4.1) leads
to the {\it invariant equation}
$$
Y^2=P_0^2-P^2=M^2,\eqno(4.11)
$$
the invariant coefficient of which is equal to the invariant of the physical motion. The quadratic equation for $P_0$ is given by
$$
P_0^2-M^2=P^2.\eqno(4.12)
$$
Differentiating this equation we derive the evolution equations for the coefficients, which in the case of the quadratic polynomial is, of course, a
simple task:
$$
2P_0~P=\frac{d}{ds}P^2,~~~\frac{d}{ds}P_0=P.\eqno(4.13)
$$
Solutions of these evolution equations are given by hyperbolic cosine-sine functions
$$
P=M\,\sinh(s),~~P_0=M\,\cosh(s). \eqno(4.14)
$$
Then the square roots of the eigenvalues of Eq.(4.1) are expressed by hyperbolic sine-cosine functions of half the argument
$$
p_1=\sqrt{2M}\,\sinh(\frac{s}{2}),~~p_2=\sqrt{2M}\,\cosh(\frac{s}{2}). \eqno(4.15)
$$
Now, consider the so-called {\it effective potential representation}. For this purpose let us come back to the equations written with respect
to the proper time $\tau$. Then from the second equation of (4.13) it follows that
$$
{{\cal E}}_0=P_0+V(r).
$$
Further, replace $P_0$ by ${{\cal E}}_0-V(r)$ in the first of equation of (4.13), this gives
$$
\frac{d}{d\tau}{{\mathbf P}}=-\nabla V(r)~({{\cal E}}_0-V(r))=-\nabla W(r,{{\cal E}}_0),\eqno(4.16)
$$
where the effective potential is defined by
$$
W(r,{{\cal E}}_0)={{\cal E}}_0V(r)-\frac{1}{2}V^2(r).\eqno(4.17)
$$
Here the relativistic Lorentz-force equation is written in the form of a Newtonian equation with the {\it effective potential} $W$. From this
equation follows the Newtonian form of the energy
$$
{{\cal E}} =\frac{1}{2}P^2+W(r,{{\cal E}}_0)=\frac{1}{2}({{\cal E}}_0^2-M^2).\eqno(4.18)
$$
\section{Cubic polynomial equation and related dynamics}
In the previous section we have demonstrated how relativistic dynamics is related to the evolution of the quadratic polynomial. This example
provides us with an appropriate tool in order to construct a generalized scheme based on the evolution of polynomials of higher order. In this
section we explore the case of the cubic polynomial
$$ p(X)=X^{3}-3P_1~X^{2}+2P_2~X-P^{2}=0,\eqno(5.1)
$$
the relations between the coefficients and the eigenvalues are given by
$$
3P_1=q_1+q_2+q_3,~ 2P_2=q_1q_2+q_2q_3+q_3q_1,~P^{2}=q_1q_2q_3.\eqno(5.2)
$$
We shall restrict ourselves to the case when the coefficients are real numbers, and assume that $p(X)$ is irreducible.
By replacing $X$ with $X=Y+P_1$ we come to the invariant polynomial
$$
Y^{3}+R_2~Y-R_{0}=0,\eqno(5.3)
$$
where
$$
~(a)~~R_2=2P_2-3P_1^{2},~~~(b)~~P_1^{3}+R_2~P_1+R_0=P^{2}.\eqno(5.4)
$$
The eigenvalues and the coefficients of this polynomial are invariant with respect to simultaneous translations of the eigenvalues of Eq.
(5.1). Obviously this statement is a consequence of the formula $ Y=X-P_1$ from which it follows that
$$
3y_1=e_2-e_3,~3y_2=e_3-e_1,~3y_3=e_1-e_2,~~\eqno(5.5)
$$
where
$$
e_1=q_3-q_2,e_2=q_1-q_3,e_3=q_2-q_1.
$$
The evolution equations that keep the coefficients $R_0,R_2$ constant are obtained directly from Eqs.(5.4). Differentiating these equations we get
$$
dP_1~(3P^2_1+R_2)=2P_2~dP_1=dP^2,\eqno(5.6)
$$
$$
dP_2=3P_1\,dP_1.
$$
Evolution equations for the eigenvalues, obviously, have to have the following form
$$
\frac{d}{ds}q_k=A,~~k=1,2,3,
$$
where $A$ is some function, the same for all eigenvalues. In order to construct a dynamics we take $A=P$. Then
$$
\frac{d}{ds}q_k=\frac{d}{ds}P_1=P.\eqno(5.7)
$$
The cubic polynomial is an object of special interest, because it is closely related to the classical elliptic functions. The
solutions of the evolution equations for the eigenvalues and the coefficients of the cubic polynomial can be represented via elliptic Jacobi and
Weierstrass functions, correspondingly \cite{Akhiezer}. On making use of (5.7), from (5.4b) we come to the following differential equation
$$
P_1^{3}+R_2~P_1+R_0= (\frac{dP_1}{ds})^2 .\eqno(5.8)
$$
Write this equation in the following notation
$$
(\frac{d\wp}{dz})^2=4\wp(z)^3-g_2\wp(z)-g_3,\eqno(5.9)
$$
where $z=4s$, $g_2=-4R_2,~~g_3=-4R_0$ and $\wp(2s)=P_1(s)$. The
integral formula for $\wp(z)$ is given by
$$
z=\int^{\infty}_{\wp}({4x^3-g_2 x-g_3})^{-1/2}dx.\eqno(5.10)
$$
The functions $\wp(z)$ and $\wp'(z)$ are Weierstrass elliptic functions with periods $2\omega_1,~~2\omega_2$. Define
$\omega_3=-\omega_1-\omega_2$. Then the values
$$
\wp(\omega_1),~\wp(\omega_2),~\wp(\omega_3)$$ are the roots of
the cubic equation
$$
4x^3-g_2x-g_3=0.\eqno(5.11)
$$
Introduce new variables $p_k,~k=1,2,3$, where $p^2_k=q_k$. Then $P=p_1p_2p_3$. For the $p_k$ the evolution equations are derived from (5.7):
$$
\frac{dp_1}{ds}=p_2p_3,~~\frac{dp_2}{ds}=p_1p_3,~~\frac{dp_3}{ds}=p_2p_1.\eqno(5.12)
$$
Solutions of these equations are presented by quotients of Jacobi elliptic functions. Let $sn(u),cn(u),dn(u)$ be the set of Jacobi elliptic
functions. Define the following quotients of these functions (in Glaisher notation \cite{Akhiezer})
$$
ns=-\frac{1}{sn},~~cs=-\frac{cn}{sn},~~ds=-\frac{dn}{sn},
$$
which obey the following differential equations
$$
\frac{d}{du}ns(u)=cs(u)ds(u), \frac{d}{du}cs(u)=ns(u)ds(u),\frac{d}{du}ds(u)=cs(u)ns(u),\eqno(5.13)
$$
with
$$
ns^2-cs^2=1,~~ds^2-cs^2=1-{{\mathrm k}}.
$$
From Eqs.(5.12) it follows that the values
$$
e_1=q_3-q_2,~e_2=q_1-q_3,
$$
are constants of motion. Hence the solutions of Eqs.(5.12) are presented via Jacobi elliptic functions as follows
$$
q_3={e_1}ns^2(u,{{\mathrm k}} ),~q_2={e_1}cs^2(u,{{\mathrm k}}),~q_1={e_1}ds^2(u,{{\mathrm k}}),~~{{\mathrm k}}=1-\frac{e_2}{e_1}.\eqno(5.14)
$$
Let us represent the final result by the following\\
{\bf Statement}:
{\it If the square roots of the eigenvalues of the cubic equation obey the equations for Jacobi elliptic functions, then the evolution of the
coefficients is governed by the equations for Weierstrass elliptic functions.}
For the cubic equation there exists also another possibility to express its eigenvalues, namely via trigonometric functions. To the formulae given
below we come by virtue of formulae (3.25) derived in Ref.\cite{Lipatov}.
The eigenvalues of the reduced polynomial are given by the formulae:
$$
y_1=-\frac{2}{3}\sqrt{3R_1}\cos\theta,~y_2=\frac{1}{3}\sqrt{3R_1}(\cos\theta+\sqrt{3}\sin\theta),~
y_3=\frac{1}{3}\sqrt{3R_1}(\cos\theta-\sqrt{3}\sin\theta). \eqno(5.15)
$$
It is easy to verify that
$$
y_1+y_2+y_3=0,~~y_1y_2+y_2y_3+y_3y_1=-R_1.
$$
The equation for the last coefficient, $R_0$, leads to a trigonometric equation for $\cos\theta$:
$$
-R_0=y_1y_2y_3=-\frac{2}{27}\left(\sqrt{3R_1}\right)^{3}\cos\theta\,(\cos^2\theta-3\sin^2\theta),
$$
where
$$
\cos\theta\,(\cos^2\theta-3\sin^2\theta)=4\cos^3\theta-3\cos\theta=\frac{27}{2}\frac{R_0}{(\sqrt{3R_1})^{3}}.
$$
By using the trigonometric formula
$$
\cos(3\theta)=4\cos^3\theta-3\cos\theta
$$
we come to a simple trigonometric equation:
$$
\cos(3\theta)=\frac{3\sqrt{3}}{2}\frac{R_0}{\sqrt{R_1^3}}. \eqno(5.16)
$$
Since we restrict ourselves to real solutions, the following inequality has to hold
$$
(\frac{R_0}{2})^2< (\frac{R_1}{3})^3. \eqno(5.17)
$$
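As a quick numerical check (a Python sketch; the function name is ours), formulae (5.15)-(5.16) can be evaluated directly and tested against the Vi\`{e}ta relations quoted above:
\begin{verbatim}
import numpy as np

def reduced_cubic_roots_trig(R1, R0):
    """Eqs.(5.15)-(5.16); valid for three real roots, cf. (5.17)."""
    assert (R0 / 2.0) ** 2 < (R1 / 3.0) ** 3, "inequality (5.17) violated"
    theta = np.arccos(1.5 * np.sqrt(3.0) * R0 / np.sqrt(R1 ** 3)) / 3.0
    r = np.sqrt(3.0 * R1) / 3.0
    y1 = -2.0 * r * np.cos(theta)
    y2 = r * (np.cos(theta) + np.sqrt(3.0) * np.sin(theta))
    y3 = r * (np.cos(theta) - np.sqrt(3.0) * np.sin(theta))
    return y1, y2, y3

y1, y2, y3 = reduced_cubic_roots_trig(R1=3.0, R0=1.0)
print(y1 + y2 + y3, y1*y2 + y2*y3 + y3*y1, y1*y2*y3)  # ~ 0, -R1, -R0
\end{verbatim}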
The algorithm elaborated in the previous section is simplified in the case of the cubic equation (5.1) as follows. With respect to $x=P^2$ as
evolution parameter, the evolution equations (5.6)-(5.7) take the form
$$
2P_2\,{dP_1}=dx,~~ 2P_2\,{dP_2}={3P_1}\,dx, \eqno(5.18)
$$
with the initial data $P_1(x=P^{2})=P_1,~P_2(x=P^{2})=P_2$. At the final stage, when $x=P^2=0$, one finds
$P_1(P^2=0)={\stackrel{(1)}{P_1}},~P_2(P^2=0)={\stackrel{(1)}{P_2}}$ satisfying the relationships
$$
R_2=2{\stackrel{(1)}{P_2}}-3{\stackrel{(1)}{P_1^{2}}},~~{\stackrel{(1)}{P_1^{3}}}+R_2~{\stackrel{(1)}{P_1}}+R_0=0.
$$
Thus the new equation possesses the same invariants as the original one,
$$
X^3(0)-3{\stackrel{(1)}{P_1}}X^2(0)+2{\stackrel{(1)}{P_2}}X(0)=0. \eqno(5.19)
$$
Equations (5.1) and (5.19) possess congruent eigenvalues; however, Eq.(5.19) has one trivial root. Therefore the problem is reduced to the
solution of the quadratic equation
$$
X^2(0)-3{\stackrel{(1)}{P_1}}X(0)+2{\stackrel{(1)}{P_2}}=0. \eqno(5.20)
$$
From the solutions of Eq.(5.20) one comes to the solutions of Eq.(5.1) simply by the set of translations:
$$
q_1=P_1-{\stackrel{(1)}{P_1}},~~q_2=q_2(0)+P_1-{\stackrel{(1)}{P_1}},~~q_3=q_3(0)+P_1-{\stackrel{(1)}{P_1}}.
$$
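The complete reduction for the cubic can be assembled into a short numerical sketch (Python; the names and the number of Euler steps are our choices): evolve $(P_1,P_2)$ by Eq.(5.18) until the free term vanishes, solve the quadratic (5.20), and translate back. For the test polynomial with roots $1,2,3$ one has $P_1=2$, $P_2=11/2$, $P^2=6$.
\begin{verbatim}
import numpy as np

def cubic_roots_by_evolution(P1, P2, Psq, N=200_000):
    """Roots of X^3 - 3 P1 X^2 + 2 P2 X - Psq = 0, Eq.(5.1); assumes
    three real roots and P2 != 0 along the evolution path."""
    p1, p2 = float(P1), float(P2)
    dx = -float(Psq) / N
    for _ in range(N):              # Euler steps of Eq.(5.18)
        dp1 = dx / (2.0 * p2)       # 2 P2 dP1 = dx
        p2 += 3.0 * p1 * dp1        # 2 P2 dP2 = 3 P1 dx
        p1 += dp1
    # Eq.(5.19) has the trivial root; the other two solve Eq.(5.20)
    disc = np.sqrt(9.0 * p1 ** 2 - 8.0 * p2)
    q0 = np.array([0.0, (3.0 * p1 - disc) / 2.0, (3.0 * p1 + disc) / 2.0])
    return q0 + (P1 - p1)           # translate back to the roots of (5.1)

print(cubic_roots_by_evolution(2.0, 5.5, 6.0))  # ~ [1. 2. 3.]
\end{verbatim}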
The dynamics related to the cubic polynomial is evidently characterized by the two first constants of motion $R_2,~R_0$, which do not depend on the
potential field. As evolution equations we have to use Eqs.(5.6)-(5.7) written with respect to the evolution parameter defined in
(4.6):
$$
\frac{d}{ds}P=P_2,~~~\frac{d}{ds}P_2=3P_1P,~~\frac{d}{ds}P_1=P.\eqno(5.21)
$$
Evolution equations for the eigenvalues, correspondingly, are given by
$$
\frac{d}{ds}p^2_k=P,~~k=1,2,3.\eqno(5.22)
$$
From these equations it follows that the constants of motion are given by formulae
$$
M_1=p^2_2-p^2_3,~M_2=p^2_3-p^2_1,~M_3=p^2_1-p^2_2.\eqno(5.23)
$$
In order to include the potential field $V(r)$ in this scheme we have to use equation (4.6):
$$
\frac{ds}{d\tau}=({{\mathrm U}}\cdot {{\mathrm n}}),~~{{\mathrm U}}=\nabla V(r),~~P{{\mathrm n}}={{\mathbf P}}.
\eqno(5.24)
$$
Furthermore, the set of evolution equations has to be completed with the interrelation between the momentum and the velocity with respect to the time-like
parameter:
$$
{{\mathbf P}}=\frac{d{{\mathbf r}}}{d\tau}.\eqno(5.25)
$$
In this notation the evolution equations (5.21) are transformed
into the following dynamic equations
$$
\frac{d}{d\tau}{{\mathbf P}}=-\nabla
V(r)~P_2,~~\frac{d}{d\tau}P_2=-3({{\mathbf P}}\cdot\nabla)V(r)
P_1,~~\frac{d}{d\tau}P_1=-({{\mathbf P}}\cdot\nabla) V(r).\eqno(5.26)
$$
In the case of a stationary potential, besides the first constants $R_2,R_0$, the equations imply another constant of motion, the energy
$$
{{\cal E}}_1=P_1+V.\eqno(5.27)
$$
On making use of the expression for $P_1$ from (5.27) in the second
equation of Eqs.(5.26), we get
$$
{{\cal E}}_2=P_2+3{{\cal E}}_1V-\frac{3}{2}V^2=\frac{1}{2}(R_2+3{{\cal E}}_1^2).\eqno(5.28)
$$
This is the second expression for the energy. Next, express $P_2$ from this
formula and substitute it into the first equation of (5.26). This
leads to the Newtonian equation
$$
\frac{d}{d\tau}{{\mathbf P}}=-\nabla W(r,{{\cal E}}_1),\eqno(5.29)
$$
with effective potential
$$
2W(r,{{\cal E}}_1)=2{{\cal E}}_2V(r)-3{{\cal E}}_1V^2(r)+V^3(r).\eqno(5.30)
$$
From Eq.(5.29) one may find the formula for the total energy
$$
{{\cal E}}=\frac{1}{2}P^2+W=\frac{1}{2}(R_0+R_2{{\cal E}}_1+{{\cal E}}_1^3).\eqno(5.31)
$$
\section{ Generalized dynamics related with evolution of $n$-degree polynomial}
Now we have accumulated enough experience in order to be able to build the general form of the dynamics related to $n$-degree polynomials. Let
us start with the evolution equations (2.8) written for the $n$-degree polynomial (2.1). The dynamic equations with respect to a time-like evolution
parameter, describing a motion inside a stationary potential field $V(r)$, are formulated in the following form
$$
\frac{d{{\mathbf P}}}{d\tau}={{\mathrm U}}~P_{n-1},~~ \eqno(6.1a)
$$
$$
\frac{dP_{k}}{d\tau}=({{\mathrm U}}\cdot{{\mathbf P}})P_{k-1}({n-k+2}),~~ k=2,\ldots,
n-1, \eqno(6.1b)
$$
$$
\frac{dP_{1}}{d\tau}=({{\mathrm U}}\cdot{{\mathbf P}}). \eqno(6.1c)
$$
The set of evolution equations has to be completed with the
interrelation between momentum and velocity given by Eq.(5.25).
This system of dynamic equations leaves the coefficients
$R_0,R_k,k=1,...,n-1$ invariant. The motion of a physical system obeying
these equations possesses a set of {\it inner} and {\it
outer} momenta. The evolution of the set of outer momenta $\{
P^2,P_k,k=1,...,n-1~\}$ is given by Eqs.(6.1), whereas the
evolution of the inner momenta $\{ p_k,k=1,...,n\}$ is described
by
$$
\frac{dp^2_k}{d\tau}=({{\mathrm U}}\cdot{{\mathbf P}}),~k=1,...,n.\eqno(6.2)
$$
The first integrals of this system are given by
$$
M_{ik}=p^2_i-p^2_k,~~i\neq k.\eqno(6.3)
$$
From Eq.(6.1c) we find the first constant of integration, the
energy ${{\cal E}}_1=P_1+V$, or $P_1={{\cal E}}_1-V.$ By substituting $P_1$ into
the next, $k=2$, equation of (6.1b) we find the other constant
of integration (the second expression for the energy):
$${{\cal E}}_2=P_2+n{{\cal E}}_1~V-\frac{n}{2}V^2.$$
By continuing this process, namely, by substituting $P_2$ from the last expression into the next equation with $k=3$, we find the third
constant of integration:
$${{\cal E}}_3=P_3-(n-1){{\cal E}}_2~V+\frac{n(n-1)}{2}{{\cal E}}_1~V^2-\frac{n(n-1)}{2\cdot 3}V^3.$$
Continuing this process up to the $(n-1)$-th stage, we obtain
the expression for $P_{n-1}$. By
introducing this expression into Eq.(6.1a), we come to the Newtonian
equation
$$
\frac{d}{d\tau}{{\mathbf P}}=-\nabla W(r,{{\cal E}}_1), \eqno(6.4)
$$
where the effective potential is given by the following series
$$
W(r,{{\cal E}}_1)=\frac{1}{2} \sum^n_{k=1} (k+1){{\cal E}}_{n-k}
V^k(r)(-1)^{k+1},~\mbox{ with}~ {{\cal E}}_0=\frac{1}{n+1}. \eqno(6.5)
$$
From Eq.(6.4) one may find the formula for the total energy
$$
{{\cal E}}=\frac{1}{2}P^2+W=\frac{1}{2}(R_0+\sum_{k=2}^{n-1}R_k{{\cal E}}_1^{k-1}+{{\cal E}}_1^n).\eqno(6.6)
$$
\section{Concluding remarks}
It is a prodigious fact that the evolution equations elaborated for polynomials serve as dynamic equations for the generalized dynamics of
high-energy particles. Notice that the algorithm for the calculation of the eigenvalues of polynomials elaborated in this paper is distinct from the
numerical methods of calculation, which are principally based on an iteration process with initial sampling, or tentative, data for the roots, so that
the effectiveness of those algorithms depends on the initial data. The iteration process based on the present algorithm uses as
initial data the coefficients of the original polynomial. The method elaborated here can also be considered as a theory of the functional connection
between the coefficients and the eigenvalues of the polynomial, expressed via one-valued Weierstrass and Jacobi hyper-elliptic functions. Evidently, the
present method can be extended without any principal difficulties to the case of polynomials defined over the field of complex numbers.
We have restricted our attention to dynamic equations given in one-dimensional coordinate space. On the theory of the generalized dynamics
in $4D$ space with physical units one may consult Refs.\cite{Yamaleev3} and \cite{Yamaleev4} and the references therein.
\section{Introduction}
The study of the in-medium properties of hadrons is an important area of research in the physics of strongly interacting matter. The study of the heavy flavor hadrons \cite{Hosaka} has attracted a lot of attention due to its relevance in the ultra-relativistic heavy ion collision experiments. Recently, heavy quarkonia ($\overline{q}q; q= c,b$) under extreme conditions of matter, i.e., high density and/or high temperature, have been investigated extensively. The medium, created in the relativistic high energy collisions between heavy nuclei, affects the particle masses and decay widths, which have further observable impacts, e.g., on particle production ratios. In the non-central heavy ion collision experiments, strong magnetic fields are expected to be produced \cite{kharzeev,fukushima,skokov,deng}. The magnetic fields produced have been estimated to be huge at RHIC, BNL and at LHC, CERN \cite{tuchin}. However, the time evolution of the magnetic field produced in such experiments
requires the detailed knowledge of the electrical conductivity
of the medium and careful treatment of the solutions of magneto-hydrodynamic
equations \cite{tuchin} and is still an open question. The study of the effects of strong magnetic fields on the in-medium properties of hadrons has
initiated a new area of research in heavy ion physics. \\
The heavy quarkonium (charmonium and bottomonium) states have been investigated in the literature using the potential models \cite{eichten1,eichten2,radfort}, the QCD sum rule approach \cite{klingl,kim,ko,am82,pallabi}, coupled channel approach \cite{molina}, the quark-meson coupling model \cite{krein,tsushima}, a chiral effective model \cite{amupsarx, am981, am90}, and a field theoretic model
for composite hadrons \cite{am102, am95}.
In the present work, we study the masses of the S-wave (vector,
$\Upsilon (1S)$, pseudoscalar, $\eta_b$) and P-wave (scalar, $\chi_{b0}$, axial vector, $\chi_{b1}$) bottomonium ground states, within the magnetized asymmetric nuclear medium, using the formalism of QCD sum rules. The S-wave and P-wave charmonium ground states in magnetized matter have already been studied \cite{cho91, pallabi} within the sum rule formalism. The medium effects are incorporated through the QCD gluon condensates \cite{schechter}, up to dimension 4, in terms of the medium modifications of scalar fields within a chiral SU(3) model based on the non-linear realization of chiral ${SU(3)}_L\times{SU(3)}_R$ symmetry.\\
The open heavy flavor mesons, namely the open charm and the open bottom mesons, have also been studied within the QCD sum rule approach \cite{arata,wang, gubler}, the model for composite hadrons \cite{amarx} and the chiral model, without magnetic field \cite{am23, amd91}, and also with magnetic field \cite{am97, am98}. The in-medium masses of the light vector mesons have been studied using the QCD sum rule approach \cite{hatsuda}, where the medium modifications come through the non-strange and strange light quark condensates and the scalar gluon condensates, calculated within the chiral effective model, in strange asymmetric matter, without the effect of magnetic field \cite{am91}, and in nuclear medium with the effect of magnetic field \cite{am100}. In the QCD sum rule approach, the mass modifications of the hidden heavy flavor mesons (charmonium and bottomonium) are found through the medium modifications of the scalar and the twist-2 gluon condensates calculated in a chiral effective model. The open heavy flavor mesons have their mass modifications in terms of both the light quark condensates (because of the light quark flavor present in their quark structure) as well as the gluon condensates simulated within the chiral model. By finding out the mass modifications of the charmonium (bottomonium) and open charm (bottom) mesons within the medium, modifications of their decay widths have also been calculated using a field theoretic model of composite hadrons in the presence of magnetic field \cite{am102, amsm1, amsm2}, and also using a light quark--anti-quark pair creation model, namely the $^3P_0$ model \cite{amal}. These studies have important observable consequences in the relativistic heavy ion collision experiments, with the current focus on the heavy flavor mesons in magnetized matter \cite{machado, matheus, suzuki}. There have also been a number of finite temperature studies of heavy quarkonia within the QCD sum rule framework. The properties of strange mesons have been investigated in magnetized matter, within the chiral SU(3) model \cite{anuj} and in the field theoretic model \cite{amsm3}.
\\
In section II, the chiral SU(3) model is described briefly;
section III introduces the Quantum Chromodynamics (QCD) sum rule approach
used to find the in-medium masses of the lowest bottomonium states and the
mass shifts due to spin-mixing effects; in section IV, the results
for the in-medium masses and their shifts under various conditions
are discussed; section V summarizes the findings of this work.
\section{The Chiral ${SU(3)}_L\times{SU(3)}_R $ Model}
In-medium masses of the bottomonium ground states are computed within the QCD sum rule approach, in terms of the gluon condensates. These condensates, in the present study, are calculated in a chiral effective SU(3) model \cite{papa59}. The chiral model is based on the non-linear realization of chiral symmetry \cite{weinberg, coleman, bardeen}, and the scale invariance breaking of QCD \cite{papa59, am69, zschi}. An effective Lagrangian based on the non-linear realization of chiral symmetry has been employed here, with a logarithmic potential in the scalar dilaton field $\chi$ \cite{erik}, which mimics the scale-invariance breaking of QCD. The chiral $ {SU(3)}_L\times{SU(3)}_R$ model Lagrangian density has the following general form \cite{papa59},
\begin{equation}
\mathcal{L}=\mathcal{L}_{kin}+\mathcal{L}_{BM}+\mathcal{L}_{vec}+\mathcal{L}_0+\mathcal{L}_{scale-break}+\mathcal{L}_{SB}+\mathcal{L}_{mag}
\end{equation}
In the above expression, $\mathcal{L}_{kin}$ is the kinetic energy term of the baryons and the mesons; $\mathcal{L}_{BM}$ represents the baryon-meson (spin-0 and spin-1) interactions; $\mathcal{L}_{vec}$ contains the quartic self-interactions of the vector mesons and their couplings with the scalar ones; $\mathcal{L}_0$ incorporates the spontaneous chiral symmetry breaking effects via meson-meson interactions; $\mathcal{L}_{scale-break}$ is the scale symmetry breaking logarithmic potential; $\mathcal{L}_{SB}$ is the explicit symmetry breaking term; finally, the magnetic field effects on the charged and neutral baryons in the nuclear medium are given by \cite{amupsarx, am981, am97, am98, broderik, prakash, wei, guang},
\begin{equation}
\mathcal{L}_{mag}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-q_i{\bar{\psi}}_i\gamma_\mu A^\mu\psi_i-\frac{1}{4}\kappa_i\mu_N{\bar{\psi}}_i\sigma^{\mu\nu}F_{\mu\nu}\psi_i
\end{equation}
where $\psi_i$ is the baryon field operator $( i = p, n)$ in the case of nuclear matter, and the parameter $\kappa_i$ is related to the anomalous magnetic moment of the $i$-th baryon (\cite{broderik} - \cite{paoli}); $\kappa_p = 3.5856$ and $\kappa_n = -3.8263$ are the gyromagnetic ratios corresponding to the anomalous magnetic moments (AMM) of the proton and the neutron respectively. Thus, in the magnetized nuclear medium, the magnetic field contributes through the Landau energy levels of the charged particles \cite{ivanov}, and through the non-zero anomalous magnetic moments of the nucleons \cite{ivanov, paoli}.
\\
In the chiral model, the mean-field approximation is adopted, in which the meson fields are treated as classical fields. The expectation values of the fields within the system have non-zero contributions only for the vector (time-component) and scalar meson fields, and vanish for the other meson (pseudoscalar, axial vector) fields \cite{papa59}.\\
The scalar dilaton field, $\chi$, simulates the scalar gluon condensate $ \langle \frac{\alpha_s}{\pi} G_{\mu\nu}^a$ $G^{a\mu\nu}\rangle$, as well as the twist-2 gluon operator $ \langle \frac{\alpha_s}{\pi} G_{\mu\sigma}^a$ $G_{\nu}^{a\space \sigma}\rangle$, within the model. The energy momentum tensor, $T_{\mu\nu}$, derived from the $\chi$-terms in the chiral model Lagrangian density \cite{am82}, is thus obtained as
\begin{equation}
T_{\mu\nu}=\left(\partial_{\mu}\chi\right)\left(\frac{\partial\mathcal{L}}{\partial\left(\partial^\nu\chi\right)}\right)- g_{\mu\nu}\mathcal{L}_\chi
\end{equation}
The QCD energy momentum tensor, in the limit of current quark masses, contains a symmetric trace-less part and a trace part, as given below \cite{morita, cohen},
\begin{equation}
T_{\mu\nu}=-ST\left(G_{\mu\sigma}^aG_\nu^{a\sigma}\right)+\frac{g_{\mu\nu}}{4}\left(\sum_{i}m_i\overline{q}_i q_i+\langle \frac{\beta_{QCD}}{2g}G_{\sigma k}^a G^{a\sigma}_k\rangle\right)
\end{equation}
with the leading order QCD $\beta$ function \cite{am82}, $\beta_{QCD}(g) = -\frac{g^3}{(4\pi)^2} (11-\frac{2}{3} N_f )$, taking the 3 color quantum numbers of QCD and the number of flavors $N_f=3$. Here, the $m_i$'s $(i= u, d, s)$ are the current quark masses.\\ Writing the medium expectation value of the twist-2 gluon operator as,
\begin{equation}
\langle \frac{\alpha_s}{\pi} G_{\mu\sigma}^a G_{\nu}^{a\space \sigma}\rangle = \left(u_\mu u_\nu-\frac{g_{\mu\nu}}{4}\right)G_2
\end{equation}
where $u_\mu$ is the 4-velocity of the nuclear medium, taken to be at rest \cite{am82, pallabi} in the present investigation, $u_\mu=\left(1,\ 0,\ 0,\ 0\right)$; the QCD energy momentum tensor then reads
\begin{equation}
T_{\mu\nu}=-\frac{\pi}{\alpha_s}\left(u_\mu u_\nu-\frac{g_{\mu\nu}}{4}\right)G_2+\frac{g_{\mu\nu}}{4}\left(\sum_{i}m_i\overline{q}_i q_i+\langle \frac{\beta_{QCD}}{2g}G_{\sigma k}^a G^{a\sigma}_k\rangle\right)
\end{equation}
Comparing the expressions of the energy momentum tensor in equation (6) and in equation (3), one obtains the expressions for $G_2$ (the twist-2 component) and the scalar gluon condensate by multiplying both sides with $\left(u_\mu u_\nu-\frac{g_{\mu\nu}}{4}\right)$ and $g^{\mu\nu}$ respectively. These are given by
\begin{equation}
G_2 = \frac{\alpha_s}{\pi}\Bigg[-(1-d+4k_4)(\chi^4-\chi_0^4)-
\chi^4 \ln\left(\frac{\chi^4}{\chi_0^4}\right)+\frac{4}{3}d\chi^4\ln\left(\left(\frac{(\sigma^2-\delta^2)\zeta}{\sigma_0^2\zeta_0}\right)\left(\frac{\chi}{\chi_0}\right)^3\right)\Bigg]
\end{equation}
and,
\begin{equation}
\langle \frac{\alpha_s}{\pi} G_{\mu\nu}^a G^{a\mu\nu}\rangle
=\frac{8}{9}\Bigg[(1-d)\chi^4+\left(\frac{\chi}{\chi_0}\right)^2\left(m_\pi^2 f_\pi \sigma +\left(\sqrt{2}m_k^2f_k-\frac{1}{\sqrt{2}}m_\pi^2 f_\pi\right)\zeta\right)\Bigg]
\end{equation}
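For orientation, Eqs.(7) and (8) translate directly into code. The sketch below (Python) evaluates both condensates for given field values; all numerical constants are placeholders, to be replaced by the parameter set of the chiral SU(3) model used in the present work.
\begin{verbatim}
import numpy as np

m_pi, f_pi = 139.0, 93.3    # MeV (approximate placeholder values)
m_k,  f_k  = 498.0, 122.1   # MeV (approximate placeholder values)
d, k4, alpha_s = 0.064, -0.23, 0.1411  # dummy model constants

def twist2_condensate_G2(sigma, zeta, delta, chi, sigma0, zeta0, chi0):
    """Twist-2 gluon condensate G_2 of Eq.(7)."""
    return (alpha_s / np.pi) * (
        -(1.0 - d + 4.0 * k4) * (chi**4 - chi0**4)
        - chi**4 * np.log(chi**4 / chi0**4)
        + (4.0 / 3.0) * d * chi**4 * np.log(
            ((sigma**2 - delta**2) * zeta / (sigma0**2 * zeta0))
            * (chi / chi0)**3))

def scalar_gluon_condensate(sigma, zeta, chi, chi0):
    """<(alpha_s/pi) G.G> of Eq.(8)."""
    return (8.0 / 9.0) * (
        (1.0 - d) * chi**4
        + (chi / chi0)**2 * (m_pi**2 * f_pi * sigma
            + (np.sqrt(2.0) * m_k**2 * f_k
               - m_pi**2 * f_pi / np.sqrt(2.0)) * zeta))
\end{verbatim}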
Thus, the expectation values of the scalar and the twist-2 gluon condensates depend on the in-medium values of the non-strange scalar field, $\sigma$, the strange scalar field, $\zeta$, and the scalar-isovector field, $\delta$ [if the current quark masses are taken to be non-zero], besides the scalar dilaton field, $\chi$, within the chiral $ SU(3) $ model. By deriving the Euler-Lagrange equations of motion from the effective model Lagrangian under the mean-field approximation, the coupled equations of motion in these fields are obtained, incorporating the effects of density, isospin asymmetry, anomalous magnetic moments, and magnetic field of the nuclear medium under study.
The coupled equations of motion in $\sigma, \zeta, \delta,$ and $\chi $ are,
\begin{eqnarray}
&& k_0\chi^2\sigma-4k_1(\sigma^2+\zeta^2+\delta^2) \sigma
-2k_2(\sigma^3+3\sigma\delta^2)-2k_3 \chi\sigma\zeta
\nonumber \\ &-&\frac{d}{3}\chi^4\left(\frac{2\sigma}{\sigma^2
-\delta^2}\right)+\left(\frac{\chi}{\chi_0}\right)^2 m_\pi^2 f_\pi
- \sum g_{\sigma i} \rho_i^s = 0
\end{eqnarray}
\begin{eqnarray}
&& k_0\chi^2\zeta -4k_1(\sigma^2+\zeta^2+\delta^2)\zeta-4k_2\zeta^3-k_3 \chi(\sigma^2-\delta^2)-\frac{d}{3}\frac{\chi^4}{\zeta} \nonumber \\
&+&\left(\frac{\chi}{\chi_0}\right)^2\left(\sqrt{2}m_k^2f_k-\frac{1}{\sqrt{2}}m_\pi^2 f_\pi\right)-\sum g_{\zeta i} \rho_i^s = 0
\end{eqnarray}
\begin{eqnarray}
&& k_0\chi^2\delta -4k_1(\sigma^2+\zeta^2+\delta^2)\delta - 2k_2(\delta^3+3\sigma^2\delta) +k_3\chi\delta\zeta \nonumber \\ &+&\frac{2}{3}d\chi^4\left(\frac{\delta}{\sigma^2-\delta^2}\right)-\sum g_{\delta i}\rho_i^s =0
\end{eqnarray}
\begin{eqnarray}
&& k_0\chi(\sigma^2+\zeta^2+\delta^2)-k_3(\sigma^2-\delta^2)\zeta+ \chi^3 \left[1+4\ln\left(\frac{\chi^4}{\chi_0^4}\right)\right]+ (4k_4 - d) \chi^3
\nonumber \\ &-&\frac{4}{3}d\chi^3 \ln\left[\left(\frac{(\sigma^2-\delta^2)\zeta}{\sigma_0^2\zeta_0}\right)\left(\frac{\chi}{\chi_0}\right)^3\right]+2\frac{\chi}{\chi_0^2}\left[m_\pi^2 f_\pi \sigma +\left(\sqrt{2}m_k^2f_k-\frac{1}{\sqrt{2}}m_\pi^2 f_\pi\right)\zeta\right] = 0.
\end{eqnarray}
In these equations, $\rho_i ^s$ is the scalar density of the $i$-th
baryon in the magnetized matter \cite{amupsarx}.
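Numerically, Eqs.(9)-(12) pose a root-finding problem for the fields at given $\rho_B$, $\eta$ and $eB$. Purely as an illustration of this step (not the parameter set or the full system of this work), the sketch below solves a reduced two-field version, freezing $\delta=0$ and $\chi=\chi_0$ and treating the scalar-density source terms as fixed inputs; every number is a dummy, so the root found need not be physical.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

k0, k1, k2, k3, d = 2.37, 1.40, -5.55, -2.65, 0.064   # dummy constants
m_pi, f_pi, m_k, f_k = 139.0, 93.3, 498.0, 122.1      # MeV
chi0, sigma0, zeta0 = 401.9, -93.3, -106.6            # MeV (dummies)

def residuals(fields, src_sigma, src_zeta):
    """LHS of Eqs.(9)-(10) at delta = 0, chi = chi0; src_* stand for
    the summed coupling-times-scalar-density terms."""
    s, z = fields
    eq_sigma = (k0 * chi0**2 * s - 4.0 * k1 * (s**2 + z**2) * s
                - 2.0 * k2 * s**3 - 2.0 * k3 * chi0 * s * z
                - (d / 3.0) * chi0**4 * (2.0 / s)
                + m_pi**2 * f_pi - src_sigma)
    eq_zeta = (k0 * chi0**2 * z - 4.0 * k1 * (s**2 + z**2) * z
               - 4.0 * k2 * z**3 - k3 * chi0 * s**2
               - (d / 3.0) * chi0**4 / z
               + np.sqrt(2.0) * m_k**2 * f_k
               - m_pi**2 * f_pi / np.sqrt(2.0) - src_zeta)
    return [eq_sigma, eq_zeta]

sigma, zeta = fsolve(residuals, x0=(sigma0, zeta0), args=(1.0e7, 1.0e6))
\end{verbatim}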
\section{In-Medium Masses Within The QCD Sum Rule Approach}
In this section, the in-medium masses of the lowest-lying bottomonium S-wave states, 1S [$\Upsilon (1S)$, $\eta_b$], and P-wave states, 1P [$\chi_{b0}$ and $\chi_{b1}$], are calculated within the QCD sum rule approach. In the QCD sum rule framework, the masses of these bottomonium ground states are obtained by using the medium modified scalar and twist-2 gluon condensates. The condensates are in turn calculated within the chiral SU(3) model in terms of the scalar field modifications within the magnetized, isospin asymmetric nuclear matter.
The in-medium mass squared, $m_i^{*2}$, for the $i$-type bottomonium ground state [$i$ = vector, pseudoscalar, scalar, and axial-vector], in the QCD sum rule approach can be written as \cite{reinders},
\begin{equation}
m_i^{*2}\simeq \frac{M_{n-1}^i (\xi)}{M_{n}^i (\xi)}-4m_b^2\xi
\end{equation}
where $M_{n}^i $ is the $n$-th moment of the $i$-type meson, and
$\xi$ is the renormalization scale.
Using the operator product expansion (OPE) technique, the moment can be written as \cite{klingl, reinders},
\begin{equation}
M_{n}^i(\xi)=A_n^i(\xi)\left[1+ a_n^i(\xi)\alpha_s + b_n^i(\xi)\phi_b+c_n^i(\xi)\phi_c\right]
\end{equation}
Here, $A_n^i, a_n^i, b_n^i,$ and $c_n^i$ are the Wilson coefficients. The $A_n^i$ coefficients result from the bare-loop diagram of perturbative QCD, the $a_n^i$ are the contributions from the perturbative radiative corrections, and the coefficients $b_n^i$ are related to the scalar gluon condensate through
\begin{equation}
\phi_b=\frac{4\pi^2}{9}\frac{\langle \frac{\alpha_s}{\pi} G_{\mu\nu}^a G^{a\mu\nu}\rangle}{\left(4m_b^2\right)^2}.
\end{equation}
Replacing the value of the scalar gluon condensate, the above equation can be written in terms of the scalar fields as,
\begin{equation}
\phi_b=\frac{32\pi^2}{81\left(4m_b^2\right)^2}\Bigg[(1-d)\chi^4+ \left(\frac{\chi}{\chi_0}\right)^2\left(m_\pi^2 f_\pi \sigma +\left(\sqrt{2}m_k^2f_k-\frac{1}{\sqrt{2}}m_\pi^2 f_\pi\right)\zeta\right)\Bigg]
\end{equation}
Finally, the $c_n^i$ coefficients are associated with the twist-2 gluon condensates through
\begin{equation}
\phi_c=\frac{4\pi^2}{3\left(4m_b^2\right)^2 }G_2
\end{equation}
In terms of the scalar fields, the expression for $\phi_c$ is,
\begin{eqnarray}
\phi_c &= & \frac{4\pi \alpha_s}{3\left(4m_b^2\right)^2}
\Bigg[-(1-d+4k_4)(\chi^4-\chi_0^4)-
\chi^4 \ln\left(\frac{\chi^4}{\chi_0^4}\right) \nonumber \\
&+&\frac{4}{3}d\chi^4\ln\left(\left(\frac{(\sigma^2-\delta^2)\zeta}{\sigma_0^2\zeta_0}\right)\left(\frac{\chi}{\chi_0}\right)^3\right)\Bigg]
\end{eqnarray}
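Given numerical values of the two condensates (for instance from the sketch after Eq.(8)), the moment inputs defined in Eqs.(15) and (17) are one-line conversions; a possible helper (ours, with $m_b$ in MeV) is:
\begin{verbatim}
import numpy as np

def phi_b(scalar_condensate, m_b=4230.0):
    """Eq.(15): scalar gluon condensate input to the moments."""
    return (4.0 * np.pi**2 / 9.0) * scalar_condensate / (4.0 * m_b**2)**2

def phi_c(G2, m_b=4230.0):
    """Eq.(17): twist-2 input to the moments."""
    return (4.0 * np.pi**2 / 3.0) * G2 / (4.0 * m_b**2)**2
\end{verbatim}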
The $\xi$-dependent parameters $m_b$ and $\alpha_s$ are the running bottom quark mass and the running coupling constant, respectively, given below \cite{am82, reinders},
\begin{equation}
\frac{m_b(\xi)}{m_b} = 1-\frac{\alpha_s}{\pi}\left[\frac{2+\xi}{1+\xi} \ln(2+\xi)-2\ln2\right]
\end{equation}
with $m_b\equiv m_b (p^2=-m_b^2)=4.23 $ GeV \cite{reinders85}, and
\begin{equation}
\alpha_s \left(Q_0^2+4m_b^2\right) = \alpha_s(4m_b^2) \Bigg/ \left(1+\frac{(33-2n_f)}{12\pi}\alpha_s(4m_b^2)\ln\frac{Q_0^2+4m_b^2}{4m_b^2}\right)
\end{equation}
where, $n_f=5$, $\alpha_s \left(4m_b^2\right) \simeq 0.15$ \cite{reinders85}, and $Q_0^2=4m_b^2 \xi$. \\
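Equations (19)-(20) are easy to check numerically. Which $\alpha_s$ enters Eq.(19) is not stated explicitly; evaluating it with $\alpha_s(\xi)$ of Eq.(20) reproduces the values quoted in Section IV for the S-wave choice $\xi=1$ (a sketch in Python, $m_b$ in MeV):
\begin{verbatim}
import numpy as np

def running_alpha_s(xi, alpha0=0.15, m_b=4230.0, n_f=5):
    """Eq.(20) with Q_0^2 = 4 m_b^2 xi."""
    log_term = np.log((4.0 * m_b**2 * xi + 4.0 * m_b**2) / (4.0 * m_b**2))
    return alpha0 / (1.0 + (33.0 - 2.0 * n_f) / (12.0 * np.pi)
                     * alpha0 * log_term)

def running_mass(xi, m_b=4230.0):
    """Eq.(19), evaluated here with alpha_s(xi) from Eq.(20)."""
    a = running_alpha_s(xi, m_b=m_b)
    return m_b * (1.0 - (a / np.pi) * ((2.0 + xi) / (1.0 + xi)
                                       * np.log(2.0 + xi) - 2.0 * np.log(2.0)))

print(running_alpha_s(1.0), running_mass(1.0))  # ~ 0.1411, ~ 4180.3 MeV
\end{verbatim}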
The Wilson coefficients $A_n^i, a_n^i, b_n^i$ are given in Ref.\cite{reinders} for different quantum numbers, $J^{PC}$, of the particle states, e.g., the scalar, vector, pseudoscalar, and axial-vector channels. The $c_n^i$'s are listed for the vector and pseudoscalar channels in \cite{klingl}, in the case of the S-wave charmonium ground states; for the P-waves (scalar and axial-vector),
the $c_n^i$'s are calculated using a background field technique
in Ref.\cite{song}.
In the present work, in the presence of an external magnetic field,
we have considered the effects of spin-magnetic field interaction on
the bottomonium 1S triplet and singlet states. The effects of the spin-magnetic field interaction have been studied for the 1S charmonium states, the vector $J/\Psi$ and the pseudoscalar $\eta_c$, at finite magnetic fields \cite{pallabi, cho91, cho14, alford, suzuki17}. This leads to a mixing between the $J/\Psi$ (1S) and $\eta_c$ states. At non-zero magnetic fields, the spin-magnetic field coupling leads to a mixing between the longitudinal component of the spin 1, $\Upsilon (1S)$, state and the spin 0, $\eta_b$, state. The masses of the longitudinal vector, $\Upsilon^{||}(1S)$ (pseudoscalar, $\eta_b$) bottomonium states are seen to rise (drop) with increasing magnetic field, when the spin-mixing effects are incorporated. The effective masses of the $\Upsilon^{||}(1S)$ (1S triplet) and $\eta_b$ (1S singlet), considering the shifts due to the spin-magnetic field interaction, are thus given by \cite{alford},
\begin{equation}
m^{eff}_{\Upsilon (1S)}= m^*_{\Upsilon(1S)} + \Delta m_{sB},\;\;\;\;
m^{eff}_{\eta_b}= m^*_{\eta_b} - \Delta m_{sB}
\end{equation}
In the above equation, $m^*_{\Upsilon(1S)/\eta_b}$ denotes the in-medium masses of the S-wave bottomonium ground states calculated within the QCD sum rule framework [equation (13)], and $\Delta m_{sB}$ is the shift due to the spin-magnetic field interaction. The expression for the latter is given below,
\begin{equation}
\Delta m_{sB} = \frac{\Delta M}{2}\left((1+{\chi_{sB}}^2)^{1/2}-1\right),\qquad \chi_{sB} = \frac{2g\mu_b B}{\Delta M}
\end{equation}
where $\mu_b = (\frac{1}{3}e)/(2m_b)$ is the bottom quark Bohr magneton with the constituent bottom quark mass $m_b = 4.7 $ GeV in the present work, $\Delta M = m^*_{\Upsilon} - m^*_{\eta_b}$, and $g$ is chosen to be 2 (ignoring the effects of the anomalous magnetic moment of the bottom quark (anti-quark)).
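As a small numerical sketch of Eq.(22) (the function name is ours, and $m_\pi\approx 139$ MeV is assumed for converting $eB$ from units of $m_\pi^2$), the vacuum shift at $eB=4m_\pi^2$ comes out at about $0.43$ MeV, consistent with table \ref{table:5} below:
\begin{verbatim}
import numpy as np

def spin_mixing_shift(m_vec, m_ps, eB_over_mpi2, m_b=4700.0, g=2.0,
                      m_pi=139.0):
    """Eq.(22): spin-mixing shift (MeV); mu_b B = (eB/3)/(2 m_b)."""
    eB = eB_over_mpi2 * m_pi**2          # MeV^2
    dM = m_vec - m_ps                    # MeV
    chi_sB = 2.0 * g * (eB / 3.0) / (2.0 * m_b) / dM
    return 0.5 * dM * (np.sqrt(1.0 + chi_sB**2) - 1.0)

# vacuum masses from table 1: Upsilon(1S) and eta_b
print(spin_mixing_shift(9751.249, 9681.479, 4.0))  # ~ 0.43 MeV
\end{verbatim}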
\section{Results and Discussions}
In the present work, in-medium masses of the lowest lying bottomonia are investigated within the QCD Sum rule (QCDSR) approach, which has extensively been
applied to study the lowest lying hadronic resonances. The masses of the S-wave bottomonium ground states [vector, $\Upsilon (1^3S_1)$, pseudoscalar, $\eta_b (1^1S_0)$] and the P-wave bottomonium ground states [scalar $\chi_{b0} (1^3P_0)$, axial-vector $\chi_{b1} (1^3P_1)$] are investigated in the isospin asymmetric as well as symmetric nuclear matter, both in the presence of strong magnetic fields and in the absence of any magnetic field. The anomalous magnetic moments (AMM) of the nucleons (protons and neutrons) are taken into account while investigating the effects of magnetic fields in the present work. In QCDSR, the masses are obtained by taking the ratio of two consecutive moments in the appropriate n-region, such that stability can be found. \\In the sum rule calculations of the heavy quarkonia ($\overline{q}q, q=c, b$) masses, the moments [$M_n(\xi)$], in the QCD operator product expansion (up to dimension 4 here), depend on the scalar gluon condensate ($\phi_b$) and the twist-2 gluon operator ($\phi_c$). The medium effects on the masses are incorporated through these condensates, which are further evaluated within an effective chiral SU(3) model.\\ In the chiral model, the gluon condensates are simulated through the scalar dilaton field, $\chi$, which in turn mimics the scale symmetry breaking of QCD through a logarithmic potential. At non-zero magnetic fields, the proton has a Landau energy level contribution, which is taken into consideration in the solution of the scalar fields. For any given values of density ($\rho_B$), isospin asymmetry ($\eta$), and magnetic field (eB), the coupled equations of motion in the scalar fields, $\sigma, \zeta, \delta,$ and $\chi$, are solved under the mean-field approximation of the chiral effective model Lagrangian. Then, the scalar and twist-2 gluon condensates are obtained in terms of these scalar fields from equations (8) and (7) respectively. Thereafter, using equations (16) and (18), $\phi_b$ and $\phi_c$ can be calculated. Thus the medium modifications of the scalar isoscalar (strange $\zeta$, non-strange $\sigma$), scalar-isovector, $\delta$, and scalar dilaton, $\chi$, fields are projected into the final mass calculation of the $\overline{b}b$ states through the condensates. The Wilson coefficients in $M_n(\xi)$ are obtained using QCD perturbation theory.\\In the present investigation, the value of $\xi$ is taken as 1 for the S-wave and 2.5 for the P-wave states, leading to the value of the running coupling constant, $\alpha_s$, to be 0.1411 [S states] and 0.1346 [P states], and of the running bottom quark mass, $m_b$, to be 4180.3 MeV and 4130.8 MeV, respectively.\\
The isospin asymmetry parameter is denoted by $\eta\left(=\frac{\rho_n-\rho_p}{2\rho_B}\right)$ [$\rho_n$ and $\rho_p$ : number densities of the neutron and the proton respectively], and the nuclear matter saturation density, $\rho_0$, is 0.15 $fm^{-3}$ in the present work. Calculations are done in both the isospin symmetric ($\eta=0$) and asymmetric ($\eta=0.5$) nuclear matter at the baryonic densities $\rho_B$ = $\rho_0$, 2$\rho_0$ and 4$\rho_0$, for finite values of the AMM of p and n.
Masses are calculated in the magnetized nuclear matter for the values of the magnetic field, $|eB|$ = $4m_\pi^2$ and $12m_\pi^2$. The energy levels of protons have Landau level contributions due to their electric charge, and both protons and neutrons have contributions due to their AMM in the magnetized nuclear matter. \\ The effects of the asymmetric high density nuclear medium are also investigated in the present work, without considering the effects of the magnetic field. The masses are calculated within the sum rule framework, by computing the condensates in the chiral SU(3) model without the magnetic field. The vacuum masses (at $\rho_B=0$) for all four states thus obtained are shown in table \ref{table:1}.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|}
\hline \textbf{Particle state} & \textbf{Vacuum mass (MeV)} \\[0.5ex] \hline
$ \eta_b (^1S_0) $ & 9681.479 \\ \hline
$\Upsilon (^3S_1)$ & 9751.249 \\ \hline
$\chi_{b0}(^3P_0)$ & 10573.429 \\ \hline
$\chi_{b1}(^3P_1)$ & 10812.121 \\ \hline
\end{tabular}
\caption{Vacuum masses (MeV) of the S- and P-wave bottomonium ground states.}
\label{table:1}
\end{table}
\begin{figure}
\includegraphics[width=1.1\textwidth]{fig1.pdf}\hfill
\vskip -0.8in
\caption{Masses (MeV) are plotted with variation in n, for 1S states, $\Upsilon$ and $\eta_b$ in the absence of magnetic field (eB = 0), at $\rho_B = \rho_0$, $2\rho_0$ and $4\rho_0$. Masses are plotted both at symmetric ($\eta$ = 0) and asymmetric ($\eta$ = 0.5) nuclear matter.}
\label{fig:2a}
\end{figure}
\begin{figure}
\includegraphics[width=1.1\textwidth]{fig2.pdf}\hfill
\vskip -0.8in
\caption{Masses (MeV) are plotted with variation in n, for 1P states, $\chi_{b0}$ and $\chi_{b1}$ in the absence of magnetic field, at $\rho_B = \rho_0$, $2\rho_0$ and $4\rho_0$. Masses are plotted both at symmetric ($\eta$ = 0) and asymmetric ($\eta$ = 0.5) nuclear matter.}
\label{fig:2b}
\end{figure}
\begin{figure}
\includegraphics[width=1.1\textwidth]{fig3.pdf}\hfill
\vskip -0.8in
\caption{Masses (MeV) are plotted with variation in n, for the pseudoscalar 1S state, $\eta_b$ at non-zero magnetic field (eB = 4$m_{\pi}^2$ and 12$m_{\pi}^2$), for $\rho_B = 0$ and $\rho_0$. Masses are plotted both at symmetric ($\eta$ = 0) and asymmetric ($\eta$ = 0.5) nuclear matter.}
\label{fig:3a}
\end{figure}
\begin{figure}
\includegraphics[width=1.1\textwidth]{fig4.pdf}\hfill
\vskip -0.8in
\caption{Masses (MeV) are plotted with variation in n, for the vector 1S state, $\Upsilon (1S)$ at non-zero magnetic field (eB = 4$m_{\pi}^2$ and 12$m_{\pi}^2$), for $\rho_B = 0$ and $\rho_0$. Masses are plotted both at symmetric ($\eta$ = 0) and asymmetric ($\eta$ = 0.5) nuclear matter.}
\label{fig:3b}
\end{figure}
\begin{figure}
\includegraphics[width=1.1\textwidth]{fig5.pdf}\hfill
\vskip -0.8in
\caption{Masses (MeV) are plotted with variation in n, for the scalar 1P state, $\chi_{b0}$ at non-zero magnetic field (eB = 4$m_{\pi}^2$ and 12$m_{\pi}^2$), for $\rho_B = 0$ and $\rho_0$. Masses are plotted both at symmetric ($\eta$ = 0) and asymmetric ($\eta$ = 0.5) nuclear matter.}
\label{fig:4a}
\end{figure}
\begin{figure}
\includegraphics[width=1.1\textwidth]{fig6.pdf}\hfill
\vskip -0.8in
\caption{Masses (MeV) are plotted with variation in n, for the axial-vector 1P state, $\chi_{b1}$ at non-zero magnetic field (eB = 4$m_{\pi}^2$ and 12$m_{\pi}^2$), for $\rho_B = 0$ and $\rho_0$. Masses are plotted both at symmetric ($\eta$ = 0) and asymmetric ($\eta$ = 0.5) nuclear matter.}
\label{fig:4b}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.1\textwidth]{fig7.pdf}
\hfill
\vskip -0.8in
\caption{Mass shifts (MeV) are plotted as a function of magnetic field, eB (in units of $m_\pi^2$), for $\Upsilon^{||} (1S)$ and $\eta_b$ states by considering their spin-mixing effects in the presence of finite magnetic fields. Plot.(a) shows the $\eta = 0 $ case and plot.(b) is for $\eta = 0.5$. Mixing effects are considered at $\rho_B = 0$ and $\rho_0$. }
\label{fig:5}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
Particle state & $\rho_B$ & $\eta=0$ & $\eta=0.5$ \\[2ex]
\hline \hline
& $\rho_0$ & 9681.149 & 9681.16 \\ [2ex]
\cline{3-4}
& & \textbf{-0.33} & \textbf{-0.319}\\ [2ex]
\cline{2-4}
\textbf{$\eta_b$}& 2$\rho_0$ & 9680.762 & 9680.812 \\ [2ex]
\cline{3-4}
& & \textbf{-0.72} & \textbf{-0.67}\\[2ex]
\cline{2-4}
& 4$\rho_0$ & 9680.285 & 9680.396 \\[2ex]
\cline{3-4}
& & \textbf{-1.194} & \textbf{-1.083}\\ [2ex]
\hline \hline
& $\rho_0$ & 9750.89 & 9750.903 \\[2ex]
\cline{3-4}
& & \textbf{-0.360} & \textbf{-0.346}\\ [2ex]
\cline{2-4}
\textbf{$\Upsilon (1S)$} & 2$\rho_0$ & 9750.317 & 9750.391 \\ [2ex]
\cline{3-4}
& & \textbf{-0.932} & \textbf{-0.857} \\ [2ex]
\cline{2-4}
& 4$\rho_0$ & 9749.574 & 9749.747 \\[2ex]
\cline{3-4}
&& \textbf{-1.675} & \textbf{-1.502} \\ [2ex]
\hline
\end{tabular}
\caption{Masses (MeV) and their shifts (MeV) from the vacuum values for lowest lying S-wave bottomonia at $\rho_B = \rho_0$, $2\rho_0$ and $4\rho_0 $; at zero magnetic field in the isospin symmetric and asymmetric nuclear matter.}
\label{table:2.a}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
Particle state & $\rho_B$ & $\eta=0$ & $\eta=0.5$ \\[2ex]
\hline \hline
& $\rho_0$ & 10572.541 & 10572.58 \\[2ex]
\cline{3-4}
&& \textbf{-0.89} & \textbf{-0.851} \\ [2ex]
\cline{2-4}
\textbf{$\chi_{b0} $} & 2$\rho_0$ & 10571.058 & 10571.252 \\ [2ex]
\cline{3-4}
&& \textbf{-2.370} & \textbf{-2.176} \\ [3ex]
\cline{2-4}
& 4$\rho_0$ & 10569.12 & 10569.571\\[2ex]
\cline{3-4}
&& \textbf{-4.310} & \textbf{-3.86} \\ [2ex]
\hline \hline
& $\rho_0$ & 10811.248 & 10811.284 \\[2ex]
\cline{3-4}
&& \textbf{-0.873522} & \textbf{-0.83756} \\ [2ex]
\cline{2-4}
\textbf{$\chi_{b1} $}& 2$\rho_0$ & 10809.788 & 10809.980 \\ [2ex]
\cline{3-4}
&& \textbf{-2.333} & \textbf{-2.142} \\ [2ex]
\cline{2-4}
& 4$\rho_0$ & 10807.878 & 10808.323 \\ [2ex]
\cline{3-4}
&& \textbf{-4.243} & \textbf{-3.798} \\ [2ex]
\hline
\end{tabular}
\caption{Masses (MeV) and their shifts (MeV) from the vacuum values for lowest lying P-wave bottomonia at $\rho_B = \rho_0$, $2\rho_0$ and $ 4\rho_0 $; at zero magnetic field in the isospin symmetric and asymmetric nuclear matter.}
\label{table:2.b}
\end{center}
\end{table}
Masses (MeV) are calculated at zero magnetic field in symmetric and asymmetric nuclear matter at the baryonic densities $\rho_B$ = $\rho_0$, ${2\rho}_0$ and ${4\rho}_0$. They are listed in table \ref{table:2.a} and table \ref{table:2.b} for the S-wave and P-wave states respectively.
In table \ref{table:3}, the masses and their shifts in MeV from the corresponding vacuum values are shown for the S-wave states [$\Upsilon (1S), \eta_b$], at eB = $4m_\pi^2$ and $12m_\pi^2$, both in symmetric ($\eta = 0.0$) and asymmetric ($\eta=0.5$) nuclear matter for $\rho_B=\rho_0$ and $2\rho_0$.
Similar findings can also be made for the P-wave bottomonium ground states. Table \ref{table:4} illustrates the in-medium masses and their shifts from the vacuum for the scalar $(\chi_{b0})$ and the axial vector $(\chi_{b1})$ states with the variation of $\rho_B$, eB, and $\eta$.
The mixing between the $1S$ $\Upsilon$ and $\eta_b$ states due to the spin-magnetic field interaction leads to a respective increase and decrease in their masses with increasing magnetic field. The mass shifts for these states are shown in table \ref{table:5} at $\rho_B=0, \rho_0$ and for $\eta =0, 0.5$ with the variation of the magnetic field, eB/$m_\pi^2$ = 4, 8, 10 and 12.
\begin{table}
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
\multirow{2}{4em}{eB} & \multirow{2}{4em}{$\rho_B$} & \multicolumn{2}{|c|}{\textbf{$\eta_b $}} & \multicolumn{2}{|c|}{\textbf{$\Upsilon (1S$)}} \\ [2ex]
\cline{3-6}
& & \textbf{$\eta=0$} & $\eta=0.5$ & $\eta=0$ & $\eta=0.5$\\ [2ex]
\hline
& \textbf{${\rho_0}$} & 9681.144 & 9681.181 & 9750.878 & 9750.932 \\[2ex]
\cline{2-6}
\textbf{$4m_\pi^2$} & $\Delta m$ & \textbf{-0.335} & \textbf{-0.298} & \textbf{-0.371} & \textbf{-0.317}\\[2ex]
\cline{2-6}
& $ 2\rho_0 $ & 9680.774 & 9680.851 & 9750.337 & 9750.45\\[2ex]
\cline{2-6}
& $\Delta m$ & \textbf{-0.705} & \textbf{-0.628} & \textbf{-0.911} & \textbf{-0.799}\\[2ex]
\hline
& $\rho_0$ & 9681.148 & 9681.193 & 9750.885 & 9750.948\\[2ex]
\cline{2-6}
\textbf{$ 12m_\pi^2 $} & $\Delta m$ & \textbf{-0.332} & \textbf{-0.287} & \textbf{-0.364} & \textbf{-0.301}\\[2ex]
\cline{2-6}
& $ 2\rho_0$ & 9680.776 & 9680.918 & 9750.338 & 9750.550\\[2ex]
\cline{2-6}
& $\Delta m$ & \textbf{-0.703} & \textbf{-0.561} & \textbf{-0.911} & \textbf{-0.698}\\[2ex]
\cline{2-6}
\hline
\end{tabular}
\end{center}
\caption{Masses (MeV) and their shifts (in MeV) for S-wave bottomonium ground states, at eB = $4m_\pi^2$ and $12m_\pi^2$ and for $\eta =$0 and 0.5.}
\label{table:3}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
\multirow{2}{4em}{eB} & \multirow{2}{4em}{$\rho_B$} & \multicolumn{2}{|c|}{\textbf{$\chi_{b0}$}} & \multicolumn{2}{|c|}{\textbf{$\chi_{b1}$}} \\ [2ex]
\cline{3-6}
& & \textbf{$\eta=0$} & $\eta=0.5$ & $\eta=0$ & $\eta=0.5$\\ [2ex]
\hline
& \textbf{${\rho_0}$} & 10572.513 & 10572.653 &10811.220&
10811.359 \\[2ex]
\cline{2-6}
\textbf{$4m_\pi^2$} & $\Delta m$ & \textbf{-0.916} & \textbf{-0.775} & \textbf{-0.901} & \textbf{-0.763}\\[2ex]
\cline{2-6}
& $ 2\rho_0 $ & 10571.113 & 10571.404 & 10809.842 & 10810.128\\[2ex]
\cline{2-6}
& $\Delta m$ & \textbf{-2.316} & \textbf{-2.025} & \textbf{-2.280} & \textbf{-1.993}\\[2ex]
\cline{2-6}
\hline
& $\rho_0$ & 10572.531 & 10572.692 &10811.239 & 10811.397 \\[2ex]
\cline{2-6}
\textbf{$ 12m_\pi^2 $} & $\Delta m$ & \textbf{ -0.897} & \textbf{-0.736} & \textbf{-0.883} & \textbf{-0.724}\\[2ex]
\cline{2-6}
& $ 2\rho_0$ & 10571.114 & 10571.665 & 10809.843 & 10810.386\\[2ex]
\cline{2-6}
& $\Delta m$ & \textbf{-2.315} & \textbf{-1.763} & \textbf{-2.279} & \textbf{-1.736}\\[2ex]
\cline{2-6}
\hline
\end{tabular}
\caption{Masses (MeV) and their shifts (in MeV) for P-wave bottomonium ground states, at eB = $4m_\pi^2$ and $12m_\pi^2$ and for $\eta =$0 and 0.5.}
\label{table:4}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
State & $\rho_B$ & \multicolumn{4}{|c|}{eB in units of $m_\pi^2$} \\ [2 ex]
\cline{3-6}
&& 4& 8 & 10 & 12 \\ [2ex]
\hline \hline
& 0 & 0.428 & 1.682 & 2.595 & 3.681 \\ [2ex]
\cline{2-6}
$\Upsilon (1S)$& $\rho_0 (\eta = 0)$ & 0.057 & 1.318 & 2.232 & 3.319 \\ [2ex]
\cline{2-6}
& $\rho_0 (\eta = 0.5)$ & 0.111 & 1.378 & 2.293 & 3.381 \\ [2ex]
\hline
& 0 & -0.428 & -1.682 & -2.595 & -3.681\\ [2ex]
\cline{2-6}
$\eta_b$ & $\rho_0 (\eta = 0)$ & -0.763 & -2.016 & -2.927 & -4.014 \\ [2ex]
\cline{2-6}
& $\rho_0 (\eta = 0.5)$ & -0.726 & -1.973 & -2.884 & -3.969\\ [2ex]
\hline
\end{tabular}
\caption{Mass-shifts (MeV) of 1S $\Upsilon$ and $\eta_b$ due to their mixing effects under spin-magnetic field interaction at vacuum ($\rho_B = 0$) and at $\rho_B = \rho_0$, for isospin symmetric ($\eta = 0$) and asymmetric ($\eta = 0.5$) matter. }
\label{table:5}
\end{center}
\end{table}
In the QCD sum rule approach, masses of the lowest order resonances are calculated for a range of n-values (n : order of moments in QCDSR). The minimum point is considered to be the approximate physical mass of the corresponding resonance. The effects of the nuclear matter density, isospin asymmetry, and magnetic fields on the bottomonia masses are shown in figures [\ref{fig:2a}-\ref{fig:4b}] with variation in n. As observed from these figures, as well as from the mass shifts in tables [\ref{table:2.a}-\ref{table:4}], among the various effects acting on the properties of bottomonia in the nuclear medium, density has the most significant contribution, and the P-wave states have more prominent mass drops than the S-wave states in the QCD sum rule calculations, under the same nuclear matter conditions. As shown in tables [\ref{table:3}-\ref{table:4}], the mass shifts of $\eta_b, \Upsilon(1S), \chi_{b0}, \chi_{b1}$ at the nuclear matter saturation density, $\rho_B = \rho_0$, in isospin symmetric (extreme asymmetric) nuclear matter, $\eta=0 (0.5)$, for eB = 4$m_{\pi}^2$ are -0.335 (-0.298), -0.371 (-0.317), -0.916 (-0.775), -0.901 (-0.763) MeV, respectively, and at $\rho_B = 2\rho_0$ these are modified to -0.705 (-0.628), -0.911 (-0.799), -2.316 (-2.025), -2.280 (-1.993) MeV. \\
The dominant contribution from the magnetic fields to the mass shifts is obtained through the spin-mixing effects between the 1S states, $\Upsilon^{||}(1S)$ and $\eta_b$. This is due to the spin-magnetic field interaction at finite magnetic fields between the longitudinal component of the spin one and the spin zero states of the 1S bottomonia considered here. As can be seen from figure \ref{fig:5}, the splitting between $\Upsilon^{||}(1S)$ and $\eta_b$ increases with the magnetic field. In table \ref{table:5}, the mass shifts for $\Upsilon^{||}(1S)$ and $\eta_b$ at $\rho_B = \rho_0$, for eB = 4$m_{\pi}^2$ (12$m_{\pi}^2$), are 0.057 (3.319) and -0.763 (-4.014) MeV respectively in the symmetric matter case, and 0.111 (3.381) and -0.726 (-3.969) MeV respectively in the extreme asymmetric case.
\section{Summary}
To summarize the findings of the present work, the in-medium masses of both the
S- and P-wave bottomonium states, calculated using a QCD sum rule approach,
are observed to decrease with increasing baryonic density. The drop is
more prominent for the P states. The effects of the isospin asymmetry
of the nuclear medium and of the magnetic field are seen to be appreciable
at high densities. Both the asymmetry of the nuclear medium and the strong magnetic field effects entering via the nucleons are seen to have a negligible contribution
to the change of the in-medium properties as compared to the significant density
contribution. The magnetic field contribution is dominant when
the spin-magnetic field interaction
between the 1S spin triplet and singlet states is taken into consideration, which leads
to an appreciable increase and decrease in the masses of the longitudinal
component of $\Upsilon(1S)$ and of $\eta_b$, respectively. This might
be observed as a quasi-peak at $m_{\eta_b}$ in the dilepton spectra
in non-central ultra-relativistic collisions at RHIC and LHC,
where the magnetic fields produced are huge.
For zero magnetic field, the bottomonium masses receive their dominant
effects from the baryon density, which can have consequences for the
production of the open and hidden bottom mesons in facilities
which probe high density baryonic matter.
\section{Introduction}
It has long been known that biometric recognition systems are vulnerable to manipulation through spoofing, also known as presentation attacks~\cite{isopad}. Some of the earliest work in anti-spoofing was published almost two decades ago~\cite{Ratha2001,SCHUCKERS200256}. Since then, a number of common evaluations or challenges have emerged, \emph{e.g.}\ in fingerprint recognition~\cite{ghiani2017review} and face recognition~\cite{chakka2011competition}. The ASVspoof challenge series was born to spearhead research in anti-spoofing for automatic speaker verification (ASV).
Common datasets prepared for the two ASVspoof challenges in 2015 and 2017 were accompanied with common protocols and evaluation metrics. Motivated by the need to build interest and momentum in anti-spoofing research, the ASVspoof challenges have focused on the assessment of countermeasure technologies in isolation from ASV. This approach to assessment offered a low cost of entry and helped to attract researchers from outside of the speaker recognition research community; participation was not dependent on experience in speaker recognition. According to the same strategy, the chosen evaluation metric was the standard \emph{equal error rate} (EER) of a spoofing attack detection module.
The ASVspoof challenge series has developed into what is arguably now the most successful of all biometric anti-spoofing challenges: the ASVspoof 2015 database hosted on the Edinburgh DataShare\footnote{\url{https://datashare.is.ed.ac.uk/handle/10283/2778}} has attracted the greatest number of page views over the academic year 2016-17;
well over 150 download requests were received for the 2017 database; almost 50 participants submitted results to the 2017 evaluation. Even if the simplicity of the challenges may have been instrumental to their success, improvements to the evaluation strategy, and metric in particular, have been planned for since long before the first challenge~\cite{evans2013spoofing}.
While there are compelling reasons to pursue evaluation in isolation from ASV, this strategy is sub-optimal in the longer term. While spoofing countermeasures and ASV solve different tasks --- an argument which may support the former approach to evaluation --- they are but sub-systems of a single system with a common overarching goal. The performance of a spoofing detector naturally impacts on the performance of the ASV system; it will influence not just the false alarm rate, but also the miss rate~\cite{Sahid2016-integrated}, meaning that it will impact on reliability and usability. Accordingly, there is no guarantee that a better-performing countermeasure (lower EER) will deliver more reliable ASV performance. In summary, with progress in anti-spoofing research continuing at a pace, metrics must evolve to reflect the performance of the system {\it as a whole}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.47\textwidth]{DFC_cascade_parallel_v4.pdf}
\caption{This work addresses performance assessment of a combined system consisting of an \emph{automatic speaker verification} (ASV) module and a `plug-and-play' \emph{spoofing countermeasure} (CM) that are combined either (i) CM followed by ASV, (ii) ASV followed by CM, or (iii) in parallel. The combined system is subjected to benchmarking using speech utterances from three different types of users: targets, nontargets and attackers.}
\label{fig:CM-ASV_systems}
\end{figure}
\begin{table*}[!htb]\caption{Possible joint actions in a parallel integration of countermeasure (CM) and automatic speaker verification (ASV) and their associated false rejection (miss) and false acceptance rates. See Fig. \ref{fig:CM-ASV_systems} for the explanation of the three different types of systems.}\label{tab:all-system-actions}
\vspace{-0.5cm}
\begin{center}
\begin{tabular}{|c|c|ccc|}
\hline
& & \multicolumn{3}{c|}{\textbf{Type of trial} (prior probability)}\\
& & \textbf{Target} & \textbf{Nontarget} & \textbf{Spoof}\\
\textbf{System} & \textbf{(CM action, ASV action)} & $(\pi_\mathrm{tar})$ & $(\pi_\mathrm{non})$ & $(\pi_\mathrm{spoof})$\\
\hline
\multirow{3}{*}{(i)} & (\texttt{ACCEPT}, \texttt{REJECT}) & \textbf{(a)} miss & OK & OK\\
& (\texttt{ACCEPT}, \texttt{ACCEPT}) & OK & \textbf{(b)} false accept & \textbf{(c)} false accept \\
& (\texttt{REJECT}, \texttt{SLEEP}) & \textbf{(d)} miss & OK & OK\\
\hline
\multirow{3}{*}{(ii)} & (\texttt{SLEEP}, \texttt{REJECT}) & miss & OK & OK\\
& (\texttt{ACCEPT}, \texttt{ACCEPT}) & OK & false accept &false accept \\
& (\texttt{REJECT}, \texttt{ACCEPT}) & miss & OK & OK\\
\hline
\multirow{4}{*}{(iii)} & (\texttt{ACCEPT}, \texttt{REJECT}) & miss & OK & OK\\
& (\texttt{ACCEPT}, \texttt{ACCEPT}) & OK & false accept & false accept \\
& (\texttt{REJECT}, \texttt{REJECT}) & miss & OK & OK\\
& (\texttt{REJECT}, \texttt{ACCEPT}) & miss & OK & OK\\
\hline
\end{tabular}
\end{center}
\end{table*}
Ideally, such a new metric would bridge the gap between the anti-spoofing and ASV communities while maintaining support for countermeasure research in isolation from ASV; even if the goal of improving ASV reliability is common to both, spoofing countermeasure and ASV sub-systems still have different specific goals. Such a new metric should, however, reflect the impact of spoofing countermeasures on subsequent verification with intuitive, interpretable results, providing for the reliable ranking of competing countermeasure solutions. Such a new metric should also remain independent of the form of spoofing attack (\emph{e.g.}\ replay, voice conversion, speech synthesis).
There is one additional requirement in that such a metric should reflect the impact of spoofing countermeasures in a Bayes sense. {\it Not all spoofing attacks are equal}. Let us imagine a `poor' spoofing attack which closely resembles a zero-effort impostor attack. Such an attack would resemble high quality, natural speech and would likely be missed by a spoofing countermeasure. Assuming an ASV system of high quality, such an attack will ultimately fail since the trial does not resemble the target speaker. In this sense, that the spoofing countermeasure misses the attack implies little cost. Conversely, a high quality spoofing attack which fools the ASV system with near certainty implies a high cost should it be missed by the spoofing countermeasure. An improved metric should therefore reflect the cost of decisions in a Bayes / minimum risk sense.
A solution which satisfies all of these requirements can be derived from the \emph{detection cost function} (DCF) framework~\cite{BRUMMER2006230} endorsed since 1996 by the National Institute of Standards and Technology (NIST) within the scope of the speaker recognition evaluation (SRE) campaigns \cite{Doddington2000-NIST-overview}. The adoption of standard corpora and DCF metric as \emph{the} primary means of unbiased assessment of ASV performance has been instrumental to progress in the field. Key to the DCF is the specification of \emph{costs} for missing target users and falsely accepting impostors (nontarget) in addition to the \emph{prior probabilities} of each.
Costs specify a loss in money, reputation, user dissatisfaction or other similar consequences upon the making of incorrect decisions. The specification of costs and priors tailors the DCF metric towards the development of ASV technologies for a range of different applications. The costs and priors could indeed be very different in surveillance and forensics compared to authentication applications, such as e-banking or home control. The costs and priors have varied across the different NIST SRE campaigns but the underlying DCF framework has remained the same. The NIST SREs have focused on applications with \emph{low target user priors}, reflective of surveillance or speaker indexing applications.
Despite its generality, and for two reasons, the NIST DCF is not readily applicable to scenarios that involve spoofing attacks. First, there is a need to augment the user set (targets and nontargets) with an additional \emph{spoofing impostor} set. Spoofing impostors are neither targets nor nontargets (zero-effort impostors); they require specific treatment. Second, the standard DCF is designed for the assessment of a \emph{single} ASV system, whereas this paper is concerned with the assessment of ASV systems that are combined with spoofing countermeasures (CM) (Fig.~\ref{fig:CM-ASV_systems}).
Each system addresses \emph{different} detection tasks and thus it is necessary to determine how their individual detection error rates combine upon the decisions made by both systems in the face of each
user type (Table \ref{tab:all-system-actions}).
This is the goal of the proposed \emph{tandem detection cost function} (t-DCF). It is a generalisation of the standard NIST DCF under the same risk analysis framework that supports the evaluation of combined ASV and spoofing countermeasures.
The study reported in this paper is intended to serve as a self-contained tutorial-like presentation including a treatment of the traditional DCF. In order to investigate the merit of the new t-DCF, we examine differences in the ranking of systems submitted to both ASVspoof challenge editions when the ranking is determined using (i)~the performance of spoofing countermeasures assessed in isolation using the original EER metric, and (ii)~the proposed DCF-based approach which reflects the performance of spoofing countermeasures combined with a common ASV system. If the differences in ranking are shown to be negligible, then the current approach to isolated countermeasure assessment may be satisfactory. In contrast, pronounced differences between rankings would support adoption of the proposed DCF-based approach into the roadmap for future ASVspoof challenges.
\section{Automatic speaker verification, spoofing countermeasures and their combination}
This section describes the functions of automatic speaker verification (ASV) and spoofing countermeasure (CM) systems in addition to the manner in which they can be combined.
\subsection{Problem formulation}
ASV systems aim to verify the correspondence between speakers in two different speech utterances. The first forms the \emph{enrollment} utterance and is processed to form a speaker model, whereas the second is provided during testing in the form of a \emph{trial}. As illustrated in Fig.~\ref{fig:CM-ASV_systems}, three different trials may be encountered: (1)~\emph{target}, (2)~\emph{nontarget} and (3)~\emph{spoofing impostor}. Only target trials should be positively verified. Both forms of impostor trial should be rejected.
While nontargets and spoofing impostors may be grouped together into one class, there are reasons to consider three distinct classes. ASV systems are generally designed to distinguish only between target trials (class~1) and nontarget trials~(class~2). They have either limited or no capability to reject spoofing impostor trials~(class~3), which may closely resemble target trials. In this sense, the ASV system can only discriminate between target trials~(classes~1 and~3) and nontarget trials~(2). In contrast, CM systems are designed to distinguish bona fide speech~(classes~1 and~2) from spoofed speech~(3). Herein lies the need for three classes, which stems from the different, complementary actions of \emph{separate} CM and ASV systems.
While previous work has shown the potential to combine the action of CM and ASV systems in the form of a single system~\cite{Sizov2015-tifs}, separating CM and ASV systems has the potential for the \emph{explicit} detection of spoofing attacks. The paper considers three such architectures, illustrated in Fig.~\ref{fig:CM-ASV_systems} and described in further detail below.
\subsection{ASV and CM systems}
The ASV system operates on a pair of speech utterances, $\mathcal{X}=(\mathcal{X}_\text{train},\mathcal{X}_\text{test})$ where $\mathcal{X}_\text{train}$ is a training, or \emph{enrollment} utterance associated with a known speaker identity and where $\mathcal{X}_\text{test}$ is the test or trial utterance. Utterances can be presented as raw waveforms, sequences of spectral features, i-vectors, Gaussian mixture models or other similar descriptors. The ASV system outputs a \emph{detection score} (often, a log-likelihood ratio), denoted here by $r \in \mathbb{R}$, associated with the strength of two opposing hypotheses, namely the target (null) hypothesis (utterances $\mathcal{X}_\text{train}$ and $\mathcal{X}_\text{test}$ were produced by the same speaker) and the nontarget (alternative) hypothesis (different speakers). Higher score values indicate stronger support for the target hypothesis. Hard decisions are made upon the comparison of scores $r$ to a threshold $t$: if $r > t$, then the target hypothesis is accepted. Otherwise, the nontarget hypothesis is accepted.
The CM operates in a similar manner, but with different models and hypotheses. Whereas the ASV system requires the learning of one model \emph{per speaker}, CMs generally require the learning of only two models. Extending the previous notation $\mathcal{X}=(\mathcal{X}_\text{train},\mathcal{X}_\text{test})$, $\mathcal{X}_\text{train}$ now consists of a (potentially very large) \emph{set} of utterances corresponding to either \emph{bona fide} or \emph{spoofed} speech,
whereas $\mathcal{X}_\text{test}$ still represents a single test or trial.
The hypotheses are now that the trial corresponds to either a bona fide (null) hypothesis or spoofed (alternative) hypothesis.
The CM output score, denoted by $q \in \mathbb{R}$,
is now interpreted as the support for the bona fide hypothesis.
Hard CM decisions are then made upon the comparison of $q$ to a CM-specific threshold $s$: if $q>s$ then the bona fide hypothesis is accepted. Otherwise, the spoofed hypothesis is accepted.
\subsection{System combination}\label{sec:system-combo}
The different ways in which \emph{separate} ASV and CM systems can be combined is illustrated in Fig.~\ref{fig:CM-ASV_systems}. They encompass either \emph{cascaded} or \emph{parallel} combinations~\cite{Sahid2016-integrated}.
ASV and CM systems can be cascaded in either order.
In this case the CM acts as a gate and will reject immediately trials which are detected as spoofing attacks, saving redundant processing by ASV.
Likewise, the ASV could act as a gate, saving redundant processing by the CM.
Alternatively, ASV and CM systems can work in parallel whereby trials are only accepted upon the positive decisions of both sub-systems.
The work presented in this paper provides a means of assessing the reliability of such combined systems, whatever the approach to combination.
The combined system selects an action $\vec{\alpha}=(\alpha^\text{cm},\alpha^\text{asv}) \in\mathcal{A}\times\mathcal{A}$ from the set of possible joint actions of the two detectors. Here, an \emph{action} implies a hard classification decision, each of which is associated with a cost incurred if the decision is incorrect. For a given trial, each system (ASV and CM) selects one of the actions from the set:
\begin{align}
\mathcal{A} & =\{\texttt{ACCEPT},\texttt{REJECT},\texttt{SLEEP}\}\nonumber
\end{align}
where the `dummy' \texttt{SLEEP} action indicates a trial that, as a result of cascaded combination, is not processed by the ASV or CM sub-systems.
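For concreteness, the decision logic of the three combinations in Fig.~\ref{fig:CM-ASV_systems} can be sketched as follows (a minimal Python sketch of ours, not taken from any evaluation toolkit; the function names and the string encoding of actions are hypothetical). Each function returns the (CM action, ASV action) pair for a single trial with CM score $q$, ASV score $r$ and thresholds $s$ and $t$:
\begin{verbatim}
def cm_then_asv(q, r, s, t):
    """(i) Cascade: the CM gates the ASV."""
    if q <= s:                      # CM rejects; ASV never runs
        return ("REJECT", "SLEEP")
    return ("ACCEPT", "ACCEPT" if r > t else "REJECT")

def asv_then_cm(q, r, s, t):
    """(ii) Cascade: the ASV gates the CM."""
    if r <= t:                      # ASV rejects; CM never runs
        return ("SLEEP", "REJECT")
    return ("ACCEPT" if q > s else "REJECT", "ACCEPT")

def parallel(q, r, s, t):
    """(iii) Parallel: accept only if both sub-systems accept."""
    return ("ACCEPT" if q > s else "REJECT",
            "ACCEPT" if r > t else "REJECT")
\end{verbatim}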
Given the set of joint actions, $\mathcal{A}\times\mathcal{A}$, there are nine possible action pairs. It is evident from Fig.~\ref{fig:CM-ASV_systems}, though, that six action pairs are sufficient to describe the cascaded and parallel combinations:
\begin{align}
\vec{\alpha}_1 & = (\texttt{ACCEPT}\, ,\texttt{ REJECT})\nonumber\\
\vec{\alpha}_2 & = (\texttt{ACCEPT}\, ,\texttt{ ACCEPT})\nonumber\\
\vec{\alpha}_3 & = (\texttt{REJECT}\, ,\texttt{ REJECT})\nonumber\\
\vec{\alpha}_4 & = (\texttt{REJECT}\, ,\texttt{ ACCEPT})\nonumber\\
\vec{\alpha}_5 & = (\texttt{REJECT}\, ,\texttt{ SLEEP})\nonumber\\
\vec{\alpha}_6 & = (\texttt{SLEEP }\, ,\texttt{ REJECT}),\nonumber
\end{align}
the last two of which are specific to cascaded configurations. These same six action pairs are illustrated in Table~\ref{tab:all-system-actions} with the
errors that may result from each. Action pair $\vec{\alpha}_2$ is the only pair that may lead to false acceptance errors. The others may lead to false rejection errors (misses). These error rates constitute the basic elements for computing the detection cost which is the subject of the next section.
The tandem detection cost function (t-DCF) proposed in this paper is a single scalar that reflects the reliability of decisions made by the combined ASV and CM system.
It is based upon the combination of detection error rates for the individual systems, taking into account the action $\vec{\alpha}_i$ assigned to a representative number of different trial types (see Table~\ref{tab:all-system-actions}). Before describing the t-DCF metric,
we review the standard detection cost function and its application to ASV and CM systems on their own.
\section{ASV and CM error rates}\label{sec:counting-errors}
The basic set-up is as follows. As evaluators, we are given a combined system $\mathcal{S}=(\text{ASV}, \text{CM})$ composed of a pair of ASV and CM systems combined using one of the three approaches illustrated in Fig.~\ref{fig:CM-ASV_systems}. We do not have access to the systems themselves --- only their output scores $(r_i,q_i) \in \mathbb{R}^2, i=1,2\dots,N$ in response to a set of $N$ evaluation trials defined by us. We have a total of $N_\text{tar}$ target, $N_\text{non}$ nontarget and $N_\text{spoof}$ spoof trials. They are mutually exclusive, so $N=N_\text{tar}+N_\text{non}+N_\text{spoof}$. Even if we use the paired notation $(r_i,q_i)$, we compute the errors related to ASV and CM independently of each other. Thus, in principle, the ASV scores $\{r_i\}$ and the CM scores $\{q_i\}$ could originate from a different set of evaluation trials (though usually we use the same test files).
For generality, in the following subsections, we write the detection error rates of each system as functions of their respective detection thresholds ($t$ for ASV, $s$ for CM), even if one has to fix them in an actual authentication application.
\subsection{Detection error rates of ASV}\label{subsec:asv-errors}
We are now in a position to define the miss (or false rejection) rate and the false alarm (or false acceptance) rate of the ASV system at threshold $t$:
\begin{equation}\label{eq:asv-detection-errors}
\begin{aligned}
P_\text{miss}^\text{asv}(t) & \ensuremath{\triangleq} \int_{-\infty}^t p(r|\text{tar})\, \text{d}r \approx \frac{1}{N_\text{tar}} \sum_{i \in \Lambda_\text{tar}} \mathbb{I}\{r_i \leq t \}\\
P_\text{fa}^\text{asv}(t) & \ensuremath{\triangleq} \int_{t}^\infty p(r|\text{non})\, \text{d}r \approx \frac{1}{N_\text{non}}\sum_{i \in \Lambda_\text{non}} \mathbb{I}\{r_i > t\},
\end{aligned}
\end{equation}
where $p(r|\cdot)$ denotes the underlying continuous class-conditional score density, and where $\approx$ signifies that we estimate the error rates from a finite sample by counting, using the sums shown at the end of each equation. Here, $\mathbb{I}$ is an indicator function, while $\Lambda_\text{tar}$ and $\Lambda_\text{non}$ index the target and nontarget trials. The miss rate is the proportion of target trials that were falsely rejected, and the false alarm rate is the proportion of nontarget trials that were falsely accepted.
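As a concrete illustration, the counting estimates in \eqref{eq:asv-detection-errors} amount to a few lines of code (a sketch of ours; the array names are hypothetical):
\begin{verbatim}
import numpy as np

def asv_error_rates(r_tar, r_non, t):
    # r_tar, r_non: ASV scores of target / nontarget trials
    p_miss = np.mean(r_tar <= t)   # targets falsely rejected
    p_fa   = np.mean(r_non >  t)   # nontargets falsely accepted
    return p_miss, p_fa
\end{verbatim}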
The casual reader might be puzzled as to why we define the false alarm rate considering nontargets only, rather than the pooled (mixture) distribution of nontarget and spoof scores --- after all, are those not the trials whose false acceptances we are concerned with? The reason, as mentioned earlier, is that the spoof samples in fact resemble the target samples far more than the nontarget samples: they should be treated as having score distribution characteristics closer to $p(r|\text{tar})$ than to $p(r|\text{non})$. If this were not the case, the spoofed test samples would not be very interesting ones: the unprotected ASV system would reject them, and we would be back to the conventional ASV set-up.
In a \textbf{worst case attack scenario} with extremely high quality spoofing attacks\footnote{Such as artificial speech attacks produced by state-of-the-art speech synthesis, or high-end loudspeaker anechoic room replay attacks that the authors introduced in the ASVspoof 2017 challenge.}, we set $p(r|\text{spoof})=p(r|\text{tar})$. In this case the miss rate of the ASV system for genuine target speakers is the same as its miss rate for spoof tests. As an example, consider a high-accuracy ASV system with a target speaker miss rate of 1\%. Under the worst-case assumption, this is also the miss rate of the spoof tests (``ASV did not miss the spoof sample'') --- implying that 99\% of the spoofs were, in fact, accepted by the ASV system as target trials. The validity of the worst-case assumption depends both on the ASV system and the evaluation corpus.
One benefit of the worst-case assumption is simplicity: our proposed tandem DCF can be computed using the `traditional' miss and false alarm rates alone --- that is, the ASV system itself does not need to be tested with the spoof trials. When the worst case assumption does \emph{not} hold, we measure the empirical miss rate of spoof trials against the ASV system. Specifically, we compute the probability of the event that a \emph{spoof test was \underline{not} missed by the ASV system}, as $1 - P_\text{miss,spoof}^\text{asv}$, where
\begin{equation}\label{eq:puzzling-equation}
\begin{aligned}
P_\text{miss,spoof}^\text{asv}(t) & \ensuremath{\triangleq} \int_{-\infty}^t p(r|\text{spoof})\, \text{d}r\\
& \approx \frac{1}{N_\text{spoof}} \sum_{i \in \Lambda_\text{spoof}} \mathbb{I}\{r_i \leq t \},\\
\end{aligned}
\end{equation}
and where $\Lambda_\text{spoof}$ indexes the spoof trials. Hence, \eqref{eq:puzzling-equation} counts the fraction of spoofing trials below the detection threshold --- that is, the fraction of spoofing trials that were correctly rejected. Then, the `not missed' case,
$1 - P_\text{miss,spoof}^\text{asv}(t)$, counts the proportion of spoofing trials that were falsely accepted by the ASV. Note that we treat the spoofs as the positive class --- spoof trials replace the target speaker trials when computing ASV-specific detection error rates --- and therefore we define the false acceptance of spoofs as the complement of missing them; a false acceptance rate is undefined for the positive class.
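The corresponding empirical estimate of \eqref{eq:puzzling-equation}, with spoofs in the role of the positive class, is analogous (again a sketch of ours, reusing the conventions above):
\begin{verbatim}
def asv_spoof_miss_rate(r_spoof, t):
    # fraction of spoof trials (correctly) rejected by the ASV;
    # 1 - asv_spoof_miss_rate(r_spoof, t) is the fraction accepted
    return np.mean(r_spoof <= t)
\end{verbatim}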
\subsection{Detection error rates of CM}
The task of a CM is to differentiate human samples from spoofs. In this respect, the targets and nontargets are taken to be in one positive `human' class of bona fide speech while the spoofs represent the negative class. We assume $p(q|\text{hum})=p(q|\text{tar})=p(q|\text{non})$, where $q$ denotes the countermeasure score and `hum' stands for human. Therefore,
\begin{equation} \label{eq:miss_rate_spoof}
\begin{aligned}
P_\text{miss}^\text{cm}(s) & \ensuremath{\triangleq} \int_{-\infty}^s p(q|\text{hum})\, \text{d}q \approx \frac{1}{N_\text{hum}} \sum_{j \in \Lambda_\text{hum}} \mathbb{I}\{q_j \leq s \}\\
P_\text{fa}^\text{cm}(s) & \ensuremath{\triangleq} \int_{s}^\infty p(q|\text{spoof})\, \text{d}q \approx \frac{1}{N_\text{spoof}}\sum_{j \in \Lambda_\text{spoof}} \mathbb{I}\{q_j > s\},
\end{aligned}
\end{equation}
where $\Lambda_\text{hum}=\Lambda_\text{tar}\,\cup \,\Lambda_\text{non}$ indexes the human trials, $\Lambda_\text{spoof}$ indexes the spoof trials, and $N_\text{hum}=N_\text{tar}+N_\text{non}$.
\subsection{Equal error rate (EER)}
Since the miss and false alarm rates of a given system are, respectively, increasing and decreasing functions of the detection threshold, there exists a unique error rate at which the two equal each other. This is the well-known \emph{equal error rate} (EER). Technically, for a finite detection score set, the EER does not exist. It may nonetheless be estimated using interpolation techniques; we point the interested reader to~\cite[p. 85]{Brummer2010-PhD} for further details.
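For reference, one simple interpolation strategy is sketched below (ours, not a standard reference implementation; more careful estimators are discussed in~\cite{Brummer2010-PhD}):
\begin{verbatim}
def eer(scores_pos, scores_neg):
    # crude interpolated EER: evaluate miss / false-alarm rates at
    # all observed scores and average the two at their crossing
    thr = np.sort(np.concatenate([scores_pos, scores_neg]))
    p_miss = np.array([np.mean(scores_pos <= t) for t in thr])
    p_fa   = np.array([np.mean(scores_neg >  t) for t in thr])
    i = np.argmin(np.abs(p_miss - p_fa))
    return 0.5 * (p_miss[i] + p_fa[i])
\end{verbatim}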
\section{Detection costs: background}\label{sec:detection-cost-background}
\subsection{Bayes minimum risk}
In \emph{Bayes' minimum risk classification}, one makes predictions of the class label and picks a class that leads to the least risky choice. Consider an \emph{action set}, denoted $\mathcal{A}=\{\alpha_1,\dots,\alpha_L\}$, which represents the decisions made by a classification system. Further, a \emph{proposition set}, $\vec{\Theta}=\{\theta_1,\dots,\theta_M\}$ represents the actual states of nature (ground truth or class label). Note that $L$ and $M$ do not have to be equal. Selecting an action $\alpha \in \mathcal{A}$ has a \emph{consequence}. We assign a nonnegative \emph{cost} $C(\vec{\alpha}|\theta) \in \mathbb{R}^+$ for taking action $\vec{\alpha}$ when the proposition $\theta \in \vec{\Theta}$ is actually true. For correct actions we assign a cost of 0 without loss of generality. In our context, the action means taking a decision (choosing the ASV and CM actions) for a single test trial, and the proposition, or class label, contains the actual type of the user in that trial. The cost can be thought of as a class-specific unit cost for a mistake made by the classification system; such as an amount of money that a bank loses if a legitimate customer is rejected, or an intruder is accepted, with possibly much higher cost for the latter. The evaluator chooses these costs before obtaining any ASV or CM detection scores.
Consider some fixed operating point(s) and let $P_\text{err}(\theta)$ denote the class-conditional error probability of a given detection system for class $\theta$. The detection system could be either ASV, CM or one of the combined systems in Fig.~\ref{fig:CM-ASV_systems}. In the case of standalone systems, $P_\text{err}(\theta)$ would be one of the miss or false alarm rates discussed in the previous sections; in the case of the combined systems, computation of $P_\text{err}(\theta)$ involves combining error probabilities from the two systems that will be detailed below. Now, since $P_\text{err}(\theta)$ just counts the (normalized) errors for class $\theta$, each one of which has a unit cost $C(\vec{\alpha}|\theta)$ upon taking action $\alpha$, the total accumulated cost is simply $C(\vec{\alpha}|\theta) P_\text{err}(\theta)$.
The last ingredient in completing the basic DCF formulation is to choose a \emph{prior}, $\vec{\pi} \in \mathbb{P}_M$, over the propositions. Here $\pi_i=P(\theta_i)$ and $\mathbb{P}_M \ensuremath{\triangleq} \{(\pi_1,\dots,\pi_M)|\pi_i \geq 0,\,\,\sum_i \pi_i = 1\}$ is a probability simplex. The prior sets one's expectation of how often each one of the propositions is true (\emph{i.e.} how frequent the target and nontarget users might be). The priors can, but do not have to, match the empirical trial proportions in the evaluation corpus. The system vendor does not have access to the true proportions.
Under the previous assumptions, the expected (or average) cost for taking a specific action $\alpha$ is
\begin{equation}
\text{DCF}(\vec{\alpha}_j) = \sum_{i=1}^M \pi_i C(\vec{\alpha}_j|\theta_i)P_\text{err}(\vec{\alpha}_j|\theta_i).
\end{equation}
The total expected cost, that we will refer to as the \emph{detection cost function} (DCF) is then the total cost obtained by summing the action-specific costs
\begin{equation}\label{eq:general-dcf-recipe}
\text{DCF} = \sum_{j=1}^L \text{DCF}(\vec{\alpha}_j)=\sum_{j=1}^L \sum_{i=1}^M \pi_i C(\vec{\alpha}_j|\theta_i)P_\text{err}(\vec{\alpha}_j|\theta_i).
\end{equation}
Note that the error $P_\text{err}(\vec{\alpha}_j|\theta_i)$, which could be a miss or false acceptance, depends on the action and the correct class.
\subsection{NIST DCF}
In the conventional ASV without spoofing considerations, we have target and nontarget trials and our ASV system either accepts or rejects the user. Therefore we have $\vec{\Theta}=\{\vec{\theta}_\text{tar},\vec{\theta}_\text{non}\}$ and $\mathcal{A}=\{\texttt{ACCEPT},\texttt{REJECT}\}$ and, coincidentally, $|\vec{\Theta}|=|\mathcal{A}|$. Choosing the decision regions (in our case, setting the ASV decision threshold $t$ in a 1-dimensional detection score space) defines the actions of the classifier. Specifically, the $\texttt{REJECT}$ action corresponds to the region $(-\infty, t]$ and its complement $\texttt{ACCEPT}$ to the region $(t, \infty)$. Therefore, the conditional error probabilities at operating point $t$ are $P_\text{err}(\texttt{REJECT}|\theta_\text{tar})=P(r \leq t|\theta_\text{tar})=P_\text{miss}^\text{asv}(t)$ and $P_\text{err}(\texttt{ACCEPT}|\theta_\text{non})=P(r>t|\theta_\text{non})=P_\text{fa}^\text{asv}(t)$.
Since we have only two types of trial users, it is sufficient to specify only the target prior $\pi_\text{tar}$; the nontarget prior is then $\pi_\text{non}=1-\pi_\text{tar}$. Further, let us use more convenient notations $C_\text{miss}=C(\texttt{REJECT}|\theta_\text{tar})$, $C_\text{fa}=C(\texttt{ACCEPT}|\theta_\text{non})$ to denote the two costs. Substituting all these ingredients into \eqref{eq:general-dcf-recipe} gives finally the more familiar DCF form used extensively in the technology benchmarks coordinated by National Institute of Standards and Technology (NIST) \cite{Doddington2000-NIST-overview}:
\begin{equation}\label{eq:NIST-DCF}
\text{DCF}(t) = C_\text{miss}\pi_\text{tar}P_\text{miss}^\text{asv}(t) + C_\text{fa}(1 - \pi_\text{tar})P_\text{fa}^\text{asv}(t),
\end{equation}
which we will refer to as the NIST DCF. Once we fix the DCF parameters $(C_\text{miss},C_\text{fa},\pi_\text{tar})$ and the operating point (threshold) $t$, the DCF provides a single number that measures the performance of the evaluated ASV system in the sense explained above. Choosing the cost parameters defines an \emph{application} \cite{BRUMMER2006230} of interest. We note also that even if the above cost has three parameters, they can be collapsed into a single cost parameter known as the \emph{effective prior} \cite[p. 75]{Brummer2010-PhD} without loss of generality regarding ranking of system performance.
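In code, \eqref{eq:NIST-DCF} is a direct weighted sum of the two error rates (a sketch of ours, reusing \texttt{asv\_error\_rates} from the sketch above; the parameter values shown are merely illustrative):
\begin{verbatim}
def nist_dcf(r_tar, r_non, t, c_miss=1.0, c_fa=10.0, pi_tar=0.99):
    p_miss, p_fa = asv_error_rates(r_tar, r_non, t)
    return c_miss * pi_tar * p_miss + c_fa * (1 - pi_tar) * p_fa
\end{verbatim}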
\section{Proposed t-DCF}\label{sec:proposed-tDCF}
With the relevant theory background covered above, it is now straightforward to extend the NIST DCF to evaluation scenarios that involve spoofing. Now the action set $\mathcal{A}=\{\vec{\alpha}_1,\dots,\vec{\alpha}_6\}$ consists of the six possible (ASV, CM) joint actions defined in subsection \ref{sec:system-combo}, while the proposition set expands to $\vec{\Theta}=\{\theta_\text{tar},\theta_\text{non},\theta_\text{spoof}\}$ with a prior $(\pi_\text{tar},\pi_\text{non},\pi_\text{spoof}) \in \mathbb{P}_3$. Note that now $|\vec{\Theta}|\neq |\mathcal{A}|$. As for the detection costs, since we have two detection systems, each with two possible outcomes\footnote{The third dummy action, \texttt{SLEEP}, is dictated by the other decisions.}, we specify four costs:
\begin{itemize}
\item $C_\text{miss}^\text{asv}$ -- cost of ASV system rejecting a target trial.
\item $C_\text{fa}^\text{asv}$ -- cost of ASV system accepting a nontarget trial.
\item $C_\text{miss}^\text{cm}$ -- cost of CM rejecting a human trial.
\item $C_\text{fa}^\text{cm}$ -- cost of CM accepting a spoof trial.
\end{itemize}
What now remains is detailing the computation of the error probabilities. Since the ASV and CM systems work in unison, we must take into account both of their errors. We treat the two systems as being independent and find the joint probability of an event by multiplying the relevant error probabilities of each system. Our formalism is general but for brevity, we focus on the cascaded configuration (i)~of Fig.~\ref{fig:CM-ASV_systems}. Referring to Table~\ref{tab:all-system-actions}, there are four possible errors in total, labeled \textbf{(a)}, \textbf{(b)}, \textbf{(c)} and \textbf{(d)}. The error probabilities are functions of the detection thresholds of the two systems, $s$ for the CM and $t$ for the ASV module.
\begin{enumerate}[label=\textbf{(\alph*)}]
\item CM correctly passes on target speaker utterance to the ASV system, which however misses it, causing a false rejection; the probability for this event is,
\begin{equation*}
P_\text{a}(s,t) \ensuremath{\triangleq} (1 - P^\text{cm}_\text{miss}(s)) \times P^\text{asv}_\text{miss}(t),
\end{equation*}
read as ``CM does \emph{not} miss human speech, and ASV falsely rejects the target.''
\item CM passes on a nontarget which gets accepted by ASV, causing false acceptance; the probability,
\begin{equation*}
P_\text{b}(s,t) \ensuremath{\triangleq} (1 - P^\text{cm}_\text{miss}(s)) \times P^\text{asv}_\text{fa}(t),
\end{equation*}
is read as ``CM does \emph{not} miss human speech, and ASV falsely accepts the nontarget''.
\item CM falsely passes on a spoof sample
which gets falsely accepted by the ASV system. The probability is,
\begin{equation*}
P_\text{c}(s,t) \ensuremath{\triangleq} P^\text{cm}_\text{fa}(s) \times (1 - P^\text{asv}_\text{miss,spoof}(t))
\end{equation*}
read as ``CM falsely passes on a spoof sample, and ASV does \emph{not} miss the spoof'' (we refer the reader back to subsection \ref{subsec:asv-errors}). The miss rate $P^\text{asv}_\text{miss,spoof}(t)$ can be evaluated empirically using \eqref{eq:puzzling-equation} or, in the worst-case spoofing attack scenario, be fixed to the target miss rate $P^\text{asv}_\text{miss}(t)$ defined in \eqref{eq:asv-detection-errors}.
\item CM falsely rejects target speaker utterance as a spoof; the probability is
\begin{equation*}
P_\text{d}(s) = P^\text{cm}_\text{miss}(s)
\end{equation*}
read as ``countermeasure misses human speech.''
\end{enumerate}
\noindent\textbf{Remark.} It is worth noticing that the miss rate \(P_\text{d}(s)\) is made up of two separate error terms:
\begin{equation*}
P^\text{cm}_\text{miss}(s) \times P^\text{asv}_\text{miss}(t)
\end{equation*}
and
\begin{equation*}
P^\text{cm}_\text{miss}(s) \times (1 - P^\text{asv}_\text{miss}(t))
\end{equation*}
that correspond to the (\texttt{REJECT, REJECT}) and (\texttt{REJECT, ACCEPT}) actions, respectively, as shown in Table \ref{tab:all-system-actions}. The miss rate $P^\text{asv}_\text{miss}(t)$ of the ASV system cancels out when the two error terms are summed to form $P_\text{d}(s)$.
We now have all the ingredients defined for our proposal:
\begin{mdframed}[style=MyFrame]\label{algo:define-eval-cond-modified}
\center{\textbf{Tandem detection cost function (t-DCF)}}
\begin{equation}\label{eq:proposed-dcf-modified}
\begin{aligned}
\text{t-DCF}(s,t) & = C_\text{miss}^\text{asv} \cdot \pi_\text{tar} \cdot P_\text{a}(s,t)\\
& + C_\text{fa}^\text{asv} \cdot \pi_\text{non}\cdot P_\text{b}(s,t)\\
& + C_\text{fa}^\text{cm} \cdot \pi_\text{spoof} \cdot P_\text{c}(s,t)\\
& + C_\text{miss}^\text{cm}\cdot \pi_\text{tar}\cdot P_\text{d}(s).
\end{aligned}
\end{equation}
\end{mdframed}
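Computationally, \eqref{eq:proposed-dcf-modified} combines the four component probabilities defined above (a sketch of ours for the CM-followed-by-ASV cascade; under the worst-case attack assumption one simply passes \texttt{p\_miss\_spoof\_asv = p\_miss\_asv}):
\begin{verbatim}
def t_dcf(p_miss_cm, p_fa_cm, p_miss_asv, p_fa_asv, p_miss_spoof_asv,
          pi_tar, pi_non, pi_spoof,
          c_miss_asv=1.0, c_fa_asv=10.0, c_miss_cm=1.0, c_fa_cm=10.0):
    p_a = (1 - p_miss_cm) * p_miss_asv      # (a) target missed by ASV
    p_b = (1 - p_miss_cm) * p_fa_asv        # (b) nontarget accepted
    p_c = p_fa_cm * (1 - p_miss_spoof_asv)  # (c) spoof accepted
    p_d = p_miss_cm                         # (d) target rejected by CM
    return (c_miss_asv * pi_tar   * p_a + c_fa_asv  * pi_non * p_b +
            c_fa_cm    * pi_spoof * p_c + c_miss_cm * pi_tar * p_d)
\end{verbatim}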
\subsection{Properties of t-DCF}
Let us now observe how the t-DCF behaves in a few interesting special cases. For brevity we focus on the CM-ASV tandem system (i)~of Fig.~\ref{fig:CM-ASV_systems}. We assume the worst-case spoofing scenario with identical target and spoof ASV score distributions.
\textbf{An ASV system without any countermeasure.} First, consider a regular, unprotected ASV system. This is equivalent to placing a `dummy' countermeasure that passes on every speech utterance to the ASV back-end, with threshold $s=-\infty$ leading to $P_\text{miss}^\text{cm}(s)=0$ and $P_\text{fa}^\text{cm}(s)=1$. Thus
\@fleqntrue\@mathmargin0pt
\begin{equation*}
\begin{aligned}
\text{t-DCF}_\texttt{ACCEPT-ALL}(t) & = C_\text{miss}^\text{asv} \cdot \pi_\text{tar} \cdot P^\text{asv}_\text{miss}(t)\\
& + C_\text{fa}^\text{asv} \cdot \pi_\text{non} \cdot P^\text{asv}_\text{fa}(t)\\
& + C_\text{fa}^\text{cm} \cdot \pi_\text{spoof} \cdot \left(1 -P_\text{miss}^\text{asv}(t)\right).
\end{aligned}
\end{equation*}
\@fleqnfalse
The first two terms are the errors of the ASV system. The only error contribution of the CM is in the last term which corresponds to passing a spoofed sample to the ASV, which does not miss it. If one further assumes that there are no spoofing attacks ($\pi_\text{spoof}=0$), then the t-DCF collapses to the NIST DCF~\eqref{eq:NIST-DCF}. Thus, the t-DCF can be interpreted as a generalization of NIST DCF to scenarios that involve spoofing attacks with a tandem ASV-CM system designed to cope with all three types of trials.
\textbf{A countermeasure that rejects every input sample.} As another extreme case, consider a countermeasure that rejects every sample before passing it to the ASV system. Now $s=\infty$, $P_\text{miss}^\text{cm}(s)=1$ and $P_\text{fa}^\text{cm}(s)=0$, leading to
\@fleqntrue\@mathmargin0pt
\begin{equation*}
\begin{aligned}
\text{t-DCF}_\texttt{REJECT-ALL} & = C_\text{miss}^\text{cm}\,\pi_\text{tar}.
\end{aligned}
\end{equation*}
\@fleqnfalse
Now, the t-DCF is \emph{constant} in that it does not depend on the ASV system; this is reasonable since the ASV system was never invoked.
\textbf{The perfect countermeasure.} The perfect countermeasure system with an EER of 0\% has $P_\text{miss}^\text{cm}=P_\text{fa}^\text{cm}=0$. The last two terms of \eqref{eq:proposed-dcf-modified}
are zero, thereby giving
\@fleqntrue\@mathmargin0pt
\begin{equation*}
\text{t-DCF}_\texttt{IDEAL-CM}(t) = C_\text{miss}^\text{asv} \cdot \pi_\text{tar} \cdot P^\text{asv}_\text{miss}(t)
+ C_\text{fa}^\text{asv} \cdot \pi_\text{non} \cdot P^\text{asv}_\text{fa}(t).
\end{equation*}
\@fleqnfalse
Were $(1 - \pi_\text{tar}) = \pi_\text{non}$, as in \eqref{eq:NIST-DCF}, the t-DCF would be an exact match to the NIST DCF. The difference is that here the priors do not sum up to one, since the complete space we started with assigns a non-zero probability to spoof trials.
\textbf{The perfect ASV.} Similar to above, consider an ASV system with both detection errors being zero. In this case, the t-DCF becomes
\begin{equation*}
\begin{aligned}
\text{t-DCF}_\texttt{IDEAL-ASV}(s) & = C_\text{miss}^\text{cm} \cdot \pi_\text{tar} \cdot P^\text{cm}_\text{miss}(s)\\
& + C_\text{fa}^\text{cm} \cdot \pi_\text{spoof} \cdot P^\text{cm}_\text{fa}(s),
\end{aligned}
\end{equation*}
which has the same form as the NIST DCF, except that the evaluated system and the costs and priors are those of the CM, not the ASV system. Summarizing the two previous special cases: whenever one of the detectors makes no classification errors, the t-DCF counts the errors of the remaining system.
\subsection{Choosing t-DCF parameters (choosing the application)}\label{sec:parameter-selection}
Now, how should one set the parameters of the t-DCF? Even if the t-DCF formulation applies, in principle, to the evaluation of arbitrary scenarios including surveillance and forensic use cases, this paper considers \textbf{authentication}, to which the problem of spoofing is most relevant.
In answering this question, we consider a hypothetical `banking' scenario. This is a mere example to help illustrate the concepts, rather than a real-world banking scenario based on empirical data. The use of an example is necessary; there is no way to determine the actual frequency of spoofing attacks (if one could really detect and count them, why should one care about spoofing research in the first place?). The best one can do is to \emph{assert} a spoofing prior and assign the other cost parameters arbitrary but reasonable values. In a banking application, $\pi_\text{non} \ll \pi_\text{tar}$ and $\pi_\text{spoof} \ll \pi_\text{tar}$ might be fairly reasonable assumptions, \emph{i.e.}, a bank might process hundreds of thousands of transactions daily, most of which contain a legitimate, bona fide user accessing his or her own phone/e-bank account.
It is of interest to fix as many of the parameters as possible while varying other, more interesting parameters. To this end, the primary variable of interest is the prior of the spoofing attack, $\pi_\text{spoof}$. After asserting $\pi_\text{spoof}$ (for instance 0.001), $\pi_\text{tar}=(1 - \pi_\text{spoof})\times 0.99$ and $\pi_\text{non}=(1 - \pi_\text{spoof})\times 0.01$ are fixed; the priors sum to 1. The multipliers $0.99$ and $0.01$ are arbitrary but representative of a banking application with a high target speaker prior and a low nontarget prior. As for the cost parameters $C_\text{fa}^\text{asv}$, $C_\text{miss}^\text{asv}$, $C_\text{fa}^\text{cm}$, and $C_\text{miss}^\text{cm}$, it is of interest to express these as a \emph{ratio}
since this reflects the desired balance between miss and false alarm rates.
The rejection of bona fide users should incur a cost that reflects user inconvenience. The acceptance of zero-effort impostors and spoofing impostors should incur a higher cost: this reflects losses to the bank incurred as a result of granting fraudsters access to customer bank accounts.
These are competing requirements, however, implying a reasonable balance between the cost ratios. Similar to the typical NIST SREs, we set $C_\text{fa}^\text{asv}/C_\text{miss}^\text{asv}=10$ and $C_\text{fa}^\text{cm}/C_\text{miss}^\text{cm}=10$. In practice, we set $C_\text{fa}^\text{asv} = C_\text{fa}^\text{cm}=10$ and $C_\text{miss}^\text{asv} = C_\text{miss}^\text{cm} = 1$.
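To make the chosen application explicit, this parameterisation can be collected in a small helper (a hypothetical function of ours, compatible with the \texttt{t\_dcf} sketch above):
\begin{verbatim}
def make_tdcf_params(pi_spoof=0.001):
    return dict(pi_tar=(1 - pi_spoof) * 0.99,   # banking: mostly targets
                pi_non=(1 - pi_spoof) * 0.01,
                pi_spoof=pi_spoof,
                c_miss_asv=1.0, c_fa_asv=10.0,  # fa/miss cost ratio = 10
                c_miss_cm=1.0, c_fa_cm=10.0)
\end{verbatim}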
\section{Experimental set-up}
\subsection{ASVspoof 2015 and 2017 corpora}
The two ASV Spoofing and Countermeasures (ASVspoof) corpora originate from the challenges held in 2015 and 2017. The 2015 evaluation focused on the detection of synthetic speech~(SS) and voice conversion~(VC) whereas the 2017 edition focused on the detection of replay attacks. The data-related details of both corpora are reported elsewhere~\cite{Wu2015-asvspoof,Kinnunen2017assessing}; the focus here is on aspects relevant to evaluation.
Participants in both challenges were provided with labeled training and development data, and were asked to submit CM scores $\{q_j\}$ for a set of unlabeled evaluation trials. The performance of submitted countermeasures was then ranked using an EER metric\footnote{An average EER computed across individual tasks was used in 2015, whereas a pooled EER was used in 2017.}.
The 2015 evaluation data contains 9,404 bona fide trials and 184,000 spoofed trials, with the latter comprising 10 different SS and VC attacks (5 known and 5 unknown). The 2017 evaluation data contains 1,298 bona fide and 12,008 spoofed trials comprising diverse replay attacks collected from 161 replay sessions (collected in 57 distinct configurations). For the 2015 data, we select the male ASV trials. For the 2017 data, we exclude replay segments that lack a corresponding speaker enrollment in the original RedDots source corpus. A summary of trial statistics for the 2015 and 2017 evaluation partitions is presented in Table~\ref{tab:ASV15_17-trial-summary}.
While the focus of the evaluation itself was on the development of spoofing CMs, both corpora are accompanied with protocols for ASV assessment. These have been used previously in order to gauge ASV vulnerabilities to each form of spoofing attack (and hence to demonstrate the need for spoofing CMs). Table~\ref{tab:ASV15_17-trial-summary} illustrates the number of genuine trials, zero-effort impostor and spoofing attack trials for the respective evaluation partitions. Note that the ASVspoof 2015 corpus is used for a text-independent ASV task with short utterances, while ASVspoof 2017 targets a text-dependent scenario.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.4}
\begin{footnotesize}
\caption{Number of trials in the ASVspoof 2015 and ASVspoof 2017 evaluation protocols for ASV experiments.}
\label{tab:ASV15_17-trial-summary}
\centerline{
\begin{tabular}{c|c|c|}
\hline
\multicolumn{1}{|c|}{Trial Type} & ASVspoof 2015 & ASVspoof 2017 \\
\hline \hline
\multicolumn{1}{|c|}{Target} & 4053 & 1106 \\
\hline
\multicolumn{1}{|c|}{Nontarget} & 77007 & 18624 \\
\hline
\multicolumn{1}{|c|}{Spoof} & 80000 & 10878 \\
\hline
\end{tabular}
}
\end{footnotesize}
\end{table}
\subsection{ASV systems}
All ASV experiments are performed with a common Gaussian mixture model--universal background model (GMM-UBM)~\cite{Reynolds2000-gmm-ubm} framework using a Mel-frequency cepstral coefficient (MFCC) front-end. Pre-emphasized speech is processed with 20~ms frames every 10~ms. The power spectrum is obtained using a windowed discrete Fourier transform (DFT); 19 static MFCCs (excluding the 0-th coefficient) are then extracted using the discrete cosine transform (DCT) of 20 log-power, Mel-scaled filterbank outputs. RASTA filtering is applied before delta and delta-delta computation, resulting in 57 features per frame. Energy-based speech activity detection (SAD) is applied in order to discard non-speech frames. Cepstral mean and variance normalization (CMVN) is then applied at the utterance level. For ASVspoof 2015, we retain the energy coefficient and skip both SAD and CMVN. The UBM has 512 Gaussians and is trained on the TIMIT corpus\footnote{\url{https://catalog.ldc.upenn.edu/LDC93S1}} with an expectation-maximization algorithm. Speaker models are obtained through maximum a posteriori adaptation.
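A rough sketch of this front-end is given below (ours, not the actual experimental code; RASTA filtering and SAD are omitted for brevity, and details such as windowing may differ from our implementation):
\begin{verbatim}
import librosa
import numpy as np

def mfcc_features(wav_path, sr=16000):
    y, _ = librosa.load(wav_path, sr=sr)
    y = np.append(y[0], y[1:] - 0.97 * y[:-1])   # pre-emphasis
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20, n_mels=20,
                             win_length=int(0.020 * sr),    # 20 ms frames
                             hop_length=int(0.010 * sr))[1:]  # drop c0
    d1 = librosa.feature.delta(m)                # deltas
    d2 = librosa.feature.delta(m, order=2)       # delta-deltas
    return np.vstack([m, d1, d2])                # 57 features per frame
\end{verbatim}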
\section{Results}
Reported here are results for the top-10 performing submissions to the two ASVspoof challenges, assessed using both the default EER metric and the t-DCF metric proposed in this paper. We keep our ASV system fixed and compare the performance of the different CMs. Specifically, we carry out linear calibration of the ASV scores according to the ASV-specific parameters $\pi_\text{tar}$, $C_\text{fa}^\text{asv}$ and $C_\text{miss}^\text{asv}$, and threshold the ASV scores at $t=0$ to obtain the ASV miss and false alarm rates. We then report the \emph{minimum} t-DCF of a given CM system, $\min_{s} \{\text{t-DCF}(s,t=0)\}$, obtained by sweeping the CM threshold to find the minimum achievable t-DCF of that system.
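This minimum amounts to a one-dimensional sweep over the CM threshold (a sketch of ours, reusing the \texttt{t\_dcf} and \texttt{make\_tdcf\_params} helpers sketched earlier; \texttt{q\_hum} and \texttt{q\_spoof} denote the CM scores of human and spoof trials):
\begin{verbatim}
def min_tdcf(q_hum, q_spoof, p_miss_asv, p_fa_asv,
             p_miss_spoof_asv, prm):
    best = np.inf
    for s in np.sort(np.concatenate([q_hum, q_spoof])):
        p_miss_cm = np.mean(q_hum <= s)     # humans rejected by CM
        p_fa_cm   = np.mean(q_spoof > s)    # spoofs accepted by CM
        best = min(best, t_dcf(p_miss_cm, p_fa_cm, p_miss_asv,
                               p_fa_asv, p_miss_spoof_asv, **prm))
    return best

# e.g. prm = make_tdcf_params(pi_spoof=0.01)
\end{verbatim}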
Results are illustrated in Table~\ref{tab:tdcfResults} for ASVspoof 2015 (left) and ASVspoof 2017 (right). Systems are ranked according to EER-derived results presented in the second column of each half of the table. t-DCF-derived results appear in columns 3, 4 and 5 for spoofing attack priors $\pi_\text{spoof}=$0.001, 0.01, and 0.05 respectively. In addition, the first two rows show the special cases of the traditional, unprotected ASV system and the perfect CM for reference purposes. The former is to show the general improvement when the CM module is combined with ASV, while the latter indicates the best achievable performance for the ASV system.
As expected, all the CMs for both corpora provide a substantial boost over the \emph{no CM} case. While for low values of $\pi_\text{spoof}$ there is little to choose between the performance of each system, differences are more pronounced for higher priors. There are also differences in ranking, had this been performed according to the t-DCF instead of the EER. For ASVspoof 2015, system B is the best performing no matter what the prior. System S01 remains the best performing for ASVspoof 2017, even if ranking differences are still observed elsewhere. Finally, there is also a clear margin between the obtained t-DCF scores and the best achievable results (perfect CM). For the ASVspoof 2015 data, the best system (B) nonetheless gets very close (0.1661) to the optimum (0.1660) for the lowest spoof prior.
Ranking differences serve to show the importance of assessing CM performance, not in isolation, but \emph{combined} with ASV. These findings support the adoption of the t-DCF into the roadmap for future ASVspoof challenges. It is stressed, however, that these same findings do not prevent the challenge from focusing on the \emph{development} of CMs in isolation. If accompanied with a set of ASV scores and aligned protocols, future challenges could still focus exclusively on the development of CMs, since the proposed t-DCF metric allows their optimisation in a manner that reflects their impact on performance when \emph{combined} with ASV.
\begin{table*}
\caption{t-DCF values of joint evaluation of ASV and CM using different values of $\pi_\text{spoof}$ for top-10 systems of ASVspoof 2015 and ASVspoof 2017.}
\label{tab:tdcfResults}
\begin{center}
\begin{tabular}{|l|c|ccc||l|c|ccc|}
\hline
\multicolumn{5}{|c||}{ASVspoof 2015} & \multicolumn{5}{c|}{ASVspoof 2017}\\
& & \multicolumn{3}{c||}{t-DCF for $\pi_\text{spoof}=$} & & & \multicolumn{3}{c|}{t-DCF for $\pi_\text{spoof}=$}\\
System & EER & 0.001 & 0.01 & 0.05 & System & EER & 0.001 & 0.01 & 0.05 \\
\hline
no CM & - & 0.1709 & 0.2146 & 0.4061 & no CM & - & 0.0307 & 0.1016 & 0.4169 \\
perfect CM & 0.00 & 0.1660 & 0.1653 & 0.1601 & perfect CM & 0.00 & 0.0228 & 0.0227 & 0.0217 \\
\hline
A & 1.57 & 0.1665 & 0.1696 & 0.1735 & S01 & 6.92 & 0.0277 & 0.0646 & 0.1126 \\
B & 2.55 & 0.1661 & 0.1670 & 0.1684 & S02 & 12.41 & 0.0305 & 0.0984 & 0.1847 \\
D & 3.65 & 0.1662 & 0.1677 & 0.1718 & S03 & 14.28 & 0.0302 & 0.0955 & 0.2066 \\
C & 4.87 & 0.1665 & 0.1704 & 0.1825 & S04 & 14.87 & 0.0302 & 0.0951 & 0.2123 \\
I & 4.97 & 0.1662 & 0.1681 & 0.1738 & S05 & 16.54 & 0.0306 & 0.1005 & 0.2310 \\
E & 5.50 & 0.1664 & 0.1701 & 0.1828 & S06 & 17.96 & 0.0291 & 0.0856 & 0.2429 \\
F & 6.08 & 0.1670 & 0.1717 & 0.1873 & S08 & 18.09 & 0.0297 & 0.0910 & 0.2423 \\
G & 6.12 & 0.1667 & 0.1711 & 0.1859 & S07 & 18.67 & 0.0303 & 0.0928 & 0.2271 \\
H & 6.64 & 0.1669 & 0.1730 & 0.1912 & S09 & 20.19 & 0.0304 & 0.0982 & 0.2194 \\
J & 7.83 & 0.1664 & 0.1702 & 0.1846 & S10 & 21.17 & 0.0300 & 0.0914 & 0.2554 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\section{Conclusions}
This paper proposes an elegant solution to the assessment of combined spoofing countermeasures and automatic speaker verification. The tandem detection cost function (t-DCF) draws upon established best practice in assessing the reliability of biometric systems in a Bayes/minimum risk sense, by combining a fixed cost model with trial priors. Together, they reflect the practical consequences of decision errors in realistic use case scenarios in which biometric systems may face bona fide users, casual/zero-effort impostors, or fraudsters seeking to spoof the system by manipulating the decisions it makes. The t-DCF generalises to situations without CMs, to those with overly aggressive CMs, and to ASV and CM systems that make no errors, and it has application to the study of \emph{any} biometric. It is also agnostic to the particular approach by which a biometric system and CM are combined. Example assessments using the proposed t-DCF are reported for automatic speaker recognition within the context of two ASVspoof challenges. Differences in CM rankings observed using the t-DCF metric advocate its adoption into the roadmap for future ASVspoof challenges, in addition to the assessment of biometric spoofing and countermeasures generally.
\section{Conclusion}
In this work we show the importance of auditory information in cross-domain first person action recognition. We exploit the complementary nature
of audio and visual information by defining a new cross-modal loss function that operates directly on the relative feature norm of the two modalities. Extensive experiments on DG and DA settings prove the power of our loss. Future work will further pursue this research avenue, exploring the effectiveness of the RNA-loss in third person activity recognition settings, and combined with traditional cross-domain architectures.
\section{Experiments}
\label{sec:experimental}
\subsection{Experimental Setting}\label{experimental_setting}
\noindent
\textbf{Dataset.}
Using the EPIC-Kitchens-55 dataset \cite{damen2018scaling}, we adopt the same experimental protocol as \cite{Munro_2020_CVPR}, where the three kitchens with the largest amount of labeled samples are handpicked from the 32 available. We refer to them here as D1, D2, and D3 respectively. Since the action classification task is complicated by the large number of action labels, we consider only a subset, namely: \textit{put}, \textit{take}, \textit{open}, \textit{close}, \textit{wash}, \textit{cut}, \textit{mix}, and \textit{pour}. The challenges are not only due to the large domain shift among different kitchens, but also to the imbalance of the class distribution intra- and inter-domain, as shown in \cite{Munro_2020_CVPR}.
\noindent
\textbf{Input.}
Regarding the RGB input, a set of 16 frames, referred to as a \textit{segment}, is randomly sampled during training, while at test time 5 equidistant segments spanning all clips are fed to the network. At training time, we apply random crops, scale jitters and horizontal flips for data augmentation, while at test time only center crops are applied.
Regarding aural information, we follow \cite{Kazakos_2019_ICCV} and convert the audio track into a 256 $\times$ 256 matrix representing the log-spectrogram of the signal. The audio clip is first extracted from the video and sampled at 24kHz; the Short-Time Fourier Transform (STFT) is then calculated with a window length of 10ms, a hop size of 5ms, and 256 frequency bands.
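As an illustration, this pre-processing can be implemented along the following lines (a sketch of ours; padding/cropping details may differ from the original pipeline of \cite{Kazakos_2019_ICCV}):
\begin{verbatim}
import librosa
import numpy as np

def log_spectrogram(wav_path, sr=24000):
    y, _ = librosa.load(wav_path, sr=sr)
    stft = librosa.stft(y,
                        n_fft=511,                  # 256 frequency bins
                        win_length=int(0.010 * sr), # 10 ms window
                        hop_length=int(0.005 * sr)) # 5 ms hop
    spec = np.log(np.abs(stft) + 1e-6)              # log-spectrogram
    return spec[:, :256]                            # keep 256 time frames
\end{verbatim}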
\noindent
\textbf{Implementation Details.} Our network is composed of two streams, one for each modality $m$, with distinct feature extractor $F^{m}$ and classifier $G^{m}$. The RGB stream uses the Inflated 3D ConvNet (I3D) with the same initialization proposed by the authors \cite{carreira2017quo},
as done in
\cite{Munro_2020_CVPR}.
The audio feature extractor uses the BN-Inception model \cite{bn-inception}, a 2D ConvNet pretrained on ImageNet \cite{imageNet}, which proved to be a reliable backbone for the processing of audio spectrograms~\cite{Kazakos_2019_ICCV}. Each $F^{m}$ produces a 1024-dimensional representation $f_{m}$ which is fed to the action classifier $G^{m}$, consisting of a fully-connected layer that outputs the score logits for the 8 classes. The two modalities are then fused by summing the outputs, and the cross entropy loss is used to train the network.
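Schematically, the late-fusion head can be written as follows (a PyTorch sketch of ours; the backbone feature extractors $F^{m}$ are assumed to be given):
\begin{verbatim}
import torch.nn as nn

class TwoStreamHead(nn.Module):
    def __init__(self, n_classes=8, feat_dim=1024):
        super().__init__()
        self.g_rgb   = nn.Linear(feat_dim, n_classes)
        self.g_audio = nn.Linear(feat_dim, n_classes)

    def forward(self, f_rgb, f_audio):
        # f_rgb / f_audio: (batch, 1024) features from I3D / BN-Inception
        return self.g_rgb(f_rgb) + self.g_audio(f_audio)  # summed logits
\end{verbatim}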
To remain coherent with the setup used by \cite{Munro_2020_CVPR},
we follow their strategy to validate our hyper-parameters.
All training models are run for 9000 iterations and finally tested with the average of the last 9 models. For further details on the optimizer, learning rate, parameters used and on the training process in general, we refer to the supplementary material.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{latex/Img/DomainShift.pdf}
\caption{Single- vs multi-modality accuracy ($\%$) on both supervised and source-only settings. The drop in performance when testing on target (right) highlights the presence of a strong domain shift.}
\label{fig:supervised_so}
\vspace{-0.2cm}
\end{figure}
\subsection{Results}
\begin{table*}[ht]
\begin{minipage}{0.4\linewidth}
\begin{adjustbox}{width=1.4\columnwidth, margin=0ex 1ex 0ex 0ex}
\begin{tabular}{l|cccccc|c}
\hline \hline
Single-Source & \multicolumn{1}{c}{D1 $\rightarrow$ D2} & D1 $\rightarrow$ D3 & D2 $\rightarrow$ D1 & D2 $\rightarrow$ D3 & D3 $\rightarrow$ D1 & D3 $\rightarrow$ D2 & Mean \\ \hline \hline
Source Only & 39.03& 39.17 &35.27& 47.52& 40.26& 49.98& 41.87 \\ \hline
Align. Only & 38.50 & 33.75 & 32.59 & 45.78 & 39.97 & 50.86 & 41.76 \\
Orth. Only & 39.18 & 37.55 & 36.86 & 47.09 & 43.70 & 51.61 & 42.67 \\
BatchNorm & 40.03 & 39.88 & 36.39 & 48.47 & \underline{42.60} & 48.33 & 42.62 \\
SS \cite{Munro_2020_CVPR} & 38.86 & 33.75 & 32.59 & 45.78 & 39.97 & 50.86 & 40.30 \\ \hline
RNA-Net (Ours) & \underline{45.01} & \underline{44.62} & \underline{41.76} & \underline{48.90} & 42.20 & \underline{51.98} & \textbf{45.75} \\
\hline
\end{tabular}
\end{adjustbox}
\end{minipage}
\hfill
\begin{minipage}{0.421\linewidth}
\begin{adjustbox}{width=1\columnwidth, margin=0ex 1ex 0ex 0ex}
\begin{tabular}{l|ccc|c}
\hline \hline
Multi-Source & \multicolumn{1}{c}{D1, D2 $\rightarrow$ D3} & D1, D3 $\rightarrow$ D2 & D2, D3 $\rightarrow$ D1 & Mean\\ \hline \hline
Deep All & 51.47 & 43.19 & 39.35 & 44.67 \\ \hline
Align. Only & 50.01 & 42.40 & 44.40 & 45.60 \\
Orth. Only & 53.08 & 41.76 & 48.07 & 47.64\\
BatchNorm & 52.07 & 42.63 & 45.14 & 46.61 \\
SS \cite{Munro_2020_CVPR} & 51.87 & 39.79 & \underline{52.73} & 48.13 \\ \hline
RNA-Net (Ours) & \underline{55.88} & \underline{45.65} & 51.64 & \textbf{51.06} \\
\hline
\end{tabular}
\end{adjustbox}
\end{minipage}
\caption{Top-1 Accuracy ($\%$) of our RNA-Net under the single-source DG setting (left) and the multi-source DG setting (right).}
\label{tab:single-source-dg}
\vspace{-0.2cm}
\end{table*}
\begin{table*}[ht]
\centering
\begin{adjustbox}{width=0.7\columnwidth, margin=0ex 1ex 0ex 0ex}
\begin{tabular}{l|cccccc|c}
\hline \hline
UDA & \multicolumn{1}{c}{D1 $\rightarrow$ D2} & D1 $\rightarrow$ D3 & D2 $\rightarrow$ D1 & D2 $\rightarrow$ D3 & D3 $\rightarrow$ D1 & D3 $\rightarrow$ D2 & Mean \\ \hline \hline
SS Only \cite{Munro_2020_CVPR} & 44.83 & 42.88 & 40.61 & \underline{54.21} & 42.58 & 53.50 & 46.44 \\
Adversarial Only \cite{grl-pmlr-v37-ganin15} & 41.02 & 43.04 & 39.36 & 49.25 & 38.77 & 50.56 & 43.67 \\
MM-SADA \cite{Munro_2020_CVPR} & \underline{48.90} & 46.66 & 39.51 & 50.89 & \underline{45.42} &\underline{55.14} & 47.75 \\
MMD \cite{da-mmdlong2015learning} & 42.40 & 43.84 & 40.87 & 48.13 & 41.46 &50.03 & 44.46 \\
AdaBN \cite{ada-bn} & 36.64 & 42.57 & 33.97 & 46.63 & 40.51 & 51.20 & 41.92 \\ \hline
RNA-Net (Ours) & 46.89 & 48.40 & 41.58 & 51.77 & 43.19 & 54.43 & 47.71 \\
RNA-Net+GRL (Ours) & 46.65 & \underline{49.95} & \underline{46.06} & 51.77 & 42.20 & 53.14 & \textbf{48.30 } \\ \hline
\end{tabular}
\end{adjustbox}
\caption{Top-1 accuracy ($\%$) of our RNA-Net under the multi-modal UDA setting.}
\label{tab:multi-modal-da}
\vspace{-0.3cm}
\end{table*}
\noindent
\textbf{Preliminary Analysis.} To verify that combining audio and visual modalities actually improves results, we assess the impact of each modality individually (Figure \ref{fig:supervised_so}). Firstly, the two streams are trained both separately and jointly in a supervised fashion (referred to as \textit{supervised}). Then, we validate the same models under a cross-domain setting, meaning that training is performed on source data only and testing on unseen target data (referred to as \textit{source-only}).
Results (Figure \ref{fig:supervised_so}, left) highlight that, within a single domain, the visual modality is more robust than the audio one (+$2.7\%$). Conversely, when testing on target data from a different domain (Figure~\ref{fig:supervised_so}, right), audio is on par with RGB.
This suggests that when a domain shift exists, it is mainly attributable to changes in visual appearance. In the cross-domain scenario,
the accuracy drops dramatically (Figure~\ref{fig:supervised_so}, right), proving how the domain shift impacts negatively on performance. Interestingly, we see that the fusion of the two modalities brings a greater contribution when facing this problem, increasing the single-modality source-only results by $4\%$. This confirms that combining audio and visual cues is useful to partially overcome the weaknesses of each individual modality across domains. A similar exploration of audio and appearance fusion was done by \cite{Kazakos_2019_ICCV}.
\noindent
\textbf{Baseline Methods.} \label{par:baselinemethods} To empirically prove the limitations caused by strictly enforcing an alignment or orthogonality between RGB and audio representations (see Section~\ref{sec:assumptions}), we compare our $\mathcal{L}_{RNA}$ with an \textit{alignment-based} and an \textit{orthogonality-based} loss respectively, both operating on the features of the two modalities. The first, which we indicate with \textit{$\mathcal{L}_{\parallel}$}, imposes an alignment constraint by minimizing the term $1-CosineSimilarity(x,y)$, ideally aiming at the representation in Figure \ref{fig:OurLoss}-b. The second, which we indicate with \textit{$\mathcal{L}_{\bot}$}, imposes an \textit{orthogonality} constraint by minimizing the term $CosineSimilarity(x,y)^2$ (Figure \ref{fig:OurLoss}-c); a sketch of both losses is given below. To demonstrate that mitigating the unbalance between the modality feature norms helps the classifier to better exploit the two modalities, we add a Batch Normalization layer before the $G^{m}$ classifier, which serves as a regularizer on input features. We adapt all these baseline methods to our backbone architecture, in order to fairly compare them with our RNA-Net. The baseline for single-source DG is the standard \textit{source-only} approach, while in a multi-source context we take as baseline
the so-called \textit{Deep All} approach, namely
the backbone architecture when no other domain adaptive strategies are exploited and \textit{all and only} the source domains are fed to the network. Indeed, this is the standard validation protocol in image-based DG methods \cite{bucci2020selfsupervised}. We also provide as a competitor a self-supervised approach, inspired by works that proved its robustness across domains \cite{carlucci2019domain}. The choice fell on a multi-modal \textit{synchronization} task~\cite{Munro_2020_CVPR}.
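For concreteness, a minimal sketch of the two baseline losses in PyTorch-style Python (our own illustration, not the original implementation; $x$, $y$ follow the notation of the text and denote batches of RGB and audio features):
\begin{verbatim}
import torch.nn.functional as F

def alignment_loss(x, y):
    # L_parallel: 1 - cosine similarity, pushes the two
    # modality embeddings towards the same direction
    return (1.0 - F.cosine_similarity(x, y, dim=1)).mean()

def orthogonality_loss(x, y):
    # L_orthogonal: squared cosine similarity, pushes the
    # two modality embeddings towards orthogonal directions
    return (F.cosine_similarity(x, y, dim=1) ** 2).mean()
\end{verbatim}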
\noindent
\textbf{DG Results.} Table \ref{tab:single-source-dg}-a shows single-source DG results. We see that $\mathcal{L}_{\bot}$ (referred to as \textit{orthogonality only}) outperforms $\mathcal{L}_{\parallel}$ (referred to as \textit{alignment only}) by up to $1\%$. This confirms that preserving modality-specific features guides the network in the right direction. RNA-Net outperforms such methods by up to $3\%$, confirming that constraining the features to a strictly aligned or strictly orthogonal representation can degrade performance. At the same time, the need to balance the two norm distributions is shown to be effective by the results obtained by adding a simple regularization strategy (referred to as \textit{BatchNorm}). Once again, our RNA-Net outperforms the competitors by up to $3\%$, proving the strength of $\mathcal{L}_{RNA}$. Finally, the fact that a method as robust as the self-supervised one (referred to as \textit{SS}) does not surpass the source-only baseline highlights the complexity of the problem.
Table \ref{tab:single-source-dg}-b shows the results obtained on multi-source DG. Our method achieves a consistent boost in performance ($+6.4\%$) w.r.t. DeepAll, and outperforms all other baselines.
\noindent
\textbf{DA Results.} We validate our method in the DA context against four existing unsupervised domain adaptation approaches:
(a) AdaBN \cite{ada-bn}: Batch Normalization layers are updated with target domain statistics;
(b) MMD \cite{da-mmdlong2015learning}: it minimizes separate discrepancy measures applied to single modalities;
(c) Adversarial Only \cite{grl-pmlr-v37-ganin15}: a domain discriminator is trained in an adversarial fashion through the gradient reverse layer (GRL) in order to make the feature representations for source and target data indistinguishable;
(d) MM-SADA \cite{Munro_2020_CVPR}: a multi-modal domain adaptation framework which is based on the combination of existing DA methods, i.e., a self-supervised synchronization pretext task and an adversarial approach.
Results are summarized in Table \ref{tab:multi-modal-da}. When target data is available at training time, our $\mathcal{L}_{RNA}$ outperforms the standard DA approaches AdaBN \cite{ada-bn} and MMD \cite{da-mmdlong2015learning} by $5.8\%$ and $3.3\%$ respectively. Moreover, our method outperforms adversarial alignment \cite{grl-pmlr-v37-ganin15} by $4\%$. Interestingly, when used in combination with the adversarial approach, our $\mathcal{L}_{RNA}$ slightly improves performance. This complementarity is due to the ability of our approach to preserve the structural discrimination of each modality and its intra-class compactness, compensating the distortion in the original distribution induced by the adversarial approach. This validates the considerations made at the end of Section \ref{sec:rna_loss}. In addition, $\mathcal{L}_{RNA}$ achieves a boost of more than $1\%$ in accuracy when compared against a standard self-supervised synchronization task, which, like our method, operates by reducing the cross-modal discrepancy. Finally, we validate our method against the most recent approach in the video-based DA literature, i.e., MM-SADA \cite{Munro_2020_CVPR}, achieving on-par results. Considering that MM-SADA combines both a self-supervised and an adversarial approach, we achieve comparable results with a lighter architecture and a different set of modalities.
\textbf{Ablation Study.} To verify the effectiveness of our design choice, we introduce a loss variant, called \textit{Hard Norm Alignment} (HNA), that forces the norms of both modalities towards an arbitrarily chosen value $R$. The $R$ term is chosen after observing the range of the norms of the two modalities, and picking a value half-way between the two. To further prove the strength of our method over different architectural variants, we compare the \textit{late fusion} approach against the so-called \textit{mid-level fusion}, proposed in \cite{Kazakos_2019_ICCV}.
It consists of feeding the prediction layer with the concatenation of the two modality features.
The results are shown in Table \ref{tab:ablation_}. Note how HNA performs worse than $\mathcal{L}_{RNA}$ in all contexts, confirming that a ``hard'' loss function constitutes a limitation.
As for the mid-level fusion approach, it proves to be a valid alternative in both the supervised and cross-domain settings, highlighting the flexibility of our method across different feature fusion strategies. \\
\input{latex/Tables/Ablation}
\textbf{Qualitative Analysis.} We give an insight into the norm unbalance problem described in Section \ref{sec:assumptions} by showing diagrams representing the norm variations and their impact on the performance. The readjustment of the norms corresponds to a boost in performance (Figure \ref{fig:PlotAccNorm}-a). We also show in Figure \ref{fig:PlotAccNorm}-b the percentage of the total norm given by the 300 most relevant features for classification. While minimizing $\mathcal{L}_{RNA}$, the top 300 features maintain (or even increase) their importance, since their norm ends up representing the majority of the total norm. This further confirms that, while relatively adjusting the feature norms of the two modalities, our $\mathcal{L}_{RNA}$ serves as a feature ``selector'' for the final classifier.
Lastly, our method brings the side benefit of making the network focus more on relevant portions of the image, with sharper and well defined class activation maps (CAMs) w.r.t. the baseline, as shown in Figure \ref{fig:cams}.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{latex/Img/cams.pdf}
\caption{\textbf{Qualitative DG results.} Class activation maps (CAMs) obtained on a target segment using a model trained without (top) and with (bottom) our proposed $\mathcal{L}_{RNA}$ loss, with its audio waveform (middle). A benefit brought by our method is a more localized focus that the network puts on relevant portions of the image after re-balancing the contribution of the two modalities (\textcolor{blue}{blue} corresponds to higher attention, \textcolor{red}{red} to less). The effects on the feature norm values are visible in the histograms at the bottom.
}
\label{fig:cams}
\vspace{-0.1cm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{latex/Img/Top300.pdf}
\caption{Final score prediction unbalance between audio and visual modalities w/ and w/o our loss function (left). Discrepancy between the norm ranges and their variation before and after the adjustment (right). When minimizing $\mathcal{L}_{RNA}$, the features which are kept active are the most relevant for classification, i.e., top-300.
}
\label{fig:PlotAccNorm}
\vspace{-0.3cm}
\end{figure}
\section{Introduction}
\begin{centering}
\begin{figure}[t]
\includegraphics[width=1\linewidth]{latex/Img/Teaser.pdf}
\caption{Egocentric action recognition comes with a rich sound representation, due to the frequent hand-object interactions and the closeness of the sensor to the sound source. Here we show that the complementary nature of visual
and audio information can be exploited to deal with the cross-domain challenge.
}
\label{fig:teaser}
\end{figure}
\end{centering}
First Person Action Recognition is rapidly attracting the interest of the research community \cite{sudhakaran2018attention,sudhakaran2019lsta,furnari2020rolling,Kazakos_2019_ICCV,ghadiyaram2019large,Wu_2019_CVPR}, both for the significant challenges it presents and for its central role in real-world egocentric vision applications, from wearable sport cameras to human-robot interaction or human assistance.
The recent release of the EPIC-Kitchens large-scale dataset \cite{damen2018scaling} has given a very significant boost to the research activities in this field, offering the possibility to study people's daily actions from a unique point of view. The collection of this dataset consisted in segmenting long untrimmed videos of people's daily activities, each recorded in a single kitchen. This process results in a huge number of sample clips representing a large variety of action classes, which are however captured in a limited number of environments. This intrinsic unbalance causes the so-called environmental bias, meaning that the learned action representations are strictly dependent on the surroundings, and thus hardly able to generalize to videos recorded in different conditions \cite{torralba2011unbiased}. In general, this problem is referred to in the literature as domain shift, meaning that a model trained on a labelled source dataset cannot generalize well to unseen data, called target. Recently, \cite{Munro_2020_CVPR} addressed this issue
by reducing the problem to an unsupervised domain adaptation (UDA) setting, where an unlabeled set of trimmed samples from the target is available during training. However, the UDA setting is not always realistic, because the target domain might not be known a priori or because it might be costly (or plainly impossible) to access target data at training time.
In this paper we argue that the true challenge is to learn a representation
able to generalize to any unseen domain, regardless of the possibility to access target data at training time. This means developing a method general enough to work both on UDA and Domain Generalization (DG) \cite{pmlr-v28-muandet13}.
Inspired by the idea of exploiting the multi-modal nature of videos as done in \cite{Munro_2020_CVPR}, we propose a new cross-domain generalization method which leverages the complementary nature of visual and audio information. We start by observing that first person action recognition intrinsically comes with rich sound information, due to the strong hand-object interactions and the closeness of the sensors to the sound source. The use of auditory information could be a good workaround for the problems which arise from the use of wearable devices, in that it is not sensitive to ego-motion and it is not limited by the field of view of the camera. Moreover, our idea is that, since the audio and visual modalities come from different sources, the domain shift they suffer from is not of the same nature. Motivated by these considerations, we propose a new cross-modal loss function, which we call Relative Norm Alignment loss, that operates on the relative feature norms of the two modalities by acting on their magnitude. Our loss improves the cooperation between the audio and visual channels, which results in a stronger ability to overcome the domain shift. We show with extensive experiments that, when used in a very simple audio-visual architecture, our loss leads to strong results in both UDA and DG settings.
To summarize, our contributions are the following:
\begin{itemize}
\item we empirically bring to light a problem related to the heterogeneous nature of audio and visual modalities, which causes an unbalance preventing the two modalities from correctly cooperating;
\vspace{-7pt}
\item we propose a new cross-modal audio-visual Relative Norm Alignment loss by progressively aligning the relative feature norms of the two modalities;
\vspace{-7pt}
\item we present a new benchmark for both single-source and multi-source DG settings in first person videos, which, to the best of our knowledge, no prior work has explored yet;
\vspace{-7pt}
\item we validate the effectiveness of our method on both DG and UDA scenarios, achieving competitive results compared to previous works.
\vspace{-7pt}
\end{itemize}
\section{Relative Norm Alignment}\label{sec:preliminaries}
\subsection{Problem Statement}
Given one or more source domains $\{\mathcal{S}_1,...,\mathcal{S}_k\}$, where each $\mathcal{S}={\{(x^s_i,y^s_i)\}}^{N_s}_{i=1}$ is composed of $N_s$ source samples with label space $Y^s$ known, our goal is to learn a model representation able to perform well on a target domain $\mathcal{T}={\{x^t_i\}}^{N_t}_{i=1}$ of $N_t$ target samples whose label space $Y^t$ is unknown. Our two main assumptions are that the distributions of all the domains are different, \ie $\mathcal{D}_{s,i} \neq \mathcal{D}_t$ $\land$ $\mathcal{D}_{s,i} \neq \mathcal{D}_{s,j}$, with $i \neq j$, $i,j=1,...,k$, and that the label space is shared, $\mathcal{C}_{s,i} = \mathcal{C}_t$, $i=1,...,k$. In this work we consider two different scenarios:
\noindent
{\textbf{Domain Generalization (DG)}},
where at training time the model can access one or more fully labeled source datasets $\mathcal{S}_1,...,\mathcal{S}_m$, but no information is available about the target domain $\mathcal{T}$.
\noindent
{\textbf{Unsupervised Domain Adaptation (UDA)}}, where at training time it is possible to access a set of unlabeled target samples belonging to the target domain $\mathcal{T}$, jointly with one fully labeled source domain $\mathcal{S}$.
\noindent
For both scenarios, the ultimate goal is to learn a classifier able to generalize well on the target data.
\textbf{Multi-Modal Approach.} Our goal is to investigate how using multi-modal signals from source and target data affects the ability of a first-person action classification network to generalize across domains.
Specifically, given
a multi-modal input $X=(X^1,...,X^M)$, where $X^m=(x^m_1,...,x^m_{N_m})$ is the set of all $N_m$ samples of the $m$-th modality, we use a separate feature extractor $F^m$ for each $X^m$, and we employ all the corresponding features $f_m=F^m(x^m_i)$, encoding information from multiple channels, during the learning process. We denote with $h(x^m_i)={\lVert F^m(x^m_i) \rVert}_2$ the $L_2$-norm of the features $f_m$.
\subsection{Cross-Modal Audio-Visual Alignment}
\label{sec:assumptions}
Let us consider a multi-modal framework characterized by $M=2$ modalities, specifically RGB clips and audio signals. We indicate with $f_v=F^v(x^v_i)$ and $f_a=F^a(x^a_i)$ the features encoding the visual and audio information, respectively (details about the feature extractor modules are given in Section \ref{RNA-NET}).
The discrepancy between their norms, i.e., $ h(x^v_i)$ and $h(x^a_i)$, is measured by a \textit{mean-feature-norm distance} term $\delta$, defined as:
\begin{equation}
\delta(h(x^v_i),h(x^a_i))=\frac{1}{N}\sum_{x^v_i \in \mathcal{X}^v}h(x^v_i)-\frac{1}{N}\sum_{x^a_i \in \mathcal{X}^a} h(x^a_i) ,
\end{equation}
where $N=|\mathcal{X}^v|=|\mathcal{X}^a|$ denotes the number of the samples for each modality. Figure \ref{fig:Feat_Norm} illustrates the feature norms of the two modalities and the $\delta$ between the two.
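In code, $\delta$ is simply a difference of batch-averaged $L_2$ norms; a minimal sketch in PyTorch-style Python (our own illustration; \texttt{f\_v}, \texttt{f\_a} denote batches of visual and audio features of shape $(N, D)$):
\begin{verbatim}
import torch

def mean_norm_distance(f_v, f_a):
    # batch-averaged L2 feature norms of the two modalities
    mean_v = f_v.norm(p=2, dim=1).mean()
    mean_a = f_a.norm(p=2, dim=1).mean()
    return mean_v - mean_a
\end{verbatim}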
It has been shown in the literature that aligning audio and visual information by solving synchronization tasks~\cite{look_listen_learn,multisensory_owens, cooperative_torresani,objects_that_sound,afourasself} leads to representations that facilitate a number of audio-visual downstream tasks, including action recognition. Such approaches enforce feature alignment by means of Euclidean or similarity-based losses, whose objective is to embed audio and visual inputs into a shared representation space.
Our intuition is that the optimization of these loss functions could, to some extent, limit action recognition networks when dealing with cross-modal scenarios.
This is because, as opposed to acting on the magnitude of audio and visual norms, these losses mainly use the angular distance $\theta$ between the two embeddings, defined as $\theta=\arccos\big(\frac{f_v\cdot f_a}{\lVert{f_v}\rVert\lVert{f_a}\rVert}\big)$.
By acting only on the normalized feature vectors, they are indeed capable of aligning the two representations (Figure \ref{fig:OurLoss}-b), but they struggle to exploit the modality-specific characteristics of the two streams.
In other words, when using an angular loss we impose
the prior that what is significant for the visual stream is also significant for the audio stream, but this might not be true in practice, especially when training and test data come from different distributions.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{latex/Img/Idea.pdf}
\caption{\textbf{Relative Norm Alignment.} The norm $h(x^v_i)$ of the $i$-th \textcolor{OliveGreen}{visual} sample (top-left) and $h(x^a_i)$ of the $i$-th \textcolor{RoyalBlue}{audio} sample (top-right) are represented, by means of segments of different lengths. The radius of the two circumferences represents the mean feature norm of the two modalities, and $\delta$ their discrepancy.
By minimizing $\delta$, audio and visual feature norms are induced to be the same, as shown at the bottom.
}
\label{fig:Feat_Norm}
\vspace{-0.2cm}
\end{figure}
For instance, in a clip where the action \textit{take} does not produce any sound,
the information carried by the audio will amount only to background noise. Conversely, for the same action carried out with another object or in a different setting, the aural information might be highly informative, possibly even more than the visual one.
We show below with a set of experiments (Section \ref{sec:experimental}, Figure \ref{fig:PlotAccNorm}) that a large $\delta$, i.e., a misalignment at feature-norm level, negatively affects the learning process by causing an unbalance between the contributions brought by each modality, which in turn degrades the classification performance.
\subsection{Relative Norm Alignment Loss} \label{sec:rna_loss}
Motivated by the considerations above, we propose a new cross-modal loss function, which aims to reduce the discrepancy between feature distributions by aligning their contributions during training. As opposed to losses acting on the normalized feature vectors, our loss operates on their magnitude, which intuitively results in more freedom to preserve the modality-specific features (see Figures \ref{fig:OurLoss}-b-c). We expect this to be important in cross-domain scenarios. Considering the dot product $\langle f_v, f_a\rangle$, defined as
\begin{equation}
\label{formula:dot_product}
\langle f_v, f_a\rangle = {\lVert f_v \rVert}_2 {\lVert f_a \rVert}_2 \cos{\theta} ,
\end{equation}
our approach involves the first two terms of Equation \ref{formula:dot_product}, imposing a relative alignment between them. \\ Our relative norm alignment (RNA) loss function is defined as
\begin{equation}\label{formula:rna}
\mathcal{L}_{RNA}=\frac{1}{N}\frac{\sum_{x^v_i \in \mathcal{X}^v}h(x^v_i)}{\sum_{x^a_i \in \mathcal{X}^a}h(x^a_i)} \rightarrow 1 ,
\end{equation}
where $h(x^v)=\lVert{ f_v }\rVert_2$ and $h(x^a)=\lVert{ f_a }\rVert_2$ indicate the norm of visual and audio features respectively. This dividend/divisor structure is used to encourage a relative adjustment between the norms of the two modalities, enhancing an \textit{optimal equilibrium} between the two embeddings. When optimizing $\mathcal{L}_{RNA}$, the network can either increase the dividend $(f_v \nearrow)$ or decrease the divisor $(f_a \searrow)$, leading to three potential main benefits (a code sketch of the loss is given at the end of this subsection):
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{latex/Img/NormDistribution.pdf}
\caption{\textbf{Feature Distribution Adjustment.} (a) shows the distribution of \textcolor{OliveGreen}{visual} and \textcolor{RoyalBlue}{audio} features. We see that, without any form of alignment, audio features are predominant over visual ones, which could ultimately lead to a loss of information. By minimizing $\mathcal{L}_{RNA}$, two possible scenarios can occur, displayed in (b) and (c). In both, the range where the feature norms vary is the same, making the informative content of the two distributions comparable with each other. This lets the loss learn from data when it is more convenient to align them (b) or when it is better to preserve their peculiarities (c).
}
\label{fig:OurLoss}
\vspace{-0.2cm}
\end{figure}
\begin{enumerate}[itemsep=1pt, topsep=0pt, partopsep=0pt]
\item Since $\lVert{ f_v }\rVert_2$ and $\lVert{ f_a }\rVert_2$ tend to the same value, the feature norm ranges are encouraged to be comparable, preventing one modality from drowning out the other and improving the final prediction (Figures \ref{fig:OurLoss}-b and \ref{fig:OurLoss}-c).
\item By reducing the norm of the features while learning the feature extractor, the latter is free to choose which features are less/more discriminative, and to lower/raise their norm accordingly, increasing the generalization ability of the model.
\item Compared to standard similarity losses, by not constraining the angular distance $\theta$ between the two modality representations, the feature distributions have the freedom to arrange themselves in non-overlapping configurations (Figure~\ref{fig:OurLoss}-c).
\end{enumerate}
The feature-level effects of applying our $\mathcal{L}_{RNA}$ are represented in Figure \ref{fig:OurLoss}.
In Figure \ref{fig:OurLoss}-a we represent the feature distribution learned with a standard cross-entropy loss. As can be seen, the feature norms of the two modalities differ by $\delta$, meaning that the respective features lie within different ranges, which makes the two representations hard to compare. The solution proposed by our $\mathcal{L}_{RNA}$ corresponds to \textbf{1)}. In Figures \ref{fig:OurLoss}-b and \ref{fig:OurLoss}-c we show the feature representations obtained by minimizing our loss function. The situation depicted in Figure \ref{fig:OurLoss}-b occurs when audio and visual information ``agree'' by means of their modality-specific features.
The scenario depicted in Figure \ref{fig:OurLoss}-c is the most interesting, since it represents a situation which is not compatible with the aim of standard similarity losses. As stated in \textbf{3)}, our $\mathcal{L}_{RNA}$ ensures that the modality-specific features are preserved, allowing the final classifier to exploit their complementarity.
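For concreteness, a minimal sketch of Eq. (\ref{formula:rna}) in PyTorch-style Python (our own illustration, not the original implementation). Since the equation only specifies that the ratio of the mean feature norms is driven towards $1$, the squared deviation from $1$ used below is one natural differentiable penalty and is an assumption of this sketch:
\begin{verbatim}
import torch

def rna_loss(f_v, f_a, eps=1e-8):
    # mean L2 feature norms of the visual and audio batches
    norm_v = f_v.norm(p=2, dim=1).mean()
    norm_a = f_a.norm(p=2, dim=1).mean()
    # penalize deviations of the dividend/divisor ratio from 1;
    # eps guards against division by zero
    return (norm_v / (norm_a + eps) - 1.0) ** 2
\end{verbatim}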
\vspace{-0.2cm}
\section{ Cross-Domain Audio-Visual RNA-Net} \label{RNA-NET}
\vspace{-0.05cm}
This section shows how $\mathcal{L}_{RNA}$
can be effectively used in a very simple audio-visual deep network for cross-domain first person action recognition. The network, shown in Figure \ref{fig:architecture}, processes audio and visual information in two separate branches. After a separate convolution-based feature extraction step, the $\mathcal{L}_{RNA}$ loss learns how to combine the two modalities, leading to a significant generalization across domains, in both the domain generalization (DG) and unsupervised domain adaptation (UDA) settings. Below we describe in more detail how the network operates in both settings.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{latex/Img/Net.pdf}
\caption{\textbf{RNA-Net.} Labeled \textcolor{OliveGreen}{source visual} $x^v_{s,i}$ and \textcolor{RoyalBlue}{source audio} $x^a_{s,i}$ inputs are fed to the respective feature extractors $F^v$ and $F^a$. \textcolor{Orange}{Unlabeled target} data of any modality ($x^m_{t,i}$) is seen at training time only in UDA setting, and not in DG. Our $\mathcal{L}_{RNA}$ operates at feature-level by balancing the relative feature norms of the two modalities. The action classifiers $G^v$ and $G^a$ are trained with standard cross-entropy loss $\mathcal{L}_{ce}$. At inference time, multi-modal target data is used for classification.
}
\label{fig:architecture}
\vspace{-5 pt}
\end{figure}
\subsection{AV-RNA-Net for Domain Generalization}
\label{sec:dg}
In a cross-modal context, the input comes from one or more source domains $\mathcal{S}_k=(\mathcal{S}^v,\mathcal{S}^a)$. Under the DG setting, the target is not available during training (see Section~\ref{sec:preliminaries}).
As shown in Figure \ref{fig:architecture}, each input modality is fed to a separate feature extractor, $F^v$ and $F^a$ respectively. The resulting features $f_v=F^v(x^v_i)$ and $f_a=F^a(x^a_i)$ are then passed to separate classifiers $G^v$ and $G^a$, whose outputs correspond to distinct score predictions (one for each modality). The two are combined with a \textit{late fusion} approach and used to obtain a final prediction $P(x)$ (please refer to Section \ref{sec:experimental} for more details). Our loss $\mathcal{L}_{RNA}$ operates at feature-level before the final classification, acting as a bridge between the two modalities and encouraging a balance between the respective contributions.
\par{\textbf{Why should $\mathcal{L}_{RNA}$ help to generalize?}} Our loss $\mathcal{L}_{RNA}$ raises the generalization ability of the network for two main reasons. \textbf{1)} By self-reweighting the contributions of the two modalities during training, the classifier has the chance to rank such contributions according to their real relevance, thus avoiding being fooled by the natural unbalance due to their intrinsic heterogeneity. This is helpful especially in a multi-source setting, as it ensures uniformity not only across modalities, but also across data from different sources. \textbf{2)} By indirectly minimizing the norm of the feature activations, our loss sets a limit on the learner's feature encoding, and thus forces it to learn only the discriminative information. As a consequence, the classifier can learn to ignore information which is strictly domain-specific, distilling the most useful and transferable features.
\subsection{Extension to Unsupervised Domain Adaptation}\label{dg}
Under this setting, both labelled source data from a single source domain $\mathcal{S}=(\mathcal{S}^v,\mathcal{S}^a)$ and unlabelled target data $\mathcal{T}=(\mathcal{T}^v, \mathcal{T}^a)$ are available during training. Figure \ref{fig:architecture} shows the flow of source and target data, indicating with different colors source visual data (green), source audio data (blue) and target data (orange). We denote with $x_{s,i}=(x^v_{s,i},x^a_{s,i})$ and $x_{t,i}=(x^v_{t,i},x^a_{t,i})$ the $i$-th source and target samples respectively. As can be seen from Figure \ref{fig:architecture}, both $x^m_{s,i}$ and $x^m_{t,i}$ are fed to the feature extractor $F^m$ of the $m$-th specific modality, shared between source and target, obtaining respectively the features $f_s=(f^v_s,f^a_s)$ and $f_t=(f^v_t,f^a_t)$. In order to consider the contribution of both source and target data during training, we redefine our $\mathcal{L}_{RNA}$ under the UDA setting as
\begin{equation}\label{eq:loss_s_t}
\mathcal{L}_{RNA}=\mathcal{L}^s_{RNA}+\mathcal{L}^t_{RNA} ,
\end{equation}
\begin{equation}\label{eq:loss_s_t_1}
\mathcal{L}^s_{RNA}=\frac{1}{N}\frac{\sum_{x^v_{s,i} \in \mathcal{X}^v_s}h(x^v_{s,i})}{\sum_{x^a_{s,i} \in \mathcal{X}^a_s}h(x^a_{s,i})} \rightarrow 1 ,
\end{equation}
\begin{equation}\label{eq:loss_s_t_2}
\mathcal{L}^t_{RNA}=\frac{1}{N}\frac{\sum_{x^v_{t,i} \in \mathcal{X}^v_t}h(x^v_{t,i})}{\sum_{x^a_{t,i} \in \mathcal{X}^a_t}h(x^a_{t,i})} \rightarrow 1 .
\end{equation}
By minimizing $\mathcal{L}^s_{RNA}$ we benefit from the considerations described in Section \ref{sec:dg}. Also, by minimizing $\mathcal{L}^t_{RNA}$, and thus learning the reweighting between the modalities on the unlabelled data, the encoded features contain useful information which directly enables us to adapt to the target.
A problem that often occurs with UDA methods is that forcing an alignment between source and target features increases the risk of affecting the discriminative characteristics of the two, and thus destroying the inter-class separability~\cite{pmlr-v97-liu19b}.
In our UDA setting, by operating on the two domains through separate $\mathcal{L}^s_{RNA}$ and $\mathcal{L}^t_{RNA}$, we mitigate this risk, preserving the discriminative structure of the two.
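Reusing the \texttt{rna\_loss} sketch from Section \ref{sec:rna_loss}, the objective of Eqs. (\ref{eq:loss_s_t})--(\ref{eq:loss_s_t_2}) would read, schematically (the feature tensor names are assumptions of this sketch):
\begin{verbatim}
# source and target features from the shared extractors F^v, F^a;
# rna_loss is the function sketched in the previous section
loss_rna = rna_loss(f_v_s, f_a_s) + rna_loss(f_v_t, f_a_t)
\end{verbatim}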
\section{Related Works}
\textbf{First Person Action Recognition.}
Until now, research has been focused on data provided by a specific view of the camera (often fixed), i.e., third person view \cite{10.5555/2968826.2968890,wang2016temporal,carreira2017quo}. With the recent release of a large-scale dataset of first-person actions \cite{damen2018scaling}, the community has also become interested in working on videos that are recorded from an egocentric point of view.
Since egocentric action recognition suffers from the motion of the camera and sudden changes of view, the main approaches proposed so far are based on multi-stream architectures \cite{carreira2017quo,10.5555/2968826.2968890,ma2016going,lin2019tsm,cartas2019seeing,Kazakos_2019_ICCV,lu2019learning}, many of which are inherited from the third-person action recognition literature. The networks used
to extract spatial-temporal information from egocentric videos can be divided into two main groups. The first exploits Long Short-Term Memory networks and variants \cite{Sudhakaran_2017_ICCV,sudhakaran2018attention,sudhakaran2019lsta,furnari2020rolling} to generate an embedding representation based on the temporal relations between frame-level features. The second \cite{singh2016first,tran2015learning,Wu_2019_CVPR,kapidis2019multitask} leverages 3D convolutional kernels which jointly generate spatial-temporal features by sliding along the spatial and temporal dimensions.
Recent works exploit an attention mechanism at frame or clip level \cite{sudhakaran2018attention,sudhakaran2019lsta,perezrua2020knowing,Lu2019TIP,lu2019learning} to re-weight the spatial or temporal features, obtaining remarkable results.
Observing the importance of multi-stream approaches in this context, several works \cite{sudhakaran2019hierarchical,wangsymbiotic,Wu_2019_CVPR,zhou2018temporal} investigate
alternatives to the standard late fusion approach for fusing streams, creating a more compact multi-modal representation. Although optical flow has proven to be a strong asset for the action recognition task,
it is computationally expensive. As shown in
\cite{Crasto_2019_CVPR},
the use of optical flow limits the application of several methods in online scenarios, pushing the community either to investigate alternative paths \cite{Kazakos_2019_ICCV,cartas2019seeing} or towards single-stream architectures \cite{zhao2019dance,Crasto_2019_CVPR,lee2018motion,sun2018optical,planamente2020joint}.
\textbf{Audio-Visual Learning.}
A wide literature exploits the natural correlation between audio and visual signals to learn cross-modal representations that can be transferred well to a series of downstream tasks, such as third person activity recognition. Most of these representation learning methods use a self-supervised learning approach, consisting in training the network to solve a \textit{synchronization} task~\cite{look_listen_learn,multisensory_owens, cooperative_torresani,objects_that_sound,afourasself,aytar2016soundnet}, i.e., to predict whether the audio and visual signals are temporally aligned or not. By solving this pretext task, the network is induced to find a correspondence between audio and visual cues, making the resulting representations perfect for tasks like sound-source localization \cite{objects_that_sound,afourasself,Zhao_2018_ECCV}, active speaker detection \cite{out_of_time,afourasself}, and multi-speaker source separation \cite{multisensory_owens,afourasself}. Audio has also been used as a preview for video skimming, due to its lightweight characteristics \cite{listen_to_look}. More recently, it proved to be useful even in egocentric action recognition \cite{Kazakos_2019_ICCV,cartas2019seeing}.
However, the role of this information in a cross-domain context is still unexplored. In this work, we investigate the importance of audio when used together with visual information in learning a robust representation on unseen data.
\textbf{Unsupervised Domain Adaptation (UDA).}
The goal of UDA is to bridge the domain gap between a labeled source domain and an unlabeled target one.
We can divide UDA approaches in \textit{discrepancy-based} methods, which explicitly minimize a distance metric among source and target distributions \cite{da-afnxu2019larger, da-mcdsaito2018maximum}, e.g., the maximum mean discrepancy (MMD) in \cite{da-mmdlong2015learning}, and \textit{adversarial-based} methods~\cite{da-adv-deng2019cluster, da-adv-tang2020discriminative}, often leveraging a gradient reversal layer (GRL)~\cite{grl-pmlr-v37-ganin15}.
Other works exploit batch normalization layers to normalize source and target statistics \cite{ada-bn, DBLP:conf/iclr/LiWS0H17, da-bnchang2019domain}.
Still, another approach is the \textit{generative-based} one, which operates by performing style-transfer directly on input data \cite{da-cycle-gong2019dlow, da-cycle-hoffman2018cycada}.
The approaches described above have been designed for standard image classification tasks. Only a few works have analyzed UDA for video understanding \cite{videoda-chen2019temporal,Munro_2020_CVPR,videoda-choi2020unsupervised,videoda-Jamal2018DeepDA}. \cite{videoda-chen2019temporal} focuses on aligning temporal relation features to increase robustness across domains. In \cite{videoda-choi2020unsupervised}, the network is trained to solve an auxiliary self-supervised task on source and target data. Recently, \cite{Munro_2020_CVPR} proposed a UDA method for first person fine-grained action recognition, called MM-SADA, combining a multi-modal self-supervised pretext task with adversarial training.
\textbf{Domain Generalization (DG).} The DG setting is closer to real-world conditions, in that it addresses the problem of learning a model able to generalize well using inputs from multiple distributions, when no target data is available at all. Previous approaches in DG are mostly designed for image data \cite{carlucci2019domain,volpi2018generalizing,li2018domain,dou2019domain,li2018deep,bucci2020selfsupervised} and are divided into \textit{feature-based} and \textit{data-based} methods. The former focus on extracting invariant information shared across domains~\cite{li2018domain,li2018deep}, while the latter exploit data-augmentation strategies to augment source data with adversarial samples and possibly get closer to the target~\cite{volpi2018generalizing}. Interestingly, using a self-supervised pretext task is an effective way to extract a more robust data representation \cite{carlucci2019domain,bucci2020selfsupervised}. We are not aware of previous works on first or third person DG.
Among unpublished works, we found only one \textit{arXiv} paper~\cite{videodg-yao2019adversarial}, in third person action recognition, designed for single modality. Under this setting, first person action recognition models, and action recognition networks in general, degenerate in performance due to the strong divergence between source and target distributions. Our work stands in this DG framework, and proposes a feature-level solution to this problem in first person action recognition by leveraging the natural audio-visual correlation.
In the previous sections, we have studied the first order phase transition in the impurity model
(\ref{eq:qd_supercond_hamiltonian}). As the dynamical mean field theory (DMFT) provides a link
between impurity models and lattice models, we can ask the question if the singlet to doublet phase
transition observed in the impurity model is also realized in a corresponding lattice model.
An appropriate lattice model will of course include a $U(1)$ symmetry breaking term like the impurity model
(\ref{eq:qd_supercond_hamiltonian}) does, and in fact in the framework of the DMFT, a periodic
Anderson model extended by the BCS mean field Hamiltonian (BCS-PAM) for the conduction band electrons corresponds to the impurity
model presented in the previous sections
\footnote{ Strictly speaking, the reference model in the DMFT only has one superconducting bath,
while we introduced a left and a right bath in the Hamiltonian (\ref{eq:qd_supercond_hamiltonian}).
However, in the CTQMC, the reference model is entirely encoded in the bare Green's function, which
can be understood as an action representation in the path integral formalism. The explicit number of
the superconducting baths is therefore unimportant.
}. The Hamiltonian of the BCS-PAM is given by:
\begin{equation}
\label{eq:BCS_PAM}
H = H_c + H_f + H_V
\end{equation}
with
\begin{equation}
H_c = \sum_{k,\sigma} \xi(k) \tilde{c}_{k,\sigma}^\dagger \tilde{c}_{k,\sigma}
- \Delta \sum_k \left( \tilde{c}_{k,\uparrow}^\dagger \tilde{c}_{-k\downarrow}^\dagger + \text{h.c.} \right)
\label{eq:BCS_PAM_c}
\end{equation}
\begin{equation}
H_f = \sum_{k,\sigma} \xi_f \tilde{f}_{k,\sigma}^\dagger \tilde{f}_{k,\sigma} + U \sum_{i_f} \left( \tilde{n}_{i_f,\uparrow} - \frac{1}{2} \right) \left( \tilde{n}_{i_f,\downarrow} - \frac{1}{2} \right)
\label{eq:BCS_PAM_f}
\end{equation}
\begin{equation}
H_V = - V \sum_{k,\sigma} \left( \tilde{c}_{k,\sigma}^\dagger \tilde{f}_{k,\sigma} + \text{h.c.} \right)
\label{eq:BCS_PAM_V}
\end{equation}
We have considered a square lattice with hopping matrix element $t$ between the conduction electrons such that:
\begin{equation}
\label{eq:BCS_PAM_disp}
\xi(k) = -2t \left( \cos( k a_x) + \cos(k a_y) \right).
\end{equation}
Note that the impurity model (\ref{eq:qd_supercond_hamiltonian}) has a large range of applications
in the DMFT
ranging from the attractive Hubbard model with $U(1)$ symmetry broken solutions studied in references
\cite{0295-5075-85-2-27001,bauer-hewson-dmft-nrg} to the BCS-PAM, which
is considered here.
The treatment of this model within DMFT involves the same steps as for the impurity model (\ref{eq:qd_supercond_hamiltonian}), introducing a particle-hole transformation for the spin down operators. The Hamiltonian can then be cast in the form $H=H_0+H_U$ with
\begin{equation}
H_0 = \sum_k \vec{c}_k^\dagger \mat{E}(k) \vec{c}_k - V \sum_{k} \left( \vec{c}_k^\dagger \matgr{\sigma}_z \vec{f}_k + \text{h.c.} \right) + \sum_k \vec{f}_k^\dagger \matgr{\epsilon}_f \vec{f}_k
\label{eq:BCS_PAM_canon}
\end{equation}
and $H_U= - U \sum_{i_f} \left( n_{i_f,\uparrow} - \frac{1}{2} \right) \left( n_{i_f,\downarrow} - \frac{1}{2}
\right) $. Here, we have used the same Nambu-spinor notation as in Sec. \ref{sec:impurity-model}
with the exception that $d$ operators have been renamed $f$ to be consistent with the literature
\cite{hewson,RevModPhys.68.13}.
\subsection{DMFT with superconducting medium}
The standard DMFT can be easily adapted to a superconducting bath using the
Nambu formalism \cite{RevModPhys.68.13}. We obtain the self consistency equation for a finite lattice
with $N$ sites expressed by a $2\times2$ matrix equation:
\begin{equation}
\label{eq:dmft-selfconsistency}
\mat{G^{ff}}(i\omega_n) = \frac{1}{N} \sum_\vec{k}
\left[\mat{G_{kk}^{0,ff}}^{-1}(i\omega_n)-\mat{\Sigma^{ff}}(i\omega_n)\right]^{-1}.
\end{equation}
Here, $\mat{G^{ff}}(i\omega_n) = -\int \limits_0^\beta \mathrm{d} \tau \, \mathrm{e}^{-i\omega_n \tau}
\thavg{T \vec{f}(\tau) \vec{f}^\dagger}$ is the full Matsubara Green's function of the reference model,
$\mat{G_{kk}^{0,ff}}(i\omega_n) $ is the Matsubara $f$-Green function of the bare lattice model
and $\mat{\Sigma^{ff}}$ is the self energy.
Equation (\ref{eq:dmft-selfconsistency}) can be solved by iteration, usually starting from a self
energy $\mat{\Sigma^{ff}} \equiv 0$. From $\mat{G^{ff}}(i\omega_n)$, the bare Green's function $\mat{\mathcal
G^{ff}_0}(i\omega_n)$ of the reference model can be calculated using Dyson's equation
$\mat{\mathcal G_0^{ff}}^{-1} = \mat{G^{ff}}^{-1} + \mat{\Sigma^{ff}} $. The reference model, which is now
described by $\mat{\mathcal G^{ff}_0}$ and the interaction part of the Hamiltonian, can subsequently be
solved using the CTQMC method, yielding $\mat{G^{ff}}(i\omega_n)$ for the next DMFT iteration.
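Schematically, the cycle can be sketched in Python/NumPy as follows (our own illustration; \texttt{ctqmc\_solve} and the inverse bare lattice Green functions \texttt{G0\_kk\_inv} are hypothetical placeholders, and every array carries a Matsubara-frequency index on top of the $2\times2$ Nambu structure):
\begin{verbatim}
import numpy as np

def dmft_cycle(G0_kk_inv, ctqmc_solve, n_iter=50):
    # G0_kk_inv: list over k-points of arrays of shape
    # (n_omega, 2, 2) holding [G_kk^{0,ff}(i omega_n)]^{-1}
    n_k = len(G0_kk_inv)
    Sigma = np.zeros_like(G0_kk_inv[0])   # start with Sigma = 0
    for _ in range(n_iter):
        # lattice sum of the self-consistency equation:
        # G^ff = (1/N) sum_k [ (G_kk^{0,ff})^{-1} - Sigma ]^{-1}
        G_ff = sum(np.linalg.inv(g - Sigma) for g in G0_kk_inv) / n_k
        # Dyson equation: bath Green function of the reference model
        G0_weiss = np.linalg.inv(np.linalg.inv(G_ff) + Sigma)
        # impurity solver returns the new interacting G^ff
        G_ff = ctqmc_solve(G0_weiss)
        # updated self energy for the next iteration
        Sigma = np.linalg.inv(G0_weiss) - np.linalg.inv(G_ff)
    return G_ff, Sigma
\end{verbatim}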
\subsection{Hysteresis}
In the DMFT, we can calculate the double occupancy $\thavg{\tilde{f}_{\uparrow,i}^\dagger
\tilde{f}_{\uparrow,i} \tilde{f}_{\downarrow,i}^\dagger \tilde{f}_{\downarrow,i}}$ of the $f$-sites, which, under
the assumption of a homogeneous system, is proportional to $\frac{\partial \Omega }{\partial U}$.
Therefore, if a first order phase transition occurs as in the impurity problem, we expect a jump in the
double occupancy at a critical value of $U$.
\begin{figure}
\resizebox{\columnwidth}{!}{ \input{tex-DMFT-double-occupancy-hysteresis-B150.tex} }
\caption{\label{fig:dmft-double-occ-hyst} Double occupancy of the $f$ sites in the
BCS-PAM. In the proximity of the critical value of $U$, we observe two different solutions
of the DMFT self consistency cycle. The upper (red) branch is generated, if we start the
DMFT algorithm with a self energy $\Sigma \equiv 0$, while we obtain the solution shown by
the lower (blue) branch if we take the self energy of the data point at $U=0.44$ as the
starting point of the DMFT iterations. }
\end{figure}
Figure \ref{fig:dmft-double-occ-hyst} shows our result for the double occupancy of the $f$ sites as
a function of $U$. Depending on the initial choice of the self energy in the DMFT cycle, we obtain two
different solutions. If we start with the local Green's function of the bare lattice model, which
corresponds to a self energy $\Sigma \equiv 0 $, we obtain the upper branch of the hysteresis.
The lower branch is obtained by taking the self energy of the solution in the strong coupling phase
at $U=0.44$ as starting point for the DMFT cycle.
The coexistence of two solutions is a strong hint that a first order phase transition occurs.
It should be noted that beginning at a value of $U\approx0.34$, the upper branch of the hysteresis
becomes unstable, i.e., the inherent fluctuations of the Monte Carlo results suffice to make the solution drop from the
upper branch of the hysteresis to the lower branch after a certain number of iterations. Increasing
the number of Monte Carlo measurements delays the drop to the lower branch to a higher number of
iterations. This behavior can be understood in the following way: In the coexistence region, the
grand potential $\Omega$ of the upper and lower branch of the hysteresis cross at a certain value of
$U$. For small values of $U$, $\Omega$ is minimal on the upper branch, while the lower branch is
metastable, for larger values of $U$, however, the stable solution is the lower branch.
In the strong coupling phase and on the lower branch of the hysteresis, the Monte Carlo results
suddenly develop a finite magnetization corresponding to a frozen spin. This is due to divergent
autocorrelation times in the Monte Carlo simulation and is linked to the physical formation of a
local moment.
\subsection{Local dynamical spin structure factor}
\label{subsec:DMFT-spin}
To further classify the weak and strong coupling phases, we calculate the local dynamical spin
structure factor $S(\omega) = \frac{1}{N} \sum_\vec{q} S(\vec{q},\omega)$. The Lehmann
representation for $S(\omega)$ is given by Eq. (\ref{eq:spinstructure-lehmann_2}), where in this
case $S_+=S_+^{f,i}$.
\begin{figure}[h]
\begin{center}
\resizebox{\columnwidth}{!}{ \input{tex-DMFT-spinstructure-U034.tex} }
\end{center}
\caption{Dynamical spin structure factor for the upper and the lower branch of the
hysteresis in Fig. \ref{fig:dmft-double-occ-hyst}. Clearly, the upper branch of the
hysteresis corresponds to a singlet solution, while the lower branch shows a local moment.
}
\label{fig:dmft-spinstructure-hysteresis}
\end{figure}
As in the impurity case, $S(\omega)$ is a measure of the energy needed to flip the spin on an
$f$-site. Figure \ref{fig:dmft-spinstructure-hysteresis} shows the result for the local dynamical
spin structure factor on both branches of the hysteresis. The solution corresponding to the upper
branch of the hysteresis is linked to the weak coupling regime and shows a characteristic energy
scale required for flipping a spin.
The lower branch of the hysteresis represents the strong coupling phase and shows a clear local
moment peak in the dynamical spin structure factor at $\omega=0$.
This behavior reflects exactly the single impurity physics discussed in the previous section where
we observed the Kondo effect in the weak coupling phase and the formation of a local moment in the
strong coupling phase.
\subsection{f-Density of states}
In order to investigate the behavior of the $f$-bands at the phase boundary and to be able to
compare with the single impurity model, we calculate the density of states for the $f$-sites
$\rho_{\text{ff}}$ directly from the local Green's function $G(\tau)$ using the stochastic analytic
continuation method for different values of $U$. From Fig. \ref{fig:dmft-dos}, one can recognize the signature
of the impurity physics (see Sec. \ref{subsec:impurity-spectral}), namely the crossing of Andreev bound states in the
vicinity of the first order transition at $U\approx0.35$.
Note that we have only shown the level crossing for the impurity model when $\Delta$ is varied, but
for varying $U$, the crossing of the Andreev bound states in the impurity model
(\ref{eq:qd_supercond_hamiltonian}) has been observed by Bauer et al. \cite{0953-8984-19-48-486211}.
Clearly in the lattice model, one expects the Andreev bound states to acquire a dispersion relation which shows up
as a finite width in $\rho_{\text{ff}}$.
\begin{figure}[h]
\begin{center}
\resizebox{\columnwidth}{!}{ \input{tex-DMFT-ff-DOS-B100-3d.tex} }
\end{center}
\vspace{-1cm}
\caption{Density of states for the f-electrons as a function of $U$ for the parameters
$V=0.5$, $\Delta=2$, $\mu=\epsilon_f=0$ and $\beta=100$.}
\label{fig:dmft-dos}
\end{figure}
\subsection{Dispersion relation of Andreev bound states}
We have seen in the previous subsections that the local physics of the single impurity model can
be carried over to the lattice case within the DMFT approximation.
Here, we concentrate on unique features of the lattice model (\ref{eq:BCS_PAM}), namely the
dispersion relation of the f-bands as obtained by analyzing the single particle spectral function.
Using the local self-energy of the DMFT, $\mat{\Sigma^{ff}}(i\omega_n)$, this quantity is extracted from the Green
functions
\begin{equation}
\mat{G^{ff}_{\vec{k}\vec{k}}}(i\omega_n) =
\left[\mat{G^{0,ff}_{\vec{k}\vec{k}}}(i\omega_n)^{-1} - \mat{\Sigma^{ff}}(i\omega_n)
\right]^{-1}.
\label{eq:dmft_g_ff_kk}
\end{equation}
and
\begin{equation}
\mat{G_{\vec{k}\vec{k}}^{{cc}}}(i\omega_n) =
\mat{G_{\vec{k}\vec{k}}^{{0,cc}}}(i\omega_n) - \mat{G_{\vec{k}\vec{k}}^{{0,cf}}}(i\omega_n)
\mat{G_{\vec{k}\vec{k}}^{{ff}}}(i\omega_n)
\mat{G_{\vec{k}\vec{k}}^{{0,fc}}}(i\omega_n) .
\label{eq:dmft_g_cc_kk}
\end{equation}
where $ \mat{G_{\vec{k}\vec{k}}^{{0,cc}}}(i\omega_n) $, $ \mat{G_{\vec{k}\vec{k}}^{{0,ff}}}(i\omega_n) $,
$ \mat{G_{\vec{k}\vec{k}}^{{0,cf}}}(i\omega_n) $, $ \mat{G_{\vec{k} \vec{k}}^{{0,fc}}} (i\omega_n) $
denote the noninteracting Green functions for the corresponding orbitals in the unit cell.
Using the stochastic analytic continuation, these Green's functions can be rotated to real frequencies,
yielding in principle the spectral function $\mat{A}(\vec{k},\omega)$. For each
$\vec{k}$-point and real frequency this quantity is a $4\times4$ matrix since we have a $2\times2$ Nambu spectral
function for each combination of $f$ and $c$ orbitals. Our analysis of the spectral function is based on
the basis independent quantity $A(\vec{k},\omega)= \tr \mat{A}(\vec{k},\omega)$.
\begin{figure}[h]
\begin{center}
\resizebox{\columnwidth}{!}{ \input{tex-DMFT-spectrum-Trace-U0125-3d.tex} }
\end{center}
\vspace{-1cm}
\caption{Trace of the spectral function $A(\vec{k},\omega)$ at $\beta=100$ in the singlet regime. The
parameters of the simulation were given by $U=0.125$, $V=0.5$, $\Delta=2$ and
$\mu=\epsilon_f=0$. }
\label{fig:dmft-spectral-singlet}
\end{figure}
Fig. \ref{fig:dmft-spectral-singlet} plots this quantity in the singlet phase.
The overall structure of the spectral function is similar to the structure observed for
the bare BCS-PAM characterized by the four bands:
\begin{equation}
E_{\pm,\pm} (\vec{k}) =
\pm \sqrt{ V^2 + E^{2}(\vec{k})/2 \pm E(\vec{k}) \sqrt{ V^2 + E^2(\vec{k}) /4 } }
\end{equation}
where $ E(\vec{k}) = \sqrt{ \xi^2(\vec{k}) + \Delta^2} $. The bands with dominant c-character,
$ E^{c}_{\pm}(\vec{k}) \equiv E_{\pm,+}(\vec{k}) $,
at high frequencies are well separated from the bands of dominant $f$-character at low frequencies,
$ E^{f}_{\pm}(\vec{k}) \equiv E_{\pm,-}(\vec{k}) $. For the considered bare parameters, $V$ is the smallest scale and sets
the magnitude of the dispersion relation of the $f$-band. In particular, expanding in $V$ gives:
\begin{equation}
E^{f}_{\pm}(\vec{k}) = \pm \frac{V^2}{E(\vec{k})} + {\cal O}
\left( \frac{V^4}{E(\vec{k})^3} \right)
\label{E_f_BCS_PAM.Eq}
\end{equation}
Starting from the point of view of the impurity model, which as seen above accounts very well for the overall form of the
k-integrated $f$-spectral function, $ E^{f}_{\pm}(\vec{k}) $ may be perceived as the dispersion relation of the
Andreev bound states.
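For illustration, the bare ($U=0$) band structure can be evaluated numerically; a minimal NumPy sketch (our own, with lattice constant $a=1$; only the two positive branches are returned, the negative ones follow by symmetry):
\begin{verbatim}
import numpy as np

def bcs_pam_bands(kx, ky, t=1.0, V=0.5, Delta=2.0):
    # conduction band dispersion and Bogoliubov quasiparticle energy
    xi = -2.0 * t * (np.cos(kx) + np.cos(ky))
    E = np.sqrt(xi**2 + Delta**2)
    inner = np.sqrt(V**2 + E**2 / 4.0)
    E_c = np.sqrt(V**2 + E**2 / 2.0 + E * inner)  # E_{+,+}, c-like
    E_f = np.sqrt(V**2 + E**2 / 2.0 - E * inner)  # E_{+,-}, f-like
    return E_c, E_f
\end{verbatim}
For small $V$, \texttt{E\_f} indeed approaches $V^2/E(\vec{k})$, reproducing Eq. (\ref{E_f_BCS_PAM.Eq}).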
\begin{figure}[h]
\begin{minipage}[h]{\columnwidth}
\resizebox{\columnwidth}{!}{ \input{tex-DMFT-free-spectrum-Trace-zoomed-3d.tex} }
\end{minipage}
\begin{minipage}[h]{\columnwidth}
\vspace{-1.8cm}
\resizebox{\columnwidth}{!}{ \input{tex-DMFT-spectrum-Trace-U0125-zoomed-3d.tex} }
\end{minipage}
\begin{minipage}[h]{\columnwidth}
\vspace{-1.8cm}
\resizebox{\columnwidth}{!}{ \input{tex-DMFT-spectrum-ff-U02-3d.tex} }
\end{minipage}
\begin{minipage}[h]{\columnwidth}
\vspace{-1.8cm}
\resizebox{\columnwidth}{!}{ \input{tex-DMFT-spectrum-Trace-U0275-zoomed-3d.tex} }
\end{minipage}
\vspace{-1cm}
\caption{Trace of the spectral function $A(\vec{k},\omega)$ at $\beta=100$ in the singlet
regime for increasing interaction $U$. The width of the $f$-bands clearly decreases and the
dispersion becomes weaker.
The parameters of the simulations were given by $V=0.5$, $\Delta=2$ and
$\mu=\epsilon_f=0$. }
\label{fig:dmft-spectral-singlet-zoom}
\end{figure}
The singlet phase is continuously connected to the $U=0$ point. Starting from this limit, we can account for
the Hubbard $U$ within a slave boson approximation \cite{Kotliar86} which will renormalize the hybridization
matrix element to lower values. Owing to Eq. \ref{E_f_BCS_PAM.Eq} this suppresses the dispersion relation of the $f$-electrons. This
aspect is clearly observed in Fig. \ref{fig:dmft-spectral-singlet-zoom}.
\begin{figure}[h]
\begin{center}
\begin{minipage}[h]{\columnwidth}
\resizebox{\columnwidth}{!}{ \input{tex-DMFT-spectrum-Trace-U05-zoomed-3d.tex} }
\end{minipage}
\begin{minipage}[h]{\columnwidth}
\vspace{-1.8cm}
\resizebox{\columnwidth}{!}{ \input{tex-DMFT-spectrum-Trace-U055-zoomed-3d.tex} }
\end{minipage}
\end{center}
\vspace{-0.7cm}
\caption{Trace of the spectral function $A(\vec{k},\omega)$ at $\beta=100$ in the doublet
regime for different values of $U$. Here, we only show the $f$-bands.
The parameters of the simulation were given by $V=0.5$, $\Delta=2$ and
$\mu=\epsilon_f=0$. }
\label{fig:dmft-spectral-doublet-zoom}
\end{figure}
In the doublet phase, $U > U_c$, the paramagnetic slave-boson mean-field approach fails. In this state,
the $f$-spin is frozen and in the DMFT cycle we have imposed spin symmetric baths thereby inhibiting magnetic
ordering.
The QMC data of Fig. \ref{fig:dmft-spectral-doublet-zoom} points to a very incoherent $f$-spectral function.
It is therefore tempting to model this state in terms of spin disorder: the spin of the
$f$-electrons on each site
is static and points in a random direction. To provide some support for this picture
we stay in the dynamical mean field framework but
consider a mean-field
decomposition of the Hubbard term in the action of the impurity problem:
\begin{equation}
U \left( \tilde{n}_{f,\uparrow} - \frac{1}{2} \right) \left( \tilde{n}_{f,\downarrow} - \frac{1}{2} \right) \rightarrow
-\frac{Um_z}{2}\left( \tilde{n}_{f,\uparrow} - \tilde{n}_{f,\downarrow} \right)
\end{equation}
This mean field approximation accounts for the local moment formation with $z$-component of spin $m_z$. The corresponding
mean-field action of the impurity model now reads:
\begin{equation}
S_{MF} = \int \limits_{0}^{\beta} {\rm d} \tau \int \limits_{0}^{\beta} {\rm d} \tau' \tilde{\vec{f}}^{\dagger}(\tau)
{\matgr{\cal G} }^{-1}(\tau - \tau') \tilde{\vec{f}}(\tau')
-\frac{U m_z}{2} \int \limits_{0}^{\beta} \mathrm{d} \tau \tilde{\vec{f}}^{\dagger}(\tau) \tilde{\vec{f}}(\tau)
\label{Eq:S_eff_mz}
\end{equation}
where $ \tilde{\vec{f}}^{\dagger} = \left(\tilde{f}^{\dagger}_{\uparrow} , \tilde{f}_{\downarrow} \right) $ and
$ \matgr{\cal G }(\tau - \tau')$ corresponds to the bath Green function. To account for disorder,
the $z$-component of the
f-spin is sampled from the box distribution $ m_z \in [-M_z, M_z] $.
Averaging over disorder at each iteration in the DMFT cycle yields the spectral function shown in Fig. \ref{fig:CPA}.
As apparent, the disorder average generates a finite lifetime.
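A minimal sketch of this disorder average (our own illustration; \texttt{solve\_mf} is a hypothetical placeholder that returns the f-Green function of the quadratic action (\ref{Eq:S_eff_mz}) at fixed $m_z$):
\begin{verbatim}
import numpy as np

def disorder_averaged_G(solve_mf, M_z, n_samples=200, seed=0):
    # sample local moments m_z from the box distribution [-M_z, M_z]
    rng = np.random.default_rng(seed)
    m_samples = rng.uniform(-M_z, M_z, size=n_samples)
    # arithmetic average of the Green functions over the samples
    return sum(solve_mf(m) for m in m_samples) / n_samples
\end{verbatim}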
\begin{figure}[h]
\begin{center}
\resizebox{\columnwidth}{!}{ \input{tex-CPA-spec-3d.tex} }
\end{center}
\vspace{-0.7cm}
\caption{Trace of the spectral function $A(\vec{k},\omega)$ as obtained from using Eq.
\ref{Eq:S_eff_mz} for the impurity action.
The z-component of the local moment is sampled from the box distribution $m_z \in [-M_z, M_z] $.
The parameters used for this plot were given by $V=0.5$, $U=0.5$, $\Delta=2$ and
$M_z=0.0375$. Here, the calculations are carried out on the real time axis such that no analytical
continuation is required.
\label{fig:CPA} }
\end{figure}
\section{Introduction}
Magnetic degrees of freedom in superconducting environments have attracted considerable interest due to the
underlying competing effects. Already a classical spin oriented along the $z$-axis
\cite{shiba.spin.supercond,sakurai.supercond.mag.imp} embedded in a superconducting host generates a localized
state within the superconducting gap. As a function of the interaction strength this excitation
crosses the Fermi energy thereby triggering a first order transition
between a ground state with
vanishing total electronic spin and a ground state with nonzero total electronic spin.
For a quantum spin, the Kondo effect sets in. Being a Fermi surface instability, the opening of the superconducting
gap competes with Kondo screening and ultimately leads to a local moment regime. This transition is accompanied by
a $0$ to $\pi$ phase shift in the Josephson current. In the local moment regime the $\pi$-shift
occurs since a Cooper pair tunneling through the junction necessarily accumulates a phase $\pi$
\cite{josephson.effect,kulik-pi-shift,glazman-matveev,PhysRevB.43.3740}.
The interest in the problem has been renewed in the last decade by the rapid progress in nanotechnology
which made a direct experimental realization of quantum dots
coupled to superconducting leads feasible, so that many experiments have been designed to directly measure
the $0$ to $\pi$ transition of the Josephson current. Experiments using a carbon
nanotube\cite{eichler:161407,wernsdorfer-squid-cnt,0-pi-transition-jorgensen} but also InAs
nanowires\cite{vandam-supercurrent-reversal} as a
quantum dot coupled to superconducting leads were able to observe the sign change of the Josephson
current by increasing the gate
voltage and thus manipulating the number of electrons on the quantum dot.
The effect of the changing electron number on the behavior of such systems has been extensively
studied\cite{1367-2630-9-5-124,rgensen:207003,sand-jespersen:126603,eichler:126602} and the
theoretical expectation of the collapse of the Kondo effect if the superconducting gap $\Delta$
exceeds the Kondo temperature $T_K$ has been confirmed by experiments of Buitelaar et al.\cite{PhysRevLett.89.256801}.
From the numerical point of view, a combination of algorithmic development and computational power
has allowed for a more detailed study of the problem using the numerical renormalization
group\cite{PhysRevB.70.020502,JPSJ.73.2494,0953-8984-19-48-486211,0953-8984-20-27-275213},
quantum Monte Carlo
simulations\cite{PhysRevLett.93.047002,PhysRevLett.94.039902,PhysRevLett.94.229702} as well as
functional renormalization group calculations\cite{karrasch:024517}.
Most numerical works in the literature present either the study of the Josephson current
\cite{karrasch:024517,PhysRevLett.93.047002,PhysRevLett.94.039902,PhysRevB.70.020502} or the study
of the spectral properties of the quantum dot \cite{0953-8984-19-48-486211}.
One of the goals of this article is to use the weak coupling CTQMC method \cite{rubtsov:035122}
to compute the Josephson current as well as the spectral functions for the same parameter set in order to present
a comprehensive study of the $0$ to $\pi$ transition of a Josephson quantum dot. Our numerically exact data
clearly confirms the picture of a first order phase transition from a singlet phase linked to the
$0$-junction regime of the Josephson current to a doublet phase corresponding to the $\pi$-junction regime.
In addition to numerical efforts, many analytical approximations have been introduced to tackle
different aspects of the physics of the problem. The non-crossing approximation has been used to
show that Andreev bound states crossing the Fermi energy are connected to the $0$ to $\pi$
transition of the Josephson current\cite{PhysRevB.61.9109}.
Perturbative methods as well as mean field theory have brought a quite complete understanding of the
phase diagram featuring the $0$ and $\pi$ phases as well as the intermediate phases $0'$ and
$\pi'$\cite{PhysRevB.62.6687,PhysRevB.68.035105,meng:224521}.
Another method employed by several authors is the introduction of different analytically solvable effective models,
which are valid in different limits \cite{PhysRevB.68.035105,meng:224521,0953-8984-19-48-486211}. These models
are very useful to acquire an intuitive understanding of the physics.
We will present the study of an effective Hamiltonian for the limit of a superconducting gap
$\Delta$ much larger than the bandwidth to support the interpretation of the CTQMC data.
Another motivation of this paper is to study, within dynamical mean field theory (DMFT) \cite{RevModPhys.68.13},
the periodic Anderson model with an s-wave BCS-conduction band (BCS-PAM). Within this approximation, the BCS-PAM
maps onto the single impurity Anderson model with superconducting baths supplemented with a self-consistency
condition. We will show that the physics of the impurity model can be taken over to the lattice case.
In particular, the first order transition observed in the impurity model
is reproduced in the BCS-PAM and is signaled by the crossing of the low energy excitations in the local
density of states. The momentum resolved single particle spectral function in the singlet phase reveals
the coherent, Bloch-like, superposition of Andreev bound states. In the doublet or local moment phase the single
particle spectral function is
characterized by incoherent quasiparticle excitations. We provide an understanding of this in terms of
models of disorder.
The paper is organized as follows. After introducing the model in Sec. \ref{sec:impurity-model},
we discuss in Sec. \ref{sec:toy-model} an effective toy model valid in the limit of a
superconducting gap, $\Delta$, much larger than the bandwidth $W$. This simple toy model goes a long way
toward explaining certain aspects of the underlying physics.
A brief outline of the employed CTQMC method, including the proof of Wick's theorem for each
configuration in the Monte Carlo simulation, will be presented in Sec. \ref{sec:CTQMC}.
The results of the toy model are then compared to the results of the
CTQMC simulation, which are discussed in detail in Sec. \ref{sec:NumericalResults}.
Sec. \ref{sec:DMFT} is dedicated to the study of the BCS-PAM within DMFT.
We include an appendix \ref{sec:proof_of_det_identity} featuring the proof of a general
determinant identity needed for the proof of Wick's theorem for every configuration in the CTQMC.
\section{Model}
\label{sec:impurity-model}
The physics of a quantum dot coupled to two superconducting leads (L=left, R=right) via a hybridization
term is captured by the single impurity Anderson model with the leads described
by the BCS mean-field Hamiltonian:
\begin{equation}
\label{eq:qd_supercond_hamiltonian}
\tilde{H} = \sum _ {\alpha=L} ^{R} \tilde{H}_{0, \alpha} + \tilde{H}_d + \tilde{H}_V,
\end{equation}
with
\begin{equation}
\begin{split}
&\tilde{H}_{0,\alpha} = \sum_{k,\sigma} \xi_k \tilde{c}^\dagger_{k,\sigma, \alpha} \tilde{c}_{k,
\sigma, \alpha} \\ & \quad \quad- \sum_{k} \left( {\Delta} \mathrm{e}^{i \phi_{\alpha}} \tilde{c}^\dagger_{k,\uparrow,\alpha}
\tilde{c}^\dagger_{-k,\downarrow,\alpha} + \text{h.c.} \right),\\
&\tilde{H}_d=\sum \limits_\sigma \xi_d \tilde{d}^\dagger_\sigma \tilde{d}_\sigma + U \left(
\tilde{d}^\dagger_\uparrow \tilde{d}_\uparrow - \frac{1}{2} \right)\left( \tilde{d}^\dagger_\downarrow \tilde{d}_\downarrow
- \frac{1}{2} \right), \\
& \tilde{H}_V= -\frac{V}{\sqrt{N}} \sum \limits_{\alpha=L}^R \sum \limits_{\sigma, k} \left(
\tilde{c}^\dagger_{k,\sigma,\alpha } \tilde{d}_\sigma + \tilde{d}^\dagger_\sigma
\tilde{c}_{k,\sigma,\alpha } \right).
\end{split}
\end{equation}
The operators $\tilde{c}^\dagger_{k,\sigma,\alpha}$ are creation operators for electrons with a
$z$-component of the spin
$\sigma$ and momentum $k$ in lead $\alpha$, $\tilde{d}^\dagger_{\sigma}$ is a creation operator of
an electron with a $z$-component of the spin $\sigma$ on the quantum dot. $\xi_k=\epsilon(k) - \mu = -2t \cos(k) -\mu$ is
the dispersion relation for the electrons in the leads, where we assume that the dispersion is
independent of the lead index $\alpha$, and $\xi_d=\epsilon_d-\mu$ is the position of the dot level.
Throughout this paper, we will express all quantities in units of $t=1$.
The superconducting order parameter has a modulus $\Delta$ and a phase $\phi_\alpha$.
The parameter $V$ characterizes the
strength of the hybridization, and $U$ corresponds to the Coulomb blockade.
Since the Hamiltonian does not conserve the electron number as a consequence of the BCS-term, we use the standard trick of rewriting the Hamiltonian in terms of creation and annihilation operators of quasiparticles, which for spin up are identical to the electrons, but correspond to holes in the spin down sector. This can also be expressed as a canonical transformation:
\begin{equation}
\label{eq:canonical_transformation}
\tilde{d}^\dagger_\uparrow \rightarrow d^\dagger_\uparrow, \,\, \tilde{d}^\dagger_\downarrow \rightarrow d_\downarrow,\,\, \tilde{c}^\dagger_{k,\uparrow,\alpha} \rightarrow c^\dagger_{k,\uparrow,\alpha}, \,\, \tilde{c}^\dagger_{-k,\downarrow,\alpha} \rightarrow c_{k,\downarrow,\alpha}.
\end{equation}
Using the new operators, the Hamiltonian can be written in a Nambu notation:
\begin{equation}
\label{eq:qd_bcs_hamiltonian_nambu}
\begin{split}
&H = H_0+H_U = \sum_{k,\alpha} \vec{c}_{k,\alpha}^\dagger \mat{E}_\alpha(k) \vec{c}_{k,\alpha} +\vec{d}^\dagger \matgr{\epsilon}_d \vec{d} \\&- \frac{V}{\sqrt{N}} \sum \limits_{k,\alpha} \left( \vec{c}_{k,\alpha}^\dagger \matgr{\sigma}_z \vec{d} + \vec{d}^\dagger \matgr{\sigma}_z \vec{c}_{k,\alpha} \right) + H_U
\end{split}
\end{equation}
with $H_U=-U(d^\dagger_\uparrow d_\uparrow - \frac{1}{2})(d^\dagger_\downarrow d_\downarrow - \frac{1}{2})$, the Nambu spinors
\begin{equation}
\vec{d}=\begin{pmatrix}
d_\uparrow\\
d_\downarrow\\
\end{pmatrix},\quad \vec{c}_{k,\alpha}=\begin{pmatrix}
c_{k,\uparrow,\alpha}\\
c_{k,\downarrow,\alpha}\\
\end{pmatrix},
\end{equation}
the matrices
\begin{equation}
\mat{E}_\alpha(k) = \begin{pmatrix}
\xi_k & - \Delta \mathrm{e}^{i \phi_\alpha} \\
- \Delta \mathrm{e}^{-i \phi_\alpha} & -\xi_k
\end{pmatrix}, \quad
\matgr{\epsilon}_d = \begin{pmatrix}
\xi_d & 0\\
0 & -\xi_d\\
\end{pmatrix}
\end{equation}
and the Pauli matrix
\begin{equation}
\matgr{\sigma}_z=\begin{pmatrix}
1 & 0 \\
0 & -1 \\
\end{pmatrix}.
\end{equation}
For practical reasons, we use the following definition for the single particle Green's function
throughout Sec. \ref{sec:impurity-model} to Sec. \ref{sec:NumericalResults}:
\begin{equation}
G_{dd}^{\sigma \sigma'} ( i \omega_m)= \int \limits_0 ^\beta \mathrm{d} \tau \exp(i \omega_m \tau)
\thavg{T d^\dagger_\sigma (\tau) d_{\sigma'} }.
\label{eq:Green-function-definition}
\end{equation}
With this definition, the resolvent operator $ \mat{G}^0(i \omega_m) = \left( - i \omega_m \mat{1} -
\mat{H}_0^T \right)^{-1}$ can be used to obtain the Green's function of the noninteracting system:
\begin{equation}
\label{eq:greens_function_Gdd}
\begin{split}
\mat{G}_{dd}^0(i\omega_n)^{-1} &= (-i\omega_n \mat{1} - \matgr{\epsilon}_d) \\
&+ \frac{V^2}{N} \sum \limits_{\alpha, k} \matgr{\sigma}_z \left( i \omega_n \mat{1} +
\mat{E}_\alpha^T(k) \right)^{-1} \matgr{\sigma}_z .
\end{split}
\end{equation}
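For illustration, the momentum sum in Eq. (\ref{eq:greens_function_Gdd}) can be evaluated by brute force. The following Python sketch (our own illustration, not part of the production code; all parameter values are assumptions chosen for demonstration) computes $\mat{G}^0_{dd}(i\omega_n)$ for the one-dimensional band $\xi_k=-2t\cos(k)-\mu$:
\begin{verbatim}
import numpy as np

t, mu, V, Delta, xi_d = 1.0, 0.0, 0.5, 1.0, 0.0   # illustrative values
phis = {'L': 0.0, 'R': 0.0}                        # lead phases phi_alpha
N = 4096                                           # k points per lead
k = 2.0 * np.pi * np.arange(N) / N
xi_k = -2.0 * t * np.cos(k) - mu

sz = np.diag([1.0, -1.0])                          # Pauli matrix sigma_z
eps_d = np.diag([xi_d, -xi_d])

def G0_dd(iwn):
    """Invert Eq. (greens_function_Gdd) at the Matsubara point iwn."""
    Ginv = -iwn * np.eye(2) - eps_d
    for phi in phis.values():
        for xk in xi_k:
            E = np.array([[xk, -Delta * np.exp(1j * phi)],
                          [-Delta * np.exp(-1j * phi), -xk]])
            Ginv += (V**2 / N) * sz @ np.linalg.inv(iwn * np.eye(2) + E.T) @ sz
    return np.linalg.inv(Ginv)

beta = 50.0
print(G0_dd(1j * np.pi / beta))   # first fermionic Matsubara frequency
\end{verbatim}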
\section{Effective Hamiltonian in the Limit $\Delta/W \rightarrow \infty$}
\label{sec:toy-model}
To gain a deeper understanding of the physics on the quantum dot, it is useful to search for analytically solvable toy models. We will study an effective model, which reproduces the physics of the Hamiltonian (\ref{eq:qd_supercond_hamiltonian}) in the limit $\Delta/W \rightarrow \infty$, where $W$ is the bandwidth.
To derive the effective model, we look at the limit $\Delta\rightarrow \infty$ of the Green's
function in Eq. (\ref{eq:greens_function_Gdd}). The superconducting order parameter $\Delta$ appears only in the matrix $\mat{E}_\alpha(k)$; thus we examine the behavior of this matrix for large values of $\Delta$. This can easily be done by diagonalizing $\mat{E}_\alpha(k)$ for $\phi_\alpha=0$:
\begin{equation}
\label{eq:diagonalization_Ek}
\mat{E}_\alpha(k) = \mat{U}_\Delta^{-1} \begin{pmatrix}
-\sqrt{\Delta^2 +\xi_k^2} & 0 \\ 0 & \sqrt{\Delta^2 +\xi_k^2}
\end{pmatrix} \mat{U}_\Delta.
\end{equation}
Let us first look at the limit $\Delta \rightarrow \infty$ of the transformation matrix $\mat{U}_\Delta$,
which, for brevity, we have not normalized to be unitary:
\begin{equation}
\mat{U}_\Delta = \begin{pmatrix}
-\frac{\xi_k - \sqrt{\Delta^2 + \xi_k^2}}{\Delta} & 1 \\
-\frac{\xi_k + \sqrt{\Delta^2 + \xi_k^2}}{\Delta} & 1
\end{pmatrix} \Rightarrow
\mat{U}_\infty = \begin{pmatrix}
1 & 1 \\ -1 & 1
\end{pmatrix}.
\end{equation}
The diagonal matrix in Eq. (\ref{eq:diagonalization_Ek}) can be considered in a similar manner and we obtain for $\lim \limits_{\Delta \rightarrow \infty}\mat{E}_\alpha(k)= \mat{E}_\infty$:
\begin{equation}
\mat{E}_\infty = \mat{U}_\infty^{-1} \begin{pmatrix}
-\Delta & 0 \\ 0 & \Delta
\end{pmatrix} \mat{U}_\infty =
\begin{pmatrix}
0 & -\Delta \\ - \Delta & 0
\end{pmatrix}.
\end{equation}
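The limits quoted above are easily checked symbolically. A short sympy sketch (an illustrative aside, not part of the original derivation) reproduces $\mat{U}_\infty$ and $\mat{E}_\infty$:
\begin{verbatim}
import sympy as sp

xi, D = sp.symbols('xi Delta', positive=True)

# U_Delta and its entrywise Delta -> infinity limit
U = sp.Matrix([[-(xi - sp.sqrt(D**2 + xi**2)) / D, 1],
               [-(xi + sp.sqrt(D**2 + xi**2)) / D, 1]])
U_inf = U.applyfunc(lambda e: sp.limit(e, D, sp.oo))
print(U_inf)                      # Matrix([[1, 1], [-1, 1]])

# check that U_inf^{-1} diag(-Delta, Delta) U_inf reproduces E_infty
E_inf = sp.simplify(U_inf.inv() * sp.diag(-D, D) * U_inf)
print(E_inf)                      # Matrix([[0, -Delta], [-Delta, 0]])
\end{verbatim}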
Using this result, for large values of $\Delta$ the sum over $k$ and $\alpha$ in Eq. (\ref{eq:greens_function_Gdd}) can be carried out, yielding
\begin{equation}
\label{eq:greens_function_Gdd_eff}
\mat{G}_{dd}^{0,\infty}(i\omega_n)^{-1} = (-i\omega_n \mat{1} - \matgr{\epsilon}_d) + 2 V^2 \matgr{\sigma}_z \left( i \omega_n \mat{1} + \mat{E}_\infty \right)^{-1} \matgr{\sigma}_z.
\end{equation}
This is exactly the free Green's function obtained from a Hamiltonian of the form:
\begin{equation}
\label{eq:Heff}
H_\text{eff} = -\sqrt{2} V ( \vec{c}^\dagger \matgr{\sigma}_z \vec{d} + \vec{d}^\dagger \matgr{\sigma}_z \vec{c} ) + \vec{c}^\dagger \mat{E}_\infty \vec{c} + \vec{d}^\dagger \matgr{\epsilon}_d \vec{d} +H_U.
\end{equation}
$H_\text{eff}$ describes a system consisting of one bath site $c$ connected by a hybridization term to the correlated quantum dot $d$. The dispersion of the bath has completely vanished, as the superconducting band gap becomes much larger than the bandwidth.
We choose a basis of the 16-dimensional Hilbert space and write the Hamiltonian
as a matrix, which subsequently can be diagonalized. As we have restricted the
parameter space for the Monte Carlo simulations to $\epsilon_d = 0$ and $\mu=0$
in the original Hamiltonian of Eq. (\ref{eq:qd_supercond_hamiltonian}),
we will use the same parameters for the exact diagonalization results.
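The diagonalization itself is elementary. A minimal Python sketch (our own illustration; the Jordan-Wigner construction and the scan range are assumptions, while the parameters $V=0.5$, $\Delta=1$ follow Fig. \ref{fig:level_crossing_eigenenergies}) reads:
\begin{verbatim}
import numpy as np

def annihilators(n_modes):
    """Jordan-Wigner fermion annihilation operators on n_modes."""
    I2, Z = np.eye(2), np.diag([1.0, -1.0])
    a = np.array([[0.0, 1.0], [0.0, 0.0]])        # a|1> = |0>
    ops = []
    for j in range(n_modes):
        factors = [Z] * j + [a] + [I2] * (n_modes - j - 1)
        M = factors[0]
        for f in factors[1:]:
            M = np.kron(M, f)
        ops.append(M)
    return ops

# quasiparticle modes: c_up, c_dn (bath) and d_up, d_dn (dot)
c_up, c_dn, d_up, d_dn = annihilators(4)
dag = lambda A: A.conj().T
I16 = np.eye(16)

def H_eff(U, V=0.5, Delta=1.0, xi_d=0.0):
    hyb = -np.sqrt(2.0) * V * (dag(c_up) @ d_up - dag(c_dn) @ d_dn)
    H = hyb + dag(hyb)                                    # hybridization
    H += -Delta * (dag(c_up) @ c_dn + dag(c_dn) @ c_up)   # c^dag E_inf c
    H += xi_d * (dag(d_up) @ d_up - dag(d_dn) @ d_dn)     # d^dag eps_d d
    n_up, n_dn = dag(d_up) @ d_up, dag(d_dn) @ d_dn
    H += -U * (n_up - 0.5 * I16) @ (n_dn - 0.5 * I16)     # H_U
    return H

# scan U and watch the two lowest eigenvalues cross
for U in np.linspace(0.0, 3.0, 13):
    E = np.linalg.eigvalsh(H_eff(U))
    print(f"U = {U:4.2f}   E0 = {E[0]:+.4f}   E1 = {E[1]:+.4f}")
\end{verbatim}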
\begin{figure}
\resizebox{\columnwidth}{!}{\input{tex-Heff-eigenvalues-U.tex}}
\vspace{-0.7cm}
\caption{\label{fig:level_crossing_eigenenergies} (Color online) Eigenenergies of the effective Hamiltonian (\ref{eq:Heff}) for varying $U$. The fixed parameters are given by $V=0.5$ and $\Delta=1$. The crossing of the two lowest levels is clearly seen at $U \approx 1.7$. The ground state for $U < 1.7$ is a singlet state. For larger values of $U$, the twofold degenerate doublet state becomes energetically more favorable. }
\end{figure}
\begin{figure}
\resizebox{\columnwidth}{!}{\input{tex-weightrunning.tex}}
\vspace{-0.7cm}
\caption{\label{fig:weightrunning} (Color online) For
$\Delta < 1.412$ the ground state is the singlet state from Eq.
(\ref{eq:singlet-ground-state}). If $\Delta$ is increased, the weight $\alpha$ of the singly
occupied states $\ket{\tilde \uparrow, \tilde \downarrow}$ and $\ket{ \tilde \downarrow,\tilde \uparrow}$ decreases in
favor of the states with a doubly occupied quantum dot, corresponding to the weights $\beta$ and
$\gamma$. At $\Delta = 1.412$ the ground state changes to the twofold degenerate doublet state given
in (\ref{eq:doublet-ground-state}), and the weight $b$ of the states with a singly occupied quantum dot
increases with $\Delta$. The parameters in this plot are $V=0.5$ and $U=1.0$. }
\end{figure}
\subsection{Ground state of the effective model}
The ground state of the system (\ref{eq:Heff}) can be determined by diagonalizing the Hamiltonian $H_\text{eff}$. As
depicted in Fig. \ref{fig:level_crossing_eigenenergies}, the energy levels cross at a critical
value of $U=U_c$ and a similar behavior can be observed by varying $\Delta$ with a corresponding
critical value $\Delta_c$. For $U<U_c$ and $\Delta<\Delta_c$, the ground state is given by
$\ket{\psi_s}= -\alpha \left( \ket{\uparrow\downarrow ,0} - \ket{0,\uparrow\downarrow} \right) - \beta \left( \ket{\uparrow,\downarrow} +
\ket{\downarrow,\uparrow} \right) - \gamma \left( \ket{\downarrow,\downarrow} + \ket{\uparrow,\uparrow} \right)$, with the notation
$c_\sigma^\dagger \ket{0,0} = \ket{\sigma,0}$ and $d_\sigma^\dagger \ket{0,0} = \ket{0,\sigma}$.
Note that we are using the unphysical basis introduced in Eq.
(\ref{eq:canonical_transformation}). To interpret this ground state it is better to return to the
physical basis by inverting the canonical transformation in Eq.
(\ref{eq:canonical_transformation}) and transforming the vacuum state $\ket{0,0} \rightarrow
\ket{\tilde{\downarrow},\tilde{\downarrow}}$. The ground state can then be rewritten in the physical basis as:
\begin{equation}
\label{eq:singlet-ground-state}
\begin{split}
\ket{\psi_s}&=\alpha \left( \ket{\tilde \downarrow, \tilde \uparrow} - \ket{\tilde \uparrow,\tilde \downarrow} \right) \\ &+
\beta \left( \ket{ \tilde 0,\tilde \uparrow \tilde \downarrow} + \ket{ \tilde \uparrow \tilde \downarrow, \tilde 0} \right)
+ \gamma \left( \ket{\tilde 0, \tilde 0} + \ket{ \tilde \uparrow \tilde \downarrow, \tilde \uparrow \tilde \downarrow}
\right).
\end{split}
\end{equation}
This state is clearly a singlet state, corresponding to a Kondo singlet between the quantum dot and
the bath with the dominant weight $\alpha$. The states representing a pairing on the quantum dot or
in the bath have the suppressed weights $\beta$ and $\gamma$ for small values of $\Delta$, but grow
more important as $\Delta$ is increased, as shown in Fig. \ref{fig:weightrunning}.
At $U > U_c$, the ground state changes and we get the twofold degenerate ground states
$\ket{\psi_{d,\uparrow}}=a \left( \ket{\uparrow \downarrow,\uparrow } - \ket{\uparrow\downarrow,\downarrow} \right) + b \left(
\ket{\uparrow,\uparrow\downarrow} + \ket{\downarrow,\uparrow\downarrow} \right) $ and $\ket{\psi_{d,\downarrow}} = a \left( \ket{ 0,\uparrow} -
\ket{0,\downarrow} \right) + b \left( \ket{\downarrow, 0} + \ket{\uparrow ,0} \right)$, rewritten in the physical basis:
\begin{equation}
\label{eq:doublet-ground-state}
\begin{split}
\ket{\psi_{d,\uparrow}} &=a \left( \ket{\tilde \uparrow ,\tilde 0} - \ket{ \tilde \uparrow,\tilde{\uparrow\downarrow}} \right)
+ b \left( \ket{\tilde 0, \tilde \uparrow } + \ket{ \tilde {\uparrow\downarrow},\tilde \uparrow} \right) \\
\ket{\psi_{d,\downarrow}} &=a \left( \ket{ \tilde \downarrow , \tilde 0} - \ket{ \tilde \downarrow, \tilde {\uparrow \downarrow}}
\right) + b \left( \ket{ \tilde 0, \tilde \downarrow } + \ket {\tilde {\uparrow\downarrow}, \tilde \downarrow} \right).
\end{split}
\end{equation}
This two-fold degenerate ground state has a $z$-component of the total spin $\pm 1/2$ and hence corresponds to a
local moment.
\subsection{Phase diagram}
\begin{figure}
\resizebox{\columnwidth}{!}{\input{tex-double-occ-delta-U-B200-3d.tex}}
\vspace{-0.7cm}
\caption{ \label{fig:double_occupancy} (Color online) Double occupancy $\thavg{\tilde d_\uparrow ^\dagger \tilde d_\uparrow \tilde d_\downarrow^\dagger \tilde d_\downarrow}$ of the quantum dot in the effective model at $\beta=200$ and $V=0.5$. This plot can be understood as a phase diagram of the effective model, as the phase boundary is accompanied by a sharp decay of the double occupancy. }
\end{figure}
To further illustrate the phase transition between the singlet state $\ket{\psi_s}$ and the doublet
states $\ket{\psi_{d,\uparrow \downarrow}}$, the double occupancy $\thavg{\tilde d_\uparrow ^\dagger \tilde d_\uparrow
\tilde d_\downarrow^\dagger \tilde d_\downarrow}$ of the quantum dot in the effective model is shown in Fig.
\ref{fig:double_occupancy}. At low temperature a very sharp drop of the double occupancy on the
phase boundary can be observed, which evolves into a jump at $T=0$.
Here the larger values of the double occupancy are connected to the
singlet phase, while the lower values belong to the doublet phase, where single occupancy is
favored. This can be understood by studying the expectation value of the double occupancy in the
ground state. In the singlet phase, we obtain
\begin{equation}
\bra{\psi_s} \tilde{d}^\dagger _{\uparrow} \tilde d_\uparrow \tilde d_\downarrow^\dagger \tilde d_\downarrow \ket{\psi_s} =
\left| \beta \right|^2 + \left| \gamma \right|^2,
\end{equation}
and for the doublet phase:
\begin{equation}
\bra{\psi_{d,\uparrow \downarrow}} \tilde{d}^\dagger _{\uparrow} \tilde d_\uparrow \tilde d_\downarrow^\dagger \tilde d_\downarrow \ket{\psi_{d,\uparrow\downarrow}} = \left| a \right|^2.
\end{equation}
From the behavior of the weights $\beta$, $\gamma$ and $a$ shown in Fig. \ref{fig:weightrunning}
it is clear that the double occupancy increases with $\Delta$ in the singlet phase and decreases in
the doublet phase.
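Continuing the exact diagonalization sketch given above, the physical double occupancy is obtained directly from the ground state vector; recall that the canonical transformation maps $\tilde{d}^\dagger_\downarrow \tilde{d}_\downarrow$ onto $1 - d^\dagger_\downarrow d_\downarrow$. The values of $U$ below are illustrative points on either side of the crossing:
\begin{verbatim}
# physical double occupancy in the quasiparticle basis:
# <tilde n_up tilde n_dn>  ->  <n_up (1 - n_dn)>
D_occ = (dag(d_up) @ d_up) @ (I16 - dag(d_dn) @ d_dn)

for U in (1.0, 2.5):              # one point in each phase (V=0.5, Delta=1)
    w, v = np.linalg.eigh(H_eff(U))
    gs = v[:, 0]                  # ground state vector
    print(f"U = {U}:  <n_up n_dn>_phys = {gs @ D_occ @ gs:.4f}")
\end{verbatim}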
Note that many of the results presented in this paper can be observed either at fixed $U$ or
fixed $\Delta$, as can be inferred from Fig. \ref{fig:double_occupancy}.
\subsection{Proximity effect}
\label{subsec:proximity_effect_eff}
To gain further insight into the sign change of the local pair correlations
$\thavg{\tilde{d}_\uparrow^\dagger \tilde{d}_\downarrow^\dagger}$
\cite{PhysRevB.70.020502,PhysRevB.55.12648,balatsky:373}, we calculate the ground state expectation value of the local pair
correlations in the effective model (\ref{eq:Heff}). For the singlet phase, we obtain
\begin{equation}
\begin{split}
\bra{\psi_s} \tilde{d}_\uparrow^\dagger \tilde{d}_\downarrow^\dagger \ket{\psi_s} &= \bra{\psi_s} \left(
\beta \ket{\tilde{\uparrow}\tilde{\downarrow},\tilde{\uparrow}\tilde{\downarrow}} + \gamma \ket{\tilde{\uparrow}
\tilde{\downarrow},\tilde{0}} \right) \\ &= 2 \text{Re} (\beta^* \gamma) \geq 0.
\end{split}
\label{eq:eff_pair_corr_singlet}
\end{equation}
Clearly, only terms describing the pairing on the quantum dot contribute to the pair correlations, whereas
the Kondo singlet of electrons on the quantum dot and in the bath does not. From Fig.
\ref{fig:weightrunning}, it is obvious that the resulting pairing
correlation is positive and increases with $\Delta$. This illustrates the proximity effect, as a
pair field in the bath induces a pair field on the quantum dot.
On the other hand, in the doublet phase, we obtain
\begin{equation}
\bra{\psi_{d,\downarrow}} \tilde{d}_\uparrow^\dagger \tilde{d}_\downarrow^\dagger \ket{\psi_{d,\downarrow}} =
\bra{\psi_{d,\downarrow}} a \ket{\tilde{\downarrow}, \tilde{\uparrow\downarrow}} = - \left| a \right|^2 <0.
\label{eq:eff_pair_corr_doublet}
\end{equation}
As in the singlet phase, only the states corresponding to a pairing on the quantum dot contribute to
the pair correlations. The local moment part of the ground state does not generate pair
correlations. As the weight $a$ in the doublet phase ground state is positive and decreases with
$\Delta$ (see Fig. \ref{fig:weightrunning}), the local pair correlations have a negative sign in
contrast to the positive sign in the singlet phase and decrease with $\Delta$.
\subsection{Spectral function}
\label{subsec:spectral_function_eff}
Using the Lehmann representation, the spectral function $A_{\uparrow\uparrow}(\omega)$ of the effective model is easily calculated. It is defined by
\begin{equation}
A_{\uparrow \uparrow}(\omega) = \frac{\pi}{Z} \sum \limits_{n,m}
M_{nm}
\left( \mathrm{e}^{-\beta E_m} \! + \! \mathrm{e}^{-\beta E_n} \right) \delta(\omega \! + \! E_n \! - \! E_m),
\end{equation}
with the matrix elements $ M_{nm} = \left|\bra{n} \tilde d_\uparrow^\dagger \ket{m} \right|^2 $.
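Given the eigenpairs from the exact diagonalization sketched above, the Lehmann sum can be evaluated directly. The following fragment is our illustration (the broadening $\sigma=0.04$ follows the figure caption, and $\tilde{d}^\dagger_\uparrow = d^\dagger_\uparrow$ under the canonical transformation):
\begin{verbatim}
beta, sigma = 200.0, 0.04
w_grid = np.linspace(-3.0, 3.0, 1201)

E, V_ = np.linalg.eigh(H_eff(U=1.0))
E -= E[0]                            # energies relative to the ground state
boltz = np.exp(-beta * E)
Zpart = boltz.sum()
Mdag = V_.conj().T @ dag(d_up) @ V_  # <n| d_up^dagger |m> in the eigenbasis

A = np.zeros_like(w_grid)
gauss = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
for n in range(16):
    for m in range(16):
        Mnm = abs(Mdag[n, m])**2
        if Mnm > 1e-12:              # broadened delta peak at omega = E_m - E_n
            A += np.pi * Mnm * (boltz[m] + boltz[n]) / Zpart \
                 * gauss(w_grid + E[n] - E[m])
\end{verbatim}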
\begin{figure}
\vspace{-0.5cm}
\resizebox{\columnwidth}{!}{\input{tex-spectralfunction-B200-3d.tex}}
\vspace{-1.5cm}
\caption{ \label{fig:spectral-effective} (Color online) Spectral function $A_{\uparrow \uparrow}(\omega)$ of the effective model for different values of $\Delta$ at $\beta=200$, $U=1$ and $V=0.5$. The $\delta$-peaks have been broadened by a Gaussian function of width $\sigma=0.04$ for better visibility. }
\end{figure}
The spectral function is shown in Fig. \ref{fig:spectral-effective}.
Comparing this plot to the numerical solution of the full model as depicted in Fig. \ref{fig:Aom-of-Delta}, we observe that the simple model already shows the important feature of an excitation at the position $\omega=0$ at the critical value of $\Delta$.
Even though for very small values of $\Delta$ the Kondo resonance at $\omega=0$ cannot be seen in
the simple model, we see a precursor of the Kondo resonance as a pole of the Green's function, which
develops into a resonance if we increase the number of sites in the bath\cite{hewson}.
A careful analysis reveals that the low frequency signature of the spectral function reflects the excitation between the two lowest lying states of the spectrum.
These states are the ground states of the singlet and the doublet phase and therefore, the position
$\omega$ of the excitation marks precisely the energy difference of the two ground states.
At the critical value of $\Delta=1.412$, the level crossing occurs and leads to a vanishing energy
difference of the two ground states, meaning that the excitation between the two states now lies
precisely at $\omega=0$.
\subsection{Dynamical spin structure}
\label{subsec:dynamical_spin_structure_eff}
Like the spectral function, the dynamical spin structure factor $S(\omega)$ can be calculated using the Lehmann representation:
\begin{equation}
\label{eq:spinstructure-lehmann_2}
S(\omega) = \frac{\pi}{Z} \sum_{n,m} \mathrm{e}^{-\beta E_n} \left| \bra{n} \tilde{S}_+ \ket{m}
\right|^2 \delta( \omega + E_n - E_m).
\end{equation}
In the Monte Carlo simulation, a numerically more stable quantity is obtained by replacing $S_+$ by $S_z$ in the
above equation. This quantity is completely equivalent to $S(\omega)$ by virtue of the
$SU(2)$ symmetry of the problem, and is therefore used in the following.
In the representation (\ref{eq:spinstructure-lehmann_2}) of $S(\omega)$, it is clear that the dynamical spin structure factor will show excitations
at frequencies corresponding to the energy needed to flip the spin on the quantum dot. Therefore,
the dynamical spin structure factor is very well suited to determine whether the system is in the
singlet or in the doublet regime.
In Fig. \ref{fig:spinstructure-eff} the phase transition from the singlet phase to the doublet phase is reflected by the fact that in the singlet phase a gapped excitation can be observed, whereas in the doublet phase a peak at $\omega=0$ emerges, which corresponds to a local magnetic moment.
\begin{figure}
\resizebox{\columnwidth}{!}{\input{tex-spinstructure-eff-3d.tex}}
\vspace{-0.7cm}
\caption{\label{fig:spinstructure-eff}
(Color online)
Dynamical spin structure factor $S(\omega)$ of the effective model at $\beta=200$. The phase transition from the singlet-phase to the doublet-phase for $U=1$ and $V=0.5$ occurs at $\Delta \approx 1.412$. At this point a transition from a gapped excitation to a peak at $\omega=0$ corresponding to a local magnetic moment in the doublet phase is observed. To visualize the $\delta$-functions, a Gaussian broadening of width $\sigma=0.05$ has been applied. }
\end{figure}
\subsection{Dynamical charge structure}
\label{subsec:dynamical_charge_structure_eff}
The dynamical charge structure factor $N(\omega)$ can be defined by the Lehmann representation
\begin{equation}
N(\omega) = - \frac{\pi}{Z} \sum_{n,m} \left| \bra{n} \tilde{n} - \delta_{n,m} \ket{m}
\right|^2 \mathrm{e}^{-\beta E_m} \delta(\omega + E_n - E_m).
\label{eq:chargestructure-lehmann}
\end{equation}
As for the other spectral functions, the charge structure factor $N(\omega)$, shown in Fig.
\ref{fig:chargestructure-eff}, exhibits a sharp change of its behavior at the phase transition for
the critical value of the superconducting gap $\Delta$. We observe that the charge structure shows a
finite gap for all values of $\Delta$ and that for large values of $\Delta$ the gap increases in a
slightly nonlinear manner.
A more detailed study of the matrix elements contributing to the charge structure factor reveals
that, because of correlations, the excitations are completely different from those of the spectral function.
In fact, the most prominent excitations are excitations from the respective ground states in the two
different phases to higher energy states with structure similar to that of the ground states.
\begin{figure}
\resizebox{\columnwidth}{!}{\input{tex-chargestructure-b200-eff-3d.tex}}
\vspace{-0.7cm}
\caption{ \label{fig:chargestructure-eff}
(Color online)
Dynamical charge structure factor $N(\omega)$ of the effective model at $\beta=200$. We have used
the same parameters as for Fig. \ref{fig:spectral-effective}. }
\end{figure}
\section{CTQMC}
\label{sec:CTQMC}
\subsection{Basic outline of the algorithm}
For the numerically exact solution of the BCS-Anderson model, we used the weak coupling CTQMC method
\cite{rubtsov:035122}, which is based on a perturbation expansion around the limit $U=0$.
Following the presentation of the CTQMC algorithm in \cite{assaad:035116}, we will briefly outline
the basic principles of the method.
As pointed out in \cite{rubtsov:035122,assaad:035116}, the interacting Hamiltonian $H_U$ in Eq. (\ref{eq:qd_bcs_hamiltonian_nambu}) can, up to a constant, be rewritten as
\begin{equation}
H_U = -\frac{U}{2} \sum_{s=\pm 1} \left( d_\uparrow^\dagger d_\uparrow - \alpha_\uparrow^s \right) \left( d_\downarrow^\dagger d_\downarrow - \alpha_\downarrow^s \right)
\end{equation}
introducing the parameters $\alpha_\sigma^s$ to minimize the sign problem. For the present case, a choice of $\alpha_\uparrow^s = \alpha_\downarrow^s = \frac{1}{2} + s \delta$ with $\delta = \frac{1}{2} + 0^+$ was found to completely eliminate the sign problem at half filling, even after the complex phase factors $\exp(i\phi_\alpha)$ in the Hamiltonian were introduced.
Using perturbation theory, the partition function $Z$ of the full Hamiltonian (\ref{eq:qd_bcs_hamiltonian_nambu}) can be written as:
\begin{equation}
\begin{split}
\frac{Z}{Z_0} &= \thavg{ T \mathrm{e}^{- \int_0^\beta \mathrm{d} \tau H_U(\tau) } }_0 \\
&= \sum_{n=0}^\infty \left( \frac{U}{2} \right)^n \int _0 ^\beta \mathrm{d} \tau_1 \dots \int_0 ^{\tau_{n-1}} \mathrm{d} \tau_n \sum_{s_1,\dots, s_n} \\
&\times \thavg{T \left(\hat{n}_\uparrow(\tau_1) - \alpha_\uparrow^{s_1} \right) \dots
\left(\hat{n}_\downarrow(\tau_n) - \alpha_\downarrow^{s_n} \right) }_0,
\end{split}
\end{equation}
with the number operators $\hat{n}_\sigma = d^\dagger_\sigma d_\sigma$ and the thermal expectation value $\thavg{ \bullet }_0 = \frac{1}{Z_0} \tr \left[ \mathrm{e}^{-\beta H_0} \bullet \right]$.
As $H_0$ is a noninteracting Hamiltonian, Wick's theorem holds, and the expectation value $\thavg{T (\hat{n}_\uparrow(\tau_1) - \alpha_\uparrow^1 ) \dots (\hat{n}_\downarrow(\tau_n) - \alpha_\downarrow^n ) }_0$ can be cast into the determinant of a matrix $\mat{M}_{C_n}$ of size $2n\times 2n$, where $C_n$ is a configuration of vertices $\{\tau_i,s_i\}$. In contrast to the formulation for the Hubbard model given in \cite{assaad:035116}, we do not need to include an index for the lattice site, as we only have one correlated site, the impurity. The matrix $\mat{M}_{C_n}$ is not block diagonal in the two spin sectors in the case $\Delta \neq 0$, so we cannot factor the determinant into two determinants of $n\times n$ matrices. Finally, the partition function of the model is given by
\begin{equation}
\frac{Z}{Z_0} = \sum_{C_n} \left( \frac{U}{2} \right)^n \det \mat{M}_{C_n},
\end{equation}
where the sum runs over all possible configurations $C_n$ of vertices as in \cite{assaad:035116}. The matrix $\mat{M}_{C_n}$ is defined by
\begin{equation}
\mat{M}_{C_n} =
\begin{pmatrix}
\mat G_{dd}^0(\tau_1,\tau_1) - \matgr \alpha_1 & \dots & \mat G_{dd}^0(\tau_n,\tau_1) \\
\vdots & \ddots & \vdots \\
\mat G_{dd}^0(\tau_1,\tau_n) & \dots & \mat G_{dd}^0(\tau_n,\tau_n) - \matgr \alpha_n \\
\end{pmatrix}
\end{equation}
using the $2\times 2$ Green's function matrices $\mat{G}_{dd}^0(\tau,\tau') = \left(\begin{smallmatrix}
\thavg{T d^\dagger_\uparrow(\tau) d_\uparrow(\tau')}_0 & \thavg{T d^\dagger_\downarrow(\tau) d_\uparrow(\tau')}_0 \\
\thavg{T d^\dagger_\uparrow(\tau) d_\downarrow(\tau')}_0 & \thavg{T d^\dagger_\downarrow(\tau) d_\downarrow(\tau')}_0 \\
\end{smallmatrix}\right)$
and with $\matgr{\alpha}_i= \left( \begin{smallmatrix} \alpha_\uparrow^i & 0 \\ 0 & \alpha_\downarrow^i \end{smallmatrix}\right)$.
A similar reasoning yields an expression for the thermal expectation value $\thavg{O(\tau)} = \frac{1}{Z} \tr \left[ \mathrm{e}^{-\beta H} O(\tau) \right]$ of the full model:
\begin{equation}
\label{eq:ddqmc_thermal_expectation_value}
\thavg{O(\tau)} = \frac{ \sum_{C_n} \left(\frac{U}{2} \right)^n \det \mat{M}_{C_n} \langle \langle O(\tau) \rangle \rangle_{C_n} }{ \sum_{C_n} \left(\frac{U}{2} \right)^n \det \mat{M}_{C_n} }.
\end{equation}
Here $\langle \langle O(\tau) \rangle \rangle_{C_n}$ is the contribution of the configuration $C_n$ to the observable $O(\tau)$, which is given by
\begin{equation}
\label{eq:config_contrib}
\langle \langle O(\tau) \rangle \rangle_{C_n} = \frac{\thavg{T (\hat{n}_\uparrow(\tau_1) - \alpha_\uparrow^1 ) \dots (\hat{n}_\downarrow(\tau_n) - \alpha_\downarrow^n ) O(\tau) }_0}{ \thavg{T (\hat{n}_\uparrow(\tau_1) - \alpha_\uparrow^1 ) \dots (\hat{n}_\downarrow(\tau_n) - \alpha_\downarrow^n ) }_0 }.
\end{equation}
Both the numerator and the denominator of the above Eq. (\ref{eq:config_contrib}) can be written as determinants of matrices using Wick's theorem.
Eq. (\ref{eq:ddqmc_thermal_expectation_value}) is the central relation of the CTQMC algorithm,
because, starting from this equation, the Metropolis-Hastings algorithm can be employed to generate a
Markov chain of configurations $C_n$.
At this point, we have to interpret $\left(\frac{U}{2} \right)^n \det \mat{M}_{C_n}$ as the
statistical weight of a given configuration $C_n$, which in general is impossible, as $\det
\mat{M}_{C_n}$ is a complex number. Therefore, we have to replace $\left(\frac{U}{2} \right)^n \det
\mat{M}_{C_n}$ by its modulus and account for the phase in the measurement of the observables.
Fortunately, in the present case, the statistical weights are always real and nonnegative, so that we
can simply calculate the contribution to the observable $O(\tau)$ for a given configuration $C_n$ in
the Markov chain as $\langle \langle O(\tau) \rangle \rangle_{C_n}$.
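For orientation, a schematic Metropolis step for the standard vertex insertion and removal moves might look as follows. This is only a sketch of the textbook weak-coupling scheme, not our production code; the fast-update routines \texttt{det\_ratio\_insert} and \texttt{det\_ratio\_remove}, which return $\det \mat{M}_{C_{n\pm1}}/\det \mat{M}_{C_n}$ from the bordered-matrix formulas, are assumed rather than implemented:
\begin{verbatim}
import numpy as np

def metropolis_step(config, beta, U, rng,
                    det_ratio_insert, det_ratio_remove):
    """One insertion/removal attempt on the vertex list [(tau, s), ...]."""
    n = len(config)
    if rng.random() < 0.5:                 # propose inserting a vertex
        tau, s = beta * rng.random(), rng.choice([-1, 1])
        r = det_ratio_insert(config, (tau, s))
        # proposal densities: 1/(2 beta) for (tau, s), 1/(n+1) for removal
        if rng.random() < min(1.0, (U / 2) * 2 * beta / (n + 1) * abs(r)):
            config.append((tau, s))
    elif n > 0:                            # propose removing vertex i
        i = rng.integers(n)
        r = det_ratio_remove(config, i)
        if rng.random() < min(1.0, n / (beta * U) * abs(r)):
            config.pop(i)
    return config
\end{verbatim}
Starting from \texttt{config = []} and \texttt{rng = np.random.default\_rng()}, repeated calls drive the Markov chain; as noted above, only the modulus of the determinant ratio enters the acceptance probability.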
\subsection{ Wick's theorem for each configuration }
For the measurement of higher Green's functions of the form $\thavg{ T \gamma_1^\dagger \gamma_{1'}
\dots \gamma_m^\dagger \gamma_{m'}}$, where $\gamma_i^\dagger$ stands for
$d^\dagger_{\sigma_i}(\tau_{i,\text{meas}})$ or
$c^\dagger_{k_i,\sigma_i,\alpha_i}(\tau_{i,\text{meas}})$ depending on the quantity of interest, the
calculation of the contribution $\langle \langle T \gamma_1^\dagger \gamma_{1'} \dots
\gamma_m^\dagger \gamma_{m'} \rangle \rangle_{C_n}$ is tedious and time consuming. Luckily, for every
configuration $C_n$ a relation similar to Wick's theorem can be found, which greatly simplifies the
calculation of higher Green's functions. It is closely connected to the determinant identity
(\ref{eq:determinant_identity}) proven in appendix \ref{sec:proof_of_det_identity}. The application
of the ordinary Wick's theorem to the denominator and the numerator of Eq. (\ref{eq:config_contrib}) yields
\begin{equation}
\langle \langle T \gamma_1^\dagger \gamma_{1'} \dots \gamma_m^\dagger \gamma_{m'} \rangle \rangle_{C_n} =
\frac{ \det \mat{B}_{C_n} }{ \det \mat{M}_{C_n} },
\end{equation}
where we have defined the matrix $\mat B_{C_n} \in \mathbb{C}^{(2n+m) \times (2n+m)} $ as
\begin{widetext}
\begin{equation}
\mat B_{C_n} =
\begin{pmatrix}
& & & \thavg{T \gamma_1^\dagger \vec d(\tau_1)}_0 & \dots & \thavg{T \gamma_m^\dagger \vec d(\tau_1)}_0 \\
& \mat{M}_{C_n} & & \vdots & \ddots & \vdots \\
& & & \thavg{T \gamma_1^\dagger \vec d(\tau_n)}_0 & \dots & \thavg{T \gamma_m^\dagger \vec d(\tau_n)}_0 \\
\thavg{T \vec d^\dagger(\tau_1) \gamma_{1'}}_0 & \dots & \thavg{T \vec d^\dagger(\tau_n) \gamma_{1'}}_0 & \thavg{T \gamma_1^\dagger \gamma_{1'}}_0 & \dots & \thavg{T \gamma_m^\dagger \gamma_{1'}}_0 \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\thavg{T \vec d^\dagger(\tau_1) \gamma_{m'}}_0 & \dots & \thavg{T \vec d^\dagger(\tau_n) \gamma_{m'}}_0 & \thavg{T \gamma_1^\dagger \gamma_{m'}}_0 & \dots & \thavg{T \gamma_m^\dagger \gamma_{m'}}_0 \\
\end{pmatrix}.
\end{equation}
\end{widetext}
Defining the matrices $\mat {B}^{\mathbf{ij} }_{C_n} \in \mathbb{C}^{(2n+1)\times(2n+1)}$, we can make use of the determinant identity (\ref{eq:determinant_identity})
\begin{equation}
\! \mat {B}^{ \mathbf{ij} }_{C_n} = \!\!
\begin{pmatrix}
& & & \!\! \thavg{T \gamma_j^\dagger \vec d(\tau_1)}_0 \\
& \!\!\!\! \mat{M}_{C_n} \!\!\!\! & & \vdots\\
& & & \!\! \thavg{T \gamma_j^\dagger \vec d(\tau_n)}_0 \\
\thavg{T \vec d^\dagger(\tau_1) \gamma_{i'}}_0 \! & \! \dots \! & \! \thavg{T \vec d^\dagger(\tau_n) \gamma_{i'}}_0 \!\! & \thavg{T \gamma_j^\dagger \gamma_{i'}}_0
\end{pmatrix}\!\!,
\end{equation}
yielding
\begin{equation}
\frac{ \det \mat B_{C_n} }{ \det \mat M_{C_n} } = \frac{1}{(\det \mat M_{C_n})^m }
\det \begin{pmatrix}
\det \mat {B}^{ \mathbf{11} }_{C_n} & \dots & \det \mat {B}^{ \mathbf{1m} }_{C_n} \\
\vdots & \ddots & \vdots \\
\det \mat {B}^{ \mathbf{m1} }_{C_n} & \dots & \det \mat {B}^{ \mathbf{mm} }_{C_n} \\
\end{pmatrix}\!\!.
\end{equation}
From Eq. (\ref{eq:config_contrib}) it is obvious that $\det \mat {B}^{ \mathbf{ij} }_{C_n} / \det \mat M_{C_n} $ is identical to the contribution of the configuration $C_n$ to the one-particle Green's function $\thavg{ T \gamma_j^\dagger \gamma_{i'} }$. Hence, Wick's theorem holds for every configuration $C_n$ and is given by
\begin{equation}
\begin{split}
&\langle \langle T \gamma_1^\dagger \gamma_{1'} \dots \gamma_m^\dagger \gamma_{m'} \rangle \rangle_{C_n} =\\ &\det
\begin{pmatrix}
\langle \langle T \gamma_1^\dagger \gamma_{1'} \rangle \rangle_{C_n} & \dots & \langle \langle T \gamma_m^\dagger \gamma_{1'} \rangle \rangle_{C_n} \\
\vdots & \ddots & \vdots \\
\langle \langle T \gamma_1^\dagger \gamma_{m'} \rangle \rangle_{C_n} & \dots & \langle \langle T \gamma_m^\dagger \gamma_{m'} \rangle \rangle_{C_n} \\
\end{pmatrix}.
\end{split}
\end{equation}
This relation is particularly useful in a simulation measuring multiple physical observables, as measurements of single-particle Green's functions can be reused economically.
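The bordered-determinant relation behind this result is easy to verify numerically. The following self-contained check is our own illustration (random matrices stand in for the Green's function entries, and the exponent is the number $m$ of operator pairs); the two printed numbers agree to machine precision:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
n2, m = 6, 3                          # 2n = 6 vertex rows, m operator pairs
M = rng.normal(size=(n2, n2))
R = rng.normal(size=(n2, m))          # right border columns
Bt = rng.normal(size=(m, n2))         # bottom border rows
C = rng.normal(size=(m, m))           # lower-right block

B = np.block([[M, R], [Bt, C]])
Bij = np.empty((m, m))                # determinants of the bordered matrices
for i in range(m):
    for j in range(m):
        Bij[i, j] = np.linalg.det(
            np.block([[M, R[:, [j]]],
                      [Bt[[i], :], C[[i], :][:, [j]]]]))

lhs = np.linalg.det(B) / np.linalg.det(M)
rhs = np.linalg.det(Bij) / np.linalg.det(M)**m
print(lhs, rhs)
\end{verbatim}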
\section{Numerical Results}
\label{sec:NumericalResults}
In this section, we present the results obtained by CTQMC simulations for the model
(\ref{eq:qd_supercond_hamiltonian}). We restrict ourselves to the case of half filling,
$\epsilon_d=0$ and $\mu=0$. In the first part of this section, we will discuss the results for
static quantities including the Josephson current, double occupancy and pair
correlations on the quantum dot. We then proceed to dynamical quantities such as the single particle
spectral function and the dynamical spin structure factor.
\subsection{Josephson current}
\label{sec:Josephson}
The Josephson current flowing through the quantum dot can be calculated directly within the CTQMC method, as it is given by an equal time Green's function:
\begin{equation}
\thavg{j_\alpha} = i \frac{V}{\sqrt{N}} \sum_{k,\sigma} \thavg{ \tilde{c}_{k,\sigma,\alpha}^\dagger \tilde{d}_\sigma - \tilde{d}_\sigma^\dagger \tilde{c}_{k,\sigma,\alpha} }
\label{eq:josephson_current}
\end{equation}
\begin{figure}[h]
\begin{center}
\resizebox{\columnwidth}{!}{\input{tex-josephson_pi_shift_1.tex}}
\end{center}
\vspace{-0.7cm}
\caption{(Color online) Josephson current in the $0$ junction regime.}
\label{fig:josephson-0-junction}
\end{figure}
We show here our results for the Josephson current at an inverse temperature of $\beta=50$ as a function of the superconducting gap $\Delta$.
For small values of $\Delta$, we observe a sinusoidal form of the Josephson current as a function of
the phase difference $\phi$, with an amplitude that grows as $\Delta$ increases (see Fig.
\ref{fig:josephson-0-junction}).
\begin{figure}[h]
\begin{center}
\resizebox{\columnwidth}{!}{\input{tex-josephson_pi_shift_2.tex}}
\end{center}
\vspace{-0.7cm}
\caption{(Color online) Josephson current in the $0'$ and $\pi'$ junction regime.}
\label{fig:josephson-transition}
\end{figure}
This parameter regime is known as the 0-Junction regime, because the Josephson current $I_j(\phi) =
\frac{\partial \Omega}{\partial \phi}$ has a zero with positive slope at $\phi=0$, corresponding to
a minimum in the grand potential $\Omega$ at $\phi=0$ (see Fig. 5 in reference
\cite{karrasch:024517}).
If the value of $\Delta$ is further increased, the behavior of the Josephson current changes: in
the region $\Delta \approx 0.15 \dots 0.35$ the Josephson current shows a zero between $\phi=0$ and
$\phi=\pi$ (see Fig. \ref{fig:josephson-transition}).
This leads to a minimum in the grand potential at $\pi$ and the parameter regime is called $0'$ or
$\pi'$ regime depending on which minimum of the grand potential is the global one
\cite{PhysRevLett.82.2788}.
The behavior of the Josephson current is in accordance with the behavior of the double occupancy
seen in Fig. \ref{fig:double-occ-of-Delta}: in the same parameter region where we observe the $0'$ to $\pi'$ transition, the double occupancy drops as a function of $\phi$, which is linked to the change of the curvature of the current-phase relation of the Josephson current.
\begin{figure}[h]
\begin{center}
\resizebox{\columnwidth}{!}{\input{tex-josephson_pi_shift_3.tex}}
\end{center}
\vspace{-0.7cm}
\caption{(Color online) Josephson current in the $\pi$ junction regime.}
\label{fig:josephson-pi-junction}
\end{figure}
For larger values of $\Delta$, the sign of the Josephson current changes and the grand potential
now shows a single minimum at $\phi=\pi$; this regime is therefore called the $\pi$ regime
(see Fig. \ref{fig:josephson-pi-junction}).
The picture for the behavior of the grand potential as a function of $\phi$ that we get from the
current phase relation of the Josephson current agrees very nicely with the results presented by
Benjamin et al.\cite{controllable.pi.junction.benjamin}.
The current phase relations for the different phases presented here were also extensively studied by
Karrasch et al. using the fRG and NRG methods \cite{karrasch:024517}, Choi et al. using the NRG
method \cite{PhysRevB.70.020502}, as well as by Siano and Egger using
the Hirsch-Fye QMC method \cite{PhysRevLett.93.047002,PhysRevLett.94.039902,PhysRevLett.94.229702}.
Even though the numerical exactness of certain results has been debated, the results of all
numerical works show very good qualitative agreement and are confirmed by the present results.
\begin{figure}
\resizebox{\columnwidth}{!}{\input{tex-josephson-curr-temp.tex}}
\vspace{-0.7cm}
\caption{ \label{fig:jos-curr-temperature} (Color online) Josephson current for different temperatures. The current
phase relations do not intersect at one single point as suggested by the NRG results of Karrasch et
al.\cite{karrasch:024517}.
}
\end{figure}
In the literature\cite{PhysRevLett.94.229702,karrasch:024517}, the temperature dependence of the
current phase relation of the Josephson current has been discussed. We show CTQMC results in Fig.
\ref{fig:jos-curr-temperature} which look
very similar to the Siano and Egger result\cite{PhysRevLett.94.229702}. As CTQMC is numerically
exact, our result suggests that the crossing of all curves in one single point\cite{karrasch:024517}
at $I_j=0$ found in the approximate finite temperature NRG is not universal.
\subsection{Double occupancy}
We learned from the toy model described in Sec. \ref{sec:toy-model} that the system exhibits
a phase transition from the singlet phase to the doublet phase as $U$ is increased. This picture is
consistent with the NRG results of Bauer et al. \cite{0953-8984-19-48-486211}.
The phase transition can be observed in the double occupancy $\thavg{\hat{n}_\uparrow \hat{n}_\downarrow}$ of
the quantum dot, which is proportional to $\frac{\partial \Omega}{\partial U}$, where $\Omega$ is
the grand potential.
At $T=0$, a sharp step function of the double occupancy is expected.
While the $T=0$ regime is not directly accessible to quantum Monte Carlo calculations, we calculated the double occupancy for different temperatures using the CTQMC method.
The results are shown in Fig.
\ref{fig:double_occupancy_D1.0}. From the data, it is obvious that with decreasing temperature the
curves converge to the step function of the limit $T=0$, which is a clear sign for a first order
phase transition, reflecting a level crossing of the two ground states. This is in complete
accordance with the results for the toy model.
\begin{figure}[h]
\resizebox{\columnwidth}{!}{\input{tex-double-occ-measurements-D1.tex}}
\vspace{-0.7cm}
\caption{ \label{fig:double_occupancy_D1.0} (Color online) Double occupancy $\thavg{\hat{n}_\uparrow \hat{n}_\downarrow}$ of
the quantum dot at $\Delta=1.0$. The data shows a jump in the double occupancy becoming sharper with
decreasing temperature.
}
\end{figure}
\begin{figure}[h]
\begin{center}
\resizebox{\columnwidth}{!}{\input{tex-josephson_double_occ1.tex}}
\end{center}
\vspace{-0.7cm}
\caption{(Color online) Double occupancy of the quantum dot as a function of the phase difference $\phi=\phi_L-\phi_R$ for different values of $\Delta$.}
\label{fig:double-occ-of-Delta}
\end{figure}
It is interesting to correlate the Josephson current as a function of the phase difference
$\phi=\phi_L-\phi_R$ for various values of $\Delta$ (see Sec. \ref{sec:Josephson}), with the double occupancy on
the dot.
As depicted in Fig. \ref{fig:double-occ-of-Delta}, for very small values of $\Delta$ as well as
for $\Delta \gtrsim 0.4$, we see that the double occupancy is a constant function of $\phi$.
This corresponds to a current-phase-relation for the Josephson current fixed in either
the $\pi$- or the $0$-junction regime.
For intermediate values of $\Delta$, we observe a far more interesting behavior of the double
occupancy: At a certain value of $\phi$, the double occupancy drops to a smaller value. This drop is
of course smeared out by the finite temperature, but can be understood as a way to drive the phase
transition from the $0$- to the $\pi$-junction regime by the phase difference~$\phi$.
\subsection{Pair correlation}
In agreement with the NRG result of Choi et al. \cite{PhysRevB.70.020502} as well as with
the mean field results by Salkola et al. \cite{PhysRevB.55.12648}, we obtain the
local pair correlation on the quantum dot shown in Fig. \ref{fig:local_pair_correlation}.
For small $\Delta$, the local pair correlation increases because of the proximity effect, as an
increasing magnitude of the pair field $\Delta$ in the leads induces a growing pair correlation on
the quantum dot.
The sharp sign change at the critical value of $\Delta$
observed at zero temperature is smeared out at finite temperatures, but the qualitative behavior
is exactly the same as for the effective model discussed in Sec.
\ref{subsec:proximity_effect_eff}. We therefore conclude that the sign change of the pair
correlation is due to residual pairing on the quantum dot in the doublet phase which decreases with
$\Delta$.
The same qualitative behavior of the local pair correlation is also observed, if $U$ is changed
instead of $\Delta$ as discussed in \cite{0953-8984-19-48-486211,PhysRevB.55.12648}.
The sign change of the local pair correlation $\Delta_d$ is traditionally expressed as a $\pi$-phase shift in
$\Delta_d$.
\begin{figure}
\resizebox{\columnwidth}{!}{\input{tex-local_pair_correlation.tex}}
\vspace{-0.7cm}
\caption{ \label{fig:local_pair_correlation} (Color online) Local pair correlation $\Delta_d=\langle \tilde{d}_\uparrow^\dagger
\tilde{d}_\downarrow^\dagger \rangle$ as a function of $\Delta$. We observe the same behavior as Choi et
al. \cite{PhysRevB.70.020502}, which is also in very good agreement with the pair correlation
expected for the effective model discussed in \ref{subsec:proximity_effect_eff}.
}
\end{figure}
\subsection{Spectral function}
\label{subsec:impurity-spectral}
All quantities studied so far suggest that a first order phase transition occurs when
we tune the system from the $0$-junction to the $\pi$-junction regime. This can be confirmed by
studying dynamical quantities such as the spectral function.
In Fig. \ref{fig:Aom-of-Delta} we show the spectral function $A(\omega)$ of the quantum dot as a function of $\Delta$.
The data has been calculated from the CTQMC data for the Green's function $G_{dd}^{\uparrow\up}(\tau)$
using stochastic analytic continuation\cite{PhysRevB.57.10287,beach-2004}.
This method works especially well for the low energy spectrum and sharp excitations while the high
energy spectrum and excitation continua are more difficult to resolve.
Inside the gap, the formation of Andreev bound states can be seen very well.
In the region of $\Delta\approx0$ we see the Kondo resonance. As a function of growing values of $\Delta$,
and as a consequence of the opening of the quasiparticle gap at the Fermi level, the Kondo resonance evolves into an
Andreev bound state. Note that at the mean-field level, the Kondo resonance merely corresponds to a
virtual bound state. Opening a quasiparticle gap at the Fermi level drives the lifetime of this virtual
bound state to infinity. In the parameter region which corresponds to the $0$-junction regime of the Josephson current
($\Delta \approx 0\dots0.1$), we observe Andreev bound states with excitation energies approaching $\omega=0$. This corresponds to the crossing point in Fig. \ref{fig:Aom-of-Delta}
and has also been observed by Bauer et al. for fixed $\Delta$ and increasing $U$ in \cite{0953-8984-19-48-486211}.
\begin{figure}[h]
\vspace{-0.6cm}
\resizebox{\columnwidth}{!}{\input{tex-impurity-Aom-of-Delta-3d.tex}}
\vspace{-1.5cm}
\caption{ \label{fig:Aom-of-Delta} (Color online) Spectral function $A(\omega)$ as a function of $\Delta$ for the parameters $\beta=100$, $U=1.0$ and $V=0.5$ at half filling and zero phase difference between the two superconductors.}
\end{figure}
The comparison of the Quantum Monte Carlo data shown in Fig. \ref{fig:Aom-of-Delta} with the
result obtained from the effective model discussed in Sec. \ref{subsec:spectral_function_eff} is
particularly insightful. The spectral signature is very similar except for the lack of the Kondo resonance due
to the finite size of the effective model. In the effective model, the Andreev
bound state excitation corresponds to the energy difference between the ground states of the singlet
and the doublet phase. The position $\Delta$ at which the Andreev bound states cross at $\omega=0$
has been identified as a clear sign for the crossing of the ground states of the singlet and doublet
phases. Hence, we interpret the crossing of the Andreev bound states in the CTQMC data as a
very strong sign for a level crossing and hence a first order phase transition from the singlet
to the doublet phase in the full model.
\subsection{Dynamical spin structure factor}
In addition to the spectral function, the dynamical spin structure factor $S(\omega)$ defined in
Eq. (\ref{eq:spinstructure-lehmann_2}) provides a way of characterizing the phases of the system.
For $\Delta=0$, we clearly see a suppressed spectral weight at $\omega=0$ and a peak which
corresponds to the characteristic energy scale of the Kondo temperature $T_K$. From the peak
position, we obtain a rough estimate for the Kondo temperature of $T_K\approx 0.06$.
From $\Delta\approx0.05$ onwards, spectral weight is accumulated at $\omega=0$ ultimately forming a pronounced
sharp local moment peak for large values of $\Delta$. As the Kondo temperature is a measure for the energy
required to break the Kondo singlet, we expect the Kondo effect to break down at a value of $\Delta\approx T_K$.
This is indeed observed in Fig. \ref{fig:Som-of-Delta}.
The signature of the breakdown of the Kondo resonance also shows up in the spectral function plotted in Fig. \ref{fig:Aom-of-Delta}.
Since the Kondo resonance stems from a screening of the magnetic moment by conduction electrons in an energy window
$T_K$ around the Fermi level, the opening of a single particle gap of order $T_K$ destroys the Kondo resonance giving
way to an Andreev bound state.
The breakdown of the Kondo resonance is
accompanied by a change of the curvature in the current-phase-relation of the Josephson current
which is a precursor for the transition to the $0'$ phase
(see the curves for $\Delta=0.05$ and $\Delta=0.08$ in Fig. \ref{fig:josephson-0-junction}).
We also observe that after the transition from the $\pi'$- to the $\pi$- regime has occurred
(see the current-phase-relation of the Josephson current of Fig. \ref{fig:josephson-transition})
the peak at finite $\omega$ vanishes and all the spectral weight is accumulated in the very sharp local moment peak at $\omega=0$.
\begin{figure}[h]
\begin{center}
\resizebox{\columnwidth}{!}{\input{tex-impurity-Som-of-Delta.tex} }
\end{center}
\caption{ \label{fig:Som-of-Delta} (Color online) Dynamical spin structure factor $S(\omega)$ as a function of $\Delta$ for the parameters $\beta=100$, $U=1.0$ and $V=0.5$ at half filling and zero phase difference between the two superconductors.
For $\Delta=0$ we can roughly estimate the Kondo-Temperature $T_K\approx 0.06$ from the peak position of $S(\omega)$.
}
\end{figure}
\subsection{Charge gap}
\begin{figure}[h]
\begin{center}
\resizebox{\columnwidth}{!}{\input{tex-chargegap.tex}}
\end{center}
\vspace{-0.7cm}
\caption{(Color online) Charge gap $\Delta_c$ as a function of $\Delta$. We calculated the dynamical charge
structure factor from the charge-charge correlation function $C_c(\tau)$ using
stochastic analytic continuation and extracted the charge gap using two different methods.
First, we read off the charge gap directly from the stochastic analytic continuation data; second, we calculated the
charge gap from the charge-charge correlation function. The straight line is a linear fit
through the numerical data.}
\label{fig:chargegap}
\end{figure}
From the dynamical charge structure factor, we can determine the gap $\Delta_c$ to local charge fluctuations on
the dot with two different methods
\footnote{The dynamical charge structure factor itself can in principle be calculated from the
CTQMC result for the charge correlation function $C_c(\tau)=\thavg{\tilde{n}(\tau)\tilde{n}} -
\thavg{\tilde{n}} \thavg{\tilde{n}}$ using
stochastic analytic continuation. This, however, is numerically demanding and requires a very high
quality of data. In the present case, we were unable to extract more than the lowest lying
excitation of the dynamical charge structure factor, which is directly connected to the charge gap.
The higher energy spectrum showed an extremely complex structure which is difficult to capture with
stochastic analytic continuation.}.
One way to extract the charge gap is to read off the peak position of
the lowest lying excitation in the dynamical charge structure factor obtained from the charge
correlation function $C_c(\tau) = \thavg{\tilde{n}(\tau)\tilde{n}} -\thavg{\tilde{n}} \thavg{\tilde{n}}$
via stochastic analytic continuation.
The other way of extracting the charge gap from $C_c(\tau)$ is based on the fact that the
charge structure factor $N(\omega)$ is linked to $C_c(\tau)$ via
\begin{equation}
C_c(\tau) \propto \int \limits_{-\infty}^{\infty} \mathrm{d} \omega \, \mathrm{e}^{-\tau \omega} N(\omega).
\label{eq:link_cc_Nom}
\end{equation}
If $N(\omega)$ is sharply peaked around a certain value $\omega_p$, we can approximate $N(\omega)$
by $N(\omega) \approx \delta(\omega - \omega_p)$. This corresponds to $C_c(\tau) \approx \mathrm{e}^{-\tau
\omega_p}$. Therefore, a least squares fit of an exponential function $\mathrm{e}^{-\tau \omega_p}$ to
$C_c(\tau)$ in a region where one single mode dominates can reveal the frequency $\omega_p$ at which
$N(\omega)$ is peaked. The applicability of the method can be seen in a semi-logarithmic plot of
$C_c(\tau)$, where a sharply peaked charge structure factor $N(\omega)$ is reflected by a region in
which $C_c(\tau)$ can be well approximated by a straight line.
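In practice this amounts to a log-linear least-squares fit. A small sketch (our illustration, demonstrated on synthetic single-mode data rather than the actual QMC output; the fit window is an assumption):
\begin{verbatim}
import numpy as np

def charge_gap_from_fit(tau, Cc, window):
    """Slope of log C_c(tau) in the given window estimates omega_p."""
    lo, hi = window
    mask = (tau >= lo) & (tau <= hi) & (Cc > 0)
    slope, intercept = np.polyfit(tau[mask], np.log(Cc[mask]), 1)
    return -slope                     # C_c(tau) ~ exp(-tau * omega_p)

# synthetic demonstration with an assumed single dominant mode
tau = np.linspace(0.0, 25.0, 251)
Cc = 0.3 * np.exp(-0.8 * tau) + 1e-6 * np.random.default_rng(1).random(251)
print(charge_gap_from_fit(tau, Cc, window=(2.0, 10.0)))   # close to 0.8
\end{verbatim}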
The data obtained using these methods is shown in Fig. \ref{fig:chargegap}. In the context of the
effective model discussed in Sec. \ref{subsec:dynamical_charge_structure_eff}, we observe that
the behavior of the charge gap of the full model clearly differs from that of the effective model.
In particular, we do not see any signature of the phase transition in the behavior of the charge gap.
The charge gap opens approximately linearly with $\Delta$. It is very
hard to extract the charge gap from the numerical data at small $\Delta$; therefore we can only
extrapolate to $\Delta=0$. Here, it appears that we have a finite charge gap even in the absence of
superconductivity.
The fact that the local charge fluctuations remain gapped confirms the picture that the $0$ to $\pi$ transition
occurs only in the spin sector.
\section{DMFT}
\input{dmft-results}
\section{Conclusion}
We have shown that the weak-coupling CTQMC algorithm is an extremely powerful unbiased tool to compute
thermodynamic as well as dynamical quantities of impurity models in superconducting environments. The method
can cope very well with a complex phase of the superconducting order parameter thereby allowing for the
calculation of the Josephson current.
Our detailed results for the impurity problem confirm the picture of a first order phase
transition between a singlet and a doublet state. It is accompanied by a $\pi$
phase shift in the Josephson current. Being completely unbiased, our approach provides the first
numerically exact
results for this model Hamiltonian.
Within DMFT, the physics of the BCS-PAM is mapped onto the single impurity Anderson model supplemented by a
self-consistency loop. We have shown that within this approximation, the physics of the impurity model
can be carried over to the lattice. In particular, at fixed superconducting order parameter $\Delta$, the first order
transition between a singlet and a local moment state as a function of growing values of $U$ shows up in a hysteresis of
the double occupancy. Furthermore, the low energy features of the local $f$-spectral function
are reminiscent of the
Andreev bound states with vanishing excitation energy (i.e. a crossing point) at the critical coupling. Within the DMFT
approximation, we can look into the single particle spectral function. In the singlet phase, the low energy features can
be interpreted in terms of a dispersion relation of Andreev bound states. This state is continuously linked to the $U = 0$ limit.
In the doublet state or local moment regime, the low energy features of the spectral functions are incoherent. We propose to
understand this in terms of models of disorder. In particular, in this state the spin dynamics
of the $f$-electron is frozen and,
since we are considering paramagnetic states, it points in a random direction in each unit cell. A simple model of
disorder following this picture accounts very well for the observed incoherent spectral function.
\section{Acknowledgments}
We thank Julia Wernsdorfer for interesting discussions and for bringing up the subject in her
diploma thesis. We also wish to thank Volker Meden for fruitful discussion and advice. Part of the
calculations were carried out at the Leibniz Rechenzentrum in Munich on HLRB2. We thank this
institution for allocation of CPU time.
DJL also thanks Jutta Ortloff and Manuel Schmidt for many valuable
discussions as well as Burkhard Ritter for critical reading of the manuscript.
FFA would like to thank the KITP, where part of this work
was carried out, for hospitality (NSF Grant PHY05-51164).
We thank the DFG for financial support.
\section{Introduction}
\begin{figure}[t]
\centerline{\includegraphics[width=0.35\textwidth]{StraightRiblets.jpg}\ \ \includegraphics[clip,trim=6em 2em 5em 4em,width=0.55\textwidth]{SpanwiseWallMotion.jpg}}
\centerline{a) \hspace{0.5\textwidth} b)}
\caption{
a) Riblets; b) Spanwise wall motion. \label{fig:RibletsSpanwiseWallMotion}}
\end{figure}
Turbulent skin-friction control techniques are classified into passive and active. Passive techniques do not require energy input. For example, the shape of the solid surface can be made such that the skin friction will be less in the flow past this surface than the skin friction in the flow past a flat wall at the same conditions. The only surface shape known, reliably, to decrease drag is one covered with riblets~\cite{GarciaMayoralJimenez2011}. Riblets are longitudinal grooves in the surface exemplified in Figure~\ref{fig:RibletsSpanwiseWallMotion}a. Riblets inhibit spanwise velocity fluctuations, that is, fluctuations directed across the grooves, and this modification of the structure of turbulence reduces the drag. The cross-sectional shape of the riblets can be different from that shown. Importantly, to reduce the drag, both the distance between the neighbouring grooves and their height should be about 15 wall units. A wall unit is the viscous length scale constructed from the wall friction velocity and the fluid kinematic viscosity. For aircraft applications a wall unit might be of the order of one micron, and riblets have, therefore, to be very small. The drag-reduction level achievable with riblets is less than 12\%, and the extra manufacturing and maintenance costs involved make riblets only marginally effective in a practical environment.
Active control techniques require energy input. For example, the surface of the wall can perform oscillatory in-plane motion in such a way that the turbulent skin-friction drag is reduced.
Spanwise oscillations of the wall can reduce the skin-friction drag by up to 40\%~\cite{JungMangiavacchiAkhavan1992}. This effect has been predicted computationally and confirmed experimentally \cite{Choi_Clayton_Debisschop_1998}. More complicated in-plane motions, in which different parts of the wall surface move with different time-dependent velocities, can produce even higher effects as was shown in \cite{Quadrio2009}. Importantly, the energy required for such an actuation need only be about half of the gain obtained due to the drag reduction, so that the energy budget is favourable. Of particular interest for the present work are the findings of \cite{Viotti_Quadrio_Luchini_2009}, where a steady wall motion, illustrated in Figure\,\ref{fig:RibletsSpanwiseWallMotion}b, was considered. The authors showed that net energy savings of 23\% were possible. They also found that the optimal longitudinal wavelength of the forcing is somewhat larger than 1000 wall units.
Other approaches to turbulent-flow control, such as blowing and suction, using micro-electro-mechanical actuators with feedback control, or using plasma actuators for creating the cross-flow motion, have been proposed, but so far none of these proposals has resulted in practical applications.
With the above described approaches not being widely used in practice, there remains a need for a practical and cost-effective control system allowing large skin-friction reductions to be achieved. The present work proposes a simple and practical method of passive control intended to achieve the same effect as the spanwise wall motion shown in Fig.~\ref{fig:RibletsSpanwiseWallMotion}b.
\section{The idea}
The idea consists in generating, by selecting the appropriate shape of the surface, the cross-flow motion producing the drag-reduction effect. An example of such a wall shape is shown in
Figure\,\ref{fig:WavyWall}. The deflection of the main flow by the wall creates different spanwise pressure gradients on the upwind and downwind sides of the smooth waves. This pressure gradient pushes the fluid sideways, similar to a spanwise wall velocity shown in Figure\,\ref{fig:RibletsSpanwiseWallMotion}b.
\begin{figure}
\centerline{\includegraphics[width=0.51\textwidth]{WavyWall.jpg}}
\caption{
Drag-reducing wavy wall\label{fig:WavyWall}}
\end{figure}
Unlike the case of active control, the rigid wall will not require energy input. However, the additional motion generated by the non-flat wall will result in an increase in energy dissipation similar to that for a moving wall. This additional energy dissipation will manifest itself as a pressure drag on the non-flat surface. Therefore, the proposed method can be considered as a simple and feasible approach to providing energy for generating the drag-reducing spanwise motions.
Crucially, even a relatively small variation of the wall shape will produce a significant variation in the velocity near the wall, where the drag-reduction mechanism is concentrated, because of the mechanism involved in the well-known phenomenon of viscous-inviscid interaction (see, for example, \cite{Messiter1983}). The properties of boundary layer flow are such that the displacement of streamlines caused by variation in wall surface shape is passed with little change to the streamlines at the outer edge of the boundary layer, where the flow velocity is large. This displacement then results in the pressure variation proportional to the velocity at the outer edge of the boundary layer and to the displacement magnitude. The pressure variation is then passed back to the wall, since the pressure does not vary significantly across the boundary layer. Near the wall this pressure variation results in significantly larger variation of the velocity, because the velocity itself is small near the wall. At the intuitive level this can be understood by recalling the well-known Bernoulli equation, from which it follows that the pressure variation and the variation of velocity squared are of the same order of magnitude. At the outer edge of the boundary layer the velocity variation $\Delta U,$ created by a small displacement of the streamlines, is small as compared to the velocity $U$ itself, so that the pressure variation $\Delta p$ is of order $U\Delta U,$ but near the wall, where the velocity approaches zero, the velocity variation $\Delta u$ is of the same order as the velocity itself, so that the same $\Delta p$ corresponds to $\Delta u^2.$ Hence, $\Delta u$ is of order $\sqrt{U\Delta U},$ which is much greater than $\Delta U.$
For this effect to take place, the characteristic thickness of the near-wall layer should be small as compared to the longitudinal and spanwise length scale of the wall-shape variation. Fortunately, this is possible since the characteristic scale of the near-wall layer is about 100 wall units, as evidenced by the typical streak spacing, while the optimal longitudinal wavelength of the steady spanwise wall motion (Figure\,\ref{fig:RibletsSpanwiseWallMotion}) is somewhat greater than 1000 wall units --- that is, one order of magnitude greater. This wavelength is almost two orders of magnitude greater than the wavelength of riblets. Hence, it is much easier to manufacture and maintain in practice.
Direct numerical simulation of a flow past such a wall is more computationally expensive than a simulation of a flow past a flat wall, not only because of a more complicated geometry, but also because the waviness of the wall adds an additional spanwise length scale, which is about an order of magnitude greater than the streak spacing. Before performing expensive direct numerical simulations one can attempt to use the similarity between the action of spanwise pressure gradient and the action of spanwise wall motion in order to determine the range of the parameters where the drag reduction is most likely to be observed. This similarity was already reported in \cite{JungMangiavacchiAkhavan1992} where drag reduction by wall oscillations was first demonstrated. In the case considered in that paper the similarity was in fact exact, since the two situations could be made identical by a simple change of the frame of reference. In our case this can only be approximate. We will assume that if the spanwise motion generated by a particular wavy wall is close to the spanwise motion generated by a particular in-plane spanwise wall motion then the effect of these two methods of control will be similar, resulting in the same reduction in the skin friction at the wall. The difference between the two methods of control is that the spanwise motion of the wall consumes power, while a rigid wavy wall does not consume power. However, the wavy wall will experience pressure drag, not present in the flow past a spanwise moving wall, and overcoming this additional drag will require additional power. Thus, in both cases there is a price to be paid for control, which can be expressed as the power required for control. Viotti, Quadrio, and Luchini~\cite{Viotti_Quadrio_Luchini_2009} calculated both the power saved, $P_{\mathrm{sav}},$ and the (negative) power required, $P_{\mathrm{req}},$ for the spanwise-moving wall, and considered their difference, $P_{\mathrm{net}},$ which characterised the net gain achievable by the method in principle. We will find the shapes of the wavy walls generating a spanwise shear similar to the spanwise shear obtained in several cases by \cite{Viotti_Quadrio_Luchini_2009}, and assume that $P_{\mathrm{sav}}$ is the same in both cases. We will then calculate $P_{\mathrm{req}}$ for a flow past a wavy wall, using an approach demonstrated in \cite{Viotti_Quadrio_Luchini_2009} to work well for the spanwise-moving wall, thus obtaining $P_{\mathrm{net}}$ for the wavy wall. We then optimise it to get an estimate of the drag reduction achievable in the flow past a wavy wall and the optimal parameters of the wavy wall.
\section{Boundary layer on a wavy wall and the power required}
\subsection{Boundary layer equations for the wavy-wall case and the spatial Stokes
layer case}
Similar to \cite{Viotti_Quadrio_Luchini_2009}, the boundary layer equations, linearized around a linear profile, will be used. Written in wall units, they have the form
$$
y\pd ux + v =-\pd px + \pdd uy
$$
$$
0=-\pd py
$$
$$
y\pd wx = - \pd pz + \pdd wy
$$
$$
\pd ux + \pd vy + \pd wz =0
$$
\subsubsection{The spatial Stokes
layer case}
\newcommand\SSL{\mathrm{ssl}}
In the spatial-Stokes-layer (SSL) case the flow is driven by the movement of the boundary, so that $u=v=0,$ $w=\hat W e^{ik_x x}$ at $y=0.$ The pressure gradient is zero, and $u_{\SSL}=v_{\SSL}=0.$ Taking $w =\hat W \tilde w_{\SSL}(\tilde y) e^{ik_x x},$ where $y=k_x^{-1/3}\tilde y$ gives
\begin{equation}\label{eqn:TildeWssl}
i\tilde y\tilde w_{\SSL} = \derder{\tilde w_{\SSL}}{\tilde y}
\end{equation}
with boundary conditions $\tilde w_{\SSL} =1$ at $\tilde y=0$, and $\tilde w_{\SSL}\to0$ as $\tilde y\to\infty,$
the same as in \cite{Viotti_Quadrio_Luchini_2009}. We solved (\ref{eqn:TildeWssl}) numerically and obtained perfect agreement with the formula $\tilde w_{\SSL}(\tilde y)=Ai(-i\tilde y e^{-4\pi i/3})/Ai(0),$ equivalent to (7) in \cite{Viotti_Quadrio_Luchini_2009}.
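For reproducibility, (\ref{eqn:TildeWssl}) can also be solved with standard tools. A minimal Python sketch, splitting $\tilde w = a + ib$ into real and imaginary parts; the truncation of the semi-infinite domain at $\tilde y = 12$ is our assumption:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

L = 12.0  # truncation of the semi-infinite domain (assumption)

def rhs(ty, f):
    # w = a + i*b, so w'' = i*y*w splits into a'' = -y*b, b'' = y*a
    a, da, b, db = f
    return np.vstack([da, -ty * b, db, ty * a])

def bc(f0, fL):
    # w(0) = 1 and w -> 0 as y -> infinity (imposed at y = L)
    return np.array([f0[0] - 1.0, f0[2], fL[0], fL[2]])

ty = np.linspace(0.0, L, 400)
guess = np.zeros((4, ty.size)); guess[0] = np.exp(-ty)
sol = solve_bvp(rhs, bc, ty, guess, tol=1e-8)
w = sol.sol(ty)[0] + 1j * sol.sol(ty)[2]  # tilde w_ssl on the grid
\end{verbatim}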
\subsubsection{Wavy wall case}
In the wavy wall case the boundary condition is $u=v=w=0$ at $y=0.$
The pressure distribution is generated by the inviscid flow above the boundary layer. We assume it to have the form
$
p=\hat p e^{i(k_x x + k_z z)}.
$
Accordingly, $(u,v,w)=(\hat u(y), \hat v(y), \hat w(y))e^{i(k_x x + k_z z)}.$
This leads to
\begin{equation}\label{eqn:hatu}
i k_x y \hat u + \hat v = -i k_x \hat p + \hat u''
\end{equation}
$$i k_x y \hat w = - i k_z \hat p + \hat w''$$
$$i k_x \hat u +\hat v' + i k_z \hat w =0$$
Eliminating $\hat v$ and then rescaling as
$$
\hat w(y)=i k_z k_x^{-2/3}\hat p \tilde w(\tilde y),\quad
\hat u(y)=i k_z^2k_x^{-5/3} \hat p \tilde u(\tilde y)
$$
gives, after simple transformations, the following system
$$
i\tilde y \der {\tilde u}{\tilde y}-i\tilde w=\derderder {\tilde u}{\tilde y}
$$
$$
i\tilde y\tilde w = -1 +\derder{\tilde w}{\tilde y}
$$
with boundary conditions $\pd{\tilde u}y\to0$ and $\tilde w\to0$ as $\tilde y\to\infty,$ and $\tilde u = \tilde w =0,$ ${\tilde u}''=k_x^2/k_z^2$ at $\tilde y=0.$ The last condition is in fact (\ref{eqn:hatu}) taken at the wall. The equation for $\tilde w$ separates from the equation for $\tilde u$ and can be solved first. It is then natural to take
$$
\tilde u =\tilde u_w+\frac{k_x^2}{k_z^2}\tilde u_p,
$$
with $\tilde u_w$ and $\tilde u_p$ satisfying
$$
i\tilde y \der {\tilde u_w}{\tilde y} - i\tilde w = \derderder {\tilde u_w}{\tilde y},\quad
\tilde u_w\to0 \mathrm{\ as\ } \tilde y\to\infty, \mathrm{\ and\ } \tilde u_w =0, \ {\tilde u_w}''=1 \mathrm{\ at\ } \tilde y=0,
$$
$$
i\tilde y \der {\tilde u_p}{\tilde y} =\derderder {\tilde u_p}{\tilde y},\quad
\tilde u_p\to0 \mathrm{\ as\ } \tilde y\to\infty, \mathrm{\ and\ } \tilde u_p =0, \ {\tilde u_p}''=0 \mathrm{\ at\ } \tilde y=0.
$$
From the physical viewpoint, $\tilde u_w$ corresponds to the perturbation of $u$ due to the wall-normal velocity induced by the dependence of the spanwise velocity on $z,$ while $\tilde u_p$ is related to the perturbation of $u$ due to the longitudinal pressure gradient induced by the wall.
These ordinary differential equations were solved numerically using Mathematica.
\subsubsection{Matching the spanwise shear profiles in the SSL and wavy-wall case}
At the wall the spanwise velocity is zero in the wavy-wall case and nonzero (and has a maximum amplitude) in the SSL case. Therefore, they cannot be directly matched. However, a simple translational motion might not be so important because of Galilean invariance. We presume that it is the spanwise shear that favourably affects turbulence, leading to drag reduction. Therefore, we seek to match the SSL-case spanwise shear $\hat W \tilde w_{\SSL}'(\tilde y) e^{ik_x x} $ with the wavy-wall-case spanwise shear $i k_z k_x^{-2/3}\hat p \tilde w'(\tilde y)e^{i(k_x x + k_z z)}.$ Note that the wavy-wall period in the spanwise direction is expected to be much larger than the characteristic scale of the near-wall turbulent structure and, hence, dependence on $z$ can simply be neglected. On the other hand, a phase shift between the SSL and wavy-wall cases is acceptable. Accordingly, the matching was done by minimising numerically
$$
\int_0^\infty\int_0^{2\pi/k_x} \left|\tilde w_{\SSL}'(\tilde y)e^{i(k_x x)}-\frac{i k_z k_x^{-2/3}\hat p}{\hat W } \tilde w'(\tilde y)e^{i(k_x x + \phi)}\right|^2\,dx\,d\tilde y
$$
over $C={i k_z k_x^{-2/3}\hat p}/{\hat W }$ and $\phi.$ This gave $C=C_m=0.8980$ and $\phi=\phi_m=1.5708.$ Hence, to achieve matching, the height of the wall waves should be such that
\begin{equation}\label{eqn:pOfW}
\hat p=C_m{\hat W }/(i k_z k_x^{-2/3})
\end{equation}
Note that $\phi_m\approx\pi/2.$ Figure~\ref{fig:SpanwiseShear} shows the quality of the matching.
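Since the $x$-dependence factors out of the integral, the minimisation over $C$ and $\phi$ reduces to a least-squares projection in the complex plane. A sketch in Python, assuming the two shear profiles have been tabulated on a common grid:
\begin{verbatim}
import numpy as np

def match(ty, dw_ssl, dw):
    """Fit C*exp(i*phi)*dw to dw_ssl in the least-squares sense;
    ty, dw_ssl, dw are arrays tabulating the two shear profiles."""
    z = np.trapz(np.conj(dw) * dw_ssl, ty) / np.trapz(np.abs(dw)**2, ty)
    return np.abs(z), np.angle(z)  # reported: C_m=0.8980, phi_m=1.5708
\end{verbatim}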
\begin{figure}
$\tilde y$
\includegraphics[width=0.15\textwidth]{omegaWsslWwwX0o6}~
\includegraphics[width=0.15\textwidth]{omegaWsslWwwX1o6}~
\includegraphics[width=0.15\textwidth]{omegaWsslWwwX2o6}~
\includegraphics[width=0.15\textwidth]{omegaWsslWwwX3o6}~
\includegraphics[width=0.15\textwidth]{omegaWsslWwwX4o6}~
\includegraphics[width=0.15\textwidth]{omegaWsslWwwX5o6}
\caption{Comparison of spanwise shear $\RealPart \tilde w_{\SSL}'(\tilde y)e^{ik_x x}$ (solid) with $\RealPart C_m\tilde w'(\tilde y)e^{i(k_x x+\phi_m)}$ (dashed), for $k_x x/(2\pi)=0, 1/6,2/6,3/6,4/6,$ and $5/6.$
\label{fig:SpanwiseShear}}
\end{figure}
We believe that the agreement is sufficiently close to expect that these two spanwise profiles will lead to similar magnitudes of skin-friction reduction. For completeness, the comparison of the spanwise velocity profiles is shown in Figure~\ref{fig:SpanwiseVelocity}.
\begin{figure}
$\tilde y$
\includegraphics[width=0.15\textwidth]{WsslWwwX0o6}~
\includegraphics[width=0.15\textwidth]{WsslWwwX1o6}~
\includegraphics[width=0.15\textwidth]{WsslWwwX2o6}~
\includegraphics[width=0.15\textwidth]{WsslWwwX3o6}~
\includegraphics[width=0.15\textwidth]{WsslWwwX4o6}~
\includegraphics[width=0.15\textwidth]{WsslWwwX5o6}
\caption{Comparison of spanwise velocity $\RealPart \tilde w_{\SSL}(\tilde y)e^{ik_x x}$ (solid) with $\RealPart C_m\tilde w(\tilde y)e^{i(k_x x+\phi_m)}$ (dashed), for $k_x x/(2\pi)=0, 1/6,2/6,3/6,4/6,$ and $5/6.$
\label{fig:SpanwiseVelocity}}
\end{figure}
\subsubsection{Comparison of the power required for control in the SSL and wavy-wall cases}
The power per unit wall area (on one wall) required to drive the SSL flow can be expressed in wall units as
$$
\Phi^+_{\SSL}=\hat W^2 k_x^{1/3}\int_0^\infty | \tilde w'_{\SSL} |^2/2\,d\tilde y,
$$
while the corresponding expression in the wavy-wall case is $\Phi^+=\Phi^+_{\mathrm{w}}+\Phi^+_{\mathrm{u}},$ where
$$
\Phi^+_{\mathrm{w}}= \left(k_z k_x^{-2/3} |\hat p|\right)^2 k_x^{1/3} \int_0^\infty |\tilde w'(\tilde y)|^2/2\,d\tilde y,
$$
$$
\Phi^+_{\mathrm{u}}= \left(k_z^2 k_x^{-5/3} |\hat p|\right)^2 k_x^{1/3} \int_0^\infty |\tilde u'(\tilde y)|^2/2\,d\tilde y.
$$
If $\hat p$ is selected by the matching rule (\ref{eqn:pOfW}) then $\Phi^+_{\mathrm{w}}\le\Phi^+_{\SSL}.$ This is an artefact of the specific way matching was done: optimising over $C$ is equivalent to projecting the SSL solution onto the direction of the wavy-wall solution in the $L_2$ functional space: the length of the projection of a vector is always less than or equal to the length of the vector itself. To be on the safe side, therefore, it is better to assume that the energy needed to drive the spanwise component of the wavy-wall flow to achieve the same skin friction reduction as in the SSL flow is the same as the energy needed to drive the SSL flow. With this assumption, if the equivalent wavy-wall flow had no $u$ component it would generate the same overall net drag reduction as the SSL flow. In reality, the $u$ component, however, results in an additional energy dissipation, $\Phi^+_{\mathrm{u}}.$
Hence, the power required to generate the wavy-wall flow equivalent to the SSL flow will always be greater, with the ratio equal to
$$
r=\frac{\Phi^+}{\Phi_{\mathrm{w}}^+}=\frac{||\hat w'||^2+||\hat u'||^2}{||\hat w'||^2}=1+\frac{k_z^2}{k_x^2}\frac{||\tilde u_w'+\frac{k_x^2}{k_z^2}\tilde u_p'||^2}{||\tilde w' ||^2}.
$$
Here, the squared norm $||f||^2=\int_0^\infty |f|^2\,d\tilde y.$
The ratio $r$ turns out to be dependent on $k_x/k_z$ only, and it is also obvious that there exist a value of $k_x/k_z$ for which the power required will have a minimum. Calculating the norms numerically gives
$$r=3.122 +2.323
\frac{k_x^2}{k_z^2}+0.7986\frac{k_z^2}{k_x^2}.
$$
Minimizing $r$ gives $r_{m}=5.846,$ which is attained at
$$
\left.\frac{k_x}{k_z}\right|_{\mathrm{opt}}=0.7657.
$$
This corresponds to the angle between the mean flow direction and the direction perpendicular to the wall crests and troughs
$\Theta_{\mathrm{opt}}=52.56^\circ.$
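The minimisation is elementary: with $t = k_x^2/k_z^2$, setting $dr/dt=0$ gives $t = \sqrt{0.7986/2.323}$. A short numerical check in Python:
\begin{verbatim}
import numpy as np

t = np.sqrt(0.7986 / 2.323)              # optimal (k_x/k_z)^2
r_min = 3.122 + 2.323 * t + 0.7986 / t   # -> 5.846
print(np.sqrt(t))                        # k_x/k_z -> 0.7657
print(np.degrees(np.arctan(1 / np.sqrt(t))))  # Theta -> 52.56 deg
\end{verbatim}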
\subsection{Drag reduction estimate}
\begin{figure}
\begin{center}
\raisebox{0.27\textwidth}{$P_{\mathrm{req}}$}\includegraphics[width=0.5\textwidth]{PowerRequiredSSL}
\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$\lambda_x^+$
\end{center}
\caption{Power required to move the wall in SSL flow as a function of the wavelength $\lambda_x^+$; $\hat W^+=6$ (solid), $\hat W^+=12$ (dashed), DNS \cite{Viotti_Quadrio_Luchini_2009} (dots). \label{PowerRequiredSSL}}
\end{figure}
In \cite{Viotti_Quadrio_Luchini_2009} the drag reduction was characterised by the net gain in the power needed to drive the flow, expressed as a percentage share of the power needed to drive the uncontrolled flow. The power needed to drive the uncontrolled flow in a plane channel, if expressed in wall units, is
$$
\Phi^+_0=2U_b^+=2\sqrt{\frac 2{C_f}},$$
where $U_b^+$ is the bulk velocity. We will use the same empirical formula for the skin friction coefficient $C_f=0.0336{ Re}_\tau^{-0.273}$ as in \cite{Viotti_Quadrio_Luchini_2009}. The net power gain $P_{\mathrm{net}}$ was obtained as a sum of the power $P_{\mathrm{sav}}$ saved due to the reduction of the skin friction and the (negative) power $P_{\mathrm{req}}$ required to drive the in-plane wall motion. In \cite{Viotti_Quadrio_Luchini_2009} $P_{\mathrm{sav}}$ and $P_{\mathrm{req}}$ were obtained from direct numerical simulations, and expressed in \% of the power needed to drive the uncontrolled flow, and the same convention is used here. It was also shown that with good accuracy $P_{\mathrm{req}}$ can be also obtained from the solution of the linearized boundary layer equation (\ref{eqn:TildeWssl}). It should be understood, of course, that (\ref{eqn:TildeWssl}) is formulated in wall units of the controlled flow while $P_{\mathrm{req}}$ is presented in \cite{Viotti_Quadrio_Luchini_2009} as a function of the wavelength $\lambda^+_x$ in the units normalised with the skin friction of the uncontrolled flow. The reduction of the skin friction is proportional to $100\%-P_{\mathrm{sav}},$ which, after some manipulation, gives the formula
$$
\frac{P_{\mathrm{req}}}{100\%}=-\frac{2\Phi_{\SSL}^+}{\Phi_0^+}=-{\hat W}^2 \sqrt{\frac {C_f}2} \left( \frac{2 \pi(1-P_{\mathrm{sav}}/100\%) }{ \lambda_x^+}\right)^{1/
3}\int_0^\infty |\tilde w'_{\SSL}(\tilde y)|^2/2\,d\tilde y. $$
This formula, although not given explicitly in \cite{Viotti_Quadrio_Luchini_2009}, was obviously used there. According to our calculations and the solution given in \cite{Viotti_Quadrio_Luchini_2009}, $\int_0^\infty |\tilde w'_{\SSL}(\tilde y)|^2/2\,d\tilde y=0.3157.$ Comparison of this formula with the DNS results of \cite{Viotti_Quadrio_Luchini_2009}, measured from a digitised plot in that paper, is shown in Figure~\ref{PowerRequiredSSL} and is in fact equivalent to a part of their Figure\,6. The agreement confirms that we use the same approach as in \cite{Viotti_Quadrio_Luchini_2009}.
\begin{figure}
\begin{center}
\raisebox{0.27\textwidth}{$P_{\mathrm{sav}}$}\includegraphics[width=0.5\textwidth]{PowerSavedFits}
\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$\lambda_x^+$
\end{center}
\caption{
Power saved fits (\ref{eqn:PsavA1}-\ref{eqn:PsavA12}), bottom to top, for the DNS data of \cite{Viotti_Quadrio_Luchini_2009}.
\label{fig:PowerSaved}
}
\end{figure}
Using the data provided by the authors of \cite{Viotti_Quadrio_Luchini_2009}, the following fits were obtained for $P_{\mathrm{sav}}$ for $\hat W^+=1,2,6$ and $12$:
\begin{multline}\label{eqn:PsavA1}
P_{\mathrm{sav},1}=1.135 + 0.002929 {\lambda_x^+} - 1.205\cdot 10^{-6} {\lambda_x^+}^2 +
1.447\cdot 10^{-10} {\lambda_x^+}^3\\ - 1.047\cdot 10^{-13} {\lambda_x^+}^4 +
2.609\cdot 10^{-17} {\lambda_x^+}^5,
\end{multline}
\begin{multline}\label{eqn:PsavA2}
P_{\mathrm{sav},2}=-1.856 + 0.03954 {\lambda_x^+} - 5.28537\cdot 10^{-5} {\lambda_x^+}^2 +
3.498\cdot 10^{-8} {\lambda_x^+}^3\\ - 1.127\cdot 10^{-11} {\lambda_x^+}^4 +
1.328\cdot 10^{-15} {\lambda_x^+}^5,
\end{multline}
\begin{multline}\label{eqn:PsavA6}
P_{\mathrm{sav},6}=15.25 + 0.04888\lambda_x^+ - 4.441\cdot 10^{-5} {\lambda_x^+}^2 +
1.628\cdot 10^{-8} {\lambda_x^+}^3\\
- 2.845\cdot10^{-12} {\lambda_x^+}^4 +
1.938\cdot10^{-16} {\lambda_x^+}^5,
\end{multline}
\begin{multline}\label{eqn:PsavA12}
P_{\mathrm{sav},12}=27.90 + 0.03824 {\lambda_x^+} - 2.810\cdot10^{-5} {\lambda_x^+}^2 +
8.015\cdot10^{-9} {\lambda_x^+}^3\\ - 1.082\cdot10^{-12} {\lambda_x^+}^4 +
5.535\cdot10^{-17} {\lambda_x^+}^5.
\end{multline}
The quality of the fits is illustrated by Figure~\ref{fig:PowerSaved}.
When the wavy wall is used to create the spanwise shear instead of the in-plane wall motion, the power required should be multiplied by the ratio $r$ depending on the angle of the wall wave, with the minimal value $r_m.$ The net power saving is then $P_{\mathrm{sav}}+rP_{\mathrm{req}}.$ Figure~\ref{fig:DragReduction} shows the result for $r=r_m$ for two wall heights equivalent to $\hat W^+=2$ and $6$, together with the $\hat W^+=6$ SSL case. As one can see, in the case corresponding to $\hat W^+=2$ the wavy wall can be expected to give a positive net power saving. The maximum of about 2.4\% drag reduction is attained at $\lambda_x^+\approx1520.$ In the case corresponding to a wavy wall equivalent to SSL with $\hat W^+=6$ only drag increase is predicted. We interpolated between $\hat W^+=1,$ $\hat W^+=2,$ and $\hat W^+=6$ cases, and it turned out that $\hat W^+=2$ gives very nearly the best result.
Note that the calculations were done using the Mathematica package, and while the parameters affecting the accuracy were varied to verify it, the actual accuracy might be somewhat less than the four digits given here. In any case, the nature of the present study is such that the values obtained are only indicative.
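For transparency, the quoted optimum can be reproduced from the fit (\ref{eqn:PsavA2}) and the $P_{\mathrm{req}}$ formula alone. A minimal sketch in Python, assuming $\mathrm{Re}_\tau = 200$ as in the DNS data of \cite{Viotti_Quadrio_Luchini_2009}:
\begin{verbatim}
import numpy as np

Cf = 0.0336 * 200.0 ** -0.273            # Re_tau = 200 (assumption)
W, r_m, I = 2.0, 5.846, 0.3157           # I = int |w'_ssl|^2/2 dy

def P_sav2(lam):                         # fit for W+ = 2, in per cent
    c = [-1.856, 3.954e-2, -5.28537e-5, 3.498e-8, -1.127e-11, 1.328e-15]
    return sum(ck * lam**k for k, ck in enumerate(c))

def P_net(lam):
    ps = P_sav2(lam)
    pr = -100.0 * W**2 * np.sqrt(Cf / 2) * I \
         * (2 * np.pi * (1 - ps / 100.0) / lam) ** (1 / 3)
    return ps + r_m * pr

lam = np.linspace(500.0, 3000.0, 2501)
net = np.array([P_net(l) for l in lam])
print(lam[net.argmax()], net.max())      # about 1520 and 2.4 per cent
\end{verbatim}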
\begin{figure}
\begin{center}
\raisebox{0.27\textwidth}{$P_{\mathrm{net}}$}\includegraphics[width=0.5\textwidth]{DragReductionCorrected}
\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$\lambda_x^+$
\end{center}
\caption{
Net power savings for wavy wall case: $P_{\mathrm{sav},2}+r_m\left.P_{\mathrm{req}}\right|_{\hat W^+=2}$ (solid line) and $P_{\mathrm{sav},6}+r_m\left.P_{\mathrm{req}}\right|_{\hat W^+=6}$ (thick dashed line) and for SSL: DNS for $\hat W^+=6$ (dots) and $P_{\mathrm{sav},6}+\left.P_{\mathrm{req}}\right|_{\hat W^+=6}$ (thin dashed line).
\label{fig:DragReduction}
}
\end{figure}
\section{Discussion and conclusions}
The present analysis is based on assumptions which, although being reasonable, have not yet been confirmed. The most natural next step will consist in performing direct numerical simulation of a flow past a wavy wall and checking that the drag is indeed reduced. This task is more complicated than a now-standard direct numerical simulation of a flow in a flat channel, not only because of the more complicated geometry, but also because the wavy wall introduces an additional length scale in the spanwise direction, which is noticeably greater than the typical spanwise scale of near-wall structures. For this reason the spanwise dimension of the computational domain might need to be increased. The effect itself is not particularly large, and it is expected to be observed in a relatively narrow range of wall shape parameters. Therefore, a trial-and-error approach to selecting these parameters in direct numerical simulation could be inefficient. Providing a reasonable estimate for these parameters was the main motivation of the present study.
Note that this study is based on the direct numerical simulation results \cite{Viotti_Quadrio_Luchini_2009} for $\mathrm{Re}_\tau=200,$ and the conclusions for other values of the Reynolds number might be somewhat different.
Concerning the required wave height, our results only indicate the magnitude of the spanwise motion that the wall should generate. This might be enough for direct numerical simulations, since estimating approximately this magnitude for a particular wall height does not require averaging over a long time: here, trial-and-error is more likely to work. From the mathematical viewpoint, the required estimate could be obtained by finding the inviscid outer limit of the asymptotic expansion of the flow past a wavy wall as the Reynolds number tends to infinity. This was not undertaken because the first direct numerical simulations are likely to be tried for relatively small Reynolds numbers. An alternative is to perform an analogue of the analysis presented in this work numerically using an eddy viscosity model. This line of research is currently being pursued by colleagues of the author.
Drag reduction by 2.4\% is much smaller than what can be obtained by in-plane wall motion. However, the method proposed here is much easier to implement in practice, since it is passive and does not require wall motion.
It is also easier to implement than riblets, because the wavelength of the proposed wall is two orders of magnitude greater than the wavelength of riblets, and because the height-to-wavelength ratio for the proposed wall is also much smaller than that of riblets.
The present analysis is based on the assumption that similar distributions of spanwise shear will result in similar decrease in turbulent friction, while the longitudinal perturbation of the mean velocity will not affect turbulent friction. This can only be verified in direct numerical simulations or experiment. Note that for the very first tests numerical simulations need not be done for a wavy wall; the same effect might be expected to be achieved if a steady body force is applied simulating the effect of the pressure distribution caused by a wavy wall.
Another significant source of inaccuracy in the above analysis is the use of linearized boundary layer equations and the assumption that the mean profile in the uncontrolled flow is linear. Therefore, the particular value of 2.4\% should be considered as indication of the accuracy with which the drag needs to be determined in direct numerical simulations rather than a definite prediction. Similarly, the optimal parameters of the wall obtained here are just an indication of the reasonable initial approximation for the search of the true optimal.
It might be possible to increase the effect by optimising the wall shape, since it need not be of the form $\sin{(k_x x + k_z z)}.$ More complicated shapes might be better. Optimisation over complex wall shapes would be impossible on the basis of direct numerical simulations with the computing power currently available. Fortunately, recent studies \cite{DuqueDaza2012,Moarref2012} suggest an approach to such optimisation that is much more efficient than direct numerical simulation, albeit approximate. Once approximate optimisation using these approaches has been performed, further refinement might be done by direct numerical simulations and experiment.
Since the riblet wavelength and the proposed wall wavelength are so different, they can be combined to work simultaneously. It has already been demonstrated that the effect of wall oscillations and effect of riblets are almost additive~\cite{Vodopianov2013}.
The main conclusion of the present work can be formulated in the following way. A turbulent flow past a wall whose height is of the form $h=H\sin (k_x^+ x^+ + k_z^+ z^+),$ where $x^+$ is the coordinate in the main flow direction, with $k_x^+=2\pi/\lambda_x^+\approx 2\pi/1520$ and $k_x^+/k_z^+\approx 0.7657$ and with a suitable $H$, can be expected to have the skin friction about 2.4\% less than the flow past a flat wall, other things being the same. The value of $H$ should be such as to generate spanwise velocity of the order of $2$ wall units. The wall units here are based on the skin friction in the flow past a flat wall. The values given are only indicative.
\bigskip
This work would be unlikely to appear without the research environment provided by the large team working on the project on turbulent drag reduction under the
grant EP/G060215/1, funded by EPSRC together with Airbus Operations Ltd and EADS UK Ltd. The author would like to use this opportunity to thank all his colleagues in this team.
\bibliographystyle{unsrt}
\section{Introduction}
For autonomous control systems, the control objectives need to be achieved robustly against system uncertainties.
Central to many control synthesis techniques for uncertain systems is backward reachability analysis.
Given an uncertain control system and a set $X_0$ of target states, the backward reachable set (BRS) consists of the states that can be steered into $X_0$ in finite time, regardless of the system uncertainties.
Being able to compute such sets is important to design controllers with safety or reachability objectives \cite{bertsekas1972infinite,lygeros1999controllers}, and is one building block for achieving more complicated control tasks
\cite{chen2018signal}.
Whenever the exact computation is hard, an under-approximation can still be used to define a conservative control law.
A variety of approaches exists in the literature, including polyhedral computation \cite{blanchini2008set}, interval analysis \cite{li2017invariance}, HJB method \cite{mitchell2005time} and polynomial optimization \cite{lasserre2015tractable}, just to name a few.
For linear dynamics with linear constraints, polyhedra can be used to represent the BRSs as they are closed under linear transformation, Minkowski addition and subtraction, and can be computed leveraging linear optimization tools. However, it is limited to low dimensional systems (typically, state dimension $\leq 4$) due to an expensive quantifier elimination step.
One related problem is the forward reachability analysis, where we deal with uncertain system with \textit{no control inputs} (e.g., closed-loop systems), and compute the set of states that can be visited in the future from some initial state in a given set $X_0$.
Such forward reachable sets can be computed offline for verification and online for state prediction \revise{\cite{althoff2021set}}.
Often times, the forward reachable sets are overestimated for robustness.
For linear systems, a special polyhedron called zonotope is widely used to represent forward reachable sets thanks to the favorable complexity of applying linear transformations (for forward state evolution) and Minkowski additions (to account for additive uncertainty) to zonotopes (see, e.g., \cite{althoff2015introduction,girard2005reachability}).
Algorithms that compute zonotopic forward reachable sets are more scalable than those dealing with general polyhedra.
One natural question is: for \textit{uncertain} linear dynamics, is there a way to reverse the time so that the efficient zonotopic set computation for forward reachability analysis can be directly adopted to compute BRSs? Unfortunately, this is not the case. The main reason is that there lacks a meaningful notion of two-player game in forward reachability analysis.
In the forward case, there is only one player (i.e., the environment) picking the initial state and the system uncertainty, whereas in the backward case, there are two players (i.e., the controller and the environment) picking the control input and the uncertainty in turn \revise{(see Section 4.2 of \revise{\cite{mitchell2007comparing}})}. Particularly, the existence of the environment player leads to a Minkowski subtraction step in the sequential BRS computation, but zonotopes are not closed under Minkowski subtraction \cite{althoff2015computing}.
\revise{Therefore, combining the idea of time-reversing and efficient computational tools for forward reachability (e.g., based on zonotopes \cite{liebenwein2020compositional}, \cite{han2016enlarging} or polynomial zonotopes \cite{kochdumper2020computing})} was explored only for \textit{deterministic} systems, but using zonotopes for \textit{uncertain} systems' backward reachability, to the best of our knowledge, is still missing.
In this paper, we use zonotopes to represent and compute BRSs for uncertain linear systems.
The key ingredient is an efficient way to under/over-approximate the Minkowski difference of two zonotopes by solving convex optimization problems.
While the under-approximation allows us to efficiently compute a subset of the BRS \lcss{without polyhedral projection}, the over-approximation can be used to quantify how conservative this subset is.
Different from \cite{althoff2015computing}, which manipulates a hyperplane-representation, our approach only deals with the generator-representations of zonotopes, and hence is more efficient and suitable for sequential computation, but at the cost of accuracy. The accuracy issue, however, is mitigated by the fact that our subtrahend zonotope represents the impact of uncertainties and is usually small compared to the minuend zonotope. Moreover, \cite{althoff2015computing} does not guarantee if the approximation is an inner one or an outer one.
\lcss{We also leverage the linear encoding of zonotope-containment problems \cite{sadraddini2019linear} and derive an alternative approach for under-approximating the Minkowski difference between zonotopes.
Theoretical analysis and experiments show that our approach scales differently from this alternative.
}
In order to upper bound the complexity of each step of the computation, we further present a way to reduce the order of the obtained zonotopic BRSs.
Zonotope order reduction is extensively studied (e.g., see \cite{kopetzki2017methods}, \cite{yang2018comparison} and the references therein), but our approach is different: we search for a lower order zonotope \textit{enclosed} by the given zonotope, whereas existing techniques, focusing on forward reachability analysis, all look for outer approximations.
Our approach is evaluated with randomly generated zonotopes with different dimensions and orders, and its efficacy is illustrated with several examples.
\section{Preliminaries}
Let $G = [g_1, g_2, \dots g_N] \in \mathbb{R}^{n\times N}$ be a set of \text{generators}, and $c \in \mathbb{R}^{n}$ be a \text{center vector}. A \text{zonotope} $Z$ with generator-representation (or G-rep) $(G, c)$ is defined to be the set
$\big\{c + \sum_{i=1}^N \theta_i g_i \mid \theta_i \in [-1,1], \ i = 1,2,\dots N\big\}$.
With a slight abuse of notation, we will write $Z = (G, c)$.
Let $H \in \mathbb{R}^{L\times n}$ and $h \in \mathbb{R}^L$, a \textit{polyhedron} with hyperplane-representation (or H-rep) $(H,h)$ is the set $\{x \in \mathbb{R}^n \mid Hx \leq h\}$. If polyhedron $X$ is bounded, $X$ is called a \text{polytope}.
A set $V= \{x_1, x_2 \dots, x_M\}\subseteq \mathbb{R}^{n}$ is called a
vertex-representation (or V-rep) of a polytope $X$ if $X$ is the convex hull of $V$, i.e., $X = \text{cvxh}(V) : = \big\{ \sum_{j=1}^M \lambda_j x_j \mid \sum_{j=1}^M \lambda_j = 1, \lambda_j \in [0,1], j = 1,2,\dots, M\big\}$, \revise{where $\text{cvxh}$ denotes the convex hull}. Let $A \in \mathbb{R}^{L \times n}$ and $X \subseteq \mathbb{R}^n$ be a set, $AX$ denotes the set $\{Ax \mid x \in X\}$.
Let $X, Y \subseteq \mathbb{R}^n$ be two sets, the \text{Minkowski sum} of $X$ and $Y$, denoted by $X \oplus Y$, is the set $\{x + y\mid x \in X, y \in Y\}$.
Whenever $X = \{x\}$ is a singleton set, we will write $x + Y$ for $X \oplus Y$.
The \text{Minkowski difference} of $X$ and $Y$, denoted by $X\ominus Y$, is defined to be $\{z \in \mathbb{R}^n \mid z + Y\subseteq X\}$.
For the Minkowski arithmetics, we assume that the operations are done in order from left to right, except as specified otherwise by parentheses. The following lemmas will be useful.
\revise{
\begin{lem}\label{lem:Min}
Let $X, Y, Z \subseteq \mathbb{R}^n$.
\begin{itemize}[nolistsep]
\item[i)] [\cite{li2019robustly}, Proposition 3.1, \cite{yang2020efficient}, Lemma 4] $X \ominus Y \oplus Z \subseteq X \oplus Z \ominus Y$, particularly, $X \ominus Y \oplus Y \subseteq X \subseteq X \oplus Y \ominus Y$.
\item[ii)] [From \cite{montejano1996some}] If $X$, $Y$ and $Z$ are convex, compact and nonempty, then $X \oplus Z = Y \oplus Z$ implies that $X = Y$.
\item[iii)] If $X$, $Y$ are convex, compact and nonempty, then $X \oplus Y \ominus Y = X$.
\end{itemize}
\end{lem}
\begin{proof}
To prove iii), note that, by i), $X \oplus Y \ominus Y \oplus Y = X \oplus Y$. Then applying item ii) yields $X \oplus Y \ominus Y = X$.
\end{proof}
}
\begin{lem} \ [\liren{From \cite{girard2005reachability}}] Let $Z = (G, c) \subseteq \mathbb{R}^n$ be a zonotope.
\begin{itemize}[nolistsep]
\item[i)] $Z = \bigoplus_{i=1}^N Z_i$ where $Z_i = \big([g_i], c_i\big)$ s.t. $\sum_{i=1}^N c_i = c$.
\item[ii)] Let $A \in \mathbb{R}^{L\times n}$, $AZ = \big(AG, Ac)$.
\item[iii)] Let $Z' = (G', c')$, $Z \oplus Z' = ([G, G'], c + c')$.
\end{itemize}
\end{lem}
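These G-rep operations translate directly into array manipulations. A minimal Python sketch, with a zonotope stored as the pair \texttt{(G, c)} (a convention we adopt for all code sketches below):
\begin{verbatim}
import numpy as np

def linear_map(A, Z):
    """A Z = (A G, A c)  (Lemma 2, ii)."""
    G, c = Z
    return A @ G, A @ c

def mink_sum(Z1, Z2):
    """Z1 + Z2 = ([G1, G2], c1 + c2)  (Lemma 2, iii)."""
    (G1, c1), (G2, c2) = Z1, Z2
    return np.hstack([G1, G2]), c1 + c2
\end{verbatim}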
\section{Backward Reachable Sets}
Consider a discrete-time system in the following form:
\begin{align}
x_{t+1} = A x_t + B u_t + E w_t + K,
\label{eq:sys}
\end{align}
where $x \in \mathbb{R}^{n_x}$ is the state, $u \in U \subseteq \mathbb{R}^{n_u}$ is the control input and $w \in W \subseteq \mathbb{R}^{n_w}$ is the disturbance input.
Given a set $X_0$ of target states, we want to compute (or to under-approximate, if exact computation is hard) the $k$-step backward reachable set $X_k$ of set $X_0$, defined recursively as
\begin{align}
\hspace{-1.5mm}X_{k+1} & = \{x\in \mathbb{R}^{n_x} \mid \exists u \in U: \forall w \in W: \nonumber \\
& \ \ \ Ax + Bu + Ew + K\in X_k\}, \ \ k = 0,1,2 \dots \label{eq:cpre}
\end{align}
Set $X_k$ contains the states from where it is possible to reach the target set $X_0$ in \textit{exactly} $k$ steps.
A weaker definition of the $k$-step BRS would require $X_0$ to be reached in \textit{no more than} $k$ steps, whose formal definition is similar to Eq. \eqref{eq:cpre} except for an extra ``$\cup X_k$'' at the end of the formula.
Here, we adopt the stronger definition in Eq. \eqref{eq:cpre} for simplicity because the union operation may lead to non-convex sets.
There exist slightly different notions of reachable sets \cite{kurzhanskiy2011reach}, depending on the order of the quantifiers. We will focus on under-approximating the set defined by \eqref{eq:cpre} while our approach applies in general.
Suppose that the sets $U$, $W$, and $X_0$ are polytopes, and that the H-rep of $U$, $X_0$ and the V-rep of $W$ are known; then one can compute $X_k$ as a polytope in H-rep, i.e.,
\begin{align}
\hspace{-2.5mm}X_{k+1} & = \textbf{Proj}_x\big(\{x\in \mathbb{R}^{n_x}, u \in U \mid\forall w_j \in V_W: \nonumber \\ & \ \ \ Ax + Bu + Ew_j \in X_k\}\big), \ k = 0,1,2\dots,
\end{align}
where $\textbf{Proj}_x(S) = \{x\mid \exists u: (x,u)\in S\}$ is the projection operation and $V_W$ denotes the set of vertices of $W$ \revise{(e.g., see \cite{smith2016interdependence}, Proposition 1)}.
However, polytope projection is time-consuming, which limits the use of this approach to low dimensional systems (typically $n_x \leq 4$).
In this paper, we consider under-approximating the BRSs of system \eqref{eq:sys} instead
under the following assumptions.
\begin{itemize}[nolistsep]
\item[A1.] The target set is a zonotope (denoted by $Z_0$ hereafter), whose G-rep is known.
\item[A2.] The disturbance set $W$ is a polytope, whose H-rep $(H,h)$ and V-rep $V$ are both known.
\item[A3.] Matrix $A \in \mathbb{R}^{n_x \times n_x}$ is invertible. This assumption is true whenever Eq. \eqref{eq:sys} is obtained by time-discretizing an underlying continuous-time linear dynamics.
\end{itemize}
Finding under-approximation of BRSs is useful in control problems with reachability objectives and falsification problems against safety requirements \cite{chou2018using}.
\section{Solution Approach}
We explore the use of zonotopes in under-approximating the BRS $X_k$. This is based on i) the modest computational complexity of operations on zonotopes such as Minkowski addition and affine transformation, and ii) the fact that \eqref{eq:cpre} can be re-written as follows using Minkowski arithmetic \revise{\cite{kurzhanskiy2011reach}}:
\begin{align}
X_{k+1} = \{x \in \mathbb{R}^{n_x} \mid Ax \in X_k \ominus EW \oplus -BU - K\}. \label{eq:cpreM}
\end{align}
In Eq. \eqref{eq:cpreM}, if $W = \{0\}$ and the term ``$\ominus EW$'' were not there, then one could show inductively that, under assumptions A1-A3, $X_{k+1}$ is a zonotope whose G-rep can be easily computed from the G-reps of $X_k$ and $U$ after Minkowski addition and linear transformation.
Whenever $W$ is not a singleton set, the key step is to efficiently under and over approximate the Minkowski difference in Eq. \eqref{eq:cpreM} with zonotopes in their G-reps. Whereas the former leads to an inner approximation of $X_{k+1}$,
the latter one can be used to quantify the conservatism of this inner approximation.
\subsection{Zonotopic Inner/Outer Approximation of $Z\ominus EW$}
Let $Z = (G, c) \subseteq \mathbb{R}^{n_x}$ be a zonotope, where $G = [g_1, g_2, \dots, g_N]$. We formulate two optimization problems, one computes a zonotopic outer approximation $\overline{\mathfrak{Z}}(Z, EW)$, and the other computes a zonotopic inner approximation $\underline{\mathfrak{Z}}(Z, EW)$, of set $EW$ using $Z$ as a ``template''. The obtained outer/inner approximation are also in G-reps. Particularly, their generators are scaled versions of $Z$'s generators, i.e., in the form of $\alpha_i g_i$ for some $\alpha_i \in [0,1]$ (see Definition \ref{defn:align}). We then show that the Minkowski difference $Z \ominus \overline{\mathfrak{Z}}(Z, EW)$ and $Z \ominus \underline{\mathfrak{Z}}(Z, EW)$ can be done element-wise via generator subtraction. This leads to an efficient way to inner/outer approximate $Z \ominus EW$ with zonotopes in G-reps.
This technique is the key ingredient of our BRS under-approximation.
\begin{defn}\label{defn:align}
Let $Z = (G, c)$ and $Z'= (G', c')$ be zonotopes. $Z'$ is \textit{aligned with} $Z$ if $G = [g_1, g_2, \dots, g_N]$ and $G' =[\alpha_1 g_1, \alpha_2 g_2, \dots, \alpha_N g_N]$ for some $\alpha_i \in [0,1]$.
\end{defn}
\subsubsection{Outer approximation of $EW$}
Consider the following linear programming problem:
\begin{align}
\hspace{-5mm} \begin{array}{rl}
\min_{\theta, \, \alpha, \, c} & \sum_{i=1}^N b_i \alpha_i \\
\text{s.t.} & \forall w_j \in V: c + \sum_{i=1}^N \theta_{ij} g_i = E w_j \\
& \ \ \ |\theta_{ij}| \leq \alpha_i \leq 1, \ i = 1,2,\dots, N \\
\end{array},
\tag{min-out}
\label{eq:minout}
\end{align}
where $b_i >0$ are constants and $\theta$ and $\alpha$ are vectors aggregated from $\theta_{ij}$ and $\alpha_i$ respectively.
The V-rep $V$ of the disturbance set $W$, which is available by Assumption A2, is used to formulate the above problem. Let $N$ be the number of generators in the template zonotope $Z$, $n_x$ be the dimension of the ambient space, and $M$ be the number of vertices in $V$. In the optimization problem \eqref{eq:minout},
\begin{align}
\begin{array}{rl}
\# \text{variables} & \hspace{-2mm} = \mathcal{O}(MN + n_x), \\
\# \text{constraints} & \hspace{-2mm}= \mathcal{O}\big(M(N + n_x)\big).
\end{array}
\label{eq:bigO1}
\end{align}
\begin{prop}\label{prop:Zminout}
Let $(\theta, \overline{\alpha}, \overline{c})$ be the minimizer of the optimization problem \eqref{eq:minout}. Define $\overline{\mathfrak{Z}}(Z, EW) = \big([\overline{\alpha}_1 g_1, \overline{\alpha}_2 g_2, \dots, \overline{\alpha}_N g_N ], \overline{c}\big)$.
We have $EW \subseteq \overline{\mathfrak{Z}}(Z, EW)$.
\end{prop}
\begin{proof}
By the conditions in \eqref{eq:minout}, for any $i$ and $w_j \in V$, there exist $\theta_{ij} \in [-\overline{\alpha}_i,\overline{\alpha}_i]$
s.t. $Ew_j = \overline{c} + \sum_{i=1}^N \theta_{ij}g_i $.
Equivalently, there exist $\theta_{ij}' \in [-1,1]$ s.t. $Ew_j = \overline{c} + \sum_{i=1}^N \theta_{ij}' \overline{\alpha}_i g_i $.
Hence $EV \subseteq \overline{\mathfrak{Z}}(Z, EW) = \big([\overline{\alpha}_1 g_1, \overline{\alpha}_2 g_2, \dots, \overline{\alpha}_N g_N], \overline{c}\big)$.
It then follows that $EW = E \text{cvxh}(V) = \text{cvxh}(EV)\subseteq \overline{\mathfrak{Z}}(Z, EW)$ from the convexity of zonotope $\overline{\mathfrak{Z}}(Z, EW)$.
\end{proof}
In general, there does not exist a unique minimal (in the set inclusion sense) zonotopic outer approximation of $EW$ that aligns with the template zonotope $Z$. We hence minimize a weighted sum of $\alpha_i$'s. The weights $b_i > 0$ can be used for heuristic design to incorporate prior knowledge of disturbance set $W$. For example, when $W$ is a hyper-rectangle and $E \in \mathbb{R}^{n_x \times n_w}$ is full rank, we use
$b_i = \Vert Tg_i\Vert_1 - \Vert T g_i \Vert_\infty$,
where $T = (E^\top E)^{-1}E^\top$ when $n_x \geq n_w$ and $T =E^\top (E E^\top)^{-1}$ otherwise.
The idea is to encourage using generators that closely align with vector $Ee_p$, where $e_p$ is the $p^{\rm th}$ natural basis of vector space $\mathbb{R}^{n_w}$. A similar criteria
was used for zonotope order reduction in \cite{girard2005reachability}.
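For reference, \eqref{eq:minout} can be posed directly in a modelling language. A sketch in Python using the \texttt{cvxpy} package (our choice; any LP solver applies), where the columns of the matrix \texttt{EV} are the points $Ew_j$, $w_j \in V$:
\begin{verbatim}
import cvxpy as cp
import numpy as np

def min_out(G, EV, b):
    """LP (min-out): smallest aligned zonotope (alpha_i g_i, c)
    containing the columns of EV."""
    (n, N), M = G.shape, EV.shape[1]
    theta, alpha, c = cp.Variable((N, M)), cp.Variable(N), cp.Variable(n)
    ones = np.ones((1, M))
    cons = [EV == cp.reshape(c, (n, 1)) @ ones + G @ theta,
            cp.abs(theta) <= cp.reshape(alpha, (N, 1)) @ ones,
            alpha <= 1]
    cp.Problem(cp.Minimize(b @ alpha), cons).solve()
    return alpha.value, c.value
\end{verbatim}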
\subsubsection{Inner approximation of $EW$}
Consider the following optimization problem:
\begin{align}
\begin{array}{rl}
\max_{\alpha, \, c} & \textstyle\sum_{i=1}^N d_i \log(\alpha_i) \\
\text{s.t.} & H c + |HG| \alpha\leq h\\
& 0 \leq \alpha \leq 1
\end{array},
\tag{max-in}
\label{eq:maxin}
\end{align}
where \revise{$d_i \geq 0$ are constants} and
$|HG|$ is a matrix obtained by taking the element-wise absolute value of matrix $HG$. The H-rep $(H, h)$ of the set $EW$, which can be obtained from the H-rep of the disturbance set $W$ available by Assumption A2 (e.g., with $HE^{-1}$ in place of $H$ when $E$ is square and invertible), is used to formulate the above problem.
Suppose that $H$ has $L$ rows.
In \eqref{eq:maxin}, we have
\begin{align}
\begin{array}{rl}
\# \text{ variables} & \hspace{-2mm} = \mathcal{O}(N + n_x), \\
\# \text{ constraints}& \hspace{-2mm}= \mathcal{O}(N+L).
\end{array}
\end{align}
\begin{prop}
Let $(\underline{\alpha}, \underline{c})$ be the maximizer of optimization problem \eqref{eq:maxin}. Define $\underline{\mathfrak{Z}}(Z, EW) = \big([\underline{\alpha}_1 g_1, \underline{\alpha}_2 g_2, \dots, \underline{\alpha}_N g_N ], \underline{c}\big)$.
We have $\underline{\mathfrak{Z}}(Z, EW)\subseteq EW$.
\end{prop}
\begin{proof}
We first show that, for $\alpha \geq 0$ and any $c$, $Hc + |HG|\alpha \leq h$ if and only if
\begin{align}
\forall \theta \in \textstyle\prod_{i=1}^N[-\alpha_i, \alpha_i]: H (c + \sum_{i=1}^N \theta_{i} g_i)\leq h,
\label{eq:prop21}
\end{align}
where $\theta_i$ is the $i^{\rm th}$ element of $\theta$. Let $H_{\ell}$ and $h_{\ell}$ be the $\ell^{\rm th}$ row of $H$ and $h$ respectively. Eq. \eqref{eq:prop21} is equivalent to
\begin{align}
& \forall \ell \in \{1,2,\dots, L\}:
\begin{array}{rl}
\max_{\theta} & H_\ell (\underline{c} + G\theta) \leq h_\ell \\
\text{ s.t. }&
\theta \in \textstyle\prod_{i=1}^N[-\alpha_i,\alpha_i]
\end{array} \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \Updownarrow \nonumber \\
& \forall \ell \in \{1,2,\dots, L\}: H_\ell c + |H_\ell G|\alpha \leq h_\ell.
\label{eq:prop22}
\end{align}
Eq. \eqref{eq:prop22} is equivalent to $Hc + |HG|\alpha \leq h$. Therefore the maximizer $(\underline{\alpha}, \underline{c})$ satisfies Eq. \eqref{eq:prop21}, which implies
\begin{align}
\forall \theta' \in \textstyle\prod_{i=1}^N[-1, 1]: H (\underline{c} + \sum_{i=1}^N \theta_{i}' \underline{\alpha}_i g_i)\leq h.
\end{align}
That is, $ \big([\underline{\alpha}_1 g_1, \underline{\alpha}_2 g_2, \dots, \underline{\alpha}_N g_N ], \underline{c}\big) \subseteq EW$.
\end{proof}
Again, the maximal (in the set inclusion sense) inner approximation does not exist in general. Here we instead maximize a weighted log-volume of the hyper-rectangle $\prod_{i=1}^N[-\alpha_i, \alpha_i] \subseteq \mathbb{R}^N$, with weights $d_i$.
\revise{In particular, as a heuristic, we pick $d_i = \Vert g_i \Vert$ for $i = 1,2,\dots, N$ throughout the paper}.
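Unlike \eqref{eq:minout}, problem \eqref{eq:maxin} has a concave logarithmic objective but is still a standard convex program. A sketch in the same style, with \texttt{(H, h)} an H-rep of $EW$:
\begin{verbatim}
import cvxpy as cp
import numpy as np

def max_in(G, H, h, d):
    """Convex program (max-in): large aligned zonotope
    (alpha_i g_i, c) inside {x : H x <= h}, an H-rep of EW."""
    n, N = G.shape
    alpha, c = cp.Variable(N, pos=True), cp.Variable(n)
    cons = [H @ c + np.abs(H @ G) @ alpha <= h, alpha <= 1]
    cp.Problem(cp.Maximize(d @ cp.log(alpha)), cons).solve()
    return alpha.value, c.value
\end{verbatim}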
\subsubsection{Efficient Minkowski Difference between Aligned Zonotopes}
Next, we show that the Minkowski difference amounts to element-wise generator subtraction when the subtrahend zonotope is aligned with the minuend zonotope.
\begin{prop}\label{prop:ZMinkowskiminus}
\normalfont
Let $Z = (G, c)$ and $Z'= (G', c')$ be zonotopes and suppose that $Z'$ is aligned with $Z$. Then $Z \ominus Z' = \big([(1-\alpha_1) g_1, (1-\alpha_2) g_2, \dots, (1 - \alpha_N) g_N], c - c'\big)$.
\end{prop}
\revise{
\begin{proof}
Let $\Delta: = \big([(1-\alpha_1) g_1, (1-\alpha_2) g_2, \dots, (1 - \alpha_N) g_N], c - c'\big)$.
By Lemma \ref{lem:Min} iii), $\Delta= \Delta \oplus Z' \ominus Z'$ as $\Delta$, $Z'$ are convex, compact and nonempty. Also note that $\Delta \oplus Z' = Z$, hence $\Delta = Z\ominus Z'$.
\end{proof}
}
We summarize this part by the following proposition.
\begin{prop}\label{prop:MinkMinusOverUnder}
Let $Z$ be a zonotope and let $\overline{\mathfrak{Z}}(Z, EW)$, $\underline{\mathfrak{Z}}(Z, EW)$ be defined by solving \eqref{eq:minout}, \eqref{eq:maxin} respectively, then $Z \ominus \overline{\mathfrak{Z}}(Z, EW) \subseteq Z \ominus EW \subseteq Z \ominus \underline{\mathfrak{Z}}(Z, EW)$. Particularly, $Z \ominus \overline{\mathfrak{Z}}(Z, EW)$ and $Z \ominus \underline{\mathfrak{Z}}(Z, EW)$ can be computed efficiently with generator-wise subtraction.
\end{prop}
\begin{proof}
It follows from Proposition \ref{prop:Zminout}-\ref{prop:ZMinkowskiminus} and the fact that both $\overline{\mathfrak{Z}}(Z, EW)$ and $\underline{\mathfrak{Z}}(Z, EW)$ are aligned with $Z$.
\end{proof}
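In code, the generator-wise subtraction of Proposition \ref{prop:ZMinkowskiminus} is one line; a sketch continuing the convention \texttt{Z = (G, c)}:
\begin{verbatim}
import numpy as np

def mink_diff_aligned(Z, alpha, c_sub):
    """Z minus the aligned zonotope ([alpha_i g_i], c_sub)
    via generator-wise subtraction."""
    G, c = Z
    return G * (1.0 - alpha), c - c_sub
\end{verbatim}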
\subsection{Approximation of Backward Reachable Sets}
We can compute a zonotopic over/under-approximation of the BRS $X_k$ recursively as follows:
\begin{align}
\underline{Z}_0 & = \overline{Z}_0 = Z_0, \label{eq:cpreZapprx1}\\
\underline{Z}_{k+1} & = A^{-1} \big(\underline{Z}_k \ominus \overline{\mathfrak{Z}}(\underline{Z}_k, EW)\oplus - BU - K\big), \label{eq:cpreZapprx2} \\
\overline{Z}_{k+1} & = A^{-1} \big(\overline{Z}_k \ominus \underline{\mathfrak{Z}}(\overline{Z}_k, EW)\oplus - BU - K\big). \label{eq:cpreZapprx3}
\end{align}
\begin{prop}
\normalfont Let $X_k$ be defined by Eq. \eqref{eq:cpre}, and $\underline{Z}_k, \overline{Z}_k$ be defined by Eq. \eqref{eq:cpreZapprx1}-\eqref{eq:cpreZapprx3}, we have $\underline{Z}_k \subseteq X_k \subseteq \overline{Z}_k$.
\end{prop}
\begin{proof}
We prove this by induction.
When $k = 0$, $\underline{Z}_0 = \overline{Z}_0 = Z_0 = X_0$ by \eqref{eq:cpreZapprx1}.
Suppose that $\underline{Z}_k \subseteq X_k \subseteq \overline{Z}_k$, we have
\begin{align}
\hspace{-7mm}\underline{Z}_k \ominus \overline{\mathfrak{Z}}(\underline{Z}_k, EW) & \subseteq \underline{Z}_k \ominus EW & (\text{Proposition }\ref{prop:MinkMinusOverUnder}) \nonumber \\
& \subseteq X_k \ominus EW. & (\underline{Z}_k \subseteq X_k) \label{eq:prop5}
\end{align}
Combining Eq. \eqref{eq:prop5}, \eqref{eq:cpreZapprx1} and Eq. \eqref{eq:cpreM} yields $\underline{Z}_{k+1} \subseteq X_{k+1}$. Similarly, one can show $X_{k+1} \subseteq \overline{Z}_{k+1}$.
\end{proof}
Eq. \eqref{eq:cpreZapprx2}, \eqref{eq:cpreZapprx3} only involve Minkowski addition, linear transformation of zonotopes and Minkowski difference where the subtrahend zonotope is aligned with the minuend zonotope. The above three operations can be done efficiently with G-rep manipulations.
The time for computing $\underline{Z}_k$ grows modestly with $k$ because the number of $\underline{Z}_k$'s generators, denoted by $N_k$, increases linearly with $k$. In fact, $N_{k+1} = N_k + N_U$ where $N_U$ is the (constant) number of generators of the zonotopic control input set $U$.
In what follows, we introduce an order reduction technique to upper bound the time complexity of computing $\underline{Z}_k$.
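Putting the pieces together, the recursion \eqref{eq:cpreZapprx1}-\eqref{eq:cpreZapprx2} becomes a short loop. A sketch reusing \texttt{min\_out} and \texttt{mink\_diff\_aligned} from above; the zonotope \texttt{mBU} representing $-BU$ and the uniform weights $b_i = 1$ are simplifying assumptions:
\begin{verbatim}
import numpy as np

def brs_under(Z0, A, mBU, EV, K, k):
    """Under-approximations Z_1, ..., Z_k of the BRSs;
    mBU = (G_U, c_U) represents -BU, EV has columns E w_j."""
    Ainv = np.linalg.inv(A)
    Zs, Z = [], Z0
    for _ in range(k):
        b = np.ones(Z[0].shape[1])            # uniform weights
        alpha, cs = min_out(Z[0], EV, b)      # outer approx. of EW
        G, c = mink_diff_aligned(Z, alpha, cs)
        Z = (Ainv @ np.hstack([G, mBU[0]]), Ainv @ (c + mBU[1] - K))
        Zs.append(Z)
    return Zs
\end{verbatim}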
\subsubsection{Zonotope Order Reduction}\label{sec:red}
The order of an $n$-dimensional zonotope with $N$ generators is defined to be $N/n$.
Zonotope order reduction problem concerns approximating a given zonotope with another one with lower order.
Most of the existing techniques focus on finding outer approximations because zonotopes are typically used to overestimate forward reachable sets. Whereas in this paper, we find inner approximations using the following fact.
\begin{prop}\label{prop:reduction}
Let $Z = \big(G= [g_1, g_2, \dots, g_N], c\big)$ be a zonotope. Define $G_1$ to be the matrix after removing two arbitrary columns $g_i$, $g_j$ from $G$ and appending $g_i + g_j$, and define $G_2$ to be the matrix after removing columns $g_i$, $g_j$ from $G$ and appending $g_i -g_j$. Then $Z_1 = (G_1, c) \subseteq Z$ and $Z_2 = (G_2, c) \subseteq Z$.
\end{prop}
\revise{
\begin{proof}
Let $l(g_k) := \{\theta g_k \mid \theta \in [-1,1]\}$, then $Z = c + \bigoplus_{k=1}^N l(g_k)$, $Z_1 = c+ \bigoplus_{k\neq i,j}^N l(g_k) \oplus l(g_i + g_j)$. Since $l(g_i + g_j) = \{\theta g_i + \theta g_j \mid \theta \in [-1,1]\} \subseteq \{\theta_1 g_i + \theta_2 g_j \mid \theta_1, \theta_2 \in [-1,1]\}= l(g_i) \oplus l(g_j)$, $Z_1 \subseteq Z$. Similarly, $Z_2 \subseteq Z$.
\end{proof}
}
Note that, in Proposition \ref{prop:reduction}, the number of generators of $Z_1$ (or $Z_2$) is fewer than that of $Z$ by one. Our zonotope order reduction procedure will keep replacing two generators $g_i, g_j$ by their combination (either $g_i + g_j$ or $g_i - g_j$) until the order of the resulting zonotope is small enough. Particularly, we use the following heuristic to select $g_i, g_j$:
\begin{align}
(i,j) = \text{arg\,}\text{min}_{1\leq i < j\leq N} \Vert g_i\Vert_2 \Vert g_j - \hat{g}_i g_j^\top \hat{g}_i \Vert_2,
\end{align}
where $\hat{g}_i = \tfrac{g_i}{\Vert g_i \Vert_2}$.
Then we will add $g_i + g_j$ if $\Vert (g_i + g_j)^\top G'\Vert_2 \geq \Vert (g_i - g_j)^\top G'\Vert_2$, and add $g_i - g_j$ otherwise, where $G'$ is the transpose of the right inverse of the generator matrix after removing columns $g_i$, $g_j$. The idea is to combine two generators that are either closely aligned or small in 2-norm, and the combined generator should be larger and more perpendicular to the remaining generators.
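A sketch of one reduction step implementing this heuristic; using the Moore--Penrose pseudoinverse for the right inverse is our assumption, valid when the remaining generator matrix has full row rank:
\begin{verbatim}
import numpy as np

def reduce_once(Z):
    """Combine one pair of generators (Proposition 6 guarantees
    the result is enclosed by Z)."""
    G, c = Z
    n, N = G.shape
    best, pair = np.inf, None
    for i in range(N):
        gi = G[:, i]; gih = gi / np.linalg.norm(gi)
        for j in range(i + 1, N):
            gj = G[:, j]
            s = np.linalg.norm(gi) * np.linalg.norm(gj - gih * (gj @ gih))
            if s < best:
                best, pair = s, (i, j)
    i, j = pair
    rest = np.delete(G, [i, j], axis=1)
    Gp = np.linalg.pinv(rest).T       # transpose of a right inverse
    p, m = G[:, i] + G[:, j], G[:, i] - G[:, j]
    g = p if np.linalg.norm(p @ Gp) >= np.linalg.norm(m @ Gp) else m
    return np.hstack([rest, g[:, None]]), c
\end{verbatim}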
\subsubsection{Deriving Reachability Control Law using $\underline{Z}_k$}\label{sec:law}
Once zonotopic inner approximations $\underline{Z}_k$ of the BRSs are computed, checking if a state $x$ belongs to $\underline{Z}_k$ amounts to solving a linear program.
Moreover, for any state $x \in \underline{Z}_{k+1}$, we can find a control input $u \in U(x, \underline{Z}_k)$ that brings $x$ to $\underline{Z}_{k}$ in one step, where $U(x, \underline{Z}_k)$ is defined to be
\begin{align}
& \{u \in U \mid \forall w \in W: Ax + Bu + Ew + K \in \underline{Z}_{k}\} \nonumber \\
= & \textbf{Proj}_{u}\left\{(u, \theta)\, \Bigg\vert\, \begin{array}{l} Ax + Bu + K = \\
c^{(k)} + \textstyle\sum_{i=1}^{N_{k}} \theta_i g_i^{(k)}, \\ u \in U, \theta_i \in [-1,1]\end{array}\right\}, \label{eq:getu}
\end{align}
where $\big([g_1^{(k)}, g_2^{(k)}, \dots, g_{N_k}^{(k)}], c^{(k)}\big)$ is the G-rep of $\underline{Z}_k \ominus \overline{\mathfrak{Z}}(\underline{Z}_k, EW)$, which can be saved during the computation (see Eq. \eqref{eq:cpreZapprx2}). We do not need to explicitly perform the projection step in Eq. \eqref{eq:getu} as it is sufficient to find one $u \in U(x, \underline{Z}_k)$ by solving a linear program.
For any initial state $x_0 \in \underline{Z}_k$, iteratively applying $u_t \in U(x_{t},\underline{Z}_{k-t-1})$ yields a feedback control strategy, which generates a sequence $u_0, u_1, \dots u_{k-1}$ and drives the initial state $x_0$ into the target set $\underline{Z}_0 = Z_0$ in precisely $k$-steps, regardless of the disturbance inputs.
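As a minimal sketch of this one-step controller (ours; it assumes $U$ is a box $[u_{\rm lo}, u_{\rm hi}]$ and the G-rep $(G_k, c_k)$ of $\underline{Z}_k \ominus \overline{\mathfrak{Z}}(\underline{Z}_k, EW)$ has been saved), the feasibility linear program of Eq. \eqref{eq:getu} can be solved with \texttt{scipy}:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def one_step_control(x, A, B, K, Gk, ck, u_lo, u_hi):
    """Find u in U(x, Z_k) via the feasibility LP above.

    Solves A x + B u + K = ck + Gk @ theta with theta in [-1, 1]^Nk and
    u in the box [u_lo, u_hi]; any feasible point works, so the cost is zero.
    Returns u, or None if x cannot reach the inner approximation in one step.
    """
    nu, Nk = B.shape[1], Gk.shape[1]
    A_eq = np.hstack([B, -Gk])            # B u - Gk theta = ck - A x - K
    b_eq = ck - A @ x - K
    bounds = list(zip(u_lo, u_hi)) + [(-1.0, 1.0)] * Nk
    res = linprog(np.zeros(nu + Nk), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:nu] if res.success else None
\end{verbatim}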
\section{Evaluation \& Discussion}
\subsection{Comparisons}
We compare our approach for under-approximating $Z \ominus EW$ with two other methods: one by Althoff \cite{althoff2015computing} and one based on the work by Sadraddini and Tedrake \cite{sadraddini2019linear}.
Whenever the disturbance set $W$ is a zonotope in its G-rep, $Z \ominus EW$ can be estimated by \cite{althoff2015computing}, but the result is not guaranteed to be an under-approximation.
This approach is faster than the exact computation but is still expensive due to an H-rep manipulation.
\lcss{Alternatively, using the linear encoding of zonotope-containment problems \cite{sadraddini2019linear}, we derive the linear program below that under-approximates $Z\ominus EW$:}
\begin{align}
\begin{array}{rl}
\max_{\Gamma, \gamma, \alpha, c} & \textstyle\sum_{i=1}^N \alpha_i \\
\text{s.t.}& [G_Z\text{diag}(\alpha), EG_W] = G_Z \Gamma \\
& c_Z - (c + E c_W) = G_Z \gamma\\
& \Vert [\Gamma, \gamma] \Vert_\infty \leq 1, \ 0 \leq \alpha \leq 1
\label{eq:Sadra}
\end{array},
\end{align}
where $(G_W, c_W)$ and $(G_Z, c_Z)$ are the G-reps of $W$ and $Z$ respectively.
Similar to our approach, the solution of \eqref{eq:Sadra} also gives a zonotopic under-approximation $(G_Z\text{diag}(\alpha), c )$ of $Z \ominus EW$ that aligns with the template $Z$.
The linear program \eqref{eq:Sadra} scales differently from \eqref{eq:minout}, which dominates the time of computing BRSs.
Let $N_W$ and $N$ be the number of generators of $W$ and $Z$ respectively. For \eqref{eq:Sadra},
\begin{align}
\begin{array}{rl}
\# \text{variables} & \hspace{-2mm}= \mathcal{O}\big(N(N+N_W) + n_x\big), \\
\# \text{constraints} & \hspace{-2mm}= \mathcal{O}(N + n_x).
\end{array}
\label{eq:bigO2}
\end{align}
\hl{The size of \eqref{eq:Sadra} is independent of the number of $W$'s vertices and grows with $N_W$, the number of generators of $W$. Thus \eqref{eq:Sadra} is more advantageous than \eqref{eq:minout} whenever $W$ is a high dimensional zonotope with a small order. } \revise{On the other hand, the number of variables in \eqref{eq:minout} is linear in $N$, whereas that in \eqref{eq:Sadra} is quadratic in $N$. }
We randomly generate about 2000 test cases; each case consists of a zonotope $Z \subseteq \mathbb{R}^{n_x}$, a hyper-rectangle $W \subseteq\mathbb{R}^{n_x}$ and a square matrix $E\in \mathbb{R}^{{n_x}\times {n_x}}$. The Minkowski difference $Z \ominus EW$ is estimated using the three different methods.
Fig. \ref{fig:time} shows the computation time w.r.t. the dimension and the order of zonotope $Z$. Each dot represents the time for a specific case, and the surface is plotted with averaged values.
All the experiments are run on a 1.80 GHz laptop with 16 GB RAM.
The computation time of Althoff's approach grows fast w.r.t. the order and the dimension of $Z$ (in fact, we could not finish running any one of the higher-order cases after hours).
\hl{Our approach scales better with the order of $Z$, but still grows relatively fast with the dimension $n_x$ because the number of $W$'s vertices grows exponentially with $n_x$ since we choose $W$ to be hyper-rectangles in this example.
Somewhat surprisingly, the computation time of Sadraddini's approach grows very slowly w.r.t. the order and the dimension of $Z$.
This is consistent with the big-O analysis: in the largest test case, $n_x = 10$ and $N = 100$,
but $W$ has about $10^3$ vertices $(M = 1000)$. Hence \eqref{eq:minout} has approximately ten times more variables than \eqref{eq:Sadra}. }
\begin{figure}[h]
\centering
\includegraphics[width=2.9in]{figs/plottime31.jpg}\\
\caption{Upper: computation time for estimating $Z \ominus EW$. Lower: volume ratio distribution.}\label{fig:time}
\end{figure}
\lcss{Another metric is the size of the obtained estimation. The volumes of the obtained zonotopic estimations are comparable. Define $r_1 = \left(\tfrac{V_{\rm Althoff}}{V_{\rm min-out}}\right)^{1/n_x}$ and $r_2 = \left(\tfrac{V_{\rm Sadraddini}}{V_{\rm min-out}}\right)^{1/n_x}$; the statistics of $r_1$, $r_2$ are given in the table below.
\begin{table}[h]
\vspace{-2mm}
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
& mean & std. & min & max & confidence of $[0.95, 1.05]$\\ \hline
$r_1$ & $1.0017$ & $0.0577$ & $0.9900$ & $1.3856$ & $98.83\%$ \\ \hline
$r_2$ & $0.9678$ & $0.1891$ & $0.8372$ & $1.7498$ & $95.10\%$ \\ \hline
\end{tabular}
\end{table}
}
\subsection{Order Reduction}
We evaluate our order reduction technique with 29000 randomly generated zonotopes with different dimensions and orders.
The approach introduced in Section \ref{sec:red} is used to reduce the order of each testing zonotope by one.
As shown in Fig. \ref{fig:redtime} (upper), the computation time grows modestly with the zonotope's dimension and order.
\begin{figure}[h]
\centering
\includegraphics[width=2.9in]{figs/plottimeredlog.jpg}\\
\caption{Upper: averaged computation time for reducing a zonotope's order by one. Lower: volume ratio between the reduced-order zonotope and the one before reduction.}\label{fig:redtime}
\end{figure}
The quality of the reduced-order zonotope is measured by the ratio between its volume and that of the original zonotope before reduction, shown in Fig. \ref{fig:redtime} (lower). We only run this evaluation for lower-dimensional cases because computing the exact volume of a zonotope is hard in high dimensions due to the combinatorial complexity \cite{gover2010determinants}.
In Fig. \ref{fig:redtime} (lower), the volume ratio increases with the original zonotope's order because higher order means more freedom in selecting the generators to combine.
In the presented cases, the ratio is close to one if the original zonotope's order is greater than three.
\section{Case Studies}
\begin{figure*}[h]
\centering
\includegraphics[width=6.4in]{figs/lattime.jpg}\\
\caption{Backward reachable set computation for lateral dynamics. Left: computation time. Right: set volume.}\label{fig:lat}
\includegraphics[width=6.4in]{figs/longtime.jpg}\\
\caption{Backward reachable set computation for longitudinal dynamics. Left: computation time. Right: set volume.}\label{fig:long}
\end{figure*}
\subsection{Aircraft Position Control}
With an aircraft position control system, we illustrate the overall BRS computation approach that combines Minkowski difference and order reduction to implement the iterations in Eq. \eqref{eq:cpreZapprx1}-\eqref{eq:cpreZapprx3}.
The linearized 6D lateral dynamics and the 6D longitudinal dynamics of the aircraft are in the form of Eq. \eqref{eq:sys}, whose $A$, $B$ matrices are given in Eq. \eqref{eq:ABlong}.
For both systems, $E_{\rm lat} = E_{\rm long} = I$.
The states of the lateral and longitudinal dynamics are $x_{\rm lat} = [v, p, r, \phi, \psi, y]^\top$ and $x_{\rm long} = [u,w,q,\theta, x, h]^\top$ respectively, and control inputs are $u_{\rm lat} = [\delta_a, \delta_r]^\top$ and $u_{\rm long} = [\delta_e, \delta_t]^\top$ respectively (see TABLE \ref{tab:plane} and Fig. \ref{fig:ap}).
We assume that the disturbance sets are hyper-boxes and their G-rep are $W_{\rm lat} = (\text{diag}([0.037, 0.00166, 0.0078, 0.00124, 0.00107,$ $0.07229]), 0)$ and $W_{\rm long} = (\text{diag}([0.3025,0.4025,0.01213, $ $0.006750,1.373,1.331]), 0)$.
\begin{align}
A_{\rm lat} = &
\begin{psmallmatrix}
1.004 & 0.1408 & 0.3095 & -0.3112 & 0 & 0 \\
0.03015 & 1.177 & 0.6016 & -0.6029 & 0 & 0 \\
-0.02448 & -0.1877& 0.3803 & 0.5642 & 0 & 0\\
-0.01057 & -0.09588 & -0.3343 & 1.277 & 0 & 0\\
0.0003943 & 0.0095901 & -0.005341 & -0.007447 & 1 & 0\\
-0.2579 & -23.32 & -51.03 & 61.35 & -37.86 & 1\\
\end{psmallmatrix},
\nonumber \\
A_{\rm long} = &
\begin{psmallmatrix}
0.9911 & -0.04858 & -0.01709 & -0.4883 & 0 & 0 \\
0.0005870 & 0.9968 & 0.5168 & -0.0001398 & 0 & 0\\
0.0002070 & -0.001123 & 0.9936 & -5.092\times 10^{-5} & 0 & 0\\
1.907 & -1.032 & 0.01832 & 1 & 0 & 0 \\
-0.04601 & 0.001125 & 0.0002638 & 0.01130 & 1 & 0\\
-5.095\times 10^{-5} & -0.1874 & -0.01185 & 4.004 & 0 & 1\\
\end{psmallmatrix},
\nonumber \\
B_{\rm lat} = &
\begin{psmallmatrix}
-0.1189 & 0.007812\\
-0.1217 & 0.2643\\
0.01773& -0.2219\\
-0.02882&-0.09982\\
-0.0005607&0.002437\\
0.1120&-0.5785\\
\end{psmallmatrix},
\nonumber \\
B_{\rm long} = &
\begin{psmallmatrix}
1.504 & 7.349\times 10^{-5}\\
-0.04645 & -3.421 \times 10^{-6}\\
-0.009812&-1.488\times 10^{-6} \\
-9.080 \times 10^{-5}& -1.371\times 10^{-8}\\
-0.03479 & -1.700\times 10^{-6}\\
0.004171& 2.913\times 10^{-7}\\
\end{psmallmatrix}.
\label{eq:ABlong}
\end{align}
\begin{figure}[h]
\centering
\includegraphics[width=2.75in]{figs/ap.png}\\
\caption{Illustration of the states and control inputs.}\label{fig:ap}
\end{figure}
\begin{table}[]
\centering
\caption{Variables in the aircraft model}
\begin{tabular}{c | c | c | c }
\hline
variable & physical meaning & range & unit \\ \hline
\hline
$v$ & velocity & $[-1, 1] $ & m/s\\
$p$ & roll angular rate &$ [-1, 1] $& rad/s\\
$r$ & yaw angular rate &$[-1, 1]$ & rad/s\\
$\phi$ & roll angle & $[-\pi/5, \pi/5] $& rad\\
$\psi$ & yaw angle & $[-\pi/5, \pi/5]$& rad\\
$y$ & lateral deviation &$ [-2, 2]$ & m\\
\hline
$u$ & velocity & $[40,60]$ & m/s\\
$w$ & velocity &$[0,10]$ & m/s\\
$q$ & pitch angular rate &$[-0.1,0.1]$ & rad/s\\
$\theta$ & pitch angle & $[-\pi,\pi]$ & rad\\
$x$ & horizontal displacement & $[0,800]$ & m\\
$h$ & altitude &$ [260 ,390]$ & m\\
\hline
$\delta_a$ & aileron deflection &$ [-\pi, \pi]$ & rad\\
$\delta_r$ & rudder deflection &$ [-\pi, \pi]$ & rad\\
\hline
$\delta_e$ & elevator deflection &$[-0.262,0.524]$ & rad\\
$\delta_t$ &throttle control &$[0,10^4]$ & m\\ \hline
\end{tabular}
\label{tab:plane}
\end{table}
For both the lateral and longitudinal dynamics, we can efficiently compute their $k$-step BRSs using the proposed approach for reasonably large horizons $k$, whereas the computation gets stuck at $k=3$ using the exact Minkowski difference provided by MPT3 \cite{MPT3}, or the approximation function implemented in CORA.
Fig. \ref{fig:lat}, \ref{fig:long} show the results for the lateral dynamics and the longitudinal dynamics, respectively.
In each figure, the left (right, resp.) plot shows the cpu time for computing $\underline{Z}_k$ (the size of $\underline{Z}_k$, resp.) versus $k$, the number of backward expansion steps.
The red curves are for our approach and the blue ones for the approach using Sadraddini's zonotope containment encoding.
\subsubsection{Cpu time plots} The solid (dotted, resp.) lines correspond to the computation time with (without, resp.) zonotope order reduction.
Using the order reduction technique (activated at $k = 39$), our approach and Sadraddini's approach give comparable results.
Without order reduction, the computation time of Sadraddini's approach (dotted blue) grows faster w.r.t. $k$ than ours (dotted red).
This is consistent with the big-O analysis because in our approach, the time-dominant Minkowski difference step amounts to solving a linear program whose number of variables is linear in $k$ (proportional to $\underline{Z}_k$'s order), whereas
the number of variables is quadratic in $k$ in Sadraddini's formulation.
Although our approach scales well even without order reduction,
order reduction is still important in efficiently storing the zonotopic BRSs and
deriving the control law.
\subsubsection{Volume plots} The solid lines correspond to the results with order reduction ``in the loop" (i.e., in the $k^{\rm th}$ step, $\underline{Z}_k$ is reduced to a certain order before $\underline{Z}_{k+1}$ is computed).
Whereas the dotted lines correspond to the results with order reduction after all $\underline{Z}_k$'s are computed (Ideally, we would like to compute $\underline{Z}_k$'s volume without order reduction at all, but this is impossible with the off-the-shelf volume computation tools in CORA because the complexity is combinatorial in $\underline{Z}_k$'s order).
The two approaches give comparable results with order reduction.
Moreover, since the dotted red line and the solid red line are close to each other, this indicates that the
``wrapping effect" due to the order reduction in-the-loop is relatively small.
\liren{
\subsection{Double Integrator with Uncontrollable Subspace}
With a 10D system, we show the effectiveness of the reachability controller derived from the zonotopic BRSs as described in Section \ref{sec:law}.
The system consists of a double-integrator dynamics in the 3D space and a 4D uncontrollable subspace
(the uncontrollable part
affects the controllable part).
The continuous-time dynamics is
\begin{align}
\dot{x}_1 & = x_2 + x_7 + x_{10} + w_1, \ \ \dot{x}_2 = u_1 + w_2, \nonumber \\
\dot{x}_3 & = x_4 - x_8 + w_3, \ \ \dot{x}_4 = u_2 + w_4, \nonumber \\
\dot{x}_5 & = x_6 + x_9 + w_5, \ \ \dot{x}_6 = u_3 + w_6, \\
\dot{x}_7 & = -0.01x_7 + x_8 + w_7, \ \ \dot{x}_8 = - x_8 - 0.01x_7 + w_8, \nonumber \\
\dot{x}_9 & = -10^{-4}x_7 +2 x_{10} + w_9, \nonumber \\
\dot{x}_{10} & = - 2 x_9 - 10^{-4}x_{10} + w_{10}. \nonumber
\end{align}
We discretize the above dynamics with a sampling period $\Delta t = 0.5$s, and define the disturbance set $W$ so that
$w_{\{1,3,5\}} \in [-0.12, 0.12]$, $w_{\{2,4,6\}} \in [-0.2, 0.2]$, $w_{\{7,8,9,10\}} \in [-0.1, 0.1]$,
and the control set $U = [-0.5, 0.5]^3$.
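The discretization can be done exactly under a zero-order hold; a short sketch (ours, using \texttt{scipy}) based on the standard augmented-matrix-exponential trick:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def c2d(A, B, dt):
    """Exact zero-order-hold discretization: x(t+dt) = Ad @ x(t) + Bd @ u(t)."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    Md = expm(M * dt)                 # exp([[A, B], [0, 0]] * dt)
    return Md[:n, :n], Md[:n, n:]     # Ad = e^{A dt}, Bd = (int_0^dt e^{As} ds) B
\end{verbatim}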
Starting from a randomly picked initial condition in $\underline{Z}_{50}$, our goal is to reach a final state for which $x_i \in [9.5, 10.5]$ for $i \in \{1,3,5\}$ and $x_i \in [-0.5, 0.5]$ for the remaining $i$'s.
We defined a controller as described in Section \ref{sec:law}.
Fig. \ref{fig:int} shows a closed-loop trajectory under random disturbances.
The small target set is reached despite the oscillating uncontrollable dynamics.
\begin{figure}[h]
\centering
\includegraphics[width=2.85in]{figs/2int_10D.png}\\
\caption{A closed-loop trajectory for the double-integrator dynamics. The red box is the target set. }\label{fig:int}
\end{figure}
\section{Conclusion}
In this paper, we develop an approach that under-approximates the backward reachable sets for uncertain linear systems using zonotopes.}
The main technical ingredients are i) under-approximating the Minkowski difference between two zonotopes and ii) an order reduction technique tailored to enclosed zonotopes.
These developments were evaluated with randomly generated instances and two case studies.
Experiments show that our method is more scalable than the off-the-shelf tools (MPT3, CORA) and scales differently from the approach based on Sadraddini's zonotope-inclusion technique.
In our method, the dominant Minkowski subtraction step requires solving a linear program whose size is linear in the zonotope's order, while that dependency is quadratic in Sadraddini's approach.
We will investigate extending our approach to nonlinear systems in the future.
\noindent{\em Acknowledgments:} The authors would like to thank Yuhao Zhang from the University of Wisconsin-Madison and Sara Shoouri and Jiahong Xu from the University of Michigan for sharing the aircraft model.
\balance
\bibliographystyle{abbrv}
\section{Introduction}\label{s1}
Multi-label classification (MLC) tasks are ubiquitous in data science and have a wide range of applications, such as multi-object detection for images\cite{2017Faster}, multi-topic discovery for texts\cite{2020Federated}, multi-function classification for genes\cite{2021AC}, and so on\cite{Christel2022Multilabel}.
The framework of MLC can be expressed as follows:
\begin{itemize}
\item it includes a sample space $\mathcal{X}$ and a label space $\mathcal{Y}$, where the label space is a combination of a base category set $\Lambda = \{\lambda_l\}_{l = 1}^L$;
\item it includes a dataset $D_{train} = (X, Y)$ for training with both the samples $X = \{x_n| x_n \in \mathcal{X}\}_{n = 1}^N$ and the labels $Y = \{y_n| y_n \in \mathcal{Y}\}_{n = 1}^N$;
\item it includes another dataset $D_{test} = (X', )$ to predict with only the samples $X' = \{x_m'| x_m' \in \mathcal{X}\}_{m = 1}^M$;
\item the goal is to train a model $f_{\theta}: \mathcal{X} \rightarrow \mathcal{Y}$ on $D_{train}$ and use it to predict appropriate labels $Y' = \{\hat{y}_m'| \hat{y}_m' = f_{\theta}(x_m'), x_m' \in X'\}_{m = 1}^M$ for $D_{test}$.
\end{itemize}
This is not really different from "normal" classification tasks, which is usually called the multi-class classification (MCC)\footnote{
The binary classification is regarded as a special case of MCC ($L = 2$).
}.
However, the label space of an MCC task has strong mutual exclusion, so each label $y^{mcc}$ contains exactly one base category, i.e., $|y^{mcc}| \equiv 1$.
On the contrary, MLC tasks allow a label to contain several category elements at once, or none at all.
The label space of an MCC task is just its own base label set, i.e., $\mathcal{Y}^{mcc} = \Lambda^{mcc}$, while the label space of an MLC task can be the power set of its base category set, i.e., $\mathcal{Y}^{mlc} = 2^{\Lambda^{mlc}}$.
It can be seen that single-label MCC is a special case of the MLC framework.
There are several challenges faced by research in MLC.
In addition to the category imbalance problem\cite{2017AnB} commonly found in classification, MLC tasks face three further issues due to the nature of their label space.
First of all, the label of a data sample in MLC is actually a set of basic category elements, which, analogous to the one-hot vector in MCC, can be represented by a "multi-hot" vector.
It would be simpler if the number of 1s in the multi-hot vector were fixed, but for most MLC tasks the size of the label set is indeterminate.
For example, in the same MLC task, the labels of some data samples may contain multiple category elements, some may have only one, and some may not even have one.
This adds difficulty to the modeling of the task.
Second, the plurality of data labels may imply correlation information between categories.
For example, in an image object detection task, cats and dogs may often appear together, whereas cats and sharks do not.
This is in stark contrast to MCC, where mutual exclusivity between categories is known a priori; in MLC tasks, correlations between categories must be modeled or learned from data.
Modeling and learning between category correlations has always been one of the core research problems of the MLC community\cite{2020Fast}.
The last thing to point out is the sparsity of labels in MLC, which is actually a further result of the previous two problems.
Since the number of categories of samples is uncertain, the size of the label space in MLC is potentially exponential to the size of the base category set, i.e., $|\mathcal{Y}^{mlc}| \sim 2^{|\Lambda^{mlc}|}$.
However, due to unknown correlations between categories, some labels in the exponential label space may never appear, while others appear frequently.
This leads to the sparsity of the label space and the imbalance between labels\cite{2014Towards}, which, to a certain extent, replaces the problem of category imbalance.
In the traditional machine learning community, various methods have been proposed for the above issues in MLC\cite{bogatinovski2022comprehensive}.
These methods are classified into two categories: algorithm adaption (AD) methods and problem transformation (PT) methods.
AD methods primarily focus on expanding the MCC algorithms to address MLC tasks, such as MLKNN\cite{2021Multi}, MMP\cite{4634206} and Rank-SVM\cite{2008Calibrated}.
PT methods transform MLC tasks into one or more single-label classification tasks\cite{2016DeepBE} or label ranking tasks\cite{2015Deep}.
In terms of flexibility, AD methods seem to be more rigid because each adaptive algorithm can only work on a specific model, while PT methods allow various models to be used for handling the task after transformation.
With the widespread success of deep learning, more and more MLC tasks use end-to-end neural networks as base models, a consequence of the structure of neural networks being naturally suited to MLC tasks\cite{2014Deep}.
When applying deep learning to deal with MLC tasks, in addition to using the structure advantages of the neural network itself, it is often combined with PT methods to enhance the processing power of the model.
Such a combination is often reflected in the design of the loss function, because it is the most task-relevant part in the deep learning architecture.
However, the loss functions proposed in previous studies either only meet partial requirements\cite{2017Focal} or go through complex multi-stage processing\cite{2017Improving}, lacking flexibility and effectiveness to deal with the above three problems in MLC.
To fill such a research gap and better support the application of deep learning in MLC tasks, we propose the ZLPR (Zero-bounded Log-sum-exp \& Pairwise Rank-based) loss in this paper:
\begin{equation}\label{eq1}
\mathcal{L}_{zlpr} = \log\left(1+\sum_{i \in \Omega_{pos}}e^{-s_i}\right) + \log\left(1+\sum_{j \in \Omega_{neg}}e^{s_j}\right),
\end{equation}
where $\Omega_{pos}$ is the label (set) and $\Omega_{neg} = \Lambda/\Omega_{pos}$, and $s_i$ is the model output score of the $i$-th category ($\lambda_i$).
During prediction, $s_i > 0$ implies that $\lambda_i$ could be a target category and $s_i < 0$ implies it is not, which is the meaning of "Zero-bounded".
More details about ZLPR and how it handles the above three common challenges in MLC tasks will be explained in Sec.\ref{s3}.
Before that, we will review some loss functions commonly used in deep learning for MLC tasks in Sec.\ref{s2}.
In Sec.\ref{s4}, we will compare the effects of these losses on multiple datasets with multiple evaluation metrics.
Sec.\ref{s5} is the conclusion of this paper.
Additionally, we propose the soft version and the corresponding calculation method of KL-divergence of ZLPR in the appendix.
With these two, some regularization tricks such as label smoothing\cite{2021Delving} or R-drop\cite{2021arXiv210614448L} can be applied to enhance models' generalization ability.
\section{Related Work}\label{s2}
As mentioned earlier, the purpose of this paper is to promote the application of deep learning in MLC tasks.
The traditional MLC processing methods that can be combined with neural networks are mainly problem transformation (PT) methods.
Therefore, this section mainly reviews work combining PT methods with neural networks.
There are four modes of PT methods, namely the Binary Relevance (BR), the Label Powerset (LP), the Classifier Chain (CC) and the Label Ranking (LR)\cite{2012article}.
The BR mode converts an MLC task into multiple independent binary classification problems, where each binary classifier determines whether the corresponding category is included in the label set or not.
When combining the BR mode with a neural network, only a binary classification loss needs to be applied to each output node.
The BR mode can handle cases where the number of target categories is uncertain with linear complexity, and due to its simplicity, it is the mode most frequently used in MLC tasks with deep learning.
Works based on the BR mode mainly propose binary loss functions with various characteristics.
The baseline is the (weighted) binary cross entropy loss.
Beyond that, Lin et al.\cite{2017Focal} proposed the focal loss making the model focus on hard-to-learn samples during training, which has been applied in many fields and received lots of favorable comments.
Milletari et al.\cite{2016V} proposed the dice loss, trying to optimize the F1-measure rather than the accuracy.
Based on that, Li et al.\cite{li2019dice} proposed the self-adjusting dice loss, which, however, did not receive as much credit as the original paper implied.
Menon et al.\cite{menon2020long} proposed a novel logit adjustment method to mitigate the impact of data imbalance by introducing the prior of categories.
The simplicity and ease of use of the BR mode has attracted much attention, but it assumes conditional independence between categories and can not capture the correlation within the label set.
The LP mode directly takes the subsets of the base category set in MLC tasks as the prediction output, which means there should be a total of $2^{|\Lambda|}$ output nodes of the neural network.
In this way, an MLC task is transformed into an MCC task and any MCC loss function can be used.
Although the LP mode can adaptively determine the number of target categories and fully capture the correlation within the label set, the potential exponential space complexity is unacceptable, and it also faces serious sparsity and label set imbalance problems.
Therefore, this mode is not used as much as the others.
The CC mode transforms an MLC task into a sequence prediction problem with a pre-defined category order.
When predicting whether a category belongs to the target label set or not, the CC mode uses the previously obtained category prediction results as input, which forms a chain architecture to capture the correlation between categories.
The CC mode is not implemented by modifying the loss function, but by leveraging the model architecture, which can apply the RNN\cite{2016CNN} or the Seq2Seq\cite{2021History} modules in deep learning.
It should be noted that the outputs of the CC mode are in series while others are parallel, so the CC mode is not as time efficient as the BR mode and LP mode.
The LR mode is different from the above three modes in that it no longer treats MLC tasks as classification but as ranking.
The basic idea of the LR mode is requiring that the rank scores of target categories are greater than non-target categories.
Based on the framework of learning to rank\cite{2020Learning} via pairwise comparisons, some rank-based loss functions have been proposed to deal with MLC tasks in the LR mode.
Weston et al.\cite{1970Weighted} proposed the WARP loss that puts different weights on violations where the score of target category is less than non-target category.
The intuition is that if the positive category is ranked lower, the violation should be penalized higher.
Zhang et al.\cite{2008Improved} proposed the BP-MLL loss, which is essentially an exponential pairwise ranking loss.
Li et al.\cite{2017Improving} proposed the LSEP loss based on the pairwise ranking loss and proved it has favorable theoretical guarantees compared to the hinge alternative.
Sun et al.\cite{2020Circle} proposed a unified loss function for pairwise optimization, which is a generalization of the LSEP loss.
The LR mode has parallel outputs as the BR mode, square space complexity during training and linear during prediction.
Better than the BR mode, the LR mode does not introduce additional independence assumptions, thus preserving the category correlations implied by the original data during training.
Models trained by the loss functions mentioned above in the LR mode cannot directly output the target label set when making predictions.
To deal with that, two empirical methods are often used.
The first is to manually set a threshold, and categories with scores greater than this threshold will be considered as the target label set.
The second is to directly output categories with top-k scores as the target label set.
These two methods either require manual intervention or cannot handle the uncertainty in the number of target categories.
Li et al.\cite{2017Improving} proposed a two-stage approach to deal with the above situation, first using the LSEP loss to optimize the class scores, and then adding new modules to estimate the number of target categories or the threshold.
This two-stage approach increases the model complexity and the training instability.
The ZLPR loss proposed in this paper combines the advantages of the LR mode and the BR mode, achieves a high level of time and space efficiency, retains the information of the category correlation implied in the original data, and can adapt to the situation of uncertain number of categories.
\section{The ZLPR Loss}\label{s3}
The commonly used loss function for single-label MCC tasks is the cross entropy loss.
If logits are used to represent it, the formula is as follows:
\begin{equation}\label{eq2}
\mathcal{L}_{ce} = -\log\frac{e^{s_i}}{\sum_{j = 1}^Le^{s_j}} = \log\left(1+\sum_{j = 1, j \neq i}^Le^{s_j - s_i}\right),
\end{equation}
where the target category is $\lambda_i$ and $s_j$ is the logit for the $j$-th category.
Since the log-sum-exp operator is a smooth approximation of the maximum operator, i.e.,
\begin{equation}\label{eq3}
\log\left(1+\sum_{j = 1, j \neq i}^Le^{s_j-s_i}\right) \approx \max\left(0, \underbrace{s_j-s_i, \cdots}_{\forall j \neq i}\right),
\end{equation}
minimizing the cross entropy loss actually means making the logits of negative categories smaller than that of the positive one.
We extend this point to MLC tasks and get:
\begin{equation}\label{eq4}
\log\left(1+\sum_{j \in \Omega_{neg}}e^{s_j}\sum_{i \in \Omega_{pos}}e^{-s_i}\right) \approx \max\left(0, \underbrace{s_j - s_i, \cdots}_{i \in \Omega_{pos}, j \in \Omega_{neg}}\right),
\end{equation}
where the formula on the left is actually the LSEP loss proposed by Li et al.\cite{2017Improving} and we derived it from a different perspective.
As mentioned in Sec.\ref{s2}, the LSEP loss, like other general pairwise rank-based losses, lacks the ability to adapt to a variable number of target categories.
To handle such a situation, we introduce a threshold $s_0$ in the loss, requiring the logits of positive categories to be greater than it and those of negative categories to be less.
In this way, we get the threshold-bounded log-sum-exp \& pairwise rank-based (TLPR) loss as follows:
\begin{equation}\label{eq5}
\mathcal{L}_{tlpr} = \log\left(1+\sum_{j\in\Omega_{neg}}e^{s_j}\sum_{i\in\Omega_{pos}}e^{-s_i}+\sum_{j\in\Omega_{neg}}e^{s_j-s_0}+\sum_{i\in\Omega_{pos}}e^{s_0-s_i}\right),
\end{equation}
which can also be simplified as:
\begin{equation}\label{eq6}
\mathcal{L}_{tlpr} = \log\left(e^{-s_0}+\sum_{i \in \Omega_{pos}}e^{-s_i}\right) + \log\left(e^{s_0}+\sum_{j \in \Omega_{neg}}e^{s_j}\right).
\end{equation}
For the sake of simplicity, we set $s_0 = 0$ here and get the ZLPR loss expressed in Equation \ref{eq1}.
The form of ZLPR seems to calculate the positive and negative categories separately, which does not show the characteristics of "pairwise ranking".
However, through the log-sum-exp and maximum approximation, we have:
\begin{equation}\label{eq7}
\mathcal{L}_{zlpr} \approx \max\left(0, \underbrace{-s_i, \cdots}_{i \in \Omega_{pos}}\right) + \max\left(0, \underbrace{s_j, \cdots}_{j \in \Omega_{neg}}\right)
= \max\left(0, \underbrace{s_j, \cdots}_{j \in \Omega_{neg}}\right) - \min\left(0, \underbrace{s_i, \cdots}_{i \in \Omega_{pos}}\right),
\end{equation}
from which we can still get the inspiration of intuitive "pairwise ranking".
The ZLPR loss can be rewritten as:
\begin{equation}\label{eq8}
\mathcal{L}_{zlpr} = \log\left(1+\langle y, e^{-s}\rangle\right) + \log\left(1+\langle 1-y, e^{s}\rangle\right),
\end{equation}
where $y$ is the multi-hot label, $s$ is the logit vector and $\langle\cdot, \cdot\rangle$ is the inner product operator.
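For reference, here is a numerically stable sketch of Eq. \eqref{eq1}/\eqref{eq8} (our implementation, assuming only \texttt{numpy}); the prepended zero column inside each log-sum-exp realizes the "$1+$" term, i.e., the threshold $s_0 = 0$:
\begin{verbatim}
import numpy as np

def _logsumexp(a):
    m = a.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=1, keepdims=True))).ravel()

def zlpr_loss(logits, labels):
    """ZLPR loss per sample. logits: (B, L) scores s; labels: (B, L) multi-hot y."""
    y = labels.astype(bool)
    neg_inf = np.full_like(logits, -np.inf)
    pos = np.where(y, -logits, neg_inf)    # -s_i over positive categories only
    neg = np.where(~y, logits, neg_inf)    #  s_j over negative categories only
    zero = np.zeros((logits.shape[0], 1))
    return _logsumexp(np.hstack([zero, pos])) + _logsumexp(np.hstack([zero, neg]))
\end{verbatim}
At prediction time the target set is read off directly as $\{\lambda_i : s_i > 0\}$, with no threshold tuning.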
\subsection{Label Dependence}\label{s3-1}
The goal of classification algorithms in general is to capture dependencies between input features $x$ and the target label $y$, i.e., to find the conditional probability $\mathcal{P}(y|x)$.
In MLC, dependencies may not only exist between the features $x$ and the target categories $y$, but also between the categories themselves.
The idea to improve predictive accuracy by capturing such dependencies is a driving force in research on MLC\cite{2020AF}.
In this subsection, we theoretically demonstrate that the ZLPR loss has the ability to capture label dependencies from original data.
The key to this problem is determining whether it is necessary to rely on information from the joint distribution when minimizing the corresponding empirical risk of the loss function, and if the answer is yes, it means that this loss can capture label dependencies\cite{2012OnL}.
The empirical risk of ZLPR is:
\begin{equation}\label{eq9}
\mathcal{R}_{zlpr} = \mathbb{E}_{y \sim \mathcal{P}(y|x)}\left[\log\left(1+\langle y, e^{-s}\rangle\right) + \log\left(1+\langle 1-y, e^{s}\rangle\right)\right],
\end{equation}
and the gradient is:
\begin{equation}\label{eq10}
\frac{\partial\mathcal{R}_{zlpr}}{\partial s} = \mathbb{E}_{y \sim \mathcal{P}(y|x)}\left[-\frac{y\odot e^{-s}}{1+\langle y, e^{-s}\rangle} + \frac{(1-y)\odot e^s}{1+\langle 1-y, e^s\rangle}\right],
\end{equation}
where $\odot$ is the Hadamard product operator.
Setting the gradient to be zero, $\forall t \in \{1, \cdots, L\}$ we have:
\begin{equation}\label{eq11}
s_t = \frac{1}{2}\underbrace{\log\frac{\mathcal{P}(y^{(t)} = 1|x)}{\mathcal{P}(y^{(t)} = 0|x)}}_{T_1} + \frac{1}{2}\underbrace{\log\frac{\mathbb{E}_{\tilde{y}^{(t)}\sim\mathcal{P}(\tilde{y}^{(t)}|y^{(t)} = 1, x)}\left[\phi_1(s, \tilde{y}^{(t)})\right]}{\mathbb{E}_{\tilde{y}^{(t)}\sim\mathcal{P}(\tilde{y}^{(t)}|y^{(t)} = 0, x)}\left[\phi_0(s, \tilde{y}^{(t)})\right]}}_{T_2},
\end{equation}
where $y^{(t)} \in \{0, 1\}$ is the binary state variable of the category $\lambda_t$, $\tilde{y}^{(t)} = \{y^{(1)}, \cdots, y^{(t-1)}, y^{(t+1)}, \cdots, y^{(L)}\}$ is the state variable set of other categories;
$\phi_1(s, \tilde{y}^{(t)}) = [1+e^{-s_t}+\sum_{r\neq t}y^{(r)}e^{-s_r}]^{-1}$ and $\phi_0(s, \tilde{y}^{(t)}) = [1+e^{s_t}+\sum_{r\neq t}(1-y^{(r)})e^{s_r}]^{-1}$.
On the right side of the equation, $T_1$ is the margin logit of the corresponding category, and $T_2$ is the dependence coupling term.
Due to the complexity of $T_2$, we are unlikely to find a solution of $s$ such that $T_2$ does not depend on $\mathcal{P}(\tilde{y}^{(t)}|y^{(t)}, x)$, which means that $s$ will depend on the joint distribution with minimal empirical loss.
This demonstrates the ability of ZLPR to capture label dependencies.
\subsection{Comparison to Related Loss Functions}
We compare two other loss functions with ZLPR, namely the LSEP loss and the BCE loss.
\textbf{LSEP.}
The LSEP loss is expressed as:
\begin{equation}\label{eq12}
\mathcal{L}_{lsep} = \log\left(1+\sum_{j \in \Omega_{neg}}e^{s_j}\sum_{i \in \Omega_{pos}}e^{-s_i}\right).
\end{equation}
Compared to ZLPR/TLPR, it lacks the two threshold-comparison summations inside the $\log$ operator, which makes it lose the ability to adaptively adjust the number of predicted categories.
Additionally, to make the LSEP loss scale linearly to the category size, Li et al.\cite{2017Improving} adapt the negative sampling technique\cite{2013Distributed} and sample at most $t$ pairs from the Cartesian product, which can be denoted as:
\begin{equation}\label{eq13}
\mathcal{L}_{lsep}^{sampled} = \log\left(1+\sum_{(i, j) \in \phi(y, t)}\exp(s_j-s_i)\right),
\end{equation}
where $\phi(y, t) \subseteq y\otimes(\Lambda/y)$ is the sampled pair set.
However, our goal is to generalize the application of deep learning in MLC; compared to the size of currently popular large models\cite{2021GPT}, the computational cost of the rank-based loss is negligible.
Therefore, we will not use the sampling method, which can also guarantee the accuracy of the results.
Finally, it should be noted that the LSEP loss can also capture label dependencies implied in the original data.
\textbf{BCE.}
The binary cross entropy (BCE) loss can be applied in MLC tasks by combining with the BR mode, which forms:
\begin{equation}\label{eq14}
\mathcal{L}_{bce} = \sum_{i \in \Omega_{pos}}\log(1+e^{-s_i}) + \sum_{j \in \Omega_{neg}}\log(1+e^{s_j}).
\end{equation}
In fact, for the BR mode, the loss is the sum of binary losses for each category, i.e.,
\begin{equation}\label{eq15}
\mathcal{L}_{BR} = \sum_{l = 1}^L\mathcal{L}_{b}(y^{(l)}, s_l),
\end{equation}
and the corresponding empirical risk is
\begin{equation}\label{eq16}
\mathcal{R}_{BR} = \mathbb{E}_{y \sim \mathcal{P}(y|x)}\left[\mathcal{L}_{BR}\right] = \sum_{l = 1}^L\left[\mathcal{P}(y^{(l)} = 1|x)\mathcal{L}_{b}(1, s_l)+\mathcal{P}(y^{(l)} = 0|x)\mathcal{L}_{b}(0, s_l)\right],
\end{equation}
from which we can see that the categories are not related to each other.
This is why it is said that the BR mode introduces conditional independence and fails to capture label dependencies.
Specifically for the BCE loss, the condition of minimizing the empirical risk is
\begin{equation}\label{eq17}
\forall l = 1, \cdots, L: s_l = \log\frac{\mathcal{P}(y^{(l)} = 1|x)}{\mathcal{P}(y^{(l)} = 0|x)}.
\end{equation}
In addition, the BCE loss can be rewritten as:
\begin{align}\label{eq18}
\mathcal{L}_{bce} &= \log\left[\prod_{i \in \Omega_{pos}}(1+e^{-s_i})\right] + \log\left[\prod_{j \in \Omega_{neg}}(1+e^{s_j})\right] \\ \nonumber
&= \log\left[1+\sum_{i \in \Omega_{pos}}e^{-s_i}+\underbrace{\sum_{k, t \in \Omega_{pos}, k < t}e^{-(s_k+s_t)}+\cdots+e^{-\sum_{r \in \Omega_{pos}}s_r}}_{\text{positive higher-order terms}}\right] \\ \nonumber
&~ + \log\left[1+\sum_{j \in \Omega_{neg}}e^{s_j}+\underbrace{\sum_{k, t \in \Omega_{neg}, k < t}e^{s_k+s_t}+\cdots+e^{\sum_{r \in \Omega_{neg}}s_r}}_{\text{negative higher-order terms}}\right],
\end{align}
which, compared with the ZLPR loss in Equation \ref{eq1}, contains additional higher-order terms.
In MLC tasks, negative categories tend to far outnumber positive ones, and these higher-order terms in BCE significantly aggravate this category imbalance problem.
Conversely, the ZLPR loss, which removes these higher-order terms, balances the influence of positive and negative categories, making training fairer across them.
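A quick numerical check of the implication of Equation \ref{eq18} (a sketch, ours): since every higher-order term is nonnegative, BCE always upper-bounds ZLPR on the same logits.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
s = rng.normal(size=8)                          # logits
y = rng.integers(0, 2, size=8).astype(bool)     # multi-hot label
bce  = np.log1p(np.exp(-s[y])).sum() + np.log1p(np.exp(s[~y])).sum()
zlpr = np.log1p(np.exp(-s[y]).sum()) + np.log1p(np.exp(s[~y]).sum())
assert bce >= zlpr  # the higher-order terms of Eq. (18) are all nonnegative
\end{verbatim}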
\section{Experiments and Discussion}\label{s4}
\subsection{Datasets}\label{s4-1}
We mainly conduct experiments on MLC of texts, including 3 Chinese datasets and 8 English datasets:
\begin{itemize}
\item \textbf{CNIPA\footnote{http://patdata1.cnipa.gov.cn/}:}
The Chinese patent dataset, which has a classification system called IPC (international patent classification).
Each innovation patent is assigned one or more IPC codes, which can be used as the label set.
We downloaded about 210k invention patents, and constructed two Chinese text MLC datasets using titles and abstracts, respectively called \textbf{CNIPA-Title} and \textbf{CNIPA-Abstract}.
\item \textbf{Toutiao\footnote{https://github.com/aceimnorstuvwxz/toutiao-multilevel-text-classfication-dataset}:}
The Chinese news dataset from Toutiao, which asks for MLC of news based on headlines.
\item \textbf{USPTO\footnote{https://bulkdata.uspto.gov/}:}
The English patent dataset, which also uses the IPC system as labels.
We downloaded about 350k invention patents and also constructed two English text MLC datasets with titles and abstracts, respectively called \textbf{USPTO-Title} and \textbf{USPTO-Abstract}.
\item \textbf{Archive\footnote{https://www.kaggle.com/datasets/shivanandmn/multilabel-classification-dataset}:}
This dataset contains 6 different categories (Computer Science, Physics, Mathematics, Statistics, Quantitative Biology, Quantitative Finance) to classify the research papers.
We constructed two datasets based on titles and abstracts, namely \textbf{Archive-Title} and \textbf{Archive-Abstract}.
\item \textbf{CMU-Movie\footnote{https://www.cs.cmu.edu/~ark/personas/}:}
This dataset contains texts about 42,306 movies extracted from Wikipedia and Freebase.
However, it is a noisy dataset because the same movie category may have different descriptions, like "actions" and "action movie".
We constructed two datasets based on names and summaries, namely \textbf{Movie-Name} and \textbf{Movie-Summary}.
\item \textbf{GoEmotions\footnote{https://github.com/google-research/google-research/tree/master/goemotions}:}
It is a corpus of 58k carefully curated comments extracted from Reddit, with human annotations to 27 emotion categories or Neutral.
\item \textbf{Toxic\footnote{https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge/data}:}
This dataset contains a large number of Wikipedia comments which have been labeled by human raters for toxic behavior.
\end{itemize}
We divided the above 11 datasets into training set, validation set and test set according to the ratio of 8:1:1, and the final results are examined on the test set.
See appendix for more information about these datasets.
\subsection{Baselines}\label{s4-2}
We choose the following losses as our comparison objects:
\begin{itemize}
\item \textbf{The binary cross entropy (BCE)}, which is showed in Equation \ref{eq14}.
\item \textbf{The focal loss (FL)}:
\begin{equation}\label{eq19}
\mathcal{L}_{fl} = -\sum_{i \in \Omega_{pos}}(1-p_i)^{\gamma_{fl}}\log{p_i} -\sum_{j \in \Omega_{neg}}p_j^{\gamma_{fl}}\log(1-p_j),
\end{equation}
where $p_l = \text{sigmoid}(s_l)$.
We set $\gamma_{fl} = 2$ in experiments.
\item \textbf{The first version of dice loss (DL1)}:
\begin{equation}\label{eq20}
\mathcal{L}_{dl1} = \sum_{i \in \Omega_{pos}}\left(1-\frac{2p_i + \gamma_{dl1}}{p_i^2 + 1 + \gamma_{dl1}}\right) + \sum_{j \in \Omega_{neg}}\left(1-\frac{\gamma_{dl1}}{p_j^2 + \gamma_{dl1}}\right),
\end{equation}
where $p_l = \text{sigmoid}(s_l)$.
We set $\gamma_{dl1} = 1$ in experiments.
\item \textbf{The second version of dice loss (DL2)}:
\begin{equation}\label{eq21}
\mathcal{L}_{dl2} = \sum_{l = 1}^L\left(1-\frac{2\sum_{k = 1}^Bp_{k, l}y_k^{(l)}+\gamma_{dl2}}{\sum_{k = 1}^Bp_{k, l}^2 + \sum_{k = 1}^By_k^{(l)} + \gamma_{dl2}}\right),
\end{equation}
where $p_{k, l} = \text{sigmoid}(s_{k, l})$ and $k$ indexes samples in a batch of size $B$.
We set $\gamma_{dl2} = 1$ in experiments.
\item \textbf{The ranking loss (RL)}:
\begin{equation}\label{eq22}
\mathcal{L}_{rl} = \sum_{i \in \Omega_{pos}}\sum_{j \in \Omega_{neg}}\max(0, \alpha_{rl}+s_j-s_i).
\end{equation}
We set $\alpha_{rl} = 1$ in experiments.
\item \textbf{The WARP loss}:
\begin{equation}\label{eq23}
\mathcal{L}_{warp} = \sum_{i \in \Omega_{pos}}\sum_{j \in \Omega_{neg}}w(r_i)\max(0, \alpha_{warp}+s_j-s_i),
\end{equation}
where $w$ is a monotonically increasing function and $r_i$ is the ranking order of $i$-th category.
We set $w(r_i) = r_i$ and $\alpha_{warp} = 1$ in experiments.
\item \textbf{The BP-MLL loss}:
\begin{equation}\label{eq24}
\mathcal{L}_{bp-mll} = \sum_{i \in \Omega_{pos}}\sum_{j \in \Omega_{neg}}\exp(s_j - s_i).
\end{equation}
To ensure numerical stability, we take its logarithmic form, i.e., $\log\mathcal{L}_{bp-mll}$ in experiments.
\item \textbf{The LSEP loss}, which is showed in Equation \ref{eq12}.
\end{itemize}
The first four are the results of the corresponding binary loss combined with the BR mode, and the last four are ranking-based losses in LR mode.
\subsection{Metrics}\label{s4-3}
We use a variety of evaluation metrics to measure the effect of each loss:
\begin{itemize}
\item \textbf{The example-based subset accuracy (SubACC)}:
\begin{equation}\label{eq25}
\text{SubACC} = \frac{1}{N}\sum_{n = 1}^NI(\hat{y}_n = y_n),
\end{equation}
where $y_n$ is the $n$-th true label set, $\hat{y}_n$ is the $n$-th predicted label set, and $I$ is the indicator function.
\item \textbf{The example-based F1-measure (MLC-F1)}:
\begin{equation}\label{eq26}
\text{MLC-F1} = \frac{1}{N}\sum_{n = 1}^N\frac{2|\hat{y}_n \cap y_n|}{|\hat{y}_n| + |y_n|}.
\end{equation}
\item \textbf{The label-based Macro-F1}:
\begin{equation}
\text{Macro-F1} = \frac{1}{L}\sum_{l = 1}^L\frac{2TP_l}{2TP_l+FP_l+FN_l},
\end{equation}
where $TP_l$ denotes the number of True Positives of the $l$-th category, $FP_l$ the number of False Positives, and $FN_l$ the number of False Negatives.
\item \textbf{The label-based Micro-F1}:
\begin{equation}
\text{Micro-F1} = \frac{2\sum_{l = 1}^LTP_l}{2\sum_{l = 1}^LTP_l + \sum_{l = 1}^LFP_l + \sum_{l = 1}^LFN_l}.
\end{equation}
\item \textbf{The ranking-based average precision (AvgPrec)}:
\begin{equation}
\text{AvgPrec} = \frac{1}{N}\sum_{n = 1}^N\frac{1}{|y_n|}\sum_{\lambda \in y_n}\frac{|\{\lambda' \in y_n: r_n(\lambda') \leq r_n(\lambda)\}|}{r_n(\lambda)}.
\end{equation}
where $r_n(\lambda)$ is the predicted ranking order of category $\lambda$.
\item \textbf{The ranking-based ranking loss (RankLoss)}:
\begin{equation}
\text{RankLoss} = \frac{1}{N}\sum_{n = 1}^N\frac{1}{|y_n||\Lambda/y_n|}\left|\{(\lambda_a, \lambda_b): r_n(\lambda_a)>r_n(\lambda_b), (\lambda_a, \lambda_b) \in y_n\otimes(\Lambda/y_n)\}\right|.
\end{equation}
\end{itemize}
In the above metrics, the smaller the RankLoss value the better, while for all the other metrics larger values are better.
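For completeness, a short sketch (ours, over $0/1$ multi-hot arrays with \texttt{numpy}) of the two example-based metrics; scoring the $0/0$ edge case, where both the true and predicted sets are empty, as a perfect match is our convention:
\begin{verbatim}
import numpy as np

def subset_accuracy(Y_true, Y_pred):
    """Exact match of whole label sets (SubACC above); inputs of shape (N, L)."""
    return np.mean(np.all(Y_true == Y_pred, axis=1))

def mlc_f1(Y_true, Y_pred):
    """Example-based F1 (MLC-F1 above); inputs of shape (N, L)."""
    inter = (Y_true * Y_pred).sum(axis=1)
    denom = Y_true.sum(axis=1) + Y_pred.sum(axis=1)
    return np.mean(np.where(denom > 0, 2 * inter / np.maximum(denom, 1), 1.0))
\end{verbatim}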
\subsection{Results}
We use Bert-base\cite{DBLP:journals/corr/abs-1810-04805} and Roberta-base\cite{DBLP:journals/corr/abs-1907-11692} as backbones combined with each loss function, and a total of 22 (11 datasets paired with 2 backbones) sets of comparative experiments were carried out.
We evaluate the performance of BCE, FL, DL1, DL2, and ZLPR (ours), which can adapt to the changing number of target categories, on all metrics.
The results are shown in Table \ref{tab1}.
\begin{table}[htbp]
\centering
\caption{
The performance of the 5 loss functions under all metrics.
}\label{tab1}
\begin{tabular}{ccccccc}
\toprule
~ & SubACC & MLC-F1 & Micro-F1 & Macro-F1 & AvgPrec & RankLoss \\
\midrule
BCE & 0 & 2 & 2 & 2 & 4 & 2 \\
FL & 0 & 1 & 2 & 6 & 5 & 3 \\
DL1 & 2 & 0 & 0 & 0 & 0 & 0 \\
DL2 & 4 & \textbf{10} & \textbf{13} & 6 & 0 & 0 \\
ZLPR (ours) & \textbf{16} & 9 & 5 & \textbf{8} & \textbf{13} & \textbf{18} \\
\bottomrule
\end{tabular}
\end{table}
The numbers in the table count how many times the corresponding loss was optimal across the 22 comparative experiments.
For example, the ZLPR loss performs very well under the SubACC metric, achieving the best in 16 out of the 22 comparative experiments.
The SubACC metric is very strict on the results and is generally used to measure the model's ability to capture label dependencies\cite{2012OnL}.
The good performance of ZLPR on SubACC demonstrates its ability to capture label dependencies, as described in subsection \ref{s3-1}.
ZLPR also achieves good performance on the two ranking-based metrics, which we attribute to the fact that ZLPR explicitly requires the model to rank labels.
On the three metrics of MLC-F1, Micro-F1 and Macro-F1, DL2 is a strong competitor of ZLPR, and even outperforms ZLPR on Micro-F1 by a significant gap.
This may be because DL2 is an approximation of the (negative) F1-measure and optimized using batch data instead of one sample.
Even so, ZLPR still outperforms BCE, FL and DL1 on those F1-measures.
We also test the four ranking-based losses (RL, WARP, BP-MLL and LSEP) on the three Chinese datasets; only the RankLoss value of LSEP is slightly better than ZLPR's on CNIPA-Title, and ZLPR is the best on all remaining datasets and metrics.
All experiments are implemented based on Bert4Keras\footnote{https://github.com/bojone/bert4keras}.
Adam\cite{2014Adam} with learning rate $2\times 10^{-4}$ is used as the optimizer and models are trained for 20 epochs.
Details of all experiment results can be found in the appendix section.
\section{Conclusion}\label{s5}
We propose the ZLPR loss to generalize the application of deep learning in multi-label classification and conduct extensive textual experiments.
Compared to some binary losses, the ZLPR loss is able to capture better label dependencies and the ranking relation between positive and negative categories.
Compared with the previous ranking-based losses, ZLPR can adaptively determine the number of target categories, and also strengthen the label ranking ability of models.
However, the performance of ZLPR on F1-measures is not as strong as on the other metrics, which we will improve in the future.
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec/introduction}
Synchronization of coupled dynamical units is a prevalent phenomenon in nature \cite{Arenas2008,Acebron2005kuramoto} and
many mathematical models have been used in its study.
Among them, coupled Kuramoto oscillators is one of the most popular models \cite{Kuramoto1987,Rodrigues2016}.
Since 1991, second-order Kuramoto oscillators where frequency adaptations (inertias) are added to the Kuramoto model have been proposed to describe the dynamics of three tropical Asian firefly species \cite{Ermentrout1991}.
Several applications of this model have been found, for Josephson junction arrays \cite{Levi1978,Watanabe1994,Trees2005}, goods markets \cite{Ikeda2012}, dendritic neurons \cite{Sakyte2011}, and power grids \cite{Filatrella2008,Rohden2012,Rohden2014,Lozano2012,Witthaut2012,Menck2013,Hellmann2016,Kim2015,Gambuzza2017,Dorfler2013c,Grzybowski2016,Maizi2016,Manik2016a,Pinto2016,Rohden2017,Witthaut2016}.
In this paper we consider a model of coupled second-order oscillators, where the dynamics is given by
\begin{equation}\label{eq_dynamics_original}
m \ddot{\theta}_i + D\dot{\theta}_i
= \Omega_i
+ \frac{K}{N}\sum_{j=1}^{N}\sin(\theta_j-\theta_i),
\quad i=1,\dots,N.
\end{equation}
Here $m$ is the inertia and $D$ the damping coefficient for all oscillators, $N$ is the number of oscillators and $K$ is the coupling strength.
The natural frequencies $\Omega_i$ are randomly chosen from a distribution $g(\Omega)$.
The state of the $i$-th oscillator is described by its phase $\theta_i \in \mathbb{S} = \mathbb{R}/2\pi\mathbb{Z}$.
The collective state of the oscillators is described by the \emph{order parameter}
\[ r e^{i\phi} = \frac{1}{N} \sum_{j=1}^{N}e^{i\theta_j},\]
where $r$ measures the phase coherence, and $\phi$ represents a collective phase.
If all the oscillators move in a single tight cluster we have $r \approxeq 1$.
On the contrary, if the oscillators move incoherently, scattered around the circle, we have $r \approxeq 0$.
When $m=0$, the second-order oscillators become Kuramoto oscillators.
Ku\-ra\-mo\-to oscillators with a symmetric unimodal distribution $g(\Omega)$ have a continuous synchronization transition with the increase of $K$ from $r \approxeq 0$ to $r\approxeq 1$ \cite{Kuramoto1987}.
In the presence of inertias, the dynamics of oscillators becomes much more complicated.
In particular, with the increase of $m$, several new features manifest in second-order oscillators, such as hysteresis \cite{Olmi2014}, change of the type of phase transitions \cite{Tanaka1997,Tanaka1997a,Acebron2000synchronization,Barre2016}, and finally oscillatory states with periodic oscillations of the order parameter \cite{Olmi2014}.
Such oscillatory states are not only found in systems with unimodal distributions of $\Omega$ \cite{Tanaka1997a}, but also with bimodal distributions \cite{Olmi2016} and in complex networks \cite{Olmi2014}.
With the help of the self-consistent method, the dynamics of hysteresis and discontinuous transitions have been recently analyzed in \cite{Gao2018}.
For oscillatory states, Olmi \textit{et al} \cite{Olmi2014} have related the oscillation of the order parameter to the appearance of secondary synchronized clusters using numerical simulations.
However, the dynamics of this oscillatory state and the appearance of additional synchronized clusters is still not well understood.
\section{Additional synchronized clusters}
\label{sec/numerics}
To provide a more refined description of collective states (compared to the global description provided by the order parameter) we use the mean frequency $\langle\dot{\theta}_j\rangle$ of each oscillator.
Two oscillators are \emph{synchronized} (or \emph{frequency locked}) if they have the same mean frequency.
A group of oscillators with the same value of mean frequency forms a \emph{synchronized cluster}.
States without any synchronized cluster are steady states with $r = 0$, called \emph{incoherent states}.
States with only one synchronized cluster are \emph{(partial) synchronization states}.
Their order parameters have a constant modulus, $r(t) = r$ and a uniformly rotating phase $\phi=\Omega^r t+\Psi$.
When all the oscillators have the same phase, we have the \emph{complete synchronization state} with $r=1$. If there is more than one synchronized cluster, the order parameter may have a time-dependent modulus, as in \emph{standing waves} and \emph{oscillating states}.
To explore oscillatory states, we have numerically calculated the dynamics of a network with $N=10000$ oscillators, following Eq.~\eqref{eq_dynamics_original}.
The integration was done using the fourth order Runge-Kutta method with fixed-size time-step $dt = 10^{-3}$.
The natural frequency $\Omega_i$ for each oscillator is chosen randomly from a distribution that is either Gaussian or a double Gaussian.
To describe the upper and lower branches of hysteresis loops, we consider forward and backward processes.
In the forward process the initial states of the oscillators are randomly chosen as $\theta(0) \in [0,2\pi]$, $\dot{\theta}(0) \in [0,1]$ (incoherent state) and then the coupling strength is gradually increased with step $dK = 0.01$.
At each step, the initial states of all the oscillators are the final states in the previous step.
After a transient period $t_0 = 100$, we calculate the order parameter $r$ and mean frequencies $\avg{\dot{\theta}_i}$ over a measurement period $\Delta t=10$ and then move to the next step increasing $K$ by $dK$.
In the backward process, the initial states of the oscillators are randomly chosen as $\theta(0)\in [0,0.02\pi]$, $\dot{\theta}(0)\in[0,1]$ (synchronized state) and the previously described procedure is followed with the value of $K$ decreasing at each step by $dK = -0.01$.
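For reproducibility, here is a minimal sketch (ours, assuming \texttt{numpy}) of one RK4 step for Eq.~\eqref{eq_dynamics_original}; it uses the mean-field identity $\frac{K}{N}\sum_j \sin(\theta_j-\theta_i) = K r \sin(\phi - \theta_i)$, so each step costs $\mathcal{O}(N)$:
\begin{verbatim}
import numpy as np

def rhs(state, Omega, m, D, K):
    """First-order form of Eq. (1): state = (theta, v) with v = dtheta/dt."""
    theta, v = state
    z = np.exp(1j * theta).mean()              # complex order parameter r e^{i phi}
    coupling = K * np.abs(z) * np.sin(np.angle(z) - theta)
    return np.stack([v, (Omega - D * v + coupling) / m])

def rk4_step(state, dt, *args):
    k1 = rhs(state, *args)
    k2 = rhs(state + 0.5 * dt * k1, *args)
    k3 = rhs(state + 0.5 * dt * k2, *args)
    k4 = rhs(state + dt * k3, *args)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
\end{verbatim}
The forward and backward sweeps then repeatedly advance this step, discard the transient $t_0$, and average $r = |z|$ and the frequencies $\dot{\theta}_i$ over the measurement window before changing $K$.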
\begin{figure}[tbp]
\centering
\includegraphics[width=0.48\columnwidth]{uni-sigma10_m2r}
\includegraphics[width=0.48\columnwidth]{uni-sigma10_m5r}
\includegraphics[width=0.48\columnwidth]{uni-sigma10_m2theta}
\includegraphics[width=0.48\columnwidth]{uni-sigma10_m5theta}
\includegraphics[width=0.48\columnwidth]{uni-sigma10_m2k6}
\includegraphics[width=0.48\columnwidth]{uni-sigma10_m5k6}
\caption{Backward and forward processes for $N=10000$ oscillators with (left column) $m=2$ and (right column) $m=5$ for oscillators with Gaussian distributed natural frequencies, Eq.~\eqref{eq_unimodal}.
The black circles (gray diamonds) in panels (a,b) are the numerical results in the forward (backward) process showing the evolution of $r$ with increasing (decreasing) $K$.
The error bars show the minimum and maximum values of $r$ for each $K$; oscillatory states correspond to large error bars.
The evolution of the mean frequency $\avg{\dot\theta}$ for the forward process (increasing $K$) is shown in panels (c,d).
The dependence of $\avg{\dot\theta}$ on $\Omega$ is shown in panels (e,f) for two typical states with different inertias and the same coupling strength $K=6$.
}
\label{fig_unimodal}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.48\columnwidth]{bi-sigma05_m2r}
\includegraphics[width=0.48\columnwidth]{bi-sigma05_m10r}
\includegraphics[width=0.48\columnwidth]{bi-sigma05_m2theta}
\includegraphics[width=0.48\columnwidth]{bi-sigma05_m10theta}
\includegraphics[width=0.48\columnwidth]{bi-sigma05_m2k5}
\includegraphics[width=0.48\columnwidth]{bi-sigma05_m10k5}
\caption{Backward and forward processes for $N=1000$ oscillators with (left column) $m=2$ and (right column) $m=10$ for oscillators with bimodal natural frequency distribution $g_3(\Omega)$, Eq.~\eqref{eq_bimodal_narrow}.
In panels (e,f) two typical states are shown with the same coupling strength $K=5$.
Other details are the same as in Fig.~\ref{fig_unimodal}.}
\label{fig_bimodal_narrow}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.48\columnwidth]{bi-sigma07_m2r}
\includegraphics[width=0.48\columnwidth]{bi-sigma07_m10r}
\includegraphics[width=0.48\columnwidth]{bi-sigma07_m2theta}
\includegraphics[width=0.48\columnwidth]{bi-sigma07_m10theta}
\includegraphics[width=0.48\columnwidth]{bi-sigma07_m2k9}
\includegraphics[width=0.48\columnwidth]{bi-sigma07_m10k9}
\caption{Backward and forward processes for $N=1000$ oscillators with (left column) $m=2$ and (right column) $m=10$ for oscillators with bimodal natural frequency distribution $g_2(\Omega)$, Eq.~\eqref{eq_bimodal_wide}.
In panels (e,f) two typical states are shown with the same coupling strength $K=9$.
Other details are the same as in Fig.~\ref{fig_unimodal}.}
\label{fig_bimodal_wide}
\end{figure}
In these calculations, we consider two effects:
the effect of the inertia $m$ and the effect of the natural frequency distribution $g(\Omega)$.
In the first calculation, oscillators with the Gaussian distribution
\begin{equation}\label{eq_unimodal}
g_1(\Omega) = G(\Omega;0,1),
\end{equation}
where
\[
G(\Omega;\mu,\sigma) = \frac{1}{\sqrt{2\pi \sigma^2}}
\exp\left(-\frac{(\Omega-\mu)^2}{2\sigma^2}\right),
\]
are numerically explored with $m=2$ and $m=5$.
When $m=2$, the transition is discontinuous with a clear hysteresis, see Fig.~\ref{fig_unimodal}(a).
All the states are steady states, coinciding with the analytical calculation in \cite{Gao2018}.
For the larger $m=5$, depicted in Fig.~\ref{fig_unimodal}(b), the oscillators do not always reach a steady state and we observe the appearance of oscillatory states comprising more than one synchronized cluster, as shown in Fig.~\ref{fig_unimodal}(d,f).
Note that the system still supports a steady state as predicted in \cite{Gao2018}, but the oscillatory state is observed numerically instead.
We thus conjecture that the steady state becomes unstable leading to the appearance of the oscillatory state.
For both $m=2$ and $m=5$ there is a large synchronized cluster with average frequency $0$, see Fig.~\ref{fig_unimodal}(e,f).
However, for $m=5$ we observe the appearance in Fig.~\ref{fig_unimodal}(f) of two additional synchronized clusters with non-zero average frequency at both sides of the main cluster.
These two clusters lead to the oscillations of the modulus of the order parameter, shown in the inset in Fig.~\ref{fig_unimodal}(f).
In previous studies, this state has been called \emph{secondary synchronization} \cite{Tanaka1997a} or \emph{oscillatory state} \cite{Olmi2014,Olmi2016}.
Fig.~\ref{fig_unimodal}(c,d) shows how oscillators abruptly join the main synchronized cluster for $m=2$ when $K$ increases above the transition value, while for $m=5$ they form additional synchronized clusters even after $K$ exceeds the transition value.
In the second calculation we consider oscillators with $m=2$ or $m=10$ and with a bimodal Gaussian distribution of natural frequencies
\begin{equation}
g_3(\Omega) = \tfrac{1}{2} \big[ G(\Omega;1.5,0.5) + G(\Omega;-1.5,0.5) \big],
\label{eq_bimodal_narrow}
\end{equation}
where the two modes have a small overlap, see Fig.~\ref{fig_bimodal_narrow}. All
the oscillators can be divided into two sub-groups with either positive or negative natural frequencies. For $m=2$ we observe the appearance of two sub-populations, whereas the $\avg{\dot\theta} = 0$ synchronized cluster is completely missing, in contrast to the unimodal case where it was the most prominent one. The collective behavior of the system resembles two oscillators rotating in opposite directions, where the order parameter oscillates along a constant or slowly varying direction. This is the \emph{standing wave}, which is also observed for Kuramoto oscillators ($m=0$) with a bimodal natural frequency distribution \cite{Martens2009,Bonilla1998time}. For $m=10$, the oscillators are separated into two subgroups, with either negative or positive natural frequencies. For each of these subgroups, similar to the unimodal case, we observe the appearance in Fig.~\ref{fig_bimodal_narrow}(f) of two additional synchronized clusters at both sides of its main cluster. In addition, the $\avg{\dot\theta} = 0$ synchronized cluster is also observed, in contrast to the case with $m=2$.
Finally, in the third calculation we consider oscillators with $m=2$ or $m=10$ and with a bimodal Gaussian distribution of natural frequencies
\begin{equation}
g_2(\Omega) = \tfrac{1}{2} \big[ G(\Omega;1,0.7) + G(\Omega;-1,0.7) \big],
\label{eq_bimodal_wide}
\end{equation}
where the two modes strongly overlap, see Fig.~\ref{fig_bimodal_wide}.
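The natural frequencies used in the three calculations can be drawn as in the following small sketch (a hypothetical helper of ours; each mode of a bimodal distribution is selected with probability $1/2$).
\begin{verbatim}
import numpy as np

def sample_frequencies(N, kind, seed=0):
    # kind: "g1" (unimodal), "g3" (narrow bimodal, small overlap),
    #       "g2" (wide bimodal, strong overlap)
    rng = np.random.default_rng(seed)
    if kind == "g1":
        return rng.normal(0.0, 1.0, N)
    mu, sigma = (1.5, 0.5) if kind == "g3" else (1.0, 0.7)
    signs = rng.choice([-1.0, 1.0], size=N)
    return rng.normal(signs * mu, sigma)
\end{verbatim}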
For $m=2$, Fig.~\ref{fig_bimodal_wide}(a,c,e), the system behaves very similarly to the unimodal case, which can be expected from the study of Kuramoto oscillators with $m=0$.
For $m=10$, however, we observe states similar to a \emph{standing wave}, Fig.~\ref{fig_bimodal_wide}(b,d), with several clusters besides this main structure. With the effect of inertia, the intrinsic two-subgroup structure is activated, forming a pair of synchronized clusters that rotate like two giant oscillators.
These numerical results indicate that the appearance of additional synchronized clusters is a general phenomenon of second-order oscillators.
In summary, the calculations indicate that increasing the inertia results in the appearance of additional synchronized clusters leading to oscillatory or standing wave states.
In addition, the phenomenon of additional synchronized clusters always appears in the lower branch of hysteresis loops (along the forward process), while the states in the upper branches (along the backward process) are not affected.
\section{Time-periodic mean-field}
\label{sec/time-periodic-mean-field}
To understand the intrinsic synchronized clusters of the system, the first step is naturally to answer the question of how synchronized clusters manifest under a given oscillatory mean-field.
For Kuramoto oscillators this question has only recently been addressed in \cite{Engelbrecht2012}.
For the second-order oscillators we consider here, the question becomes more complicated even though the idea behind our approach is similar to \cite{Engelbrecht2012}.
To analyze the oscillatory states, we first write the dynamics, Eq.~\eqref{eq_dynamics_original}, in mean-field form as
\begin{equation}\label{eq_dynamics_mean}
m \ddot{\theta}_i + D\dot{\theta}_i
= \Omega_i + K r(t) \sin(\Omega^r(t) t + \phi_0 - \theta_i),
\end{equation}
for $i=1,\dots,N$.
In Eq.~\eqref{eq_dynamics_mean} the oscillators interact with the mean-field through the order parameter.
In the particular case of oscillatory states we assume a time-dependent periodic mean-field modulus $r(t) = r_0 (1 + \varepsilon f(t))$ and constant $\Omega^r$.
Here $f(t)$ is a $T$-periodic function with zero average and $\varepsilon \ge 0$ measures the relative size of the time-dependent term.
With this assumption, the dynamics of oscillators can be written in mean-field form in a frame rotating as $\Omega^r t + \phi_0$ as
\begin{equation}\label{eq_dynamics}
m \ddot{\theta} + D \dot{\theta} = \Omega - D\Omega^r
- K r_0 (1+\varepsilon f(t)) \sin\theta.
\end{equation}
Further defining $\omega = \dot\theta$ we obtain on $M = \mathbb S \times \mathbb R$ the system of first-order differential equations
\begin{eqnarray}\label{Dynamics}
\eqalign{\dot\theta &= \omega,} \\
\eqalign{m \dot\omega &= - D \omega + (\Omega - D\Omega^r) - K r_0 (1+\varepsilon f(t)) \sin\theta.}
\end{eqnarray}
For a given initial state $(\theta(0), \omega(0))$, one can define the time-$T$ Poincar\'e map induced by Eq.~\eqref{Dynamics} as
\[
F : M \to M : (\theta(0), \omega(0)) \mapsto (\theta(T), \omega(T)).
\]
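The map $F$ is straightforward to evaluate numerically. The sketch below assumes, purely for illustration, the forcing $f(t)=\sin(2\pi t/T)$; the angle is deliberately not reduced modulo $2\pi$, so that the winding information needed later for the rotation number is preserved (names are ours).
\begin{verbatim}
import numpy as np

def rhs(t, y, pars):
    # y = (theta, omega); pars = (m, D, dW, Kr0, eps, T) with
    # dW = Omega - D*Omega^r, Kr0 = K*r_0; assumed f(t) = sin(2*pi*t/T)
    m, D, dW, Kr0, eps, T = pars
    f = np.sin(2 * np.pi * t / T)
    return np.array([y[1],
                     (-D * y[1] + dW
                      - Kr0 * (1 + eps * f) * np.sin(y[0])) / m])

def poincare_map(y, pars, steps=2000):
    # one application of F: integrate the flow over one period T (RK4)
    T = pars[-1]
    dt = T / steps
    t = 0.0
    for _ in range(steps):
        k1 = rhs(t, y, pars)
        k2 = rhs(t + dt / 2, y + dt / 2 * k1, pars)
        k3 = rhs(t + dt / 2, y + dt / 2 * k2, pars)
        k4 = rhs(t + dt, y + dt * k3, pars)
        y = y + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return y
\end{verbatim}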
\begin{figure}
\centering
\includegraphics[width=0.48\columnwidth]{new3}
\includegraphics[width=0.48\columnwidth]{new4}\\
\includegraphics[width=0.48\columnwidth]{stand3}
\includegraphics[width=0.48\columnwidth]{stand2}\\
\caption{Mean frequency $\langle\dot{\theta}\rangle$ with respect to the natural frequency $\Omega$ for $m=1$ (a) and $m=5$ (b), where $r(t)=0.4+0.1\sin(t)$, and for $m=1$ (c) and $m=5$ (d), where $r(t)=0.6\sin(t)$. The other parameters read $D=1, K=4.5$.
The gray circles are typical states independent of the inertia (steady states for $r(t)=0.4+0.1\sin(t)$ and standing waves for $r(t)=0.6\sin(t)$), and the black dots are the states arising from the bistability of oscillators due to the inertia effect. The bistable regions are colored gray.}
\label{fig_bistable_region}
\end{figure}
The case $\varepsilon = 0$ corresponds to a steady state with $r(t) = r_0$.
In this case, Eq.~\eqref{eq_dynamics_original} has two possible stable states \cite{Levi1978,Strogatz2014,Guckenheimer2013,Gao2018}.
Introducing the parameters $a = D / (K r_0 m)^{1/2}$ and $b = (\Omega-D\Omega^r)/(Kr_0)$, it is known that for $b \ge b_L := 1$ the only stable state is a limit cycle $L$ where the motion has frequency $\Omega_L = \Omega/D - \Omega^r$.
For $b \le b_S(a)$ the only stable state is a fixed point $(\theta_0,0)$.
The bifurcation curve $b_S(a)$ is given by
\begin{equation}
b_S(a) \approxeq
\left\{\eqalign{
1.2732\,a - 0.3056\,a^3, & \quad 0 \le a \le 1.193, \\
1, & \quad a \ge 1.193.
}\right.
\end{equation}
When $b_S(a) \le b \le b_L := 1$ the system is \emph{bistable}---the stable fixed point and stable limit cycle co-exist.
Several properties of second-order oscillators, such as the discontinuous phase transitions to synchronization and the corresponding hysteresis of steady states, are closely related to this bistability.
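The three stability regions can be enumerated directly from the formulas above; a minimal sketch, assuming $b \ge 0$ (names are ours):
\begin{verbatim}
def b_S(a):
    # piecewise fit of the lower bifurcation boundary quoted above
    return 1.2732 * a - 0.3056 * a**3 if a <= 1.193 else 1.0

def classify(m, D, K, r0, Omega, Omega_r):
    a = D / (K * r0 * m) ** 0.5
    b = (Omega - D * Omega_r) / (K * r0)   # assumed non-negative
    if b >= 1.0:
        return "running (limit cycle only)"
    if b <= b_S(a):
        return "locked (fixed point only)"
    return "bistable"
\end{verbatim}
Note that $b_S(1.193) = 1$, so the two branches of the fit join continuously at $a = 1.193$.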
Consider the extended phase space $\widehat M = \mathbb S \times \mathbb R \times \mathbb S_T$ with coordinates $(\theta,\omega,t)$ where $t$ is viewed as a periodic variable in $\mathbb S_T := \mathbb R / T \mathbb Z$.
The stable fixed point $(\theta_0,0)$ becomes in $\widehat M$ a stable limit cycle $(\theta_0,0,t)$, or equivalently a fixed point $(\theta_0,0)$ of the Poincar\'e map $F$.
Similarly, the stable limit cycle $L$ becomes in $\widehat M$ a stable limit torus $L \times \mathbb S_T$ carrying quasi-periodic motions with frequencies $\omega_1 = \Omega_L$ and $\omega_2 = 2\pi/T$ and manifests on the Poincar\'e section as an invariant curve $L_0$ of $F$ carrying a quasi-periodic circle map with rotation number
\begin{equation*}
\rho_0 = \frac{\omega_1}{\omega_2} = \frac{\Omega_L T}{2\pi}.
\end{equation*}
Recall that the rotation number for an orbit of the Poincar\'e map $F$ with initial condition $(\theta_0,\omega_0)$ is
\begin{equation*}
\mathrm{rot}(F) (\theta_0,\omega_0) \mathrel{:=}
\lim_{n \to \infty} \frac{\theta_n - \theta_0}{2 \pi n},
\end{equation*}
where $(\theta_n,\omega_n) = F^n(\theta_0,\omega_0)$.
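In practice $\mathrm{rot}(F)$ is estimated with a finite $n$ after discarding transient iterates; a sketch, reusing the function poincare_map from the earlier sketch:
\begin{verbatim}
import numpy as np

def rotation_number(y0, pars, n=200, discard=100):
    # approximate lim (theta_n - theta_0) / (2*pi*n); the first
    # `discard` iterates let the orbit settle on its attractor
    y = np.asarray(y0, dtype=float)
    for _ in range(discard):
        y = poincare_map(y, pars)
    theta_start = y[0]
    for _ in range(n):
        y = poincare_map(y, pars)
    return (y[0] - theta_start) / (2 * np.pi * n)
\end{verbatim}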
Both the stable limit cycle and the stable limit torus are compact normally hyperbolic invariant manifolds and thus by Fenichel's theory \cite{Fenichel1971,Hirsch1970} we expect that for sufficiently small $\varepsilon > 0$ these structures will persist.
In particular, the fixed point $(\theta_0,0)$ of $F$ persists as the fixed point $(\theta_\varepsilon, \omega_\varepsilon)$ while the invariant curve $L_0$ persists as the invariant curve $L_\varepsilon$.
Extending terminology from the case $\varepsilon = 0$ we will refer to oscillators converging to the fixed point as \emph{locked} and those converging to the invariant curve as \emph{running}, cf.~\cite{Gao2018}.
The restriction of the Poincar\'e map $F$ on the invariant curve $L_\varepsilon$ gives rise to a circle map with a rotation number $\rho_\varepsilon$ independent of the initial condition on the invariant curve \cite{Devaney2003introduction}.
Consider now an ensemble of oscillators characterized by different $\Omega$ while the other parameters determining the dynamics, that is, $m$, $D$, $K$, $r_0$, $\Omega^r$, $\varepsilon$ and the $T$-periodic function $f(t)$ are the same.
Then the value of $\Omega$ determines whether the Poincar\'e map $F$ for Eq.~\eqref{Dynamics} has a fixed point, an invariant curve, or both.
If there is only a (stable) fixed point $(\theta_\varepsilon, \omega_\varepsilon)$ then all orbits will eventually converge to it and their rotation number will be $\mathrm{rot}(F)(\theta_0,\omega_0) = 0$.
Similarly, if there is only a stable invariant curve $L_\varepsilon$ then all orbits will have rotation number $\mathrm{rot}(F)(\theta_0,\omega_0) = \rho_\varepsilon$.
In the bistable case, where both a fixed point and an invariant curve co-exist, we will find some oscillators with rotation number $0$ and some oscillators with rotation number $\rho_\varepsilon$ depending on their initial condition, which determines if they converge to the fixed point or the invariant curve, respectively.
Therefore, a plot of $\mathrm{rot}(F)$ vs $\Omega$ for each oscillator will consist of three regions: one where all oscillators are running and have rotation number $\rho_\varepsilon$ (which however depends on $\Omega$ and exhibits the typical devil's staircase structure), one where all oscillators are locked with rotation number $0$, and the bistable region where some oscillators have rotation number $0$ and some have rotation number $\rho_\varepsilon$.
A larger bistable region yields more and larger plateaus in the graph of $\rho_\varepsilon$ vs $\Omega$ (corresponding to synchronized clusters), as shown in Fig.~\ref{fig_bistable_region}(a) and (b). With a larger bistable region and correspondingly larger plateaus, the time-periodic mean-field can excite a larger oscillation of the order parameter. Viewing the time-periodic mean-field as an oscillating perturbation around a steady state, a sufficiently large excited oscillation of the order parameter implies the instability of that steady state and the formation of an oscillatory state.
The size of the bistable region depends on the value of $a$ which in turn depends, for fixed $K r_0$, on the \emph{reduced mass} $\mu = m/D^2$.
In particular, for small $\mu$ we expect that there is no bistable region, while for sufficiently large $\mu$, there is a bistable region whose size increases with $\mu$.
This observation explains why oscillatory states do not appear for small values of $\mu$.
In addition, in the backward process, all the oscillators in the bistable region have rotation number $0$ and they are not located on the plateaus where the rotation number is $\rho_\varepsilon \ne 0$. Therefore, in this case the appearance of bistable regions due to inertia does not contribute to the appearance of oscillatory states. This is the reason why backward processes are always similar for either large or small values of $\mu$ and do not support oscillatory states.
The inertia effect is not limited to steady states. A special case of the previous analysis is when $r_0 = 0$ or very small and $\varepsilon$ is large. The first condition, $r_0 = 0$ or small, implies that the order parameter is purely oscillating, corresponding to the standing waves. The mean-field equation becomes
\begin{equation*}
m \ddot \theta + D \dot \theta = (\Omega - D \Omega^r) - K \varepsilon f(t) \sin \theta.
\end{equation*}
For $\varepsilon = 0$, the system has a stable limit cycle $L$ with frequency $\Omega_L = \Omega/D - \Omega^r$.
As $\varepsilon$ starts increasing the limit cycle persists as an invariant curve $L_\varepsilon$.
However, for larger values of $\varepsilon$ the dynamics on $L_\varepsilon$ will give rise to fixed points when $\mu$ is large enough, see Fig.~\ref{fig_bistable_region}(c) and (d).
With the increase of inertia, locked oscillators can also appear in the purely oscillating mean-field (standing wave) together with the running oscillators that also exist for small inertias.
Note that this process is the opposite of the process that occurs in the case of oscillatory mean-field, see Fig.~\ref{fig_bistable_region}(a) and (b), where for small inertia we have only locked oscillators and with the increase of inertia we also observe the appearance of running oscillators.
These results, in particular the bistability of oscillators and the devil's staircase structure of $\rho_\varepsilon$, explain why a periodic mean-field leads to the appearance of secondary synchronized clusters besides the cluster of locked oscillators. For the same time-periodic mean-field, the appearance of secondary synchronized clusters depends on the value of $\mu$ and on the direction (backward or forward) of the synchronization process. This analysis links the states of the oscillators with the mean-field of the system and shows the inertia effect in these correlations. However, the mean-field of coupled oscillators is formed from the self-organization of all the oscillators. In the next section, to study the self-organization processes, we will focus on a more detailed model.
\section{Self-organization processes}
\label{sec/three}
As a complement to the analysis based on the mean-field, this section is devoted to the dynamics of a few oscillators and their self-organization toward synchronization. Since all the oscillators are connected with each other, there are no topological differences and the only factor that affects their synchronization process is the natural frequency distribution $g(\Omega)$.
To mimic the effect of $g(\Omega)$, we introduce a weighted model of finitely many oscillators where the dynamics is given for $i = 1,\dots,N$ by
\begin{equation}\label{eq_weighted_model}
m\ddot{\theta}_i + D\dot{\theta}_i
= \Omega_i
+ \frac{K}{N}\sum_{j=1}^{N} a_j \sin(\theta_j-\theta_i).
\end{equation}
Here $a_i$ is the weight, describing the fraction of oscillators with natural frequency $\Omega_i$, and is used to mimic the distribution of oscillators.
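The weighted coupling can again be evaluated through a weighted mean field, as in the sketch below; it can be integrated with the same RK4 stepper as in the earlier sketch.
\begin{verbatim}
import numpy as np

def weighted_rhs(state, Omega, a, K, m, D):
    # (K/N) sum_j a_j sin(theta_j - theta_i)
    #   = K * Im( Z * exp(-1j*theta_i) ),
    # with the weighted mean field Z = (1/N) sum_j a_j exp(1j*theta_j)
    theta, omega = state
    Z = (a * np.exp(1j * theta)).mean()
    coupling = K * np.imag(Z * np.exp(-1j * theta))
    return np.array([omega, (Omega - D * omega + coupling) / m])
\end{verbatim}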
\begin{figure}
\centering
\includegraphics[width=0.24\textwidth]{3a}%
\includegraphics[width=0.24\textwidth]{3b}\\
\includegraphics[width=0.24\textwidth]{3c}%
\includegraphics[width=0.24\textwidth]{3d}
\caption{(a) Weights for oscillators with different natural frequencies $\Omega$, with $a_1=10, a_2=0.8, a_3=0.5$. (b) The factor $C$ and $F_1/F_2\equiv F(a_{12})/F(a_{23})$ for different rescaled inertias $\mu$. (c) and (d) are the synchronization processes with increasing coupling $K$ for $m=0.2$ (c) and $m=2.5$ (d). The damping coefficient is $D=1$.}
\label{fig_three_oscillator}
\end{figure}
To obtain more theoretical insight into the inertia effect, we begin by considering three coupled oscillators with $a_1\gg a_2>a_3$ and $\Omega_3>\Omega_2\gg\Omega_1$. Oscillator 1 is assigned the largest weight, describing the giant synchronized group, and oscillators 2 and 3 are assigned smaller weights but closer natural frequencies, describing two small groups away from the giant group, see Fig.~\ref{fig_three_oscillator}(a). Introducing the phase differences $\varphi_1=\theta_1-\theta_2$, $\varphi_2=\theta_2-\theta_3$, we rewrite the dynamics as
\begin{eqnarray}
\eqalign{
m\ddot{\varphi}_1+D\dot{\varphi}_1 &= (\Omega_1-\Omega_2) - \frac{K}{2}
\Big[(a_2+a_1)\sin(\varphi_1) \cr & -a_3\sin(\varphi_2)+a_3\sin(\varphi_1+\varphi_2)\Big],}\\
\eqalign{
m\ddot{\varphi}_2+D\dot{\varphi}_2 &= (\Omega_2-\Omega_3) - \frac{K}{2} \Big[(a_3+a_2)\sin(\varphi_2) \cr &-a_1\sin(\varphi_1)+a_1\sin(\varphi_1+\varphi_2)\Big].}
\label{eq_dynmaics_two}\end{eqnarray}
Even though the system Eq.~\eqref{eq_dynmaics_two} has various dynamical properties, we are only interested in the synchronization of each pair of oscillators, which means $\dot{\varphi}_1 \approxeq 0$ or $\dot{\varphi}_2 \approxeq 0$. The effect of the other oscillator is approximated as a periodic forcing by $\varphi_1\approxeq \omega_1 t$ with $\omega_1\equiv (\Omega_1-\Omega_2)/D$ when $\dot{\varphi}_2 \approxeq 0$, and $\varphi_2\approxeq \omega_2 t$ with $\omega_2\equiv (\Omega_2-\Omega_3)/D$ when $\dot{\varphi}_1 \approxeq 0$.
For the synchronization of the oscillators $\theta_1$ and $\theta_2$, we have
\begin{equation*}
m\ddot{\varphi}_1+D\dot{\varphi}_1 =(\Omega_1-\Omega_2)-\frac{K}{2}\Big[(a_2+a_1)\sin(\varphi_1)
-a_3\sin(\omega_2t)+a_3\sin(\varphi_1+\omega_2t)\Big].
\end{equation*}
From the fact that $a_1$ is the largest weight and $(a_2+a_1)\gg a_3$, we can ignore the periodic perturbations from $\varphi_2$ and obtain
\begin{equation}\label{eq_weight_first}
m\ddot{\varphi}_1+D\dot{\varphi}_1=(\Omega_1-\Omega_2)-\frac{K}{2}(a_2+a_1)\sin(\varphi_1).
\end{equation}
As for the synchronization of the oscillators $\theta_2$ and $\theta_3$, we have
\begin{equation*}
m\ddot{\varphi}_2+D\dot{\varphi}_2
=(\Omega_2-\Omega_3)-\frac{K}{2}\Big[(a_3+a_2)\sin(\varphi_2)
-a_1\sin(\omega_1t)+a_1\sin(\varphi_2+\omega_1t)\Big].
\end{equation*}
From the fact that $\omega_2\gg \omega_1$, we can average the fast periodic perturbation from $\varphi_1$ over time and obtain the dynamics of $\varphi_2$ as
\begin{equation}\label{eq_frequency_first}
m\ddot{\varphi}_2+D\dot{\varphi}_2=(\Omega_2-\Omega_3)-\frac{K}{2}(a_3+a_2)\sin(\varphi_2).
\end{equation}
Both the dynamics Eq.~\eqref{eq_weight_first} and Eq.~\eqref{eq_frequency_first} are the same as the dynamics of a single second-order oscillator Eq.~\eqref{eq_dynamics} with $\varepsilon=0$, studied in detail in \cite{Gao2018}. Hence the synchronization of each pair of oscillators, $\dot{\varphi_1}=0$ and $\dot{\varphi_2}=0$, can be obtained respectively as,
\begin{equation}\label{eq_synchronization_condition}
\eqalign{
\frac{2(\Omega_1-\Omega_2)}{K(a_1+a_2) \, b\Big(\frac{\sqrt{2}}{\sqrt{K\mu(a_1+a_2)}}\Big)} &\equiv \frac{\Delta\Omega_{12}}{F(a_{12})}\leq 1, \cr
\frac{2(\Omega_2-\Omega_3)}{K(a_3+a_2) \, b\Big(\frac{\sqrt{2}}{\sqrt{K\mu(a_3+a_2)}}\Big)} &\equiv \frac{\Delta\Omega_{23}}{F(a_{23})}\leq 1,}
\end{equation}
where $F(a)=\frac{1}{2}K a\,b\big(\frac{\sqrt{2}}{\sqrt{K\mu a}}\big)$ is a function of the weight $a$. The boundary function $b(x)$ equals $b_S(x)$ in the forward process and $b_L(x)\equiv1$ in the backward process. The frequency differences and sums of weights read $\Delta\Omega_{12}=\Omega_1-\Omega_2$, $a_{12}=a_1+a_2$ and $\Delta\Omega_{23}=\Omega_2-\Omega_3$, $a_{23}=a_2+a_3$. Since in the backward process the function $b_L(x)\equiv1$ is constant, the transition process is independent of $\mu$. However, in the forward process, with the nonlinear boundary function $b_S(x)$, the synchronization process depends crucially on the value of $\mu$.
To compare these two synchronization conditions, we define the factor
\begin{equation}\label{eq_condition}
C = \frac{\Delta\Omega_{12}}{\Delta\Omega_{23}}\frac{F(a_{23})}{F(a_{12})}.
\end{equation}
If $C<1$ the dominant synchronization process is the growth of the giant group from oscillator 1, while if $C>1$ an additional synchronized cluster will form between oscillators 2 and 3.
The value of $C$, and hence the synchronization process, depends on the value of the oscillators' rescaled inertia $\mu$, see Fig.~\ref{fig_three_oscillator}(b). When $\mu$ is small, we have that $F(a)=Ka/2$, and hence $F(a_{23})/F(a_{12})=a_{23}/a_{12}$. On the other hand, when $\mu$ is sufficiently large the function $F$ can be approximated as $F(a)\approx \sqrt{8K/\mu\pi^2}\sqrt{a}$ and correspondingly $F(a_{23})/F(a_{12})\approx \sqrt{a_{23}/a_{12}}$. Even though the ratio $\Delta\Omega_{12}/\Delta\Omega_{23}$ is fixed, the ratio $F(a_{23})/F(a_{12})$ increases monotonically with $\mu$, from $a_{23}/a_{12}$ to $\sqrt{a_{23}/a_{12}}$, as shown in Fig.~\ref{fig_three_oscillator}(b). The effect of the weights is thus weakened by the increase of inertia along the lower boundary $b_S$. Therefore, for larger inertias the oscillators are more likely to synchronize with oscillators of closer natural frequencies than with the giant group of larger weight. Consequently, we observe the appearance of additional synchronized clusters in the forward processes with sufficiently large inertias, as shown in Fig.~\ref{fig_three_oscillator}(d). As a generalization of these three coupled oscillators, one can also consider three groups of oscillators as a limiting case of multimodal frequency distributions. This is beyond the scope of this paper and we refer to \cite{Acebron1998asymptotic,Acebron2001bifurcations}.
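The crossover of $C$ with $\mu$ is simple to evaluate numerically; the sketch below uses the fit of $b_S$ quoted in Sec.~\ref{sec/time-periodic-mean-field} and treats the frequency differences and weights as free parameters.
\begin{verbatim}
import numpy as np

def b_S(x):
    return 1.2732 * x - 0.3056 * x**3 if x <= 1.193 else 1.0

def F(a, K, mu):
    # F(a) = (1/2) K a b(sqrt(2)/sqrt(K mu a)), with b = b_S
    # (forward process); b = b_L = 1 would give simply K*a/2
    x = np.sqrt(2.0) / np.sqrt(K * mu * a)
    return 0.5 * K * a * b_S(x)

def C_factor(dW12, dW23, a12, a23, K, mu):
    return (dW12 / dW23) * F(a23, K, mu) / F(a12, K, mu)
\end{verbatim}
Scanning $\mu$ with fixed weights (e.g.\ $a_{12}=10.8$, $a_{23}=1.3$ for the weights of Fig.~\ref{fig_three_oscillator}(a)) locates where $C$ crosses $1$ and the additional cluster becomes favorable.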
\begin{figure}
\centering
\includegraphics[width=0.24\textwidth]{12a}%
\includegraphics[width=0.24\textwidth]{12b}\\
\includegraphics[width=0.24\textwidth]{12c}%
\includegraphics[width=0.24\textwidth]{12d}\\
\includegraphics[width=0.24\textwidth]{12e}%
\includegraphics[width=0.24\textwidth]{12f}
\caption{Mean frequency of $N=12$ oscillators with increasing coupling strength $K$, for unimodally distributed weights with (a) $\mu=1$ and (b) $\mu=6$, bimodally distributed weights with large overlap and (c) $\mu=0.2$ and (d) $\mu=6$, and bimodally distributed weights with small overlap and (e) $\mu=0.2$ and (f) $\mu=6$. The natural frequencies are uniformly chosen as $\Omega_i=-5.5,-4.5,\dots,5.5$ with weights $a_i=0.2,0.3,0.5,0.9,1.2,1.9,1.9,1.2,0.9,0.5,0.3,0.2$ in (a) and (b), $a_i=0.3,0.4,0.5,1.1,1.4,0.9,0.9,1.4,1.1,0.5,0.4,0.3$ in (c) and (d), and $a_i=0.5,0.6,0.9,1.4,0.9,0.2,0.2,0.9,1.4,0.9,0.6,0.5$ in (e) and (f).}
\label{fig_12_oscillator}
\end{figure}
Following the analysis above for three oscillators, we further consider a larger system, where the synchronization process is a more complicated self-organization process. Considering $2N$ coupled oscillators, with the dynamics Eq.~\eqref{eq_weighted_model}, uniformly spaced frequencies $\Omega_i$ and different weights $a_i$, we find different synchronization processes and states, see Fig.~\ref{fig_12_oscillator}.
Firstly, for a system with a symmetric and unimodal natural frequency distribution, the central pair of oscillators $\theta_N$ and $\theta_{N+1}$ has the largest weights.
Since all nearest pairs of oscillators have the same natural frequency difference $\Delta\Omega$, the central pair with the largest weights will synchronize first as the coupling strength $K$ increases, by Eq.~\eqref{eq_condition}. Once they are synchronized, they form a synchronized cluster, or equivalently an effective oscillator $\theta_0$, with frequency $\Omega_0\equiv(\Omega_N+\Omega_{N+1})/2$ and weight $a_0\equiv(a_{N}+a_{N+1})$.
With the further increase of the coupling strength, for the oscillators $\theta_i$ with $i>N$ the next synchronization phenomenon will happen either between oscillators $\theta_{N+2}$ and $\theta_{N+3}$ or between oscillator $\theta_{N+2}$ and the synchronized group $\theta_0$, determined by the condition Eq.~\eqref{eq_condition} as
\begin{equation}\label{eq_condition_uni}
C = \frac{3}{2}\frac{F(a_{N+2}+a_{N+3})}{F(a_N+a_{N+1}+a_{N+2})},
\end{equation}
where $3/2$ is the ratio of their natural frequency differences.
When $\mu$ is sufficiently small, we have that $F(a)/F(b)=a/b$. From the unimodal property, we have $(a_{N+2}+a_{N+3})/(a_N+a_{N+1}+a_{N+2})<2/3$ and hence $C < 1$ in Eq.~\eqref{eq_condition_uni}. The synchronized group $\theta_0$ grows bigger and includes $\theta_{N+2}$.
The same process also takes place for the oscillators $\theta_i$ with $i<N$ due to the symmetry.
After this synchronization step, the central synchronized group includes the four oscillators $\theta_{i}$ with $i=N-1,\dots,N+2$. As $K$ increases further, the next synchronization condition factor for the oscillator $\theta_{N+3}$ reads
\begin{equation}
C = \frac{5}{2}\frac{F(a_{N+3}+a_{N+4})}{F(a_{N-1}+a_N+a_{N+1}+a_{N+2}+a_{N+3})}.
\end{equation}
With a similar analysis and the assumed unimodal property, we find that the oscillator $\theta_{N+3}$ will synchronize with the group $\theta_0$ when $\mu$ is sufficiently small. The central synchronized group gets larger. Following this process, with the increase of the coupling strength, it is straightforward to show that all the oscillators will be included in this group one by one. This follows directly from the unimodal property and also from the linear dependence of $F$ on the weight, as shown in Fig.~\ref{fig_12_oscillator}(a).
On the contrary, when the inertia is sufficiently increased, this linear dependence of $F$ is weakened to a square-root dependence and the self-organization chain breaks at some oscillator $\theta_{N+n}$. Instead of contributing to the growth of the central synchronized group $\theta_0$, a new cluster will form from the synchronization of $\theta_{N+n}$ and $\theta_{N+n+1}$. The new cluster will then grow larger with the increase of $K$ up to another point where the chain is broken again, due to the non-linearity of $F(a)$, and a third synchronized group forms. Continuing this argument we obtain the multi-cluster devil's staircase structure, see Fig.~\ref{fig_12_oscillator}(b). In this way, additional synchronized clusters are formed apart from the central one, forming an oscillatory state as in Fig.~\ref{fig_unimodal}(f).
Secondly, in the bimodal case with a sufficiently large distance between the two peaks, the system can be approximated as two independent unimodal systems when the coupling strength is small enough. In this case two synchronized clusters will form and grow initially from the oscillators with the largest weights, corresponding to the two peaks of the bimodal distribution. The self-organization takes place independently for each cluster.
For small inertias we have continuous growth from the two peaks of the distribution forming a standing wave, see Fig.~\ref{fig_12_oscillator}(e). For large inertias we have the appearance of several small clusters, see Fig.~\ref{fig_12_oscillator}(f). When the coupling strength $K$ is large enough, these two branches of synchronization processes will merge to one, by creating one large central cluster.
Thirdly, for the case where the two peaks of the bimodal distribution have a large overlap, the synchronization process is more complicated. The two oscillators at the peaks are close to each other and the oscillators between them also have relatively large weights. In this case, we need to consider higher-order terms in the synchronization processes, i.e.\ the synchronization condition of several oscillators.
Consider the case where $\theta_{N-1}$ and $\theta_{N+2}$ are the two peak oscillators with the maximum weights, while the two oscillators $\theta_{N},\theta_{N+1}$ between them have slightly smaller weights.
The condition for the four-oscillator synchronized group $\theta_{N-1},\dots,\theta_{N+2}$ can be estimated from the synchronization condition of the oscillator $\theta_{N+2}$ with the assumed synchronized cluster $\theta_{N-1},\theta_{N},\theta_{N+1}$. The condition factor comparing the appearance of the four-oscillator group with the synchronization of $\theta_{N+2}$ and $\theta_{N+3}$ then reads
\begin{equation}
C = \frac{3}{2}\frac{F(a_{N+2}+a_{N+3})}{F(a_{N-1}+a_N+a_{N+1}+a_{N+2})}.
\end{equation}
From the fact that $a_{N-1}=a_{N+2}>a_{N+3}$ is the maximum weight, if the two central oscillators have relatively large weights, $a_N+a_{N+1}>a_{N+3}$, we have $C<1$ and observe the abrupt appearance of a four-oscillator synchronized group when $\mu$ is small, as shown in Fig.~\ref{fig_12_oscillator}(c). On the contrary, if this weight advantage is weakened by the inertia effect, the synchronization process will start from the appearance of the two clusters $\theta_{N-2},\theta_{N-1}$ and $\theta_{N+2},\theta_{N+3}$, forming standing wave states like those for bimodal distributions with smaller overlap, see Fig.~\ref{fig_12_oscillator}(d).
\section{Discussion}\label{sec/discussion}
Based on the theoretical analysis in Sec.~\ref{sec/time-periodic-mean-field} and the simplified models in Sec.~\ref{sec/three}, we conclude that the main effect of inertia is to weaken the synchronizing influence of giant synchronized clusters on the other oscillators, when the system is in the lower branch of hysteresis loops.
As a result, additional synchronized clusters appear besides the giant clusters when $\mu$ is sufficiently large, thus leading to the appearance of oscillatory states or standing waves.
\ack{
We thank the Center for Information Technology of the University of Groningen for the use of the Peregrine HPC cluster.
J.\ Gao acknowledges support by a China Scholarship Council (CSC) scholarship.}
\section*{References}
\bibliographystyle{unsrt}
|
1,116,691,500,635 | arxiv | \section{Introduction}\label{sec:intro}
We study the problem of maintaining a set of planar points in external memory
subject to insertions and deletions of points in order to support planar
orthogonal 3-sided range skyline reporting queries efficiently in the worst
case. For two points~$p,q \in \mathbb{R}^d$, we say that~$p$
\textit{dominates}~$q$, if and only if all the coordinates of~$p$ are greater
than those of~$q$. The~\textit{skyline} of a pointset~$P$ consists of
the~\textit{maximal points} of~$P$, which are the points in $P$ that are not
dominated by any other point in $P$. \textit{Planar 3-sided range skyline reporting queries} report the maximal points among the points that lie within a given 3-sided orthogonal query range.

Skyline computation has been receiving increasing attention in the field of
databases since the introduction of the skyline operator for SQL~\cite{BKS01}.
Skyline points correspond to the ``interesting'' entries of a relational database
as they are optimal simultaneously over all attributes. The considered variant
of planar skyline queries adds the capability of reporting the interesting
entries among those input entries whose attribute values belong to a given
3-sided range. Databases used in practical applications usually process massive
amounts of data in dynamic environments, where the data can be modified by
update operations. Therefore we analyze our algorithms in the \textit{I/O
model}~\cite{AV88}, which is commonly used to capture the complexity of massive
data computation. It assumes that the input data resides in the disk (external
memory) divided in blocks of $B$ consecutive words, and that computation occurs
for free in the internal memory of size $M$ words. An \textit{I/O-operation
(I/O)} reads a block of data from the disk into the internal memory, or writes a
block of data to the disk. Time complexity is expressed in number of I/Os, and
space complexity in the number of blocks that the input data occupies on the
disk.
\paragraph{Previous Results}
Different approaches have been proposed for maintaining the $d$-dimensional
skyline in external memory under update operations, assuming for example offline
updates over data streams~\cite{TP06,MPG07}, only online
deletions~\cite{WAEA07}, online average case updates~\cite{PTFS05}, arbitrary
online updates \cite{HZK08} and online updates over moving input
points~\cite{HLOT06}. The efficiency of all previous approaches is measured
experimentally in terms of disk usage over average case data. However, even for
the planar case, no I/O-efficient structure exists that supports both arbitrary
insertions and deletions in sublinear worst case I/Os. Regarding internal
memory, Brodal and Tsakalidis~\cite{BT11} present two linear space dynamic data
structures that support 3-sided range skyline reporting queries in~$\mathcal{O}(\log n
+ t)$ and~$\mathcal{O}(\frac{\log n}{\log \log n} +t)$ worst case time, and updates
in~$\mathcal{O}(\log n)$ and~$\mathcal{O}(\frac{\log n}{\log \log n})$ worst case time in
the pointer machine and the RAM model, respectively, where~$n$ is the input size
and~$t$ is the output size. They also present an~$\mathcal{O}(n \log n)$ space dynamic
pointer-based data structure that supports 4-sided range skyline reporting
queries in~$\mathcal{O}(\log^2 n + t)$ worst case time and updates in~$\mathcal{O}(\log^2
n)$ worst case time. Adapting these structures to the I/O model attains
$\mathcal{O}(\log^{\mathcal{O}(1)}_B n + t)$ query I/Os, which is undesired since $\mathcal{O}(1)$
I/Os are spent per reported point.
Regarding the static variant of the problem, Sheng and Tao~\cite{ST11} obtain an
I/O-efficient algorithm that computes the skyline of a static $d$-dimensional
pointset in~$\mathcal{O}(\frac{n}{B}\log^{d-2}_{\frac{M}{B}}\frac{n}{B} )$ worst case
I/Os, for~$d\geq 3$, by adapting the internal memory algorithms of
\cite{KLP75,B80} to external memory. $\mathcal{O}(
\frac{n}{B}\log_{\frac{M}{B}}\frac{n}{B})$~I/Os can be achieved for the planar
case. There exist two~$\mathcal{O}(n \log n)$ and~$\mathcal{O}(n \frac{\log n}{\log \log
n})$ space static data structures that support planar 4-sided range skyline
reporting queries in~$\mathcal{O}(\log n +t )$ and~$\mathcal{O}(\frac{\log n}{\log \log n}
+ t)$ worst case time, for the pointer machine and the RAM,
respectively~\cite{KDKS11,DGKASK12}.
\paragraph{Our Results}
In Section~\ref{sec:iocpqa} we present the basic building block of the
structures for dynamic planar range skyline reporting queries that we present in
Section~\ref{sec:skyline}. That is pointer-based~\textit{I/O-efficient catenable
priority queues with attrition (I/O-CPQAs)} that support the
operations~\textsc{DeleteMin} and~\textsc{CatenateAndAttrite} in~$\mathcal{O}(1/B)$
amortized I/Os and in~$\mathcal{O}(1)$ worst case I/Os, using $\mathcal{O}(\frac{n-m}{B})$
disk blocks, after~$n$ calls to \textsc{CatenateAndAttrite} and~$m$ calls to
\textsc{DeleteMin}. The result is obtained by modifying appropriately a proposed
implementation for priority queues with attrition of Sundar~\cite{S89}.
In Section~\ref{sec:skyline} we present our main result, namely I/O-efficient
dynamic data structures that support 3-sided range skyline reporting queries in
$\mathcal{O}(\log_{2B^\epsilon} n + \frac{t}{B^{1-\epsilon}})$ worst case~I/Os and
updates in $\mathcal{O}(\log_{2B^\epsilon} n)$ worst case~I/Os, using
$\mathcal{O}(\frac{n}{B^{1-\epsilon}})$ blocks, for a parameter~$0\leq \epsilon \leq
1$. These are the first fully dynamic skyline data structures for external
memory that support all operations in polylogarithmic worst case time. The
results are obtained by following the approach of Overmars and van
Leeuwen~\cite{OL81} for planar skyline maintainance and utilizing confluently
persistent I/O-CPQAs (implemented with functional catenable deques~\cite{KT99}).
Applying the same methodology to internal memory pointer-based CPQAs yields
alternative implementations for dynamic 3-sided range skyline
reporting in the pointer machine with the same bounds as in~\cite{BT11}.
Finally, in Section~\ref{sec:dommaxlb} we prove that any pointer-based static
data structure that supports reporting the maximal points among the points that
are dominated by a given query point in~$\mathcal{O}(\log^{\mathcal{O}(1)}n)$ worst case
time must occupy~$\Omega(n \frac{\log n}{\log \log n})$ space, by adapting the
similar lower bounding argument of Chazelle~\cite{C90} for planar 4-sided range
reporting queries to the considered dominated skyline reporting queries. These
queries are termed as~\textit{dominating minima reporting queries}. The
symmetric case of~\textit{dominated maxima reporting queries} is equivalent and
comprises a special case of rectangular visibility queries~\cite{OW88} and
4-sided range skyline reporting queries \cite{BT11,KDKS11}. The result shows
that the space usage of the pointer-based structures in~\cite{OW88,BT11,KDKS11}
is optimal within a $\mathcal{O}(\log \log n)$ factor, for the attained query time.
\section{Preliminaries} \label{sect:prel}
\paragraph{Priority Queues with Attrition}
Sundar~\cite{S89} introduces pointer-based \emph{priority queues with attrition
(PQAs)} that support the following operations in~$\mathcal{O}(1)$ worst case time on
a set of elements drawn from a total order: \textsc{DeleteMin} deletes and
returns the minimum element from the PQA, and \textsc{InsertAndAttrite($e$)}
inserts element~$e$ into the PQA and removes all elements larger than~$e$ from
the PQA. PQAs use space linear to the number of inserted elements minus the
number of elements removed by \textsc{DeleteMin}.
\paragraph{Functional Catenable Deques}
A dynamic data structure is \textit{persistent} when it maintains its previous
versions as update operations are performed on it. It is \textit{fully
persistent} when it permits accessing and updating the previous versions. In
turn, it is called \textit{confluently persistent} when it is fully persistent,
and moreover it allows for two versions to be combined into a new version, by
use of an update operation that merges the two versions. In this case, the
versions form a directed acyclic version graph. A catenable deque is a list
that stores a set of elements from a total order, and supports the operations
~\textsc{Push} and~\textsc{Inject} that insert an element to the head and tail
of the list respectively, ~\textsc{Pop} and~\textsc{Eject} that remove the
element from the head and tail of the list respectively, and~\textsc{Catenate}
that concatenates two lists into one. Kaplan and Tarjan \cite{KT99} present
\textit{purely functional catenable deques} that are confluently persistent and
support the above operations in~$\mathcal{O}(1)$ worst case time.
\paragraph{Searching Lower Bound in the Pointer Machine}
In the pointer machine model a data structure that stores a data set $S$ and
supports range reporting queries for a query set $\mathcal{Q}$, can be modelled
as a directed graph $G$ of bounded out-degree. In particular, every node in $G$
may be assigned an element of $S$ or may contain some other useful information.
For a query range $Q_i\in \mathcal{Q}$, the algorithm navigates over the edges
of $G$ in order to locate all nodes that contain the answer to the query. The
algorithm may also traverse other nodes. The time complexity of reporting the
output of~$Q_i$ is at least equal to the number of nodes accessed in graph~$G$
for~$Q_i$. To prove a lower bound we need to construct hard instances with
particular properties, as discussed by Chazelle and Liu~\cite{C90,CL04}. In
particular, they define the graph~$G$ to be
$(\alpha,\omega)$-\textit{effective}, if a query is supported in $\alpha(t +
\omega)$ time, where~$t$ is the output size,~$\alpha$ is a multiplicative factor
for the output size ($\alpha = \mathcal{O}(1)$ for our purposes) and~$\omega$ is the
additive factor. They also define a query set~$\mathcal{Q}$ to be
$(m,\omega)$-\textit{favorable} for a data set~$S$, if $|S \cap Q_i| \geq
\omega, \forall Q_i \in \mathcal{Q}$ and $|S \cap Q_{i_1}\cap \cdots \cap
Q_{i_m}| = \mathcal{O}(1), \forall i_1 <i_2 \cdots< i_m$. Intuitively, the first part
of this property requires that the size of the output is large enough (at
least~$\omega$) so that it dominates the additive factor of~$\omega$ in the time
complexity. The second part requires that the query outputs have minimum
overlap, in order to force~$G$ to be large without many nodes containing the
output of many queries. The following lemma exploits these properties to provide
a lower bound on the minimum size of~$G$.
\begin{lemma} \label{lem:lower} \cite[Lemma 2.3]{CL04} For an
$(m,\omega)$-favorable graph~$G$ for the data set~$S$, and for an
$(\alpha,\omega)$-effective set of queries~$\mathcal{Q}$, $G$
contains~$\Omega(|\mathcal{Q}|\omega/m)$ nodes, for
constant~$\alpha$ and for any large enough~$\omega$.
\end{lemma}
\section{I/O-Efficient Catenable Priority Queues with Attrition} \label{sec:iocpqa}
In this Section, we present I/O-efficient~\textit{catenable priority queues with
attrition (I/O-CPQAs)} that store a set of elements from a total order in
external memory, and support the following operations:
\begin{description}
\item \textsc{FindMin}($Q$) returns the minimum element in I/O-CPQA~$Q$.
\item \textsc{DeleteMin}($Q$) removes the minimum element~$e$ from
I/O-CPQA~$Q$ and returns element~$e$ and the new I/O-CPQA~$Q' = Q
\backslash\{e\}$.
\item \textsc{CatenateAndAttrite}($Q_1,Q_2$) concatenates I/O-CPQA~$Q_2$ to
the end of I/O-CPQA~$Q_1$, removes all elements in~$Q_1$ that are larger
than the minimum element in~$Q_2$, and returns a new I/O-CPQA $Q'_1 = \{e
\in Q_1 | e < \min(Q_2)\} \cup Q_2$. We say that the removed elements have
been~\emph{attrited}.
\item \textsc{InsertAndAttrite}($Q,e$) inserts element $e$ at the end of $Q$
and attrites all elements in $Q$ that are larger than the value of~$e$.
\end{description}
All operations take~$\mathcal{O}(1)$ worst case I/Os and~$\mathcal{O}(1/b)$ amortized I/Os,
given that a constant number of blocks is already loaded into main memory, for a
parameter~$1 \leq b \leq B$. To achieve the result, we modify an implementation
for the PQAs of Sundar~\cite{S89}.
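Before giving the representation, note that semantically an I/O-CPQA is equivalent to a sequence of elements that is strictly increasing in queue order; attrition simply removes a suffix. The following sketch captures only these semantics, with none of the I/O-efficiency (the class and its names are ours).
\begin{verbatim}
class SimpleCPQA:
    # semantics of a CPQA: elements are kept strictly increasing
    def __init__(self, elems=()):
        self.elems = list(elems)

    def find_min(self):
        return self.elems[0]

    def delete_min(self):
        return self.elems.pop(0)

    def catenate_and_attrite(self, other):
        # the suffix of self with values >= min(other) is attrited
        if other.elems:
            m = other.elems[0]
            self.elems = [e for e in self.elems if e < m]
        self.elems.extend(other.elems)

    def insert_and_attrite(self, e):
        self.catenate_and_attrite(SimpleCPQA([e]))
\end{verbatim}
The machinery below realizes this behavior while touching only $\mathcal{O}(1)$ blocks per operation.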
An I/O-CPQA~$Q$ consists of~$k_Q+2$ deques of records, called the clean
deque~$C(Q)$, the buffer deque~$B(Q)$ and the dirty
deques~$D_1(Q),\ldots,D_{k_Q}(Q)$, where~$k_Q \geq 0$. A
\emph{record}~$r = (l,p)$ consists of a buffer~$l$ of~$[b,4b]$ elements
of strictly increasing value and a pointer~$p$ to an I/O-CPQA. The ordering
of~$r$ is; first all elements of~$l$ and then all elements of the I/O-CPQA
pointed to by~$p$. We define the queue order of~$Q$ to be~$C(Q)$,~$B(Q)$
and~$D_1(Q),\ldots,D_{k_Q}(Q)$. A record is \emph{simple} when its pointer~$p$
is \emph{null}. The clean deque and the buffer deque only contains simple
records.\fullcmt{ See Figure~\ref{fig:Overview} for an overview of the
structure.}
\fullcmt{
\inkfig{htb}{\linewidth}{CPQAOverview}{fig:Overview}{A I/O CPQA $Q$ consists of
$k_Q + 2$ deques of records;~$C(Q), B(Q), D_1(Q),\ldots, D_{k_Q}(Q)$. The
records in~$C(Q)$ and~$B(Q)$ are simple, the records of~$D_1(Q),\ldots,
D_{k_Q}(Q)$ may contain pointers to other I/O CPQA's. Gray recordsare always
loaded in memory.}}
Given a record $r =(l,p)$ the minimum and maximum elements in the buffers
of~$r$, are denoted by~$\min(r) = \min(l)$ and~$\max(r) = \max(l)$,
respectively. They appear respectively first and last in the queue order of~$l$,
since the buffer of~$r$ is sorted by value. Henceforth, we do not distinguish
between an element and its value. Given a deque~$q$, the first and the last
records are denoted by~$\text{first}(q)$ and~$\text{last}(q)$, respectively. Also~$\text{rest}(q)$
denotes all records of the deque~$q$ excluding the record~$\text{first}(q)$.
Similarly,~$\text{front} (q)$ denotes all records for the deque~$q$ excluding the
record~$\text{last}(q)$. The size~$|r|$ of a record~$r$ is defined to be the number of
elements in its buffer. The size~$|q|$ of a deque~$q$ is defined to be the
number of records it contains. The size~$|Q|$ of the I/O-CPQA~$Q$ is defined to
be the number of elements that $Q$ contains. For an I/O-CPQA $Q$ we denote by
$\text{first}(Q)$ and $\text{last}(Q)$, the first and last of the records in $C(Q), B(Q),
D_1(Q), \ldots, D_{k_Q}(Q)$ that exists, respectively. By~$\text{middle}(Q)$ we denote
all records in~$Q$ and the records in the I/O-CPQAs pointed by $Q$, except for
records $\text{first}(Q)$~and~$\text{last}(Q)$ and the I/O-CPQAs they point to. We call an
I/O-CPQA~$Q$ \emph{large} if $|Q| \geq b$ and \emph{small} otherwise. The
minimum value of all elements stored in the I/O-CPQA~$Q$ is denoted by~$\min(Q)$.
For an I/O-CPQA~$Q$ we maintain the following invariants:
\begin{enumerate}[{I}.1)]
\item \label{in:records} For every record~$r = (l,p)$ where pointer~$p$ points
to I/O-CPQA~$Q'$,~$\max(l) < \min(Q')$ holds.
\item \label{in:recordpairs} In all deques of~$Q$, where
record~$r_1=(l_1,p_1)$ precedes record~$r_2=(l_2,p_2)$,~$\max(l_1) <
\min(l_2)$ holds.
\item \label{in:queuevalues} For the deques~$C(Q),B(Q)$ and~$D_1(Q)$,
$\max(\text{last}(C(Q))) < \min(\text{first}(B(Q))) < \min(\text{first}(D_1(Q)))$ holds.
\item \label{in:min} Element~$\min(\text{first}(D_1(Q)))$ has the minimum value
among all the elements in the dirty deques~$D_1(Q),\ldots,D_k(Q)$.
\item \label{in:simple} All records in the deques~$C(Q)$ and~$B(Q)$ are
simple.
\item \label{in:ineq} $|C(Q)| \geq \sum_{i=1}^{k_Q}{|D_i(Q)|}+k_Q-1$.
\item \label{in:small} $|\text{first}(C(Q))|<b$ holds, if and only if $|Q|< b$
holds.
\item \label{in:smalltail} $|\text{last}(D_{k_Q}(Q))|< b$ holds, if and only if
record~$\text{last}(D_{k_Q}(Q))$ is simple. In this case $|r|\in[b,5b]$ holds.
\end{enumerate}
From Invariants~\iref{in:recordpairs},~\iref{in:queuevalues} and~\iref{in:min},
we have that the minimum element~$\min(Q)$ stored in the I/O-CPQA~$Q$ is
element~$\min(\text{first}(C(Q)))$. We say that an operation \textit{improves} or
\textit{aggravates} by a parameter~$c$ the inequality of
invariant~\iref{in:ineq} for I/O-CPQA~$Q$, when the operation increases or
decreases $\Delta (Q) = |C(Q)| - \sum_{i=1}^{k_Q}{|D_i(Q)|} - k_Q + 1$ by~$c$,
respectively. To argue about the~$\mathcal{O}(1/b)$ amortized I/O bounds we define the
following potential functions for large and small I/O-CPQAs. In particular, for
large I/O-CPQAs~$Q$, the potential~$\Phi(Q)$ is defined as
\[
\Phi(Q) = \Phi_F(|\text{first}(Q)|) + |\text{middle}(Q)| + \Phi_L(|\text{last}(Q)|),
\]
where
\[
\begin{array}{ccc}
{
\Phi_F(x) = \left\{
\begin{array}{cl}
3 -\frac{x}{b}, & b \leq x < 2b \\
1, & 2b \leq x < 3b \\
\frac{2x}{b}-5, & 3b \leq x \leq 4b \\
\end{array}
\right.
} & \text{and} & {
\Phi_L(x) = \left\{
\begin{array}{cl}
0, & 0 \leq x < 4b \\
\frac{3x}{b}-12, & 4b \leq x \leq 5b \\
\end{array}
\right.
}
\end{array}
\]
For small I/O-CPQAs~$Q$, the potential~$\Phi(Q)$ is defined as
\[
\Phi(Q) = \frac{3|Q|}{b}
\]
The total potential $\Phi_T$ is defined as
\[
\Phi_T = \sum_{Q}{\Phi(Q)} + \sum_{Q|b \leq |Q|}{1},
\]
where the first sum is over all I/O-CPQAs~$Q$ and the second sum is only over
all large I/O-CPQAs~$Q$.
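For reference, the two piecewise potentials transcribe directly as follows (a small sanity-check sketch; note that $\Phi_F$ is continuous at $x=2b$ and $x=3b$, and $\Phi_L$ at $x=4b$).
\begin{verbatim}
def Phi_F(x, b):
    # potential of first(Q) for a large queue; defined for b <= x <= 4b
    if x < 2 * b:
        return 3 - x / b
    if x < 3 * b:
        return 1
    return 2 * x / b - 5

def Phi_L(x, b):
    # potential of last(Q) for a large queue; defined for 0 <= x <= 5b
    return 0 if x < 4 * b else 3 * x / b - 12
\end{verbatim}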
\subsection{Operations}
In the following, we describe the algorithms that implement the operations
supported by the I/O-CPQA~$Q$. The operations call the auxiliary operation
\textsc{Bias}$(Q)$, which will be described last, that improves the inequality
of invariant \iref{in:ineq} for~$Q$ by at least~$1$. All operations
take~$\mathcal{O}(1)$ worst case I/Os. We also show that every operation
takes~$\mathcal{O}(1/b)$ amortized~I/Os, where~$1 \leq b \leq B$.
\paragraph{\textsc{FindMin}($Q$)} returns the value $\min(\text{first}(C(Q)))$.
\paragraph{\textsc{DeleteMin}($Q$)} removes element $e = \min(\text{first}(C(Q)))$
from record $(l,p) = \text{first}(C(Q))$. After the removal, if $|l| < b$ and $|Q|
\geq b$ hold, we do the following. If $b \leq |\text{first}(\text{rest}(C(Q)))| \leq 2b$,
then we merge $\text{first} (C(Q))$ with $\text{first} (\text{rest}(C(Q)))$ into one record which
is the new first record. Else if $2b < |\text{first}(\text{rest}(C(Q)))| \leq 3b$ then we
take~$b$ elements out of $\text{first}(\text{rest}(C(Q)))$ and put them into $\text{first}(C(Q))$.
Else we have that $3b < |\text{first}(\text{rest}(C(Q)))|$, and as a result we take $2b$
elements out of $\text{first}(\text{rest}(C(Q)))$ and put them into $\text{first}(C(Q))$. If the
inequality for $Q$ is aggravated by $1$ we call \textsc{Bias}($Q$) once.
Finally, element $e$ is returned.
\noindent \textit{Amortization:} Only if the size of $\text{first}(C(Q))$ becomes
$|\text{first}(C(Q))| = b -1$ do we incur any I/Os. In this case~$r =\text{first}(Q)$ has a
potential of $\Phi_F(|r|) =2$, and since we increase the number of elements
in~$r$ by~$b$ to~$2b$ elements, the potential of~$r$ will then only
be~$\Phi_F(|r|) =1$. Thus, the total potential decreases by~$1$, which also pays
for any I/Os including those incurred if \textsc{Bias}$(Q)$ is invoked.
\paragraph{\textsc{CatenateAndAttrite}($Q_1, Q_2$)} concatenates~$Q_2$ to the
end of~$Q_1$ and removes the elements from~$Q_1$ with value larger
than~$\min(Q_2)$. To do so, it creates a new I/O-CPQA~$Q'_1$ by modifying~$Q_1$
and~$Q_2$, and by calling \textsc{Bias}($Q'_1$) and \textsc{Bias}($Q_2$).
If $|Q_1| < b$, then $Q_1$ is only one record~$(l_1,\cdot)$, and so we prepend
it into the first record $(l_2,\cdot) = \text{first}(Q_2)$ of~$Q_2$. Let~$l_1'$ be
the non-attrited elements of~$l_1$. We perform the prepend as follows.
If~$|l_1'| + |l_2| \leq 4b$, then we prepend~$l_1'$ into~$l_2$. Else, we take
$2b - |l_1'|$ elements out of~$l_2$, and make them along with~$l_1'$ the new
first record of~$Q_2$.
\noindent \textit{Amortization:} If we simply prepend~$l_1'$ into~$l_2$, then
the potential~$\Phi_S(|l_1|)$ pays for the increase in potential of
$\Phi_F(|\text{first}(C(Q_2))|)$. Else, we take~$2b - |l_1'|$ elements out of~$l_2$,
and these elements along with~$l_1'$ become the new first record of~$Q_2$ of
size~$2b$. Thus,~$\Phi_F(2b) = 1$ and the potential drops by~$1$, which is
enough to pay for the I/Os used to flush the old first record of~$C(Q_2)$ to
disk.
\noindent If~$|Q_2| < b$, then $Q_2$ only consists of one record. We have two
cases, depending on how much of~$Q_1$ is attrited by~$Q_2$. Let~$r_1$ be the
second to last record of~$Q_1$ and let~$r_2 = \text{last}(Q_1)$ be the last record.
If~$e$ attrites all of $r_1$, then we just pick the appropriate case among
(\ref{it:Q1C}--\ref{it:D}) below. Else if~$e$ attrites partially~$r_1$, but not
all of it, then we delete $r_2$ and we merge~$r_1$ and~$Q_2$ into the new last
record of~$Q_1$, which cannot be larger than $5b$. Otherwise if~$e$ attrites
partially~$r_2$, but not all of it, then we simply append the single record
of~$Q_2$ into~$r_2$, which will be the new last record of $Q_1$ and it cannot be
larger than $5b$.
\noindent \textit{Amortization:} If~$e$ attrites all of~$r_1$, then we release
at least~$1$ in potential, so all costs in any of the cases
(\ref{it:Q1C}--\ref{it:D}) are paid for. If~$e$ attrites partially~$r_1$, then
the new record cannot contain more than~$5b$ elements, and thus any increase in
potential is paid for by the potential of~$Q_2$. Thus, the I/O cost is covered
by the decrease of~$1$ in potential, caused by~$r_1$. If~$e$ attrites
partially~$r_2$, any increase in potential is paid for by the potential of
$Q_2$.
\noindent We have now dealt with the case where~$Q_1$ is a small queue, so in
the following we assume that~$Q_1$ is large. Let~$e = \min(Q_2)$.
\begin{enumerate}[1)]
\item \label{it:Q1C} If $e \leq \min(\text{first}(C(Q_1)))$, we discard
I/O-CPQA~$Q_1$ and set~$Q'_1 = Q_2$.
\item \label{it:Q1lastC} Else if $e \leq \max(\text{last}(C(Q_1)))$, we remove the
simple record $(l,\cdot) = \text{first}(C(Q_2))$ from~$C(Q_2)$, we set $C(Q'_1) =
\emptyset$, $B(Q'_1) = C(Q_1)$ and $D_1(Q'_1) = (l,p)$, where~$p$ points
to~$Q_2$, if it exists. This aggravates the inequality for~$Q_2$ by at
most~$1$, and gives~$\Delta (Q'_1) = - 1$. Thus, we call
\textsc{Bias}$(Q_2)$ once and \textsc{Bias}$(Q'_1)$ once.
\item \label{it:B} Else if $e \leq \min(\text{first}(B(Q_1)))$ or $e \leq
\min(\text{first}(D_1(Q_1)))$ holds, we remove the simple record $(l,\cdot) =
\text{first}(C(Q_2))$ from~$C(Q_2)$, set $D_1(Q'_1) =(l,p)$, and make~$p$ point to
$Q_2$, if it exists. If $e \leq \min(\text{first}(B(Q_1)))$, we set~$B(Q_1') =
\emptyset$. This aggravates the inequality for~$Q_2$ by at most~$1$, and
aggravates the inequality for~$Q_1$ by at most $1$. Thus, we call
\textsc{Bias}$(Q_2)$ once and \textsc{Bias}$(Q'_1)$ once.
\item \label{it:D} Else, let $(l_1,\cdot) = \text{last}(D_{k_{Q_1}})$. We remove
$(l_2,\cdot) =\text{first}(C(Q_2))$ from $C(Q_2)$. If~$|l_1| < b$, then remove the
record $(l_1,\cdot)$ from $D_{k_{Q_1}}$. Let $l_1'$ be the non-attrited
elements under attrition by~$e = \min(l_2)$. If $|l_1'| + |l_2| \leq 4b$,
then we prepend~$l_1'$ into~$l_2$ of record~$r_2 =(l_2, p_2)$, where~$p_2$
points to~$Q_2$. Otherwise. we make a new simple record $r_1$ with~$l_1'$
and~$2b$ elements taken out of~$r_2=(l_2,p_2)$. Finally, we put the
resulting one or two records~$r_1$ and~$r_2$ into a new
deque~$D_{k_{Q_1}+1}(Q_1)$. This aggravates the inequality for~$Q_2$ by at
most~$1$, and the inequality for~$Q_1$ by at most~$2$. Thus, we call
\textsc{Bias}$(Q_2)$ once and \textsc{Bias}$(Q'_1)$ twice.
\end{enumerate}
\noindent \textit{Amortization:} In all the cases (\ref{it:Q1C}--\ref{it:D})
both $Q_1$ and $Q_2$ are large, hence when we concatenate them we decrease the
potential by at least~$1$, as the number of large I/O-CPQAs decreases by one,
which is enough to pay for any \textsc{Bias} operations.
\paragraph{\textsc{InsertAndAttrite}($Q$, $e$)} inserts an element~$e$ into
I/O-CPQA~$Q$ and attrites the elements in~$Q$ with value larger than~$e$. This
is a special case of operation \textsc{CatenateAndAttrite}($Q_1$,$Q_2$),
where~$Q_1 = Q$ and~$Q_2$ is an I/O-CPQA that only contains one record with the
single element~$e$.
\noindent \textit{Amortization:} Since creating a new I/O-CPQA with only one
element and calling \textsc{CatenateAndAttrite} only costs~$\mathcal{O}(1/b)$ I/Os
amortized, the operation \textsc{InsertAndAttrite} also costs~$\mathcal{O}(1/b)$
I/Os amortized.
\fullcmt{
\inkfig{htb}{\linewidth}{CPQABias}{fig:Bias}{In the case of \textsc{Bias}$(Q)$,
where~$B(Q) = \emptyset$ and~$k_Q = 1$, we need to follow the pointer~$p$
of~$(l,p) = \text{first}(D_1(Q))$ that may point to an I/O-CPQA~$Q'$. If so, we merge
it into~$Q$, taking into account attrition of~$Q'$ by~$e =
\min(\text{first}(D_1(Q)))$.}}
\paragraph{\textsc{Bias}$(Q)$} improves the inequality in \iref{in:ineq} for~$Q$
by at least~$1$.
\noindent \textit{Amortization:} Since all I/Os incurred by \textsc{Bias}$(Q)$
are already paid for by the operation that called \textsc{Bias}$(Q)$, we only
need to argue that the potential of~$Q$ does not increase due to the changes
that \textsc{Bias}$(Q)$ makes to~$Q$.
\begin{enumerate}[1)]
\item \label{it:Blg0} $|B(Q)| > 0$: We remove the first record $\text{first}(B(Q))
=(l_1,\cdot)$ from $B(Q)$ and let $(l_2,p_2) = \text{first}(D_1(Q))$. Let~$l_1'$
be the non-attrited elements of~$l_1$ under attrition from~$e = \min(l_2)$.
\begin{enumerate}[1)]
\item $0 \leq |l_1'| < b$: If $|l_2| \leq 2b$, then we just prepend~$l_1'$
onto~$l_2$. Else, we take~$b$ elements out of~$l_2$ and append them
to~$l_1'$.
\item $b \leq |l_1'| < 2b$: If $|l_2| \leq 2b$, and if furthermore $|l_1'|
+ |l_2| \leq 3b$ holds, then we merge~$l_1'$ and~$l_2$. Else~$|l_1'| +
|l_2| > 3b$ holds, so we take~$2b$ elements out of~$l_1'$ and~$l_2$ and
put them into $l_1'$, leaving the rest in~$l_2$.
Else~$|l_2| > 2b$ holds, so we take~$b$ elements out of~$l_2$ and put
them into~$l_1'$.
\end{enumerate}
If we did not prepend~$l_1'$ onto~$l_2$, we insert~$l_1'$ along with any
elements taken out of~$l_2$ at the end of~$C(Q)$ instead. If $|l_1'| <
|l_1|$, we set $B(Q) = \emptyset$. Else, we did prepend~$l_1'$ onto~$l_2$,
and then we just recursively call~\textsc{Bias}. Since~$|B(Q)| = 0$ we will
not end up in this case again. As a result, in all cases the inequality
of~$Q$ is improved by~$1$.
\textit{Amortization:} If~$l_1 = \text{first}(Q)$, then after calling
\textsc{Bias} we ensure that~$2b \leq |\text{first}(Q)| \leq 3b$, so that the
potential of~$Q$ does not increase.
\item \label{it:Beq0} $|B(Q)| = 0$: When $|B(Q)| = 0$ holds, we have two
cases depending on the number of dirty queues, namely cases $k_Q >
1$~and~$k_Q = 1$.
\begin{enumerate}[1)]
\item \label{it:KQgt1} $k_Q > 1$: Let $e = \min(\text{first}(D_{k_Q}(Q)))$. If
$e \leq \min(\text{last}(D_{k_Q-1}(Q)))$ holds, we remove the record
$\text{last}(D_{k_Q -1}(Q))$ from~$D_{k_Q-1}(Q)$. This improves the
inequality of~$Q$ by~$1$.
Else, if $\min(\text{last}(D_{k_Q-1}(Q))) < e \leq \max(\text{last}(D_{k_Q-1}(Q)))$
holds, we remove record $r_1 = (l_1,p_1) = \text{last}(D_{k_Q-1}(Q))$
from~$D_{k_Q-1}(Q)$ and let $r_2 = (l_2,p_2) = \text{first}(D_{k_Q}(Q))$. We
delete any elements in~$l_1$ that are attrited by~$e$, and let~$l_1'$
denote the non-attrited elements.
\begin{enumerate}[1)]
\item $0 \leq |l_1'| < b$: If $|l_2| \leq 2b$, then we just
prepend~$l_1'$ onto~$l_2$. Otherwise, we take~$b$ elements out
of~$l_2$ and append them to~$l_1'$.
\item If $b \leq |l_1'| < 2b$: If $|l_2| \leq 2b$ and $|l_1'| + |l_2|
\leq 3b$, then we merge~$l_1'$ and~$l_2$. Else, $|l_1'| + |l_2| > 3b$
holds, so we take~$2b$ elements out of~$l_1'$ and~$l_2$ and put them
into~$l_1'$, leaving the rest in~$l_2$.
Else $|l_2| > 2b$, so we take~$b$ elements out of~$l_2$ and put them
into~$l_1'$.
\end{enumerate}
If~$r_1$ still exists, we insert it in the front of~$D_{k_Q}(Q)$. Finally,
we concatenate~$D_{k_Q-1}(Q)$ and~$D_{k_Q}(Q)$ into one deque. This improves
the inequality of~$Q$ by at least~$1$.
Else $\max(\text{last}(D_{k_Q-1}(Q))) < e$ holds, and we just concatenate the
deques~$D_{k_Q-1}(Q)$ and~$D_{k_Q} (Q)$, which improves the inequality
for~$Q$ by~$1$.
\textit{Amortization:} If not all of~$l_1$ is attrited then we ensure that
its record~$r_1$ has size between~$2b$ and~$3b$. Thus, if $r_1 = \text{first}(Q)$
holds, we will not have increased the potential of~$Q$. In the cases where
all or none of~$l_1$ is attrited, the potential of~$Q$ does not increase.
\item \label{it:KQeq1} $k_Q = 1$: In this case~$Q$ contains only deques~$C(Q)$
and~$D_1(Q)$. We remove the record $r = (l,p) = \text{first}(D_1(Q))$ and
insert~$l$ into a new record at the end of~$C(Q)$. This improves the
inequality of~$Q$ by at least~$1$. If~$r$ is not simple, let~$r$'s
pointer~$p$ point to I/O-CPQA~$Q'$. We restore \iref{in:simple} for~$Q$ by
merging I/O-CPQAs~$Q$ and~$Q'$ into one I/O-CPQA.\fullcmt{ See
Figure~\ref{fig:Bias} for this case of operation~\textsc{Bias}.} In
particular, let~$e = \min(\text{first}(D_1(Q)))$, we now proceed as follows:
If $e \leq \min(Q')$, we discard~$Q'$. The inequality for~$Q$ remains
unaffected.
Else, if $\min(\text{first}(C(Q'))) < e \leq \max(\text{last}(C(Q')))$, we set $B(Q) =
C(Q')$ and discard the rest of~$Q'$. The inequality for~$Q$ remains
unaffected.
Else if $\max(\text{last}(C(Q'))) < e \leq \min(\text{first}(D_1(Q')))$, we concatenate
the deque~$C(Q')$ at the end of~$C(Q)$. If moreover $\min(\text{first}(B(Q'))) <
e$ holds, we set $B(Q) = B(Q')$. Finally, we discard the rest of~$Q'$. This
improves the inequality for~$Q$ by~$|C(Q')|$.
Else $\min(\text{first}(D_1(Q'))) < e$ holds. We concatenate the deque~$C(Q')$ at
the end of~$C(Q)$, we set~$B(Q) = B(Q')$, we
set~$D_1(Q'),\ldots,D_{k_{Q'}}(Q')$ as the first~$k_{Q'}$ dirty queues
of~$Q$ and we set~$D_1(Q)$ as the last dirty queue of~$Q$. This improves the
inequality for~$Q$ by~$\Delta(Q') \geq 0$, since~$Q'$ satisfied
\iref{in:ineq} before the operation.
If~$r=\text{first}(Q)$ and $|l| \leq 2b$, then we remove $r$ and run \textsc{Bias}
recursively. Let $r' = (l',p') = \text{first}(Q)$. If $|l| + |l'| > 3b$, then we
take the first~$2b$ elements out and make them the new first record
of~$C(Q)$. Else we merge~$l$ into~$l'$, so that~$r$ is removed and~$r'$ is
now~$\text{first}(Q)$.
\textit{Amortization:} Since $\text{first}(Q)$ is either untouched or left with
$2b$ to~$3b$ elements, in which case its potential is~$1$, and since all
other changes decrease the potential by at least~$0$, we have that
\textsc{Bias} does not increase the potential of~$Q$.
\end{enumerate}
\end{enumerate}
\begin{theorem} \label{thm:iocpqa}
A set of~$\ell$ I/O-CPQA's can be maintained supporting the operations
\textsc{FindMin}, \textsc{DeleteMin}, \textsc{CatenateAndAttrite} and
\textsc{InsertAndAttrite} in~$\mathcal{O}(1/b)$ I/Os amortized and~$\mathcal{O}(1)$ worst
case I/Os per operation. The space usage is $\mathcal{O}(\frac{n-m}{b}) $ blocks
after calling \textsc{CatenateAndAttrite} and \textsc{InsertAndAttrite}~$n$
times and \textsc{DeleteMin}~$m$ times, respectively. We require that~$M \geq
\ell b$ for~$1 \leq b \leq B$, where~$M$ is the main memory size and~$B$ is
the block size.
\end{theorem}
\begin{proof}
The correctness follows by observing that we maintain invariants
\iref{in:records}--\iref{in:smalltail}, from which it follows that
\textsc{DeleteMin}$(Q)$ and \textsc{FindMin}$(Q)$ always return the minimum
element of~$Q$.
The worst case I/O bound of~$\mathcal{O}(1)$ is trivial as every operation only
touches~$\mathcal{O}(1)$ records. Although \textsc{Bias} is recursive, we notice
that in the case where $|B(Q)| > 0$, \textsc{Bias} only calls itself after
making $|B(Q)| = 0$, so it will not end up in this case again. Similarly, if
$|B(Q)| = 0$ and $k_Q > 1$ there might also be a recursive call to
\textsc{Bias}. However, before the call at least~$b$ elements have been taken
out of~$Q$, and thus the following recursive call to \textsc{Bias} will ensure
at least~$b$ more are taken out. This is enough to stop the recursion, which
will have depth at most~$3$.
The~$\mathcal{O}(1/b)$ amortized I/O bound follows from the potential analysis
given in the description of each operation.
\end{proof}
\subsection{Concatenating a Sequence of I/O-CPQAs} \label{ssec:seq}
We describe how to \textsc{CatenateAndAttrite} I/O-CPQAs
$Q_{1},Q_2,\ldots,Q_{\ell}$ into a single I/O-CPQA in $\mathcal{O}(1)$ worst case
I/Os, given that \textsc{DeleteMin} is not called in the sequence of operations.
We moreover impose two more assumptions. In particular, we say that I/O-CPQA $Q$
is in \textit{state} $x \in \mathbb{Z}$, if $|C(Q)| = \sum_{i=1}^{k_Q}{|D_i(Q)|}
+ k_Q -1 +x$ holds. Positive~$x$ implies that \textsc{Bias}$(Q)$ will be called
after the inequality for~$Q$ is aggravated by~$x+1$. Negative~$x$ implies that
\textsc{Bias}$(Q)$ needs to be called~$|x|$ times in order to restore the
inequality for~$Q$. So, we moreover assume that I/O-CPQAs~$Q_{i},i \in[1,\ell]$
are at state at least~$+2$, unless~$Q_{i}$ contains only one record in which
case it may be in state~$+1$. We call a record $r=(l,p)$ in an I/O-CPQA $Q_i$
\textit{critical}, if~$r$ is accessed at some time during the sequence of
operations. In particular, the critical records for~$Q_i$ are
$\text{first}(C(Q_i)),\text{first}(\text{rest}(C(Q_i))),\text{last}(C(Q_i)),\text{first}(B(Q_i)),\text{first}(D_1(Q_i)),\text{last}(D_{k_{Q_i}}(Q_i))$,
and $\text{last}(\text{front}(D_{k_{Q_i}}(Q_i)))$ if it exists. Otherwise, record
$\text{last}(D_{k_{Q_i}-1}(Q_i))$ is critical. So, we moreover assume that the
critical records for I/O-CPQAs~$Q_{i},i\in[1,\ell]$ are loaded into memory.
The algorithm considers I/O-CPQAs~$Q_{i}$ in decreasing index~$i$ (from right to
left). It sets $Q^{\ell}=Q_\ell$ and constructs the temporary I/O-CPQA $Q^{i-1}$ by
calling \textsc{CatenateAndAttrite}($Q_{i-1}$,$Q^{i}$). This yields the final
I/O-CPQA~$Q^{1}$.
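In code, the fold is a single right-to-left pass; the sketch below (names
hypothetical) takes the catenation primitive as a parameter, so it can be run,
for instance, with the value-level model sketched earlier. In the actual
structure the preconditions on states and critical records below guarantee
that no I/O is incurred.
\begin{verbatim}
def catenate_sequence(queues, catenate_and_attrite):
    # queues = [Q_1, ..., Q_l]; Q^l = Q_l, then
    # Q^{i-1} = CatenateAndAttrite(Q_{i-1}, Q^i), yielding Q^1.
    acc = queues[-1]
    for qi in reversed(queues[:-1]):
        acc = catenate_and_attrite(qi, acc)
    return acc
\end{verbatim}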
\begin{lemma} \label{lem:seq_concats} I/O-CPQAs
$Q_{i},i\in[1,\ell]$ can be \textsc{CatenateAndAttrite}d into a single
I/O-CPQA without any access to external memory, provided that:
\begin{enumerate}
\item $Q_{i}$ is in state at least $+2$, unless it contains only one record,
in which case its state is at least $+1$,
\item all critical records of all $Q_{i}$ reside in main memory.
\end{enumerate}
\end{lemma}
\confcmt{We refer the reader to the full version of the paper.}
\begin{fullenv}
\begin{proof}
To avoid any I/Os during the sequence of \textsc{CatenateAndAttrite}s, we
ensure that~\textsc{Bias} is not called, and that the critical records are
sufficient, and thus no more records need to be loaded into memory.
To avoid calling~\textsc{Bias} we prove by induction the invariant that the
temporary I/O-CPQAs $Q^{i},i\in[1,\ell]$ constructed during the sequence are
in state at least $+1$. Let the invariant hold for~$Q^{i+1}$ and let $Q^{i}$
be constructed by \textsc{CatenateAndAttrite}($Q_{i}$,$Q^{i+1}$). If~$Q_i$
contains at most two records, which both reside in deque~$C(Q_i)$, we only
need to access record $\text{first} (C(Q^{i+1}))$ and the at most two records
of~$Q_i$. The invariant holds for~$Q^i$, since it holds inductively for
$Q^{i+1}$ and the new records were added at~$C(Q^{i+1})$. As a result, the
inequality of \iref{in:ineq} for~$Q^{i+1}$ can only be improved. If $Q^{i+1}$
consists of only one record, then either one of the following cases applies or
we follow the steps described in operation \textsc{CatenateAndAttrite}. In the
second case, there is no aggravation of the inequality in \iref{in:ineq} and
only critical records are used.
In the following, we can safely assume that~$Q_i$ has at least three records
and its state is at least~$+2$. We parse the cases of the
\textsc{CatenateAndAttrite} algorithm assuming that $e=\min(Q^{i+1})$.
\begin{itemize}
\item[Case 1] {The invariant holds trivially since~$Q_i$ is discarded and no
change happens to~$Q^{i} = Q^{i+1}$. \textsc{Bias} is not called.}
\item[Cases 2,3] {The algorithm checks whether the first two records
of~$C(Q_i)$ are attrited by~$e$. If this is the case, we continue as
denoted at the start of this proof. Otherwise, case~\ref{it:Q1lastC} of
\textsc{CatenateAndAttrite} is applied as is. $Q^{i+1}$ is in state $0$
after the concatenation and $Q^i$ is in state $+1$. Thus the invariant
holds, and \textsc{Bias} is not. Note that all changes take place at the
critical records of $Q_i$ and $Q^{i+1}$.}
\item[Case 4] {The algorithm works exactly as in case~\ref{it:D} of
\textsc{CatenateAndAttrite}, with the following exception. At the
end,~$Q^i$ will be in state~$0$, since we added the deque
$D_{k_{Q^{i+1}}+1}$ with a new record and the inequality of
\iref{in:ineq} is aggravated by $2$. To restore the invariant we apply
case 2(1) of~\textsc{Bias}. This step requires access to records~$\text{last}
(D_{k_{Q^i}-1})$ and $\text{first} (D_{k_{Q^i}})$. These records are both
critical, since the former corresponds to $\text{last}(D_{k_{Q^{i+1}}})$ and
the latter to $\text{first}(C(Q^{i+1}))$. In addition, \textsc{Bias}$(Q^{i+1})$
need not be called, since by the invariant, $Q^{i+1}$ was in state $+1$
before the removal of $\text{first}(C(Q^{i+1}))$. In this way, we improve the
inequality for~$Q^i$ by~$1$ and the invariant holds.}
\end{itemize}
\end{proof}
\end{fullenv}
\section{Dynamic Planar Range Skyline Reporting} \label{sec:skyline}
In this Section we present dynamic I/O-efficient data structures that support
3-sided planar orthogonal range skyline reporting queries.
\paragraph{3-Sided Skyline Reporting}
We describe how to utilize I/O-CPQAs in order to obtain dynamic data structures
that support 3-sided range skyline reporting queries and arbitrary insertions
and deletions of points, by modifying the approach of~\cite{OL81} for the
pointer machine model. In particular, let~$P$ be a set of~$n$ points in the
plane, sorted by $x$-coordinate. To access the points, we store
their~$x$-coordinates in an $(a,2a)$-tree~$T$ with branching parameter~$a\geq 2$
and leaf parameter~$k\geq1$. In particular, every node has degree within
$[a,2a]$ and every leaf contains at most~$k$ consecutive by $x$-coordinate input
points. Every internal node~$u$ of~$T$ is associated with an I/O-CPQA whose
non-attrited elements correspond to the maximal points among the points stored
in the subtree of~$u$. Moreover, $u$ contains a \textit{representative block}
with the critical records of condition 2 in Lemma~\ref{lem:seq_concats} for the
I/O-CPQAs associated with its children nodes.
To construct the structure, we proceed in a bottom up manner. First, we compute
the maximal points among the points contained in every leaf of~$T$. In
particular for every leaf, we initialize an I/O-CPQA~$Q$. We consider the
points~$(p_x,p_y)$ stored in the block in increasing $x$-coordinate, and
call~\textsc{InsertAndAttrite}($Q,-p_y$). In this way, a point~$p$ in the block
that is dominated by another point~$q$ in the block, is inserted before~$q$
in~$Q$ and has value~$-p_y > -q_y$. Therefore, the
dominated points in the block correspond to the attrited elements in~$Q$.
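The effect of this insertion order can be checked with a plain list standing
in for the I/O-CPQA: the sketch below (hypothetical, assuming distinct
coordinates) applies the attrition rule directly and returns exactly the
maximal points of a block, in increasing $x$-coordinate.
\begin{verbatim}
def leaf_maxima(points):
    # Insert -p_y in increasing x; attrition removes stored values
    # larger than the inserted one, i.e. points dominated by a later point.
    q = []                               # non-attrited (value, point) pairs
    for (px, py) in sorted(points):      # increasing x-coordinate
        e = -py
        while q and q[-1][0] > e:        # attrite values larger than e
            q.pop()
        q.append((e, (px, py)))
    return [pt for (_, pt) in q]         # the maximal (skyline) points
\end{verbatim}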
We construct the I/O-CPQA for an internal node~$u$ of~$T$ by concatenating the
already constructed I/O-CPQAs~$Q_{i}$ at the children nodes~$u_i$ of~$u$, for
$i \in [1,a]$, as described in Section~\ref{sec:iocpqa}. Then we
call~\textsc{Bias} on the resulting I/O-CPQA appropriately many times in order
to satisfy condition 1
in Lemma~\ref{lem:seq_concats}. The procedure ends when the I/O-CPQA is
constructed for the root of~$T$. Notice that the order of concatenations
follows implicitly the structure of the tree~$T$. To insert (resp. delete) a
point~$p =(p_x,p_y)$ to the structure, we first insert (resp.
delete)~$p_x$ to~$T$. This identifies the leaf with the I/O-CPQA that
contains~$p$. We discard all I/O-CPQAs from the leaf to the root of~$T$, and
recompute them in a bottom up manner, as described above.
To report the skyline among the points that lie within a given 3-sided query
rectangle~$[x_\ell, x_r] \times [y_b, +\infty)$, it is necessary to obtain
the maximal points in a subtree of a node~$u$ of $T$ by querying the I/O-CPQA
stored in~$u$. Notice, however, that computing the I/O-CPQA of an internal
node of~$T$ modifies the I/O-CPQAs of its children nodes. Therefore, we can
only report the skyline of all points stored in~$T$, by calling
\textsc{DeleteMin} at the I/O-CPQA stored in the root of~$T$. The rest of the
I/O-CPQAs in~$T$ are not queriable in this way, since the corresponding nodes
do not contain the version of their I/O-CPQA, before it is modified by the
construction of the I/O-CPQA for their parent nodes. For this reason we render
the involved I/O-CPQAs confluently persistent, by implementing their clean,
buffer and dirty deques as purely functional catenable deques~\cite{KT99}. In
fact,~$T$ encodes implicitly the directed acyclic version graph of the
confluently persistent I/O-CPQAs, by associating every node of~$T$ with the
version of the I/O-CPQA at the time of its construction. Every internal node
of $T$ stores a representative block with the critical records for the
versions of the I/O-CPQAs associated with its children nodes. Finally, the
update operation discards the I/O-CPQA of a node in~$T$, by performing in
reverse the operations on the purely functional catenable deques involved in
the construction of the I/O-CPQA (undo operation).
With the above modification it suffices for the query operation to identify the
two paths $p_\ell, p_r$ from the root to the leaves of $T$ that contain the
$x$-successor point of $x_\ell$ and the $x$-predecessor point of $x_r$,
respectively. Let $R$ be the children nodes of the nodes on the paths $p_\ell$
and $p_r$ that do not belong to the paths themselves, and also lie within the
query $x$-range. The subtrees of $R$ divide the query $x$-range into disjoint
$x$-ranges. We consider the nodes of $R$ from left to right. In particular,
for every non-leaf node in $p_\ell \cup p_r$, we load into memory the
representative blocks of the versions of the I/O-CPQAs in its children nodes
that belong to $R$. We call \textsc{CatenateAndAttrite} on the loaded I/O-CPQAs
and on the resulting I/O-CPQAs for every node in $p_\ell \cup p_r$, as described
in Section~\ref{sec:iocpqa}. The non-attrited elements in the resulting
auxiliary I/O-CPQA correspond to the skyline of the points in the query
$x$-range, that are not stored in the leaves of~$p_\ell$ and~$p_r$. To report
the output points of the query in increasing $x$-coordinate, we first report the
maximal points within the query range among the points stored in the leaf of
$p_\ell$. Then we call \textsc{DeleteMin} to the auxiliary I/O-CPQA that returns
the maximal points in increasing $x$-coordinate, and thus also in decreasing
$y$-coordinate, and thus we terminate the reporting as soon as a skyline point
with $y$-coordinate smaller than $y_b$ is returned. If the reporting has not
terminated, we also report the rest of the maximal points within the query range
that are contained in the leaf of $p_r$.
\begin{theorem} \label{thm:3sided}
There exist I/O-efficient dynamic data structures that store a set of $n$
planar points and support reporting the $t$ skyline points within a given
3-sided orthogonal range unbounded by the positive $y$-dimension in~$\mathcal{O}
(\log_{2B^{\epsilon}} n + t/B^{1-\epsilon})$ worst case I/Os, and updates
in~$\mathcal{O}(\log_{2B^{\epsilon}} n)$ worst case I/Os, using~$\mathcal{O}
(n/B^{1-\epsilon})$ disk blocks, for a parameter~$0 \leq \epsilon \leq 1$.
\end{theorem}
\begin{proof}
We set the buffer size parameter $b$ of the I/O-CPQAs equal to the leaf
parameter $k$ of $T$, and we set the parameters~$a = 2B^\epsilon$ and~$k =
B^{1-\epsilon}$ for~$0 \leq \epsilon \leq 1$. In this way, for a node of $T$,
the representative blocks for all of its children nodes can be loaded into
memory in $\mathcal{O}(1)$ I/Os. Since every operation supported by an I/O-CPQA
involves a~$\mathcal{O}(1)$ number of deque operations, I/O-CPQAs can be made
confluently persistent without deteriorating their I/O and space complexity.
Moreover, the undo operation takes $\mathcal{O}(1)$ worst case I/Os, since the
purely functional catenable deques are worst case efficient.
Therefore by Theorem~\ref{thm:iocpqa}, an update operation takes $\mathcal{O}
(\log_{2B^{\epsilon}} \frac{n}{B^{1-\epsilon}}) = \mathcal{O}(\log_{2B^{\epsilon}}
n) $ worst case I/Os. Lemma~\ref{lem:seq_concats} takes $\mathcal{O}(1)$ I/Os to
construct the temporary I/O-CPQAs for every node in the search paths, since
they satisfy both of its conditions. Moreover, by Theorem~\ref{thm:iocpqa}, it
takes $\mathcal{O}(\frac{\log_{2B^{\epsilon}} n}{B^{1-\epsilon}})$ I/Os to
catenate them together. Thus, the construction of the auxiliary query
I/O-CPQA takes $\mathcal{O}(\log_{2B^{\epsilon}} n)$ worst case I/Os in total.
Moreover, it takes $\mathcal{O}(1 + t/B^{1-\epsilon})$ worst case I/Os to report
the output points. There are $\mathcal{O}(\frac{n}{B^{1- \epsilon}})$ internal
nodes in $T$, and every internal node contains $\mathcal{O}(1)$ blocks.
\end{proof}
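For concreteness, substituting the endpoints of the range of~$\epsilon$ into
Theorem~\ref{thm:3sided}: $\epsilon = 0$ gives $a = 2$, $k = B$, and thus
$\mathcal{O}(\log_2 n + t/B)$ query I/Os using $\mathcal{O}(n/B)$ blocks, while
$\epsilon = 1$ gives $a = 2B$, $k = 1$, and thus $\mathcal{O}(\log_{2B} n + t)$
query I/Os using $\mathcal{O}(n)$ blocks. The intermediate choice
$\epsilon = 1/2$ yields $\mathcal{O}(\log_{2\sqrt{B}} n + t/\sqrt{B})$ query
I/Os using $\mathcal{O}(n/\sqrt{B})$ blocks.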
\begin{fullenv}
\paragraph{4-Sided Skyline Reporting}
Dynamic I/O-efficient data structures for 4-sided range skyline reporting
queries can be obtained by following the approach of Overmars and Wood for
dynamic rectangular visibility queries~\cite{OW88}. In particular, 4-sided range
skyline reporting queries are supported in $\mathcal{O}(\frac{a \log^2 n} {\log a \log
{2B^{\epsilon}} } + t/B^{1-\epsilon})$ worst case I/Os, using
$\mathcal{O}(\frac{n}{B^{1-\epsilon}} \log_a n)$ blocks, by employing our structure
for 3-sided range skyline reporting as a secondary structure on a dynamic range
tree with branching parameter~$a$, built over the $y$-dimension. Updates are
supported in $\mathcal{O}(\frac{\log^2 n} {\log a \log {2B^{\epsilon}} })$ worst case
I/Os, since the secondary structures can be split or merged in
$\mathcal{O}(\log_{2B^\epsilon} n)$ worst case I/Os.
\begin{remark}
In the pointer machine, the above constructions attain the same
complexities as the existing structures for dynamic 3-sided and
4-sided range maxima reporting~\cite{BT11}, by setting the buffer
size, branching and leaf parameter to $\mathcal{O}(1)$.
\end{remark}
\end{fullenv}
\section{Lower Bound for Dominating Minima Reporting} \label{sec:dommaxlb}
Let~$S$ be a set of~$n$ points in~$\mathbb{R}^2$. Let~$\mathcal{Q} =\{Q_i\}$ be
a set of~$m$ orthogonal 2-sided query ranges~$Q_i \in \mathbb{R}^2$. Range $Q_i$
is the subspace of $\mathbb{R}^2$ that dominates a given point~$q_i\in
\mathbb{R}^2$ in the positive $x$- and $y$- direction (the ``upper-right''
quadrant defined by~$q_i$). Let $S_i=S \cap Q_i$ be the set of all points in~$S$
that lie in the range~$Q_i$. A \textit{dominating minima reporting query} $Q_i$
contains the points~$\min(S_i) \in S_i$ that do not dominate any other point
in~$S_i$. In this section we prove that any pointer-based data structure that
supports dominating minima queries in~$\mathcal{O}(\log^{\mathcal{O}(1)}{n}+t)$ time, must
use superlinear space. This separates the problem from the easier problem of
supporting dominating maxima queries and the more general 3-sided range skyline
reporting queries. The same trade-off also holds for the symmetric
\textit{dominated maxima reporting queries} that are the simplest special case
of 4-sided range skyline reporting queries that demands superlinear space.
Moreover, the lower bound holds trivially for the I/O model, if no address
arithmetic is being used. In particular, for a query time of~$\mathcal{O}
(\frac{\log^{\mathcal{O}(1)}{n}}{B}+\frac{t}{B})$ the data structure must definitely
use~$\Omega (\frac{n}{B}\frac{\log{n}}{\log{\log{n}}})$ blocks of space. In the
following, we prove the lower bound for the dominating minima reporting queries.
Henceforth, we use the terminology presented in Section~\ref{sect:prel}. Without
loss of generality, we assume that $n = \omega^\lambda$, since this
restriction generates a countably infinite number of inputs and thus the lower
bound is general. In our case, $\omega =\log^\gamma{n}$ holds for some
$\gamma > 0$, $m=2$ and $\lambda=\left\lfloor
\frac{\log{n}}{1+\gamma\log{\log{n}}}\right \rfloor$. Let~$\rho_{\omega}(i)$ be
the integer obtained by writing~$0\leq i <n$ using~$\lambda$
digits in base~$\omega$, by first reversing the digits and then taking their
complement with respect to~$\omega$. In particular, if~$i=i^{(\omega)}_0
i^{(\omega)}_1 \ldots i^{(\omega)}_{\lambda-1}$ holds, then
\[
\rho_{\omega}(i)
=
(\omega-i^{(\omega)}_{\lambda-1}-1)(\omega-i^{(\omega)}_{\lambda-2}-1)\ldots
(\omega-i^{(\omega)}_1-1)(\omega-i^{(\omega)}_0-1)
\]
where~$i^{(\omega)}_j$ is the~$j$-th digit of number~$i$ in base~$\omega$. We
define the points of~$S$ to be the set $\{(i,\rho_{\omega}(i))| 0 \leq i <n\}$.
Figure~\ref{fig:lower} shows an example with $\omega=4$, $\lambda=2$.
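For intuition, $\rho_{\omega}$ is easy to compute; the following sketch
(ours, purely illustrative) reverses the base-$\omega$ digits and then
complements each one.
\begin{verbatim}
def rho(i, omega, lam):
    digits = []
    for _ in range(lam):
        digits.append(i % omega)   # least-significant first = reversed order
        i //= omega
    val = 0
    for d in digits:               # reversed digits, most significant first
        val = val * omega + (omega - d - 1)
    return val

# omega = 4, lam = 2:  rho(0..5) = 15, 11, 7, 3, 14, 10
\end{verbatim}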
To define the query set~$\mathcal{Q}$, we encode the set of points
$\{\rho_\omega(i)|0 \leq i < n\}$ in a full trie structure of depth~$\lambda$.
Recall that $n = \omega^{\lambda}$. Notice that the trie structure is implicit
and it is used only for presentation purposes. Input points correspond to the
leaves of the trie and their $y$ value is their label at the edges of the trie.
Let~$v$ be an internal node at depth~$d$ (namely,~$v$ has~$d$ ancestors), whose
prefix~$v_0, v_1, \ldots, v_{d-1}$ corresponds to the path from~$v$ to the
root~$r$ of the trie. We take all points in its subtree and sort them by~$y$.
From this sorted list we construct groups of size~$\omega$ by always picking
each~$\omega^{\lambda-d-1}$-th element starting from the smallest non-picked
element. Each such group corresponds to the output of each query.\fullcmt{ See
Figure~\ref{fig:lower} for an example.} In this case, we say that the query is
\textit{associated} to node~$v$.
\fullcmt{
\fig{}{1}{scale=0.7}{lower}{fig:lower}{An example for $\omega = 4$ and
$\lambda = 2$. Two examples of queries are shown, out of the~$8$ possible
queries with different output. Connecting lines represent points whose~$L_1$
distance is~$\omega^k, 1 \leq k \leq \lambda$. All $8$ possible queries can be
generated by translating the blue lines horizontally so that the answers of
all $4$ queries are disjoint. Similarly for the red lines with the exception
that we translate them vertically.}}
A node at depth~$d$ has~$\frac{n}{\omega^d}$ points in its subtree and thus
it defines at most~$\frac{n}{\omega^{d-1}}$ queries. Thus, the total number of
queries is:
\[
\left|\mathcal{Q}\right| = \sum_{d=0}^{\lambda-1}{\omega^d
\frac{n}{\omega^{d+1}}} = \sum_{d=0}^{\lambda-1}{\frac{n}{\omega}} =
\frac{\lambda n}{\omega}
\]
This means that the total number of queries is
\[
|\mathcal{Q}|=\frac{\lambda n}{\omega}
=
\frac{\log{n}}{1+\gamma\log{\log{n}}}\frac{1}{\log^\gamma{n}}n
=
\frac{n}{\log^{\gamma-1}{n}(1+\gamma\log{\log{n}})}
\]
The following lemma states that~$\mathcal{Q}$ is appropriate for our purposes.
\begin{lemma} \label{lem:query}
$\mathcal{Q}$ is~$(2,\log^\gamma{n})$-favorable.
\end{lemma}
\begin{proof}
First we prove that we can construct the queries so that they have output size
$\omega = \log^\gamma{n}$. Assume that we take a group of~$\omega$ consecutive
points in the sorted order of points with respect to the $y$-coordinate at the
subtree of node~$v$ at depth~$d$. These have common prefix of length~$d$. Let
the $y$-coordinates of these points be
$\rho_{\omega}(i_1),\rho_{\omega}(i_2),\ldots,\rho_{\omega}(i_{\omega})$ in
increasing order, where $\rho_{\omega}(i_j) -\rho_{\omega}(i_{j-1})
=\omega^{\lambda-d-1}, 1 < j \leq \omega$. This means that these numbers
differ only at the $\lambda - d -1$-th digit. This is because they have a
common prefix of length~$d$ since all points lie in the subtree of $v$. At the
same time they have a common suffix of length~$\lambda -d -1$ because of the
property that $\rho_{\omega}(i_j) -\rho_{\omega}(i_{j-1})
=\omega^{\lambda-d-1}, 1 < j \leq \omega$, which follows from the way
we chose these points. By inverting the procedure used to construct these
$y$-coordinates, the corresponding $x$-coordinates~$i_j, 1 \leq j \leq \omega$
are determined. By complementing we take the increasing sequence
$\bar{\rho}_{\omega}(i_{\omega}),\ldots,\bar{\rho}_{\omega}(i_2),\bar{\rho}_{\omega}(i_1)$,
where $\bar{\rho}_{\omega}(i_j)=\omega^\lambda-\rho_{\omega}(i_j)-1 $ and
$\bar{\rho}_{\omega}(i_{j-1}) -\bar{\rho}_{\omega}(i_{j})
=\omega^{\lambda-d-1}, 1 < j \leq \omega$. By reversing the digits we finally
get the increasing sequence of $x$-coordinates $i_{\omega},\ldots,i_2,i_1$,
since the numbers differ at only one digit. Thus, the group of~$\omega$ points
are decreasing as the $x$-coordinates increase, and as a result a query~$q$
whose horizontal line is just below~$\rho_{\omega}(i_1)$ and the vertical line
just to the left of~$\rho_{\omega}(i_{\omega})$ will certainly contain this
set of points in the query. In addition, there cannot be any other points
between this sequence and the horizontal or vertical lines defining query $q$.
This is because all points in the subtree of~$v$ have been sorted with respect
to~$y$, while the horizontal line is positioned just
below~$\rho_{\omega}(i_1)$, so that no other element lies in between. In the
same manner, no points to the left of~$\rho_{\omega}(i_{\omega})$ exist, when
positioning the vertical line of~$q$ appropriately. Thus, for each query~$q
\in \mathcal{Q}$, it holds that~$|S\cap q|=\omega=\log^\gamma{n}$.
It is enough to prove that for any two query ranges $p,q \in \mathcal{Q}$, $|S
\cap q \cap p| \leq 1$ holds. Assume that~$p$ and~$q$ are associated to
nodes~$v$ and~$u$, respectively, and that their subtrees are disjoint. That
is,~$u$ is not a proper ancestor or descendant of~$v$. In this case, $p$
and~$q$ share no common point, since each point is used only once in the trie.
For the other case, assume without loss of generality that~$u$ is a proper
ancestor of~$v$ ($u \neq v$). By the discussion in the previous paragraph,
each query contains~$\omega$ numbers that differ at one and only one digit.
Since~$u$ is a proper ancestor of~$v$, the corresponding digits will be
different for the queries defined in~$u$ and for the queries defined in~$v$.
This implies that there can be at most one common point between these
sequences, since the digit that changes for one query range is always set to a
particular value for the other query range. The lemma follows.
\end{proof}
Lemma~\ref{lem:query} allows us to apply Lemma~\ref{lem:lower}, and thus the
query time of $\mathcal{O}(\log^\gamma{n} + t)$, for output size~$t$, can only be
achieved at a space cost of $\Omega
\left(n\frac{\log{n}}{\log{\log{n}}}\right)$. The following theorem summarizes
the result of this section.
\begin{theorem} \label{thm:lower}
The dominating minima reporting problem can be solved with
$\Omega\left(n\frac{\log{n}}{\log{\log{n}}}\right)$ space, if the query is
supported in~$\mathcal{O}(\log^\gamma{n} + t)$ time, where~$t$ is the size of the
answer to the query and parameter $\gamma = \mathcal{O}(1)$.
\end{theorem}
\section{Conclusion} \label{sect:concl}
We presented the first dynamic I/O-efficient data structures for 3-sided planar
orthogonal range skyline reporting queries with worst case polylogarithmic
update and query complexity. We also showed that the space usage of the existing
structures for 4-sided range skyline reporting in pointer machine is optimal
within doubly logarithmic factors.
It remains open to devise a dynamic I/O-efficient data structure that supports
reporting all $m$ planar skyline points in $\mathcal{O}(m/B)$ worst case I/Os and
updates in $\mathcal{O}(\log_B n)$ worst case I/Os. It seems that the hardness for
reporting the skyline in optimal time is derived from the fact that the problem
is dynamic. The dynamic indexability model of Yi~\cite{Y09} may be useful to
prove a lower bound towards the direction of rendering our structure for 3-sided
range skyline reporting~\textit{I/O-optimal}, as defined by Papadias et
al.~\cite{PTFS05}. Finally, it remains open to obtain an $\mathcal{O}(\frac{n}{B}\log_B
n)$ space dynamic I/O-efficient data structures for 4-sided range skyline
reporting with $\mathcal{O}(\log^2_B n)$ worst case query and update I/Os, regardless
of the I/O-complexity per reported point.
\clearpage
\bibliographystyle{plain}
In conventional pictures for the growth of structure,
galaxies and clusters are thought to originate from the growth
of small density fluctuations due to gravitational
instability, in a universe dominated by dark matter.
In hierarchical clustering models, like the cold dark matter (CDM) models,
small mass clumps of dark matter form first and gather into larger
and larger masses subsequently. The structure of these dark matter clumps,
which we will refer to as "halos", is likely to be
related to how the halos formed, the initial spectrum
of the density fluctuations and to the underlying cosmology.
Much of the early work on the structure of dark halos
concentrated on their density profiles in the outer regions,
especially in the context of understanding the flat rotation
curves of disk galaxies. The secondary infall paradigm
introduced by Gunn and Gott (1972) and subsequently amplified
by Fillmore and Goldreich (FG, 1984) and Bertschinger (B85, 1985)
suggests that gravitational collapse around a seed
perturbation will generically lead to divergent extended halos which
produce a nearly flat rotation curve in the outer regions for
the case $\Omega_{matter} \equiv \Omega_m =1$.
The Gunn-Gott picture would lead
to steep convergent profiles in the outer regions for low
density ($\Omega_{m} < 1$) universes. If current estimates
for the global density parameter are correct ($\Omega_m = 0.3 \pm 0.1$),
then the high density core ($\rho_{core}/<\rho> > 10^3$ ) were formed
early enough so that the $\Omega_m = 1$ picture effectively
applies. But the outer halos represent the current, low density
universe.
The nature of the density profiles of dark halos in the inner regions
is also of importance from several points of view.
The structure of dark halo cores determines the efficiency
of gravitational lensing by the galactic and cluster halos,
the X- ray emissivity of clusters and
galactic rotation curves in the inner regions.
These properties of galactic and cluster halos can be well
constrained by observations. So, if the core density profile of dark
halos are fossils which do depend on some of the properties of structure
formation models, like their initial power spectrum,
one would have a useful observational handle on these properties.
It is therefore necessary to understand what determines
the nature of the density profiles of dark matter halos,
and their cores, {\it ab initio}. We discuss this issue here.
Further, Navarro, Frenk and White (NFW) (1995, 96, 97)
have proposed from their N-body simulations,
that dark matter halos in hierarchical
clustering scenarios develop a universal density profile,
regardless of the scenario for structure formation or cosmology.
The NFW profile has an inner cuspy form with the density
$\rho \propto r^{-1}$ and an outer envelope of
the form $\rho \propto r^{-3}$. There does not
appear to be any reason, apriori, why halo density profiles should
prefer such a form,
but empirically, several investigators have found that the NFW
profile provides a moderately good fit to numerical simulations
(Cole and Lacey 1996,
Tormen, Bouchet $\&$ White 1997, Huss, Jain $\&$ Steinmetz 1997, 1999,
Thomas {\it et al.}, 1998). Recently, though, high resolution
simulations of cluster formation in a CDM model,
by Moore {\it et al.} (1998), yielded a core
density profile $\rho(r) \propto r^{-1.4}$,
shallower than $r^{-2}$, but steeper than the $r^{-1}$ form
preferred by NFW, consistent with the earlier high resolution
work of Xu (1995). A similar result was also found earlier
by Fukushige and Makino (1997). (For small mass halo
cores, on the other hand, Kravtsov {\it et al.} (1998) find
an average core density profile shallower than the NFW form).
Xu (1995) also found that there was a large scatter in the
logarithmic slope of halo density profiles in both the
core and outer regions. One motivation of our work
was to examine this issue on general theoretical grounds,
while at the same time checking, in some of our own
numerical experiments, the properties of dark halo
density and velocity dispersion profiles.
In the next section we discuss the processes which may determine the
halo density profile and consider the role
of undigested cores in setting the structure of dark halos cores.
For a flat universe with $P(k) \propto k^n$,
scaling arguments suggest that
$\rho \propto r^{-\alpha}$ with $\alpha = \alpha_n = (9+3n)/(5+n)$.
As an aside we note here that for popular cosmological
models $n \approx -2$, in the appropriate range of
wavelengths, giving $\alpha = 1$, the NFW value.
But whether such a scaling law indeed obtains depends on the
detailed dynamics.
In order to explore the dynamical issues, we consider first
self similar collapse of a single spherically symmetric density
perturbation, in a scale free universe. We introduce a fluid
approach for analyzing this problem, in Section 3.
We highlight the importance of tangential velocity dispersions to
obtain density laws shallower than $1/r^2$ in the core regions.
In a companion paper (Subramanian, 1999, S99 henceforth), one of us (KS)
considers these self-similar collapse solutions in greater detail,
by deriving and solving numerically the scaled moment equations
for such a collapse, including the effect of
tangential velocity dispersions.
In Section 4 we analyze, following the Gunn-Gott paradigm,
the outer profiles expected in low density universes, where
an outer profile steeper than $r^{-3}$ must obtain.
In Section 5, we analyze dark halo density and velocity dispersion
profiles obtained in cosmological N-body
simulations of models with $n= 0, -1$ and $-2$. We show that
the core-density profiles of dark halos, show some scatter
in their properties, but nevertheless do appear to
reflect a memory of the initial power spectrum.
The final Section discusses the results and presents our conclusions.
\section{The density profiles of dark halos}
To fix ideas, let us limit ourself initially, to the Einstein de-Sitter
universe, with $\Omega =1$.
This is almost certainly not the correct cosmological model,
but it provides a convenient context within which to discuss
the formation of structure and it is likely
to be a very good approximation at epochs $(1+z)>\Omega_m^{-1}$
at which the cores of familiar objects have formed.
Let us also assume that the
Fourier space power spectrum of density fluctuations is a power law,
$P(k) = A k^n$, where the spectral index n lies between the limits
$-3 < n < 1$. In this case structure grows hierarchically
with small scales going non-linear first and larger and larger
mass scales going non-linear at progressively later times.
For such a scale free universe,
all properties of dark matter distributions at
each epoch are just a
scaled version of those in previous epochs i.e. the universe is
self-similar. What would decide the density profile
of a dark matter halo in such a cosmological setting?
It is likely that at least three processes are important.
Firstly, when some mass scale decouples from the general
Hubble expansion and collapses in an inhomogeneous
fashion to form a dark matter halo,
the changing gravitational potential and
phase mixing will cause some amount of violent relaxation
or " virialisation" to occur.
A general constraint on the
equilibrium distribution of such a halo will be set by
energy and mass conservation together with scaling laws which obtain in
a hierarchical clustering scenario.
Secondly, in the cosmological context, a collapsed mass is not
isolated and will therefore continue to accrete surrounding
material, as long as matter dominates the energy density.
Such a secondary infall onto the collapsed
halo will alter/determine its structure in the outer
regions.
We emphasize here a third process:
When any mass scale collapses, in a hierarchical
theory, it will already contain a dominant smaller mass dark halo
which collapsed earlier, and is therefore denser
on the average. Typically the
core of such a smaller mass halo, is dense enough to resist
disruption and survive undigested, when it is
incorporated into the bigger object.
A nested sequence of
undigested cores in the center of the halo,
which have survived the hierarchical
inhomogeneous collapse to form larger and larger objects,
could thus determine the halo structure in the inner regions.
We illustrate this idea schematically in Figure 1.
Suppose a halo of mass M collapses to form a "virialised"
object with a characteristic density $\rho_0$ and core radius $r_c$.
For $P(k) \propto k^n$, simple standard scaling arguments
using linear theory (cf. Peebles 1980, Padmanabhan and Subramanian 1992,
Padmanabhan 1993), predict that $\rho_0$ and
$r_c$, scale with mass $M$ as
\begin{equation}
\rho_0(M) \propto M^{-(n+3)/2} ;\qquad
r_c(M) \propto M^{(n+5)/6} ;\qquad
\rho_0 \propto r_c^{-(9+3n)/(5+n)}
\label{scale}
\end{equation}
So, in the above sequential collapse to form larger and larger
objects, the undigested core of each member of the sequence,
typically contributes a density $\rho_0$ at a scale $r_c$,
satisfying the the relation $\rho_0 = c_1 r_c^{-(9+3n)/(5+n)}$,
with some constant $c_1$.
This suggests that the inner density profile of the
bigger halo, which is the envelope of the profiles of the
nested sequence of smaller mass cores could have the form
\begin{equation}
\rho(r) \propto r^{-\alpha}, \qquad
\alpha = \alpha_n = {9 + 3n \over 5 + n}
\label{dencor}
\end{equation}
Note that for any $n < 1$, $\alpha < 2$. So one
expects the core density profile to have a power law
dependence, shallower than $r^{-2}$,
when smaller mass cores remain undigested in the formation
of a larger mass.
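As a quick check of the numbers involved, Eq. (\ref{dencor}) gives, for the
spectral indices $n = 0, -1, -2$ used in the simulations of Section 5,
\[
\alpha_{0} = {9 \over 5} = 1.8 , \qquad
\alpha_{-1} = {6 \over 4} = 1.5 , \qquad
\alpha_{-2} = {3 \over 3} = 1 ,
\]
with $\alpha_n \to 2$ as $n \to 1$.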
It is intriguing that the same form for the {\it density profile}
(as against the correlation function) is also
argued for by Peebles (1980; section 26.).
In a paper which appeared during the course
of this work Syer $\&$ White (1998) give a similar
argument, for the case when bigger halos form by purely mergers of
smaller halos. Our argument (concluding with equation (\ref{dencor}))
of course neglects both previous generation of
undigested cores and secondary infall
and is only designed to model the innermost part of a currently
virializing object, where their effects on the energetics should be minimal.
One can also state the above argument
in terms of the velocity dispersion or the rotation
velocity profiles. The typical
velocity dispersion of a collapsed halo $\sigma \propto (M/r_c)^{1/2}$.
Since the scaling argument gives $r_c \propto M^{(n+5)/6}$, we therefore
have $\sigma^2 \propto r_c^{(1-n)/(5+n)} $. So for any $n<1$,
smaller mass objects have a smaller velocity dispersion
and higher phase space densities than larger mass objects.
The survival of a nested sequence of
cores during the inhomogeneous collapse to form bigger and
bigger objects, then suggests that the
velocity dispersion profile in the core regions will
scale as
$\sigma^2(r) \propto r^{(1-n)/(5+n)}$. For any $n < 1$,
an alternate signature of undigested cores is then
a velocity dispersion which decreases with decreasing radius
in the above fashion. It is interesting to note in this context
that, the cluster scale halo core in the Moore {\it et al.} (1998)
simulation, does indeed show such a velocity dispersion profile,
with $\sigma$ decreasing with decreasing $r$
(Moore, private communication).
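As a worked example, the exponent $(1-n)/(5+n)$ equals $1/5$, $1/2$ and $1$
for $n = 0$, $-1$ and $-2$ respectively; so for $n = -2$ one expects
$\sigma \propto r^{1/2}$ in the core, a fairly strong decline, while for
$n = 0$ the predicted decline, $\sigma \propto r^{1/10}$, is much milder.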
We have assumed above that each stage of the hierarchy
arises from a typical density fluctuation.
In general there would be a scatter in the sub-halo
properties since the initial density peaks heights are
random with a Gaussian probability
distribution. This will lead to a scatter in the slope $\alpha$,
for any individual halo (cf. Nusser and Sheth 1999 and section 5 below).
The arguments so far have been semi-quantitative but general. We
consider, in the next section,
another approach to the density profiles of halo cores via
spherically symmetric, self similar collapse solutions to the
Vlasov equation. Our motivation is to see
whether the results derived
above, can be recovered in any simple, analytically tractable model.
This model will also allow
us to examine, in a simple setting, the dynamical
constraints on obtaining core density profiles of the form
given by Eq. (\ref{dencor}).
\section{ Self similar collapse and halo density profiles: a fluid
approach}
Consider the collapse of a single spherically
symmetric density perturbation, in a flat background universe.
Suppose the initial density fluctuation is a power law in radius.
Then there is no special scale in the
problem either from initial conditions or cosmology.
We expect to be able to describe the further evolution of
such a density perturbation, through a self similar solution.
FG and B85 looked at purely radial self similar collapse by
solving for the self similar particle trajectory.
We adopt a different approach here, examining
directly the evolution of the distribution function of the
dark matter. During the course
of this work we learned that a number
of authors (Padmanabhan 1994, unpublished notes; Padmanabhan 1996a,
Chieze, Teyssier $\&$ Alimi 1997, Henriksen $\&$ Widrow 1997)
have also adopted this approach to the self
similar collapse problem considered by FG and B85. We will
emphasize here also the role of non-radial motions
in self similar collapse solutions.
\subsection{ The self similar solution}
The evolution of dark matter phase space
density $f({\bf r}, {\bf v}, t)$ is governed by the Vlasov Equation,
\begin{equation}
{\partial f \over \partial t} + {\bf v}. {\partial f \over \partial{\bf r}}
+ {\bf a}. {\partial f \over \partial{\bf v}} = 0 ,
\label{Vlasov}
\end{equation}
where ${\bf r}$ and ${\bf v} = \dot {\bf r}$ are the proper co-ordinate
and velocity of the particles respectively. Also the acceleration
${\bf a} = \dot {\bf v} = - {\bf \nabla }\Phi$, with
\begin{equation}
{\bf \nabla}^2\Phi = 4 \pi G \rho = 4 \pi G \int f d^3 {\bf v} .
\label{pois}
\end{equation}
By direct substitution, it is easy to verify that these equations admit
self similar solutions of the form
\begin{equation}
f({\bf r}, {\bf v}, t) = k_2 k_1^{-3} t^{-q -2p} F( {{\bf r}\over k_1 t^p},
{{\bf v}\over k_1 t^q}) ; \qquad p = q + 1 ,
\label{scalf}
\end{equation}
where $k_1,k_2$ are constants which we will fix to
convenient values below.
We have used proper co-ordinates here
since the final equilibrium halo is most simply described in these
co-ordinates. (The same solution in co-moving
co-ordinates for the density is given by Padmanabhan (1996a)).
Defining a new set of co-ordinates
${\bf y} = {\bf r}/(k_1t^p)$, ${\bf w} = {\bf v}/(k_1t^q)$ and a scaled
potential $\chi =k_1^{-2} t^{-2q}\Phi$,
the scaled phase space density $F$ satisfies
\begin{equation}
-(q + 2p) F - p {\bf y}. {\partial F \over \partial{\bf y}}
-q {\bf w}. {\partial F \over \partial{\bf w}}
+ {\bf w}. {\partial F \over \partial{\bf y}}
-{\bf \nabla}_{\bf y}\chi . {\partial F \over \partial{\bf w}} = 0 ;
\label{valsc}
\end{equation}
\begin{equation}
{\bf \nabla}_{\bf y}^2\chi = 4 \pi G k_2 \int F d^3 {\bf w} .
\label{potsc}
\end{equation}
Consider the evolution of a spherically symmetric density perturbation,
in a flat universe whose scale factor $a(t) \propto t^{2/3}$.
For self similar evolution, the density is given by
\begin{equation}
\rho(r,t) = \int f d^3{\bf v} =
k_2 t^{2q -2p} \int F(y, {\bf w}) d^3{\bf w}
= k_2 t^{-2}\int F(y, {\bf w}) d^3{\bf w} \equiv k_2 t^{-2} \psi(y)
\label{densc}
\end{equation}
where we have defined $r = \vert {\bf r} \vert$, $y = \vert {\bf y} \vert$
and used the relation $p = q+1$. For the flat universe,
the background matter density evolves as
$\rho_b(t) = 1/(6 \pi G t^2)$. So the density contrast
$\rho(r,t)/\rho_b(t) = \psi(y)$, where we take $k_2 = 1/(6\pi G)$.
\subsection{ Linear and non-linear limits}
Let the initial excess density contrast averaged over a
sphere of co-moving radius $x= r/a(t) \propto rt^{-2/3}$ be a power law
$\bar\delta(x,t_i) \propto x^{-3\epsilon}$.
Since $\rho/\rho_b$ is a function of $y$ alone, the $\bar\delta(x,t)$
will also be a function only of $y$.
Note that, in the linear regime, it is the excess density contrast
averaged over a {\it co-moving} sphere,
which grows as the scale factor $a(t)$. So one can write
for the linear evolution of the spherical perturbation
\begin{equation}
\bar\delta(r,t)= \bar\delta_0 x^{-3\epsilon}t^{2/3}= \bar\delta_0 r^{-3\epsilon}t^{2/3 + 2\epsilon} =
\bar\delta_0 y^{-3\epsilon}t^{- 3\epsilon p + 2/3 + 2\epsilon} ,
\label{lincon}
\end{equation}
where we have substituted $r = y t^p$.
This can be a function of $y$ alone,
for a range of $t$ in the linear regime iff
$- 3\epsilon p + 2/3 + 2\epsilon = 0$, which gives
\begin{equation}
p = {2 + 6\epsilon \over 9\epsilon} .
\label{adet}
\end{equation}
We see that once the initial density profile is specified, the
exponents $p,q$ of the self similar solution are completely determined.
(For an initial $\bar\delta(x,t_i) \propto x^{-3\epsilon}$, the
radius of the shell turning around at time $t$ is, $r_t(t) \propto t^p$.
So a natural way of fixing the constant $k_1$ is by taking
$k_1t^p = r_t(t)$, and $y = r/r_t(t)$. We will do this in
what follows.)
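As a check, for $\epsilon = 1$ Eq. (\ref{adet}) gives $p = 8/9$, reproducing
the familiar $r_t(t) \propto t^{8/9}$ growth of the turnaround radius for
secondary infall onto a localised perturbation (B85).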
Consider now what happens in the non-linear limit.
The zeroth moment of the Vlasov equation gives
\begin{equation}
{\partial \rho \over \partial t} + {\bf \nabla}_{\bf r}.(\rho \bar{\bf v}) = 0
\label{contm}
\end{equation}
Here $\bar{\bf v}$ is the mean velocity
(first moment of $f$ over the velocity).
In the non-linear regime, one expects,
each shell of dark matter, which was initially expanding,
to turn around, after reaching a maximum radius and
collapse. Subsequently the shell would oscillate between a minimum
radius, which depends on how much non-radial velocities the shell particles
have and a maximum radius, which depends on how the mass within the shell
grows with time. In regions which have had a large amount of
shell crossings, the halo particles
have settled to nearly zero average infall velocity, that is $ \bar{v_r} = 0$.
(They could of course still have velocity dispersions). Using
$\bar{v_r} \equiv 0$ in (\ref{contm}), we have $(\partial \rho /
\partial t) = 0$ in the non-linear regime. In this regime therefore,
\begin{equation}
\rho(r,t) = Q(r) = Q(yt^{p}) = {1 \over 6 \pi G t^{2}} \psi(y)
\label{nonc}
\end{equation}
This functional equation has only a power law solution,
because of the power law dependences on $t$.
Substituting $Q(r) = q_0 r^{-\alpha}$ into Eq. (\ref{nonc}),
and using $r \propto yt^p$,
we obtain $y^{-\alpha} t^{-p \alpha} \propto t^{-2} D(y)$. This can
only be satisfied for range of $t$ in the non-linear regime
provided $p\alpha = 2$. So, for an initial density profile
with a power law slope $3\epsilon$, the power law slope of the
density in the non-linear regime is given by,
\begin{equation}
\alpha = {2 \over p} = {9\epsilon \over 3\epsilon + 1} .
\label{nonpow}
\end{equation}
B85 considered the self-similar secondary infall onto an initially
localised, overdense perturbation,
corresponding to taking $\epsilon = 1$. Using Eq. (\ref{nonpow})
this gives $\alpha = 9/4$, the slope for the density
at the non-linear end deduced by
following the self similar particle trajectory.
FG considered a range of $\epsilon$ and our value
of $\alpha$ agrees with that obtained by them,
again by following
particle trajectories. Both these authors also restricted
themselves to purely radial orbits. In this case FG
argued that while the above form for $\alpha$ should obtain
for $2/3 \leq \epsilon < 1$, for $\epsilon < 2/3$, one goes to
the limiting value $\alpha = 2$. However, this is only true for
purely radial trajectories (cf. White and Zaritsky 1992;
Sikvie, Tkachev $\&$ Wang 1997). We will also see below,
by considering the higher moments of the Vlasov equation, that
$\alpha < 2$ can only obtain if the system has
non-radial velocity dispersions.
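Indeed, Eq. (\ref{nonpow}) gives $\alpha = 2$ exactly at $\epsilon = 2/3$,
since $9 \times (2/3)/(3 \times (2/3) + 1) = 2$; for $\epsilon < 2/3$ it
predicts $\alpha < 2$, which, as we show below, can only be realized in the
presence of non-radial velocity dispersions.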
What should we choose for the value of $\epsilon$? For a power
law $P(k) \propto k^n$,
the fractional density contrast averaged over a co-moving sphere of
radius $x$, is distributed as a Gaussian, with an rms value
$ \propto x^{-(3+n)/2}$ (cf. Peebles 1980).
This suggests a "typical" spherically averaged initial
density law for a halo collapsing around a randomly placed point
of the form $\bar\delta(x,t_i) \propto x^{-(3+n)/2}$, or
$3\epsilon = (3 + n)/2$. Suppose we use this value of $\epsilon$ for
the initial density profile of a halo. Then in the non-linear stage,
the halo density in regions which have settled down to a zero mean
radial velocity, will be $\rho(r,t) \propto r^{-\alpha}$, where,
using $ 3\epsilon = (3 + n)/2$ in Eq. ( \ref{nonpow} )
\begin{equation}
\alpha = \alpha_n = { 9 + 3n \over 5 + n}
\label{aln}
\end{equation}
This result should obtain for collapses from power law
initial power spectrum.
Remarkably, this is the same law we derived
earlier for the core of a collapsed halo, assuming that the cores of
sequence of sub-halos are left undigested, during the formation
of the bigger halo.
An alternate choice, $3\epsilon = (3+n)$ would be relevant
if one were considering the collapse around an isolated
high density peak; since in this case the initial density
profile would be proportional to the correlation function
to lowest order (cf. Bardeen {\it et al.} 1986).
In this case one gets $\alpha = (9+3n)/(4+n)$
(Hofmann $\&$ Shaham 1985, Padmanabhan 1996b).
(Since $\epsilon < 1$ for overdense perturbations, we can use
this choice only for $n < 0$).
Note that for $n < 1$ the density law given by (\ref{aln})
is shallower than $1/r^2$,
which was claimed to be a limiting form by FG in case of
radial collapse. To see how such a restriction
comes about and when one can obtain
a shallower slope than $r^{-2}$ for the halo cores,
it is interesting to consider
the higher moments of the Vlasov equation (the Jeans equations) for
the spherical self similar solution.
\subsection{A fluid approach to collisionless dynamics }
Suppose we multiply the Vlasov equation by the components of
${\bf v}$ and integrate over all ${\bf v}$.
In regions where large amounts
of shell crossing have occurred, one can assume that
a quasi "equilibrium" state obtains,
whereby all odd moments of the distribution function, over
$({\bf v} - \bar{\bf v})$, may be neglected.
Assume there is no mean rotation to the halo, that is
$\bar v_{\theta} = 0$ and $\bar v_{\phi} = 0$. Then we get
\begin{equation}
{\partial(\rho \bar v_r) \over \partial t}
+{\partial(\rho \bar{v_r^2}) \over \partial r}
+{\rho \over r} (2\bar{v_r^2} - \bar{v_{\theta}^2} - \bar{v_{\phi}^2})
+ {GM(r)\rho \over r^2} = 0 ;
\label{radm}
\end{equation}
\begin{equation}
\bar{v_{\theta}^2} = \bar{v_{\phi}^2} .
\label{thetm}
\end{equation}
Here $M(r)$ is the mass contained in a sphere of radius $r$.
For a purely radial collapse we can set
$\bar{v_{\theta}^2} = \bar{v_{\phi}^2} =0$.
Let us also assume to begin with that one
can set $\bar v_r = 0$, in the inner parts.
Then integrating the Jeans equation
( \ref{radm} ), with $\rho = q_0 r^{-\alpha}$ gives
\begin{equation}
\bar{v_r^2} = r^{2- \alpha} \left [{4\pi G q_0 \over
2(\alpha -2 )(3-\alpha)} \right ]
\equiv {1 \over (\alpha -2 )} {GM(r)\over 2r} .
\label{consisr}
\end{equation}
So purely radial self-similar collapse with no tangential
velocities, and with $\alpha > 2$, leads to a
radial velocity dispersion in the core which scales as
$\bar{v_r^2}\propto r^{-(\alpha - 2)}$.
This agrees with the
radial velocity dispersion scaling as $r^{-1/8}$ for B85
gaseous collapse solution. ($\alpha = 2$ needs to be treated separately).
Further the RHS of Eq. (\ref{consisr}) should be necessarily non-negative,
which is violated when $\alpha < 2$. If one has a purely
spherically symmetric collapse and zero tangential
velocities, then the density law cannot become shallower
than $\alpha=2$ and maintain a static core with
$\bar{v_r}=0$. This agrees with FG.
Our example illustrates a point we mentioned earlier.
Even if simple scaling arguments suggest an $\alpha < 2$
possibility, there could be dynamical restrictions
for realizing such core profiles.
Let us now include the effect of tangential velocity dispersions.
The Jeans equation gives
two equations for the three unknown velocity
dispersions, even for a static core.
To see if one can close the system let us look
at the second moments of the Vlasov equation (the energy equations).
We get
\begin{equation}
{\partial(\rho \bar{v_{\theta}^2}) \over \partial t}
+{1 \over r^4}{\partial(\rho r^4<v_rv_{\theta}^2>) \over \partial r} -
{2\rho <v_{\theta}v_{\phi}^2> \cot\theta \over r}
+ {\rho \bar{v_{\theta}^3} \cot\theta \over r} = 0 ,
\label{thetsqm}
\end{equation}
\begin{equation}
{\partial(\rho \bar{v_{\phi}^2}) \over \partial t}
+{1 \over r^4}{\partial(\rho r^4 < v_rv_{\phi}^2 >) \over \partial r}
+ {\rho < v_{\theta}v_{\phi}^2 > \cot\theta \over r} = 0 ,
\label{phisqm}
\end{equation}
\begin{equation}
{\partial(\rho \bar{v_r^2}) \over \partial t}
+{1 \over r^2}{\partial(\rho r^2\bar{v_r^3}) \over \partial r} -
{2\rho < v_r(v_{\theta}^2+v_{\phi}^2) > \over r}
+2 \bar{v_r} \rho {GM/r^2} = 0 ,
\label{radsqm}
\end{equation}
where $M = \int 4 \pi r^2 \rho \, dr$ is the mass within $r$,
and either $< \cdot >$ or a bar over a variable denotes
a normalized moment over $f$.
Consistent with our statistical assumption for the core regions, we
assume that initially the tangential velocities have zero skewness.
Then in purely spherically symmetric
evolution they would not develop any skewness, that is
$\bar{v_{\theta}^3} = \bar{v_{\phi}^3} =
< v_{\theta}v_{\phi}^2 > = 0$ for all times.
Also if the initial velocity ellipsoid had one
of its principal axes pointing radially, we do not expect this axis
to become misaligned in purely spherical evolution.
This means we can assume $< v_r v_{\theta}^2 > =
\bar{v_r} \bar{v_{\theta}^2 } = 0 $ in the static core.
Eq. ( \ref{thetsqm} ) then implies
$(\partial(\rho \bar{v_{\theta}^2})/\partial t) = 0$
or $\rho \bar{v_{\theta}^2} = K(r)$ independent of $t$.
For the scaling solution we then have
\begin{equation}
\rho \bar{v_{\theta}^2} = K(r) = K(yt^p) = k_2k_1^2
t^{4q -2p}\int w_{\theta}^2 F(y,{\bf w})d^3{\bf w} .
\label{tan}
\end{equation}
Once again substituting a power law solution $K(r) = K_0 r^s$
into this functional equation, we get the constraint, from matching
powers of $t$ on both sides,
$ps = 4q - 2p$. Using $p = q +1$,
we then get
$s = 2 - 4/p = 2 - 2\alpha$, and so
\begin{equation}
\rho \bar{v_{\theta}^2} = K_0 r^{2 - 2\alpha} .
\label{tanvel}
\end{equation}
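The exponent matching above is elementary and can be checked symbolically.
In the sketch below, the identification $p = 2/\alpha$, implicit in the
self-similar scaling used here (since $2 - 4/p = 2 - 2\alpha$), is made
explicit:
\begin{verbatim}
import sympy as sp

p, q, s, alpha = sp.symbols('p q s alpha', positive=True)
sol = sp.solve([sp.Eq(p*s, 4*q - 2*p), sp.Eq(p, q + 1)],
               [s, q], dict=True)[0]
print(sp.simplify(sol[s]))                   # -> 2 - 4/p
print(sp.simplify(sol[s].subs(p, 2/alpha)))  # -> 2 - 2*alpha
\end{verbatim}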
Integrating the radial momentum equation (\ref{radm}), using
Eqs. (\ref{thetm}) and (\ref{tanvel}) together with
$\rho = q_0 r^{-\alpha}$, Eq. (\ref{consisr}) for the radial
velocity dispersion is now altered to
\begin{equation}
\bar{v_r^2} = r^{2 - \alpha} \left [ {K_0 \over (2 - \alpha) q_0} -
{4\pi G q_0 \over 2(2-\alpha)(3-\alpha)} \right ]
\equiv {1 \over (2 - \alpha)} \left [ \bar{v_{\theta}^2}(r)
- {GM(r)\over 2r} \right ] .
\label{consist}
\end{equation}
Several important points are to be noted from the
above equation. A crucial one is that, when $ \alpha < 2$,
the RHS of Eq. (\ref{consist}) can
remain positive only if one has a non-zero tangential
velocity dispersion. In fact, for any $\alpha < 2$, one
needs the tangential velocity dispersion to be at least
as large as $GM/2r$, comparable to the gravitational potential
energy per unit mass.
Also one can see that to obtain static cores with $\alpha < 1$,
the required tangential dispersion must be
larger than the radial velocity dispersion. So if
in halo cores tangential velocity dispersions are constrained
to be smaller than radial velocity dispersions, then a
core density profile shallower than $1/r$ cannot obtain
in the self-similar case.
Also note that for $\alpha < 2$, all the components
of velocity dispersions decrease with decreasing radius,
as suggested by the simple scaling arguments of the previous section.
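These constraints can be made quantitative. Writing
$x \equiv \bar{v_{\theta}^2}/(GM/2r) \ge 1$, Eq. (\ref{consist}) gives
$\bar{v_{\theta}^2}/\bar{v_r^2} = (2-\alpha)x/(x-1)$, which is bounded
below by $2-\alpha$; the short numerical sketch below (our own
illustration) makes this explicit:
\begin{verbatim}
import numpy as np

x = np.linspace(1.001, 100.0, 100000)     # v_theta^2 in units of GM/2r
for alpha in (0.5, 0.9, 1.5):
    ratio = (2.0 - alpha)*x/(x - 1.0)     # v_theta^2 / v_r^2
    print(f"alpha = {alpha}: min ratio = {ratio.min():.3f}"
          f" (bound 2 - alpha = {2 - alpha})")
\end{verbatim}
For any $\alpha < 1$ the bound exceeds unity, so the tangential
dispersion must exceed the radial one, as stated above.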
In a realistic collapsing halo it is quite likely that particles
develop non-radial velocities. Tidal forces by mass concentrations
outside the halo and the presence of substructure within the collapsing
halo will lead to non-radial motion of particles.
More generally the process of violent relaxation during the inhomogeneous
collapse to form the halo will lead to a more isotropic velocity
dispersion.
The above results for the halo core arise simply
from the properties of the self similar solution
and the assumption of a static core. From the energy equation
(\ref{radsqm}) we note that a time independent
radial velocity dispersion can only obtain
if the radial velocity skewness $<(v_r -\bar{v_r})^3>$
is also zero. Note that in the core regions where large amounts
of shell crossing have occurred, as we stated earlier, the
radial skewness is indeed expected to be small. So for
the core regions one can in fact make this statistical assumption.
Such a treatment will correspond to considering a fluid like
limit to the Vlasov equation.
However, the radial skewness
will become important near the radius, where infalling matter
meets the outermost re-expanding shell of matter. This region
will appear like a shock front in the fluid limit.
A possible treatment of the full problem in the fluid approach
to the Vlasov equation then suggests itself. This is
to take the radial skewness to be zero both inside and outside a
"shock or caustic" radius, whose location is to be
determined as an eigenvalue, so as to match the inner core
solution that we determine in this section
with an outer spherical infall solution.
One has to also match various quantities
across this "shock", using jump conditions,
derived from the equations themselves.
To do this requires numerical solution of the self consistent
set of moment equations, to the scaled Vlasov equation.
The details of such a treatment are given in a
companion paper (Subramanian 1999, S99).
Here we summarise the general conclusions of this work.
The numerical results in S99 show the importance of
tangential velocity dispersions, in deciding whether the
self similar solution, with an initial density profile
shallower than $1/r^2$ ($\epsilon < 2/3$) retains a memory of this initial
profile or whether the density profile tends to a universal $1/r^2$ form.
The set of solutions show that for
a large enough ${\bar v_{\theta}^2}/{\bar v_r^2} > 1$, the
core density profile is indeed close to the form
$\rho \propto r^{-\alpha}$,
with $\alpha = 9\epsilon/(1+3\epsilon)$.
For ${\bar v_{\theta}^2}/{\bar v_r^2} \sim 1$,
some memory of the
initial density profile is always retained; the density profile
has an asymptotic form $\rho \propto r^{-\bar\alpha}$, with
$ \alpha < \bar\alpha < 2$.
When ${\bar v_{\theta}^2}/{\bar v_r^2} << 1$, the density profile goes
over to the $1/r^2$ form derived by FG. Also for
very shallow initial density profiles with $\alpha < 1$,
one must necessarily have a tangential dispersion much larger
than radial dispersion to get a static core region,
retaining the memory of the initial density profile.
The spherical self similar collapse solutions provide a
useful means of examining the dynamics of dark halo formation, and
its implications for the core-density profiles,
although limited by the spherical symmetry assumption.
A complementary approach would be direct cosmological
N- Body simulations of halo formation, which we
will consider in Section 5. Before this, we briefly consider
below the outer profiles of dark halos, especially in
low density universes.
\section{ Outer profiles of dark halos}
In the cosmological context, as mentioned in section 2,
any collapsed mass will continue to accrete surrounding material,
as long as the matter density dominates the energy density.
We now analyze the consequence of such
secondary infall for the outer profile of halos,
following the Gunn-Gott paradigm, relaxing the
restriction of a flat universe.
Consider the collapse of a spherically symmetric density perturbation,
in a universe with present matter density parameter $\Omega_m$
and cosmological constant $\Lambda$. Let the initial
density distribution (at time $t_i$), be $\rho(r,t_i) = \rho_b(t_i)
(1 + \delta_i(r))$, and initial velocity
of the perturbation be the Hubble velocity.
Here $\rho_b(t)$ is the matter density of the smooth universe,
and $\delta_i$ the initial fractional density contrast of the
perturbation, as before.
Consider a spherical shell initially at a radius $r_i$.
The evolution of the proper radius $r(t)$
of any such shell before shell crossing
is governed by
\begin{equation}
{1 \over 2}\left({dr\over dt}\right)^2 - {GM\over r}
-{\Lambda r^2 \over 6} = E(M) .
\label{energ}
\end{equation}
Here $M = \rho_b(t_i) (4\pi r_i^3/3)(1 + \bar\delta_i)$,
is the mass enclosed by the shell and
\begin{equation}
\bar\delta_i(r_i) = {3 \over r_i^3} \int_0^{r_i} \delta_i(u) u^2 du
\label{avd}
\end{equation}
is the spherically averaged value of $\delta_i(r)$ within $r_i$.
The "energy" $E(M)$ can be fixed by evaluating the LHS of
Eq. (\ref{energ}) at the initial time. The shell
will turn around at a time say $t_m$, when $dr/dt = 0$,
and its radius is say $r(t_m,r_i) \equiv R(r_i)$.
Setting $dr/dt =0$ in Eq. (\ref{energ}) gives
\begin{equation}
{(1 + \bar\delta_i) \over y} + y^2 \lambda
= \lambda -(\Omega_{mi}^{-1} - 1) +\bar\delta_i
=\bar\delta_i + {(\Omega_t - 1) \over \Omega_{mi}} ,
\label{turn}
\end{equation}
where the total value of $\Omega$ (including the
cosmological constant) is $\Omega_t = \Omega_{mi}(1 + \lambda)$.
Here $y \equiv R(r_i)/r_i$ and
$\lambda = \Lambda /(3 H_i^2 \Omega_{mi})$ with
$H_i$ the Hubble parameter and $\Omega_{mi}$ the
matter density parameter at the initial time $t_i$.
Let us begin with the case $\Lambda = 0$.
Then Eq. (\ref{turn}) gives
\begin{equation}
y \equiv {R(r_i) \over r_i} = {(1 +\bar\delta_i)
\over \bar\delta_i - \delta_c} ,
\label{turopen}
\end{equation}
where we define $\delta_c \equiv (\Omega_{mi}^{-1} - 1)$.
In an open universe with $\Omega_{mi} < 1$,
one needs $\bar\delta_i > \delta_c$ for a shell to turn around
and collapse. For a monotonically decreasing initial density profile,
there will then be an outermost shell with an initial
radius $r_c$, satisfying $\bar\delta_i(r_c) = \delta_c$,
such that only shells with $r_i < r_c$ can recollapse onto the
density perturbation. To work out the outer profile after collapse,
we need to know the final effective radius $r(R)$, of a shell
turning around at radius $R$. Following the Gunn-Gott picture,
we assume that these two radii can be related by
$r(R) = f_0R$, where $f_0$ is some constant. Note that
$f_0$ is indeed a constant if the collapse is exactly self similar,
and the initial density profile is sufficiently steep
( Section 3, with $\epsilon > 2/3$).
For deriving the dominant scaling of the outer
profile with radius, even in an open universe,
it should also suffice to treat $f_0$ as
approximately constant (see below).
Consider an initial density profile
with $\bar\delta_i(r_i) = d_0 (r_i/r_0)^{-3\epsilon} $.
Then, using mass conservation, $4\pi r^2 dr \rho(r) =
4\pi r_i^2 dr_i \rho_b(t_i) (1 +\delta_i)$,
the final density profile $\rho(r)$
after collapse is given by
\begin{equation}
\rho(r) = {\rho_b(t_i) \over f_0^3} { \bar\delta_i^3(r_i)
\left[1 - (r_i/r_c)^{3\epsilon}\right]^4 \over
\left[1 + 3\epsilon - (r_i/r_c)^{3\epsilon} \right] }
\label{rhof}
\end{equation}
with
\begin{equation}
r_i = r { \bar\delta_i [1 - (r_i/r_c)^{3\epsilon} ] \over f_0}
\label{rrirel}
\end{equation}
(Here we have also assumed that the initial $ \delta_i << 1$.)
Two limits are of interest. First, for a flat matter
dominated universe we
have $\delta_c = 0$ and $r_c \to \infty$. Then
$[1 - (r_i/r_c)^{3\epsilon} ] \to 1$ and we recover
the standard result
\begin{equation}
\rho(r) = {\rho_b(t_i) \over 1 + 3\epsilon }
\left({d_0 \over f_0} \right)^{3/( 1+3\epsilon)}
\left({r \over r_0} \right)^{- 9\epsilon /(1 + 3\epsilon)}.
\label{rhofl}
\end{equation}
For $\epsilon = 1$, which is the steepest possible value
obtaining for an initially localised overdense perturbation,
we recover a halo profile $\rho(r) \propto r^{-9/4}$ (B85 and Section 3).
Now we return to the case of an open universe but examine the
outermost profile where $r_i \to r_c$,
the critical radius. In this case, from (\ref{rhof}) and
(\ref{rrirel}), the outer profile at large radii is given by
\begin{equation}
\rho(r) \to {\rho_b(t_i) f_0 \over 3\epsilon \delta_c}
\left({r \over r_c} \right)^{- 4},
\label{rhoflo}
\end{equation}
the slope being independent of the initial power law
slope $\epsilon$.
This interesting result seems to have been already known to Gunn (1977),
but not much emphasised since then.
As the outer profile in this case is
also a pure power law, it is likely that our assumption
of a constant $f_0$ is valid in this case. It would be of
interest to find an exact similarity solution of the
form given in Section 3, and valid in an open universe
which recovers this outer profile.
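In lieu of such an exact solution, the limiting $r^{-4}$ behaviour of
Eq. (\ref{rhoflo}) can be checked numerically. The sketch below (our own
construction) evaluates Eqs. (\ref{rhof})--(\ref{rrirel}) parametrically
in $r_i$; the normalisations ($\rho_b$, $f_0$, $d_0$, $r_0$, $r_c$) are
arbitrary illustrative choices:
\begin{verbatim}
import numpy as np

eps, f0, d0, r0, rc = 1.0, 1.0, 0.1, 1.0, 10.0      # illustrative choices
t = np.logspace(np.log10(0.9), -6, 400)             # t = 1 - r_i/r_c
ri = rc*(1.0 - t)                                   # r_i -> r_c from below

dbar = d0*(ri/r0)**(-3.0*eps)                       # mean initial overdensity
u = (ri/rc)**(3.0*eps)
r = f0*ri/(dbar*(1.0 - u))                          # Eq. (rrirel) inverted
rho = dbar**3*(1.0 - u)**4/(1.0 + 3.0*eps - u)/f0**3   # Eq. (rhof), rho_b = 1

slope = np.gradient(np.log(rho), np.log(r))
print("inner slope :", slope[0])    # ~ -9 eps/(1 + 3 eps) = -2.25 for eps = 1
print("outer slope :", slope[-1])   # -> -4 as r_i -> r_c
\end{verbatim}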
Let us now turn to the currently
popular flat cosmological models with $\Lambda \ne 0$.
First from Einstein's equation for the scale factor,
we have $\lambda - \Omega_{mi}^{-1} + 1 = 0$
(or $\Omega_t = 1$). Using this
and taking $\bar\delta_i << 1$,
Eq. (\ref{turn}) for $R(r_i)$ becomes
\begin{equation}
{1 \over y} + y^2 \lambda = \bar\delta_i .
\label{turnc}
\end{equation}
For a monotonically decreasing $\bar\delta_i(r_i)$,
the RHS of (\ref{turnc}) monotonically decreases.
However the LHS as a function of $y$ has a
minimum value at $y =y_c = (2\lambda)^{-1/3}$,
which is given by $\delta_{\lambda} = (3/2) (2\lambda)^{1/3}$.
There again exists a critical radius $r_\lambda$
defined by $\bar\delta_i(r_\lambda) = \delta_\lambda$,
such that, only those shells with $r_i < r_\lambda$ will be able to
turn around and collapse. For shells with
initial radii $r_i > r_{\lambda}$,
the repulsion due to the cosmological
constant overcomes the attractive gravitational force,
and so they expand for ever.
(The critical value of $y=y_c$ can also be written
as $y_c = 3/(2\delta_\lambda)$, a limit obtained by
Barrow and Saich (1993; eq. 26)).
Although this feature is similar to the open universe
case, there is a major difference between the
open model and the flat universe with a
cosmological constant. In the $\Lambda$ dominated model,
even for the limiting case $r_i \to r_\lambda$
the turn around radius tends to a finite limit,
$R(r_i) \to r_\lambda y_c$. In the open model
on the other hand, as $r_i \to r_c$,
(the limiting critical radius beyond which shells expand for ever),
$\bar\delta_i \to \delta_c$,
and $R(r_i)\propto (\bar\delta_i -\delta_c)^{-1} \to \infty$.
But in both cases the accreted mass is finite and equal to that
initially within $r_c$ or $r_{\lambda}$.
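Equation (\ref{turnc}) is also easy to explore numerically. The sketch
below (our own illustration, for an arbitrary value of $\lambda$) recovers
$y_c$, $\delta_\lambda$, the turnaround radius of collapsing shells, and
the absence of a solution for $\bar\delta_i < \delta_\lambda$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

lam = 1.0e-4                          # illustrative value of lambda
yc = (2.0*lam)**(-1.0/3.0)            # y at which the LHS is minimal
dlam = 1.5*(2.0*lam)**(1.0/3.0)       # minimum value delta_lambda
print(f"y_c = {yc:.3f}, delta_lambda = {dlam:.4f}")

f = lambda y, d: 1.0/y + lam*y**2 - d
for dbar in (1.2*dlam, 1.01*dlam):
    y = brentq(f, 1e-6, yc, args=(dbar,))   # collapsing branch has y < y_c
    print(f"dbar = {dbar:.4f}: turnaround y = R/r_i = {y:.3f}")

# for dbar below delta_lambda the LHS never reaches dbar: no turnaround
print("LHS minimum above dbar?", f(yc, 0.9*dlam) > 0)
\end{verbatim}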
Let us now consider the limiting density profile about this
outermost cut-off radius, for the $\Lambda$ dominated model.
For this, it is sufficient to consider values of $r_i$
close to but less than $r_\lambda$ and expand the LHS of Eq. (\ref{turnc}),
about $y = y_c$, retaining terms up to quadratic order in
$(y -y_c)$. We then have for the outermost shells
$\delta_\lambda + 3\lambda (y -y_c)^2 = \bar\delta_i(r_i)$ or
\begin{equation}
R(r_i; r_i \to r_\lambda) = r_i y_c - (3\lambda)^{-1/2}
\left[ \bar\delta_i(r_i) -\delta_\lambda \right]^{1/2}r_i ,
\label{turnl}
\end{equation}
where the negative square root has to be taken
as the turn around radii for the collapsing shells are smaller
than the maximum value of $R = r_\lambda y_c$.
For computing the collapsed density profile,
we need again the relation between the turn
around radius and the final effective radius.
For a $\Lambda$ dominated model, it is known that
$f_0$, the ratio of the effective "virial" radius to
the turn around radius,
depends on the turn around radius itself,
that is $r = f_0(R) R$ (Lahav {\it et al.} 1991; Barrow and Saich 1993).
So $dr/dr_i = f_0 (dR/dr_i) (1 + c_1)$,
where $c_1 = d(ln f_0)/d(ln R)$. Let us assume
the power law form of $\bar\delta_i(r_i)$ given above.
From mass conservation once again $\rho(r) =
\rho_b(t_i) (r_i/R)^2 (dr_i/dR) f_0^{-3} (1 + c_1)^{-1}$.
From Eq. (\ref{turnl}), in the limit of $r_i \to r_\lambda$,
which is the limit relevant for evaluating the outer profile,
$(r_i/R) \to y_c^{-1}$, a finite value. However
from this equation $dr_i/dR \to 0$ and the outer density
profile cuts off as one nears a critical radius.
Using Eq. (\ref{turnl})
the limiting outer halo density profile becomes
\begin{equation}
\rho(r) \to {4 \lambda \rho_b(t_i) \over 3 \epsilon f_0^3 (1 + c_1)}
\left [ 1 - (r/\bar r_\lambda)^{3\epsilon} \right]^{1/2} ,
\label{cosmo}
\end{equation}
where $\bar r_\lambda = r_\lambda /(y_c f_0)$.
(Here we can treat $f_0$ and $c_1$ as constants
evaluated at the limiting $R = r_\lambda y_c$).
The above profile shows that in a universe dominated
by a cosmological constant, the mass of halos
is again convergent. We caution that the above forms
for the outer profile of halos in open models (viz. Eq. (\ref{rhoflo}) and
(\ref{cosmo}) ), obtain only near the
cutoff radius and only at late times, as the outermost bound shell
turns around only as $t \to \infty$.
For a finite fixed time $t$ and general $r_i$ a more detailed
solution of (\ref{energ}) and (\ref{turnc}) is needed, for
finding the density profile. Of course in the innermost regions,
the $\lambda y^2$ term in (\ref{turnc}) is
expected to be small compared to $1/y$ term,
and we will recover the results of the
flat model, discussed above and in Section 3.
We now turn to the study of halo
properties through direct cosmological
N-body simulations.
\section{ Halo properties through cosmological N- body simulations}
In order to clarify the effect of non-linear evolution
in determining the structure of dark halo cores,
it is best to look at power-law spectra, which have
no special scale, rather than a model like the CDM.
We have therefore simulated three power law models with index
$n=-2,-1,0$, with $\Omega_0=1$. Each simulation is run
using a particle-mesh (PM) code,
with $768^3$ mesh points and $256^3$ particles.
Although each model has no intrinsic scale,
we choose the box size of each simulation to
be comoving $10h^{-1}$Mpc. At the end of the simulation,
which we identify with redshift zero, the
rms density fluctuations on an $8h^{-1}$Mpc sphere,
$\sigma_8 = 1/1.3$ in the $n=-2$ model, while in the
$n=-1$ model, $\sigma_8 = 1/1.5$ and in the $n=0$ model,
$\sigma_8 = 1/4$. Thus, in all cases waves larger than
the box size have typically not grown to the non-linear
domain. We will use this scale notation to facilitate
discussion at the relevant physical scales.
The nominal spatial resolution of the simulations
is $13h^{-1}$kpc, adequate for resolving the galaxy size halos
that we are interested in here.
The mass of each particle is $1.65\times 10^7h^{-1}$M$_\odot$.
Thus a galaxy size halo would contain of order $10^5$ particles,
essential for our purpose of computing
density and velocity dispersion profiles. The high mass
resolution of the present simulations is also required to avoid
two-body relaxation in the inner regions of halos, which we are
most interested in. Within the innermost bin of our calculation
($10h^{-1}$kpc, see below) at an overdensity of about $10^5$
(see Figures (2,3,4) below) there are of order a thousand
particles, so two-body relaxation
should be negligibly small (see also Eq. (\ref{trel}) below).
In each simulation, the center of each halo
is selected as the local maximum of the mass distribution within
spheres of comoving radius of $10h^{-1}$kpc.
The density profile of each halo is calculated
using spherical averaging with a logarithmic bin size of 0.02 dex.
The velocity dispersions (both tangential and radial) are computed
in the restframe of each spherical shell.
We have used the density and velocity dispersion profiles of
20 halos in each model for the analysis to be described below.
In particular we would like to examine if the halo density profiles
show evidence for the scaling laws of Section 2 and 3, and a dependence
on the power spectrum.
In our analysis of the halo density profiles, for each halo
we first fitted $\rho(r)$, by a double power law model of the form given by
\begin{equation}
\rho(r) = \left ({r_c \over r}\right )^{\alpha_0}
{ \rho_0 2^{\beta_0} \over [1 +(r/r_c)]^{\beta_0}}.
\label{den}
\end{equation}
We used the log density versus log radius data, taking
all radii within a nominal "virial radius", say $r_v$, where the density
dropped to about $200$ times the background density.
We made these fits by using an IDL routine (curvefit) which
takes a trial model set of parameters and iterates them
to minimize the squared sum of the deviations of the fit from
the actual density profile. By judicious choice of the
starting values of the parameters, it is relatively
simple to obtain good convergence in the set of model parameters.
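The fit can be reproduced with standard tools. The following sketch uses
scipy's curve\_fit on a mock profile as an analogous stand-in for the IDL
routine used in the text; the "true" parameters and the noise level are
arbitrary illustrative choices:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def log_rho(logr, log_rho0, log_rc, a0, b0):
    # log10 of Eq. (den): rho = (rc/r)^a0 * rho0 * 2^b0 / (1 + r/rc)^b0
    x = 10.0**logr / 10.0**log_rc
    return (log_rho0 + b0*np.log10(2.0)
            - a0*np.log10(x) - b0*np.log10(1.0 + x))

def s(r, rc, a0, b0):
    # positive logarithmic slope s(r), as quoted in the text
    x = r / rc
    return a0 + b0*x/(1.0 + x)

rng = np.random.default_rng(0)
logr = np.linspace(-2.0, 0.0, 60)          # radii in units of r_v
truth = [4.0, -0.8, 1.4, 1.6]              # arbitrary "true" parameters
y = log_rho(logr, *truth) + 0.02*rng.standard_normal(logr.size)

(log_rho0, log_rc, a0, b0), _ = curve_fit(log_rho, logr, y,
                                          p0=[3.0, -0.5, 1.0, 2.0])
print(f"a0 = {a0:.2f}, b0 = {b0:.2f}")
print("inner slope s(r_i):", s(10**logr[0], 10**log_rc, a0, b0))
print("outer slope s(r_v):", s(1.0, 10**log_rc, a0, b0))
\end{verbatim}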
The density profile of the 20 halos
in each model $n=0$, $n=-1$, and $n=-2$ are shown as dotted
lines in Figures 2, 3 and 4 respectively.
(Here the density has been normalised with respect to
$\rho_c = \rho_b(t_0)$, the background density at the present epoch $t_0$).
The converged model fit is also shown as a light solid line in these figures.
One can see from the figures that, in general, the
double power law fit is excellent. In fact, for every halo,
the fractional deviations, of the actual log$(\rho)$
compared to the model fit, are very small;
in general much less than $1\%$ for most radii,
with maximum deviations less than a few percent.
Further as in these fits one minimizes the total
least square deviations of the data from the fitted
profile, the fitted function is moderately robust to local
perturbations in the density.
We can therefore use the model fitted profiles to study
the properties of halo density profiles.
For each halo, we then calculated the local power law
slope of the density profile by evaluating $s(r) = d(ln \rho)/d(ln r)$
from the model fit. We plot this local slope, $s(r)$, for every halo,
as a thick solid line in the same plot as the density profile plot,
in Figures 2, 3 and 4.
These $s(r)$ plots give the most detailed information
regarding the slope of the halo density profiles.
If the density profile has a power law regime,
then for this radius range, $s(r)$ would be a straight horizontal line.
Some general conclusions can be drawn from the figures themselves.
First for almost all the halos,
$s(r)$ keeps increasing monotonically with radius; showing
that the halo density profiles are in general curved, and
that the density profile keeps steepening with radius.
Previous workers like NFW have in general adopted model
double power law profiles to fit halo density profiles,
with specific values of the inner slope $\alpha_0 = 1$ and
outer slope $\beta_0 + \alpha_0 = 3$. We find
the heterogeneity of the density profiles to be striking.
No simple formulae with fixed ($\alpha_0$,$\beta_0$) can fit this data.
Indeed for an unbiased double power law fit, the innermost value
of $s(r)$ lies between $1$ and $2$, shows a general increase with increasing $n$,
and is in general not equal to $1$. Also,
the outermost value is generally not equal to $3$.
In order to quantify these conclusions better,
and since we have a moderately large (20) number of halos in each
model, we can in fact look at the statistics of
the inner core and outer slopes, for each model of structure
formation (given by the power spectrum index $n$).
We do this by looking at the distribution of $s_i(r_0)$
and $s_i(r_v)$ for the 20 halos in each model with a given
value of $n$. Here $s_i(r)$ is the slope function of
the $i$'th halo in a given model, calculated from the model fit,
and $r_0$ is some fiducial characteristic inner radius of a halo.
In Figure 5, we have given a histogram of the
distribution of the inner core slopes $s(r_0)$, for the different
models of structure formation with $n=0, -1$ and $-2$,
adopting three different values for $r_0$. In the
left hand side of this plot, we have taken $r_0$
to be the innermost radius $r_i$ of the halo,
in the middle histograms we have taken it
to be a fixed percentage ($10\%$, say) of the virial radius,
with $r_0 = 0.1 r_v$,
while in the rightmost histograms we have taken
a larger value $r_0 = 0.15 r_v$.
For all the halos in the $n=-1$, or $n=0$ models,
these 3 choices correspond to progressively larger and larger
values of $r_0$. For the $n = -2$ case, for about half the halos,
the innermost radius $r_i$ is of order $0.1 r_v$;
hence the close similarity of the histograms for these two cases.
The solid arrow in each of these histogram plots shows the
location of the median value of the distribution. We also show for
comparison by a thin arrow, the location of the core-slope
$\alpha_n = 3(3+n)/(5+n)$, expected on the basis of the scaling
arguments of section 2.
From this figure we see first
that all halos do have a cuspy inner density profile,
mostly with $s(r_0) > 1$. Further,
the core slopes show a clear spectral index dependence although
the inner power laws are all somewhat steeper than the
$\alpha_n = 3(3+n)/(5+n)$ form, predicted by the scaling laws.
For example, the median value of the core slope for the
$n=-2$ models is $s(r_i) \sim 1.3$ (leftmost histogram),
(compared to $\alpha_n = 1$),
while for the $n = -1$ and $n= 0$ models, the corresponding
median value of the core slope shifts to $s(r_i) \sim 1.6$
($\alpha_n = 1.5$) and $s(r_i) = 1.8$ ($\alpha_n = 1.8$), respectively.
For any fixed $n$, there is also a systematic increase of the median slope
as one increases $r_0$ and goes from the left-most to
the right most histogram. This is to be expected as we
have a curved and continuously steepening
density profile. However the
trends between different models remain (a steeper
inner profile for larger $n$).
This can be seen already for example by comparing
the left and right side of Figure 5.
Further, we checked using the Kolmogorov-Smirnov
two-sample test whether the distribution of core slopes
$s(r_i)$ for different values of $n$, are drawn from the same
population or not. We used the one-tailed test to decide
if the values in one sample (say $n=0$) are stochastically
larger than the values of the other sample (say $n=-2$).
In this test one computes the
two-sample statistic $D_{M,N} = max[S_M(X)- S_N(X)]$,
where $S_M(X), S_N(X)$ are the cumulative probability distributions
of the two samples with $M$ and $N$ number of points respectively
in each sample (in our case $M=N=20$). The
value of $NMD_{M,N}$ being larger than a given number is then
used to rule out the hypothesis (that the samples are drawn from the same
population) at various levels of confidence
(cf. Siegel and Castellan 1988, pg. 144).
This test shows that the distribution
of core slopes of the $n=0$ model is stochastically larger than the
core slopes of the $n=-2$ model, and not drawn from the same population,
at a $ 99\%$ confidence level. The hypothesis that the core slopes of the
$n=-1$ and $n=-2$ are drawn from the same population is ruled out at
a weaker $90\%$ confidence level. And the hypothesis of
the core slopes of $n=0$ and $n=-1$ models being drawn from the
same population is ruled out at a $90 -95 \%$ confidence level,
depending on the binning used for the data.
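For concreteness, an analogous one-tailed comparison can be run with
scipy's built-in two-sample test. The samples below are mock stand-ins
(the real slope distributions come from the simulations), so the numbers
are purely illustrative; note that scipy's permutation-free tail
conventions may differ slightly from the table-based test of Siegel and
Castellan used above:
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
s_n0 = rng.normal(1.8, 0.25, 20)     # stand-in core slopes, n = 0 model
s_nm2 = rng.normal(1.3, 0.25, 20)    # stand-in core slopes, n = -2 model

# alternative='less': the CDF of the first sample lies below that of the
# second, i.e. the first sample is stochastically larger
res = ks_2samp(s_n0, s_nm2, alternative='less')
print(f"D = {res.statistic:.3f}, one-tailed p = {res.pvalue:.4f}")
\end{verbatim}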
Our preliminary conclusion, therefore,
from analyzing these cosmological N- body simulations (cf. Figures 2 - 5),
is that the core density profiles of dark matter halos do depend on
the initial power spectrum of density fluctuations; becoming steeper
as the spectral index increases from $n = -2$ to $n =0$.
In Figure 6, we have given the corresponding
histogram for the distribution of the outer
slopes $s_i(r_v)$, for different models
of structure formation. We see from the figure that
the distribution of outer profiles is fairly
broad. For the models with $n =0$ and $n=-1$, they are
spread, with large deviations, about a median value
$s(r_v) = 3.06$ and $s(r_v) = 3.02$ respectively. For the
halos in the $n=-2$ simulation the outer profile
is somewhat shallower, being spread around a median value
$s(r_v) = 2.55$. These results for the
outer profile suggest a large scatter about the favored NFW
value of $\beta_0 + \alpha_0 = 3$.
We have summarised the information on the core and outer slopes
of dark matter halos,
in different models, as a scatter plot in Figure 7.
Each point in this scatter plot marks the value of the
inner core and outer slope for a particular halo. We also show as
a solid cross the location of the median value of the distributions
of slopes, with the extent of the cross giving the
$\pm \sigma_m$ error on the median. (We adopt an
error on the median $\sigma_m = c_N\sigma_N/\sqrt{N}$, where
$\sigma_N$ is the standard deviation of the $N$ values of
the slope distribution and $c_N = 1.214$;
cf. Kendall and Stewart 1963, pg 327.)
This figure further illustrates the result that the distribution of the
core and outer slopes have a large scatter but appear to display
systematic trends as one goes from $n=0$ to $n=-2$.
At this point we should add a note of caution regarding
the determination of the inner slopes. Ideally for determining
the inner core properties one has to have a resolution as small
as possible and as many particles within the virial radius as possible,
though it is not at present clear what these numbers should be.
In the biggest halos we have a few times $10^5$ particles, and
our resolution in these halos is about $5 - 7.5\%$ of the virial
radius. This relatively coarse resolution scale is because we have extracted
halos from a cosmological PM simulation, though it is one of the
best resolved PM simulations (with a $768^3$ mesh).
Of course one advantage of the present work is that
we have a large number of halos (20) in each model,
and so can look at the halo properties in a
statistical fashion as well. And we saw above that the statistical
analysis reveals the trend in core slopes more clearly.
At the spatial resolution of the current simulations,
the overdensity is about $10^4-10^5$, which is about the
overdensity of real galaxy or cluster halos on the
same scale. Therefore, our simulations may not be severely spatial
resolution limited for the present purpose of examining
the properties of halos on these scales and larger.
Still, it would be very useful
to see whether the trends
we have found here for the core slope, hold
up with further high resolution
studies of a large number of halo density profiles.
In particular, it will be of great interest to go to even higher
overdensity, more inner regions of halos using higher mass and
spatial resolution simulations to further test the present
findings from simulations as well as our analytic results.
Apart from the density profiles, it is also of
interest to study the velocity dispersion
profiles of the halos, to see if there is any evidence for
undigested cores. As we argued in section 2 and 3, this will
lead to a rising velocity dispersion in the core regions,
of the form $\sigma \propto r^{(1-n)/(2(5+n))}$, which also
implies a mild but systematic spectral dependence.
It is also of interest to check the relative importance
of tangential and radial dispersions in the halo.
Recall from the work of section 3, that tangential
dispersions are needed to get cuspy density profiles shallower
than $1/r^2$ in self similar collapse from power
law initial density profiles; and that a cuspy
profile shallower than $1/r$ required tangential
dispersions to dominate radial dispersion.
Our data on the velocity dispersion profiles are
too noisy for drawing very firm conclusions.
However we do find most halos showing a rise in the
radial velocity dispersion with increasing radius
in the core regions. Further,
the tangential dispersions are smaller than radial and also
in general show much weaker (sometimes no) rise with
increasing radius in the core.
One may wonder about the importance of 2-body relaxation effects
in determining the properties of the halo cores in the simulation;
for example significant 2-body relaxation could lead
to an artificial steepening of the density profiles of
the halos in the core regions. We can use the halo
properties in the simulation itself to check this,
using the standard estimate for the 2-body relaxation
time scale $t_{rel}$ (cf. Binney and Tremaine, 1987 (Eq. 8.71);
see also Steinmetz and White 1997),
and comparing it with the Hubble time $t_{hub} = t_0$.
We obtain
\begin{equation}
{t_{rel} \over t_{hub}} = 11.8 h^{-1} \left({\sigma \over \sigma_*}\right)^3
\left({m_* \over m}\right) \left({\rho/\rho_b \over d_*} \right)^{-1}
\left({{\rm ln}(\Lambda) \over 10} \right)^{-1} ,
\label{trel}
\end{equation}
where we have taken fiducial values $\sigma_* = 200$ km s$^{-1}$,
$m_* = 1.65 \times 10^7 M_\odot$, $d_* = 4 \times 10^4$,
and ln$(\Lambda)$ is the usual "coulomb" logarithm, which is
of order $10$. In general we find this number is much larger
than unity for all halos, even in the innermost regions.
So 2-body relaxation is not expected to be important in the halo cores.
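For reference, Eq. (\ref{trel}) is straightforward to evaluate. The
sketch below transcribes it directly, with the fiducial values quoted
above; the choice $h = 0.7$ and the example $(\sigma, \rho/\rho_b)$ are
our own illustrative assumptions:
\begin{verbatim}
def trel_over_thub(sigma_kms, overdensity, h=0.7,
                   m=1.65e7, m_star=1.65e7, sigma_star=200.0,
                   d_star=4.0e4, ln_coulomb=10.0):
    """Eq. (trel): 2-body relaxation time over the Hubble time."""
    return (11.8 / h) * (sigma_kms / sigma_star)**3 * (m_star / m) \
           / (overdensity / d_star) / (ln_coulomb / 10.0)

# e.g. an inner bin with sigma ~ 200 km/s at overdensity ~ 1e4
print(trel_over_thub(200.0, 1.0e4))   # ~ 67 >> 1: relaxation negligible
\end{verbatim}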
\section{Discussion and Conclusions}
We have concentrated in this work on
the structure of the cores of dark halos in hierarchical clustering theories
of galaxy formation. In such theories, it is very likely that cores
of dark matter halos harbor undigested earlier generation
material. Their density structure, in physical as well as phase space,
will reflect the times and the cosmological densities when the core
material was gathered.
In a flat universe with a power spectrum $P(k) \propto k^n$,
a consequence of undigested cores could be a cuspy core density profile,
shallower than that of a
singular isothermal profile, having a velocity dispersion
profile and rotation curve which rise with increasing radius.
Scaling arguments,
incorporating energy and mass conservation, suggest a
form for the core density profile, $\rho(r) \propto r^{-\alpha_n}$,
with $\alpha_n=(9+3n)/(5+n)$. This profile will transit to a
steeper power law, determined by say secondary infall,
beyond the core radius. The core radius, which is also the radius
where the density has an effective logarithmic slope of $-2$, is
related to and a fraction of the turn around radius for the matter
infalling to the halo. The characteristic density at this radius is a
few hundred times the turn around density, and therefore
correlates inversely with typical halo mass, and directly
with the halo formation redshift.
Although scaling laws suggest a possible form of the core density
profile, they do not tell us how and in fact whether this
form will be realized dynamically. To explore this dynamical issue,
we have adopted two complementary approaches. First, in section 3,
we studied a simple tractable model: The spherical self similar collapse
of dark matter density perturbations, in a flat universe.
Then in section 5, we analyzed the properties of halos
obtained in some cosmological N-body simulations,
with a power law spectrum of density fluctuations.
The problem of spherical self similar collapse
has often been solved by following
particle trajectories. We adopted instead another approach,
examining directly the evolution of the moments of
the phase space density. For a purely radial
collapse, with the initial density profile $\propto r^{-3\epsilon}$,
and steeper than $r^{-2}$, we recover, by demanding that the core be static,
the asymptotic form of the non-linear density profile:
$\rho \propto r^{-\alpha} \propto r^{-9\epsilon/(1 + 3\epsilon)}$
(see also Padmanabhan 1996b).
For initial density profiles shallower
than $1/r^2$, with $\epsilon < 2/3$,
we showed that non radial velocities are necessarily
required to obtain a static core. These results agree with
the work of B85 and FG who followed particle trajectories
to solve this problem.
The consequences of introducing non radial velocity dispersions,
in this approach, can only be examined by adopting
a closure approximation to the moment equations.
In the spherical collapse problem, the skewness of
the tangential velocities can be assumed to be zero,
in the core regions. In fact,
in regions where large amounts
of shell crossing have occurred, one can assume that
a quasi "equilibrium" state obtains,
whereby all odd moments of the distribution function, over
$({\bf v} - \bar{\bf v})$, may be neglected.
One can then analytically integrate
the Jeans equation for the self similar collapse,
including the effect of tangential velocity dispersions.
For an initial density profile shallower than $1/r^2$,
with $\epsilon < 2/3$, a static core with a non-linear density profile,
with $\alpha= 9\epsilon/(1 + 3\epsilon)$, is
possible, only if the core has sufficiently large tangential
velocity dispersions. In fact, one
needs $\bar{v_{\theta}^2} > GM/2r$.
Also if a static core has to have a cuspy density
profile shallower than $1/r$ (with $\alpha < 1$),
one requires $\bar{v_{\theta}^2} > \bar{v_r^2}$.
Importantly for the case $3\epsilon = (3+n)/2$
(as would be relevant for collapse around a typical point in the
universe), we recover the simple result $\alpha=\alpha_n
= (9 +3n)/(5 + n) $, with $\alpha_n < 2$, for $n < 1$.
Note that the radial peculiar velocity, which could have
negligible skewness in the core, will necessarily have a
non-zero skewness (non zero third moment) near a caustic radius,
where collapsing dark matter particles meet the
outermost shell of re-expanding matter.
In a companion paper (S99),
to take this into account, we have introduced the following
fluid approach. In this approach, the effects of peculiar
velocity skewness are neglected in all regions except
at the location of the caustic,
which we call the shock. In the particle picture the shock
is where a single stream flow becomes a multi stream flow.
In the fluid picture it is where some of the
average infall velocity (of the single stream flow)
is converted to velocity dispersion (of the multi stream flow).
The location of the caustic, $y_s$, in
scaled coordinates, is found
as an eigenvalue to the problem of matching
the single stream collapse solution at $y > y_s$, with
a static core solution within $y << y_s$, as determined here.
This is done in S99 by numerically integrating
the full set of moment equations.
The results largely bear out the expectations of section 3:
the importance of
tangential velocity dispersions in deciding the nature of
the core density profile. For the details please see S99.
The above results and that of S99, although derived for the
case of a purely self similar collapse, illustrate
the importance of dynamical considerations, in determining
the structure of halo cores. They illustrate features which
are likely to obtain in more realistic collapse:
If newly collapsing material is constrained to mostly contribute to
the density at larger and larger radii, then memory
of initial conditions can be retained.
In the more general case, when newly collapsing
material is able to occupy similar regions as the matter
which collapsed earlier, the core density profile
will only partially reflect a memory of the initial conditions.
The density profiles of the outer regions of dark halos
are briefly studied in Section 4, relaxing the restriction
to a flat universe and following the Gunn-Gott paradigm.
For an open universe and at late times,
the outer density profile goes over to a limiting form
$\rho(r) \propto r^{-4}$, where the slope is independent
of the power law slope of the initial density profile.
The corresponding
limiting outer profile for a $\Lambda$ dominated universe was shown to
have a form
$\rho(r) \propto [1 - (r/\bar r_{\lambda})^{3\epsilon}]^{1/2}$,
where $\bar r_\lambda$ is a characteristic cut-off radius.
These density profile laws show that in open
and $\Lambda$ dominated models, the halo mass is convergent and
halos have characteristic density cut-offs, which may be
observationally testable.
We then turned to a complementary approach, of looking
at halo properties in numerical simulations of
structure formation models
with a power spectrum $P(k) \propto k^n$; with 3 different values
of the spectral index $n=-2,-1,0$, and with $\Omega_0=1$.
The results are summarized in figures 2 - 7.
One preliminary conclusion
is that the core density profiles of dark matter halos do depend on
the initial power spectrum of density fluctuations;
with the local core slope becoming steeper
as the spectral index increases from $n = -2$ to $n =0$.
For example, the median value of the inner core slope $s(r_i)$
for the $n=-2$ models is $1.3 \pm 0.07$,
while for the $n = -1$ and $n= 0$ models, the corresponding
median value of the core slope shifts to $ 1.6 \pm 0.09$,
and $1.8 \pm 0.09$ respectively.
For any fixed $n$, there is also a systematic increase of the median slope
as one increases $r_0$ and goes from the left-most to
the right most histogram in Figure 5.
These values are generally steeper than $\alpha_n = 1, 1.5$ and $1.8$,
which scaling arguments predict for models with $n=-2,-1$ and $0$
respectively. Further the Kolmogorov-Smirnov two sample test
shows that the distribution
of core slopes of the $n=0$ model is stochastically larger than the
core slopes of the $n=-2$ model, and not drawn from the same population,
at a $ 99\%$ confidence level. It also rules out the hypothesis
that the core slopes of the
$n=-1$ and $n=-2$ models (or the $n=0$ and $n=-1$ models)
are drawn from the same population, at a weaker $90\%$ confidence level.
The NFW value of $\alpha_0 = 1$ is not favored;
most halos having a steeper core density profile.
Some recent higher resolution simulations, of cluster and
galactic scale dark halo formation in the CDM model,
by Moore {\it et al.} (1998, 1999), which resolve the core
very well (but only for a few halos), have also
obtained a core profile $\rho \propto r^{-1.5}$,
steeper than the NFW profile.
The velocity dispersion profiles of halo cores,
in the N-body simulations are somewhat noisy,
but do indicate for most halos
a rise in the radial velocity dispersion in the
core regions. The tangential dispersions are smaller than radial and also
in general show much weaker (sometimes no) rise in the core.
The Moore {\it et al.} (1998) cluster halo also shows a rise in the velocity
dispersions in the core (Moore, private communication).
Indeed Moore {\it et al.} (1998) point out that
a significant fraction of the core material is made from
high density material which collapsed at higher redshift.
It would be very useful to see in the future
whether these trends hold up with large statistical studies
of halo density profiles, with higher spatial resolution.
An understanding of what determines the core density
profiles of dark halos is important for several issues.
Perhaps one of the most
relevant and dramatic effect will be on strong gravitational
lensing properties of galactic and clusters scale dark matter halos.
For example the multiple imaging lensing cross
sections will be very different depending on whether one has
$\alpha=0$, $1$, or $2$. Compare the work of
Subramanian, Rees and Chitre (1987) where
the lensing cross section by dark halos
was estimated assuming that they have soft cores with the work of
Narayan and White (1988) where halos were assumed
to be singular isothermal spheres.
More specifically, a given system is capable of producing multiple
images if its surface density exceeds a critical value
$\Sigma_c$ (Turner, Ostriker and Gott 1984; Subramanian and Cowling 1986).
For $\alpha \geq 1$ the surface density is divergent in the
central regions. So, generically, if $\alpha \geq 1$,
{\it all} clusters are capable of producing multiple images.
For $\alpha < 1$, some clusters can produce multiple images
and some cannot. The NFW value of $\alpha = 1$
is at the boundary with surface density logarithmically singular.
Thus the actual values of $\alpha$ for real systems
are quite relevant to the frequency of strong lensing.
As the frequency of multiple images from gravitational lensing also
provides a powerful independent test of cosmological models,
(cf. Cen {\it et al.}, 1994, Wambsganss, Cen and Ostriker 1998,
Cen 1998), it is important to determine the structure of halo cores.
Further, the existence and properties of radial arcs, which
have been observed in some cluster
scale lenses depends on the slope of the inner cusp (cf.
Mellier, Fort and Kneib 1993,
Miralda-Escude 1995, Bartelmann 1997, Evans and Wilkinson 1998).
For the singular isothermal profile and stronger cusps,
radial arcs do not form.
The rotation curves of disk galaxies
also hold clues to the core density profile of dark halos.
But it is more difficult to decompose the observed rotation curve
unambiguously into contributions from the luminous stellar disk/bulge
and the dark halo.
The fluid approach adopted in section 3 and in S99 raises a
new way of exploring non linear dynamics, which
can extend analytic approximations like the Zeldovich
approximation, valid in a single stream flow, to
the multi streaming regime. In the fluid approach multistreaming
regions would correspond to regions with velocity dispersions,
generated by the Zeldovich type caustics. Note that the adhesion
approximation is one extreme where the multi streaming
regions are collapsed onto a caustic. It would be interesting
to explore this issue further.
In this work we have not included
the dynamics of the gaseous (the baryonic)
component, which will be in fact relevant for the interpretation of
x-ray observations of clusters. The gas necessarily has an isotropic
velocity dispersion, and so will have a different dynamical
evolution compared to the dark matter. We hope to return to
some of these issues in the future.
\acknowledgments
This work was begun when KS visited the Princeton University
Observatory, during Sept-Nov 1996.
Partial travel support to Princeton came from IAU Commission 38.
Some of the work was done at the University of Sussex where
KS was supported by a PPARC
Visiting Fellowship. He thanks John Barrow, Ed Turner, and the other
Princeton and Sussex astronomers for warm hospitality.
T. Padmanabhan is thanked for critical comments on
an earlier version of this work. KS also thanks
Ben Moore, Bepi Tormen, Ravi Sheth, Dave Syer
and Simon White for several helpful discussions.
This research is supported in part by grants AST93-18185
and ASC97-40300.
\clearpage
\section{Introduction}
The relationship between physical systems and information has been of increasing and compelling interest in the domains of physics \cite{jaynes1957information,wheeler1992recent,ben2008farewell}, neuroscience \cite{sejnowski1988computational, lynn2019physics}, computer science \cite{shannon1941mathematical, turing1936computable, dubbey2004mathematical,wolfram2002new,conway1970game,lizier2014framework, rojas2013neural}, quantum computing \cite{lloyd2013universe, wheeler1992recent,perseguers2010quantum}, and other fields such as computation in social networks \cite{schelling1971dynamic, granovetter1978threshold, sakoda1971checkerboard}, or biology \cite{schrodinger1992life,brooks1988evolution} to the point where some consider information to be a fundamental phenomenon in the universe \cite{lloyd2010computational,wheeler2018information,knuth2011information}. Often, physical systems operating on information take place on, or can be modeled by, network activity \cite{watts2002simple}, since information is transmitted and processed by interactions between physical entities.
The principle of Occam’s razor and goals of achieving a deeper understanding of these physical-information interactions encourage us to find the simplest possible processes achieving computation. Thus we may conduct \textit{basic research into understanding necessary and sufficient conditions for systems to perform information processing}.
Cascades, particularly on networks, are such a simple and ubiquitous process. Cascades are found in a great number of systems – the brain, social networks, chemical-, physical- , and biological- systems – occurring as neuronal avalanches, information diffusion, influence spreading, chemical reactions, chain reactions, activity in granular media, forest fires or metabolic activity, to name a few \cite{watts2002simple,kempe2003maximizing,newman2018networks, easley2010networks,christensen2005complexity,jalili2017information}.
The Linear Threshold Model (LTM) is among the simplest theoretical models to undergo cascades. As a simple threshold network, the LTM is also similar to artificial models of neural networks, without topology restrictions \cite{rojas2013neural}.
Since the work of Shannon \cite{shannon1948mathematical}, the \textit{bit} has been considered the basic unit of information. Therefore, whatever we can learn about processing of bits can be extended to information processing in non-Boolean systems. The tools of Boolean logic then allow us to begin to develop a formalism linking LTM and other cascades to information processing in the theory of computing \cite{von1956probabilistic}. In systems of computation or statistical learning, patterns of inputs are mapped to patterns of output by Boolean functions \cite{savage1998models, rojas2013neural}.
Another way to express this is that a bit is the simplest possible perturbation of a system. Bits can interact via some medium, these interactions can be represented by edges in a network, and Boolean functions describe the results of possible interaction patterns.
Since we aim to study this topic from first principles, we are interested in how the combinatorial space of possible networks interacts with the combinatorial space of possible Boolean functions, via cascades and the control parameters. Particularly, we would like to understand the \textit{phase space of Boolean functions} computed by LTM nodes on the input (seed) nodes by the cascade action.
From a mathematical perspective, we can treat the brain or other natural systems having $N$ elements in the worst case as a random network, where there are $N(N-1)/2$ possible connections, yielding $2^{(N(N-1)/2)}$ possible networks. Meanwhile, the space of Boolean functions grows exceptionally quickly. There are $2^{2^k}$ unique Boolean functions on $k$ inputs. This immediately makes us ask how this space behaves, and how large networks such as the brain can navigate toward particular functions in this vast space.
We also observe that, for all the functions available on $k$ inputs, the decision tree complexity (depth of the decision tree computing them) appears exponentially distributed, meaning that the vast majority of available functions are complex as $k$ increases.
A somewhat surprising initial result in this investigation is that \textit{complex functions on inputs emerge spontaneously and seemingly inevitably as threshold networks are connected at random}.
\section{Linear Threshold Model (LTM), Boolean logic, and antagonism}
The Linear Threshold model (LTM) \cite{watts2002simple} is defined as follows: A random (Erdos-Renyi-Gilbert) graph is constructed, having $N$ nodes and $p$, the probability of an edge between each pair of nodes. Each node is then assigned a random threshold $\phi$ from a uniform distribution, $\phi \sim U[0,1]$. Nodes can be \textit{unlabelled} or \textit{labelled}, and are all initialized as \textit{unlabelled}. To run the cascade, a small set of seed nodes are \textit{perturbed}, marked as labelled. Now, each unlabelled node $u$ is examined randomly and asynchronously, and the fraction of its graph neighbors that are labelled $\big(\frac{L(u)}{deg(u)}\big)$ is determined, where $L(u)$ is the number of $u$'s neighbors that are labeled, and $deg(u)$ is $u$'s degree. If $u$'s fraction reaches its threshold $\big(\frac{L(u)}{deg(u)}\ge \phi \big)$, $u$ is marked labelled. This process continues until no more nodes become labelled. Here we note that the LTM may be written in vector form, and bears some similarity to the artificial McCulloch-Pitts neuron \cite{rojas2013neural}.
It has been shown that the LTM exhibits \textit{percolation}, where a giant connected component (GCC) of easily-influenced \textit{vulnerable} nodes $u$ (having $\phi \le 1/deg(u)$, where $deg$ is the degree of $u$) suddenly arises at the critical connectivity \cite{watts2002simple}.
We observe that cascades in the LTM compute monotone Boolean functions (the number of true outputs cannot decrease in the number of true inputs) at each node on input perturbation patterns \cite{wilkerson2019universal}. In our numerical experiments, we create the LTM as above, but choose input seed nodes $a$ and $b$ (for $k = 2$ inputs) as the only possible loci of initial perturbation. In one trial, we create a network, freezing network edges and thresholds across all possible input patterns [Table \ref{truth_table2}, cols. a, b]. For each input pattern we reset non-seed nodes to unlabelled, set seeds according to inputs, and run the cascade. We then identify the function computed by each node ($f_0, ..., f_{15}$) [Table \ref{truth_table2}, cols. 0-15].
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c||}{inputs}&\multicolumn{16}{c|}{functions} \\ \hline
a & b & 0 & 1& 2& 3& 4& 5& 6& 7& 8& 9& 10& 11& 12& 13& 14& 15 \\ \hline
0 & 0 & 0& 0& 0& 0& 0& 0& 0& 0& 1& 1& 1& 1& 1& 1& 1& 1\\
0 & 1 & 0& 0& 0& 0& 1& 1& 1& 1& 0& 0& 0& 0& 1& 1& 1& 1\\
1 & 0 & 0& 0& 1& 1&0& 0& 1& 1&0& 0& 1& 1&0& 0& 1& 1\\
1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\
\hline
\end{tabular}
\vspace{\baselineskip}
\caption{Truth tables for binary functions. The truth tables of all possible unique binary $(k =2)$ Boolean functions are shown ($2^{2^k} = 16$ functions). The LTM can only compute \textit{monotonically-increasing} Boolean functions (columns 0, 1, 3, 5, 7), where the first row equals zero, since the seed nodes are unlabelled. Thus, it cannot compute functions 2, 4, 6, or 8 to 15.}
\label{truth_table2}
\end{table}
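To make the experimental protocol concrete, the following minimal sketch
(our own illustration) builds one LTM realization, runs the cascade for
every input pattern, and reads off the function computed at each node,
with indices following Table \ref{truth_table2}. Two assumptions are
made explicit in the comments: seed nodes are held fixed at their input
values, and a synchronous sweep is used (the fixed point is
order-independent for monotone updates, so this matches the asynchronous
dynamics):
\begin{verbatim}
import numpy as np

def ltm_functions(N=200, z=4.0, k=2, rng_seed=2):
    rng = np.random.default_rng(rng_seed)
    A = np.triu((rng.random((N, N)) < z/(N - 1)).astype(int), 1)
    A = A + A.T                        # undirected ER graph, no self-loops
    deg = A.sum(axis=1)
    phi = rng.random(N)                # thresholds ~ U[0,1]
    seeds = np.arange(k)               # nodes a, b are the input loci

    outputs = np.zeros((2**k, N), dtype=int)
    for pat in range(2**k):            # input rows (a,b) = 00, 01, 10, 11
        labelled = np.zeros(N, dtype=bool)
        for i in range(k):
            labelled[seeds[i]] = bool((pat >> (k - 1 - i)) & 1)
        changed = True
        while changed:                 # synchronous sweep; the fixed point
            frac = (A @ labelled) / np.maximum(deg, 1)  # is order-independent
            newly = (~labelled) & (frac >= phi)         # for monotone updates
            newly[seeds] = False       # seeds held at their input values
            changed = bool(newly.any())
            labelled |= newly
        outputs[pat] = labelled

    fidx = np.zeros(N, dtype=int)      # read each node's output column as
    for pat in range(2**k):            # a function index of Table 1
        fidx += outputs[pat] << (2**k - 1 - pat)
    return np.bincount(fidx[k:], minlength=2**(2**k))

print(ltm_functions())  # counts of f_0 ... f_15; only monotone ones occur
\end{verbatim}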
The zero function, $f_0(a,b) = 0$ (False) is computed by a simple sub-network, where node $u$ has no path to either seed node [Fig. \ref{fig:simplest_networks}]. Similarly, function $f_1(a,b) = a \wedge b$ (AND) is computed by $u$ with a sub-network having paths from both seed nodes $a,b$, and a threshold $\phi > \frac 12$. Similar sub-networks allow us to obtain nodes computing monotone functions $f_3, f_5, f_7$ [Fig. \ref{fig:simplest_networks}]. These sub-networks are therefore logical automata \cite{von1951general,von1956probabilistic}, and we note that they form functional \textit{logic motifs} in the network \cite{milo2002network}.
\begin{figure}[ht]
\centering
\includegraphics[scale = 0.7]{images/BooleanMotifs.pdf}
\caption{\textit{Logic motifs} compute Boolean functions. The simplest LTM sub-networks are logical automata \textit{(logic motifs)} and compute the monotone functions for $k = 2$ inputs at node $u$ on perturbations of $a$ and $b$. Dashed lines are network paths.}
\label{fig:simplest_networks}
\end{figure}
We find that an LTM network cascade will yield a distribution of Boolean functions on its input nodes, and the possible functions computed by network nodes will partition the set of monotone Boolean functions [Fig. \ref{fig:schematic}] (with the exception of $f_{15}$).
Thus the LTM carries out \textit{computational cascades} on input perturbation patterns.
\begin{figure}[ht]
\centering
\includegraphics[scale = 0.70]{images/BooleanSchematic.pdf}
\caption{LTM nodes compute Boolean functions in \textit{computational cascades}. Iterating through all possible perturbations of input seed nodes $a$ and $b$, each network node must compute some Boolean function on the inputs.}
\label{fig:schematic}
\end{figure}
We then obtain monotonically decreasing functions (negations of LTM functions) by taking the logical complement of the original LTM labelling rule, so that some node $u$ is instead activated when its \textit{fraction of labelled neighbors is less than its threshold $\big(\frac{L(u)}{deg(u)} < \phi\big)$}. We call such nodes \textit{antagonistic}, from which we can construct an \textit{antagonistic linear threshold model (ALTM)}. For 2 inputs, replacing $u$ with an ALTM node $\neg u$ yields nodes computing $f_{15}, f_{14}, f_{12}, f_{10}$, and $f_8$ [Table \ref{truth_table2}], and the sub-networks are antagonistic versions of those for $f_0, f_1, f_3, f_5,$ and $f_7$, respectively [Fig. \ref{fig:simplest_networks}].
\begin{figure}[ht
\centering{
\begin{subfigure}[t]{0.47\textwidth}
\vskip 0pt
\includegraphics[scale =.42]{images/probfreqk2N10000z4.0theta0.pdf}
\caption{}
\label{fig:rank_ordering_LTM}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\vskip 0pt
\includegraphics[scale =.43]{images/logprobfreqk4N10000.pdf}
\caption{}
\label{fig:rank_ordering_LTM_k_4}
\end{subfigure}}
\caption{Function frequency corresponds to probability of required paths in a rank-ordering. (a) Logarithmic frequency of non-zero functions computed by the ensemble of LTM cascades for $N = 10000$ nodes, average degree $z = 4$ and $k = 2$ inputs, over 500 realizations reveals an apparent rank-ordering (solid line). Mean frequency is proportional to path probabilities, having a Pearson correlation of $1.0$, both predicted by probabilities derived from logic motifs using $p_{path}$ ('+') (e.g. see (\ref{eq:prob_from_p_path})), and complexity $p_{gcc}^{C(f) + 1}$ (large dot) (\ref{eq:p_propto_p_C+1}) (rescaled, overlaid, both dashed). Thus (\ref{eq:p_propto_p_C+1}) also well-predicts (\ref{eq:prob_from_p_path}). Frequency therefore varies inversely with decision tree complexity $C$ ('+'). (b) Rank-ordering is more evident for $k = 4$ inputs, appearing as a decreasing exponential with goodness of fit $r^2 = 0.88$. Again, $N = 10000$ and $z = 4$. Here, Pearson correlation between $p(f)$ and mean frequency is $0.74$. Shaded regions are one standard deviation. Probabilities have been centered and normalized.}
\end{figure}
A sufficiently large ALTM, by composing monotone decreasing functions (e.g. NAND, NOR), can undergo a cascade to compute any logical function on its nodes, forming a \textit{universal basis} \cite{savage1998models}.
\section{Statistics of attractors in the Boolean function space}
We experiment first on the LTM, to investigate the observed frequency of Boolean functions in simulation. With a network having $N = 10000$ nodes, ensembled over 500 realizations,
at mean degree $z = 4$ we observe that the frequency of functions is very skewed [Fig. \ref{fig:rank_ordering_LTM}]. Experiments for $k = 4$ inputs, again for $N = 10000$ nodes at mean degree $z = 4$, ensembled over $500$ realizations, also yield an approximate exponential decay of the rank ordering function [Fig. \ref{fig:rank_ordering_LTM_k_4}].
We investigate the skewed distribution of these functions by asking \textit{``What is the probability of obtaining the simplest network to compute each of these functions?''}. From Fig. \ref{fig:simplest_networks}, we can derive the probability of each monotone function. For example, if there is no path from seed nodes $a$ and $b$ to some node $u$ we obtain $f_0$, thus
$$
p(f_0) \propto (1 - p_{path})^2,
$$
where $p_{path}$ is the probability of a path between two randomly chosen nodes.
The function $f_1$ requires paths from $a$ and $b$ to $u$, thus
\begin{equation}
p(f_1) \propto p_{path}^2.
\label{eq:prob_from_p_path}
\end{equation}
However, with percolation in mind, we observe that for large graphs, the probability of paths between $n$ nodes approaches the probability that all $n$ nodes belong to the giant connected component (GCC) \cite{newman2018networks}.
This gives us, again from Fig. \ref{fig:simplest_networks},
$$
p(f_1) \propto p_{path(A,B,u)} \propto p_{gcc}^3,
$$
where $p_{gcc}$ is the probability for a random node to belong to the GCC.
From \cite{newman2018networks}, we have the recursive relation
\begin{equation}
p_{gcc} = v = 1 - e^{-zv},
\label{eq:p_gcc}
\end{equation}
where $z$ is the mean degree.
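Equation (\ref{eq:p_gcc}) has no closed-form solution, but it converges quickly under fixed-point iteration; a minimal sketch (our own illustration):
\begin{verbatim}
import math

def p_gcc(z, tol=1e-12):
    # Fixed-point iteration for v = 1 - exp(-z v): the fraction of
    # nodes in the giant component of an Erdos-Renyi graph of mean
    # degree z.
    v = 0.5
    while True:
        v_new = 1.0 - math.exp(-z * v)
        if abs(v_new - v) < tol:
            return v_new
        v = v_new

print(p_gcc(4.0))  # ~0.9802 at the mean degree z = 4 used above
\end{verbatim}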
We subsequently observe that the number of required paths from seed nodes to node $u$, computing monotone function $f$, is equal to the \textit{decision tree complexity }($C$), the depth of the shortest decision tree to compute $f$. In order for $u$ to decide the value of a seed node, the seed's perturbation information must be transmitted along a path to $u$.
Taking a Boolean function's Hamming cube representation, its decision tree complexity $C$ is complementary to the number of congruent axial reflections $R$ among its $D$ axes (details in supplemental information A.1).
That is, if a Boolean function's Hamming cube is constant along an axis, it is independent of that axis, giving us
\begin{equation}
C = D - R.
\label{eq:complexity_vs_symmetry}
\end{equation}
In other words, \textit{the number of paths a monotone Boolean function requires is exactly the number of axial reflection asymmetries of its Hamming cube.}
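In practice, this count can be read directly off a truth table by checking, for each axis, whether reflecting the Hamming cube along it changes the function; a minimal sketch consistent with (\ref{eq:complexity_vs_symmetry}) (naming is ours):
\begin{verbatim}
def complexity(table, k):
    # table[x] is the output for input x in 0..2^k-1, where bit i of
    # x encodes input i.  Counts asymmetric axes, i.e. C = D - R.
    return sum(
        any(table[x] != table[x ^ (1 << i)] for x in range(2 ** k))
        for i in range(k)
    )

assert complexity([0, 0, 0, 1], 2) == 2  # f_1: two required paths
assert complexity([0, 1, 0, 1], 2) == 1  # depends on one seed only
assert complexity([0, 0, 0, 0], 2) == 0  # f_0: no paths required
\end{verbatim}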
This allows us to relate function frequency to decision tree complexity. Recall that the critical percolation threshold in an arbitrarily large Erd\H{o}s--R\'enyi--Gilbert graph occurs at mean degree $z_c = 1$, a very small connectivity. Thus, since $p \sim \frac{z_c}{N}$, $p_c \ll 1$. Therefore, the network will be tree-like, since the clustering coefficient $C_{\rm clus} \propto p$ \cite{newman2018networks}. In a tree, the number of nodes is one more than the number of edges, $N = |E| + 1$. Thus, as $p \to p_c$,
\begin{equation}
p(f) \propto p_{gcc}^{C(f) + 1}.
\label{eq:p_propto_p_C+1}
\end{equation}
Indeed it appears that (\ref{eq:p_propto_p_C+1}) is highly correlated with the probabilities derived from logic motifs (\ref{eq:prob_from_p_path}), and that observed function frequency is proportional to (\ref{eq:p_propto_p_C+1}) as well [Fig. \ref{fig:rank_ordering_LTM}], having a Pearson correlation of approximately 1.0 for $k = 2$, and 0.74 for $k = 4$. This also shows, due to (\ref{eq:p_propto_p_C+1}), an inverse rank-ordering relation between frequency and decision-tree complexity, appearing as a decreasing exponential in frequency. Given that, as mentioned in the introduction, there is an increasing exponential distribution of decision tree complexity in the truth table of all Boolean functions, this result is especially surprising.
\subsection{Function Distribution with Antagonism}
A similar simulation, having $N = 10000$ nodes, $k = 2$ inputs, ensembled over 500 realizations in a range of mean degree values $z$ and fraction of antagonistic nodes $\theta \in \{0, \frac16, \frac26,... 1\}$, reveals a sudden increase in the number of unique non-zero functions vs. both $z$ and $\theta$ [Fig. \ref{fig:num_functions_ALTM}].
The number of unique functions is maximized over several orders of magnitude near criticality, for $z \in [2^3, 2^{10}]$ and $\theta = 1/3$. Observing that antagonism and inhibition are interchangeable \cite{rojas2013neural} (Supplemental section A.2), this lends support to optimal information processing around $30\%$ inhibition, found in other research \cite{capano2015optimal}, and suggests why this fraction of inhibitory neurons seems prevalent biologically.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\vskip 0pt
\centering
\includegraphics[scale=.5]{images/numuniquefunctionsvsz.pdf}
\caption{}
\label{fig:num_functions_ALTM}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\vskip 0pt
\centering
\includegraphics[scale=.43]{images/probfreqk2N10000z64.0theta033.pdf}
\caption{}
\label{fig:rank_ordering_ALTM}
\end{subfigure}
\hfill
\caption{Antagonism fraction ($\theta$) agrees with biology; non-monotone functions also predicted by path requirements. (a) For networks with $N = 10000$ nodes and $k = 2$ inputs, over 500 realizations, varying the mean degree $z$ and fraction of antagonistic nodes $\theta \in \{0, \frac16, \frac26,... 1\}$, we observe that the mean number of unique functions per network is maximized over several orders of magnitude ($z \in [2^3, 2^{10}]$) by networks having a fraction of antagonistic nodes $\theta = \frac13$ (triangles), coinciding with other findings \cite{capano2015optimal}.
(b) At $\theta = \frac 13$ and $z = 2^6$, we again observe a skewed frequency, and a proportional relationship between function frequency and probability due to complexity (\ref{eq:p_propto_p_C+1}), having a Pearson correlation of $0.91$. Shaded region is one standard deviation. Probabilities have been centered and normalized. (Functions $f_0$ and $f_{15}$ have been removed, since in the ALTM they can occur outside of the GCC.)}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale =0.7]{images/nonmonotonicfunctionmotifs.pdf}
\caption{Motifs for non-monotone functions. Simplest \textit{logic motifs} to compute non-monotone Boolean functions $\{f_2$, $f_4$, $f_6\}$ [Table \ref{truth_table2}] in the ALTM at random node $u$, on seed nodes $a, b$. Dashed lines represent paths, and dashed nodes are antagonistic. Functions $f_{13}$, $f_{11}$, and $f_9$ are negations of these, respectively, so have very similar networks, negating each node.}
\label{fig:non_monotone_motifs}
\end{figure}
For this mix of LTM and ALTM nodes, we again observe a similar rank-ordering of functions, here at $z = 64, \theta = 1/3$, and that, as in the LTM, frequency is again proportional to probability derived from function complexity [Fig. \ref{fig:rank_ordering_ALTM}], having a Pearson correlation of 0.91.
We note, however, that (\ref{eq:complexity_vs_symmetry}) under-estimates the number of paths required for non-monotone functions. For example, $f_6$ (XOR) requires 4 paths between 5 nodes, all of which must be in the GCC [Fig. \ref{fig:non_monotone_motifs}], so that $p(f_6) \propto p_{gcc}^5$. However, this function's decision tree complexity $C = 2$, predicting by (\ref{eq:p_propto_p_C+1}) that $p(f_6) \propto p_{gcc}^3$. Therefore a more informative complexity measure is needed for non-monotone functions.
\section{Discussion}
As indicated in the title, we see the main result of interest as the spontaneous emergence of complex logic functions in minimally constrained random threshold networks. This then implies that many physical, biological, or other systems are able to perform such computation by ubiquitous avalanches or cascades.
We note that this result also begins to give us an explanation of the \textit{criticality hypothesis} vis-\`a-vis neuroscience \cite{massobrio2015criticality,hesse2014self,shew2013functional}. That is, at the critical threshold, with the emergence of the giant component, the number of unique functions spontaneously increases. Along with that comes an increase in the number of complex functions. As neuronal networks need to compute integrative complex functions on sensory information, or on information passed between modular areas in the brain, the utility of this complexity is self-evident \cite{sejnowski1988computational}. We note that in computational neuroscience, there is also discussion of the integration of information and complexity or consciousness \cite{tononi1994measure,lynn2019physics}. These motifs therefore give us a starting point for the relationship between structure and function as well.
Also, the present work connects to machine- or statistical-learning, where in classification, Boolean functions are computed on high-dimensional data. Until now, however, despite their ubiquity in nature, neither criticality nor cascades have played a large role in machine learning as a design paradigm or analytical framework \cite{rojas2013neural}. We see this as a large potential opportunity to improve deep learning methods.
The spontaneous emergence of complex computation is an example of a symmetry breaking phase transition, as the giant connected component (spanning cluster) comes into existence at the critical connectivity \cite{landau1937broken, anderson1972more}. We conjecture that we are witnessing how \textit{complexity of functionality results from symmetry breaking in systems} \cite{anderson1972more}. This complexity takes on a distribution that reflects a hierarchy in an exponential rank-ordering law.
We also see that, from a larger theoretical perspective, the confluence of cascades (percolation branching processes) and information processing by Boolean logic stands at the intersection between several very large and highly developed areas of research -- percolation- and computational automata-theory \cite{von1956probabilistic,christensen2005complexity}.
The specific mechanism of the logical automata realized by \textit{logic motifs} extends previous work about network motifs and their function, mainly in the genetic domain \cite{milo2002network}, into many other areas, again due to the ubiquity of cascades in threshold networks.
The observance of logic motifs as automata also allows us to change our perspective on network percolation. In the past, we saw it perhaps only in terms of connected component size distribution. Now, however, we may view these components as a \textit{zoo or library of functions}, available to the network by connection, much as importing a function occurs in programming languages. We note that the scale invariance at criticality may exist at the Pareto-optimal point between complexity and diversity. That is, there will be a small number of larger components computing complex functions, and a great number of very small, simple components having a large variety of thresholds.
\subsection{Future work}
In developing this work, we inevitably stumbled across an overwhelming number of ideas and directions that we can take. We can only briefly list them.
We have seen above that other complexity measures could be found for non-monotone functions, to better predict their frequency in mixed LTM/ALTM networks. We suspect that Boolean Fourier analysis would be fruitful here. We also expect that, for larger inputs, these non-monotone functions will dominate the function space, and that the Hamming cube symmetries make it possible to write a partition function for them. Along with this, it should be possible to predict more exact probabilities of functions, which depend on the occurrence of cascades being blocked, and of nodes inheriting their neighbors' complexity, among other factors.
We would also like to generalize these predictions to $k \gg 2$ inputs and much larger networks ($N \sim 10^9$ nodes), while understanding mechanisms and heuristics for learning by re-wiring in these large combinatorial spaces.
For example, we suspect that modularity develops as a network's capacity to extract complexity from inputs is exhausted. We also suspect that function distribution can be understood in terms of multiple network density percolation thresholds, depending on function path requirements, more evident for larger inputs.
Furthermore, we intend to study the relation between function and network symmetry in the context of symmetry breaking. We conjecture, for example, that there is a conservation law of complexity or information, meaning that what we call computation comes at the expense of lost information, rendering the network a kind of \textit{information engine} \cite{landauer1961irreversibility}, whose output is \textit{computation}, and that this lies at the heart of information creation.
Of course, it could also be fruitful to understand this work in terms of information processing, using measures such as transfer entropy, of increasing use in computational neuroscience and automata theory \cite{lizier2014framework}. Along with this we see an opportunity to formalize the \textit{criticality hypothesis} in light of our results on computation. In the hypothesis, avalanche criticality (the kind of percolation seen here) and the so-called \textit{edge of chaos} are conflated qualitatively, by saying that information processing is optimized ``near criticality'' \cite{beggs2008criticality,jensen2021critical}.
We would like to research the effects of geographic energy constraints and other network topologies, found in real-world systems, on the function phase space. For example we conjecture that both modularity and layering will result from restricting geographic connection distance, with a result that complex functions appear at nodes on the surface (or interface) of networks, convenient for passing to subsequent networks.
Finally, although we have used the term \textit{computation} here, it would be useful to carefully study the linear threshold model as a computing machine, especially when re-wiring, investigating its Turing completeness, run-time, and related phenomena.
\section{Conclusion}
Here we have shown that the Linear Threshold Model computes a distribution of monotone Boolean logic functions on perturbation inputs at each node in its network, and that with the introduction of antagonism (inhibition), any function can be computed. Notably, complex functions arise in an apparent exponentially decreasing rank-ordering due to their requirements for perturbation information from seed nodes, and these requirements correspond to their functional asymmetries. These asymmetries can be used to obtain their probability exponent as a function of the probability of belonging to the network's giant connected component. Finally, we observe that the number of unique functions computed by an LTM of mixed excitatory and antagonistic nodes is maximized near $1/3$ antagonism, over several orders of magnitude of connectivity, coinciding with other research.
\bibliographystyle{abbrv}
\section{Introduction}
Choosing appropriate hardware and tuning configuration parameters
is a common task when one wants to run software optimally.
For a complex and truly parallel interactive proof assistant such as Isabelle,
many factors influence run-time performance:
The prover needs a Java and a Meta Language (ML) run-time,
the number of threads is variable,
as is the amount of heap memory --
which in turn
(in combination with the CPU architecture family)
dictates which ML platform and hence Poly/ML backend may be used.
On a hardware level, CPU specs, the memory hierarchy, and interconnects
all influence how well the software components perform and how the system as a whole behaves.
The parallel efficiency of Isabelle
(i.e., the ratio of actual time versus sequential time divided by the number of parallel units)
decays according to a non-linear characteristic~\cite{Parallel2009Wenzel}, as is the case in most parallel systems.
As a result, there is no single hardware or software characteristic that dominates the observed performance behavior.
In Isabelle,
performance is important both in the interactive mode (such that processing changes and running solvers is faster)
and in a batch build mode, where \emph{sessions}
(i.e., collections of formalizations) can be processed.
Independent sessions can even be run in parallel with multiple ML processes.
However, making informed decisions on hardware is no trivial task.
Members of the Isabelle community have to rely on word of mouth to determine which processors and memory to use,
and configuration parameters
(such as the number of threads or heap sizes)
are largely folk knowledge
-- backed by experience collected over the years, ad-hoc experiments, and sometimes intuition.
While there is some performance data available,
it is not helpful in that regard as it only covers a very small number of machines.
With new and exciting hardware being developed at a fast pace,
one can often be overwhelmed by the sheer variety of hardware options available.
Hence, the question of which hardware to recommend for Isabelle can often not be answered exhaustively or satisfactorily.
This is relevant both for individuals working with Isabelle,
and for the larger-scale server infrastructure maintained for continuous integration of Isabelle and the Archive of Formal Proofs.
To alleviate this problem,
a solid data base with performance benchmark results
for a wide variety of involved hardware and configurations
is needed.
Not only would that directly answer the question of optimal configurations for a given system
and allow one to compare the hardware on the market,
but such a collection of data
(if large enough, and kept up to date)
would also allow one to predict performance of other hardware for which no Isabelle data is available yet.
In this paper,
we outline our Isabelle community benchmark,
discuss the immediate results and findings,
and derive a model to predict the Isabelle performance of unknown CPUs
with the help of widely used benchmarks for which more data is retrievable.
Our source code and data are made available publicly\footnote{\texttt{2022-paper} folder in \url{https://isabelle.systems/benchmark}}.
Section~\ref{sec:related} covers related work;
we explain our benchmark set-up in Section~\ref{sec:benchmark},
and discuss the results in Section~\ref{sec:results}.
In Section~\ref{sec:conclusion},
we conclude and discuss future work.
\section{Related Work}\label{sec:related}
Parallel run-time performance has been first analyzed for Isabelle
when parallelism was introduced by \citeauthor{Parallel2009Wenzel} in~\cite{Parallel2009Wenzel}.
Benchmarks for multiple different sessions on a single test machine already showed
that the speedup
(in terms of run-time)
peaked at three worker threads with a factor of \num{3.0},
and slightly decreased for four cores.
\citeauthor{PolyParallel2010Matthews} described the adaptations to the Poly/ML run-time
that were necessary for introducing parallelism,
and analyzed the resulting bottlenecks~\cite{PolyParallel2010Matthews}.
They found that the parallelization model for Isabelle sometimes failed to fully utilize all worker threads.
Moreover, the synchronization model that uses a single signal across all threads for guarded access
was identified (but not analyzed) as a potential bottleneck.
Finally, it was observed that the single-threaded garbage collection is responsible for up to \SI{30}{\percent} of CPU-time for \num{16} threads.
Overall, a maximum speedup of \num{5.0} to \num{6.2} could be achieved
using \num{8} threads.
In automatic theorem provers, run-time is an important factor,
since it can dictate whether a goal can be proven within the given cut-off time.
As a result, much research includes analysis of the run-time performance of provers
or individual prover components.
Typically, only a single hardware configuration is used,
which is reasonable for the analysis for single-threaded systems~\cite{PerformanceESat2016Schulz}.
However, since performing such analysis on a wide range of different hardware is often impractical,
run-time performance analysis of parallel approaches
is frequently carried out on single systems or clusters~\cite{PerformanceOR1991Ertel,ParallelDeduction1992Jindal,ParallelHyper2001Wu}.
These results do not always generalize, because the hardware used can have a significant impact on the observed results.
In contrast, results for the Isabelle \texttt{sledgehammer} proof-finder tool show that when running \emph{multiple} automatic provers to solve a goal,
run-time becomes less important:
In their \emph{judgement day} study~\cite{Judgementday2010Boehme},
\citeauthor{Judgementday2010Boehme} found that running three different Automated Theorem Provers for five seconds each
solved as many goals as running the most effective one for \SI{120}{\second}.
Subsequently, in direct follow-up work~\cite{SMTHammer2011Blanchette}, run-time was not analyzed.
For automatic provers, a large range of benchmarks exist to judge their effectiveness on a given set of problems.
One of these is the widely known TPTP library~\cite{TPTP2009Sutcliffe}.
However, there is not much work investigating the effect of hardware in the field of automated reasoning.
To the best of our knowledge,
there exists no other benchmark comparing the hardware impact on run-time performance of any theorem prover,
and this is the first work that analyzes this effect on a wide range of different hardware.
\section{Benchmarking Methodology}\label{sec:benchmark}
The benchmark has to fulfill multiple requirements:
It needs to capture typical computations found in Isabelle
--- mostly symbolic computation ---,
have a reasonable run-time for the end user,
and motivate users to want to see how their machines perform
(i.e., results should be self-evident).
We settled for a clean build of the HOL-Analysis session:
It is a typical Isabelle formalization
which runs in approximately five minutes on typical consumer machines.
Many Isabelle users have likely run a similar workload for their own work in the past.
While users can easily contribute results for their favourite Isabelle configuration,
we supplied a small script to run a comparable set of configurations automatically\footnote{Documentation and code at \url{https://isabelle.systems/benchmark}}.
This way,
the whole benchmark can be run with a single command
(assuming a working installation of Isabelle 2021-1)
on any platform.
We vary the underlying ML platform between $64\_32$ ($64$-bit mode with $32$-bit values) and true \num{64}-bit mode,
heap sizes of both the ML and JVM process
(set to the same value to reduce the number of combinations,
as early benchmark results indicated they play only a minor role here),
and the number of worker threads for parallel proof checking.
Results are collected in a collaborative spreadsheet\footnote{\url{https://docs.google.com/spreadsheets/d/12GhEwSNSopowDBq5gSem3u39fliiIcoTIZHMnX4RE3A}}
with automatically updated figures for fastest CPUs and parallel efficiency.
The benchmark is not intended as one-shot experiment,
but rather as a continuous community effort
to maintain an overview over the Isabelle computing landscape as new hardware emerges.
It is being kept open for new results, and will be maintained for future Isabelle versions.
\subsection{Benchmark Score (Isascore)}
For the benchmark results,
we use the wall-clock build time as an intuitive metric.
Together with the well-known HOL-Analysis build target,
the metric immediately gives a good understanding of Isabelle performance.
However, it is not well suited to compare to other metrics such as throughput,
because the relationship between time to solution and throughput is inverse.
To still allow using simple linear models such as the Pearson correlation,
we introduce a benchmark score that we call \emph{Isascore}.
It reflects the number of HOL-Analysis runs one could complete in a day, i.e.:
\begin{equation}
\text{Isascore}=\frac{\SI{1}{\day}}{\text{wall-clock time}}
\end{equation}
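For instance (illustrative numbers only), a benchmark run completing in a wall-clock time of \SI{300}{\second} yields $\text{Isascore}=\SI{86400}{\second}/\SI{300}{\second}=288$.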
\subsection{Threats to Validity}
The experiments discussed in this paper could not be performed in a controlled environment,
since they were run by members of the Isabelle community rather than exclusively by the authors of this paper.
This means that various outside factors may have influence on the reported results,
though it seems reasonable to assume that those factors should usually be constant between different configurations of the same benchmark run.
The effect of machine-local anomalies can be mitigated for hardware where we received several independent measurements by using statistical techniques.
Furthermore,
due to reasons of practicality in orchestrating data collection,
extended system specifics beyond the CPU model, OS, and memory configuration were not recorded.
There is a possibility that relevant parameters may have been missed.
Therefore, like all performance benchmarks, these results represent upper bounds of what might be achieved with a given system configuration.
Lastly,
while the benchmark was posted on the Isabelle-users mailing list,
in principle the data entry was open to the public and could have been misused.
\section{Results}\label{sec:results}
At the time of writing this paper,
\num{669} results for a total of \num{594} unique configurations have been reported,
utilizing \num{54} distinct CPUs.
Those include Intel Desktop/Server CPUs from Sandy Bridge to Alder Lake,
AMD Ryzen Zen2 to Zen4 processors as well as Epyc and Threadripper server systems,
a Fujitsu A64FX, and Apple M1 processors.
\input{tables/top_cpus}
Table \ref{tab:top_5_cpus} shows the five CPUs with the lowest time to solution,
using the median value as an aggregate for multiple runs of the same configuration.
Older Intel and AMD consumer hardware is surpassed by the Apple M1 Pro chip;
only the most recent Intel Core line performs better.
Due to the nature of the benchmark, server and high performance hardware does not rank highly,
with the best performing system (2x AMD Epyc 7742) clocking in at \SI{184}{\second}.
In the following, we analyze how Isabelle configuration influences performance,
investigate the impact of hardware parameters,
and then compare our results to other computational benchmarks.
Where individual CPUs were concerned,
we filtered out special system configurations (e.g., overclocked hardware, dual-CPU systems, power-saving mode).
We also encountered a small number of extreme outliers where Isabelle run-time was much longer than expected.
For two of those, we could identify the user and investigate;
in both cases, the system configuration was at fault
(excessive swapping, UEFI set to \enquote{silent mode})
and when corrected, results were much closer to the rest of the data.
We could not investigate the third extreme outlier but excluded it from the following,
since it is likely to stem from a similar cause.
\subsection{Multi-Threaded Performance}\label{sec:performance_threads}
The number of threads used plays a major role in the overall performance.
\autoref{fig:time_by_threads} illustrates how the wall-clock time and CPU time compare from a single thread up to \num{128} threads.
The optimal wall-clock time is achieved with \numrange{8}{16} threads
depending on the hardware; beyond that, run-time increases substantially.
This is typical behavior for a strong-scaling benchmark like ours,
where the relative impact of communication increases with an increase in the number of threads used.
The underlying limitations of the parallel computing model
-- the single-threaded garbage collection of the Poly/ML run-time and worker starvation after parallelization is saturated --
were already discussed in~\cite{PolyParallel2010Matthews},
albeit tests were run on a machine with 32 cores.
It might be a surprise that the scalability is so low when distributing across more threads.
In contrast,
the CPU time divided by number of threads
(which is not an ideal metric, but the only feasible solution due to the nature of the benchmark)
flattens out at eight threads.
In small-scale experiments, we found that the JVM process takes up a constant amount of CPU time independent of the number of threads (about \SI{26}{\percent} in single-core mode).
This means that there is not too much computation overhead
but the hardware can not be properly utilized by the ML process,
most likely due to the single-threaded garbage collection that stops all threads when running.
This is an inherently sequential task, which means that Amdahl's Law (the speedup is limited by $1 / (1-\text{parallelizable portion})$~\cite{amdahls}) limits the achievable speedup for this problem.
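As an illustration of this bound (the parallelizable fraction here is chosen for illustration only and is not fitted to our measurements):
\begin{verbatim}
def amdahl_speedup(f_par, n):
    # Amdahl's law: the sequential remainder (1 - f_par) caps speedup.
    return 1.0 / ((1.0 - f_par) + f_par / n)

for n in (1, 2, 8, 32, 128):
    s = amdahl_speedup(0.9, n)  # assume 90% parallelizable work
    print(f"{n:3d} threads: speedup {s:5.2f}, efficiency {s / n:.3f}")
\end{verbatim}
Even this optimistic 90\% parallel workload saturates below a speedup of \num{10}, and its parallel efficiency falls below \num{0.1} at \num{128} threads, qualitatively matching the decay discussed next.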
\input{figures/time_by_threads}
The parallel efficiency paints a similar picture in \autoref{fig:parallel_efficiency},
decreasing almost linearly (on the logarithmic x-axis) up to \num{32} threads
at which it is at a median of \num[round-mode=places,round-precision=3]{0.0653066}.
With the number of threads tending to the limit of \num{128},
it approaches \num[round-mode=places,round-precision=3]{0.0024071}.
There is an outlier where the parallel efficiency is over one --
super-linear speedup is unusual but can appear in practice because of caching effects or resource contention in the measured system.
\input{figures/parallel_efficiency}
\subsection{Performance Impact of Heap Memory}
As preliminary results indicated that heap memory
(as long as sufficient)
only plays a minor role in performance,
we keep the JVM and Poly/ML processes at the same heap size.
We know from experience that a few gigabytes of memory suffice for HOL-Analysis;
however, increased parallelism requires more memory in principle due to memory overhead.
Hence, the range of examined heap sizes depends on the number of threads used.
\autoref{fig:heap_boxplot} shows the change in run-time for different heap settings relative to the minimal setting.
The boxes capture the 25th and 75th percentiles as height and sampling size as width;
whiskers correspond to the extreme values.
The results show that performance is not affected very much by heap size.
Following the line of medians,
wall-clock time slightly increases above \SI{16}{\giga\byte}
(where the \SI{64}{\bit} Poly/ML backend needs to be used, as the more efficient $64\_32$ mode does not allow more than \SI{16}{\giga\byte}),
as well as for very large values.
We observed a single outlier for \num{64} threads and \SI{128}{\giga\byte} heap memory
at a relative factor of \num[round-mode=places,round-precision=2]{0.657892}.
\input{figures/heap_boxplot}
\subsection{Influence of Hardware Characteristics}
Based on folk knowledge about Isabelle performance,
we suspected that cache size would be a major factor;
it was debated whether boost clock speed would be relevant.
To test the hypotheses, we analyzed the impact of size of the L3-cache, base clock speed,
and maximal (boost) clock speed
(ignoring power-save cores where applicable)
on Isabelle performance.
Table \ref{tab:hardware_param_cor} shows the correlation between Isascore and those parameters (APA notation as explained in caption).
At our significance level of $0.05$,
we did not find cache size to impact performance significantly.
Base frequency is weakly correlated with the Isascore for a single thread (though at the edge of significance) and a bit more strongly (and much more significantly) in the multi-threaded scenarios.
Finally, boost frequency has a significant medium correlation for all modes,
which is strongest in the single-threaded configuration with a value of \num[round-mode=places,round-precision=2]{0.5534098}.
\input{tables/hardware_param_cor}
A possible explanation is that boost frequency can only be sustained for a single core in most CPUs,
hence single-threaded performance profits from it a lot;
in the multi-threaded scenario, the actual core frequency is much closer to the base frequency
and thus its impact is larger.
\subsection{Comparison to Computational Benchmarks}
Performance benchmarks exist for many applications;
additionally, synthetic benchmarks are often used to evaluate hardware performance.
They can roughly be categorized into scientific computing versus consumer benchmarks.
In the following, we compare the results of our Isabelle community benchmark with a number of publicly available datasets for such benchmarks.
For the comparison, we selected results with matching processors,
and matched the benchmarks' multi-thread setting (e.g., specific thread count, or all cores).
To obtain sufficiently large datasets,
we selected some of the most popular benchmarks.
\subsubsection{Benchmarks in High-Performance Computing}
The first analysis we wish to conduct is a comparison of Isabelle performance with some scientific programs.
For this analysis, we chose to import data from the High Performance Computing suite on OpenBenchmarking.
We selected the three benchmarks that had the most public results available
(in their primary configuration)
at the time of writing: \emph{Himeno}, \emph{NAMD}, and \emph{Dolfyn}.
Himeno\footnote{Results from \url{https://openbenchmarking.org/test/pts/himeno}} is an incompressible fluid analysis code written in Fortran and C~\cite{himeno}.
While a distributed memory parallel version exists (using MPI with Fortran VPP),
we concern ourselves with the sequential implementation.
NAMD\footnote{Results from \url{https://openbenchmarking.org/test/pts/namd}} is a shared memory parallel molecular dynamics code based on C++ and Charm++~\cite{namd}.
The data we use stems from machine-wide parallel trials.
Finally, Dolfyn\footnote{Results from \url{https://openbenchmarking.org/test/pts/dolfyn}} is a sequential computational fluid dynamics code based on Fortran~\cite{dolfyn}.
\input{figures/hpc_benchmarks}
\autoref{fig:hpc_benchmarks} shows the results when correlating each of the high-performance computing benchmarks with Isabelle performance.
Himeno reports performance in terms of work done over time
(where higher is better),
while NAMD and Dolfyn measure time
(per simulated \si{\nano\second}, and to solution;
lower is better).
For Himeno, we therefore compare against Isascore,
while with NAMD and Dolfyn we compare against our observed wall clock time.
NAMD, as the only benchmark of these three
that scales well with parallel resources,
has no significant correlation with single-threaded Isabelle time.
However, it has a strong linear relation with multi-threaded time.
The two less scalable benchmarks correlate much more closely with Isabelle single-thread performance, where Dolfyn has a particularly nice correlation that holds well for the most performant processors.
In both cases, correlation with multi-threaded Isabelle results is much worse
($R^2$-values: Himeno \num[round-mode=places,round-precision=2]{0.4563649}, Dolfyn \num[round-mode=places,round-precision=2]{0.5242167}).
For both the Isabelle benchmark and Dolfyn,
the top processor that was tested is the same
(Intel i7-12700K),
and on both benchmarks it leads the runners-up by a margin.
This is also visible on the Himeno benchmark, where the 12700K produces the highest floating point throughput of all tested processors.
However, it is not a highly parallel processor, which is why its NAMD results are less favorable.
This again shows that Isabelle performance is significantly impacted by the single-thread performance of the underlying processor.
\subsubsection{Consumer CPU Benchmarks}
For our second comparison,
we chose some of the most common consumer benchmarks to compare to:
\emph{PassMark CPU Mark}\footnote{Results from \url{https://www.cpubenchmark.net/CPU_mega_page.html}}, \emph{Geekbench 5}\footnote{Results from \url{https://browser.geekbench.com/processor-benchmarks.json}}, \emph{Cinebench R15}\footnote{Results from \url{https://us.rebusfarm.net/en/tempbench}}, and \emph{3DMark CPU Profile}\footnote{Results from \url{https://www.3dmark.com/search}, median over the top-\num{100} values}.
For sequential performance,
\autoref{fig:consumer_benchmarks} shows the scatter plots of Isascore to consumer benchmark scores,
which are normalized to a $[0;1]$ range so the plots can be compared against each other.
A strong positive relationship can be observed for all benchmarks,
with $R^2$-values in the range \numrange[round-mode=places,round-precision=2]{0.6272175}{0.8247232}.
A few moderate outliers are present (possibly due to system configuration).
All in all, the Isabelle benchmark seems quite similar to those consumer benchmarks for a single thread.
\input{figures/consumer_benchmarks}
This suggests predicting Isabelle performance,
which would allow one to judge hardware on which Isabelle has not been run before.
However, single-threaded results are not meaningful for real-world performance,
and scaling them according to the average parallel efficiency did not yield helpful results
($R^2$-values: Cinebench \num[round-mode=places,round-precision=2]{0.590943}, Geekbench \num[round-mode=places,round-precision=2]{0.6969933}, PassMark \num[round-mode=places,round-precision=2]{0.6784529}, 3DMark \num[round-mode=places,round-precision=2]{0.5845242}).
Not many datasets for consumer benchmarks report on results for different number of threads,
most report only a single \enquote{multi-core} value where all threads are utilized.
An exception to that is the 3DMark CPU Profile benchmark,
where results are reported for \numrange{1}{16} threads individually
(in steps by power of two).
This allows us to create a better correlation, because all consumer benchmarks tested had a far better parallel efficiency in the limit
and were hence not suited for direct prediction.
When using \num{8} and \num{16} threads in both the 3DMark and Isabelle benchmark,
score and Isascore are strongly to moderately correlated and have individual $R^2$-values of \num[round-mode=places,round-precision=2]{0.7683492} and \num[round-mode=places,round-precision=2]{0.6113401}, respectively.
This makes the 3DMark well suited for performance prediction.
Since the optimal number of threads is in between,
we use the average of its \num{8}-thread and \num{16}-thread results
to create a linear model for performance prediction
(tuning for a non-uniform split did not yield better results).
Using ten times ten-fold cross-validation
(i.e., averaging results over multiple iterations,
splitting the data into ten parts, and using each part as a test set and the remainder as training set),
the linear regression has an average $R^2$-value of \num[round-mode=places,round-precision=2]{0.8678399}.
\autoref{fig:predictor} shows the final model ($R^2=\num[round-mode=places,round-precision=2]{0.8442465}$) and the resulting predictor for wall-clock time, which has a mean absolute error (MAE) of \SI[round-mode=places,round-precision=1]{46.5588704}{\second}.
However, that error is somewhat exaggerated by the data collection method:
The public 3DMark data shows only the top-$100$ results, from which we use the median values.
The regression improves slightly when the
(non-public) true medians are used,
and the MAE decreases to \SI[round-mode=places,round-precision=1]{37.1461619}{\second}.
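A minimal sketch of this validation procedure (using scikit-learn; the arrays below are synthetic stand-ins, not our measurements):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-ins: one averaged 8/16-thread 3DMark score and one
# HOL-Analysis wall-clock time (seconds) per CPU.
score = rng.uniform(2000, 12000, size=54)
time_s = 900.0 - 0.06 * score + rng.normal(0.0, 40.0, size=54)

cv = RepeatedKFold(n_splits=10, n_repeats=10, random_state=0)
r2 = cross_val_score(LinearRegression(), score.reshape(-1, 1),
                     time_s, scoring="r2", cv=cv)
print(f"mean R^2 over ten times ten-fold CV: {r2.mean():.2f}")
\end{verbatim}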
\input{figures/predictor}
The residual plot displayed in \autoref{fig:predictor_residual} has no noticeable patterns
and the residual distribution roughly follows a normal distribution.
All in all, the model simplicity and good fit
indicate that this linear model is quite well suited for performance prediction,
as long as the other system parameters are kept within reasonable bounds
and the configuration is well tuned.
\input{figures/predictor_residual}
\section{Conclusion}\label{sec:conclusion}
This work resolves our questions on Isabelle performance for the 2021-1 version.
The Isabelle community benchmark that we initiated saw lively participation
and hundreds of results were reported for a total of \num{54} distinct CPUs.
The results form a solid data base for tuning of Isabelle configuration;
when not constrained, the optimal configuration is at \numrange{8}{16} threads with \SI{16}{\giga\byte} heap memory for both the Java and ML process
(at least for HOL-Analysis, larger sessions might require more).
When buying new hardware,
the benchmark results give a good indication of which processor is desirable for Isabelle.
Individual CPU parameters are less important:
clock speeds are only correlated with medium strength (boost clock more so than base clock),
and cache size not significantly at all.
Instead, for hardware that has not yet been tested with Isabelle,
other benchmarks can greatly help in judging performance:
While the single-threaded Isabelle benchmark score is strongly correlated with many benchmarks
(most strongly with the Dolfyn high-performance benchmark and the PassMark CPU Mark),
the multi-threaded scenario was more difficult to model.
In the end, we found a good predictor by using 3DMark CPU Profile scores from \num{8} and \num{16} threads,
with a final mean absolute error of \SI[round-mode=places,round-precision=2]{46.5588704}{\second}.
The model has a good fit, and one can assume that it is fairly future-proof given that hardware from the last ten years is properly predicted.
\section{Future Work}\label{sec:future}
If the Isabelle computation model does not change,
there is not much left to be done on the topic of performance prediction:
The benchmark from which Isabelle performance is predicted is widely popular and data for new hardware is usually quickly added,
and the fit is about as good as one can hope for.
Still, the model should be validated after a few years,
and we are curious to see whether future hardware characteristics will be able to break the trend.
One other aspect that we did not touch on is running multiple independent sessions in parallel,
which is often possible on automated large-scale Isabelle builds, e.g., for the Archive of Formal Proofs.
This can be done on multiple processes that run independently of each other and greatly increases the usefulness of larger server CPUs with many cores;
then, other parameters such as memory bandwidth might be more important and have to be analyzed.
However, given the large cost of such machines,
it would be much more economical to instead distribute the build on multiple cheap but fast desktop CPUs
(especially when latency is a concern, not throughput).
\section{Abstract}
The wavelength range $912-2000$\,\AA\ (hereafter far-UV) provides access to atomic and molecular transitions of many species in the interstellar medium (ISM), circumgalactic medium (CGM), and intergalactic medium (IGM), within phases spanning a wide range of ionization, density, temperature, and molecular gas fraction.
Far-UV space telescopes have enabled detailed studies of the ISM in the Milky Way, paving the way to understand, in particular, depletion of elements by condensation onto dust grains, molecular gas formation pathways, ionization processes, and the dynamics of shocks. Absorption features appearing in the UV spectra of hot stars yield fundamental insights into the composition and physical characteristics of all phases of the ISM along with the processes that influence them. \textit{However, no single instrument has as yet given access to species in all ISM phases at the same high spectral resolution}: from the molecular bands of CO and H$_2$, to the cold neutral medium tracers (e.g., C{~\sc i}, S{~\sc i}), the warm medium tracers (e.g., C{~\sc ii}, N{~\sc i}, O{~\sc i}, Mg{~\sc ii}, Fe{~\sc ii}, Si{~\sc ii}\ etc...), and to the multiply charged ions of the hot ionized medium (e.g., C{~\sc iv}, Si{~\sc iv}, as well as O{~\sc vi}).
A comprehensive study of ISM phases, their interaction, and the nature of their boundaries requires comparing abundances and velocity profiles of tracers within these different phases but we have yet to design the spectrometer able to observe the full UV domain at resolving power $R>100\,000$ and detectors that can reach a signal-to-noise ratio SNR$>500$. The line FWHM being governed by turbulence, temperature, and species mass, such a resolution is necessary to resolve lines from both the cold molecular hydrogen and the warm metal ions with a turbulent velocity of $\approx1$ {\,km\,s$^{-1}$}, and to differentiate distinct velocity components, typically separated by less than $2$\,{\,km\,s$^{-1}$}. Future UV spectroscopic studies of the Milky Way ISM must \textit{revolutionize our understanding of the ISM as a dynamical, unstable, and magnetized medium, and rise to the challenge brought forward by current theories}. In particular, can we obtain observational signatures of the dissipation of turbulence in a magnetized medium, and how does the magnetic field help in structuring the ISM and in the transport of matter between phases? At stake is the full understanding of the atomic-to-molecular transition, the molecular complexity, and the various associated diagnostics that are currently used from dissipative to galactic scales.\\
\textit{[Cont'd next page.]}
Another interesting prospect is to \textit{transpose the same level of details that has been reached for the Milky Way to the ISM in external galaxies}, in particular in metal-poor galaxies, where the ISM chemical composition, physical conditions, and topology (arrangement of various phases) change dramatically, with significant consequences on the star-formation properties and on the overall galaxy evolution. We need to know, in particular, how star formation proceeds in quasi-pristine or pristine environments and what is the role of accreting clouds and compact objects in regulating the star formation process. To circumvent systematic biases in column density determinations and to examine the ISM enrichment as a function of the environment, next far-UV missions should be versatile enough to observe stellar clusters and individual O/B stars at distances of few Mpc to few $100$s\,Mpc respectively, with a spectral resolution power $R\sim10^5$ and $10^4$ respectively.
Such requirements are also necessary to perform statistical analyses of background quasar lines of sight intersecting the CGM of galaxies at various redshifts. The CGM is an important component of galaxies that connects the galaxy to the cosmic web and is, as such, at the center of many processes that we do not yet fully understand. With future UV missions, we ought to be able to \textit{fill the gap between the various physical scales and phases and to comprehend the role of gas exchanges and flows for galaxy evolution}.
We advocate a far-UV space telescope that will be able to tackle these issues. Such an observatory would enable access to the range $\approx900-3100$\,\AA, and would require an exceptionally large mirror $>5$\,m. The optimal observation mode would be spectropolarimetry with a resolution of $R\gtrsim10^5$.
\bigskip
\textbf{This ESA white paper draws from and expands on the following white papers previously submitted for the Astro 2020 Decadal Survey, \cite{Gry2019a}, \cite{Lebouteiller2019a}, and \cite{Yan2019a}. The original ESA white paper was reformatted as a 2-column paper for the present document.}
\par
\hfill
\end{strip}
\section{Understanding the chemistry and physics of the ISM: The Milky Way laboratory}
\textbf{
\begin{itemize}
\item \textit{How is the ISM structured? Is the structure of molecular clouds compatible with our current understanding of physical and chemical processes in the ISM?}
\item \textit{What are the gas ionizing and heating mechanisms and what consequences for the chemistry and for star formation?}
\item \textit{How do the cold and hot medium trade matter and entropy at their interfaces?}
\end{itemize}}
\bigskip
The Milky Way ISM provides the prime reference to understand the basic processes occurring in the ISM. While it is important to consider and to examine the ISM as an astrophysical object by itself, a profound understanding of its properties is also expected to lead to parameters that are relevant for star formation and galaxy evolution.
Matter in the interstellar space has long been considered to be distributed in diverse, but well defined phases that consist of (1) the hot, ionized interstellar medium (HIM; $T\sim10^{6-7}$\,K) emitting soft X-rays, (2) the warm-hot ionized medium (WHIM; $\sim10^{4-6}$\,K), (3) the warm neutral or ionized medium (WNM+WIM) ($6000-10^4$\,K), and (4) the cold ($\sim10-200$\,K) neutral medium (CNM) and molecular star-forming clouds that occupy only a few percent of the volume (e.g., \citealt{Field1969a,Bialy2019a}). The different phases are supposed to be globally consistent with hydrostatic equilibrium \citep{Ferriere1998a}, but the multi-phase aspect of the ISM is not fully understood and
non-equilibrium conditions are often contemplated. Observations do seem to indicate that non-equilibrium conditions might be key to describing the observed abundances. How the different phases are related, how mass flows from one phase to the other, and how the transition from H$^0$ to H$_2$\ occurs -- which is the first step in the pathway leading to the formation of more complex molecules -- are still open questions.
A comprehensive study of ISM phases and the nature of their boundaries or connections requires comparing abundances and velocity profiles from tracers of the different phases. In the UV, studies of a wealth of absorption features appearing in the spectra of hot stars yield fundamental insights into the composition and physical characteristics of all phases of the ISM along with the processes that influence them. They also inform us on the nature of boundaries between them. However no single UV instrument has as yet given access to species in all ISM phases at the same high spectral resolution: from the molecular bands of CO and H$_2$ to the highly ionized transitions that trace the HIM (C{~\sc iv}, Si{~\sc iv}, as well as O{~\sc vi}), and passing through the CNM (traced by C{~\sc i}\ and S{~\sc i}) and the WIM tracers (like C{~\sc ii}, N{~\sc i}, O{~\sc i}, Mg{~\sc ii}, Fe{~\sc ii}, Si{~\sc iii}...).
The line width being governed by turbulence, temperature, and mass, a resolving power $R=\lambda/\Delta\lambda>120\,000$ is necessary to resolve cold atomic and molecular gas with a turbulent velocity of $1-2${\,km\,s$^{-1}$}. \textit{We have yet to design the instrument that observes at $R>120\,000$ the full UV domain to study the multi-phase ISM.}
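As a worked example (with illustrative, typical values), the Doppler parameter of a line is $b=\sqrt{2kT/m+\xi^2}$, where $\xi$ is the turbulent velocity, and the line FWHM is $2\sqrt{\ln 2}\,b\approx1.665\,b$. For H$_2$\ at $T=80$\,K with $\xi=1${\,km\,s$^{-1}$}, $b\approx\sqrt{0.81^2+1^2}\approx1.3${\,km\,s$^{-1}$}, so the FWHM is $\approx2.1${\,km\,s$^{-1}$}; resolving such a line requires a channel width $\Delta v=c/R\lesssim2${\,km\,s$^{-1}$}, i.e. $R\gtrsim1.5\times10^5$.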
\subsection{Cold, dense and molecular gas}
The H$_2$ molecule is extremely difficult to detect and until now only a few missions have been able to trace it directly (in particular \textit{Copernicus} and FUSE). H$_2$ can only be observed in two ways: (1) via rovibrational transitions in the IR and (2) in absorption in the far-UV. Observing lines in absorption is the only way to measure atoms and molecules in their ground state, which is particularly important for H$_2$, found essentially in the rotational excitation levels $J=0$ and $1$ in the ISM. Infrared vibrational emission lines concern a negligible fraction of the total H$_2$. Therefore a direct estimate of the mass of the cold molecular gas can only be obtained by observing H$_2$\ in absorption in the far UV. Such a direct estimate is necessary in order to confirm and improve the various recipes (e.g., based on CO or dust measurements) used to infer the mass of H$_2$ and in turn to relate to the star formation process (e.g., \citealt{Narayanan2012a}). At the same time, a detailed knowledge of the molecular gas physical conditions and chemical properties is necessary in order to understand the role of the environment (radiation, shocks...) for the star formation process.
Basic processes of interest in the molecular ISM include: (1) H$^0$-to-H$_2$\ transition that can be probed with a large sample of various low- to high-extinction targets and the role of internal spatial structures for the self-shielding process of diffuse H$_2$, (2) chemistry in truly molecular regions (molecular fraction $f_{\rm H{_2}}\approx 1$, shielded from UV radiation by dust and H$_2$) where ionization by penetrating cosmic rays plays an important role, (3) change of molecular composition with $A_V$ by reaching opacities where HCN and HNC exist, (4) variation of the CO/H$_2$\ abundance ratio with cloud depth, (5) characterization of the C$^0$-to-CO transition (occurring around $A_V=2$ depending on the interstellar radiation field), and (6) internal velocity structure and measure of the turbulence in molecular clouds.
At the same time, observations of H$_2$ can be interpreted correctly only if the dynamical influence of the environment as well as the interplay between the thermal processes related to the formation and destruction of H$_2$ are accounted for \citep{Valdivia2016a,Valdivia2017a}. A significant fraction of warm H$_2$, heated by the local dissipation of turbulent kinetic energy, exists in the low-density gas, thereby reflecting the complex intermix between the warm and cold gas in molecular clouds. The warm out-of-equilibrium H$_2$ is especially important for the formation of molecular species with high endothermicity, such as CH$^+$ \citep{Nehme2008a}. Section\,\ref{sec:dynamic} specifically discusses the dynamical aspects in the diffuse ISM.
Carbon chemistry, in particular the abundance of C$^0$ can also be examined by looking for an expected discontinuity in obscured spectra at $1102$\,\AA\ due to the carbon photoionization continuum \citep{Rollins2012a}. For $A_V=2$ (resp.\ $A_V=3$) a flux decrease by a factor of $10$ (resp.\ $100$) is expected. Such a discontinuity depends on the relative abundance of C$^0$, governed by $n_{\rm H}$, $A_V$, and chemistry. Reaching higher $A_V$ ($\gtrsim2$) than those probed with FUSE is therefore required. It must also be noted that carbon ionization also depends on the abundance and charge of polycyclic aromatic hydrocarbons \citep{Kaufman2006a}. Column densities and abundance ratios of C$^0$, C$^+$, CO, and other molecules such as OH, H$_2$O, and C$_2$ (with far-UV transitions) or CH and CH$^+$ (in the visible) will then have to be compared to detailed photo-chemical models (e.g., Meudon PDR; \citealt{LePetit2006a}), dynamical models (e.g., Paris-Durham Shock model, \citealt{Lesaffre2013a,Godard2019a}), and numerical simulations of multiphasic turbulent ISM (e.g., \citealt{Valdivia2016a}).
High resolution of the order of $1-2$\,{\,km\,s$^{-1}$}\ is required to disentangle individual velocity components and separate fine-structure lines or H$_2$ rotational and rovibrational levels. H$_2$, in particular, has been observed at a resolution better than $10$\,{\,km\,s$^{-1}$}\ only in a few lightly-reddened -- thus low $N({\rm H}_2)$ -- Galactic lines of sight \citep[with IMAPS, e.g.,][]{Jenkins1997a}. Paradoxically, higher spectral resolution has been more readily available in the distant universe, due to redshifts that bring H$_2$\ lines into the visible spectrum \citep[e.g.,][]{Noterdaeme2007a}. High resolution in the far-UV is absolutely necessary to study the physical conditions, the excitation and formation mechanisms of H$_2$\ in the local universe. In more reddened sight lines it will be challenging to distinguish individual components in the $J=0,1$ lines which will be damped, but they could be resolved in the higher-$J$ level lines.
High sensitivity is also required to reach high-extinction targets since we need to measure abundances in clouds with various opacities, and access regions where molecular composition changes dramatically and where the chemistry is influenced by penetration of cosmic rays or X-rays.
Following \cite{Jenkins1999a}, for the usual gas-to-dust ratio, the log of the stellar flux at $1150$\,\AA\ decreases by $\sim -6.4\times10^{-22}\times N({\rm H}_{\rm tot}$), thus by $\sim -1.2 \times A_V$ relative to a non-reddened star. An extinction of $A_V=4$ produces an obscuration by a factor $60\,000$. Still, with an effective area three times that of HST/COS, $20$ minutes exposure time could provide a signal-to-noise ratio of $100$ for stars comparable to the bright $\it Copernicus$ targets but obscured by this amount of material.
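To make these numbers explicit: adopting the usual Galactic gas-to-extinction ratio $N({\rm H}_{\rm tot})/A_V\approx1.9\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ (an assumption consistent with the scaling above), $6.4\times10^{-22}\times1.9\times10^{21}\approx1.2$ per magnitude, so that $A_V=4$ dims the far-UV flux by $10^{1.2\times4}=10^{4.8}\approx6\times10^{4}$.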
We list below some of the most important applications enabled by such requirements.
\paragraph{Calibrating the $I_{\rm CO}/N({\rm H}_2)$ relation}
The CO emission is widely used to trace H$_2$\ but the CO-to-H$_2$\ conversion factor, $X_{\rm CO}$, is known to depend on metallicity and most of the time it relies on indirect measurements of H$_2$\ (e.g., \citealt{Bolatto2013a}) such as virial mass, gamma-rays, dust emission, dust absorption, surrogate molecules, all of which with potentially uncertain calibration relationships. By measuring CO and H$_2$\ together in absorption in low-optical depths transitions, and CO in emission along the same lines of sight, we can calibrate the CO-to-H$_2$\ conversion factor traditionally used for emission lines
\citep{Burgh2007a,Liszt2008a,Liszt2017a}. By observing lines of sight of different extinction, different metallicity, and resolving the different components in the line of sight, we can measure X$_{\rm CO}$ as a function of cloud depth ($A_V$) and of metallicity (which can then be applied to the low-metallicity systems in the distant universe).
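For reference, the conversion factor is commonly defined as $X_{\rm CO}\equiv N({\rm H}_2)/I_{\rm CO}$, with $I_{\rm CO}$ the velocity-integrated CO\,(1--0) line intensity; the canonical Milky Way value is $X_{\rm CO}\approx2\times10^{20}$\,cm$^{-2}$\,(K{\,km\,s$^{-1}$})$^{-1}$ \citep{Bolatto2013a}. Joint absorption measurements of $N({\rm H}_2)$ and $N({\rm CO})$ along sight lines with CO emission data would anchor this value directly.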
It is also important to characterize the ``CO-dark'' zones, where hydrogen is already molecular, but carbon is still in the form of C$^+$ or C$^0$. These zones, with $A_V= 0.1-1$ \citep[e.g.,][]{Wolfire2010a}, can represent a significant or even dominant amount of H$_2$ (e.g., \citealt{Grenier2005a,Madden1997a}).
\paragraph{How is H$_2$ excited?}
The population of H$_2$\ in the $J>2$ rotational levels in the standard diffuse ISM is not well understood: is it due to radiative excitation by optical pumping \citep{Gillmon2006a} or to the presence of warm H$_2$\ \citep{Verstraete1999a,Gry2002a,Falgarone2005a}?
A deeper understanding requires a
better characterization of the gas through the
measurement of temperature and turbulence of H$_2$\ in the high-$J$ levels.
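A sketch of how such measurements constrain the excitation (assuming the level populations can be described by a Boltzmann distribution with excitation temperature $T_{\rm exc}$): the column density $N_J$ in rotational level $J$, normalized by its degeneracy $g_J=(2J+1)\,g_I$ (with nuclear spin weight $g_I=1$ for even $J$ and $3$ for odd $J$), follows
\[
\ln\frac{N_J}{g_J}={\rm const}-\frac{E_J}{k_{\rm B}T_{\rm exc}},
\]
so the slope of the excitation diagram yields $T_{\rm exc}$, and a break between the low-$J$ and high-$J$ levels is the signature of a warm component or of pumping.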
At a resolving power higher than $10^5$, distinctive signatures of individual components or cloud regions with different excitation could be identified.
Some models invoke shocks or the dissipation of turbulence in vortices to produce the $J>2$ H$_2$\ as well
as CH$^+$ \citep{Godard2014a}, with specific signatures in the velocity distribution of the warm H$_2$\ gas.
In several cases an increase in velocity dispersion with $J$ has been observed \citep[often at high redshift because of higher resolution,][]{Noterdaeme2007a,Klimenko2015a}. Towards the star $\zeta$\,Ori\,A (observed at high resolution with IMAPS), \cite{Jenkins1997a} interpreted it as H$_2$\ being created in a post-shock zone via the formation of H$^-$.
Other interpretations involve energy being transferred to vibrational and rotational excitation upon H$_2$\ formation.
If a significant fraction goes into ejecting the newly formed molecules at large velocity \citep[fast H$_2$\ production,][]{Barlow1976a}, \cite{Jenkins1999a} have shown that this would produce detectable broad wings in the high-$J$ H$_2$\ profiles, provided the resolution is high enough to differentiate them from the slow H$_2$. Observing the detailed velocity dispersions makes it possible to infer the fraction of the available $4.5$\,eV energy that is transferred to H$_2$\ excitation, to kinetic motion of the molecules, or dissipated as heat in the grain.
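An order-of-magnitude bound illustrates why this requires high resolution (a sketch assuming the full formation energy is converted into kinetic motion of the molecule):
\[
v_{\max}=\sqrt{\frac{2E}{m_{\rm H_2}}}\approx21\,{\rm km\,s^{-1}}\quad\mbox{for } E=4.5\,{\rm eV},
\]
so the broad wings of fast H$_2$\ would extend over at most $\sim20$\,{\,km\,s$^{-1}$}\ and must be separated from the narrow, slow H$_2$\ at a resolution of a few {\,km\,s$^{-1}$}.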
\paragraph{Small-scale structures in interstellar clouds}
Knowing both the velocity and spatial structure is critical to describe the H$_2$\ self-shielding and the H$^0$-to-H$_2$\ transition.
The spatial structure can be studied through repeated observations of lines of sight as they drift through the foreground clouds due to the motions of the target star or the observer \citep{Boisse2005a,Lauroesch2007a}, for instance through observations of runaway stars or binary stars. One needs to observe atoms or various molecules such as H$_2$\ and CO at very high SNR and high resolution, to be combined with observations of CN, CH, and CH$^+$ in the visible. \cite{Welty2007a} provides an illustration for multi-epoch optical and UV observations of variable absorption in C{~\sc i} fine-structure lines tracing variations in local n$_{\rm H}$.
Tracking spatial and temporal absorption variations enables a better understanding of the nature and the properties of the so-called tiny-scale atomic structures \citep[$\sim$10$^{1-4}$\,AU,][]{Heiles1997a} thought to be part of a universal turbulent cascade \citep{Stanimirovic2018a}.
Such observations in the far-UV enable the detection of structures down to potentially much lower H{~\sc i}\ column densities as compared to H{~\sc i}\ 21 cm absorption surveys.
\subsection{Warm gas ionization}
\subsubsection{Ionization structure}
The diffuse H$\alpha$ emission in the Galaxy (which dominates the mass budget of ionized gas) is thought to be related to early-type stars and to supernova-driven turbulence and superbubble structures (e.g., \citealt{Wood2010a}). However, many questions remain regarding (1) some unexpectedly large measured temperatures, which require heating mechanisms other than photoionization, (2) the spatial distribution of the diffuse gas and the escape fraction of ionizing photons (to be compared to direct derivation of rest-frame Lyman continuum across many lines of sight), and (3) the derivation of useful constraints that can be used for 3D large-scale MHD models (e.g., \citealt{Haffner2009a}).
A better knowledge of the exciting mechanism for the warm diffuse medium requires studying the ionization structure which, in turn, involves getting detailed ionization fractions as a function of ionizing energy. This is possible through the observations of different ionization stages like [S{~\sc i}, S{~\sc ii}, S{~\sc iii}, S{~\sc iv}, S{~\sc vi}], [O{~\sc i}, O{~\sc vi}], [C{~\sc i}, C{~\sc ii}, C{~\sc iii}, C{~\sc iv}], [Si{~\sc i}, Si{~\sc ii}, Si{~\sc iii}, Si{~\sc iv}], [N{~\sc i}, N{~\sc ii}, N{~\sc iii}, N{~\sc v}], and H{~\sc i}. Many of these species have lines in the UV domain; however, some important stages only have lines in the far-UV domain. It is therefore imperative to get access to both the UV and far-UV domains at the same high resolution. Locally the simplicity of the short sight lines toward stars in the solar vicinity ($<100$\,pc) provides a unique opportunity to study the ionization structure of individual interstellar regions, clouds, and interfaces that are usually blended in longer sight lines \citep{Gry2017a}. The detection of the weak lines that are critical for these studies requires the ability to record UV and far-UV spectra of hot nearby stars at high SNR (well in excess of $100$). At larger scales, observing samples of post-asymptotic giant branch (PAGB) and blue horizontal-branch (BHB) stars ($V\sim15$) toward globular clusters that also contain a pulsar (whose dispersion measure yields an integrated value for n$_e$) provides detailed ionization fractions of the WIM as a function of ionizing energy \citep{Howk2012a}.
\subsubsection{Partly-ionized neutral gas}
The ionization fraction and electron density in the neutral gas are important parameters to examine as they tell us about the abundance of free electrons heating the gas and providing pathways for molecular gas formation in the gas phase. They can also be used to infer the influence of supernovae and compact objects on the ISM properties through the propagation of ionizing cosmic rays and soft X-rays. Partly ionized gas can be studied through species with large photoionization cross-section (e.g., N{~\sc ii}, Ar{~\sc i}) and through the population in fine-structure levels (e.g., the N{~\sc ii}\ multiplet at $1084$\,\AA, C{~\sc ii}*/C{~\sc ii})\footnote{Absorption lines arising from fine-structure levels are noted with * (** for the second fine-structure level) to differentiate them from the ground-state transitions. }, while temperature can, for instance, be determined from Mg{~\sc i}/Mg{~\sc ii}\ or Si{~\sc ii}*/Si{~\sc ii}\ (e.g., \citealt{Jenkins2000a,Vladilo2003a,Jenkins2013a,Gry2017a}).
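As an illustration of the C{~\sc ii}*/C{~\sc ii}\ diagnostic (a minimal two-level sketch assuming electron collisions dominate the excitation, and neglecting collisions with H$^0$ and radiative pumping), the steady-state balance $n_e\,\gamma_{12}\,N({\rm C^+})=A_{21}\,N({\rm C^{+*}})$ gives
\[
n_e\approx\frac{N({\rm C^{+*}})}{N({\rm C^+})}\,\frac{A_{21}}{\gamma_{12}(T)},
\]
with $A_{21}\approx2.3\times10^{-6}\,{\rm s^{-1}}$ for the $157.7\,\mu{\rm m}$ transition and $\gamma_{12}(T)$ the temperature-dependent collisional excitation rate coefficient.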
High spectral resolution is necessary to disentangle the multiplets while high SNR is necessary to detect weak absorption lines arising from fine-structure levels (e.g., Si{~\sc ii}*, O{~\sc i}**).
\subsection{Hot ionized gas}\label{sec:layers}
The origin and nature of the collisionally-ionized gas, seen notably in O{~\sc vi}\ absorption in the disk and halo of our Galaxy (corresponding to $T\approx3\times10^5$\,K), is still debated \citep{Wakker2003a,Savage2006a,Otte2006a,Welsh2008a}: is it formed in radiatively cooling supernova-shocked gas, in conductive interfaces with cold gas, or in turbulent mixing layers? While information on O{~\sc vi}\ has been limited so far to its bulk column density and its abundance relative to other species, we also need information on its velocity structure and line width in order to infer physical conditions (in particular the temperature), to disentangle the different components, and to relate them to the other low- and high-ionization species (e.g., Si{~\sc iii}, C{~\sc iv}, Si{~\sc iv}, N{~\sc v}). The electron density can also be calculated through combined observations of O{~\sc vi}\ in absorption and in emission \citep{Otte2006a}.
The boundaries between the different phases can often be quite abrupt, and it is not yet clear how they trade matter and
entropy. Interfaces between the hot and warm media may play an important role in enhancing the cooling of the hot material through either conductive or radiative losses. Our understanding of the physical properties of such boundaries should help us to construct more accurate accounts of how rapidly hot gas volumes created by supernova explosions dissipate, which, in turn, influences the morphology of galaxies and some key aspects of the overall cycle of matter and thermal energy within them.
Many theorists have confronted this issue and have concluded that two basic categories of interactions can take place: (1) the establishment of a conductive interface where evaporation or condensation can occur \citep{Cowie1977a,McKee1977a,Ballet1986a,Slavin1989a,Borkowski1990a,Dalton1993a,Gnat2010a} (Fig.\,\ref{fig:mixing}{\it a}) and (2) a turbulent mixing layer, where the existence of any shear in velocity between the phases creates instabilities and mechanically induced chaotic interactions \citep{Begelman1990a,Slavin1993a,Kwak2010a} (Fig.\,\ref{fig:mixing}{\it b}), which can ultimately lead to ablation in the extreme cases of the High Velocity Clouds (HVCs) passing through a hot medium \citep{Kwak2011a,Henley2012a}.
\begin{figure}[h]
\begin{centering}
\includegraphics[width=0.49\textwidth]{Figures/CI.png}
\includegraphics[width=0.49\textwidth,height=0.25\textheight]{Figures/TML.png}
\caption{\small {\it a)} The structure of temperature vs.\ hydrogen column density in a conduction front. Different curves show changes that occur as the front evolves from a young evaporation front to an older condensation front \citep{Borkowski1990a}. {\it b)} Schematic drawing of a turbulent mixing layer, showing hot and cold gas separated by a thin, intermediate photoionized layer. The hot gas moves at a transverse velocity $v_t$ relative to the layer \citep{Slavin1993a}.}\label{fig:mixing}
\end{centering}
\end{figure}
Observers have attempted to identify these processes chiefly by analyzing interstellar absorption features of ions that are most abundant at intermediate temperatures, such as Si{~\sc iv}, C{~\sc iv}, N{~\sc v} and O{~\sc vi}, and then comparing their column density ratios with theoretical predictions \citep{Spitzer1996a,Sembach1997a,Indebetouw2004a,Zsargo2003a,Indebetouw2004b,Wakker2012a}. In a survey probing the Galactic disk, \cite{Lehner2011a} showed that Si{~\sc iv}\ and C{~\sc iv}\ are found in both broad and narrow components, and the high-ion column density ratios exhibit substantial variations in most lines of sight,
which implies that very different processes operate in different environments in the Galaxy. However, the confusion caused by the overlap of many different regions over the large distances covered by surveys (e.g., for O{~\sc vi}, \citealt{Jenkins1978a,Jenkins1978b,Bowen2008a}) has made it difficult to get
a clear picture of the nature of these interfaces.
The possibility of much simpler lines of sight is offered within the local ISM, where a single warm, diffuse cloud accounts for most of the matter within the first $50$\,pc \citep{Gry2014a}, generating a single interface with the surrounding hot bubble gas.
Its signature should be observed in C{~\sc iv}, N{~\sc v}, O{~\sc vi}, Si{~\sc iii}, Si{~\sc iv}\ toward hot nearby stars (white dwarfs or B stars). Up to now the detection of this interface has been elusive, mostly due to the extreme weakness of the expected absorption -- SNR in excess of $200$ is necessary. Such SNR is in principle easy to reach with nearby hot stars, but it has often been limited by the bright-object limits of UV detectors and drastic neutral density filters.
Spectral resolution is also a key requirement to understand the highly ionized gas and its relation with other gas-phases. This is well illustrated by the study of \cite{Lehner2014a} who, thanks to an improved spectral resolution by a factor of $\approx2$ enabled by instruments in the visible as compared to rest-frame studies with FUSE, uncovered differences between the profiles of [O{~\sc vi}, N{~\sc v}] and [Si{~\sc iv}, C{~\sc iv}], suggesting that the bulk of the O{~\sc vi}\ absorption is produced in radiatively cooling, shock-heated outflowing gas rather than in a very diffuse, extended gas photoionized by the extragalactic UV background radiation (see also Sect.\,\ref{sec:cgm}).
Last but not least, high SNR spectra obtained toward stars can reveal weak components at high velocities, caused by cooling layers behind shock fronts \citep{Welty2002a}. Tracking velocity shifts and line widths for ions that should appear at different locations in downstream flows offers insights into how the gas cools and recombines in a time-dependent ionization scheme. The depth of absorption features in radiative shocks is expected to be larger than a few \%\ in S{~\sc iii}, Si{~\sc iv}, and C{~\sc iv}\ for large enough shock velocities, and should be detectable in spectra recorded at SNR in excess of a few hundred.
\subsection{Dynamics in the diffuse multiphasic turbulent interstellar gas}\label{sec:dynamic}
In many ways, the latest generation of astronomical instruments has shaken our understanding of the diffuse interstellar matter. Long thought to be composed of independent phases which could be studied in the framework of static models at chemical equilibrium, the diffuse interstellar gas is in fact restless, turbulent, and magnetized, as well as out of dynamical, thermal, and chemical equilibrium.
Because of thermal instability, the neutral gas is known to settle in two stable phases, the WNM and the CNM, which are roughly at thermal pressure equilibrium. However, theoretical predictions (e.g., \citealt{Hennebelle2007a}) as well as several sets of measurements complicate this simple picture. (1) The discovery over the past 15 years of the Lukewarm Neutral Medium (LNM) phase observed in H{~\sc i}\ shows that as much as $30$\%\ of the gas mass is in the unstable regime (e.g., \citealt{Kalberla2018a,Marchal2019a}), indicating that the exchange of matter between the WNM and the CNM is paramount. (2) The gas pressure deduced from C{~\sc i}\ UV absorption lines (e.g., \citealt{Jenkins2011a}) shows a strong dispersion in the local ISM; whether this dispersion is due to variations in the local irradiation conditions, the action of turbulence, the presence of self-gravitating clouds along the lines of sight, or a combination of all, is still an ongoing issue.
(3) The infrared dust emission and its polarization indicates that the CNM is organized in projected filamentary structures whose orientations and physical conditions are tightly linked to the orientation and strength of the local magnetic field (e.g., \citealt{PlanckCollaboration2018a}). (4) The characterization of the kinematic and chemical signatures of molecules and
molecular ions with \textit{Herschel} and SOFIA finally suggests that the different phases are entangled in the production and excitation of chemical species in the diffuse gas (e.g., \citealt{Neufeld2015a}).
On the theoretical side, astrochemical models have drastically improved during the past decade to study the at- or out-of-equilibrium chemistry in 3D dynamical models of isothermal turbulence (e.g., \citealt{Bialy2017a}), monophasic CNM turbulence at intermediate (e.g., \citealt{Glover2010a}) or dissipative scales (Lesaffre et al.\ submitted), and multiphasic turbulence at intermediate (e.g., \citealt{Levrier2012a,Valdivia2016a,Valdivia2017a}) or galactic scales \citep{Seifried2017a,Girichidis2018a}. It is concluded that the combination of turbulence, magnetic field, and gravity strongly perturbs the chemical composition of the diffuse matter through density fluctuations induced by supersonic motions, out-of-equilibrium mixing at phase interfaces, or dissipation processes. The H$^0$-to-H$_2$ transition, the CO/H$_2$ abundance ratio, and the abundances and kinematics of atomic and molecular tracers are all affected.
Several questions arise from these studies that can only be answered through the systematic observations of a statistically significant number of sources at several wavelengths, and most notably in the UV range. What are the mass transfer rates between
the different phases of the ISM? What are their survival timescales and their volume filling factor? How far from equilibrium are the ionization and molecular fractions? What fraction of H$_2$ belongs to the LNM thermally unstable phase? What are the separate
roles of supersonic motions, turbulent dissipation, and turbulent transport in the chemical richness observed in the ISM? Finally, what does a statistical study of the ionization and molecular fractions tell us about the magnetic field and the thin Faraday structures observed with LOFAR (e.g., \citealt{Zaroubi2015a,VanEck2019a})?
In this context, a far-UV spectrograph is necessary to get access to the amount of H$^0$, H$_2$, and of atomic ions in the local interstellar gas (within a radius of $4$\,kpc around the Sun). High spectral resolution observations are mandatory to extract the kinematics, and hence to identify the phases responsible for a given absorption profile. High sensitivity will allow a survey over hundreds of targets, including short and long lines of sight with high extinction, which can be used in synergy with GAIA, \textit{Planck}, and SKA data to build a statistical sample of the local interstellar diffuse medium that can be compared with state-of-the-art dynamical models.
\subsection{Detection of very faint lines of scarce elements}
A number of scarce elements with high astrophysical interest require very high SNR to be detected because of the weakness of their lines, most of which lie in the far-UV domain. Let us mention: (1) the light elements like $^{10}$B, $^{11}$B \citep{Lambert1998a}, $^6$Li, $^7$Li (in the visible; \citealt{Meyer1993a}), or Be \citep[undetected,][]{Hebrard1997a}; (2) the r- and s-process elements like Ga, Ge, As, Se, Kr, Cd, Sn, Pb \citep{Ritchey2018a}. It would be interesting to look for localized enhancements of these elements (a serendipitous discovery!) in regions where a neutron star merger occurred more recently than a mixing time for the ISM; and (3) isotope ratios of atomic and molecular species. For instance HCl, whose line has barely been detected at $1290$\,\AA, may be split into lines of H$^{35}$Cl and H$^{37}$Cl. However, the interpretation of molecular isotopes may be confused by the influence of chemical reactions. Atomic isotope shifts are of the order of a few {\,km\,s$^{-1}$}\ in some cases, so measurements should be feasible at high resolution in cold regions where velocity dispersions are low.
\section{Precision measurement of the magnetic field from near to far and from small to large scales in the ISM}\label{sec:bfield}
\textbf{
\begin{itemize}
\item \textit{What is the role of the magnetic field in the distribution of ISM phases? How does the magnetic field modulate the transport of matter for star formation?}
\item \textit{What impact does ground state alignment have on the physical parameter derivation (abundances, ionization...)?}
\end{itemize}}
\bigskip
Magnetic fields have important or dominant effects in many areas of astrophysics, but have been very difficult to quantify. Spectropolarimetry from Ground State Alignment (GSA) has been suggested as a direct tracer of the magnetic field in the diffuse interstellar medium. The alignment itself is an effect well studied in the laboratory: it arises from the ability of atoms/ions with fine and hyperfine structure to get aligned in their ground/metastable states. Owing to the long lifetime of the atoms in their ground states, the Larmor precession in an external magnetic field imprints the direction of the field onto the polarization of absorbing species. This provides a unique tool for studies of sub-gauss magnetic fields using polarimetry of UV, optical and radio
lines. Many spectral lines with strong signals from GSA are in the UV band. By discerning magnetic fields in gas with different dynamical properties, high spectral resolution measurements of spectral polarization will allow the study of the 3D magnetic field distribution and of interstellar turbulence.
GSA also provides a unique chance to map the 3D direction of the magnetic field on small scales, e.g., disks, where grain alignment is unreliable. The range of objects suitable for studies is extremely wide and includes magnetic fields in the interplanetary medium, in the ISM, and in circumstellar regions as well as diffuse media in extragalactic objects. Last but not least, the consequences of the alignment should be taken into account for correct determination of the abundances of alignable species.
\subsection{Magnetic field measurement in the ISM}
Magnetic fields play a crucial role in various astrophysical processes, including star and planet formation, accretion of matter, or transport processes. Recent dust polarization measurements by, e.g., \textit{Planck}, have represented a huge step forward in the knowledge of the Galactic magnetic field in terms of sensitivity, sky coverage, and statistics (e.g., \citealt{PlanckCollaboration2018a}). However, they unfortunately tell us nothing about the distance of the magnetic field and its distribution in the different components or phases. Furthermore, they can only provide the integrated mean orientation of the magnetic field projected onto the plane of the sky, as they cannot quantify the strength of the magnetic field nor access the third dimension.
Therefore it is important to explore new effects which can bring information about the magnetic field properties and signatures. ``Ground state alignment'' has been identified as an innovative way to determine the magnetic field in the diffuse medium. The atoms get aligned in terms of their angular momentum and, as the lifetime of the atoms/ions we deal with is long, the alignment induced by anisotropic radiation is sensitive to very weak magnetic fields ($10^{-15}-1$\,G, \citealt{Yan2012a}), which is precisely the level of magnetism in the diffuse medium, both in the ISM and the IGM. It must be noted that even the general interstellar radiation field presents enough anisotropy to align the atoms and create GSA \citep{Zhang2015a}. Observing several hundred hot stars with a SNR of $500$ to measure linear polarization from optically thin UV absorption lines will provide exclusive information on the magnetic field distribution and turbulence properties in different interstellar phases.
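A sketch of the realignment criterion, in the notation of Fig.\,\ref{fig:gsa}{\it b} (the numerical scaling assumes a Land\'e factor $g\simeq1$): the field direction is imprinted on the alignment when the Larmor precession rate exceeds the photon arrival rate $R_F$,
\[
\nu_L=\frac{g\,\mu_B\,B}{h}\approx1.4\,{\rm Hz}\ \left(\frac{B}{1\,\mu{\rm G}}\right)\gtrsim R_F,
\]
a condition easily met for $\mu$G-level interstellar fields given the slow optical pumping in the diffuse medium.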
\textit{Most of the resonance absorption lines are in the UV domain}. A UV band polarimeter with high spectral resolution ($R>20\,000$) will thus provide an incomparable opportunity for precision magnetic field measurement, which no other current instruments can offer. Particularly, the high spectral resolution allows simultaneous determination of both velocity and magnetic field, enabling 3D magnetic tomography of the ISM, which is so far missing. The resonance absorption lines appropriate for studying magnetic fields in diffuse, low column density ($A_V \sim$ few tenths)
neutral clouds in the interstellar medium are those from N{~\sc i}, O{~\sc i}, S{~\sc ii}, Mn{~\sc ii}, and Fe{~\sc ii}, all in the UV range. At higher column densities, the above lines become optically thick, and excited states become available as well as lines from less abundant species.
\begin{figure*}[h]
\begin{centering}
\includegraphics[width=0.3\textwidth]{Figures/UV_pol_turb.png}
\includegraphics[width=0.3\textwidth,height=0.25\textheight]{Figures/abspol_geom.pdf}
\includegraphics[width=0.3\textwidth,height=0.25\textheight]{Figures/Geometry_89Her_both3.png}
\caption{\small {\it a)} Synthetic polarization map of the simulated super-Alfv\'enic diffuse ISM. The size of the field is $1$\,pc$^2$, with an O/B star located $0.1$\,pc to the left. The contour color reveals the percentage of polarization induced in the S{~\sc ii}\ $1250$\,\AA\ absorption line and the orientation of the bars represents the direction of the polarization. The expected polarization is mostly above $5$\%.
{\it b)} Typical astrophysical environment where
GSA can occur. A pumping source deposits angular momentum to
atoms in the direction of radiation and causes differential occupations on
their ground states. In a magnetized medium where the Larmor precession
rate $\nu_L$ is larger than the photon arrival rate $R_F$, however,
atoms are realigned with respect to magnetic field. Observed polarization depends on both $\theta_r$ (angle between magnetic field and illuminating star) and $\theta$ (angle between the magnetic field and the line of sight).
In general, there are two situations: the alignment
is produced by a pumping source while we observe another weak background
source whose light passes through the aligned medium ({\it upper part}) or the background source coincides with the pumping source, in which case $\theta_r=\theta$ ({\it lower part}). {\it c)} 3D topology of magnetic field in the 89\,Her post-AGB binary system. The system is plotted from two different orientations showing the line-of-sight and plane-of-sky projections. The color scale indicates the line-of-sight velocity ($v_z$) of the medium. The plane-of-sky projection of the symmetric axis of the outflow is $45^\circ$ to the East-West direction. The inferred 3D magnetic field directions for different orbital phases are displayed \citep{Zhang2019a}.
}\label{fig:gsa}
\end{centering}
\end{figure*}
As a first step, with low resolution measurements, the 2D (plane-of-sky) magnetic field orientation can be easily obtained from the direction of polarization, with a $90^\circ$ degeneracy, similar to the Goldreich-Kylafis effect \citep{Goldreich1981a} for molecules and to grain alignment according to some theories.
UV absorption lines are polarized through GSA exclusively. Any polarization detected in absorption lines would therefore not only be an unambiguous indicator of alignment, but also of the magnetic field, since no other mechanism can induce polarization in absorption lines. With a high resolution spectropolarimeter, the 3D direction of the magnetic field can be inferred by combining the polarization of two lines, or the polarization of one line with the line intensity ratio, which is influenced by the magnetic field as well \citep[see][]{Yan2006a,Zhang2018a}. With the knowledge of the degree of polarization and/or $\theta_r$, the angle between the magnetic field and the direction to the illuminating star (Fig.\,\ref{fig:gsa}{\it b}), the $90^\circ$ degeneracy can be lifted.
\subsubsection{Large scale magnetic field distribution and turbulence: 3D tomography}
On large scales, spectropolarimetric maps will constrain the 3D distribution of the magnetic field much better. The interstellar magnetic field is turbulent, with velocity and magnetic fluctuations ranging from large injection scales to small dissipation scales. High resolution spectroscopy and spectropolarimetry combined bring forth a wealth of information on interstellar turbulence. Most magnetic diagnostics only constrain the averaged mean magnetic field on large scales. In this respect GSA fits a unique niche as it reveals the small scale structure of the magnetic field (see Fig.\,\ref{fig:gsa}{\it a}). Measuring 3D turbulence will shed light on many open questions regarding, for instance, star formation, cosmic rays, or interstellar chemistry (see, e.g., \citealt{Hennebelle2019a}).
In highly turbulent environments, we expect magnetic fields to be entangled. A UV instrument with sufficient spectral resolution would be valuable since it naturally reduces line-of-sight averaging. If the pumping star is along the line of sight, as in the central star of a reflection nebula, this is the so-called ``degenerate case'' (see Fig.\,\ref{fig:gsa}{\it b}), where the position angle of the polarization provides the 2D magnetic field in the plane of sky, and the degree gives the angle to the line of sight. In the more general case, though, an observed cloud might be pumped from the side and the position angle of the magnetic field is available with a $90^\circ$ degeneracy, so that the derivation of the full magnetic geometry requires measuring two lines (not necessarily from the same species).
\subsubsection{Small scale magnetic field: 3D direction}
On small scales, spectropolarimetry from GSA is an ideal tracer of local magnetic fields. Examples include disks, the local bubble, and PDR regions. One interesting case is that of circumstellar disks, for which grain alignment has been found unreliable. In the case of pre-main sequence stars, pumping conditions are similar to those for comets in the Solar System \citep[see][]{Shangguan2013a}: pumping rates on the order of $0.1-1$\,Hz, and realignment for fields greater than $10-100$\,mG. Conditions here seem to be conducive to substantial populations in CNO metastable levels above the ground term: \cite{Roberge2002a} find strong absorption in the FUV lines ($1000-1500$\,\AA) of O{~\sc i}\ ($^1$D) and N{~\sc i}\ and S{~\sc ii}\ ($^2$D), apparently due to dissociation of common ice molecules in these disks (also common in comet comae). Since these all have total angular momentum quantum number $>1$, they should be pumped and realigned. This presents the exciting prospect of detecting the magnetic geometry in circumstellar disks and monitoring them with time. The potential has been clearly revealed by the detection on a binary system where 3D magnetic fields are precisely mapped for the first time via polarization of absorption lines (see Fig.\,\ref{fig:gsa}c; \citealt{Zhang2019a}).
\subsubsection{Polarization of other lines}
\paragraph{Resonance and fluorescence lines} The magnetic realignment diagnostic can also be used in resonant and fluorescent scattering lines. This is because the
alignment of the ground state is partially transferred to the upper
state in the absorption process \citep{Yan2007a}. If the direction of optical pumping is known, e.g., in planetary systems and circumstellar regions, magnetic realignment induces a line polarization whose position angle is neither perpendicular nor parallel to the incident radiation. This deviation depends on the magnetic geometry and the scattering angle. The degree of polarization also depends on these two factors. In practice, GSA can be identified by comparing the polarizations from non-alignable (which do not trace the magnetic field) and alignable species. There are many fluorescent lines in emission nebulae that are potential candidates \citep[see][]{Nordsieck2008a}. Reflection nebulae would be an ideal place to test the diagnostic, since the lack of ionizing flux limits the number of levels being pumped, and especially since common fluorescent atoms like N{~\sc i}\ and O{~\sc i}\ would not be ionized, eliminating confusing recombination radiation.
\paragraph{IR/submillimeter transitions within ground state} The alignment on the ground state affects not only the optical or UV transitions to the excited state, but also the magnetic dipole transitions within the ground state. The submillimeter lines emitted and absorbed from the aligned medium are polarized in the same fashion as the absorption lines, i.e., either parallel or perpendicular to the magnetic field \citep{Yan2008a,Zhang2018c}. Current facilities, e.g., SOFIA and ALMA, already have the capability to cover the submillimeter band for spectropolarimetric observations. One can also mention the linear polarization of the $21$\,cm line (e.g., \citealt{Clark2019b}).
\subsubsection{Magnetic field strength}
GSA is usually by itself not directly sensitive to the magnetic field strength. The exception is the special case where the pumping photon absorption rate is comparable to the Larmor frequency \citep[see][]{Yan2008a,Zhang2019a}. However, this should not preclude the use of GSA for studies of magnetic fields. Grain alignment is not sensitive to the magnetic field strength either.
This does not prevent polarization arising from aligned grains from being used to study the magnetic field strength with the so-called Chandrasekhar-Fermi technique \citep{Chandrasekhar1953a}. In this technique the fluctuations of the magnetic field direction are associated with Alfv\'en perturbations; therefore, by simultaneously measuring the velocity dispersion using optical/absorption lines arising from the same regions, it is possible to estimate the magnetic field strength. The Chandrasekhar-Fermi technique and its modifications (see, e.g., \citealt{Hildebrand2009a,FalcetaGoncalves2008a,Cho2016a}) can be used to derive the magnetic field strength using GSA.
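For reference, the technique in its standard form estimates the plane-of-sky field strength as (a sketch; $Q\approx0.5$ is a calibration factor derived from simulations)
\[
B_{\rm pos}\approx Q\,\sqrt{4\pi\rho}\;\frac{\sigma_v}{\sigma_\theta},
\]
where $\rho$ is the gas mass density, $\sigma_v$ the line-of-sight velocity dispersion measured from the line widths, and $\sigma_\theta$ the dispersion of polarization angles.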
The advantage of using spectral lines compared to dust grains is that both polarization and line broadening can be measured from the same lines, ensuring that both arise from the same volumes. In addition, GSA, unlike grain alignment, does not suffer from ambiguities related to dust grain shape. Thus, potentially, the Chandrasekhar-Fermi technique can be more accurate when used with GSA.
\subsection{Synergy of techniques for magnetic field studies}
The GSA and grain alignment complement each other. For instance, measurements of grain alignment in the region where GSA is mapped for a single species can remove ambiguities in the magnetic field direction. At the same time, GSA is capable of producing a much more detailed map of the magnetic field in the diffuse gas and of measuring the magnetic field direction in regions where the dust density is insufficient for any reliable measurement of dust polarization. In addition, for interplanetary magnetic field measurements it is important that GSA can measure magnetic fields on time scales much shorter than aligned grains can.
The synergy exists with other techniques as well. For instance, GSA reveals the 3D direction of the magnetic field. This gives the direction of the magnetic field, but not its magnitude. If the Zeeman effect provides the magnitude of one projected component of the magnetic field, this limited input enables GSA to determine the entire 3D vector of the magnetic field, including its magnitude. Such synergetic measurements are crucial.
As astrophysical magnetic fields cover a large range of scales, it is important to have techniques to study magnetic
fields at different scales. For instance, we have discussed the possibility of studying magnetic fields in the interplanetary medium. This can be done without conventional expensive probes by studying the polarization of spectral lines. In some cases, spreading small amounts of sodium or other alignable species can produce detailed magnetic field maps of particular regions of interplanetary space, e.g., the Earth's magnetosphere.
\section{Beyond the Milky Way: the ISM properties in external galaxies} \label{sec:ism}
\textbf{
\begin{itemize}
\item \textit{How can we remove biases in abundance determinations in unresolved nearby galaxies? Can we map the ecosystem of interstellar clouds in external galaxies, is there evidence of metal-free gas accretion?}
\item \textit{Does star formation proceed in cold atomic gas in quasi-pristine environments at redshift $\sim0$? What is the influence of compact objects in the ISM properties and on star formation?}
\end{itemize}}
\bigskip
While past and present UV spectroscopic instruments have greatly improved our knowledge of the ISM physics and chemistry in the Milky Way, the extragalactic ISM is mostly uncharted territory with these instruments. Much of what we know is based on near-UV to far-infrared spectroscopic observations. Only a limited number of nearby galaxies could be investigated in the far-UV, with unavoidable biases due to the confusion of lines of sight and to the low spectral resolution imposed by the low fluxes. It is now urgent to reach the same level of detail on the ISM properties for nearby galaxies as what is currently possible for the Milky Way.
\subsection{Solving column density determination biases}\label{sec:biases}
In the far-UV, apart from a few lines of sight toward individual stars in the Magellanic Clouds (e.g., \citealt{Welty2016a,RomanDuval2019a}), the extragalactic ISM has been mostly observed toward stellar clusters and at low spectral resolution ($R\lesssim20\,000$), with inherent biases for the column density determination. First, unresolved absorption lines observed toward a single line of sight may show a low apparent optical depth (i.e., corresponding to the linear regime of the curve of growth) even though some individual velocity components are saturated, leading to the ``hidden'' saturation problem and to the possible underestimate of column densities by factors of a few or more (e.g., \citealt{James2014a}). Second, multiple lines of sight toward stars of different brightness may contribute to the observed spectrum, with each line of sight intersecting multiple interstellar clouds with different properties (metallicity, column density, turbulent velocity, radial velocity). The resulting combination is highly non-linear, especially if some of the individual (i.e., a given line of sight and a given cloud) components are saturated.
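To make the saturation issue concrete, consider the classical apparent optical depth formalism (a sketch; $f$ is the oscillator strength and $\lambda$ the rest wavelength in \AA): the apparent column density profile
\[
N_a(v)=\frac{m_e c}{\pi e^2}\,\frac{\tau_a(v)}{f\lambda}\approx3.77\times10^{14}\ \frac{\tau_a(v)}{f\,\lambda}\ {\rm cm^{-2}\,(km\,s^{-1})^{-1}},\qquad \tau_a(v)=\ln\frac{F_c(v)}{F(v)},
\]
recovers the true column density only when no unresolved saturated components are present; comparing weak and strong transitions of the same species is what reveals hidden saturation.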
While the biases related to saturated components can be mitigated for a single line of sight with a suite of transitions of varying strengths and with a well-behaved distribution of components \citep{Jenkins1986a}, biases related to the multiplicity of lines of sight have been little explored, even in the favorable case of unsaturated components, notably because the spatial distribution of bright stars in the dispersion direction of slit spectrographs complicates the line profile and its analysis even further (e.g., \citealt{Lebouteiller2006a}).
Hence robust column density determinations in nearby galaxies ideally require a spectral resolution high enough to disentangle $\approx2$\,{\,km\,s$^{-1}$}\ wide components (typically observed in the Milky Way) and a spatial resolution high enough to resolve individual stars. The corresponding signal-to-noise requirement quickly becomes prohibitive for galaxies further than a few Mpc, but satisfactory compromises can be obtained by observing (1) nearby stellar clusters -- such as those observed with HST or FUSE -- with improved spectral resolution $R\sim10^5$, (2) distant/faint stellar clusters with $R\sim20\,000$, i.e., similar to most current extragalactic ISM spectra, or (3) individual O/B stars with $R\sim20\,000$. Another alternative, as proposed by \cite{Kunth1986a}, is to use background QSOs (to be identified with \textit{Euclid}, LSST...), with similar spectral resolution requirements, provided they are bright enough in the rest-frame far-UV where absorption lines in nearby galaxies are probed (e.g., \citealt{Bowen2005a,Kozlowski2013a}).
Overall, an observatory versatile enough to propose high spatial and spectral resolution would solve most of the systematic issues in deriving robust column densities in external galaxies using stellar clusters as background light, thereby nicely complementing studies of Damped Lyman-$\alpha$ systems (DLAs; \citealt{Wolfe2005a}). We review in the following the corresponding science motivations.
\subsection{Pristine gas accretion?}\label{sec:chemab}
Complex lines of sight and limited sensitivity have mostly restricted the study of the extragalactic ISM in the far-UV to resonance lines and chemical abundance determinations, although fine-structure lines can be observed in some nearby galaxies (Sect.\,\ref{sec:sf}). \cite{Kunth1994a} proposed a method to measure neutral gas abundances in blue compact dwarf galaxies (BCDs) using unresolved massive stellar clusters in H{~\sc ii}\ regions as background continuum, thereby providing independent results from abundances traditionally derived using optical emission lines arising in the ionized gas of H{~\sc ii}\ regions. The comparison led to a still ongoing debate, which is well illustrated by several studies of the BCD I\,Zw\,18 ($18$\,Mpc, $\approx2\%$ solar metallicity). Early observations with HST/GHRS showed a discrepancy between the oxygen abundance measured in emission and in absorption, leading to the hypothesis of self-enrichment by the current starburst episode \citep{Kunth1986a,Kunth1994a} but, due to a limited sensitivity, only strong lines were accessible and hidden saturation could not be identified easily \citep{Pettini1995a}. Later studies of the same galaxy with FUSE, which enabled the observation of several metal lines and of hydrogen, highlighted issues regarding the stellar continuum and the selection of weak lines \citep{Aloisi2003a} and showed that only a small discrepancy may exist, if any \citep{Lecavelierdesetangs2004a}. More recently, \cite{Lebouteiller2013a} confirmed, using HST/COS and weak lines such as $\lambda1254$ S{~\sc ii}, that a small discrepancy does exist in I\,Zw\,18. Overall, studies have shown that weak lines (also $\lambda1356$ O{~\sc i}) may minimize column density determination biases, but at the expense of an in-depth analysis of abundance ratios and of the spatial distribution of metals.
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figures/fig.png}
\caption{\small Abundance discontinuity between the neutral (H{~\sc i}\ region; observed in absorption with FUSE and HST) and ionized (H{~\sc ii}\ region; observed in emission) phases in a sample of BCDs \citep{Lebouteiller2009a,Lebouteiller2013a}. Two different methods (diamonds) are compared for each object (numbers). Globally, neutral abundances are lower by a factor of a few, and the minimum metallicity derived in the neutral phase is around $-2.5$\,dex. }\label{fig:chemab}
\end{figure}
Such discrepancies are important to quantify robustly to understand the distribution of metals and the metallicity buildup in galaxies. A sample analysis of neutral gas abundances in BCDs by \cite{Lebouteiller2009a} showed that an overall metallicity floor of $\sim2\%$ solar may exist for galaxies in the nearby Universe, which could be linked to the IGM enrichment (Fig.\,\ref{fig:chemab}).
The metallicity discontinuity between the ionized and neutral phases seems to occur preferentially for moderately metal-poor ($10-50\%$ solar) galaxies, which could be due to dilution by metal-poor/free gas in the halos rather than by self-enrichment in the H{~\sc ii}\ regions (see also \citealt{James2015b}). It is worth noting that extremely low metallicities ($\lesssim1$\%\ solar) in CGM absorbers at $0.1 \lesssim z \lesssim 1$ are preferentially found in low H$^0$ column density clouds ($N({\rm H})<10^{18.5}$\,cm$^{-2}$; e.g., \citealt{Lehner2013a,Lehner2019a,Wotta2016a,Wotta2019a}), which raises the question of the role and properties of infalling clouds in the observed metallicity discontinuity in some BCDs. It is now urgent to perform a complete census of the metals in and around low-mass galaxies in order to understand star formation in metal-poor environments.
The presence of quasi-pristine gas in the outskirts of galaxies has important implications for galaxy evolution (e.g., infall scenarios, dispersal/mixing of elements). For instance, the dust-to-gas mass ratio (DGR) obtained for nearby galaxies shows a steep dependence on metallicity \citep{RemyRuyer2014a}, which is at variance with the shallower trend obtained for DLAs (see \citealt{Galliano2018b}). On the one hand, metallicity in nearby galaxies was derived in the ionized gas near young star-forming regions where most of the emitting dust is presumably located. On the other hand, the relative dust abundance in DLAs is derived from the depletion strength of refractory species in the far-UV and the metallicity is derived from lines of sight intersecting the entire galaxy body, also in the far-UV, \textit{including regions that may be quasi-pristine or pristine (i.e., metal- and dust-free)}. Hence the observed DGR vs.\ metallicity slope in DLAs could reflect a dilution factor, with the dust-rich regions having properties in fact similar to the Milky Way value. High spectral resolution is necessary to decompose the velocity profile in metal lines, to infer the corresponding and expected H{~\sc i}\ absorption, and to compare to the
observed one in order to quantify the dilution factor.
It should also be noted that the determination of elemental abundances in nearby metal-poor galaxies also provides a powerful tool to understand depletion patterns (dust composition) and strengths (dust-to-metal ratio) as a function of metallicity, but such a technique is limited by the small number of metal-poor galaxies with far-UV absorption spectra (see \citealt{RomanDuval2019b}). The abundance of deuterium also needs to be explored in the metal-poor ISM, either through D{~\sc i}/H{~\sc i}, HD/H$_2$, or D{~\sc i}/O{~\sc i}\ (the two latter minimizing systematic effects when comparing lines on very different parts of the curve of growth; \citealt{Hebrard2002a}), in order to mitigate astration effects (destruction in stellar interiors) and to provide potentially better constraints for Big Bang nucleosynthesis models.
A significant gain in sensitivity is now required to obtain a large sample of metal-poor galaxies (for instance drawn from SDSS; \citealt{Izotov2019a}), while a gain in sensitivity and spatial resolution is required to target individual stars in nearby low-metallicity systems (e.g., Leo\,P, $1.6$\,Mpc, $3\%$ solar; \citealt{McQuinn2015b}) with expected continuum fluxes around $10^{-16}$\,erg\,s$^{-1}$\,cm$^{-2}$\,\AA$^{-1}$. In addition, spectral and spatial resolution are required to solve various biases regarding column density determinations (Sect.\,\ref{sec:biases}) and to determine the exact spatial origin of the absorption within the galaxy, which is still unknown. On the other hand, abundances derived from optical emission lines also suffer from some systematic uncertainties, with a discrepancy observed between abundances derived from collisionally-excited lines and recombination lines (e.g., \citealt{Esteban2002a,Esteban2016a}). This particular discrepancy needs to be explored further by accessing faint recombination lines and abundances in the photospheres of young stars as comparison for various metallicities (e.g., \citealt{Bresolin2016a}).
\subsection{Gas reservoirs for star formation}\label{sec:sf}
While most far-UV absorption lines observed in nearby galaxies toward stars or clusters correspond to resonance lines of atomic species, fine-structure atomic transitions and molecular transitions have been detected in a few objects, paving the way to a better understanding of the star-forming gas reservoir. The apparent lack of molecular gas in nearby star-forming low-metallicity galaxies (e.g., \citealt{Taylor1998a,Cormier2014a}) poses fundamental questions regarding the exact role of H$_2$ in the star-formation process as compared to the more generally defined cold dense gas (e.g., \citealt{Glover2012a}). While CO is often used to trace H$_2$, it is expected that CO emission is globally weaker in low-metallicity galaxies because of lower C and O abundance and because of selective photodissociation of CO in a dust-poor environment (e.g., \citealt{Wolfire2010a,Schruba2012a}), leading to a potentially large or even dominant reservoir of ``CO-dark'' H$_2$ gas \citep{Grenier2005a}. Accessing both CO and H$_2$ absorption lines would allow a direct measurement of the CO-to-H$_2$ conversion as a function of metallicity and extinction (from translucent to truly molecular), a conversion that is notoriously uncertain. At the same time, other molecules such as OH, CH$^+$, or HD could also be examined as potential tracers of the CO-dark H$_2$ gas.
Accessing H$_2$ in absorption in molecular clouds is the most direct way to probe molecular gas in low-metallicity environments but it has been limited to translucent clouds ($A_V\sim1-3$), with the lack of diffuse H$_2$ detections in the far-UV in the metal-poor ISM (e.g., \citealt{VidalMadjar2000a}) being explained by enhanced photodissociation and a larger critical surface density for H$_2$ formation \citep{Hoopes2004a,Sternberg2014a}. As observations of molecular clouds in low-metallicity galaxies reach smaller spatial scales, in particular with ALMA, it seems that H$_2$ may exist mostly in dense clumps of size $\lesssim 1$\,pc in such environments (e.g., \citealt{Rubio2015a}, Shi et al.\ in preparation). Such clumps may be identified thanks to near-infrared observations of warm H$_2$ layers (e.g., \citealt{Thuan2004a,Lebouteiller2017a}) but the determination of physical properties (temperature, density, magnetic field, DGR) as a function of the environment (e.g., Milky Way vs.\ low-metallicity galaxies, quiescent vs.\ active star-formation) requires the observation of H$_2$ absorption lines in various rotational and vibrational levels.
Finally, thermal processes can be investigated through the use of absorption lines arising from the fine-structure levels such as C{~\sc ii}*, O{~\sc i}*, O{~\sc i}**, Si{~\sc ii}*... Such tracers give valuable information on the ionization degree, temperature, and density of the neutral star-forming gas reservoir and provide indirect constraints on the gas heating mechanisms (photoelectric effect on dust grains, ionization by far-UV or X-ray photons, shocks...) that are independent and complementary to the information provided by far-IR cooling lines. Fine-structure absorption lines have been observed in and around the Milky Way, in DLAs (shifted to the optical domain), and in a few nearby BCDs (e.g., \citealt{Lehner2004a,Wolfe2003a,Howk2005a,Lebouteiller2013a,Lebouteiller2017a}), but the number of Si{~\sc ii}*\ and O{~\sc i}**\ detections (required to measure the gas temperature) remains small due to limited sensitivity. Through fine-structure cooling lines in absorption, one can hope to measure the thermal balance of the gas in regions of various extinctions that are far better resolved in space than with IR observations, including in low column density infalling filaments/clouds.
\subsection{Nature of compact objects in primitive environments and their influence on the ISM}\label{sec:compact}
The presence of energetic X-ray binaries may influence the ISM properties in galaxies with extremely low DGR and metallicity, with implications for the formation of molecular and cold gas and for the star-formation history (see \citealt{Lebouteiller2017a}). The nature of such sources is still debated, though, and the modeling of their properties (including the luminosity in the soft X-rays, which deposit their energy in the neutral gas) relies on the absorbing column density toward the X-ray source. An interesting prospect is thus to measure accurately the absorbing column density and the ISM metallicity and ionization structure toward X-ray binaries (or toward OB stars in the same region) in nearby galaxies. The identification of compact objects in dwarf galaxies is important as such to probe potential intermediate-mass black holes and to understand whether they participate in the formation of supermassive black holes through coalescence (e.g., \citealt{Mezcua2019a}). Finally, another issue at stake is to understand the relative contribution of Wolf-Rayet stars vs.\ X-ray binaries in the nebular He{~\sc ii}\ emission in low-metallicity star-forming galaxies \citep{Schaerer2019a}.
\section{Gas flows and exchanges in the CGM}\label{sec:cgm}
\textbf{
\begin{itemize}
\item \textit{Are there enough CGM clouds to sustain star formation through accretion?}
\item \textit{What is the origin of high velocity clouds and how do they trade matter with the halo?}
\item \textit{How are halos energized?}
\end{itemize}}
\bigskip
No model of galaxy formation and evolution is complete without considering gas flows around galaxies. The CGM, in particular, stretches out to about the virial radii of galaxies and represents a key component of a galaxy’s matter budget that strongly influences its evolution over cosmic timescales. The evolution of galaxies is indeed thought to be regulated by a competition and balance between the gas accretion rate via cool gas streams infalling from the cosmic web, gas cooling in the halo, and mergers (the gas ``source term''), star formation in galaxies (the gas ``sink term''), and outflows driven by intense star formation and/or active galactic nuclei (the gas ``loss term''; e.g., \citealt{Bouche2010a,Richter2017a,Tumlinson2017a}). Of course, this description is too simplistic: more complex microscopic processes must influence this cycle of accretion, star formation, and outflows. The question is then: what are those micro-physical processes that regulate the cooling and dissipation of the accreting and outflowing gas that maintain this apparent macroscopic gas balance in galaxies? The CGM in the halos of galaxies is a direct result of two of these processes -- gas accretion and outflows -- the two most important terms in the ``equation of gas balance in galaxies''. If we understand the nature and evolution of the CGM, we understand how galaxies acquire and lose their gas.
\subsection{The multi-phase CGM: a prime laboratory for galaxy evolution}
The CGM of galaxies is certainly not devoid of gas, perhaps containing up to approximately half of the total baryon content of the halo (e.g., \citealt{Werk2014a,Peek2015a}). The CGM is characterized by its multi-phase nature consisting of cold neutral/molecular gas clumps embedded in diffuse, highly-ionized gas filaments, with a wide range of temperature ($50-5\times10^6$\,K) and density ($10^{-5}-100$\cc) (e.g., \citealt{Jaffe2005a,Edge2010a,Salome2011a,Tremblay2012a,Emonts2018a}).
Absorption spectroscopy in the UV, where quasars (or bright galaxies) are used as background sources to illuminate the foreground CGM of a galaxy (Fig.\,\ref{cgm}), represents the key method to study the physical nature of the CGM.
The UV range in fact covers most of the diagnostic absorption lines needed to trace all of these gas phases from low to high redshifts, through far-UV absorption but also emission lines (including rest-frame far-UV and redshifted extreme-UV transitions), down to very low gas column densities. Complementary methods include the detection of diffuse ionized gas emission, molecular gas emission, dust absorption, or hot gas in X-ray emission lines.
\begin{figure}
\includegraphics[width=0.48\textwidth]{./Figures/richter_fig_CGM.pdf}
\caption{\small Left-hand side: synthetic spectrum at $R=120\,000$ spectral resolution showing the typical velocity structure of a CGM absorber in several UV transitions. Due to the multi-phase nature of the absorber, the low ions O{~\sc i}, Si{~\sc ii}, and Fe{~\sc ii}\ show velocity structure (blue vertical lines) that is different from that of the high ions O{~\sc vi}\ and C{~\sc iv}\ (red vertical lines). Right-hand side: velocity offset, $\Delta v$, (indicated by the green bar) between H{~\sc i}\ and C{~\sc iv}\ in another synthetic spectrum ($R=120\,000$), indicating the kinematic displacement of a metal patch residing in an intergalactic filament as a result of inhomogeneous metal mixing in the IGM. }\label{cgm}
\end{figure}
Past and present far-UV spectrographs (e.g., HST/STIS and HST/COS) have been used to study the CGM along quasar lines of sight, pioneering this observational approach to characterize the complex gas circulation processes around galaxies. The excellent sensitivity of COS enabled increasing the sample of absorbers at $z\lesssim1$ by about one order of magnitude as compared to previous studies \citep{Lehner2018a,Lehner2019a}. However, due to the limitations in sensitivity, only a few quasars per galaxy halo might be bright enough to spectroscopically investigate the CGM of foreground galaxies, hampering our knowledge of how the CGM functions as a dynamic reservoir for both infalling and outflowing gas. At present, only in M\,31 has it been possible to probe a significantly large ($\approx50$) number of QSOs, probing the CGM out to $1.5$ times the virial radius of the galaxy (\citealt{Lehner2015a,Howk2017a}, Lehner et al.\ in prep.). Increasing again the total number of absorbers by one or two orders of magnitude will be another game changer and will provide access to more reference sources such as M\,31.
Another limitation has been the spectral resolution, which has not enabled us to kinematically disentangle the different gas phases, preventing the accurate ionization modeling that is necessary to characterize the physical conditions in the gas and the role of this gas for galactic feedback. The conclusions that can be drawn from these simple experiments therefore remain afflicted with large uncertainties. As a result, our current understanding of the nature of the CGM is highly incomplete.
We now need to determine both the spatial distribution and large-scale kinematics of hydrogen and metal ions in the CGM around low- and high-redshift galaxies, as well as the physical conditions in the gas and its internal density structure. Because the enrichment of the IGM comes from galaxies that reside in the knots of the cosmological filaments, the metal distribution in the IGM might be inhomogeneous and patchy. Very high spectral resolution is required to systematically investigate velocity offsets between the H{~\sc i}\ and metal ions (e.g., C{~\sc iv}, Si{~\sc iii}) that could indicate a poor metal mixing in the IGM.
Ionization conditions need to be determined for individual phases in order to provide reliable estimates of the total gas mass. Key diagnostic species range from molecular species such as H$_2$ to highly-ionized species such as O{~\sc vi}. At a spectral resolution of $\sim3$\,km\,s$^{-1}$ at $1000$\,\AA, profile fitting of absorption features from metal ions in the CGM directly delivers the temperature of absorbing gas (by resolving thermal line broadening), which then can be used together with the observed ion ratios to self-consistently model the ionization structure of CGM absorbers and their internal gas pressure. On the other hand, the analysis of fine-structure lines such as C{~\sc ii}$^{\star}$ helps to constrain the local cooling rates.
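To make the resolution requirement concrete, we recall the standard Doppler-broadening relations (quoted here for orientation; the numbers are illustrative rather than instrument-specific):
\[ b_{\rm th}=\sqrt{\frac{2k_{\rm B}T}{A\,m_{\rm u}}}\simeq 0.129\,\sqrt{\frac{T\,[{\rm K}]}{A}}\ {\rm km\,s^{-1}}, \qquad \Delta v_{\rm res}\simeq \frac{c}{R}. \]
For cold H$_2$ ($A=2$, $T\simeq100$\,K) this gives $b_{\rm th}\simeq0.9${\,km\,s$^{-1}$}, i.e.\ a FWHM of $\simeq2.5${\,km\,s$^{-1}$} once a turbulent contribution of $\simeq1.2${\,km\,s$^{-1}$} is added in quadrature, which is just matched by $\Delta v_{\rm res}\simeq2.5${\,km\,s$^{-1}$} at $R=120\,000$; by contrast, gas traced by O{~\sc vi}\ at $T\sim3\times10^5$\,K ($A=16$) has $b_{\rm th}\sim18${\,km\,s$^{-1}$} and is easily resolved.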
Beyond the distribution of metals and the large-scale kinematics of the CGM, we also need to understand how galaxy halos are energized -- one of the crucial remaining frontiers in theoretical astrophysics. We are only starting to understand empirically how outflows from galaxies might be able to heat the halo gas. \cite{Hayes2016a} have, for instance, undertaken a comprehensive study of the halos of star-forming galaxies at $z\sim0.2$ with HST/ACS and COS. In this first study, targeting a single star-forming galaxy, they detected a halo of O{~\sc vi}\ $1032,1038$\,\AA\ emission. Since the O{~\sc vi}\ coronal doublet lines are the most significant coolant of gas at $T\sim10^{5-6}$\,K, this indicates that gas in this temperature range is cooling rapidly. The resolved O{~\sc vi}\ emission is extended over $26$\,kpc, i.e., ten times the size of the photoionized gas. About $1/6$ of the total O{~\sc vi}\ luminosity was estimated to come from resonantly scattered continuum radiation. From the spectra, \cite{Hayes2016a} derive a large column of O{~\sc vi}\ absorption which is outflowing at several hundred {\,km\,s$^{-1}$}. Since the UV spectrum of this galaxy resembles a stack of star-forming galaxies in the HST archive, this strongly suggests that this is a common phenomenology in star-forming galaxies. The mapping of this gas and its characterisation at high spectral resolution represents a crucial step in further constraining galaxy formation scenarios and provides the first direct evidence of how outflowing gas interacts with and heats the ambient halo gas (see also Sect.\,\ref{sec:layers}).
\subsection{The properties and role of high velocity clouds}
Sustainable star formation is achieved thanks to low-metallicity gas inflow. High velocity clouds (HVCs) are the prime candidates for such an inflow. HVCs are gaseous (apparently starless) clouds evolving in the halo of spiral galaxies and confined to the inner CGM. HVCs are, as such, useful probes of the gas exchanges between the disk and the halo (see \citealt{Lockman2019a}). While much progress has been made over the last decade or so, notably concerning their distances and metallicities (e.g., \citealt{Lehner2011a,Lehner2012a,Wakker2007a,Wakker2008a,Fox2016a}), many questions remain open: (1) do HVCs provide enough mass to sustain star formation? (2) Can metallicity and abundance patterns be used to infer their origin, and how does the chemical composition of a cloud change as it evolves through the halo? (3) How much dark matter do they contain?
Several avenues can be considered to improve our knowledge of the properties and role of HVCs. We refer to \cite{Lockman2019a} for a review, and we concentrate below on the need for UV observations. Since HVCs can cover a broad range of ionization conditions, knowledge of the ionization corrections to be applied is key to determining their metallicities. Since the gas may well not be in equilibrium, a decisive way to free oneself from these uncertainties is to observe all conceivable ionization states of the same atom, for instance [C{~\sc i}, C{~\sc ii}, C{~\sc iii}, C{~\sc iv}], [Si{~\sc i}, Si{~\sc ii}, Si{~\sc iii}, Si{~\sc iv}], or [N{~\sc i}, N{~\sc ii}, N{~\sc iii}, N{~\sc v}]. This requires the far-UV domain, especially access to N{~\sc ii} at $1084$\,\AA\ and O{~\sc vi} at $1032$ and $1038$\,\AA.
Determining the covering factor of HVCs as a function of distance and $z$-height by observing absorption lines toward a large sample of halo stars (B1 to B5, PAGB, BHB) with known distances (GAIA) would help infer the 3D structure of HVCs and determine whether the HVCs are disrupted and incorporated into the halo coronal gas as they fall or if they survive as neutral or ionized gas and reach the disk where they can fuel star formation. Only the H{~\sc i}\ Lyman lines enable measuring $N({\rm H})$ down to $\lesssim10^{15}$\,cm$^{-2}$ (e.g., \citealt{Lehner2006a,Fox2006a,Zech2008a}).
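The quantitative basis for this sensitivity is the linear regime of the curve of growth: for an unsaturated line, the column density follows from the measured equivalent width via the standard relation (quoted for orientation)
\[ N \simeq 1.13\times10^{20}\,\frac{(W_\lambda/\mbox{\AA})}{f\,(\lambda/\mbox{\AA})^2}\ {\rm cm}^{-2}, \]
so that the weak, high-order Lyman lines (with small oscillator strengths $f$) remain on the linear part of the curve of growth down to the quoted columns, whereas Lyman~$\alpha$ alone saturates at much lower $N({\rm H})$.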
\section{2020-2035 landscape}
The landscape of available observatories in the period 2020-2035 is not expected to change significantly as far as far-UV spectrographs or spectropolarimeters are concerned. Proposals for probe-class missions (e.g., CETUS for NASA, CASTOR for CSA) are being put forward to bridge the gap between HST and next-generation UV missions, but they should focus on surveys complementing those already planned in other wavelength ranges in the 2020s, notably around the quest for dark energy. Other already proposed or planned missions with access to the far-UV have requirements that do not enable the science objectives described in this paper (e.g., HabEx with $R<60\,000$ and $\lambda_{\rm min}=1150$\,\AA, or WSO-UV with an effective area $A_{\rm eff}\lesssim10^3$\,cm$^{2}$, $R<50\,000$, and $\lambda_{\rm min}=1150$\,\AA). There is currently, to our knowledge, no plan for a UV spectropolarimeter apart from the proposed NASA mission LUVOIR, which is the only planned mission with requirements strongly overlapping with the ones we advocate for (Sect.\,\ref{sec:req}).
This does not imply, however, that the science topics described in this document will not be tackled until 2035 and afterwards.
ALMA is expected to continue characterizing the molecular chemistry in the ISM, but it is unable to access directly the reservoir of H$_2$ or to probe the various ISM phases. Future IR space missions may, on the other hand, provide access to many ISM tracers, including molecular and atomic fine-structure lines in a wide range of objects. Hence the ESA M5 proposed mission SPICA, if selected, or the NASA Origins mission submitted to the 2020 Decadal survey (with high spectral resolution $R>10^{5}$ across $100-200${\,$\mu$m}) could greatly improve our knowledge of the ISM in general, in particular of PDRs and molecular clouds in the Milky Way and in nearby galaxies. However, these missions will not provide access to many important transitions (e.g., cold H$_2$, H{~\sc i}, hot gas) and are expected to primarily tackle the physics of star-forming regions rather than more diffuse gas, quiescent regions (i.e., irradiated by the general interstellar radiation field), or potential pristine gas pockets or HVCs. Polarimetry is proposed for both missions, but only in broad bands with no spectral resolution.
Furthermore, MUSE on the VLT is enabling a significant leap in the characterization of the ionized gas in nearby objects, and such instruments provide important comparisons with ISM properties derived in the far-UV, for instance regarding the ISM enrichment. The ELT will focus more on resolved stellar populations, star-forming systems, and high-$z$ galaxies, but will be an extremely valuable observatory to use in synergy with a dedicated far-UV mission, both for the properties of the stellar populations in various wavelength ranges and for the interplay between the ISM and star formation. SKA will be a game changer as far as the distribution and properties of atomic hydrogen in and around galaxies are concerned (small-scale structures, dynamics, matter exchange around galaxies), and a far-UV mission should be capable of characterizing the metals and molecular gas in the low surface brightness regions and filaments that will be discovered with SKA.
In summary, the landscape of existing, planned, or proposed missions until and after 2035 will enable a multi-wavelength view of galaxies, but one that misses the far-UV domain and its unique tracers. There is, therefore, an opportunity for a far-UV mission to fill this gap in the landscape, with great potential synergies.
\section{Conclusion and requirements}\label{sec:req}
\begin{table*}
\caption{High-level requirements. }\label{tab:req}
\begin{tabular}{l|p{10cm}}
\hline
\hline
{\bf Requirement} & {\bf Justification and comments}\\
\hline
\textit{\textbf{Wavelength range}} & \\
\hline
$\lambda_{\rm min} {\rm (strict)}=1020$\,\AA & H{~\sc i}\ Lyman $\beta$ $1025$\,\AA; strongest H$_2$\ Lyman bands: $1030-1155$\,\AA; CO up to $1455$\,\AA; O{~\sc vi} $1032$\,\AA; N{~\sc ii} $1084$\,\AA; Ar{~\sc i} $1048, 1066$\,\AA; O{~\sc i} $1039, 1026$\,\AA. \\
$\lambda_{\rm min} {\rm (preferred)} = 900$\,\AA & H{~\sc i}\ Lyman series down to $912$\,\AA; H$_2$\ lines $912-1155$\,\AA; CO lines $912-1455$\,\AA; many O{~\sc i}\ lines of various strengths $916-988$\,\AA. Rest-frame Lyman continuum (no resolution requirement). \\
$\lambda_{\rm max} = 3100$\,\AA & Mg{~\sc ii} $2800$\,\AA; Mg{~\sc i}\ $2853$\,\AA; OH lines $3072-3082$\,\AA. \\
\hline
\textit{\textbf{Resolution}} & \\
\hline
$R=\lambda/\Delta\lambda {\rm (strict)} >120\,000$ & Resolve line profiles from cold H$_2$\ with $T\simeq100$\,K and $v_{\rm turb}\simeq1.2$\,{\,km\,s$^{-1}$}; resolve line profiles, separating thermal and turbulent contributions, for warm gas: Fe{~\sc ii}\ with $T\simeq6500$\,K and $v_{\rm turb}\simeq1.0$\,{\,km\,s$^{-1}$}; separate velocity components with $\Delta{v}\simeq3$\,{\,km\,s$^{-1}$}; resolve profiles from rotational levels in H$_2$\ and CO bands; resolve isotopic shifts of atomic species. \\
$R {\rm (preferred)} > 200\,000$ & Resolve line profiles from cold gas with $v_{\rm turb}\leq1.0$\,{\,km\,s$^{-1}$}; resolve isotopes with $\Delta{v}\leq1.5$\,{\,km\,s$^{-1}$}. \\
$R\sim30\,000$ & FUSE/COS-quality spectra toward individual stars a few Mpc away; minimum resolution for GSA for absorption lines. \\
Spatial (PSF or aperture size) $=10$\,mas & Observe individual lines of sight toward stars in galaxies a few Mpc away; tomographic mapping of chemical properties. \\
\hline
\textit{\textbf{Sensitivity}} & \\
\hline
SNR $> 500$ & Detect faint features in a reasonable amount of time (signatures of GSA in the Milky Way, scarce elements, fine-structure lines in the extragalactic ISM...). This implies detectors with limited fixed-pattern noise that can deal with high count rates. \\
$A_{\rm eff}>6\,000$\,cm$^{2}$ & $3\times$ the HST/COS value. Needed to reach $A_V>4$. Achieved with a $5$\,m (resp.\ $8$\,m) telescope if the efficiency (optical $\times$ detector) reaches $3$\% (resp.\ $1.3$\%). \\
$A_{\rm eff}>40\,000$\,cm$^{2}$ & $R\sim10^5$ spectra with main features toward galaxies $1-3$\,Mpc away; current LUVOIR-B designs for LUMOS and POLLUX. \\
\hline
magnitude [AB] $\approx27$ & Individual stars in I\,Zw\,18 ($18$\,Mpc) with $R\approx30\,000$. \\
\hline
\textit{ \textbf{Observational modes}} & \\
\hline
Apertures: MOS \& pinhole/long-slit & Ability to observe single stars / QSOs either sequentially or simultaneously; small enough to warrant required high spectral resolution. \\
Polarimetric mode & Magnetic fields. Linear (QU) polarization required. Circular (V) + linear preferred for evidence of chirality. Ability to use full spectroscopic mode. \\
\hline
\end{tabular}
\end{table*}
The science requirements described in this paper dictate the instrumental specifications in Table\,\ref{tab:req}. Concerning the polarimetric capability, since birefringent materials do not transmit light below $1200$\,\AA, new techniques have to be developed. The ability to operate in a pure spectroscopic mode is important. Considering the sensitivity and SNR requirements, detectors have to be able to withstand high count rates without saturating.
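For orientation, the effective-area entries in Table\,\ref{tab:req} follow from simple geometry. Writing $\epsilon$ for the end-to-end (optical $\times$ detector) efficiency and $D$ for the primary diameter, our back-of-the-envelope check reads
\[ A_{\rm eff}=\epsilon\,\pi\left(D/2\right)^2, \qquad \epsilon=3\%,\ D=5\,{\rm m}\ \Longrightarrow\ A_{\rm eff}\simeq 0.03\times 1.96\times10^{5}\,{\rm cm^{2}}\simeq 5.9\times10^{3}\,{\rm cm^{2}}, \]
and likewise $\epsilon=1.3\%$ with $D=8$\,m gives $A_{\rm eff}\simeq6.5\times10^{3}$\,cm$^{2}$, consistent with the $A_{\rm eff}>6\,000$\,cm$^{2}$ requirement.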
Long, narrow-slit spectroscopy or small-aperture pinholes are well adapted to the observation of single stars (in the Milky Way or in nearby galaxies) or QSO lines of sight. A multi-object or integral-field spectrograph would be required for crowded fields or extended regions, in particular for nearby galaxies (either multiple stars within a galaxy, or multiple QSO sightlines intersecting its ISM+CGM).
All in all, the proposed mission corresponds to an L-class mission, assuming current and proposed technology developments such as those presented in the NASA Large Ultraviolet/Optical/Infrared Surveyor (LUVOIR) report \citep{TheLUVOIRTeam2018a}. LUVOIR is one of four flagship-class missions being studied by NASA in the framework of the US 2020 Decadal Survey. LUVOIR is a multi-purpose observatory, designed to address very ambitious scientific questions at the core of modern astrophysics, thanks to four instruments. The European community is involved in the LUVOIR project through the proposal of one of the four instruments: POLLUX (PIs J.-C.\ Bouret \& C.\ Neiner), a high-resolution spectropolarimeter operating at UV wavelengths, designed for the $15$-meter primary mirror option (LUVOIR-A). POLLUX, whose current development is funded by CNES, is at present the only non-US instrument proposed for LUVOIR. POLLUX uses multiple reflections to circumvent the transmission issue, and is well adapted to the requirements needed for the magnetic field science case in this document \citep{Muslimov2018a,LeGal2019a}. Technology challenges will include the characterization of reflective coating materials (see \citealt{Muslimov2018a}).
Globally, the POLLUX and LUMOS instruments proposed for LUVOIR enable the science objectives described in this paper. We therefore advocate for an ESA contribution to an international effort such as LUVOIR, centred on the POLLUX instrument. It should be emphasized, however, that since proper coatings have to be used in order to obtain useful effective areas around and below $1000$\,\AA, LUVOIR itself is not completely optimized for this specific wavelength range, and the question of a dedicated far-UV mission should be raised.
\setlength{\bibsep}{0.0pt}
\section{Introduction}
Fix a prime number $p$, and let $K/\Q_p$ be a finite extension with residue field
$k$ and absolute Galois group $G_K := \Gal(\overline{K}/K)$. In the paper \cite{cegsB}, inspired by a construction of Kisin~\cite{kis04} in the
setting of formal deformations, we constructed and began to study the geometry of certain moduli stacks ${\mathcal Z}^{\mathrm{dd}}$. The stacks ${\mathcal Z}^{\mathrm{dd}}$ can be thought of as moduli of two-dimensional tamely potentially Barsotti--Tate representations of $G_K$;\ they are
in fact moduli stacks of \'etale $\varphi$-modules with descent data, and by construction are
equipped with a partial resolution
\[ {\mathcal C}^{\mathrm{dd},\operatorname{BT}} \to {\mathcal Z}^{\mathrm{dd}} \]
where ${\mathcal C}^{\mathrm{dd},\operatorname{BT}}$ is a moduli stack of rank two Breuil--Kisin modules with tame descent data and height one.
The purpose of this paper is to make an explicit study of the morphism $ {\mathcal C}^{\mathrm{dd},\operatorname{BT}} \to {\mathcal Z}^{\mathrm{dd}}$ at the level of irreducible components. To be precise, for each two-dimensional tame inertial type $\tau$ there are closed substacks ${\mathcal C}^{\tau,\operatorname{BT}} \subset {\mathcal C}^{\mathrm{dd},\operatorname{BT}}$ and ${\mathcal Z}^{\tau} \subset {\mathcal Z}^{\mathrm{dd}}$ corresponding to representations having inertial type $\tau$, and a morphism ${\mathcal C}^{\tau,\operatorname{BT}} \to {\mathcal Z}^{\tau}$. These are $p$-adic formal algebraic stacks;\ let ${\mathcal C}^{\tau,\operatorname{BT},1}$ be the special fibre of ${\mathcal C}^{\tau,\operatorname{BT}}$, and ${\mathcal Z}^{\tau,1}$ its scheme-theoretic image in ${\mathcal Z}^{\tau}$ (in the sense of \cite{EGstacktheoreticimages}). These were proved in \cite{cegsB} to be equidimensional of dimension $[K:\Q_p]$. Moreover the finite type points $\Spec({\mathbb F}) \to {\mathcal Z}^{\tau,1}$ are in bijection with Galois representations $G_K \to \GL_2({\mathbb F})$ admitting a potentially Barsotti--Tate lift of type $\tau$.
(In fact ${\mathcal C}^{\tau,\operatorname{BT},1}$ is shown in \cite{cegsB} to be reduced, from which it follows that ${\mathcal Z}^{\tau,1}$ is also reduced. The special fibre of ${\mathcal Z}^{\tau}$ need not be reduced, so it need not equal ${\mathcal Z}^{\tau,1}$, but it will be proved in the sequel \cite{cegsA} that it is \emph{generically reduced}, using the results of this paper as input.)
Much of the work in our study of the irreducible components of ${\mathcal Z}^{\tau,1}$
involves an explicit construction of families of extensions of
characters. Intuitively, a natural source of ``families'' of representations
$\overline{r}:G_K \to\GL_2(\Fbar_p)$ is given by the
extensions of two fixed characters. Indeed, given two
characters~$\chi_1,\chi_2: G_K \to\overline{{\mathbb F}}^\times_p$, the
$\Fbar_p$-vector space $\Ext^1_{G_K}(\chi_2,\chi_1)$
is usually $[K:\Q_p]$-dimensional, and a back-of-the-envelope
calculation
suggests that as a stack the collection of these representations should have dimension $[K:\Q_p]-2$:\ the difference between an extension and a
representation counts for a $-1$, as does the~$\GG_m$ of endomorphisms. Twisting~$\chi_1,\chi_2$ independently
by unramified characters gives a candidate for a $[K:\Q_p]$-dimensional
family;\ if contained in ${\mathcal Z}^{\tau}$, then since~${\mathcal Z}^{\tau}$ is equidimensional of dimension~$[K:\Q_p]$, the
closure of such a family should be an irreducible component of~${\mathcal Z}^{\tau}$.
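Schematically, and under the assumption that $\Ext^1_{G_K}(\chi_2,\chi_1)$ has the generic dimension $[K:\Q_p]$, this count reads
\[ \underbrace{[K:\Q_p]}_{\dim \Ext^1_{G_K}(\chi_2,\chi_1)} \;-\; \underbrace{1}_{\text{extension vs.\ representation}} \;-\; \underbrace{1}_{\GG_m\text{ of endomorphisms}} \;+\; \underbrace{2}_{\text{unramified twists of }\chi_1,\chi_2} \;=\; [K:\Q_p]. \]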
Since there are only finitely many possibilities for the restrictions
of the~$\chi_i$ to the inertia subgroup~$I_K$, this gives a finite list of
maximal-dimensional families. On the other hand, there are up to
unramified twist only finitely many irreducible two-dimensional
representations of~$G_K$, which suggests that the
irreducible representations should correspond to $0$-dimensional
substacks. Together these considerations suggest that the irreducible
components of our moduli stack should be given by the closures of the
families of extensions considered in the previous paragraph, and in
particular that the irreducible representations should arise as limits
of reducible representations. This could not literally be the case
for families of Galois representations, rather than families of
\'etale $\varphi$-modules, and may seem surprising at first glance,
but it is indeed what happens.
In the body of the paper we make this analysis
rigorous, and we show
that the different families that we have constructed exhaust the
irreducible components.
We can therefore label the irreducible components of~${\mathcal Z}^{\tau,1}$
as follows. A component is specified by an ordered pair of characters~$I_K \to\Fbar_p^\times$, which via local class field theory corresponds to a pair of characters~$k^\times\to\overline{{\mathbb F}}^\times_p$. Such a pair can be thought of as the highest weight of a \emph{Serre
weight}:\ an irreducible $\Fbar_p$-representation of $\GL_2(k)$. To each irreducible component we have thus associated a Serre weight. (In fact, we need to make a
shift in this dictionary, corresponding to half the sum of the
positive roots of~$\GL_2(k)$, but we ignore this for the purposes of
this introduction.)
This might seem artificial, but in fact it is completely natural, for
the following reason. Following the pioneering work of
Serre~\cite{MR885783} and Buzzard--Diamond--Jarvis~\cite{bdj} (as
extended in~\cite{MR2430440} and~\cite{gee061}), we now know how to
associate a set $W(\overline{r})$ of Serre weights to each continuous
representation $\overline{r}:G_K\to\GL_2(\Fbar_p)$, with the property
that if $F$ is a totally real field and $\overline{\rho}:G_F\to\GL_2(\Fbar_p)$
is an irreducible representation coming from a Hilbert modular form,
then the possible weights of Hilbert modular forms giving rise
to~$\overline{\rho}$ are precisely determined by the sets
$W(\overline{\rho}|_{G_{F_v}})$ for places $v|p$ of~$F$ (see for example
\cite{blggu2,geekisin,gls13}).
Going back to our labelling of irreducible components
above, we have associated a Serre weight~$\sigmabar$ to each
irreducible component of~${\mathcal Z}^{\tau,1}$. The inertial local Langlands
correspondence assigns a finite set of Serre weights
$\operatorname{JH}(\sigmabar(\tau))$ to~$\tau$, the Jordan--H\"older factors of the
reduction mod~$p$ of the representation~$\sigma(\tau)$ of~$\GL_2({\mathcal O}_K)$
corresponding to~$\tau$. One of our main theorems is that the components of ${\mathcal Z}^{\tau,1}$ are labeled precisely by the Serre weights $\sigmabar \in \operatorname{JH}(\sigmabar(\tau))$. Furthermore the component labeled by $\sigmabar$ has a dense set of finite type points $\overline{r}$ with $\sigmabar \in W(\overline{r})$. In the sequel \cite{cegsA} this will be strengthened to the statement that the representations~$\overline{r}$ on the
irreducible component labelled by~$\sigmabar$ are precisely the
representations with $\sigmabar\in W(\overline{r})$.
We also study the irreducible components of the stack~${\mathcal C}^{\tau,\operatorname{BT},1}$. If $\tau$ is a non-scalar principal series type then the set~$\operatorname{JH}(\sigmabar(\tau))$ can be identified with a subset of the
power set~${\mathcal S}$ of the set of embeddings~$k\hookrightarrow\Fbar_p$ (hence, after fixing one such embedding, with a collection ${\mathcal P}_{\tau}$ of subsets of ${\mathbb Z}/f{\mathbb Z}$). For generic
choices of~$\tau$, this subset is the whole of~${\mathcal S}$.
We are able to show, using the theory of Dieudonn\'e modules, that for
any non-scalar principal series type~$\tau$ the irreducible components of~${\mathcal C}^{\tau,\operatorname{BT},1}$
can be identified with~${\mathcal S}$, and those irreducible components not
corresponding to elements of~$\operatorname{JH}(\sigmabar(\tau))$ have image
in~${\mathcal Z}^\tau$ of positive codimension. There is an analogous statement for cuspidal types, while for scalar types,
both~${\mathcal C}^{\tau,\operatorname{BT},1}$ and~${\mathcal Z}^{\tau,1}$ are irreducible.
To state our main results precisely we must first introduce a bit more notation. Fix a tame inertial type $\tau$ and a uniformiser $\pi$ of $K$. Let $L$ be the unramified quadratic extension of $K$, and write $f$ for the inertial degree of $K/\Q_p$. We set $K' = K(\pi^{1/(p^f-1)})$ if $\tau$ is principal series, and set $K' = L(\pi^{1/(p^{2f}-1)})$ if $\tau$ is cuspidal. Our moduli stacks of $p$-adic Hodge theoretic objects with descent data will have descent data from $K'$ to $K$. Let $f'$ be the inertial degree of $K'/\Q_p$, so that $f' = f$ if the type $\tau$ is principal series, while $f' = 2f$ if the type $\tau$ is cuspidal.
We say that a subset $J \subset {\mathbb Z}/f'{\mathbb Z}$ is a \emph{shape} if:
\begin{itemize}
\item $\tau$ is scalar and $J = \varnothing$,
\item $\tau$ is a non-scalar principal series type and $J$ is arbitrary, or
\item $\tau$ is cuspidal and $J$ has the property that $i \in J$ if and only if $i+f \not\in J$.
\end{itemize}
If $\tau$ is non-scalar then there are exactly $2^f$ shapes.
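As a sanity check on this count in the simplest case $f=1$:\ for a non-scalar principal series type we have $f'=1$, and the shapes are the two subsets $\varnothing$ and $\{0\}$ of ${\mathbb Z}/f'{\mathbb Z}=\{0\}$; for a cuspidal type we have $f'=2$, and the condition that $i\in J$ if and only if $i+1\notin J$ singles out precisely $\{0\}$ and $\{1\}$. In either case we recover $2^f=2$ shapes. In general a cuspidal shape is determined by its intersection with $\{0,1,\dots,f-1\}$, which may be arbitrary, again giving $2^f$ shapes.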
As above, write $\sigma(\tau)$ for the representation of $\GL_2({\mathcal O}_K)$ corresponding to $\tau$ under the inertial local Langlands correspondence of Henniart. The Jordan--H\"older factors of the reduction mod $p$ of $\sigma(\tau)$ are parameterized by an explicit set of shapes ${\mathcal P}_{\tau}$, and we write $\sigmabar(\tau)_J$ for the factor corresponding to $J$.
To each shape $J$, we will associate a closed substack $\overline{\mathcal{C}}(J)$ of ${\mathcal C}^{\tau,\operatorname{BT},1}$. The stack $\overline{{\mathcal Z}}(J)$ is then defined to be the scheme-theoretic image of $\overline{{\mathcal C}}(J)$ under the map ${\mathcal C}^{\tau,\operatorname{BT},1} \to {\mathcal Z}^{\tau,1}$, in the sense of \cite{EGstacktheoreticimages}. Then the following is our main result.
\begin{ithm}\label{thm:main thm cegsC} The irreducible components of ${\mathcal C}^{\tau,\operatorname{BT},1}$ and ${\mathcal Z}^{\tau,1}$ are as follows.
\begin{enumerate}
\item The irreducible
components of~${\mathcal C}^{\tau,\operatorname{BT},1}$ are precisely the~$\overline{{\mathcal C}}(J)$
for shapes~$J$, and if $J\ne J'$ then~$\overline{{\mathcal C}}(J)\ne\overline{{\mathcal C}}(J')$.
\item The irreducible
components of~${\mathcal Z}^{\tau,1}$ are precisely the~$\overline{{\mathcal Z}}(J)$
for shapes~$J\in{\mathcal P}_\tau$, and if $J\ne J'$ then~$\overline{{\mathcal Z}}(J)\ne\overline{{\mathcal Z}}(J')$.
\item For each $J \in {\mathcal P}_{\tau}$, there is a dense open substack ${\mathcal U}$ of
$\overline{{\mathcal C}}(J)$ such that the map $\overline{{\mathcal C}}(J)
\to \overline{{\mathcal Z}}(J)$ restricts to an open immersion on ${\mathcal U}$.
\item
For each $J\in{\mathcal P}_\tau$, there is a dense set of finite type
points of $\overline{{\mathcal Z}}(J)$ with the property that the corresponding Galois
representations have $\sigmabar(\tau)_J$ as a Serre weight, and which
furthermore admit a unique Breuil--Kisin model of type~$\tau$.
\end{enumerate}
\end{ithm}
\begin{iremark}\label{rem:phantom-weights}
We emphasize in Theorem~\ref{thm:main thm cegsC} that the components of ${\mathcal Z}^{\tau,1}$ are indexed by shapes $J \in {\mathcal P}_{\tau}$, \emph{not} by all shapes. If $J \not\in {\mathcal P}_{\tau}$, then the stack $\overline{{\mathcal Z}}(J)$ has dimension strictly smaller than $[K:\Q_p]$, and so is properly contained in some component of ${\mathcal Z}^{\tau,1}$. We anticipate that the loci $\overline{{\mathcal Z}}(J)$ will nevertheless be of interest when $J \not\in {\mathcal P}_\tau$:\ we expect that they will correspond to ``phantom'' (partial weight one) Serre weights of relevance to the geometric variant of the weight part of Serre's conjecture proposed by Diamond--Sasaki \cite{DiamondSasaki}. This will be the subject of future work.
\end{iremark}
We assume that~$p>2$ in much of the paper; while we expect that our
results should also hold if~$p=2$, there are several reasons to
exclude this case. We are frequently able to considerably simplify our
arguments by assuming that the extension~$K'/K$ is not just tamely
ramified, but in fact of degree prime to~$p$; this is problematic
when~$p=2$, as the consideration of cuspidal types involves a
quadratic unramified extension. Furthermore, in the sequel \cite{cegsA} we will use results on the
Breuil--M\'ezard conjecture which ultimately depend on automorphy
lifting theorems that are not available in the case $p=2$ at present
(although it is plausible that the methods of~\cite{Thornep=2} could
be used to prove them).
We conclude this introduction by discussing the relationship between our results and those of \cite{EGmoduli}. Two of us (M.E. and T.G.) have constructed moduli stacks ${\mathcal X}_{d}$ of rank~$d$ \'etale $(\varphi,\Gamma)$-modules for $K$, as well as substacks ${\mathcal X}_d^{\lambda,\tau}$ which may be regarded as stacks of potentially crystalline representations of $G_K$ with inertial type $\tau$ and Hodge type~$\lambda$. When $d=2$ and $\lambda$ is the trivial Hodge type, these are stacks ${\mathcal X}_2^{\tau,\operatorname{BT}}$ of potentially Barsotti--Tate representations of $G_K$ of inertial type $\tau$, and we anticipate that ${\mathcal X}_2^{\tau,\operatorname{BT}}$ is isomorphic to ${\mathcal Z}^{\tau,\operatorname{BT}}$ (but since we do not need this fact, we have not proved it).
One of the main results of the book \cite{EGmoduli} is that the irreducible components of the underlying reduced stacks ${\mathcal X}_{d,\operatorname{red}}$ are in bijection with the irreducible representations of $\GL_d(k)$. This bijection is characterised in essentially exactly the same way as our description of the components of ${\mathcal Z}^{\tau,1}$ in this paper:\ a Serre weight has a highest weight, which corresponds to a tuple of inertial characters, which gives rise to a family of successive extensions of $1$-dimensional representations. Then the closure of this family is a component of ${\mathcal X}_{d,\operatorname{red}}$.
The crucial difference between our setting and that of \cite{EGmoduli}
is that we could prove in \cite{cegsB} that the stacks ${\mathcal Z}^{\tau,1}$
are reduced.
The proof makes use of the resolution ${\mathcal C}^{\tau,\operatorname{BT},1} \to {\mathcal Z}^{\tau,1}$ and the fact that we are able to relate the stack ${\mathcal C}^{\tau,\operatorname{BT}}$ to a local model at Iwahori level, whose special fibre is known to be reduced. In the sequel \cite{cegsA} we combine the characterisation of the components of ${\mathcal Z}^{\tau,1}$ from this paper with the reducedness of ${\mathcal Z}^{\tau,1}$ from \cite{cegsB} to prove that the special fibre of ${\mathcal Z}^{\tau}$ is \emph{generically} reduced. This will then allow us to completely characterise \emph{all} of the finite type points on each component of ${\mathcal Z}^{\tau,1}$ (not just a dense set of points), and to prove geometrisations of the Breuil--M\'ezard conjecture and of the weight part of Serre's conjecture for the stacks ${\mathcal Z}^{\mathrm{dd},1}$. Furthermore, by means of a comparison of versal rings, these results can be transported to the stacks ${\mathcal X}^{\tau,\operatorname{BT}}_2$ of \cite{EGmoduli} as well.
\subsection{Acknowledgements}We would like to thank
Kalyani Kansal for helpful comments.
\subsection{Notation and conventions}\label{subsec: notation}
\subsubsection*{Topological groups} If~$M$ is an abelian
topological group with a linear topology, then as
in~\cite[\href{https://stacks.math.columbia.edu/tag/07E7}{Tag
07E7}]{stacks-project} we say that~$M$ is {\em complete} if the
natural morphism $M\to \varprojlim_i M/U_i$ is an isomorphism,
where~$\{U_i\}_{i \in I}$ is some (equivalently any) fundamental
system of neighbourhoods of~$0$ consisting of subgroups. Note that in
some other references this would be referred to as being~{\em complete
and separated}. In particular, any $p$-adically complete ring~$A$ is
by definition $p$-adically separated.
\subsubsection*{Galois theory and local class field theory} If $M$ is a field, we let $G_M$ denote its
absolute Galois group.
If~$M$ is a global field and $v$ is a place of $M$, let $M_v$ denote
the completion of $M$ at $v$. If~$M$ is a local field, we write~$I_M$
for the inertia subgroup of~$G_M$.
Let $p$ be a prime number.
Fix a finite extension $K/\Q_p$, with
ring of integers ${\mathcal O}_K$ and residue field $k$. Let $e$ and $f$
be the ramification and inertial degrees of $K$, respectively, and
write $\# k=p^f$ for the cardinality of~$k$.
Let $K'/K$ be a finite
tamely ramified Galois extension. Let $k'$ be the residue field of $K'$, and let $e',f'$ be the
ramification and inertial degrees of $K'$ respectively.
Our representations of $G_K$ will have coefficients in $\Qbar_p$,
a fixed algebraic closure of $\Q_p$ whose residue field we denote by~$\Fbar_p$. Let $E$ be a finite
extension of $\Q_p$ contained in $\Qbar_p$ and containing the image of every
embedding of $K'$ into $\Qbar_p$. Let ${\mathcal O}$ be the ring of integers in
$E$, with uniformiser $\varpi$ and residue field ${\mathbb F} \subset
\Fbar_p$.
Fix an embedding $\sigma_0:k'\hookrightarrow{\mathbb F}$, and recursively define
$\sigma_i:k'\hookrightarrow{\mathbb F}$ for all $i\in{\mathbb Z}$ so that
$\sigma_{i+1}^p=\sigma_i$; of course, we have $\sigma_{i+f'}=\sigma_i$
for all~$i$. We let $e_i\in k'\otimes_{\F_p} {\mathbb F}$ denote the idempotent
satisfying $(x\otimes 1)e_i=(1\otimes\sigma_i(x))e_i$ for all $x\in
k'$; note that $\varphi(e_i)=e_{i+1}$. We also denote by $e_i$ the
natural lift of $e_i$ to an idempotent in
$W(k')\otimes_{\Z_p}{\mathcal O}$. If $M$ is an
$W(k')\otimes_{\Z_p}{\mathcal O}$-module, then we write $M_i$ for
$e_iM$.
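Concretely, the idempotents $e_i$ arise from the standard decomposition
\[ k'\otimes_{\F_p}{\mathbb F} \buildrel \sim \over \longrightarrow \prod_{i\in{\mathbb Z}/f'{\mathbb Z}}{\mathbb F}, \qquad x\otimes y\mapsto \left(\sigma_i(x)y\right)_{i\in{\mathbb Z}/f'{\mathbb Z}}, \]
under which $e_i$ corresponds to the tuple supported in the $i$th factor; in particular any $W(k')\otimes_{\Z_p}{\mathcal O}$-module $M$ decomposes as $M=\oplus_{i\in{\mathbb Z}/f'{\mathbb Z}}M_i$.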
We write ${\operatorname{Art}}_K \colon K^\times\to W_K^{\mathrm{ab}}$ for
the isomorphism of local class field theory, normalised so that
uniformisers correspond to geometric Frobenius elements.
\begin{lemma}\label{lem:cft} Let $\pi$ be any uniformiser
of ${\mathcal O}_K$.
The composite $I_K \to {\mathcal O}_K^{\times} \to k^{\times}$, where the map
$I_K \to {\mathcal O}_K^\times$ is induced by the restriction of ${\operatorname{Art}}_K^{-1}$,
sends an element $g \in I_K$ to the image in $k^{\times}$ of
$g(\pi^{1/(p^f-1)})/\pi^{1/(p^f-1)}$.
\end{lemma}
\begin{proof}
This follows (for example) from the construction in \cite[Prop.~4.4(iii), Prop.~4.7(ii), Cor.~4.9, Def.~4.10]{MR2487860}.
\end{proof}
For each $\sigma\in \Hom(k,\Fbar_p)$ we
define the fundamental character $\omega_{\sigma}$ corresponding to~$\sigma$ to be the composite \[\xymatrix{I_K \ar[r] & {\mathcal O}_{K}^{\times}\ar[r] & k^{\times}\ar[r]^{\sigma} & \Fpbar^{\times},}\]
where the map $I_K \to {\mathcal O}_K^\times$ is induced by the restriction of ${\operatorname{Art}}_K^{-1}$.
Let $\varepsilon$ denote the $p$-adic cyclotomic
character and $\overline{\varepsilon}$ the mod~$p$ cyclotomic
character, so that $\prod_{\sigma \in \Hom(k,\Fbar_p)}
\omega_{\sigma}^{e} = \overline{\varepsilon}$. We will often identify
characters of $I_K$ with characters of $k^{\times}$ via the Artin
map.
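For example, if $K=\Q_p$ (so that $e=f=1$), there is a single embedding $\sigma$, and the displayed product relation reduces to $\omega_\sigma=\overline{\varepsilon}$; that is, the unique fundamental character is the mod~$p$ cyclotomic character. More generally, for unramified~$K$ the $\omega_\sigma$ are the fundamental characters of niveau~$f$ in the usual terminology.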
\subsubsection*{Inertial local Langlands} A two-dimensional \emph{tame inertial type} is (the isomorphism
class of) a tamely ramified representation
$\tau : I_K \to \GL_2(\Zbar_p)$ that extends to a representation of $G_K$ and
whose kernel is open. Such a representation is of the form $\tau
\simeq \eta \oplus \eta'$, and we say that $\tau$ is a \emph{tame principal series
type} if
$\eta,\eta'$ both extend to characters of $G_K$. Otherwise,
$\eta'=\eta^q$, and $\eta$ extends to a character
of~$G_L$, where~$L/K$ is a quadratic unramified extension.
In this case we say
that~$\tau$ is a \emph{tame cuspidal type}.
Henniart's appendix to \cite{breuil-mezard}
associates a finite dimensional irreducible $E$-representation $\sigma(\tau)$ of
$\GL_2({\mathcal O}_K)$ to each inertial type $\tau$; we refer to this association as the {\em
inertial local Langlands correspondence}. Since we are only working
with tame inertial types, this correspondence can be made very
explicit as follows.
If $\tau
\simeq \eta \oplus \eta'$ is a tame principal series type, then we
also write $\eta,\eta':k^\times\to{\mathcal O}^\times$ for the
multiplicative characters determined by
$\eta\circ{\operatorname{Art}}_K|_{{\mathcal O}_{K}^\times},\eta'\circ{\operatorname{Art}}_K|_{{\mathcal O}_{K}^\times}$
respectively. If $\eta=\eta'$, then we set
$\sigma(\tau)=\eta\circ\det$. Otherwise, we write $I$ for the Iwahori subgroup of $\GL_2({\mathcal O}_K)$ consisting of
matrices which are upper triangular modulo a uniformiser~$\varpi_K$
of~$K$, and write $\chi = \eta'\otimes \eta:
I\to{\mathcal O}^\times$ for the character \[
\begin{pmatrix}
a&b\\\varpi_K c&d
\end{pmatrix}\mapsto \eta'(\overline{a})\eta(\overline{d}).\] Then $\sigma(\tau) := \Ind_I^{\GL_2({\mathcal O}_K)}
\chi$.
If $\tau=\eta\oplus\eta^q$ is a tame cuspidal type, then as above we
write~$L/K$ for a quadratic unramified extension, and~$l$ for the
residue field of~${\mathcal O}_L$. We write
$\eta :l^\times\to{\mathcal O}^\times$ for the
multiplicative character determined by
$\eta\circ{\operatorname{Art}}_L|_{{\mathcal O}_{L}^\times}$; then $\sigma(\tau)$ is the
inflation to $\GL_2({\mathcal O}_K)$ of the cuspidal representation of $\GL_2(k)$
denoted by~$\Theta(\eta)$ in~\cite{MR2392355}.
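For later orientation we record the (standard) dimensions of these representations, writing $q=\#k$:\ we have $\dim_E\sigma(\tau)=1$ in the scalar case, $\dim_E\sigma(\tau)=[\GL_2({\mathcal O}_K):I]=\#{\mathbb P}^1(k)=q+1$ in the non-scalar principal series case, and $\dim_E\sigma(\tau)=q-1$ in the cuspidal case.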
\subsubsection*{$p$-adic Hodge theory} We normalise Hodge--Tate weights so that all Hodge--Tate weights of
the cyclotomic character are equal to $-1$. We say that a potentially
crystalline representation $\rho:G_K\to\GL_2(\Qbar_p)$ has \emph{Hodge
type} $0$, or is \emph{potentially Barsotti--Tate}, if for each
$\varsigma :K\hookrightarrow \Qbar_p$, the Hodge--Tate weights of $\rho$ with
respect to $\varsigma$ are $0$ and $1$. (Note that this is a more
restrictive definition of potentially Barsotti--Tate than is sometimes
used; however, we will have no reason to deal with representations
with non-regular Hodge-Tate weights, and so we exclude them from
consideration. Note also that it is more usual in the literature to
say that $\rho$ is potentially Barsotti--Tate if it is potentially
crystalline, and $\rho^\vee$ has Hodge type $0$.)
We say
that a potentially crystalline representation
$\rho:G_K\to\GL_2(\Qbar_p)$ has \emph{inertial type} $\tau$ if the traces of
elements of $I_K$ acting on~$\tau$ and on
\[\operatorname{D_{pcris}}(\rho)=\varinjlim_{K'/K}(\tB_{\cris}\otimes_{\Q_p}V_\rho)^{G_{K'}}\] are
equal (here~$V_\rho$ is the underlying vector space
of~$\rho$).
A representation $\overline{r}:G_K\to\GL_2(\Fbar_p)$ \emph{has a potentially
Barsotti--Tate lift of
type~$\tau$} if and
only if $\overline{r}$ admits a lift to a representation
$r:G_K\to\GL_2(\Zbar_p)$ of Hodge type~$0$ and inertial type~$\tau$.
\subsubsection*{Serre weights}
By definition, a \emph{Serre weight} is an irreducible
${\mathbb F}$-representation of $\GL_2(k)$. Concretely, such a
representation is of the form
\[\sigmabar_{\vec{t},\vec{s}}:=\otimes^{f-1}_{j=0}
(\det{\!}^{t_j}\Sym^{s_j}k^2) \otimes_{k,\sigma_{j}} {\mathbb F},\]
where $0\le s_j,t_j\le p-1$ and not all $t_j$ are equal to
$p-1$. We say that a Serre weight is \emph{Steinberg} if $s_j=p-1$ for all $j$,
and \emph{non-Steinberg} otherwise.
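In particular there are exactly $q(q-1)$ Serre weights, where $q=p^f=\#k$:\ there are $p^f$ choices of~$\vec{s}$, while the $p^f-1$ allowed tuples~$\vec{t}$ parametrize the possible determinant twists without repetition (the excluded tuple with all $t_j=p-1$ would give the same twist as $\vec{t}=0$, since $x^{p^f-1}=1$ for all $x\in k^\times$). This agrees with the number of irreducible $\Fbar_p$-representations of~$\GL_2(k)$.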
\subsubsection*{A remark on normalisations} Given a continuous representation $\overline{r}:G_K\to\GL_2(\Fbar_p)$, there
is an associated (nonempty) set of Serre weights~$W(\overline{r})$ whose
precise definition we will recall in Appendix~\ref{sec: appendix on tame types}. There are in fact
several different definitions of~$W(\overline{r})$ in the literature; as a
result of the papers~\cite{blggu2,geekisin,gls13}, these definitions
are known to be equivalent up to normalisation.
However, the normalisations of
Hodge--Tate weights and of inertial local Langlands used in
\cite{geekisin,gls13,emertongeesavitt} are not all the same, and so
for clarity we lay out how they differ, and how they compare to the normalisations of
this paper.
Our conventions for Hodge--Tate weights and
inertial types agree with those of~\cite{geekisin, emertongeesavitt}, but our
representation~$\sigma(\tau)$ is the
representation~$\sigma(\tau^\vee)$ of~\cite{geekisin, emertongeesavitt}
(where~$\tau^\vee=\eta^{-1}\oplus(\eta')^{-1}$);\ to see this, note the
dual in the definition of~$\sigma(\tau)$ in~\cite[Thm.\
2.1.3]{geekisin} and the discussion in \S 1.9 of
\cite{emertongeesavitt}.\footnote{However, this dual is erroneously
omitted when the inertial local Langlands correspondence is made
explicit at the end of \cite[\S3.1]{emertongeesavitt}. See
Remark~\ref{arem: wtf were we thinking in EGS}.}
In all cases one chooses to normalise the set of Serre weights so
that the condition of Lemma~\ref{lem: list of things we need to know about Serre
weights}(1) holds. Consequently, our set of weights~$W(\overline{r})$ is the
set of duals of the weights~$W(\overline{r})$ considered
in~\cite{geekisin}. In turn, the paper~\cite{gls13} has the opposite
convention for the signs of Hodge--Tate weights to our convention (and
to the convention of~\cite{geekisin}), so we find that our set of
weights~$W(\overline{r})$ is the set of duals of the weights~$W(\overline{r}^\vee)$
considered in~\cite{gls13}.
\subsubsection*{Stacks}We follow the terminology of~\cite{stacks-project}; in
particular, we write ``algebraic stack'' rather than ``Artin stack''. More
precisely, an algebraic stack is a stack in groupoids in the \emph{fppf} topology,
whose diagonal is representable by algebraic spaces, which admits a smooth
surjection from a
scheme. See~\cite[\href{http://stacks.math.columbia.edu/tag/026N}{Tag
026N}]{stacks-project} for a discussion of how this definition relates to
others in the literature, and~\cite[\href{http://stacks.math.columbia.edu/tag/04XB}{Tag
04XB}]{stacks-project} for key properties of morphisms
representable by algebraic spaces.
For a commutative ring $A$, an \emph{fppf stack over $A$} (or
\emph{fppf} $A$-stack) is a stack fibred in groupoids over the big \emph{fppf}
site of $\Spec A$.
\section{Preliminaries}
We begin by reviewing the various constructions and results that we will need from \cite{cegsB}. Section~\ref{subsec: kisin modules with dd} recalls the definition and a few basic algebraic properties of Breuil--Kisin modules with coefficients and descent data, while Section~\ref{subsec: etale phi modules
and Galois representations} does the same for \'etale $\varphi$-modules. In Section~\ref{sec:recoll-from-citec} we define the stacks ${\mathcal C}^{\tau,\operatorname{BT},1}$ and ${\mathcal Z}^{\tau,1}$ (as well as various other related stacks) and state the main results of \cite{cegsB}. Finally, in Section~\ref{sec:dieudonne-stacks} we introduce and study stacks of Dieudonn\'e modules that will be used at the end of the paper to determine the irreducible components of ${\mathcal C}^{\tau,\operatorname{BT},1}$.
\subsection{Breuil--Kisin modules
with descent data}\label{subsec: kisin modules with dd}
Recall that we have a finite
tamely ramified Galois extension $K'/K$. Suppose further that there exists a uniformiser $\pi'$ of
${\mathcal O}_{K'}$ such that $\pi:=(\pi')^{e(K'/K)}$ is an element of~$K$,
where $e(K'/K)$ is the ramification index of
$K'/K$.
Recall that $k'$ is the residue field of $K'$, while $e',f'$ are the
ramification and inertial degrees of $K'$ respectively.
Let $E(u)$ be the minimal polynomial of $\pi'$ over $W(k')[1/p]$.
Let $\varphi$ denote the arithmetic Frobenius automorphism of $k'$, which lifts uniquely
to an automorphism of $W(k')$ that we also denote by $\varphi$. Define
$\mathfrak{S}:=W(k')[[u]]$, and extend $\varphi$ to $\mathfrak{S}$ by \[\varphi\left(\sum a_iu^i\right)=\sum
\varphi(a_i)u^{pi}.\] By our assumptions that $(\pi')^{e(K'/K)} \in K$
and that $K'/K$ is Galois, for each
$g\in\Gal(K'/K)$ we can write $g(\pi')/\pi'=h(g)$ with $h(g)\in
\mu_{e(K'/K)}(K') \subset W(k')$,
and we
let $\Gal(K'/K)$ act on $\mathfrak{S}$ via \[g\left(\sum a_iu^i\right)=\sum g(a_i)h(g)^iu^i.\]
Let $A$ be a $p$-adically complete $\Z_p$-algebra, set $\mathfrak{S}_A:=(W(k')\otimes_{\Z_p} A)[[u]]$, and extend
the actions of $\varphi$ and $\Gal(K'/K)$ on $\mathfrak{S}$ to actions on $\mathfrak{S}_A$ in the
obvious ($A$-linear) fashion.
\begin{lemma}\label{lem:projectivity descends}
An $\mathfrak{S}_A$-module is
projective if and only if it is projective as an
$A[[u]]$-module.
\end{lemma}
\begin{proof}
Since $\mathfrak{S}_A$ is finite free as an $A[[u]]$-module, any projective $\mathfrak{S}_A$-module is also projective over $A[[u]]$. Conversely, suppose that $\mathfrak{M}$ is an $\mathfrak{S}_A$-module that is projective as an
$A[[u]]$-module. Certainly $W(k') \otimes_{\Z_p} \mathfrak{M}$ is projective
over $\mathfrak{S}_A$, and we claim that it has $\mathfrak{M}$ as an $\mathfrak{S}_A$-module direct summand.
Indeed, this follows by rewriting $\mathfrak{M}$ as $W(k')\otimes_{W(k')}
\mathfrak{M}$ and noting that $W(k')$ is a $W(k')$-module direct summand of $W(k')
\otimes_{\Z_p} W(k')$.
\end{proof}
The actions of $\varphi$ and $\Gal(K'/K)$ on $\mathfrak{S}_A$
extend to actions on $\mathfrak{S}_A[1/u]=(W(k')\otimes_{\Z_p} A)((u))$ in the obvious
way. It will sometimes be necessary to consider the subring $\mathfrak{S}_A^0
:=(W(k)\otimes_{\Z_p} A)[[v]]$ of $\mathfrak{S}_A$
consisting of power series in
$v:=u^{e(K'/K)}$, on which $\Gal(K'/K)$ acts
trivially.
\begin{defn}\label{defn: Kisin module with descent data}
Fix a $p$-adically complete $\Z_p$-algebra~$A$. A \emph{Breuil--Kisin module with
$A$-coefficients and descent data from $K'$ to $K$} (or often simply
a \emph{Breuil--Kisin module})
is a triple $(\mathfrak{M},\varphi_{\mathfrak{M}},\{\hat{g}\}_{g\in\Gal(K'/K)})$ consisting of
a
$\mathfrak{S}_A$-module~$\mathfrak{M}$ and a $\varphi$-semilinear map
$\varphi_{\mathfrak{M}}:\mathfrak{M}\to\mathfrak{M}$
such that:
\begin{itemize}
\item the $\mathfrak{S}_A$-module $\mathfrak{M}$ is finitely generated and projective,
and
\item the induced
map $\Phi_{\mathfrak{M}} = 1 \otimes \varphi_{\mathfrak{M}} :\varphi^*\mathfrak{M}\to\mathfrak{M}$ is an isomorphism after
inverting $E(u)$ (here as usual we write $\varphi^*\mathfrak{M}:=\mathfrak{S}_A \otimes_{\varphi,\mathfrak{S}_A}\mathfrak{M}$),
\end{itemize}
together with
additive bijections $\hat{g}:\mathfrak{M}\to\mathfrak{M}$, satisfying the further
properties that
the maps $\hat{g}$ commute with $\varphi_\mathfrak{M}$, satisfy
$\hat{g_1}\circ\hat{g_2}=\widehat{g_1\circ g_2}$, and have
$\hat{g}(sm)=g(s)\hat{g}(m)$ for all $s\in\mathfrak{S}_A$, $m\in\mathfrak{M}$.
We say that $\mathfrak{M}$ has \emph{height at most $h$} if the cokernel of
$\Phi_{\mathfrak{M}}$ is killed by $E(u)^h$.
The Breuil--Kisin module $\mathfrak{M}$ is said to be of rank~$d$ if the underlying
finitely generated projective $\mathfrak{S}_A$-module has constant rank~$d$. It
is said to be free if the underlying $\mathfrak{S}_A$-module is free.
\end{defn}
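A minimal example (with $K'=K$, so that the descent data is trivial):\ take $\mathfrak{M}=\mathfrak{S}_A$ free of rank one, with $\varphi_{\mathfrak{M}}(1)=E(u)^h$. Then $\Phi_{\mathfrak{M}}$ becomes an isomorphism after inverting $E(u)$, and its cokernel is $\mathfrak{S}_A/E(u)^h$, which is killed by $E(u)^h$; thus $\mathfrak{M}$ is a Breuil--Kisin module of rank one and height at most~$h$.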
A morphism of Breuil--Kisin modules with descent data is
a morphism
of~$\mathfrak{S}_A$-modules
that commutes with $\varphi$ and with the~$\hat{g}$.
In the case that $K'=K$ the data of the $\hat{g}$ is trivial, so
it can be forgotten, giving the category of \emph{Breuil--Kisin modules with
$A$-coefficients.} In this case it will sometimes be convenient to elide the difference between a
Breuil--Kisin module with trivial descent data, and a Breuil--Kisin module without
descent data, in order to avoid making separate definitions in the
case of Breuil--Kisin modules without descent data.
\begin{rem}
\label{rem:projectivity for Kisin modules} We refer the reader
to~\cite[\S5.1]{EGstacktheoreticimages} for a
discussion of foundational results concerning finitely generated modules
over the power series ring $A[[u]]$. In particular (using
Lemma~\ref{lem:projectivity descends}) we note the
following.
\begin{enumerate}
\item An $\mathfrak{S}_A$-module $\mathfrak{M}$ is finitely generated and projective if
and only if it is $u$-torsion free and $u$-adically complete, and $\mathfrak{M}/u\mathfrak{M}$ is a finitely generated projective
$A$-module (\cite[Prop.~5.1.8]{EGstacktheoreticimages}).
\item If the $\mathfrak{S}_A$-module $\mathfrak{M}$ is projective of
rank~$d$, then it is Zariski locally free of rank~$d$ in the sense that there is a cover of $\Spec A$
by affine opens $\Spec B_i$ such that each of the base-changed modules
$\mathfrak{M}\otimes_{\mathfrak{S}_A}\mathfrak{S}_{B_i}$ is free of rank $d$ (\cite[Prop.~5.1.9]{EGstacktheoreticimages}).
\item If $A$ is coherent (so in
particular, if $A$ is Noetherian), then $A[[u]]$ is faithfully
flat over~$A$, and so $\mathfrak{S}_A$ is faithfully flat over~$A$, but
this need not hold if $A$ is not coherent.
\end{enumerate}
\end{rem}
\begin{df}
\label{def:completed tensor}
If $Q$ is any (not necessarily finitely generated) $A$-module,
and $\mathfrak{M}$ is an $A[[u]]$-module,
then we let $\mathfrak{M}\, \widehat{\otimes}_A Q$ denote the $u$-adic completion
of $\mathfrak{M}\otimes_A Q$.
\end{df}
\begin{lem}
\label{rem: base change of locally free Kisin module is a
locally free Kisin module}
If $\mathfrak{M}$ is a Breuil--Kisin module and $B$ is an $A$-algebra, then the base
change $\mathfrak{M} \, \widehat{\otimes}_A B$ is a Breuil--Kisin module.
\end{lem}
\begin{proof} This is \cite[Lem.~2.1.4]{cegsB}.
\end{proof}
We make the following two further remarks concerning base change.
\begin{remark}
\label{rem:completed tensor}
(1) If $A$ is Noetherian, if $Q$ is finitely generated over $A$,
and if $\mathfrak{N}$ is
finitely generated over $A[[u]]$, then $\mathfrak{N}\otimes_A Q$
is finitely generated over $A[[u]]$, and hence (by the Artin--Rees
lemma) is automatically $u$-adically complete. Thus
in this case the natural morphism $\mathfrak{N}\otimes_A Q \to \mathfrak{N}\, \widehat{\otimes}_A Q$
is an isomorphism.
\smallskip
(2)
Note that $A[[u]]\, \widehat{\otimes}_A Q = Q[[u]]$ (the $A[[u]]$-module
consisting of power series with coefficients in the $A$-module $Q$),
and so if $\mathfrak{N}$ is Zariski locally free on $\Spec A$,
then $\mathfrak{N}\, \widehat{\otimes}_A Q$ is Zariski locally isomorphic to a direct sum
of copies of $Q[[u]]$, and hence is $u$-torsion free (as well as
being $u$-adically complete). In particular, by
Remark~\ref{rem:projectivity for Kisin modules}(2), this holds if~$\mathfrak{N}$
is projective.
\end{remark}
Let $A$ be a $\Z_p$-algebra. We define a \emph{Dieudonn\'e module of rank $d$ with $A$-coefficients and
descent data from $K'$ to $K$} to be a finitely generated projective
$W(k')\otimes_{\Z_p}A$-module $D$ of constant rank
$d$ on $\Spec A$, together with:
\begin{itemize}
\item $A$-linear endomorphisms $F,V$ satisfying $FV = VF = p$ such that $F$ is $\varphi$-semilinear and $V$ is
$\varphi^{-1}$-semilinear for the action of $W(k')$, and
\item a $W(k')\otimes_{\Z_p}A$-semilinear action of $\Gal(K'/K)$
which commutes with $F$ and $V$.
\end{itemize}
\begin{defn}\label{def: Dieudonne module formulas}
If $\mathfrak{M}$ is a Breuil--Kisin module of height at most~$1$ and rank~$d$ with descent data,
then there is a
corresponding Dieudonn\'e module $D=D(\mathfrak{M})$ of rank~$d$ defined as follows. We set
$D:=\mathfrak{M}/u\mathfrak{M}$
with the induced action of $\Gal(K'/K)$, and $F$ given by the induced
action of $\varphi$.
The endomorphism $V$ is determined as follows. Write $E(0) = c p$,
so that we have $p \equiv c^{-1}E(u) \pmod{u}$. The
condition that the cokernel of $\varphi^*\mathfrak{M}\to\mathfrak{M}$ is killed by $E(u)$
allows us to factor the multiplication-by-$E(u)$ map on $\mathfrak{M}$ uniquely
as $\mathfrak{V} \circ \varphi$, and $V$ is
defined to be
$c^{-1} \mathfrak{V}$ modulo~$u$.
\end{defn}
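As a quick consistency check in this notation:\ modulo~$u$ we have
\[ V\circ F \;=\; c^{-1}\,\mathfrak{V}\circ\varphi \;=\; c^{-1}E(u)\cdot\operatorname{id} \;\equiv\; c^{-1}E(0) \;=\; p, \]
which is one of the two defining relations of a Dieudonn\'e module; the companion relation $F\circ V=p$ holds likewise, so that $D(\mathfrak{M})$ is indeed a Dieudonn\'e module in the sense defined above.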
\subsection{\'Etale \texorpdfstring{$\varphi$}{phi}-modules and
Galois representations}
\label{subsec: etale phi modules
and Galois representations}
\begin{defn}\label{defn: etale phi module}
Let $A$ be a ${\mathbb Z}/p^a{\mathbb Z}$-algebra for some $a\ge 1$. A \emph{weak \'etale
$\varphi$-module} with $A$-coefficients
and descent data from $K'$ to $K$ is a triple
$(M,\varphi_M,\{\hat{g}\})$ consisting of:
\begin{itemize}
\item
a finitely generated
$\mathfrak{S}_A[1/u]$-module $M$;
\item a $\varphi$-semilinear map $\varphi_M:M\to M$ with the
property that the induced
map \[\Phi_M = 1 \otimes \varphi_M:\varphi^*M:=\mathfrak{S}_A[1/u]\otimes_{\varphi,\mathfrak{S}_A[1/u]}M\to M\]is an
isomorphism,
\end{itemize}
together with additive bijections $\hat{g}:M\to M$ for $g\in\Gal(K'/K)$, satisfying the further
properties that the maps $\hat{g}$ commute with $\varphi_M$, satisfy
$\hat{g_1}\circ\hat{g_2}=\widehat{g_1\circ g_2}$, and have
$\hat{g}(sm)=g(s)\hat{g}(m)$ for all $s\in\mathfrak{S}_A[1/u]$, $m\in M$.
If $M$ as above is projective as an $\mathfrak{S}_A[1/u]$-module then we say
simply that $M$ is an \'etale $\varphi$-module. The \'etale $\varphi$-module $M$ is said to be of rank~$d$ if the underlying
finitely generated projective $\mathfrak{S}_A[1/u]$-module has constant rank~$d$.
\end{defn}
\begin{rem}
\label{rem: completed version if $p$ not nilpotent}We could also
consider \'etale $\varphi$-modules for general $p$-adically complete $\Z_p$-algebras~$A$, but
we would need to replace $\mathfrak{S}_A[1/u]$ by its $p$-adic completion. As
we will not need to consider these modules in this paper, we do not
do so here, but we refer the interested reader to~\cite{EGmoduli}.
\end{rem}
A morphism
of weak \'etale
$\varphi$-modules with $A$-coefficients and descent data from $K'$ to
$K$
is a morphism of~$\mathfrak{S}_A[1/u]$-modules
that commutes with $\varphi$ and with the
$\hat{g}$. Again, in the case $K'=K$ the descent data is trivial, and we
obtain the usual category of \'etale $\varphi$-modules with
$A$-coefficients.
Note that
if $A$ is a ${\mathbb Z}/p^a{\mathbb Z}$-algebra, and $\mathfrak{M}$ is a Breuil--Kisin module
with descent data, then $\mathfrak{M}[1/u]$ naturally has the
structure of an \'etale $\varphi$-module
with descent data.
Suppose that $A$ is an ${\mathcal O}$-algebra (where ${\mathcal O}$ is as in
Section~\ref{subsec: notation}). In making calculations, it is often
convenient to use the idempotents~$e_i$ (again as in
Section~\ref{subsec: notation}). In particular if $\mathfrak{M}$ is a Breuil--Kisin
module, then writing as usual
$\mathfrak{M}_i:=e_i\mathfrak{M}$, we write $\Phi_{\mathfrak{M},i}:\varphi^*(\mathfrak{M}_{i-1})\to\mathfrak{M}_{i}$ for
the morphism induced by~$\Phi_{\mathfrak{M}}$. Similarly if $M$ is an
\'etale $\varphi$-module then we write
$M_i:=e_iM$, and we write $\Phi_{M,i}:\varphi^*(M_{i-1}) \to M_{i}$ for
the morphism induced by~$\Phi_{M}$.
To connect \'etale $\varphi$-modules to
$G_{K_{\infty}}$-representations we begin by recalling
from \cite{kis04} some constructions arising in $p$-adic Hodge theory
and the theory of fields of norms, which go back to~\cite{MR1106901}.
Following Fontaine,
we write $R:=\varprojlim_{x\mapsto
x^p}{\mathcal O}_{\bar{K}}/p$.
Fix a compatible system $(\! \sqrt[p^n]{\pi}\,)_{n\ge 0}$ of
$p^n$th roots of $\pi$ in $\bar{K}$ (compatible in the obvious sense that
$\bigl(\! \sqrt[p^{n+1}]{\pi}\,\bigr)^p = \sqrt[p^n]{\pi}\,$),
and let
$K_{\infty}:=\cup_{n}K(\sqrt[p^n]{\pi})$, and
also $K'_\infty:=\cup_{n}K'(\sqrt[p^n]{\pi})$. Since $(e(K'/K),p)=1$, the compatible system
$(\! \sqrt[p^n]{\pi}\,)_{n\ge 0}$ determines a unique compatible system $(\!
\sqrt[p^n]{\pi'}\,)_{n\ge 0}$ of $p^n$th roots of~$\pi'$ such that $(\!
\sqrt[p^n]{\pi'}\,)^{e(K'/K)} =\sqrt[p^n]{\pi}$.
Write
$\underline{\pi}'=(\sqrt[p^n]{\pi'})_{n\ge 0}\in R$, and $[\underline{\pi}']\in
W(R)$ for its image under the natural multiplicative map $R \to W(R)$. We have a Frobenius-equivariant
inclusion $\mathfrak{S}\hookrightarrow W(R)$ by sending $u\mapsto[\underline{\pi}']$. We can naturally identify
$\Gal(K'_\infty/K_\infty)$ with $\Gal(K'/K)$, and doing this we see that the
action of $g\in G_{K_\infty}$ on $u$ is via $g(u)=h(g)u$.
We let ${\mathcal O}_{{\mathcal E}}$ denote the $p$-adic completion of $\mathfrak{S}[1/u]$, and let ${\mathcal E}$ be the
field of fractions of~${\mathcal O}_{\mathcal E}$. The inclusion $\mathfrak{S}\hookrightarrow W(R)$ extends to an
inclusion ${\mathcal E}\hookrightarrow W(\operatorname{Frac}(R))[1/p]$. Let ${\mathcal E}^{\text{nr}}$ be
the maximal unramified extension of ${\mathcal E}$ in $ W(\operatorname{Frac}(R))[1/p]$,
and let ${\mathcal O}_{{\mathcal E}^{\text{nr}}}\subset W(\operatorname{Frac}(R))$ denote
its ring of
integers. Let ${\mathcal O}_{\widehat{{\mathcal E}^{\text{nr}}}}$ be the $p$-adic completion of
${\mathcal O}_{{\mathcal E}^{\text{nr}}}$. Note that ${\mathcal O}_{\widehat{{\mathcal E}^{\text{nr}}}}$ is stable
under the action of $G_{K_\infty}$.
\begin{defn}
Suppose that $A$ is a ${\mathbb Z}/p^a{\mathbb Z}$-algebra for some $a \ge 1$. If $M$
is a weak \'etale $\varphi$-module with $A$-coefficients and descent data, set
$T_A(M):=\left({\mathcal O}_{\widehat{{\mathcal E}^{\text{nr}}}}\otimes_{\mathfrak{S}[1/u]}M
\right)^{\varphi=1}$, an $A$-module with a
$G_{K_\infty}$-action (via the diagonal action on
${\mathcal O}_{\widehat{{\mathcal E}^{\text{nr}}}}$ and $M$, the latter given by
the~$\hat{g}$). If $\mathfrak{M}$ is a Breuil--Kisin module with
$A$-coefficients and descent data,
set
$T_A(\mathfrak{M}):=T_A(\mathfrak{M}[1/u])$.
\end{defn}
\begin{lem}
\label{lem: Galois rep is a functor if A is actually finite local} Suppose
that $A$ is a local $\Z_p$-algebra and that $|A|<\infty$. Then~$T_A$ induces an
equivalence of categories from the category of weak \'etale
$\varphi$-modules with $A$-coefficients and descent data to the category of continuous
representations of $G_{K_\infty}$ on finite $A$-modules. If $A\to A'$
is finite, then there is a natural isomorphism $T_A(M)\otimes_A A'\buildrel \sim \over \longrightarrow
T_{A'}(M\otimes_A A')$. A weak \'etale $\varphi$-module with
$A$-coefficients and descent
data~$M$ is free of rank~$d$ if and only if $T_A(M)$ is a free
$A$-module of rank~$d$.
\end{lem}
\begin{proof}
This is due to Fontaine~\cite{MR1106901}, and can be proved in exactly the same way as~\cite[Lem.\ 1.2.7]{kis04}.
\end{proof}
We will frequently simply write $T$ for $T_A$. Note that if we let
$M'$ be the \'etale $\varphi$-module obtained from $M$ by forgetting the
descent data, then by definition we have
$T(M')=T(M)|_{G_{K'_\infty}}$.
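A basic example:\ take $M=\mathfrak{S}_A[1/u]$ free of rank one, with $\varphi_M=\varphi$ and $\hat{g}=g$ for all $g\in\Gal(K'/K)$. Then $\Phi_M$ is an isomorphism, and since $({\mathcal O}_{\widehat{{\mathcal E}^{\text{nr}}}})^{\varphi=1}=\Z_p$ we find that $T_A(M)\cong A$ with the trivial $G_{K_\infty}$-action; under the equivalence of Lemma~\ref{lem: Galois rep is a functor if A is actually finite local}, $M$ and the trivial rank-one representation are the unit objects on either side.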
\begin{rem}\label{rem:ht-1-extend}
Although \'etale $\varphi$-modules naturally give rise to representations
of~$G_{K_\infty}$, those coming from Breuil--Kisin modules of height at most~$1$
admit canonical extensions to~$G_K$ by~\cite[Prop.\ 1.1.13]{kis04}.
\end{rem}
\begin{lem}
\label{lem: restricting to K_infty doesn't lose information about
rbar}If $\overline{r},\overline{r}':G_K\to\GL_2(\Fbar_p)$ are continuous
representations, both of which arise as the reduction mod~$p$ of
potentially Barsotti--Tate representations of tame inertial type,
and there is an isomorphism $\overline{r}|_{G_{K_\infty}}\cong \overline{r}'|_{G_{K_\infty}}
$, then $\overline{r}\cong\overline{r}'$.
\end{lem}
\begin{proof}
The extension $K_\infty/K$ is totally wildly ramified. Since the
irreducible $\Fbar_p$-representations of~$G_K$ are induced from
tamely ramified characters, we see that~$\overline{r}|_{G_{K_\infty}}$ is
irreducible if and only if~$\overline{r}$ is irreducible, and if
$\overline{r}$ or $\overline{r}'$ is irreducible then we are done. In the
reducible case, we see that $\overline{r}$ and $\overline{r}'$ are extensions of
the same characters, and the result then follows from~\cite[Lem.\
5.4.2]{gls13} and Lemma~\ref{lem: list of things we need to know about Serre weights}~(2).
\end{proof}
\subsection{Recollections from \texorpdfstring{\cite{cegsB}}{[CEGS20b]}}
\label{sec:recoll-from-citec}
The main objects of study in this paper are certain algebraic stacks
${\mathcal C}^{\tau,\operatorname{BT}}$ and ${\mathcal Z}^{\tau,1}$, of rank two Breuil--Kisin modules and
\'etale $\varphi$-modules respectively, that were introduced and studied in
\cite{cegsB}. We review their definitions now, and recall the main
properties of these stacks that were established in \cite{cegsB}.
To define ${\mathcal C}^{\tau,\operatorname{BT},1}$ we first introduce stacks of Breuil--Kisin modules with descent
data;\ then we impose two conditions on them, corresponding (in the
analogy with Galois representations) to fixing an inertial type~$\tau$
and requiring all pairs of Hodge--Tate weights to be
$\{0,1\}$.
Take $K'/K$ to be any Galois extension such that $[K':K]$ is prime to
$p$, and write $N = K \cdot W(k')[1/p]$.
\begin{defn}
\label{defn: C^dd,a }For each integer $a\ge 1$, we let ${\mathcal C}_{d,h}^{\mathrm{dd},a}$ be
the {\em fppf} stack over~${\mathcal O}/\varpi^a$ which associates to any ${\mathcal O}/\varpi^a$-algebra $A$
the groupoid ${\mathcal C}_{d,h}^{\mathrm{dd},a}(A)$
of rank~$d$ Breuil--Kisin modules of height at most~$h$ with $A$-coefficients and descent data from
$K'$ to~$K$.
By~\cite[\href{http://stacks.math.columbia.edu/tag/04WV}{Tag 04WV}]{stacks-project},
we may also regard each of the stacks ${\mathcal C}_{d,h}^{\mathrm{dd},a}$ as an {\em fppf}
stack over ${\mathcal O}$,
and we then write ${\mathcal C}_{d,h}^{\mathrm{dd}}:=\varinjlim_{a}{\mathcal C}_{d,h}^{\mathrm{dd},a}$; this
is again an {\em fppf} stack over~${\mathcal O}$. We will omit the subscripts $d,h$ from this notation
when doing so will not cause confusion.
\end{defn}
\begin{defn}
Let $\tau$ be a $d$-dimensional $E$-representation of $I(K'/K)$. We say that an
object $\mathfrak{M}$ of ${\mathcal C}^{\mathrm{dd},a}$ has \emph{type} $\tau$ if Zariski locally on $\Spec A$ there is an
$I(K'/K)$-equivariant isomorphism $\mathfrak{M}_i/u \mathfrak{M}_i \cong A \otimes_{{\mathcal O}}
\tau^\circ$ for each $i$. (Here we recall that $\mathfrak{M}_i := e_i \mathfrak{M}$, and
$\tau^{\circ}$ denotes an ${\mathcal O}$-lattice in $\tau$.)
\end{defn}
\begin{defn}
Let ${\mathcal C}^{\tau}$ be the \'etale substack of ${\mathcal C}^{\mathrm{dd}}$
consisting of the objects of type $\tau$. This is an open and
closed substack of ${\mathcal C}^{\mathrm{dd}}$ (see \cite[Prop.~3.3.5]{cegsB}).
\end{defn}
For the remainder of this section we fix $d=2$ and $h=1$.
Suppose that~$A$ is an ${\mathcal O}/\varpi^a$-algebra and consider a pair~$(\mathfrak{L},\mathfrak{L}^+)$, where:
\begin{itemize}
\item $\mathfrak{L}$ is a rank $2$ projective ${\mathcal O}_{K'}\otimes_{\Z_p} A$-module,
with a $\Gal(K'/K)$-semilinear, $A$-linear action of~$\Gal(K'/K)$;
\item $\mathfrak{L}^+$ is an ${\mathcal O}_{K'}\otimes_{\Z_p} A$-submodule of~$\mathfrak{L}$, which is
locally on~$\Spec A$ a direct summand of~$\mathfrak{L}$ as an $A$-module
(or equivalently, for which $\mathfrak{L}/\mathfrak{L}^+$ is projective as an $A$-module),
and is preserved by~$\Gal(K'/K)$.
\end{itemize}
For each character $\xi : I(K'/K) \to {\mathcal O}^{\times}$, let $\mathfrak{L}_{\xi}$
(resp.\ $\mathfrak{L}^+_{\xi}$) be the ${\mathcal O}_N\otimes_{\Z_p} A$-submodule of $\mathfrak{L}$ (resp.\ $\mathfrak{L}^+$) on
which $I(K'/K)$ acts through $\xi$. We say that the pair $(\mathfrak{L},\mathfrak{L}^+)$ \emph{satisfies the strong determinant condition} if Zariski
locally on $\Spec A$ the
following condition holds: for all $\alpha\in{\mathcal O}_N$ and all $\xi$, we
have \addtocounter{subsubsection}{1}\begin{equation}\label{eqn: strong det condn}
\det{\!}_A(\alpha{}|\mathfrak{L}^+_\xi)
=\prod_{\psi:N\hookrightarrow E}\psi(\alpha{})
\end{equation}
as polynomial functions on ${\mathcal O}_N$
in the sense of~\cite[\S5]{MR1124982}.
\begin{defn}\label{def:strongdet}
An object $\mathfrak{M}$ of ${\mathcal C}^{\mathrm{dd},a}$ \emph{satisfies the strong
determinant condition} if the pair $(\mathfrak{M}/E(u)\mathfrak{M}, \im
\Phi_{\mathfrak{M}}/E(u)\mathfrak{M})$ satisfies the strong determinant condition as
in the previous paragraph.
We define ${\mathcal C}^{\tau,\operatorname{BT}}$ to be the substack of ${\mathcal C}^{\tau}$ of
objects satisfying the strong determinant condition. This is a
$\varpi$-adic formal algebraic stack of finite presentation
over~${\mathcal O}$ by
\cite[Prop.~4.2.7]{cegsB}, and so its special fibre ${\mathcal C}^{\tau,\operatorname{BT},1}$ is an
algebraic stack, locally of finite type over ${\mathbb F}$.
The $\Spf({\mathcal O}_{E'})$-points of ${\mathcal C}^{\tau,\operatorname{BT}}$, for any finite extension $E'/E$,
correspond to potentially Barsotti--Tate Galois representations $G_K
\to \GL_2({\mathcal O}_{E'})$ of
inertial type $\tau$ (\cite[Lem.~4.2.16]{cegsB}).
\end{defn}
The following result combines \cite[Cor.~4.5.3, Prop.~5.2.21]{cegsB}.
\begin{thm} \label{cor: Kisin moduli consequences of local models} We have:
\begin{enumerate}
\item ${\mathcal C}^{\tau,\operatorname{BT}}$ is analytically normal, and Cohen--Macaulay.
\item The special fibre ${\mathcal C}^{\tau,\operatorname{BT},1}$ is reduced and
equidimensional of dimension equal to $[K:\Q_p]$.
\item ${\mathcal C}^{\tau,\operatorname{BT}}$ is flat over~${\mathcal O}$.
\end{enumerate}
\end{thm}
We now introduce our stacks of \'etale $\varphi$-modules.
\begin{defn}\label{defn: R^dd}
Let
${\mathcal R}^{\mathrm{dd},1}$ be the \emph{fppf} ${\mathbb F}$-stack which
associates
to any ${\mathbb F}$-algebra $A$ the groupoid ${\mathcal R}^{\mathrm{dd},1}(A)$ of rank $2$ \'etale
$\varphi$-modules with $A$-coefficients and descent data from $K'$ to
$K$.
\end{defn}
Inverting $u$ gives a proper morphism ${\mathcal C}^{\mathrm{dd},1} \to {\mathcal R}^{\mathrm{dd},1}$, which
then restricts to a proper morphism ${\mathcal C}^{\tau,\operatorname{BT},1} \to {\mathcal R}^{\mathrm{dd},1}$ for each
$\tau$.
We now briefly remind the reader of some
definitions from~\cite[\S3.2]{EGstacktheoreticimages}.
Let
${\mathcal X} \to {\mathcal F}$ be a proper morphism of stacks over a locally
Noetherian base-scheme~$S$,
where ${\mathcal X}$ is an algebraic stack which is locally of finite presentation over~$S$,
and the diagonal of ${\mathcal F}$ is representable by algebraic spaces and locally of
finite presentation.
We refer to~\cite[Defn.\ 3.2.8]{EGstacktheoreticimages} for the
definition of the \emph{scheme-theoretic image}~${\mathcal Z}$ of the proper morphism ${\mathcal X} \to
{\mathcal F}$. By definition, it is a full subcategory in groupoids of~${\mathcal F}$, and in fact
by~\cite[Lem.\ 3.2.9]{EGstacktheoreticimages} it is a Zariski substack
of~${\mathcal F}$. By~\cite[Lem.\ 3.2.14]{EGstacktheoreticimages}, the finite type points
of~${\mathcal Z}$ are precisely the finite type points of~${\mathcal F}$ for which the
corresponding fibre of~${\mathcal X}$ is nonempty.
The results of~\cite[\S3.2]{EGstacktheoreticimages} give criteria
for~${\mathcal Z}$ to be an algebraic stack, and prove a number of associated results
(such as universal properties of the morphism ${\mathcal Z}\to{\mathcal F}$, and a description of
versal deformation rings for~${\mathcal Z}$). This formalism applies in
particular to the proper morphism ${\mathcal C}^{\tau,\operatorname{BT},1} \to {\mathcal R}^{\mathrm{dd},1}$,
and so we make the following definition.
\begin{defn}
We define ${\mathcal Z}^{\tau,1}$ to be the scheme-theoretic image (in the
sense of~\cite[Defn.\ 3.2.8]{EGstacktheoreticimages}) of the morphism
${\mathcal C}^{\tau,\operatorname{BT},1}\to{\mathcal R}^{\mathrm{dd},1}$.
\end{defn}
In \cite[Thm.~5.1.2, Prop.~5.2.20]{cegsB}\ we established the following properties of this
construction.
\begin{prop}\label{prop:Zproperties} \hfill
\begin{enumerate}
\item ${\mathcal Z}^{\tau,1}$ is an algebraic stack of finite presentation
over ${\mathbb F}$, and is a closed substack of ${\mathcal R}^{\mathrm{dd},1}$.
\item The morphism ${\mathcal C}^{\tau,\operatorname{BT},1}\to{\mathcal R}^{\mathrm{dd},1}$ factors through
a morphism ${\mathcal C}^{\tau,\operatorname{BT},1}\to{\mathcal Z}^{\tau,1}$ which is
representable by algebraic spaces, scheme-theoretically dominant,
and proper.
\item The $\Fbar_p$-points of ${\mathcal Z}^{\tau,1}$ are naturally in
bijection with the continuous representations $\overline{r} : G_K \to
\GL_2(\Fbar_p)$ which have a potentially Barsotti--Tate lift of
type $\tau$.
\end{enumerate}
\end{prop}
\begin{thm}
\label{prop: dimensions of the Z stacks}
The algebraic stacks
${\mathcal Z}^{\tau,1}$ are equidimensional of
dimension equal to~$[K:\Q_p]$.
\end{thm}
\subsection{Dieudonn\'e and gauge stacks}
\label{sec:dieudonne-stacks}
We now specialise the choice of $K'$ in the following way. Choose a
tame inertial type $\tau=\eta\oplus\eta'$.
Fix a uniformiser $\pi$ of~$K$. If $\tau$ is a tame
principal series type, we take $K'=K(\pi^{1/(p^f-1)})$, while
if~$\tau$ is a tame cuspidal type, we let $L$ be an unramified
quadratic extension of~$K$, and set $K'=L(\pi^{1/(p^{2f}-1)})$. Let
$N$ be the maximal unramified extension of $K$ in $K'$. In
either case $K'/K$ is a Galois extension; in the principal series
case, we have $e'=(p^f-1)e$, $f'=f$, and in the cuspidal case we have
$e'=(p^{2f}-1)e$, $f'=2f$. We refer to this choice of extension as the
\emph{standard choice} (for the fixed type $\tau$ and uniformiser
$\pi$).
For the rest of this section we assume that
$\eta\ne\eta'$ (we will not need to consider Dieudonn\'e modules for
scalar types).
Let $\mathfrak{M}$ be an object of ${\mathcal C}^{\tau,\operatorname{BT}}(A)$, and let $D := \mathfrak{M}/u\mathfrak{M}$ be
its corresponding Dieudonn\'e module as in Definition~\ref{def:
Dieudonne module formulas}. The group
$I(K'/K)$ is abelian of order prime to $p$, and so we can write
$D=D_\eta\oplus D_{\eta'}$, where $D_\eta$ is the submodule on which
$I(K'/K)$ acts via~$\eta$. Setting $D_{\eta,j} := e_j D_{\eta}$, it
follows from the
projectivity of $\mathfrak{M}$ that each $D_{\eta,j}$ is an invertible
$A$-module. The maps $F,V$ induce linear
maps $F:D_{\eta,j}\to D_{\eta,j+1}$ and $V: D_{\eta,j+1} \to D_{\eta,j}$
such that $FV = VF = p$.
\begin{defn}\label{def:dieudonne-stack}
If $\tau$ is a principal series type we define a stack
$${\mathcal D}_{\eta} :=
\Big[
\bigl( \Spec W(k)[X_0,Y_0,\ldots,X_{f-1},Y_{f-1}]/(X_j Y_j - p)_{j = 0,\ldots, f-1} \bigr) /
\mathbb G_m^f \Big],$$
where the $f$ copies of $\mathbb G_m$ act (with indices read modulo~$f$) as
$(u_0,\ldots,u_{f-1}) \cdot (X_j,Y_j) \mapsto (u_j u_{j+1}^{-1} X_j, u_{j+1} u_j^{-1} Y_j).$
If instead $\tau$ is a cuspidal type we define $${\mathcal D}_{\eta} :=
\Big[
\bigl( \Spec W(k)[X_0,Y_0,\ldots,X_{f-1},Y_{f-1}]/(X_j Y_j - p)_{j = 0,\ldots, f-1} \times\GG_m\bigr) /
\mathbb G_m^{f+1} \Big],$$
where the $f+1$ copies of $\mathbb G_m$ act as $$(u_0,\ldots,u_{f-1},u_f) \cdot ((X_j,Y_j),\alpha) \mapsto ((u_j u_{j+1}^{-1}
X_j, u_{j+1} u_j^{-1} Y_j),\alpha ).$$
\end{defn}
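For orientation we unwind the principal series case with $f=1$ (an
observation that is not needed in what follows): since the indices are
read modulo~$f$, each $u_j u_{j+1}^{-1}=1$, so the $\GG_m$-action is
trivial and
$${\mathcal D}_{\eta} \cong \bigl(\Spec W(k)[X_0,Y_0]/(X_0 Y_0 - p)\bigr)\times B\GG_m.$$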
In \cite[Sec.~4.6]{cegsB}\ we explained how the stack ${\mathcal D}_{\eta}$
classifies the line bundles $D_{\eta,j}$ together with the maps $F,V$,
so that in either case
(principal series or cuspidal) there is a natural map ${\mathcal C}^{\tau,\operatorname{BT}} \to
{\mathcal D}_{\eta}$.
It will be helpful to introduce another stack,
the stack ${\mathcal G}_{\eta}$ of $\eta$-gauges. This classifies
$f$-tuples of line bundles ${\mathcal D}_j$ ($j = 0,\ldots,f-1$) equipped
with sections $X_j \in {\mathcal D}_j$ and $Y_j \in {\mathcal D}_j^{-1}$.
Explicitly, it can be written as the quotient stack
$${\mathcal G}_{\eta} :=
\Big[
\bigl( \Spec W(k)[X_0,Y_0,\ldots,X_{f-1},Y_{f-1}]/(X_j Y_j - p)_{j = 0,\ldots, f-1} \bigr) /
\mathbb G_m^f \Big],$$
where the $f$ copies of $\mathbb G_m$ act as follows:
$$(v_0,\ldots,v_{f-1}) \cdot (X_j,Y_j) \mapsto (v_j X_j, v_j^{-1}
Y_j).$$
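(When $f=1$, the stack ${\mathcal G}_{\eta}$ is exactly the quotient stack
$[\bigl(\Spec W(k)[X,Y]/(XY-p)\bigr)/\GG_m]$, with $\GG_m$ acting by
$u\cdot(X,Y)=(uX,u^{-1}Y)$, which will reappear in Lemma~\ref{lem: f
equals 1 effective Cartier} below.)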
There is a morphism of stacks ${\mathcal D}_{\eta} \to {\mathcal G}_{\eta}$ which we can
define explicitly using their descriptions as quotient stacks.
Indeed, in the principal series case
we have a morphism $\GG_m^f \to \GG_m^f$ given by
$(u_j)_{j = 0,\ldots,f-1} \mapsto (u_j u_{j+1}^{-1})_{j = 0,\ldots,f-1}$,
which is compatible with the actions of these two groups on
$\Spec W(k)[(X_j,Y_j)_{j=0,\ldots,f-1}]/(X_j Y_j - p)_{j = 0,
\ldots , f-1},$ and we are just considering the map from the quotient
by the first $\GG_m^f$ to the quotient by the second~$\GG_m^f$. In the
cuspidal case we have a morphism $\GG_m^{f+1} \to \GG_m^f$ given by
$(u_j)_{j = 0,\ldots,f} \mapsto (u_j u_{j+1}^{-1})_{j = 0,\ldots,f-1}$,
and the morphism ${\mathcal D}_{\eta} \to
{\mathcal G}_{\eta}$ is the obvious one which forgets the factor of~$\GG_m$
coming from~$\alpha$.
Composing our morphism ${\mathcal C}^{\tau,\operatorname{BT}} \to {\mathcal D}_{\eta}$ with the forgetful morphism
${\mathcal D}_{\eta} \to {\mathcal G}_{\eta}$, we obtain a morphism ${\mathcal C}^{\tau,\operatorname{BT}} \to {\mathcal G}_{\eta}$.
For our analysis of the irreducible components of the stacks
${\mathcal C}^{\tau,\operatorname{BT},1}$ at the end of Section~\ref{sec: extensions of rank one Kisin
modules},
it will be useful to have a
more directly geometric interpretation
of a morphism $S \to {\mathcal G}_{\eta}$, in the case that
the source is a {\em flat} $W(k)$-scheme, or, more generally,
a flat $p$-adic formal algebraic stack over~$\Spf
W(k)$. In order to do this we will need some basic material on
effective Cartier divisors for (formal) algebraic stacks; while it is
presumably possible to develop this theory in considerable generality,
we only need a very special case, and we limit ourselves to this
setting.
The property of a closed subscheme being an effective Cartier divisor is not
preserved under arbitrary pull-back, but it is preserved under flat
pull-back. More precisely, we have the following result.
\begin{lemma}\label{lem:Cartier divisors are flat local}
If $X$ is a scheme,
and $Z$ is a closed subscheme of $X$,
then the following are equivalent:
\begin{enumerate}
\item $Z$ is an effective Cartier divisor on $X$.
\item For any flat morphism of schemes $U \to X$,
the pull-back $Z\times_{X} U$
is an effective Cartier divisor on $U$.
\item For some fpqc covering $\{X_i \to X\}$ of $X$,
each of the pull-backs $Z\times_{X} X_i$
is an effective Cartier divisor on $X_i$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since $Z$ is an effective Cartier divisor if and only if its ideal sheaf
${\mathcal I}_Z$ is an invertible sheaf on $X$, this follows from
the fact that the invertibility of a quasi-coherent sheaf
is a local property in the {\em fpqc} topology.
\end{proof}
\begin{lemma}
\label{lem:comparing closed subsets}
If $A$ is a Noetherian adic topological ring,
then pull-back under the natural morphism $\Spf A \to \Spec A$
induces a bijection between the closed subschemes of
$\Spec A$ and the closed subspaces of
$\Spf A$.
\end{lemma}
\begin{proof}
It follows
from~\cite[\href{http://stacks.math.columbia.edu/tag/0ANQ}{Tag
0ANQ}]{stacks-project}
that closed immersions $Z \to \Spf A$
are necessarily of the form $\Spf B \to \Spf A$,
and correspond to continuous morphisms $A \to B$, for some complete
linearly topologized
ring $B$, which are taut (in the sense
of~\cite[\href{http://stacks.math.columbia.edu/tag/0AMX}{Tag
0AMX}]{stacks-project}),
have closed kernel, and dense image.
Since $A$ is adic, it admits a countable basis of neighbourhoods of the origin,
and so it follows
from~\cite[\href{http://stacks.math.columbia.edu/tag/0APT}{Tag
0APT}]{stacks-project} (recalling also~\cite[\href{http://stacks.math.columbia.edu/tag/0AMV}{Tag
0AMV}]{stacks-project}) that $A\to B$ is surjective.
Because any ideal of definition $I$ of $A$ is finitely generated, it follows
from~\cite[\href{http://stacks.math.columbia.edu/tag/0APU}{Tag
0APU}]{stacks-project} that $B$ is endowed with the $I$-adic topology.
Finally, since $A$ is Noetherian, any ideal in $A$ is $I$-adically closed.
Thus closed immersions $\Spf B \to \Spf A$ are determined by giving
the kernel of the corresponding morphism $A \to B$, which can be arbitrary.
The same is true of closed immersions $\Spec B \to \Spec A$,
and so the lemma follows.
\end{proof}
\begin{df} If $A$ is a Noetherian adic topological ring,
then we say that a closed subspace of $\Spf A$
is an {\em effective Cartier divisor} on $\Spf A$ if the corresponding closed
subscheme of $\Spec A$ is an effective Cartier divisor on $\Spec A$.
\end{df}
\begin{lemma}
Let $\Spf B \to \Spf A$ be a flat adic morphism of Noetherian
affine formal algebraic spaces.
If $Z \hookrightarrow \Spf A$ is a Cartier divisor,
then $Z \times_{\Spf A} \Spf B \hookrightarrow \Spf B$
is a Cartier divisor. Conversely, if $\Spf B \to \Spf A$
is furthermore surjective, and if $Z \hookrightarrow \Spf A$
is a closed subspace for which the base-change
$Z \times_{\Spf A} \Spf B \hookrightarrow \Spf B$
is a Cartier divisor,
then $Z$ is a Cartier divisor on $\Spf A$.
\end{lemma}
\begin{proof}
The morphism $\Spf B \to \Spf A$ corresponds to an adic flat morphism
$A \to B$
(\cite[\href{http://stacks.math.columbia.edu/tag/0AN0}{Tag
0AN0}]{stacks-project}
and \cite[Lem.\ 8.18]{Emertonformalstacks})
and hence is induced by a flat morphism $\Spec B \to \Spec A$,
which is furthermore faithfully flat if and only if
$\Spf B \to \Spf A$ is surjective
(again by \cite[Lem.\ 8.18]{Emertonformalstacks}).
The present lemma thus follows from Lemma~\ref{lem:Cartier divisors
are flat local}.
\end{proof}
The preceding lemma justifies the following
definition.
\begin{df} We say that a closed substack ${\mathcal Z}$ of a locally Noetherian
formal algebraic stack
${\mathcal X}$
is an {\em effective Cartier divisor} on ${\mathcal X}$ if
for any morphism $U \to {\mathcal X}$ whose source
is a Noetherian affine formal algebraic space,
and which is representable by algebraic spaces and
flat,
the pull-back ${\mathcal Z}\times_{{\mathcal X}} U$
is an effective Cartier divisor on $U$.
\end{df}
We consider the $W(k)$-scheme $\Spec W(k)[X,Y]/(XY - p)$, which we endow with
a $\GG_m$-action via $u\cdot (X,Y) := (uX, u^{-1} Y).$
There is an obvious morphism
$$\Spec W(k)[X,Y]/(XY -p) \to \Spec W(k)[X]=\mathbb A^1$$
given by $(X,Y) \to X$, which is $\GG_m$-equivariant (for the action
of~$\GG_m$ on~$\mathbb A^1$ given by $u\cdot X:=uX$),
and so induces a morphism
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:Cartier divisor construction}
[\bigl(\Spec W(k)[X,Y]/(XY -p)\bigr) / \GG_m]
\to [\mathbb A^1/\GG_m].
\end{equation}
\begin{lemma}\label{lem: f equals 1 effective Cartier} If ${\mathcal X}$ is a
locally Noetherian $p$-adic formal algebraic stack
which is furthermore flat over $\Spf W(k)$,
then the groupoid of morphisms
\[{\mathcal X} \to [\Spec W(k)[X,Y]/(XY - p) / \GG_m]\] is in fact
a setoid, and is equivalent to the set of effective Cartier divisors
on ${\mathcal X}$ that are contained in the effective Cartier divisor
$(\Spec k) \times_{\Spf W(k)} {\mathcal X}$ on~${\mathcal X}$.
\end{lemma}
\begin{proof}
Essentially by definition (and taking into account \cite[Lem.\
8.18]{Emertonformalstacks}), it suffices to prove this in the case
when ${\mathcal X} = \Spf B$, where $B$ is a flat Noetherian adic
$W(k)$-algebra admitting $(p)$ as an ideal of definition. In this
case, the restriction map
\[[\Spec W(k)[X,Y]/(XY - p) / \GG_m](\Spec B)\to [\Spec W(k)[X,Y]/(XY -
p) / \GG_m](\Spf B)\] is an equivalence of groupoids.
Indeed, the
essential surjectivity follows from the (standard and easily
verified) fact that if $\{M_i\}$ is a compatible family of locally
free $B/p^iB$-modules of rank one, then $M := \varprojlim M_i$ is a
locally free $B$-module of rank one, for which each of the natural
morphisms $M/p^iM \to M_i$ is an isomorphism. The full
faithfulness follows from the fact that a locally free $B$-module
of rank one is $p$-adically complete, and so is recovered as the
inverse limit of its compatible family of quotients $\{M/p^iM\}.$
We are therefore reduced to the same statement with ${\mathcal X} = \Spec B$. The composite morphism $\Spec B \to [\mathbb A^1/\GG_m]$ induced
by~\eqref{eqn:Cartier divisor construction} corresponds to giving a
pair~$({\mathcal D},X)$ where~${\mathcal D}$ is a line bundle on~$\Spec B$, and~$X$
is a global section of~${\mathcal D}^{-1}$. Indeed, giving a morphism $\Spec
B \to [\mathbb A^1/\GG_m]$ is equivalent
to giving a $\GG_m$-torsor $P \to \Spec B$, together with a $\GG_m$-equivariant
morphism $P \to \mathbb A^1$. Giving a $\GG_m$-torsor
$P$ over $\Spec B$ is equivalent to giving an invertible sheaf
${\mathcal D}$ on $\Spec B$
(the associated $\GG_m$-torsor is then obtained by deleting the
zero section from the line bundle $D\to \Spec B$ corresponding to ${\mathcal D}$),
and giving a $\GG_m$-equivariant morphism $P \to \mathbb A^1$
is equivalent to giving a global section of ${\mathcal D}^{-1}$.
It follows that giving a morphism
$\Spec B \to [\Spec W(k)[X,Y]/(XY - p) / \GG_m]$ corresponds to giving
a line bundle ${\mathcal D}$ and sections $X \in {\mathcal D}^{-1}$, $Y \in {\mathcal D}$ satisfying
$X Y = p$. To say that $B$ is flat over $W(k)$ is just to say that
$p$ is a regular element on $B$, and so we see that $X$
(resp.\ $Y$) is a regular section of ${\mathcal D}^{-1}$ (resp.\ ${\mathcal D}$).
Again, since $p$ is a
regular element on $B$, we see that $Y$ is uniquely determined by
$X$ and the equation $X Y = p$, and so giving a morphism
$\Spec B\to [\Spec W(k)[X,Y]/(XY - p) / \GG_m]$ is equivalent to giving a line
bundle ${\mathcal D}$ and a regular section $X$ of ${\mathcal D}^{-1}$, such that
$pB \subset X\otimes_B {\mathcal D} \subset {\mathcal D}^{-1}\otimes_B {\mathcal D}\buildrel \sim \over \longrightarrow B$;
this last condition guarantees the existence of the (then uniquely
determined) $Y$.
Now giving a line bundle ${\mathcal D}$ on $\Spec B$ and a regular section
$X\in{\mathcal D}^{-1}$ is the
same as giving the zero locus $D$ of $X$, which is a Cartier divisor
on $\Spec B$.
(There is a canonical isomorphism $({\mathcal D},X) \cong
\bigl({\mathcal I}_D,1\bigr)$, where ${\mathcal I}_D$ denotes the ideal sheaf of $D$.)
The condition that $pB \subset X\otimes_B {\mathcal D}$ is equivalent
to the condition that $p \in {\mathcal I}_D$,
i.e.\ that $D$ be contained in $\Spec B/pB$, and we are done.
\end{proof}
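By way of example (a sanity check on the lemma, under its running
hypotheses): take ${\mathcal X}=\Spf W(k)$ itself. Since $W(k)$ is a discrete
valuation ring with uniformiser~$p$, an effective Cartier divisor on
$\Spec W(k)$ contained in $\Spec k$ is cut out by a divisor of~$p$,
hence (up to a unit) by either $1$ or~$p$; these correspond to the
pairs $(X,Y)=(1,p)$ and $(X,Y)=(p,1)$, which are indeed the only two
$\GG_m(W(k))$-orbits of pairs in $W(k)$ satisfying $XY=p$.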
\begin{lemma}\label{lem: maps to gauge stack as Cartier
divisors}
If ${\mathcal S}$ is a locally Noetherian $p$-adic formal algebraic stack which
is flat over $W(k)$,
then giving a morphism ${\mathcal S} \to {\mathcal G}_{\eta}$
over $W(k)$ is equivalent to giving a collection
of effective Cartier divisors ${\mathcal D}_j$ on ${\mathcal S}$ {\em (}$j = 0,
\ldots,f-1${\em )}, with each ${\mathcal D}_j$ contained in the Cartier divisor
$\overline{{\mathcal S}}$ cut out by the equation $p = 0$ on ${\mathcal S}$ {\em (}i.e.\
the {\em special fibre} of ${\mathcal S}${\em )}.
\end{lemma}
\begin{proof}
This follows immediately from Lemma~\ref{lem: f equals 1
effective Cartier}, by the definition of~${\mathcal G}_\eta$.
\end{proof}
\section{Families of extensions of Breuil--Kisin modules}\label{sec: extensions of rank one Kisin modules}
The goal of the next two sections is to construct certain universal families of
extensions of rank one Breuil--Kisin modules over ${\mathbb F}$ with descent data;\ these families will be
used in Section~\ref{sec:Components} to describe the generic behaviour of the various irreducible
components of the special fibres of~${\mathcal C}^{\tau,\operatorname{BT}}$ and~${\mathcal Z}^{\tau}$.
In Subsections~\ref{subsec:ext generalities} and~\ref{subsec:families of extensions} we present some generalities
on extensions of Breuil--Kisin modules and on families of these extensions, respectively. In Subsection~\ref{subsec:universal families}
we specialize the discussion of Subsection~\ref{subsec:families of extensions} to the case of extensions of two rank one Breuil--Kisin modules, and thus explain how to construct our desired families of extensions.
In Section~\ref{sec:extensions-of-rank-one} we recall the fundamental
computations related to extensions of rank one Breuil--Kisin modules
from \cite{DiamondSavitt}, to which the results of
Subsection~\ref{subsec:universal families} will be applied at the end of Subsection~\ref{sec:extensions-shape-J} to construct the components $\overline{{\mathcal C}}(J)$ and $\overline{{\mathcal Z}}(J)$ of Theorem~\ref{thm:main thm cegsC}.
We assume throughout this section that $[K':K]$ is not divisible
by~$p$; since we are assuming throughout the paper that $K'/K$ is
tamely ramified, this is equivalent to assuming that $K'$ does not
contain an unramified extension of $K$ of degree~$p$. In our final
applications $K'/K$ will contain unramified extensions of degree at
most~$2$, and $p$ will be odd, so this assumption will be satisfied.
(In fact, we specialize to such a context beginning in
Subsection~\ref{subsec:irreducible}.)
\subsection{Extensions of Breuil--Kisin modules with descent data}
\label{subsec:ext generalities}
When discussing the general theory of extensions of Breuil--Kisin
modules, it is convenient to embed the category of Breuil--Kisin modules
in a larger category which is abelian,
contains enough injectives and projectives,
and is closed under passing to arbitrary limits and colimits.
The simplest way to obtain such a category is as the category of modules
over some ring, and so we briefly recall how a Breuil--Kisin module with
$A$-coefficients and descent
data can be interpreted as a module over a certain $A$-algebra.
Let $\mathfrak{S}_A[F]$ denote the twisted polynomial ring over $\mathfrak{S}_A$,
in which the variable $F$ obeys the following commutation relation
with respect to elements $s \in \mathfrak{S}_A$:
$$ F \cdot s = \varphi(s) \cdot F.$$
Let $\mathfrak{S}_A[F, \Gal(K'/K)]$ denote the twisted group ring over $\mathfrak{S}_A[F]$,
in which the elements $g \in \Gal(K'/K)$ commute with $F$,
and obey the following commutation
relations with elements $s \in \mathfrak{S}_A$:
$$ g \cdot s = g(s) \cdot g.$$
One immediately confirms that giving a left $\mathfrak{S}_A[F, \Gal(K'/K)]$-module $\mathfrak{M}$
is equivalent to equipping the underlying $\mathfrak{S}_A$-module
$\mathfrak{M}$ with a $\varphi$-linear morphism
$\varphi:\mathfrak{M} \to \mathfrak{M}$ and a semi-linear action of $\Gal(K'/K)$
which commutes with $\varphi$.
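(Implicit in this is a consistency check on the relations above: since
$g$ and $F$ commute, one needs $g\cdot F\cdot s = F\cdot g\cdot s$ for
all $s\in\mathfrak{S}_A$, i.e.\ $g(\varphi(s))\cdot gF = \varphi(g(s))\cdot Fg$;
this holds precisely because the action of $\Gal(K'/K)$ on $\mathfrak{S}_A$
commutes with~$\varphi$.)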
In particular, if we let $\K{A}$ denote the category of left $\mathfrak{S}_A[F,
\Gal(K'/K)]$-modules, then
a Breuil--Kisin module with descent data from $K'$ to $K$
may naturally be regarded as an object of $\K{A}$.
In the following lemma, we record the fact that extensions of Breuil--Kisin modules
with descent data may be computed as extensions in the category~$\K{A}$.
\begin{lemma}
\label{lem:ext of a Kisin module is a Kisin module}
If $0 \to \mathfrak{M}' \to \mathfrak{M} \to \mathfrak{M}'' \to 0$ is a short exact sequence
in $\K{A}$, such that $\mathfrak{M}'$ {\em (}resp.\ $\mathfrak{M}''${\em )}
is a Breuil--Kisin module with descent data
of rank $d'$ and height at most $h'$
{\em (}resp.\ of rank $d''$ and height at most $h''$\emph{)},
then $\mathfrak{M}$ is a Breuil--Kisin module with descent data
of rank $d'+d''$ and height at most $h'+h''$.
More generally, if
$E(u)^h\in\operatorname{Ann}_{\mathfrak{S}_A}(\coker\Phi_{\mathfrak{M}'})\operatorname{Ann}_{\mathfrak{S}_A}(\coker\Phi_{\mathfrak{M}''})$,
then $\mathfrak{M}$ is a Breuil--Kisin module with descent data of height at most~$h$.
\end{lemma}
\begin{proof}
Note that since $\Phi_{\mathfrak{M}'}[1/E(u)]$ and $\Phi_{\mathfrak{M}''}[1/E(u)]$ are both isomorphisms by
assumption, it follows from the snake lemma that~$\Phi_{\mathfrak{M}}[1/E(u)]$ is an
isomorphism. Similarly we have a short exact sequence of
$\mathfrak{S}_A$-modules \[0\to\coker\Phi_{\mathfrak{M}'}\to\coker\Phi_{\mathfrak{M}}\to\coker\Phi_{\mathfrak{M}''}\to
0.\]
The claims about the height and rank of~$\mathfrak{M}$ follow immediately: the
rank is additive in short exact sequences of projective
$\mathfrak{S}_A$-modules, and if $x'$ annihilates $\coker\Phi_{\mathfrak{M}'}$ and $x''$
annihilates $\coker\Phi_{\mathfrak{M}''}$, then $x''$ carries $\coker\Phi_{\mathfrak{M}}$
into the image of $\coker\Phi_{\mathfrak{M}'}$, so that $x'x''$ annihilates
$\coker\Phi_{\mathfrak{M}}$.
\end{proof}
We now turn to giving an explicit description of the functors
$\Ext^i(\mathfrak{M}, \text{--} \, )$ for a Breuil--Kisin module with descent data
$\mathfrak{M}$.
\begin{df}
\label{def:explicit Ext complex}
Let $\mathfrak{M}$ be a
Breuil--Kisin module with $A$-coefficients and descent data (of some
height).
If $\mathfrak{N}$ is any object of $\K{A}$, then
we let $C^{\bullet}_{\mathfrak{M}}(\mathfrak{N})$ denote the complex
$$
\Hom_{\mathfrak{S}_A[\Gal(K'/K)]}(\mathfrak{M},\mathfrak{N}) \to
\Hom_{\mathfrak{S}_A[\Gal(K'/K)]}(\varphi^*\mathfrak{M},\mathfrak{N}),
$$
with differential being given by
$$\alpha \mapsto \Phi_{\mathfrak{N}} \circ \varphi^* \alpha - \alpha \circ \Phi_{\mathfrak{M}}.$$
Also let $\Phi_{\mathfrak{M}}^*$ denote the map $C^0_{\mathfrak{M}}(\mathfrak{N})
\to C^1_{\mathfrak{M}}(\mathfrak{N})$ given by $\alpha \mapsto \alpha \circ
\Phi_{\mathfrak{M}}$. When $\mathfrak{M}$ is clear from the context we will usually suppress it from the
notation and write simply $C^{\bullet}(\mathfrak{N})$.
\end{df}
Each $C^i(\mathfrak{N})$ is naturally an $\mathfrak{S}_A^0$-module.
The formation of $C^{\bullet}(\mathfrak{N})$ is evidently functorial in $\mathfrak{N}$,
and is also exact in $\mathfrak{N}$, since $\mathfrak{M}$, and hence also $\varphi^*\mathfrak{M}$,
is projective over $\mathfrak{S}_A$, and since $\Gal(K'/K)$ has prime-to-$p$ order.
Thus the cohomology functors
$H^0\bigl(C^{\bullet}(\text{--})\bigr)$
and
$H^1\bigl(C^{\bullet}(\text{--})\bigr)$
form a $\delta$-functor on $\K{A}$.
\begin{lemma}\label{lem: C computes Hom}
There is a natural isomorphism
$$\Hom_{\K{A}}(\mathfrak{M},\text{--}\, ) \cong
H^0\bigl(C^{\bullet}(\text{--}\,)\bigr).$$
\end{lemma}
\begin{proof}
This is immediate.
\end{proof}
It follows from this lemma and
a standard dimension shifting argument (or, equivalently, the theory
of $\delta$-functors) that there is an embedding of functors
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:ext embedding}
\Ext^1_{\K{A}}(\mathfrak{M}, \text{--} \, ) \hookrightarrow
H^1\bigl(C^{\bullet}(\text{--})\bigr).
\end{equation}
\begin{lemma}\label{lem: C computes Ext^1}
The embedding of functors~\eqref{eqn:ext embedding}
is an isomorphism.
\end{lemma}
\begin{proof}
We first describe the embedding~\eqref{eqn:ext embedding} explicitly.
Suppose that
$$0 \to \mathfrak{N} \to \mathfrak{E} \to \mathfrak{M} \to 0$$
is an extension in $\K{A}$. Since $\mathfrak{M}$ is projective over $\mathfrak{S}_A$,
and since $\Gal(K'/K)$ is of prime-to-$p$ order,
we split this
short exact sequence
over the twisted group ring $\mathfrak{S}_A[\Gal(K'/K)],$
say via some element $\sigma \in \Hom_{\mathfrak{S}_A[\Gal(K'/K)]}(\mathfrak{M},\mathfrak{E}).$
This splitting is well-defined up to the addition of an
element $\alpha \in \Hom_{\mathfrak{S}_A[\Gal(K'/K)]}(\mathfrak{M},\mathfrak{N}).$
This splitting is a homomorphism in $\K{A}$ if and only
if the element
$$\Phi_{\mathfrak{E}}\circ \varphi^*\sigma - \sigma \circ \Phi_{\mathfrak{M}}
\in \Hom_{\mathfrak{S}_A[\Gal(K'/K)]}(\varphi^*\mathfrak{M},\mathfrak{N})$$ vanishes.
If we replace $\sigma$ by $\sigma +\alpha,$ then this element
is replaced by
$$(\Phi_{\mathfrak{E}}\circ \varphi^*\sigma - \sigma \circ \Phi_{\mathfrak{M}})
+ (\Phi_{\mathfrak{N}} \circ \varphi^* \alpha - \alpha \circ \Phi_{\mathfrak{M}}).$$
Thus the coset of
$\Phi_{\mathfrak{E}}\circ \varphi^*\sigma - \sigma \circ \Phi_{\mathfrak{M}}$
in $H^1\bigl(C^{\bullet}(\mathfrak{N})\bigr)$ is well-defined, independent
of the choice of $\sigma,$
and this coset is the image of the class of the extension
$\mathfrak{E}$ under the embedding
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:explicit embedding}
\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N}) \hookrightarrow
H^1\bigl(C^{\bullet}(\mathfrak{N})\bigr)
\end{equation}
(up to a possible overall sign, which we ignore, since it doesn't
affect the claim of the lemma).
Now, given any element
$\nu \in \Hom_{\mathfrak{S}_A[\Gal(K'/K)]}(\varphi^*\mathfrak{M},\mathfrak{N})$,
we may give the $\mathfrak{S}_A[\Gal(K'/K)]$-module $\mathfrak{E} := \mathfrak{N} \oplus \mathfrak{M}$ the
structure of a $\mathfrak{S}_A[F,\Gal(K'/K)]$-module as follows: we need to
define a $\varphi$-linear morphism $\mathfrak{E}\to\mathfrak{E}$, or equivalently a linear
morphism $\Phi_{\mathfrak{E}}:\varphi^*\mathfrak{E}\to\mathfrak{E}$. We do this
by setting $$\Phi_{\mathfrak{E}} := \begin{pmatrix} \Phi_{\mathfrak{N}} & \nu \\ 0 & \Phi_{\mathfrak{M}}
\end{pmatrix}.$$
Then $\mathfrak{E}$ is an extension of $\mathfrak{M}$ by $\mathfrak{N}$,
and if we let $\sigma$ denote the obvious embedding of $\mathfrak{M}$ into $\mathfrak{E}$,
then one computes that
$$\nu = \Phi_{\mathfrak{E}}\circ \varphi^*\sigma - \sigma \circ \Phi_{\mathfrak{M}} .$$
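Explicitly (writing elements of
$\varphi^*\mathfrak{E}=\varphi^*\mathfrak{N}\oplus\varphi^*\mathfrak{M}$ as column vectors): for
$m\in\varphi^*\mathfrak{M}$ we have
$$\Phi_{\mathfrak{E}}(\varphi^*\sigma(m))=\begin{pmatrix} \Phi_{\mathfrak{N}} & \nu \\ 0 & \Phi_{\mathfrak{M}}\end{pmatrix}\begin{pmatrix}0\\ m\end{pmatrix}=\begin{pmatrix}\nu(m)\\ \Phi_{\mathfrak{M}}(m)\end{pmatrix},
\qquad
\sigma(\Phi_{\mathfrak{M}}(m))=\begin{pmatrix}0\\ \Phi_{\mathfrak{M}}(m)\end{pmatrix},$$
and the difference of these is $\nu(m)\in\mathfrak{N}\subseteq\mathfrak{E}$.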
This shows that~\eqref{eqn:explicit embedding} is an isomorphism, as claimed.
\end{proof}
Another dimension shifting argument, taking into account the preceding
lemma, shows that $\Ext^2_{\K{A}}(\mathfrak{M},\text{--} \,)$ embeds into
$H^2\bigl( C^{\bullet}(\text{--}) \bigr).$ Since the complex $C^{\bullet}$ is
concentrated in degrees $0$ and $1$, the target of this embedding vanishes,
we find that the same is true of the source. This yields the
following corollary.
\begin{cor}
\label{cor:ext2 vanishes}
If $\mathfrak{M}$ is a Breuil--Kisin module with $A$-coefficients and descent data,
then $\Ext^2_{\K{A}}(\mathfrak{M}, \text{--} \, ) = 0.$
\end{cor}
We summarise the above discussion in the following corollary.
\begin{cor}
\label{cor:complex computes Hom and Ext}If $\mathfrak{M}$ is a
Breuil--Kisin module with $A$-coefficients and descent data, and $\mathfrak{N}$ is an
object of~$\K{A}$, then we have a natural short exact
sequence \[0\to\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N})\to C^0(\mathfrak{N})\to
C^1(\mathfrak{N})\to\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})\to 0.\]
\end{cor}
The following lemma records the behaviour of these complexes with
respect to base change.
\begin{lemma}\label{lem:base-change-complexes}
Suppose that $\mathfrak{M}$, $\mathfrak{N}$ are Breuil--Kisin modules with descent data and
$A$-coefficients, that $B$ is an $A$-algebra, and that $Q$ is a
$B$-module. Then the
complexes $C^{\bullet}_{\mathfrak{M}}(\mathfrak{N} \, \widehat{\otimes}_A Q)$ and
$C^{\bullet}_{\mathfrak{M} \, \widehat{\otimes}_A B}(\mathfrak{N} \, \widehat{\otimes}_A Q)$ coincide, the
former complex
formed with respect to $\K{A}$ and the latter with respect to $\K{B}$.
\end{lemma}
\begin{proof}
Indeed, there is a natural isomorphism $$\Hom_{\mathfrak{S}_A[\Gal(K'/K)]}(\mathfrak{M}, \mathfrak{N}\, \widehat{\otimes}_A Q) \cong
\Hom_{\mathfrak{S}_B[\Gal(K'/K)]}(\mathfrak{M}\, \widehat{\otimes}_A B, \mathfrak{N}\, \widehat{\otimes}_A Q),$$
and similarly with $\varphi^*\mathfrak{M}$ in place of $\mathfrak{M}$.
\end{proof}
The following slightly technical lemma is
crucial for establishing
finiteness properties, and also base-change properties,
of Exts of Breuil--Kisin modules.
\begin{lem}
\label{lem:truncation argument used to prove f.g. of Ext Q version}Let $A$ be
a ${\mathcal O}/\varpi^a$-algebra for some $a\ge 1$, suppose that
$\mathfrak{M}$
is a Breuil--Kisin module with descent data and $A$-coefficients,
of height at most~$h$,
and suppose that $\mathfrak{N}$ is a $u$-adically complete, $u$-torsion
free object of $\K{A}$.
Let $C^\bullet$ be the complex defined
in \emph{Definition~\ref{def:explicit Ext complex}}, and write~$\delta$ for
its differential. Suppose that $Q$ is an $A$-module with
the property that $C^i\otimes_A Q$ is $v$-torsion free for $i=0,1$
and $v$-adically separated for $i=0$.
Then:
\begin{enumerate}
\item For any integer $M\ge (eah+1)/(p-1)$, $\ker (\delta\otimes \operatorname{id}_Q)\cap
v^MC^0\otimes_AQ=0$.
\item For any integer $N \ge (peah+1)/(p-1)$, $\delta\otimes\operatorname{id}_Q$ induces an isomorphism
\[(\Phi_{\mathfrak{M}}^*)^{-1}(v^N C^1\otimes_AQ) \stackrel{\sim}{\To} v^N (C^1\otimes_AQ).\]
\end{enumerate}
Consequently, for $N$ as in \emph{(2)} the natural morphism of complexes of $A$-modules
$$[ C^0\otimes_AQ \buildrel \delta\otimes\operatorname{id}_Q \over \longrightarrow C^1\otimes_AQ] \to
[C^0\otimes_AQ/\bigl((\Phi_{\mathfrak{M}}^*)^{-1}(v^N C^1\otimes_AQ) \bigr) \buildrel \delta\otimes\operatorname{id}_Q
\over \longrightarrow C^1\otimes_AQ/v^N C^1\otimes_AQ ]$$
is a quasi-isomorphism.
\end{lem}
Since we are assuming that the $C^i\otimes_AQ$ are $v$-torsion free,
the expression $v^r C^i(\mathfrak{N}) \otimes_A Q$ may be interpreted as
denoting either $v^r \bigl( C^i(\mathfrak{N})\otimes_A Q\bigr)$ or
$\bigl(v^r C^i(\mathfrak{N}) \bigr)\otimes_A Q$, the two being naturally
isomorphic.
\begin{rem}\label{rem:truncation-remark} Before giving the proof of Lemma~\ref{lem:truncation
argument used to prove f.g. of Ext Q version}, we observe that the
hypotheses on the $C^i \otimes_A Q$ are satisfied if either $Q=A$, or
else $\mathfrak{N}$ is a projective $\mathfrak{S}_A$-module and $Q$ is a finitely
generated $B$-module for some finitely
generated $A$-algebra $B$. (Indeed $C^1 \otimes_A Q$
is $v$-adically separated as well in these cases.)
(1) Since $\mathfrak{M}$ is projective of finite rank over $A[[u]]$,
and since $\mathfrak{N}$ is $u$-adically complete and $u$-torsion free,
each $C^i$ is $v$-adically
separated
and $v$-torsion free. In particular the hypothesis on~$Q$ is always satisfied
by~$Q=A$. (In fact since $\mathfrak{N}$ is $u$-adically complete it also follows that
the $C^i$ are $v$-adically complete. Here we use that $\Gal(K'/K)$ has
order prime to $p$ to see that $C^0$ is an $\mathfrak{S}_A^0$-module direct
summand of $\Hom_{\mathfrak{S}_A}(\mathfrak{M},\mathfrak{N})$, and similarly for $C^1$.)
(2) Suppose $\mathfrak{N}$ is a projective $\mathfrak{S}_A$-module. Then the $C^i$ are
projective $\mathfrak{S}_A^0$-modules, again using that $\Gal(K'/K)$ has
order prime to $p$. Since each $C^i(\mathfrak{N})/v C^i(\mathfrak{N})$ is $A$-flat, it
follows that $C^i(\mathfrak{N}) \otimes_A Q$ is $v$-torsion free. If furthermore $B$ is a finitely
generated $A$-algebra, and $Q$ is a finitely generated $B$-module,
then the $C^i(\mathfrak{N})\otimes_A Q$ are
$v$-adically separated (being finitely generated modules over
the ring $A[[v]]\otimes_A B$, which is a finitely generated
algebra over the Noetherian ring $A[[v]]$, and hence is itself
Noetherian).
\end{rem}
\begin{proof}[Proof of Lemma~{\ref{lem:truncation argument used to
prove f.g. of Ext Q version}}] Since $p^a=0$ in $A$, there exists $H(u)\in\mathfrak{S}_A$ with
$u^{e'ah}=E(u)^hH(u)$ in $\mathfrak{S}_A$. Thus the image of $\Phi_{\mathfrak{M}}$
contains $u^{e'ah} \mathfrak{M}=v^{eah}\mathfrak{M}$, and there exists a map $\Upsilon : \mathfrak{M} \to
\varphi^* \mathfrak{M}$ such that $\Phi_{\mathfrak{M}} \circ \Upsilon$ is multiplication by $v^{eah}$.
We begin with~(1). Suppose that
$f\in\ker (\delta\otimes\operatorname{id}_Q)\cap v^MC^0\otimes_AQ$.
Since $C^0\otimes_AQ$ is $v$-adically separated,
it is enough, applying induction on $M$,
to show that $f\in v^{M+1}C^0\otimes_AQ$. Since
$f\in\ker(\delta\otimes\operatorname{id}_Q)$, we have
$f\circ\Phi_{\mathfrak{M}}=\Phi_{\mathfrak{N}}\circ\varphi^*f$. Since $f\in v^MC^0\otimes_AQ$,
we have $f \circ \Phi_{\mathfrak{M}} = \Phi_{\mathfrak{N}} \circ \varphi^*f \in
v^{pM} C^1 \otimes_A Q$. Precomposing with $\Upsilon$ gives
$v^{eah} f \in v^{pM} C^0 \otimes_A Q$.
Since~$C^0 \otimes_A Q$ is $v$-torsion free, it follows that $f\in
v^{pM-eah}C^0\otimes_AQ\subseteq v^{M+1}C^0\otimes_AQ$, as required.
We now move on to~(2). Set $M = N - eah$.
By precomposing with $\Upsilon$ we see that $\alpha \circ \Phi_{\mathfrak{M}} \in v^N C^1\otimes_A Q$ implies $\alpha \in v^M C^0\otimes_A Q$; from this, together with the inequality
$pM \ge N$, it is
straightforward to check that
\[(\Phi_{\mathfrak{M}}^*)^{-1}(v^N C^1\otimes_AQ) = (\delta \otimes \operatorname{id}_Q)^{-1}(v^N
C^1\otimes_AQ)\cap v^M C^0\otimes_AQ.\]
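(In one direction, if $\alpha \circ \Phi_{\mathfrak{M}} \in v^N C^1\otimes_AQ$,
then $\alpha \in v^M C^0\otimes_AQ$ as just explained, so that
$\Phi_{\mathfrak{N}}\circ\varphi^*\alpha \in v^{pM}C^1\otimes_AQ \subseteq v^N
C^1\otimes_AQ$, whence $(\delta\otimes\operatorname{id}_Q)(\alpha)\in v^N
C^1\otimes_AQ$; in the other, if $\alpha\in v^M C^0\otimes_AQ$ and
$(\delta\otimes\operatorname{id}_Q)(\alpha)\in v^N C^1\otimes_AQ$, then
$\alpha\circ\Phi_{\mathfrak{M}} =
\Phi_{\mathfrak{N}}\circ\varphi^*\alpha-(\delta\otimes\operatorname{id}_Q)(\alpha) \in v^N
C^1\otimes_AQ$.)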
Note that $M$ satisfies the condition in~(1). To complete the proof we
will show that for any $M$ as in (1) and any $N \ge M + eah$ the map
$\delta$ induces an isomorphism
\[ (\delta \otimes \operatorname{id}_Q)^{-1}(v^N C^1\otimes_AQ)\cap v^M C^0\otimes_AQ \stackrel{\sim}{\To} v^N C^1\otimes_AQ.\]
By~(1), $\delta\otimes\operatorname{id}_Q$ induces an injection $(\delta\otimes\operatorname{id}_Q)^{-1}(v^N C^1\otimes_AQ)\cap
v^M C^0\otimes_AQ\hookrightarrow v^N C^1\otimes_AQ$, so it is enough to show that
$(\delta\otimes\operatorname{id}_Q)(v^MC^0\otimes_AQ)\supseteq v^N
C^1\otimes_AQ$.
Equivalently, we need to show that
$$
v^N C^1 \otimes_A Q
\to
(C^1\otimes_A Q) /
(\delta\otimes\operatorname{id}_Q)\bigl(v^M C^0\otimes_A Q)
$$
is identically zero. Since the formation of cokernels is compatible with tensor products,
we see that this morphism is obtained by tensoring the corresponding morphism
$$
v^N C^1
\to
C^1/
\delta\bigl(v^M C^0\bigr)
$$
with $Q$ over $A$, so we are reduced to the case $Q=A$. (Recall from
Remark~\ref{rem:truncation-remark}(1) that the
hypotheses of the Lemma are satisfied in this case, and that $C^1$ is
$v$-adically separated.)
We claim that for
any $g\in v^NC^1$, we can find an $f\in v^{N-eah}C^0$ such that
$\delta(f)-g\in v^{p(N-eah)}C^1$. Admitting the claim, given any
$g\in v^N C^1$, we may find $h\in v^MC^0$ with $\delta(h)=g$ by
successive approximation in the following way:
Set $h_0=f$ for~$f$
as in the claim; then $h_0\in v^{N-eah}C^0\subseteq v^MC^0$, and
$\delta(h_0)-g\in v^{p(N-eah)}C^1\subseteq v^{N+1}C^1$. Applying
the claim again with $N$ replaced by $N+1$, and $g$ replaced by
$g-\delta(h_0)$, we find $f\in v^{N+1-eah}C^0\subseteq v^{M+1}C^0$
with $\delta(f)-g+\delta(h_0)\in v^{p(N+1-eah)}C^1\subseteq
v^{N+1}C^1$. Setting $h_1=h_0+f$, and proceeding inductively, we
obtain a Cauchy sequence converging (in the $v$-adically
complete $A[[v]]$-module $C^0$) to the required element~$h$.
It remains to prove the claim. Since
$\delta(f)=\Phi_{\mathfrak{N}}\circ\varphi^*f-f\circ\Phi_{\mathfrak{M}} $, and since if
$f\in v^{N-eah}C^0$ then $\Phi_{\mathfrak{N}}\circ\varphi^*f\in v^{p(N-eah)}C^1$,
it is enough to show that we can find an $f\in v^{N-eah}C^0$ with
$f\circ\Phi_{\mathfrak{M}}=-g$. Since $\Phi_{\mathfrak{M}}$ is injective, the map
$\Upsilon \circ \Phi_\mathfrak{M}$ is also multiplication by $v^{eah}$, and so
it suffices to take $f$ with $v^{eah} f = -g \circ \Upsilon \in v^N C^0$.
\end{proof}
\begin{cor}\label{cor: base change completion for complex in free case}
Let $A$ be a Noetherian
${\mathcal O}/\varpi^a$-algebra,
and let $\mathfrak{M}$, $\mathfrak{N}$ be Breuil--Kisin modules with descent data
and $A$-coefficients. If $B$ is a finitely generated
$A$-algebra, and $Q$ is a finitely generated $B$-module,
then the natural morphism of complexes of $B$-modules
$$[ C^0(\mathfrak{N})\otimes_A Q \buildrel {\delta\otimes \operatorname{id}_Q} \over \longrightarrow
C^1(\mathfrak{N})\otimes_A Q] \to
[C^0(\mathfrak{N}\, \widehat{\otimes}_A Q) \buildrel \delta \over \longrightarrow
C^1(\mathfrak{N}\, \widehat{\otimes}_A Q)]$$
is a quasi-isomorphism.
\end{cor}
\begin{proof}
By Remarks~\ref{rem:truncation-remark} and~\ref{rem:completed tensor}(2) we can apply Lemma~\ref{lem:truncation argument used to
prove f.g. of Ext Q version} to both
$C^i(\mathfrak{N}\, \widehat{\otimes}_AQ)$ and $C^i(\mathfrak{N})\otimes_AQ$, and we see that it is enough to show that the
natural morphism of complexes
\[\begin{adjustbox}{max width=\textwidth}
\begin{tikzcd}
{[\bigl(C^0(\mathfrak{N})\otimes_A Q \bigr)/
(\Phi_{\mathfrak{M}}^*\otimes \operatorname{id}_Q)^{-1}\bigl(v^N C^1(\mathfrak{N})\otimes_A Q \bigr)
\buildrel \delta
\over \longrightarrow \bigl(C^1(\mathfrak{N})\otimes_A Q\bigr)/\bigl(v^N C^1(\mathfrak{N})
\otimes_A Q\bigr) ]}
\arrow{d}{} \\
{[C^0(\mathfrak{N}\, \widehat{\otimes}_A Q)
/\bigl(\Phi_{\mathfrak{M}}^*)^{-1}(v^N C^1(\mathfrak{N}\, \widehat{\otimes}_A Q)\bigr)
\buildrel \delta
\over \longrightarrow C^1(\mathfrak{N}\, \widehat{\otimes}_A Q)/v^N C^1(\mathfrak{N}\, \widehat{\otimes}_A Q) ]}
\end{tikzcd}
\end{adjustbox}\]
is a quasi-isomorphism. In fact, it is even an isomorphism.
\end{proof}
\begin{prop}
\label{prop:exts are f.g. over A} Let $A$ be a
${\mathcal O}/\varpi^a$-algebra for some $a\ge 1$, and let $\mathfrak{M}$, $\mathfrak{N}$ be
Breuil--Kisin modules with descent data and $A$-coefficients.
Then $\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})$ and $\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N}/u^i\mathfrak{N})$
for $i \ge 1$ are finitely presented $A$-modules.
If
furthermore $A$ is Noetherian, then $\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N})$ and
$\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}/u^i\mathfrak{N})$ for $i\ge 1$ are also
finitely presented {\em (}equivalently, finitely generated{\em )} $A$-modules.
\end{prop}
\begin{proof}
The statements for $\mathfrak{N}/u^i\mathfrak{N}$
follow easily from those for $\mathfrak{N}$, by considering the short exact sequence $0 \to u^i\mathfrak{N} \to \mathfrak{N} \to
\mathfrak{N}/u^i\mathfrak{N} \to 0$ in $\K{A}$ and applying Corollary~\ref{cor:ext2
vanishes}.
By Corollary~\ref{cor:complex computes Hom and Ext}, it is enough to consider the cohomology of the
complex~$C^\bullet$. By Lemma~\ref{lem:truncation argument used to
prove f.g. of Ext Q version} with $Q=A$,
the cohomology of~$C^\bullet$ agrees with the
cohomology of the induced complex \[C^0/\bigl((\Phi_{\mathfrak{M}}^*)^{-1}(v^N C^1) )\to C^1/ v^N C^1,\]
for an appropriately chosen value of $N$. It follows that for an
appropriately chosen value of~$N$, $\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})$ can be
computed as the cokernel of the induced morphism $C^0/v^N C^0 \to C^1/ v^N C^1$.
Under our hypothesis on~$\mathfrak{N}$, $C^0/v^N C^0$ and $C^1/v^NC^1$ are finitely
generated projective $A$-modules, and thus finitely presented. It follows that
$\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})$ is finitely presented.
In the case that $A$ is furthermore assumed to be Noetherian, it is
enough to note that since $v^NC^0\subseteq (\Phi_{\mathfrak{M}}^*)^{-1}(v^N C^1)$,
the quotient $C^0/\bigl((\Phi_{\mathfrak{M}}^*)^{-1}(v^N C^1) \bigr)$ is a finitely generated $A$-module.
\end{proof}
\begin{prop}
\label{prop:descent for Homs of free Kisin modules}Let $A$ be a
${\mathcal O}/\varpi^a$-algebra for some $a\ge 1$,
and let $\mathfrak{M}$ and~$\mathfrak{N}$ be Breuil--Kisin modules with descent
data and $A$-coefficients. Let $B$ be an $A$-algebra, and let
$f_B:\mathfrak{M}\, \widehat{\otimes}_AB\to\mathfrak{N}\, \widehat{\otimes}_AB$ be a morphism of Breuil--Kisin
modules with $B$-coefficients.
Then there is a finite type $A$-subalgebra $B'$ of~$B$ and a morphism
of Breuil--Kisin modules $f_{B'}:\mathfrak{M}\, \widehat{\otimes}_A B'\to\mathfrak{N}\, \widehat{\otimes}_A B'$
such that $f_B$ is the base change of~$f_{B'}$.
\end{prop}
\begin{proof}
By Lemmas~\ref{lem: C computes Hom}
and~\ref{lem:base-change-complexes} (the latter applied with $Q=B$) we can and do think
of $f_B$ as being an element of the kernel of
$\delta:C^0(\mathfrak{N}\, \widehat{\otimes}_A B)\to C^1(\mathfrak{N}\, \widehat{\otimes}_A
B)$, the complex $C^\bullet$ here and throughout this proof denoting
$C^{\bullet}_{\mathfrak{M}}$ as usual.
Fix $N$ as in
Lemma~\ref{lem:truncation argument used to prove f.g. of Ext Q version}, and
write~$\overline{f}_B$ for the corresponding element of
$C^0(\mathfrak{N}\, \widehat{\otimes}_A B)/v^N=(C^0(\mathfrak{N})/v^N)\otimes_A B$ (this equality
following easily from the assumption that $\mathfrak{M}$ and $\mathfrak{N}$ are
projective $\mathfrak{S}_A$-modules of finite rank). Since $C^0(\mathfrak{N})/v^N$
is a projective $A$-module of finite
rank,
it follows
that for some finite type
$A$-subalgebra $B'$ of~$B$, there is an element $\overline{f}_{B'}\in
(C^0(\mathfrak{N})/v^N)\otimes_A B'=C^0(\mathfrak{N}\, \widehat{\otimes}_A B')/v^N$ such that
$\overline{f}_{B'}\otimes_{B'}B=\overline{f}_B$. Denote also by $\overline{f}_{B'} $ the induced element of \[C^0(\mathfrak{N}\, \widehat{\otimes}
_A B')/\bigl(\Phi_{\mathfrak{M}}^*)^{-1}(v^N C^1(\mathfrak{N}\, \widehat{\otimes}
_A B')).\]
By Lemma~\ref{lem:truncation argument used to prove
f.g. of Ext Q version} (and Lemma~\ref{lem: C computes Hom}) we have a
commutative diagram with exact rows \[\xymatrix{0 \ar[r] & H^0(C^{\bullet}(\mathfrak{N}\, \widehat{\otimes}
_A B')) \ar[d] \ar[r] & C^0(\mathfrak{N}\, \widehat{\otimes}
_A B')/\bigl((\Phi_{\mathfrak{M}}^*)^{-1}(v^N C^1(\mathfrak{N}\, \widehat{\otimes}
_A B'))
\bigr) \ar[r]^-{\delta}\ar[d] & C^1(\mathfrak{N}\, \widehat{\otimes}
_A B')/v^N\ar[d]\\ 0 \ar[r] & H^0(C^{\bullet}(\mathfrak{N}\, \widehat{\otimes}
_A B)) \ar[r] & C^0(\mathfrak{N}\, \widehat{\otimes}
_A B)/\bigl((\Phi_{\mathfrak{M}}^*)^{-1}(v^N C^1 (\mathfrak{N}\, \widehat{\otimes}
_A B))
\bigr) \ar[r]^-{\delta} & C^1(\mathfrak{N}\, \widehat{\otimes}
_A B)/v^N } \]
in which the vertical arrows
are induced by $\, \widehat{\otimes}_{B'}B$. By a diagram chase we only need to
show that $\delta(\overline{f}_{B'})=0$. Since $\delta(f_B)=0$, it is
enough to show that the right hand vertical arrow is an
injection. This arrow can be rewritten as the tensor product of the
injection of $A$-algebras $B'\hookrightarrow B$ with the flat (even projective
of finite rank) $A$-module $C^1(\mathfrak{N})/v^N$, so the result follows.
\end{proof}
We have the following key base-change result for $\Ext^1$'s of
Breuil--Kisin modules with descent data.
\begin{prop}
\label{prop:base-change for exts}
Suppose that $\mathfrak{M}$ and $\mathfrak{N}$ are Breuil--Kisin modules with
descent data and coefficients in a ${\mathcal O}/\varpi^a$-algebra $A$.
Then for any $A$-algebra $B$, and for any $B$-module $Q$,
there are natural isomorphisms
$\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})\otimes_A Q \buildrel \sim \over \longrightarrow
\Ext^1_{\K{B}}(\mathfrak{M}\, \widehat{\otimes}_A B, \mathfrak{N}\, \widehat{\otimes}_A B) \otimes_B Q
\buildrel \sim \over \longrightarrow
\Ext^1_{\K{B}}(\mathfrak{M}\, \widehat{\otimes}_A B,\mathfrak{N}\, \widehat{\otimes}_A Q).$
\end{prop}
\begin{proof}
We first prove the lemma in the case of an $A$-module $Q$.
It follows from Lemmas \ref{lem: C computes Ext^1}
and~\ref{lem:truncation argument used to prove f.g. of Ext Q version}
that we may compute
$\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})$
as the cokernel of the morphism
$$C^0(\mathfrak{N})/v^N C^0(\mathfrak{N})
\buildrel \delta \over \longrightarrow C^1(\mathfrak{N})/v^N C^1(\mathfrak{N}),$$for
some sufficiently large value of $N$ (not depending on $\mathfrak{N}$),
and hence that we may compute
$\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})\otimes_A Q$
as the cokernel of the morphism
$$\bigl(C^0(\mathfrak{N})/v^N C^0(\mathfrak{N})\bigr) \otimes_A Q
\buildrel \delta \over \longrightarrow
\bigl(C^1(\mathfrak{N})/v^N C^1(\mathfrak{N})\bigr) \otimes_A Q.$$
We may similarly compute
$\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N}\, \widehat{\otimes}_A Q)$
as the cokernel of the morphism
$$C^0(\mathfrak{N}\, \widehat{\otimes}_A Q)/v^N C^0(\mathfrak{N}\, \widehat{\otimes}_A Q)
\buildrel \delta \over \longrightarrow
C^1(\mathfrak{N}\, \widehat{\otimes}_A Q)/v^N C^1(\mathfrak{N}\, \widehat{\otimes}_A Q).$$
(Remark~\ref{rem:completed tensor}~(2) shows
that $\mathfrak{N}\, \widehat{\otimes}_A Q$ satisfies the necessary hypotheses
for Lemma~\ref{lem:truncation argument used to prove f.g. of Ext Q version}
to apply.)
Once we note that the natural morphism
$$\bigl( C^i(\mathfrak{N})/v^N C^i(\mathfrak{N})\bigr)\otimes_A Q \to C^i(\mathfrak{N}\, \widehat{\otimes}_A Q)/v^N
C^i(\mathfrak{N}\, \widehat{\otimes}_A Q)$$
is an isomorphism for $i = 0$ and $1$ (because $\mathfrak{M}$ is a finitely
generated projective $\mathfrak{S}_A$-module),
we obtain the desired isomorphism
$$\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})\otimes_A Q \buildrel \sim \over \longrightarrow \Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N}\, \widehat{\otimes}_A Q).$$
If $B$ is an $A$-algebra, and $Q$ is a $B$-module,
then by Lemma~\ref{lem:base-change-complexes}
there is a natural isomorphism
$$\Ext^1_{\K{A}}(\mathfrak{M}, \mathfrak{N}\, \widehat{\otimes}_A Q) \buildrel \sim \over \longrightarrow
\Ext^1_{\K{B}}(\mathfrak{M}\, \widehat{\otimes}_A B, \mathfrak{N}\, \widehat{\otimes}_A Q);$$
combined with the preceding base-change result, this yields
one of our claimed isomorphisms, namely
\[\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})\otimes_A Q \buildrel \sim \over \longrightarrow
\Ext^1_{\K{B}}(\mathfrak{M}\, \widehat{\otimes}_A B, \mathfrak{N}\, \widehat{\otimes}_A Q).\]
Taking $Q$ to be $B$ itself, we then obtain
an isomorphism
\[\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})\otimes_A B \buildrel \sim \over \longrightarrow
\Ext^1_{\K{B}}(\mathfrak{M}\, \widehat{\otimes}_A B, \mathfrak{N}\, \widehat{\otimes}_A B).\]
This allows us to identify
$\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})\otimes_A Q$,
which is naturally isomorphic to
$\bigr(\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})\otimes_A B\bigr) \otimes_B Q$,
with
$\Ext^1_{\K{B}}(\mathfrak{M}\, \widehat{\otimes}_A B, \mathfrak{N}\, \widehat{\otimes}_A B)\otimes_B Q$,
yielding the second claimed isomorphism.
\end{proof}
In contrast to the situation for extensions
(\emph{cf}.\ Proposition~\ref{prop:base-change for exts}), the formation of
homomorphisms between Breuil--Kisin modules is in general
not compatible with arbitrary base-change, as the following example shows.
\begin{example}\label{example:rank one unramified}
Take $A = ({\mathbb Z}/p{\mathbb Z})[x^{\pm 1}, y^{\pm 1}]$, and let $\mathfrak{M}_x$ be the
free Breuil--Kisin module of rank one with $A$-coefficients, satisfying $\varphi(e)
= xe$ for some generator $e$ of $\mathfrak{M}_x$. Similarly define $\mathfrak{M}_y$ with
$\varphi(e') = ye'$ for some generator $e'$ of $\mathfrak{M}_y$. Then
$\Hom_{\K{A}}(\mathfrak{M}_x,\mathfrak{M}_y)=0$. On the other hand, if $B=A/(x-y)$ then
$\mathfrak{M}_x \, \widehat{\otimes}_A B$ and $\mathfrak{M}_y \, \widehat{\otimes}_A B$ are isomorphic, so that
$\Hom_{\K{B}}(\mathfrak{M}_x \, \widehat{\otimes}_A B, \mathfrak{M}_y \, \widehat{\otimes}_A B) \not\cong
\Hom_{\K{A}}(\mathfrak{M}_x,\mathfrak{M}_y) \otimes_A B$.
\end{example}
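To see the vanishing of $\Hom_{\K{A}}(\mathfrak{M}_x,\mathfrak{M}_y)$ in this example,
here is a sketch, suppressing the descent data and taking $k'={\mathbb F}_p$
for simplicity: a morphism sends $e\mapsto\lambda e'$ with
$\lambda=\sum_{i\ge 0}a_iu^i\in A[[u]]$, and compatibility with
$\varphi$ forces $x\lambda(u)=y\lambda(u^p)$. Comparing coefficients of
$u^0$ gives $(x-y)a_0=0$, so $a_0=0$ since $x-y$ is a nonzerodivisor
in~$A$; comparing coefficients of $u^j$ for $p\nmid j$ gives $a_j=0$;
and comparing coefficients of $u^{pi}$ gives $xa_{pi}=ya_i$, so that
inductively $\lambda=0$.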
However, it is possible
to establish such a compatibility in some settings.
Corollary~\ref{cor:vanishing of homs non Noetherian}, which
gives a criterion for the vanishing of $\Hom_{\K{B}} (\mathfrak{M}\, \widehat{\otimes}_A
B, \mathfrak{N}\, \widehat{\otimes}_A B)$ for any $A$-algebra $B$, is a first example of
a result in this direction. Lemma~\ref{lem: flat base change for
Homs} deals with flat base change, and Lemma~\ref{lem: vanishing of Kisin module homs implies vanishing on dense open}, which will
be important in Section~\ref{subsec:universal
families}, proves that formation of
homomorphisms is compatible with base-change over a dense open
subscheme of $\Spec A$.
\begin{prop}
\label{prop:vanishing of homs}
Suppose that $A$ is a Noetherian ${\mathcal O}/\varpi^a$-algebra,
and that $\mathfrak{M}$ and $\mathfrak{N}$ are objects of $\K{A}$ that are
finitely generated over $\mathfrak{S}_A$ {\em (}or, equivalently,
over $A[[u]]${\em )}. Suppose also that~$\mathfrak{N}$ is a flat $\mathfrak{S}_A$-module.
Consider the following conditions:
\begin{enumerate}
\item
$\Hom_{\K{B}} (\mathfrak{M}\, \widehat{\otimes}_A B, \mathfrak{N}\, \widehat{\otimes}_A B) = 0$
for any finite type $A$-algebra $B$.
\item
$\Hom_{\K{\kappa(\mathfrak{m})}}\bigl(\mathfrak{M}\otimes_A \kappa(\mathfrak m),
\mathfrak{N}\otimes_A \kappa(\mathfrak m) \bigr) = 0$
for each maximal ideal $\mathfrak m$ of $A$.
\item
$\Hom_{\K{A}}(\mathfrak{M}, \mathfrak{N}\otimes_A Q) = 0$
for any
finitely generated $A$-module $Q$.
\end{enumerate}
Then we have (1)$\implies$(2)$\iff$(3). If $A$ is furthermore
Jacobson, then all three conditions are equivalent.
\end{prop}
\begin{proof}
If $\mathfrak m$ is a maximal ideal of $A$, then $\kappa(\mathfrak m)$
is certainly a finite type $A$-algebra, and so evidently~(1) implies~(2).
It is even a finitely generated $A$-module, and so also~(2) follows
from~(3).
We next
prove that~(2) implies~(3).
To this end, recall that if $A$ is any ring, and $M$ is any $A$-module,
then $M$ injects into the product of its localizations at all maximal ideals.
If $A$ is Noetherian, and $M$ is finitely generated, then, by combining
this fact with the Artin--Rees
Lemma, we see that $M$ embeds into the product of its completions at all
maximal ideals. Another way to express this is that, if $I$ runs
over all cofinite length ideals in $A$ (i.e.\ all ideals for which $A/I$
has finite length), then $M$ embeds into the projective limit
of the quotients $M/IM$ (the point being that
this projective limit is the same as the product
over all $\mathfrak m$-adic completions).
We are going to apply this observation with $A$ replaced by $\mathfrak{S}_A$,
and with $M$ taken to be $\mathfrak{N}\otimes_A Q$ for some finitely generated
$A$-module $Q$.
In $A[[u]]$, one sees that $u$ lies in the Jacobson radical (because
$1 + fu$ is invertible in $A[[u]]$ for every $f \in A[[u]]$), and thus
in every maximal ideal, and so the maximal ideals of $A[[u]]$ are of
the form $(\mathfrak m, u)$, where $\mathfrak m$ runs over the maximal
ideals of~$A$.
Thus the ideals of the form $(I,u^n)$, where $I$ is a cofinite length
ideal in $A$, are
cofinal in all cofinite length ideals in $A[[u]]$.
Since $\mathfrak{S}_A$ is finite over $A[[u]]$, we see that the ideals
$(I,u^n)$ in $\mathfrak{S}_A$ are likewise
cofinal in all cofinite length ideals in $\mathfrak{S}_A$.
Since $A[[u]]$, and hence $\mathfrak{S}_A$, is furthermore Noetherian when $A$ is,
we see that if $Q$ is a
finitely generated $A$-module, and $\mathfrak{N}$ is a finitely generated
$\mathfrak{S}_A$-module,
then $\mathfrak{N}\otimes_A (Q/IQ)$ is $u$-adically complete,
for any cofinite length ideal $I$ in $A$, and
hence equal to the limit over $n$ of $\mathfrak{N} \otimes_A Q/(I,u^n)$.
Putting this together with the observation of the preceding paragraph,
we see that the natural morphism
$$\mathfrak{N}\otimes_A Q \to \varprojlim_I \mathfrak{N}\otimes_A (Q/IQ)$$
(where $I$ runs over all cofinite length ideals of $A$)
is an embedding.
The induced morphism
$$ \Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}\otimes_A Q)
\to
\varprojlim_I \Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}\otimes_A (Q/IQ))$$
is then evidently also an embedding.
Thus, to conclude that
$ \Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}\otimes_A Q)$
vanishes,
it suffices to show that
$\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}\otimes_A (Q/IQ))$ vanishes for each
cofinite length ideal $I$ in $A$. An easy induction (using the
flatness of~$\mathfrak{N}$) on the
length of $A/I$ reduces this to showing that
$\Hom_{\K{A}}\bigl(\mathfrak{M},\mathfrak{N}\otimes_A \kappa(\mathfrak m)\bigr),$
or, equivalently, $\Hom_{\K{\kappa(\mathfrak{m})}}\bigl(\mathfrak{M}\otimes_A \kappa(\mathfrak m),
\mathfrak{N}\otimes_A \kappa(\mathfrak m)\bigr),$
vanishes for each maximal ideal~$\mathfrak m$.
Since this is
the hypothesis of~(2), we see that indeed~(2) implies~(3).
It remains to show that~(3) implies~(1) when $A$ is Jacobson.
Applying the result
``(2) implies~(3)'' (with $A$ replaced by~$B$, and taking $Q$ in~(3) to be $B$ itself as a $B$-module) to $\mathfrak{M}\, \widehat{\otimes}_A B$ and $\mathfrak{N}\, \widehat{\otimes}_A B$,
we see that it suffices to prove the vanishing of
$$\Hom_{\K{B}}\bigl( (\mathfrak{M}\, \widehat{\otimes}_A B)\otimes_B \kappa(\mathfrak n),
(\mathfrak{N}\, \widehat{\otimes}_A B)\otimes_B \kappa(\mathfrak n) \bigr)
= \Hom_{\K{A}}\bigl( \mathfrak{M}, \mathfrak{N}\, \widehat{\otimes}_A \kappa(\mathfrak n) \bigr)
$$
for each maximal ideal $\mathfrak n$ of $B$.
Since $A$ is Jacobson, the field $\kappa(\mathfrak n)$ is in fact a
finitely generated
$A$-module, hence $\mathfrak{N}\, \widehat{\otimes}_A\kappa(\mathfrak n) = \mathfrak{N}\otimes_A
\kappa(\mathfrak n)$, and so the desired vanishing is a special case of~(3).
\end{proof}
\begin{cor}
\label{cor:vanishing of homs non Noetherian}
If $A$ is a Noetherian and Jacobson ${\mathcal O}/\varpi^a$-algebra,
and if $\mathfrak{M}$ and $\mathfrak{N}$ are Breuil--Kisin modules with descent
data and $A$-coefficients,
then the
following three conditions are equivalent:
\begin{enumerate}
\item
$\Hom_{\K{B}} (\mathfrak{M}\, \widehat{\otimes}_A B, \mathfrak{N}\, \widehat{\otimes}_A B) = 0$
for any $A$-algebra $B$.
\item
$\Hom_{\K{\kappa(\mathfrak{m})}}\bigl(\mathfrak{M}\otimes_A \kappa(\mathfrak m),
\mathfrak{N}\otimes_A \kappa(\mathfrak m) \bigr) = 0$
for each maximal ideal $\mathfrak m$ of $A$.
\item
$\Hom_{\K{A}}(\mathfrak{M}, \mathfrak{N}\otimes_A Q) = 0$
for any
finitely generated $A$-module $Q$.
\end{enumerate}
\end{cor}
\begin{proof}By Proposition~\ref{prop:vanishing of homs}, we need only
prove that if $\Hom_{\K{B}} (\mathfrak{M}\, \widehat{\otimes}_A B, \mathfrak{N}\, \widehat{\otimes}_A B)$
vanishes
for all finitely generated $A$-algebras~$B$, then it vanishes for all
$A$-algebras~$B$. This is immediate from Proposition~\ref{prop:descent for Homs of free Kisin modules}.
\end{proof}
\begin{cor}
\label{cor:freeness for exts}
Suppose that $\mathfrak{M}$ and $\mathfrak{N}$ are Breuil--Kisin modules with
descent data and coefficients in a Noetherian ${\mathcal O}/\varpi^a$-algebra $A$,
and that furthermore
$\Hom_{\K{A}}\bigl(\mathfrak{M}\otimes_A \kappa(\mathfrak m),
\mathfrak{N}\otimes_A \kappa(\mathfrak m) \bigr)$ vanishes
for each maximal ideal $\mathfrak m$ of $A$.
Then the $A$-module
$\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})$
is projective
of finite rank.
\end{cor}
\begin{proof}
By Proposition~\ref{prop:exts are f.g. over A},
in order to prove that $\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})$
is projective of finite rank over $A$,
it suffices to prove that it is flat over $A$.
For this, it suffices to show that
$Q \mapsto \Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})\otimes_A Q$
is exact when applied to finitely generated $A$-modules $Q$.
Proposition~\ref{prop:base-change for exts} (together with
Remark~\ref{rem:completed tensor}~(1)) allows us to identify
this functor with the functor
$Q \mapsto \Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N}\otimes_A Q).$
Note that the functor $Q\mapsto\mathfrak{N}\otimes_A Q$ is an exact functor of $Q$,
since $\mathfrak{S}_A$ is a flat $A$-module (as $A$ is Noetherian; see Remark~\ref{rem:projectivity for Kisin modules}(3)).
Thus, taking into account
Corollary~\ref{cor:ext2 vanishes},
we see that it suffices to show that
$\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}\otimes_A Q) = 0$
for each finitely generated $A$-module~$Q$,
under the hypothesis that
$\Hom_{\K{A}}\bigl(\mathfrak{M}\otimes_A \kappa(\mathfrak m),
\mathfrak{N}\otimes_A \kappa(\mathfrak m) \bigr) = 0$
for each maximal ideal $\mathfrak m$ of~$A$.
This is the implication (2) $\implies$ (3) of Proposition~\ref{prop:vanishing of homs}.
\end{proof}
\begin{lemma}\label{lem: flat base change for Homs}
Suppose that $\mathfrak{M}$ is a Breuil--Kisin module with
descent data and coefficients in a Noetherian ${\mathcal O}/\varpi^a$-algebra
$A$. Suppose that $\mathfrak{N}$ is either a Breuil--Kisin module with
$A$-coefficients, or that $\mathfrak{N}=\mathfrak{N}'/u^N\mathfrak{N}'$, where $\mathfrak{N}'$ is a Breuil--Kisin module with
$A$-coefficients and $N\ge 1$.
Then,
if $B$ is a finitely generated flat
$A$-algebra, we have a natural isomorphism
\[\Hom_{\K{B}}(\mathfrak{M}\, \widehat{\otimes}_{A} B,
\mathfrak{N}\, \widehat{\otimes}_{A} B) \buildrel \sim \over \longrightarrow \Hom_{\K{A}}(\mathfrak{M},\mathfrak{N})\otimes_{A}B. \]
\end{lemma}
\begin{proof}
By Corollary~\ref{cor:complex computes Hom and Ext} and the
flatness of~$B$,
we have a left exact sequence
\[0\to \Hom_{\K{A}}(\mathfrak{M},\mathfrak{N})\otimes_AB\to C^0(\mathfrak{N})\otimes_AB\to
C^1(\mathfrak{N})\otimes_AB\] and therefore (applying
Corollary~\ref{cor: base change completion for complex in free case}
to treat the case that $\mathfrak{N}$ is projective) a left exact sequence
\[0\to \Hom_{\K{A}}(\mathfrak{M},\mathfrak{N})\otimes_AB\to C^0(\mathfrak{N}\, \widehat{\otimes}_AB)\to
C^1(\mathfrak{N}\, \widehat{\otimes}_AB).\]
The result follows from Corollary~\ref{cor:complex computes Hom and
Ext} and Lemma~\ref{lem:base-change-complexes}.
\end{proof}
\begin{lemma}\label{lem: vanishing of Kisin module homs implies vanishing on dense open}
Suppose that $\mathfrak{M}$ is a Breuil--Kisin module with
descent data and coefficients in a Noetherian ${\mathcal O}/\varpi^a$-algebra
$A$ which is furthermore a domain.
Suppose also that $\mathfrak{N}$ is either a Breuil--Kisin module with
$A$-coefficients, or that $\mathfrak{N}=\mathfrak{N}'/u^N\mathfrak{N}'$, where $\mathfrak{N}'$ is a Breuil--Kisin module with
$A$-coefficients and $N\ge 1$.
Then there is some nonzero $f\in
A$ with the following property:
writing
$\mathfrak{M}_{A_f}=\mathfrak{M}\, \widehat{\otimes}_A A_f$ and $\mathfrak{N}_{A_f}=\mathfrak{N}\, \widehat{\otimes}_A A_f$, then for any
finitely generated $A_f$-algebra $B$, and any finitely
generated $B$-module $Q$, there are natural isomorphisms
\begin{multline*}
\Hom_{\K{A_f}}(\mathfrak{M}_{A_f},\mathfrak{N}_{A_f})\otimes_{A_f}Q \buildrel \sim \over \longrightarrow
\Hom_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f} B, \mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f} B)\otimes_B Q
\\
\buildrel \sim \over \longrightarrow
\Hom_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f} B, \mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f} Q).
\end{multline*}
\end{lemma}
\begin{proof}[Proof of Lemma~{\ref{lem: vanishing of Kisin module homs implies vanishing on dense open}}.]
Note that since $A$ is Noetherian, by Remark~\ref{rem:projectivity for
Kisin modules}(3) we see that~$\mathfrak{N}$ is $A$-flat.
By Corollary~\ref{cor:complex computes Hom and Ext}
we have an exact sequence
\[0\to \Hom_{\K{A}}(\mathfrak{M},\mathfrak{N})\to C^0(\mathfrak{N})\to C^1(\mathfrak{N}) \to
\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})\to 0.\]
Since by assumption $\mathfrak{M}$ is a projective $\mathfrak{S}_A$-module, and
$\mathfrak{N}$ is a flat
$A$-module, the $C^i(\mathfrak{N})$ are also flat $A$-modules.
By Proposition~\ref{prop:exts are f.g. over A},
$\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})$ is a finitely generated $A$-module, so
by the generic freeness
theorem~\cite[\href{http://stacks.math.columbia.edu/tag/051R}{Tag
051R}]{stacks-project} there is some nonzero $f\in A$ such that $\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})_f$ is
free over~$A_f$.
Since localisation is exact, we obtain an exact
sequence
\[0\to \Hom_{\K{A}}(\mathfrak{M},\mathfrak{N})_f \to C^0(\mathfrak{N})_f \to C^1(\mathfrak{N})_f \to
\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})_f\to 0\]and therefore (applying
Corollary~\ref{cor: base change completion for complex in free case}
to treat the case that $\mathfrak{N}$ is a Breuil--Kisin module) an exact sequence
\[0\to \Hom_{\K{A_f}}(\mathfrak{M}_{A_f},\mathfrak{N}_{A_f})\to C^0(\mathfrak{N}_{A_f})\to C^1(\mathfrak{N}_{A_f}) \to
\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})_f\to 0.\]
Since the last three terms are flat over~$A_f$, this sequence remains
exact upon tensoring over $A_f$
with $Q$.
Applying Corollary~\ref{cor: base change completion for complex in free case}
again to treat the case that $\mathfrak{N}$ is a Breuil--Kisin module, we see that in particular we
have a left exact sequence
\[0\to
\Hom_{\K{A_f}}(\mathfrak{M}_{A_f},\mathfrak{N}_{A_f})\otimes_{A_f}Q
\to
C^0(\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f}Q)\to C^1(\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f}Q),\]
and Corollary~\ref{cor:complex computes Hom and Ext} together with Lemma~\ref{lem:base-change-complexes}
yield one of the desired isomorphisms, namely
$$\Hom_{\K{A_f}}(\mathfrak{M}_{A_f},\mathfrak{N}_{A_f})\otimes_{A_f}Q \buildrel \sim \over \longrightarrow
\Hom_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f}B ,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f} Q).$$
If we consider the case when $Q = B$, we obtain an isomorphism
$$\Hom_{\K{A_f}}(\mathfrak{M}_{A_f},\mathfrak{N}_{A_f})\otimes_{A_f}B \buildrel \sim \over \longrightarrow
\Hom_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f}B ,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f} B).$$
Rewriting the tensor product $\text{--}\otimes_{A_f} Q $ as
$\text{--}\otimes_{A_f} B \otimes_B Q,$
we then find that
$$
\Hom_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f}B ,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f} B)\otimes_B Q
\buildrel \sim \over \longrightarrow
\Hom_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f}B ,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f} Q),$$
which gives the second desired isomorphism.
\end{proof}
Variants on the preceding result may be proved using other
versions of the generic freeness theorem.
\begin{example}\label{example:rank one unramified redux} Returning to
the setting of
Example~\ref{example:rank one unramified},
one can check using Corollary~\ref{cor:vanishing of homs non
Noetherian} that the conclusion of Lemma~\ref{lem: vanishing of
Kisin module homs implies vanishing on dense open} (for $\mathfrak{M} =
\mathfrak{M}_x$ and $\mathfrak{N} = \mathfrak{M}_y$) holds with $f =
x-y$. In this case all of the resulting $\Hom$ groups
vanish (\emph{cf}.\ also the proof of Lemma~\ref{lem: generically no Homs}).
It then follows from
Corollary~\ref{cor:freeness for exts} that
$\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})_{f}$ is projective over $A_f$, so that the proof
of Lemma~\ref{lem: vanishing of Kisin module homs implies vanishing on
dense open} even goes through with this choice of $f$.
\end{example}
As well as considering homomorphisms and extensions of Breuil--Kisin modules, we need to
consider the homomorphisms and extensions of their associated \'etale $\varphi$-modules;
recall that the passage to associated \'etale $\varphi$-modules amounts
to inverting $u$, and so we briefly discuss this process in the general
context of the category $\K{A}$.
We let $\K{A}[1/u]$ denote the full subcategory of $\K{A}$
consisting of objects on which multiplication by $u$ is invertible.
We may equally well regard it as the category of left
$\mathfrak{S}_A[1/u][F,\Gal(K'/K)]$-modules (this notation being interpreted in
the evident manner).
There are natural isomorphisms (of bi-modules)
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:left tensor iso}
\mathfrak{S}_A[1/u]\otimes_{\mathfrak{S}_A} \mathfrak{S}_A[F,\Gal(K'/K)]
\buildrel \sim \over \longrightarrow \mathfrak{S}_A[1/u][F,\Gal(K'/K)]
\end{equation}
and
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:right tensor iso}
\mathfrak{S}_A[F,\Gal(K'/K)] \otimes_{\mathfrak{S}_A} \mathfrak{S}_A[1/u]
\buildrel \sim \over \longrightarrow \mathfrak{S}_A[1/u][F,\Gal(K'/K)].
\end{equation}
Thus (since $\mathfrak{S}_A \to \mathfrak{S}_A[1/u]$ is a flat morphism of commutative rings)
the morphism of rings $\mathfrak{S}_A[F,\Gal(K'/K)] \to \mathfrak{S}_A[1/u][F,\Gal(K'/K)]$
is both left and right flat.
If $\mathfrak{M}$ is an object of $\K{A}$, then we see from~(\ref{eqn:left tensor
iso}) that
$\mathfrak{M}[1/u] := \mathfrak{S}_A[1/u]\otimes_{\mathfrak{S}_A} \mathfrak{M} \buildrel \sim \over \longrightarrow \mathfrak{S}_A[1/u][F,\Gal(K'/K)]
\otimes_{\mathfrak{S}_A[F,\Gal(K'/K)]} \mathfrak{M}$ is naturally an object
of $\K{A}[1/u]$. Our preceding remarks about flatness show
that $\mathfrak{M} \mapsto \mathfrak{M}[1/u]$ is an exact functor $\K{A}\to \K{A}[1/u]$.
\begin{lemma}\label{lem:ext-i-invert-u}
\begin{enumerate}
\item If $M$ and $N$ are objects
of $\K{A}[1/u]$, then
there is a natural isomorphism
$$\Ext^i_{\K{A}[1/u]}(M,N) \buildrel \sim \over \longrightarrow \Ext^i_{\K{A}}(M,N).$$
\item
If $\mathfrak{M}$ is an object of $\K{A}$ and $N$ is an object of $\K{A}[1/u]$,
then there is a natural isomorphism
$$\Ext^i_{\K{A}}(\mathfrak{M},N) \buildrel \sim \over \longrightarrow \Ext^i_{\K{A}}(\mathfrak{M}[1/u],N),$$
for all $i\geq 0$.
\end{enumerate}
\end{lemma}
\begin{proof}
The morphism of~(1) can be understood in various ways; for example,
by thinking in terms of Yoneda Exts, and recalling that $\K{A}[1/u]$
is a full subcategory of $\K{A}.$ If instead we think in terms
of projective resolutions, we can begin with a projective resolution
$\mathfrak{P}^{\bullet} \to M$ in $\K{A}$, and then consider the induced
projective resolution $\mathfrak{P}^{\bullet}[1/u]$ of $M[1/u]$. Noting
that $M[1/u] \buildrel \sim \over \longrightarrow M$ for any object $M$ of $\K{A}[1/u]$,
we then find (via tensor adjunction) that $\Hom_{\K{A}}(\mathfrak{P}^{\bullet},
N) \buildrel \sim \over \longrightarrow \Hom_{\K{A}[1/u]}(\mathfrak{P}^{\bullet}[1/u], N)$,
which induces the desired isomorphism of $\Ext$'s by passing to
cohomology.
Taking into account the isomorphism of~(1), the claim of~(2) is a general
fact about tensoring over a flat ring map (as can again be seen by
considering projective resolutions).
\end{proof}
\begin{remark}
The preceding lemma is in fact an automatic consequence of the abstract
categorical properties of our situation:\ the functor $\mathfrak{M} \mapsto \mathfrak{M}[1/u]$
is left adjoint to the inclusion $\K{A}[1/u] \subset\K{A},$
and restricts to (a functor naturally equivalent to) the identity functor
on $\K{A}[1/u]$.
\end{remark}
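In symbols: for $\mathfrak{M}$ an object of $\K{A}$ and $N$ an object of
$\K{A}[1/u]$, the adjunction isomorphism reads
\[\Hom_{\K{A}[1/u]}(\mathfrak{M}[1/u],N)\buildrel \sim \over \longrightarrow \Hom_{\K{A}}(\mathfrak{M},N),\]
the map being given by restriction along the unit $\mathfrak{M}\to\mathfrak{M}[1/u]$;
this recovers the case $i=0$ of Lemma~\ref{lem:ext-i-invert-u}.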
The following lemma expresses the Hom between \'etale $\varphi$-modules
arising from Breuil--Kisin modules in terms
of a certain direct limit.
\begin{lem}
\label{lem:computing Hom as direct limit}Suppose that $\mathfrak{M}$ is a
Breuil--Kisin module with descent data and coefficients in a Noetherian ${\mathcal O}/\varpi^a$-algebra~$A$, and that~$\mathfrak{N}$ is an object of $\K{A}$ which is
finitely generated and $u$-torsion free as an
$\mathfrak{S}_A$-module.
Then there is a natural isomorphism
\[ \varinjlim_i\Hom_{\K{A}}(u^i\mathfrak{M},\mathfrak{N}) \buildrel \sim \over \longrightarrow
\Hom_{\K{A}[1/u]}(\mathfrak{M}[1/u],\mathfrak{N}[1/u]),\]
where the transition maps are induced by the inclusions $u^{i+1} \mathfrak{M}
\subset u^i \mathfrak{M}$.
\end{lem}
\begin{rem}
\label{rem: maps in direct limit are injections}Note that since
$\mathfrak{N}$ is $u$-torsion free, the transition maps in the colimit are
injections, so the colimit is just an increasing union.
\end{rem}
\begin{proof}There are compatible injections $\Hom_{\K{A}}(u^i\mathfrak{M},\mathfrak{N}) \to
\Hom_{\K{A}[1/u]}(\mathfrak{M}[1/u],\mathfrak{N}[1/u])$, taking $f'\in
\Hom_{\K{A}}(u^i\mathfrak{M},\mathfrak{N})$ to $f\in\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}[1/u])$
where $f(m)=u^{-i}f'(u^im)$; here we have identified
$\Hom_{\K{A}[1/u]}(\mathfrak{M}[1/u],\mathfrak{N}[1/u])$ with
$\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}[1/u])$ via Lemma~\ref{lem:ext-i-invert-u}.
Conversely, given
$f\in\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}[1/u])$, there is some~$i$ such that
$f(\mathfrak{M})\subset u^{-i}\mathfrak{N}$ (because $\mathfrak{M}$ is finitely generated
over~$\mathfrak{S}_A$), so that $f$ arises from the element
$f'\in\Hom_{\K{A}}(u^i\mathfrak{M},\mathfrak{N})$ defined by $f'(u^im)=u^if(m)$, as required.
\end{proof}
We have the following analogue of Proposition~\ref{prop:vanishing of homs}.
\begin{cor}
\label{cor:vanishing homs with u inverted}Suppose that $\mathfrak{M}$ and
$\mathfrak{N}$ are Breuil--Kisin modules with descent data and coefficients in a Noetherian ${\mathcal O}/\varpi^a$-algebra~$A$.
Consider the following conditions:
\begin{enumerate}
\item
$\Hom_{\K{B}[1/u]} \bigl((\mathfrak{M}\, \widehat{\otimes}_A B)[1/u], (\mathfrak{N}\, \widehat{\otimes}_A B)[1/u]\bigr) = 0$
for any finite type $A$-algebra $B$.
\item
$\Hom_{\K{\kappa(\mathfrak{m})}[1/u]}\bigl((\mathfrak{M}\otimes_A \kappa(\mathfrak m))[1/u],
(\mathfrak{N}\otimes_A \kappa(\mathfrak m))[1/u] \bigr) = 0$
for each maximal ideal $\mathfrak m$ of $A$.
\item
$\Hom_{\K{A}[1/u]}\bigl(\mathfrak{M}[1/u], (\mathfrak{N}\otimes_A Q)[1/u]\bigr) = 0$
for any
finitely generated $A$-module $Q$.
\end{enumerate}
Then we have (1)$\implies$(2)$\iff$(3). If $A$ is furthermore
Jacobson, then all three conditions are equivalent.
\end{cor}
\begin{proof}
By Lemma~\ref{lem:computing Hom as direct limit}, the three conditions
are respectively equivalent to the following conditions.
\begin{enumerate}[label=(\arabic*$'$)]
\item
$\Hom_{\K{B}} \bigl(u^i(\mathfrak{M}\, \widehat{\otimes}_A B), \mathfrak{N}\, \widehat{\otimes}_A B\bigr) = 0$
for any finite type $A$-algebra $B$ and all $i\ge 0$.
\item
$\Hom_{\K{\kappa(\mathfrak{m})}}\bigl(u^i(\mathfrak{M}\otimes_A \kappa(\mathfrak m)),
\mathfrak{N}\otimes_A \kappa(\mathfrak m) \bigr) = 0$
for each maximal ideal $\mathfrak m$ of $A$ and all $i\ge 0$.
\item
$\Hom_{\K{A}}\bigl(u^i\mathfrak{M}, \mathfrak{N}\otimes_A Q\bigr) = 0$
for any
finitely generated $A$-module $Q$ and all $i\ge 0$.
\end{enumerate}
Since $\mathfrak{M}$ is projective, the first two conditions are in turn
equivalent to
\begin{enumerate}[label=(\arabic*$''$)]
\item
$\Hom_{\K{B}} \bigl((u^i\mathfrak{M})\, \widehat{\otimes}_A B, \mathfrak{N}\, \widehat{\otimes}_A B\bigr) = 0$
for any finite type $A$-algebra $B$ and all $i\ge 0$.
\item
$\Hom_{\K{\kappa(\mathfrak{m})}}\bigl((u^i\mathfrak{M})\otimes_A \kappa(\mathfrak m),
\mathfrak{N}\otimes_A \kappa(\mathfrak m) \bigr) = 0$
for each maximal ideal $\mathfrak m$ of $A$ and all $i\ge 0$.
\end{enumerate}
The result then follows from Proposition~\ref{prop:vanishing of homs}.
\end{proof}
\begin{df}\label{def:kext}
If $\mathfrak{M}$ and $\mathfrak{N}$ are objects of $\K{A}$, then we define
$$\kExt^1_{\K{A}}(\mathfrak{M},\mathfrak{N})
:=
\ker\bigl(\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})\to\Ext^1_{\K{A}}(\mathfrak{M}[1/u],\mathfrak{N}[1/u])\bigr).$$
The point of this definition is to capture, in the setting of
Lemma~\ref{lem: Galois rep is a functor if A is actually finite local}, the non-split extensions
of Breuil--Kisin modules whose underlying extension of Galois
representations is split.
\end{df}
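Concretely, the map appearing in this definition (induced by the exact
functor $\mathfrak{M}\mapsto\mathfrak{M}[1/u]$) sends the class of an extension
$0\to\mathfrak{N}\to\mathfrak{E}\to\mathfrak{M}\to 0$ to the class of
\[0\to\mathfrak{N}[1/u]\to\mathfrak{E}[1/u]\to\mathfrak{M}[1/u]\to 0,\]
so that a class lies in $\kExt^1_{\K{A}}(\mathfrak{M},\mathfrak{N})$ precisely when this
localised sequence splits.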
Suppose now that $\mathfrak{M}$ is a Breuil--Kisin module, and that $\mathfrak{N}$ is
a $u$-torsion free object of~$\K{A}$.
The exact sequence in~$\K{A}$ \[0\to\mathfrak{N}\to \mathfrak{N}[1/u]\to\mathfrak{N}[1/u]/\mathfrak{N}\to 0\]
gives an exact sequence of complexes \[\xymatrix{0\ar[r]&
C^0(\mathfrak{N})\ar[d]\ar[r]&
C^0(\mathfrak{N}[1/u])\ar[d]\ar[r]&C^0(\mathfrak{N}[1/u]/\mathfrak{N})\ar[d]\ar[r]&0\\ 0\ar[r]&
C^1(\mathfrak{N})\ar[r]&
C^1(\mathfrak{N}[1/u])\ar[r]&C^1(\mathfrak{N}[1/u]/\mathfrak{N})\ar[r]&0. } \] It follows from
Corollary~\ref{cor:complex computes Hom and Ext},
Lemma~\ref{lem:ext-i-invert-u}(2), and
the snake lemma that we have an exact
sequence \addtocounter{subsubsection}{1}\begin{equation}\label{eqn: computing kernel of Ext groups}\begin{split}0\to\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N})\to\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}[1/u])
\qquad \qquad \\
\to\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}[1/u]/\mathfrak{N}) \to \kExt^1_{\K{A}}(\mathfrak{M},\mathfrak{N})\to
0.\end{split}\end{equation}
\begin{lem}\label{lem: bound on torsion in kernel of Exts}If $\mathfrak{M}$, $\mathfrak{N}$ are Breuil--Kisin modules with descent data and
coefficients in a Noetherian ${\mathcal O}/\varpi^a$-algebra~$A$, and $\mathfrak{N}$ has
height at most~$h$, then $f(\mathfrak{M})$
is killed by $u^i$ for any $f \in \Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}[1/u]/\mathfrak{N})$ and
any $i \ge \lfloor e'ah/(p-1) \rfloor$.
\end{lem}
\begin{proof}
Suppose that $f$ is an element of
$\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}[1/u]/\mathfrak{N})$. Then $f(\mathfrak{M})$ is a finitely
generated submodule of $\mathfrak{N}[1/u]/\mathfrak{N}$, and it is therefore killed
by~$u^i$ for some $i\ge 0$. Choosing~$i$ to be the exponent of
$f(\mathfrak{M})$ (that is, choosing $i$ to be minimal), it follows
that
$(\varphi^*f)(\varphi^*\mathfrak{M})$ has exponent
precisely~$ip$. (From the choice of $i$, we see that $u^{i-1}
f(\mathfrak{M})$ is nonzero but killed by $u$, i.e., it is just a $W(k')
\otimes A$-module, and so its pullback by~$\varphi:\mathfrak{S}_A\to\mathfrak{S}_A$
has exponent precisely $p$. Then by the flatness
of~$\varphi:\mathfrak{S}_A\to\mathfrak{S}_A$ we have
$u^{ip-1}(\varphi^*f)(\varphi^*\mathfrak{M})=u^{p-1}\varphi^*(u^{i-1}
f(\mathfrak{M})) \neq 0$.)
We claim that $u^{i+e'ah}(\varphi^*f)(\varphi^*\mathfrak{M})=0$; admitting this, we
deduce that $i+e'ah\ge ip$, as required. To see the claim, take
$x\in\varphi^*\mathfrak{M}$, so that $\Phi_{\mathfrak{N}}((u^i\varphi^*f)(x))=
u^if(\Phi_\mathfrak{M}(x))=0$. It is therefore enough to show that the kernel
of \[\Phi_{\mathfrak{N}}:\varphi^*\mathfrak{N}[1/u]/\varphi^*\mathfrak{N}\to \mathfrak{N}[1/u]/\mathfrak{N}\] is killed
by $u^{e'ah}$; but this follows immediately from an application of the
snake lemma to the commutative diagram \[\xymatrix{0\ar[r]&
\varphi^*\mathfrak{N}\ar[r]\ar[d]_{\Phi_\mathfrak{N}}&\varphi^*\mathfrak{N}[1/u]\ar[r]\ar[d]_{\Phi_\mathfrak{N}}
&\varphi^*\mathfrak{N}[1/u]/\varphi^*\mathfrak{N}\ar[r]\ar[d]_{\Phi_\mathfrak{N}}&0
\\ 0\ar[r]&
\mathfrak{N}\ar[r]&\mathfrak{N}[1/u]\ar[r]
&\mathfrak{N}[1/u]/\mathfrak{N}\ar[r]&0}
\]together with the assumption that $\mathfrak{N}$ has height at most~$h$ and
an argument as in the first line of the proof of
Lemma~\ref{lem:truncation argument used to prove f.g. of Ext Q version}.
\end{proof}
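As a sanity check on the numerology: for $\mathfrak{M}$, $\mathfrak{N}$ as in the lemma,
if $e'ah<p-1$ then the lemma applies with $i=0$, so that
$\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}[1/u]/\mathfrak{N})=0$, and the exact
sequence~(\ref{eqn: computing kernel of Ext groups}) gives
\[\kExt^1_{\K{A}}(\mathfrak{M},\mathfrak{N})=0;\]
that is, in this range an extension of $\mathfrak{M}$ by $\mathfrak{N}$ which splits after
inverting $u$ is already split.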
\begin{lem}\label{lem:computing kernel of Ext groups finite level}If $\mathfrak{M}$, $\mathfrak{N}$ are Breuil--Kisin modules with descent data and
coefficients in a Noetherian ${\mathcal O}/\varpi^a$-algebra~$A$, and $\mathfrak{N}$ has
height at most~$h$, then for any $i \ge \lfloor e'ah/(p-1) \rfloor$ we have an exact
sequence
\[\begin{split}0\to\Hom_{\K{A}}(u^i\mathfrak{M},u^i\mathfrak{N})\to\Hom_{\K{A}}(u^i\mathfrak{M},\mathfrak{N}) \qquad \qquad \\
\to\Hom_{\K{A}}(u^i\mathfrak{M},\mathfrak{N}/u^i\mathfrak{N}) \to \kExt^1_{\K{A}}(\mathfrak{M},\mathfrak{N})\to
0.\end{split}\]
\end{lem}
\begin{proof} Comparing Lemma~\ref{lem: bound on torsion in kernel of
Exts} with the proof of Lemma~\ref{lem:computing Hom as direct
limit}, we see that the direct limit in that proof has stabilised
at $i$, and we obtain an isomorphism $\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}[1/u])
\buildrel\sim\over\to \Hom_{\K{A}}(u^i \mathfrak{M},\mathfrak{N})$ sending a map $f$ to $f' : u^i m
\mapsto u^i f(m)$. The same formula evidently identifies
$\Hom_{\K{A}}( \mathfrak{M},\mathfrak{N})$ with $\Hom_{\K{A}}(u^i\mathfrak{M},u^i\mathfrak{N})$ and
$\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}[1/u]/\mathfrak{N})$ with $\Hom_{\K{A}}(u^i
\mathfrak{M},\mathfrak{N}[1/u]/u^i \mathfrak{N})$. But any map in the latter group has image
contained in $\mathfrak{N}/u^i \mathfrak{N}$ (by Lemma~\ref{lem: bound on torsion in kernel of
Exts} applied to $\Hom_{\K{A}}(\mathfrak{M},\mathfrak{N}[1/u]/\mathfrak{N})$, together with
the identification in the previous sentence), so that $\Hom_{\K{A}}(u^i
\mathfrak{M},\mathfrak{N}[1/u]/u^i \mathfrak{N}) = \Hom_{\K{A}}(u^i
\mathfrak{M},\mathfrak{N}/u^i \mathfrak{N})$.
\end{proof}
\begin{prop}
\label{prop: base change for kernel of map to etale Ext}
Let $\mathfrak{M}$ and $\mathfrak{N}$ be Breuil--Kisin modules with descent data and
coefficients in a Noetherian ${\mathcal O}/\varpi^a$-domain~$A$.
Then there is some nonzero $f\in
A$ with the following property: if we write
$\mathfrak{M}_{A_f}=\mathfrak{M}\, \widehat{\otimes}_A A_f$ and $\mathfrak{N}_{A_f}=\mathfrak{N}\, \widehat{\otimes}_A A_f$, then if~$B$
is any finitely generated $A_f$-algebra, and if $Q$ is any finitely
generated $B$-module, we have natural isomorphisms
\begin{multline*}
\kExt^1_{\K{A_f}}(\mathfrak{M}_{A_f},\mathfrak{N}_{A_f})\otimes_{A_f}Q \buildrel \sim \over \longrightarrow
\kExt^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f} B,
\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f} B)\otimes_B Q
\\
\buildrel \sim \over \longrightarrow
\kExt^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f} B,
\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f} Q).
\end{multline*}
\end{prop}
\begin{proof}In view of Lemma~\ref{lem:computing kernel of Ext groups
finite level}, this follows from Lemma~\ref{lem: vanishing of
Kisin module homs implies vanishing on dense open},
with $\mathfrak{M}$ there being our $u^i\mathfrak{M}$, and $\mathfrak{N}$ being
each of $\mathfrak{N}$, $\mathfrak{N}/u^i\mathfrak{N}$ in turn.
\end{proof}
The following result will be crucial in our investigation of the
decomposition of ${\mathcal C}^{\mathrm{dd},1}$ and ${\mathcal R}^{\mathrm{dd},1}$ into
irreducible components.
\begin{prop}
\label{prop: we have vector bundles}
Suppose that $\mathfrak{M}$ and $\mathfrak{N}$ are Breuil--Kisin modules with
descent data and coefficients in a Noetherian ${\mathcal O}/\varpi^a$-algebra $A$
which is furthermore a domain,
and suppose that
$\Hom_{\K{A}}\bigl(\mathfrak{M}\otimes_A \kappa(\mathfrak m),
\mathfrak{N}\otimes_A \kappa(\mathfrak m) \bigr)$ vanishes
for each maximal ideal $\mathfrak m$ of $A$.
Then there is some nonzero $f\in
A$ with the following property: if we write
$\mathfrak{M}_{A_f}=\mathfrak{M}\, \widehat{\otimes}_A A_f$ and $\mathfrak{N}_{A_f}=\mathfrak{N}\, \widehat{\otimes}_A A_f$, then
for any finitely generated $A_f$-algebra $B$,
each of $\kExt^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f} B,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f}B)$,
$\Ext^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f} B,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f}B)$,
and
$$\Ext^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f}B ,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f}B)/
\kExt^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f} B,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f} B)$$
is a finitely generated projective $B$-module.
\end{prop}
\begin{proof}
Choose $f$ as in
Proposition~\ref{prop: base change for kernel of map to etale Ext},
let $B$ be a finitely generated $A_f$-algebra,
and let $Q$ be a finitely generated $B$-module.
By Propositions~\ref{prop:base-change for exts} and~\ref{prop: base
change for kernel of map to etale Ext}, the morphism
\[\kExt^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f} B,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f} B)\otimes_B Q\to
\Ext^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f} B,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f}B)\otimes_B Q\]
is naturally identified with the morphism
\[\kExt^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f} B,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f} Q)\to
\Ext^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f} B,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f} Q);\]
in particular, it is injective.
By Proposition~\ref{prop:base-change for exts} and
Corollary~\ref{cor:freeness for exts} we see that
$\Ext^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f} B,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f} B)$ is a
finitely generated projective $B$-module; hence it is also flat.
Combining this with the injectivity just proved, we find that
\[\Tor^1_B\bigl(Q, \Ext^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f}B,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f} B)/
\kExt^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f} B,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f} B)\bigr)=0\]
for every finitely generated $B$-module $Q$, and thus that
$$\Ext^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f}B,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f}B)/
\kExt^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f}B,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f}B)$$
is a finitely generated flat,
and therefore finitely generated projective, $B$-module.
Thus $\kExt^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f}B,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f}B)$
is a direct summand of
the finitely generated projective $B$-module
$\Ext^1_{\K{B}}(\mathfrak{M}_{A_f}\, \widehat{\otimes}_{A_f} B,\mathfrak{N}_{A_f}\, \widehat{\otimes}_{A_f}B)$, and so is
itself a finitely generated projective $B$-module.
\end{proof}
\subsection{Families of extensions}
\label{subsec:families of extensions}
Let $\mathfrak{M}$ and $\mathfrak{N}$ be Breuil--Kisin modules with descent data
and $A$-coefficients, so that
$\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})$ is an $A$-module.
Suppose that $\psi: V \to \Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})$ is
a homomorphism of $A$-modules whose source is a projective $A$-module of
finite rank.
Then we may regard $\psi$ as an element of
$$\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})\otimes_A V^{\vee} =
\Ext^1_{\K{A}}(\mathfrak{M}, \mathfrak{N}\otimes_A V^{\vee} ),$$
and in this way $\psi$
corresponds to an extension
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:universal extension}
0 \to \mathfrak{N}\otimes_A V^{\vee} \to \mathfrak{E} \to \mathfrak{M} \to 0,
\end{equation}
which we refer to as the {\em family of extensions} of $\mathfrak{M}$ by $\mathfrak{N}$
parametrised by $V$ (or by $\psi$, if we want to emphasise our
choice of homomorphism). We let $\mathfrak{E}_v$ denote the pushforward of $\mathfrak{E}$ under
the morphism $\mathfrak{N}\otimes_A V^{\vee} \to \mathfrak{N}$
given by evaluation on $v\in V$.
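Explicitly, $\mathfrak{E}_v$ is given by the usual pushout construction of
homological algebra: writing $\operatorname{ev}_v:\mathfrak{N}\otimes_A V^{\vee}\to\mathfrak{N}$
for the evaluation map and $\iota:\mathfrak{N}\otimes_A V^{\vee}\to\mathfrak{E}$ for the
first map in~(\ref{eqn:universal extension}), we have
\[\mathfrak{E}_v = \bigl(\mathfrak{N}\oplus\mathfrak{E}\bigr)\big/\bigl\{\bigl(\operatorname{ev}_v(n'),-\iota(n')\bigr) : n'\in\mathfrak{N}\otimes_A V^{\vee}\bigr\},\]
with the evident induced maps $\mathfrak{N}\to\mathfrak{E}_v\to\mathfrak{M}$.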
In the special case that
$\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})$ itself is a projective $A$-module of finite rank,
we can let $V$ be $\Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})$ and take $\psi$ to be the
identity map;
in this case we refer to~(\ref{eqn:universal extension}) as
the {\em universal extension} of $\mathfrak{M}$ by $\mathfrak{N}$.
The reason for this terminology is as follows:
if $v \in \Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})$,
then $\mathfrak{E}_v$ is the extension of $\mathfrak{M}$ by $\mathfrak{N}$ corresponding to the element
$v$.
Let $B := A[V^{\vee}]$ denote the symmetric algebra over $A$ generated by
$V^{\vee}$.
The short exact sequence~(\ref{eqn:universal extension}) is a
short exact sequence of Breuil--Kisin modules with descent data,
and so forming its $u$-adically completed tensor product with $B$ over $A$,
we obtain a short exact sequence
$$
0 \to \mathfrak{N}\otimes_A V^{\vee} \, \widehat{\otimes}_A B \to \mathfrak{E}\, \widehat{\otimes}_A B \to
\mathfrak{M}\, \widehat{\otimes}_A B
\to 0$$
of Breuil--Kisin modules with descent data over $B$ (see Lemma~\ref{rem: base change of locally free Kisin module is a
locally free Kisin module}).
Pushing this short exact sequence forward under the natural map
$$V^{\vee} \, \widehat{\otimes}_A B = V^{\vee} \otimes_A B \to B$$
induced by the inclusion of $V^{\vee}$ in $B$
and the multiplication map $B\otimes_A B \to B$,
we obtain a short exact sequence
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:geometric universal extension}
0 \to \mathfrak{N}\, \widehat{\otimes}_A B \to \widetilde{\mathfrak{E}} \to \mathfrak{M}\, \widehat{\otimes}_A B \to 0
\end{equation}
of Breuil--Kisin modules with descent data over $B$,
which we call the {\em family of extensions} of $\mathfrak{M}$ by $\mathfrak{N}$
parametrised by $\Spec B$ (which we note is (the total space of) the
vector bundle over $\Spec A$ corresponding to the projective
$A$-module~$V$).
If $\alpha_v: B \to A$ is the morphism induced
by the evaluation map
$V^{\vee} \to A$ given by some element $v \in V$,
then base-changing~(\ref{eqn:geometric universal extension}) by $\alpha_v$,
we recover the short exact sequence
$$0 \to \mathfrak{N} \to \mathfrak{E}_v \to \mathfrak{M} \to 0.$$
More generally, suppose that $A$ is a ${\mathcal O}/\varpi^a$-algebra for some
$a\ge 1$, and let $C$ be any $A$-algebra. Suppose that $\alpha_{\tilde{v}}:
B \to C$ is the morphism induced
by the evaluation map
$V^{\vee} \to C$ corresponding to some element $\tilde{v} \in C\otimes_A V$.
Then base-changing~(\ref{eqn:geometric universal extension}) by
$\alpha_{\tilde{v}}$
yields a short exact sequence
$$0 \to \mathfrak{N}\, \widehat{\otimes}_A C \to \widetilde{\mathfrak{E}}\, \widehat{\otimes}_B C
\to \mathfrak{M}\, \widehat{\otimes}_A C \to 0,$$
whose associated extension class corresponds
to the image of $\tilde{v}$
under the natural morphism
$C\otimes_A V \to C\otimes_A \Ext^1_{\K{A}}(\mathfrak{M},
\mathfrak{N}) \cong \Ext^1_{\K{C}}(\mathfrak{M}\, \widehat{\otimes}_A C, \mathfrak{N}\, \widehat{\otimes}_A C),$
the first arrow being induced by $\psi$
and the second arrow being the isomorphism of Proposition~\ref{prop:base-change
for exts}.
\subsubsection{The functor represented by a universal family}
We now suppose that the ring~$A$ and the Breuil--Kisin modules $\mathfrak{M}$ and $\mathfrak{N}$ have the following
properties:
\begin{assumption}
\label{assumption:vanishing}Let $A$ be a Noetherian and Jacobson
${\mathcal O}/\varpi^a$-algebra for some $a\ge 1$, and assume that for each maximal ideal $\mathfrak m$ of $A$, we have that
$$\Hom_{\K{\kappa(\mathfrak{m})}}\bigl(\mathfrak{M}\otimes_A \kappa(\mathfrak m) , \mathfrak{N}\otimes_A
\kappa(\mathfrak m)\bigr) = \Hom_{\K{\kappa(\mathfrak{m})}}\bigl(\mathfrak{N}\otimes_A \kappa(\mathfrak m) , \mathfrak{M}\otimes_A
\kappa(\mathfrak m)\bigr) = 0.$$
\end{assumption}
By Corollary~\ref{cor:freeness for exts}, this assumption implies in particular
that $V:= \Ext^1_{\K{A}}(\mathfrak{M},\mathfrak{N})$ is projective of finite rank,
and so we may form $\Spec B := \Spec A[V^{\vee}]$,
which parametrises the universal family of
extensions.
We are then able to give the following precise description
of the functor represented by $\Spec B$.
\begin{prop}\label{prop: the functor that Spec B represents}
The scheme $\Spec B$ represents the functor which,
to any ${\mathcal O}/\varpi^a$-algebra $C$, associates the set of isomorphism
classes of tuples $(\alpha, \mathfrak{E}, \iota, \pi)$, where $\alpha$ is a morphism
$\alpha: \Spec C \to \Spec A$, $\mathfrak{E}$ is a Breuil--Kisin module
with descent data and coefficients in $C$, and $\iota$ and $\pi$ are morphisms
$\alpha^* \mathfrak{N} \to \mathfrak{E}$ and $\mathfrak{E} \to \alpha^* \mathfrak{M}$ respectively,
with the property that $0 \to \alpha^*\mathfrak{N} \buildrel \iota
\over \to \mathfrak{E} \buildrel \pi \over \to \alpha^* \mathfrak{M} \to 0$
is short exact.
\end{prop}
\begin{proof}
We have already seen that giving a morphism $\Spec C \to \Spec B$
is equivalent to giving the composite morphism $\alpha:\Spec C \to \Spec B
\to \Spec A$, together with an extension class
$[\mathfrak{E}] \in \Ext^1_{\K{C}}(\alpha^*\mathfrak{M},\alpha^*\mathfrak{N}).$
Thus to prove the proposition, we just have to show that
any automorphism of $\mathfrak{E}$ which restricts to the identity on $\alpha^*\mathfrak{N}$
and induces the identity on $\alpha^*\mathfrak{M}$ is itself the identity on
$\mathfrak{E}$. This follows from Corollary~\ref{cor:vanishing of homs non
Noetherian},
together with Assumption~\ref{assumption:vanishing}.
\end{proof}
Fix an integer $h\ge 0$ so that $E(u)^h\in
\operatorname{Ann}_{\mathfrak{S}_A}(\coker\Phi_{\mathfrak{M}})\operatorname{Ann}_{\mathfrak{S}_A}(\coker\Phi_{\mathfrak{N}})$, so
that by Lemma~\ref{lem:ext of a Kisin module is a Kisin module}, every
Breuil--Kisin module parametrised by $\Spec B$ has height at most~$h$.
There is a natural action of $\GG_m\times_{{\mathcal O}} \GG_m$ on $\Spec B$,
given by rescaling each of $\iota$ and $\pi$.
There is also an evident forgetful morphism
$\Spec B \to \Spec A \times_{{\mathcal O}} {\mathcal C}^{\mathrm{dd},a}$,
given by forgetting $\iota$ and $\pi$, which is invariant
under the $\GG_m\times_{{\mathcal O}} \GG_m$-action. (Here and below,
${\mathcal C}^{\mathrm{dd},a}$ denotes the moduli stack defined in Subsection~\ref{sec:recoll-from-citec}
for our fixed choice of~$h$ and for
$d$ equal to the sum of the ranks of $\mathfrak{M}$ and $\mathfrak{N}$.) We thus obtain a morphism
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:ext relation}
\Spec B \times_{{\mathcal O}} \GG_m\times_{{\mathcal O}} \GG_m \to
\Spec B \times_{\Spec A \times_{{\mathcal O}} {\mathcal C}^{\mathrm{dd},a}} \Spec B.
\end{equation}
\begin{cor}\label{cor: monomorphism to Spec A times C}
Suppose that
$\Aut_{\K{C}}(\alpha^*\mathfrak{M}) = \Aut_{\K{C}}(\alpha^*\mathfrak{N}) = C^{\times}$
for any morphism $\alpha: \Spec C \to \Spec A$.
Then the morphism~{\em (\ref{eqn:ext relation})} is an isomorphism,
and consequently the induced morphism
$$[\Spec B/ \GG_m\times_{{\mathcal O}} \GG_m] \to \Spec A \times_{{\mathcal O}} {\mathcal C}^{\mathrm{dd},a}$$
is a finite type monomorphism.
\end{cor}
\begin{proof}
By Proposition~\ref{prop: the functor that Spec B represents}, a morphism \[\Spec C\to \Spec B \times_{\Spec A
\times_{{\mathcal O}} {\mathcal C}^{\mathrm{dd},a}} \Spec B\] corresponds to an isomorphism class
of tuples $(\alpha,\beta:\mathfrak{E}\to\mathfrak{E}',\iota,\iota',\pi,\pi')$, where
\begin{itemize}
\item $\alpha$ is a morphism
$\alpha:\Spec C\to\Spec A$,
\item $\beta:\mathfrak{E}\to\mathfrak{E}'$ is an isomorphism of Breuil--Kisin modules
with descent data and coefficients in $C$,
\item $\iota:\alpha^* \mathfrak{N} \to \mathfrak{E}$ and $\pi :\mathfrak{E} \to
\alpha^* \mathfrak{M}$ are morphisms
with the property that $$0 \to \alpha^*\mathfrak{N} \buildrel \iota
\over \to \mathfrak{E} \buildrel \pi \over \to \alpha^* \mathfrak{M} \to 0$$ is short
exact,
\item $\iota':\alpha^* \mathfrak{N} \to \mathfrak{E}'$ and $\pi' :\mathfrak{E}' \to
\alpha^* \mathfrak{M}$ are morphisms
with the property that
$$0 \to \alpha^*\mathfrak{N} \buildrel \iota'
\over \to \mathfrak{E}' \buildrel \pi' \over \to \alpha^* \mathfrak{M} \to 0$$ is
short exact.
\end{itemize}
Assumption~\ref{assumption:vanishing} and Corollary~\ref{cor:vanishing of homs non Noetherian} together show that
$\Hom_{\K{C}}(\alpha^*\mathfrak{N}, \alpha^*\mathfrak{M})~=~0$. It follows that the
composite
$\alpha^*\mathfrak{N}\stackrel{\iota}{\to}\mathfrak{E}\stackrel{\beta}{\to}\mathfrak{E}' $
factors through $\iota'$, and the induced endomorphism of
$\alpha^*\mathfrak{N}$ is injective. Reversing the roles of $\mathfrak{E}$ and $\mathfrak{E}'$,
we see that it is in fact an automorphism of $\alpha^*\mathfrak{N}$, and it
follows easily that $\beta$ also induces an automorphism of
$\alpha^*\mathfrak{M}$. Again, Assumption~\ref{assumption:vanishing} and Corollary~\ref{cor:vanishing of homs non Noetherian} together show that
$\Hom_{\K{C}}(\alpha^*\mathfrak{M}, \alpha^*\mathfrak{N}) = 0$, from which it follows
easily that $\beta$ is determined by the automorphisms of
$\alpha^*\mathfrak{M}$ and $\alpha^*\mathfrak{N}$ that it induces.
Since $\Aut_{\K{C}}(\alpha^*\mathfrak{M}) =
\Aut_{\K{C}}(\alpha^*\mathfrak{N}) = C^{\times}$ by assumption, we see that
$\beta \circ \iota$ differs from $\iota'$, and $\pi'\circ\beta$ from $\pi$, only by the action of
$\GG_m\times_{{\mathcal O}}\GG_m$, so the first claim of the corollary
follows.
The claim regarding the monomorphism is immediate from
Lemma~\ref{lem: morphism from quotient stack is a monomorphism}
below. Finally, note that $[\Spec B/\GG_m\times_{{\mathcal O}} \GG_m]$ is
of finite type over $\Spec A$, while ${\mathcal C}^{\mathrm{dd},a}$ has finite type diagonal.
It follows that the morphism
$[\Spec B / \GG_m \times_{{\mathcal O}} \GG_m ] \rightarrow \Spec A\times_{{\mathcal O}}{\mathcal C}^{\mathrm{dd},a}$
is of finite type, as required.
\end{proof}
\begin{lem}
\label{lem: morphism from quotient stack is a monomorphism}Let $X$
be a scheme over a base scheme~$S$, let $G$ be a smooth affine group
scheme over~$S$, and let $\rho:X\times_S G\to X$ be a {\em
(}right{\em )} action of $G$
on~$X$. Let $X\to{\mathcal Y}$ be a $G$-equivariant morphism, whose target is
an algebraic stack over~$S$ on which $G$ acts trivially.
Then the induced
morphism \[[X/G]\to{\mathcal Y}\] is a monomorphism if and only if the
natural morphism \[X\times_S G\to X\times_{{\mathcal Y}} X\] {\em (}induced by the
morphisms $\operatorname{pr}_1,\rho:X\times_S G\to X${\em )} is an isomorphism.
\end{lem}
\begin{proof}
We have a Cartesian diagram as follows. \[\xymatrix{X\times_S G\ar[r]\ar[d]&
X\times_{\mathcal Y} X\ar[d]\\ [X/G]\ar[r]& [X/G]\times_{\mathcal Y}[X/G]}\]The
morphism $[X/G]\to{\mathcal Y}$ is a monomorphism if and only if the bottom horizontal
morphism of this square is an isomorphism; since the right hand
vertical arrow is a smooth surjection, this is the case if and only
if the top horizontal morphism is an isomorphism, as required.
\end{proof}
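As a basic consistency check, one may take ${\mathcal Y}=[X/G]$ itself, with
$X\to[X/G]$ the canonical morphism: then $X\times_S G\to X\times_{{\mathcal Y}}X$
is an isomorphism essentially by the definition of the quotient stack, in
accordance with the fact that the identity morphism of $[X/G]$ is a
monomorphism.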
\subsection{Families of extensions of rank one Breuil--Kisin modules}
\label{subsec:universal families}
In this section we construct universal families of extensions of rank
one Breuil--Kisin modules. We will use these rank two families to study our
moduli spaces of Breuil--Kisin modules, and the corresponding spaces of
\'etale $\varphi$-modules. We show how to compute the dimensions of
these universal families; in the subsequent sections, we will combine
these results with explicit calculations to determine the irreducible
components of our moduli spaces. In particular, we will show that each
irreducible component has a dense open substack given by a family of
extensions.
\subsubsection{Universal unramified twists}
Fix a free Breuil--Kisin module with descent data
$\mathfrak{M}$ over ${\mathbb F}$, and write $\Phi_i$ for
$\Phi_{\mathfrak{M},i}:\varphi^*(\mathfrak{M}_{i-1}) \to\mathfrak{M}_{i}$. (Here we are using the
notation of Section~\ref{subsec: kisin modules with dd}, so
that~$\mathfrak{M}_i=e_i\mathfrak{M}$ is cut out by the idempotent~$e_i$ of Section~\ref{subsec: notation}.)
We will construct the ``universal unramified twist'' of $\mathfrak{M}$.
\begin{df}
\label{def:unramified twist}
If $\Lambda$ is an ${\mathbb F}$-algebra, and if $\lambda \in \Lambda^\times$,
then we define $\mathfrak{M}_{\Lambda,\lambda}$ to be the free Breuil--Kisin
module with descent data and $\Lambda$-coefficients whose underlying
$\mathfrak{S}_\Lambda[\Gal(K'/K)]$-module
is equal to $\mathfrak{M}\, \widehat{\otimes}_{\mathbb F}\Lambda$
(so the usual base change of $\mathfrak{M}$ to $\Lambda$),
and for which $\Phi_{\mathfrak{M}_{\Lambda,\lambda}}: \varphi^*\mathfrak{M}_{\Lambda,\lambda} \to \mathfrak{M}_{\Lambda,\lambda}$ is defined via the $f'$-tuple
$(\lambda \Phi_0,\Phi_1,\ldots,\Phi_{f'-1}).$
We refer to $\mathfrak{M}_{\Lambda,\lambda}$ as the \emph{unramified twist} of $\mathfrak{M}$ by $\lambda$ over $\Lambda$.
If $M$ is a free \'etale $\varphi$-module with descent data, then we
define $M_{\Lambda,\lambda}$ in the analogous fashion. If we write
$X=\Spec \Lambda$, then we will sometimes write $\mathfrak{M}_{X,\lambda}$
(resp.\ $M_{X,\lambda}$) for $\mathfrak{M}_{\Lambda,\lambda}$ (resp.\
$M_{\Lambda,\lambda}$).
\end{df}
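To orient the reader, we note the effect of the twist on the iterates of
$\Phi$ (a direct check from the definition, with indices read modulo
$f'$): since only the component $\Phi_0$ is rescaled, and since $\varphi$
acts trivially on the coefficients $\Lambda$, the composite
\[\Phi_i\circ\varphi^*\Phi_{i-1}\circ\cdots\circ(\varphi^{f'-1})^*\Phi_{i+1}\colon
(\varphi^{f'})^*(\mathfrak{M}_{\Lambda,\lambda})_i\to(\mathfrak{M}_{\Lambda,\lambda})_i\]
passes through the index $0$ exactly once, and so equals $\lambda$ times
the corresponding composite for $\mathfrak{M}\, \widehat{\otimes}_{\mathbb F}\Lambda$.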
As usual, we write $\GG_m := \Spec {\mathbb F}[x,x^{-1}].$
We may then
form the rank one Breuil--Kisin module with descent data
$\mathfrak{M}_{\GG_m,x},$ which is the universal instance of an unramified twist:
given $\lambda\in\Lambda^\times$,
there is a corresponding morphism $\Spec\Lambda
\to \GG_m$ determined by the requirement that
$x \in \Gamma(\GG_m, {\mathcal O}_{\GG_m}^{\times})$ pulls back to $\lambda$,
and $\mathfrak{M}_{X,\lambda}$ is obtained by pulling back
$\mathfrak{M}_{\GG_m,x}$ under this morphism (that is, by base changing under the
corresponding ring homomorphism ${\mathbb F}[x,x^{-1}]\to\Lambda$).
\begin{lemma}
\label{lem:rank one endos}
If $\mathfrak{M}_\Lambda$ is a Breuil--Kisin module of rank one with $\Lambda$-coefficients, then $\End_{\K{\Lambda}}(\mathfrak{M}_\Lambda) = \Lambda.$
Similarly, if $M_\Lambda$ is an \'etale $\varphi$-module of rank
one with $\Lambda$-coefficients, then
$\End_{\K{\Lambda}}(M_\Lambda) = \Lambda.$
\end{lemma}
\begin{proof}
We give the proof for~$M_\Lambda$, the argument for~$\mathfrak{M}_\Lambda$ being essentially
identical. One reduces easily to the case where $M_\Lambda$ is free. Since an endomorphism~$\psi$ of~$M_\Lambda$ is in particular an
endomorphism of the underlying $\mathfrak{S}_\Lambda[1/u]$-module, we see that there
is some $\lambda\in \mathfrak{S}_\Lambda[1/u]$ such that $\psi$ is given by
multiplication by~$\lambda$. The commutation relation with $\Phi_{M_\Lambda}$
means that we must have $\varphi(\lambda)=\lambda$, so that
certainly
(considering the powers of~$u$ in~$\lambda$ of lowest negative and positive degrees)
$\lambda\in W(k')\otimes_{\Z_p}\Lambda$, and in fact $\lambda\in
\Lambda$. Conversely, multiplication by any element of~$\Lambda$ is evidently an
endomorphism of~$M_\Lambda$, as required.
\end{proof}
\begin{lem}
\label{lem:rank one isomorphism over a field}Let $\kappa$ be a field
of characteristic~$p$, and let $M_\kappa, N_\kappa$ be
\'etale $\varphi$-modules of rank one with $\kappa$-coefficients and
descent data. Then any nonzero element of $\Hom_{\K{\kappa}}(M_\kappa, N_\kappa)$
is an isomorphism.
\end{lem}
\begin{proof}
Since $\kappa((u))$ is a field, it is enough to show that if one of
the induced maps $M_{\kappa,i}\to N_{\kappa,i}$ is nonzero, then
they all are; but this follows from the commutation relation with~$\varphi$.
\end{proof}
\begin{lemma}\label{lem: isomorphic twists are the same twist} If $\lambda,\lambda' \in \Lambda^{\times}$ and $\mathfrak{M}_{\Lambda,\lambda} \cong \mathfrak{M}_{\Lambda,\lambda'}$
{\em (as Breuil--Kisin modules with descent data over $\Lambda$)}, then $\lambda=\lambda'$. Similarly, if $M_{\Lambda,\lambda}\cong M_{\Lambda,\lambda'}$, then $\lambda=\lambda'$.
\end{lemma}
\begin{proof}
Again, we give the proof for~$M$, the argument for~$\mathfrak{M}$ being
essentially identical. Write $M_i={\mathbb F}((u))m_i$, and write
$\Phi_i(1\otimes m_{i-1})=\theta_{i}m_{i}$, where $\theta_{i}\ne 0$. There
are $\mu_i\in \Lambda[[u]][1/u]$ such that the given isomorphism
$M_{\Lambda,\lambda}\cong M_{\Lambda,\lambda'}$ takes~$m_i$ to $\mu_im_i$. The
commutation relation between the given isomorphism and~$\Phi_{M}$ imposes the
condition \[\lambda_{i}\mu_{i}\theta_{i}m_{i}=\lambda'_{i}\varphi(\mu_{i-1})\theta_{i}m_{i}\]where
$\lambda_{i}$ (resp.\ $\lambda'_i$) equals~$1$ unless $i=0$, when it equals~$\lambda$
(resp.\ $\lambda'$).
Thus we have $\mu_{i}=(\lambda'_i/\lambda_i)\varphi(\mu_{i-1})$, so that in
particular $\mu_0=(\lambda'/\lambda)\varphi^{f'}(\mu_0)$. Considering the powers
of~$u$ in~$\mu_0$ of lowest negative and positive degrees we
conclude that $\mu_0\in W(k') \otimes \Lambda$; but then
$\mu_0=\varphi^{f'}(\mu_0)$, so that~$\lambda'=\lambda$, as required.
\end{proof}
\begin{remark}
If $\mathfrak{M}$ has height at most~$h$, and we let ${\mathcal C}$ (temporarily) denote the
moduli stack of rank one Breuil--Kisin modules of height at most~$h$ with ${\mathbb F}$-coefficients and
descent data then Lemma~\ref{lem: isomorphic twists are the same twist} can be interpreted as saying that
the morphism $\GG_m \to {\mathcal C}$
that classifies $\mathfrak{M}_{\GG_m,x}$
is a monomorphism, i.e.\ the diagonal morphism $\GG_m \to \GG_m
\times_{{\mathcal C}}
\GG_m$ is an isomorphism. Similarly, the morphism $\GG_m\to{\mathcal R}$
(where we temporarily let ${\mathcal R}$ denote the moduli stack
of rank one \'etale $\varphi$-modules with ${\mathbb F}$-coefficients and descent data)
that classifies $M_{\GG_m,x}$ is a monomorphism.
\end{remark}
Now choose another rank one Breuil--Kisin module with descent data $\mathfrak{N}$ over ${\mathbb F}$.
Let $(x,y)$ denote the standard coordinates on $\GG_m\times_{{\mathbb F}} \GG_m$,
and consider the rank one Breuil--Kisin modules with descent data $\mathfrak{M}_{\GG_m\times_{{\mathbb F}} \GG_m,x}$
and $\mathfrak{N}_{\GG_m\times_{{\mathbb F}} \GG_m,y}$
over $\GG_m\times_{{\mathbb F}} \GG_m$.
\begin{lemma}\label{lem: generically no Homs}
There is a non-empty irreducible affine open subset $\Spec {A^{\operatorname{dist}}}$ of $\GG_m\times_{{\mathbb F}} \GG_m$
whose finite type points
are exactly the maximal ideals $\mathfrak m$ of $\GG_m\times_{{\mathbb F}} \GG_m$
such that
\[\Hom_{\K{\kappa(\mathfrak m)}}\bigl( \mathfrak{M}_{\kappa(\mathfrak m),\bar{x}}[1/u],
\mathfrak{N}_{\kappa(\mathfrak m),\bar{y}}[1/u]\bigr)=0\]
{\em (}where we have written $\bar{x}$ and $\bar{y}$ to
denote the images of $x$ and $y$ in $\kappa(\mathfrak m)^{\times}${\em )}.
Furthermore, if $R$ is any finite-type ${A^{\operatorname{dist}}}$-algebra,
and if $\mathfrak m$ is any maximal ideal of~$R$,
then
\[\Hom_{\K{\kappa(\mathfrak m)}}\bigl( \mathfrak{M}_{\kappa(\mathfrak m),\bar{x}},
\mathfrak{N}_{\kappa(\mathfrak m),\bar{y}}\bigr)
= \Hom_{\K{\kappa(\mathfrak m)}}\bigl( \mathfrak{M}_{\kappa(\mathfrak m),\bar{x}}[1/u],
\mathfrak{N}_{\kappa(\mathfrak m),\bar{y}}[1/u]\bigr)
= 0,\]
and also
\[
\Hom_{\K{\kappa(\mathfrak m)}}\bigl( \mathfrak{N}_{\kappa(\mathfrak m),\bar{y}},
\mathfrak{M}_{\kappa(\mathfrak m),\bar{x}}\bigr)
=
\Hom_{\K{\kappa(\mathfrak m)}}\bigl( \mathfrak{N}_{\kappa(\mathfrak m),\bar{y}}[1/u],
\mathfrak{M}_{\kappa(\mathfrak m),\bar{x}}[1/u]\bigr)=0.\]
In particular, Assumption~{\em \ref{assumption:vanishing}}
is satisfied by $\mathfrak{M}_{{A^{\operatorname{dist}}},x}$ and $\mathfrak{N}_{{A^{\operatorname{dist}}},y}$.
\end{lemma}
\begin{proof}
If $\Hom_{\K{\kappa(\mathfrak m)}}\bigl(\mathfrak{M}_{\kappa(\mathfrak m),\bar{x}}[1/u],\mathfrak{N}_{\kappa(\mathfrak m),\bar{y}}[1/u]\bigr)=0$
for all maximal ideals~$\mathfrak{m}$ of ${\mathbb F}[x,y,x^{-1},y^{-1}]$, then we are
done: $\Spec A^{{\operatorname{dist}}} = \GG_m\times_{{\mathbb F}}\GG_m$. Otherwise, we see that
for some finite extension ${\mathbb F}'/{\mathbb F}$ and some $a,a'\in{\mathbb F}'$, we have a
non-zero morphism $\mathfrak{M}_{{\mathbb F}',a}[1/u]\to\mathfrak{N}_{{\mathbb F}',a'}[1/u]$. By
Lemma~\ref{lem:rank one isomorphism over a field}, this morphism must
in fact be an isomorphism.
Since $\mathfrak{M}$ and $\mathfrak{N}$ are both defined over ${\mathbb F}$, we furthermore see
that the ratio $a'/a$ lies in ${\mathbb F}$.
We then let $\Spec{A^{\operatorname{dist}}}$ be the affine open subset of $\GG_m\times_{{\mathbb F}}
\GG_m$ where $a'x\ne ay$; the claimed property of $\Spec A^{{\operatorname{dist}}}$
then follows easily from
Lemma~\ref{lem: isomorphic twists are the same twist}.
For the remaining statements of the lemma,
note that if $\mathfrak m$ is a maximal
ideal in a finite type $A^{{\operatorname{dist}}}$-algebra, then its pull-back to $A^{{\operatorname{dist}}}$
is again a maximal ideal $\mathfrak m'$
of $A^{{\operatorname{dist}}}$ (since $A^{{\operatorname{dist}}}$ is Jacobson),
and the vanishing of
$$
\Hom_{\K{\kappa(\mathfrak m)}}\bigl( \mathfrak{M}_{\kappa(\mathfrak m),\bar{x}}[1/u],
\mathfrak{N}_{\kappa(\mathfrak m),\bar{y}}[1/u]\bigr)
$$
follows from the corresponding statement for $\kappa(\mathfrak m')$,
together with Lemma~\ref{lem: flat base change for Homs}.
Inverting $u$ induces an embedding
\[\Hom_{\K{\kappa(\mathfrak m)}}\bigl( \mathfrak{M}_{\kappa(\mathfrak m),\bar{x}},
\mathfrak{N}_{\kappa(\mathfrak m),\bar{y}}\bigr)
\hookrightarrow
\Hom_{\K{\kappa(\mathfrak m)}}\bigl( \mathfrak{M}_{\kappa(\mathfrak m),\bar{x}}[1/u],
\mathfrak{N}_{\kappa(\mathfrak m),\bar{y}}[1/u]\bigr),\]
and so certainly the vanishing of the target implies the vanishing of
the source.
The statements in which the roles of $\mathfrak{M}$ and $\mathfrak{N}$
are reversed follow from
Lemma~\ref{lem:rank one isomorphism over a field}.
\end{proof}
Define
$T := \Ext^1_{\K{\GG_m\times_{{\mathbb F}}\GG_m}}\bigl(\mathfrak{M}_{\GG_m\times_{{\mathbb F}} \GG_m,x},\mathfrak{N}_{\GG_m\times_{{\mathbb F}}\GG_m,y}\bigr)$;
it follows from
Proposition~\ref{prop:exts are f.g. over A} that
$T$ is finitely generated over ${\mathbb F}[x,x^{-1},y,y^{-1}],$
while
Proposition~\ref{prop:base-change for exts}
shows that
$T_{A^{\operatorname{dist}}} := T\otimes_{{\mathbb F}[x^{\pm 1}, y^{\pm 1}]} {A^{\operatorname{dist}}}$ is naturally isomorphic to
$\Ext^1_{\K{{A^{\operatorname{dist}}}}}\bigl(\mathfrak{M}_{{A^{\operatorname{dist}}},x}, \mathfrak{N}_{{A^{\operatorname{dist}}},y}\bigr)$. (Here and elsewhere
we abuse notation by writing $x$, $y$ for $x|_{{A^{\operatorname{dist}}}}$, $y|_{{A^{\operatorname{dist}}}}$.)
Corollary~\ref{cor:freeness for exts} and Lemma~\ref{lem: generically no Homs}
show that $T_{A^{\operatorname{dist}}}$ is in fact a
finitely generated projective ${A^{\operatorname{dist}}}$-module.
If, for any ${A^{\operatorname{dist}}}$-algebra $B$, we write $T_B := T_{A^{\operatorname{dist}}} \otimes_{A^{\operatorname{dist}}} B \buildrel \sim \over \longrightarrow
T\otimes_{{\mathbb F}[x^{\pm 1}, y^{\pm 1}]} B$,
then
Proposition~\ref{prop:base-change for exts} again shows that
$T_B \buildrel \sim \over \longrightarrow \Ext^1_{\K{B}}\bigl(\mathfrak{M}_{B,x}, \mathfrak{N}_{B,y}\bigr)$.
By Propositions~\ref{prop: base change for kernel of map to etale Ext}
and~\ref{prop: we have vector bundles}, together with
Lemma~\ref{lem: generically no Homs},
there is a nonempty (so dense) affine open subset~$\Spec{A^{\operatorname{k-free}}}$ of
$\Spec {A^{\operatorname{dist}}}$ with the properties that \[U_{A^{\operatorname{k-free}}} :=
\kExt^1_{\K{{A^{\operatorname{k-free}}}}}(\mathfrak{M}_{{A^{\operatorname{k-free}}},x},\mathfrak{N}_{{A^{\operatorname{k-free}}},y})\] and
\begin{multline*}
T_{A^{\operatorname{k-free}}}/U_{A^{\operatorname{k-free}}} \\
\buildrel \sim \over \longrightarrow
\Ext^1_{\K{{A^{\operatorname{k-free}}}}}(\mathfrak{M}_{{A^{\operatorname{k-free}}},x},\mathfrak{N}_{{A^{\operatorname{k-free}}},y})/\kExt^1_{\K{{A^{\operatorname{k-free}}}}}(\mathfrak{M}_{{A^{\operatorname{k-free}}},x},\mathfrak{N}_{{A^{\operatorname{k-free}}},y})
\end{multline*}
are finitely generated and projective over ${A^{\operatorname{k-free}}}$, and furthermore so
that for all finitely generated ${A^{\operatorname{k-free}}}$-algebras $B$, the formation of
$\kExt^1_{\K{B}}(\mathfrak{M}_{B,x},\mathfrak{N}_{B,y})$ and
$\Ext^1_{\K{B}}(\mathfrak{M}_{B,x},\mathfrak{N}_{B,y})/\kExt^1_{\K{B}}(\mathfrak{M}_{B,x},\mathfrak{N}_{B,y})$
is compatible with base change from $U_{A^{\operatorname{k-free}}}$ and $T_{A^{\operatorname{k-free}}}/U_{A^{\operatorname{k-free}}}$ respectively.
We choose a finite rank projective module $V$ over ${\mathbb F}[x,x^{-1},y,y^{-1}]$
admitting a surjection $V \to T$.
Thus, if we write $V_{A^{\operatorname{dist}}} := V\otimes_{{\mathbb F}[x^{\pm 1}, y^{\pm 1}]} {A^{\operatorname{dist}}}$,
then the induced morphism $V_{A^{\operatorname{dist}}} \to T_{A^{\operatorname{dist}}}$ is a (split) surjection of
${A^{\operatorname{dist}}}$-modules.
Following the prescription
of Subsection~\ref{subsec:families of extensions},
we form the symmetric algebra ${B^{\operatorname{twist}}} := {\mathbb F}[x^{\pm 1}, y^{\pm
1}][V^{\vee}],$
and construct the family of extensions $\widetilde{\mathfrak{E}}$
over $\Spec {B^{\operatorname{twist}}}$. We may similarly form the symmetric algebras
${B^{\operatorname{dist}}} := {A^{\operatorname{dist}}}[T_{{A^{\operatorname{dist}}}}^{\vee}]$ and ${B^{\operatorname{k-free}}} := {A^{\operatorname{k-free}}}[T_{{A^{\operatorname{k-free}}}}^{\vee}]$, and construct the families
of extensions ${\widetilde{\mathfrak{E}}^{\operatorname{dist}}}$ and ${\widetilde{\mathfrak{E}}^{\operatorname{k-free}}}$ over $\Spec
{B^{\operatorname{dist}}}$ and $\Spec{B^{\operatorname{k-free}}}$ respectively.
Since $T_{A^{\operatorname{k-free}}}/U_{A^{\operatorname{k-free}}}$ is projective, the natural morphism
$T_{{A^{\operatorname{k-free}}}}^{\vee} \to U_{{A^{\operatorname{k-free}}}}^{\vee}$ is surjective, and hence
${C^{\operatorname{k-free}}} := {A^{\operatorname{k-free}}}[U_{{A^{\operatorname{k-free}}}}^{\vee}]$ is a quotient of ${B^{\operatorname{k-free}}}$; geometrically,
$\Spec {C^{\operatorname{k-free}}} $ is a subbundle of the vector bundle $\Spec {B^{\operatorname{k-free}}}$
over $\Spec {A^{\operatorname{k-free}}}$.
We write $X := \Spec {B^{\operatorname{k-free}}} \setminus \Spec {C^{\operatorname{k-free}}}$; it is an open subscheme
of the vector bundle $\Spec {B^{\operatorname{k-free}}}$.
The restriction of
${\widetilde{\mathfrak{E}}^{\operatorname{k-free}}}$ to~$X$ is the universal family of extensions over~${A^{\operatorname{k-free}}}$ which
do not split after inverting~$u$.
\begin{remark}
\label{rem:Zariski density}
Since $\Spec A^{{\operatorname{dist}}}$ and $\Spec A^{{{\operatorname{k-free}}}}$ are irreducible,
each of the vector bundles $\Spec B^{{\operatorname{dist}}}$ and $\Spec B^{{{\operatorname{k-free}}}}$
is also irreducible. In particular, $\Spec B^{{{\operatorname{k-free}}}}$ is Zariski dense
in $\Spec B^{{\operatorname{dist}}}$, and if $X$ is non-empty, then it is Zariski dense
in each of $\Spec B^{{{\operatorname{k-free}}}}$ and $\Spec B^{{\operatorname{dist}}}$. Similarly,
$\Spec {B^{\operatorname{twist}}} \times_{\GG_m\times_{{\mathbb F}}\GG_m} \Spec A^{{\operatorname{dist}}}$ is Zariski dense in
$\Spec {B^{\operatorname{twist}}}$.
\end{remark}
The surjection $V_{A^{\operatorname{dist}}} \to T_{A^{\operatorname{dist}}}$ induces a surjection of vector bundles
$\pi: \Spec {B^{\operatorname{twist}}}\times_{\GG_m\times_{{\mathbb F}}\GG_m} \Spec {A^{\operatorname{dist}}} \to \Spec {B^{\operatorname{dist}}}$ over $\Spec {A^{\operatorname{dist}}}$,
and there is a natural isomorphism
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:pull-back iso}
\pi^*{\widetilde{\mathfrak{E}}^{\operatorname{dist}}} \buildrel \sim \over \longrightarrow
\widetilde{\mathfrak{E}}\, \widehat{\otimes}_{{\mathbb F}[x^{\pm 1},y^{\pm 1}]} {A^{\operatorname{dist}}}.
\end{equation}
The rank two Breuil--Kisin module with descent data
$\widetilde{\mathfrak{E}}$ is classified by a morphism
$\xi:\Spec {B^{\operatorname{twist}}} \to {\mathcal C}^{\mathrm{dd},1}$;
similarly,
the rank two Breuil--Kisin module with descent data
${\widetilde{\mathfrak{E}}^{\operatorname{dist}}}$ is classified by a morphism
$\xi^{\operatorname{dist}}: \Spec {B^{\operatorname{dist}}} \to {\mathcal C}^{\mathrm{dd},1}.$
If we write $\xi_{A^{\operatorname{dist}}}$ for the restriction of $\xi$ to the open
subset $\Spec {B^{\operatorname{twist}}} \times_{\GG_m\times_{{\mathbb F}}\GG_m} \Spec {A^{\operatorname{dist}}}$ of $\Spec {B^{\operatorname{twist}}}$,
then the isomorphism~(\ref{eqn:pull-back iso}) shows that
$\xi^{\operatorname{dist}}\circ \pi = \xi_{A^{\operatorname{dist}}}$.
We also write $\xi^{{{\operatorname{k-free}}}}$ for the restriction of $\xi^{{\operatorname{dist}}}$ to
$\Spec B^{{{\operatorname{k-free}}}}$, and $\xi_X$ for the restriction of $\xi^{{{\operatorname{k-free}}}}$
to $X$.
\begin{lemma}\label{lem: scheme theoretic images coincide}
The scheme-theoretic images {\em (}in the sense
of~{\em \cite[Def.\ 3.1.4]{EGstacktheoreticimages})} of
$\xi:\Spec {B^{\operatorname{twist}}}\to {\mathcal C}^{\mathrm{dd},1}$,
$\xi^{\operatorname{dist}}:\Spec {B^{\operatorname{dist}}}\to {\mathcal C}^{\mathrm{dd},1}$,
and $\xi^{{{\operatorname{k-free}}}}: \Spec B^{{{\operatorname{k-free}}}}\to {\mathcal C}^{\mathrm{dd},1}$
all coincide; in particular, the
scheme-theoretic image of~$\xi$ is independent of the choice of
surjection $V\to T$, and the scheme-theoretic image of~$\xi^{{{\operatorname{k-free}}}}$ is
independent of the choice of~${A^{\operatorname{k-free}}}$.
If $X$ is non-empty, then the scheme-theoretic image
of $\xi_X: X \to {\mathcal C}^{\mathrm{dd},1}$ also coincides with these other scheme-theoretic images, and is independent of the choice of ${A^{\operatorname{k-free}}}$.
\end{lemma}
\begin{proof}
This follows from the various observations about Zariski density
made in Remark~\ref{rem:Zariski density}.
\end{proof}
\begin{defn}
\label{def:scheme-theoretic images}
We let $\overline{{\mathcal C}}(\mathfrak{M},\mathfrak{N})$ denote the scheme-theoretic image of
$\xi^{\operatorname{dist}}:\Spec {B^{\operatorname{dist}}}\to {\mathcal C}^{\mathrm{dd},1}$, and
we let $\overline{{\mathcal Z}}(\mathfrak{M},\mathfrak{N})$ denote the
scheme-theoretic image of the composite
$\xi^{\operatorname{dist}}:\Spec {B^{\operatorname{dist}}}\to {\mathcal C}^{\mathrm{dd},1}\to {\mathcal Z}^{\mathrm{dd},1}$.
Equivalently,
$\overline{{\mathcal Z}}(\mathfrak{M},\mathfrak{N})$ is the scheme-theoretic image of the
composite $\Spec {B^{\operatorname{dist}}}\to {\mathcal C}^{\mathrm{dd},1}\to {\mathcal R}^{\mathrm{dd},1}$ (\emph{cf}.\
\cite[Prop.\ 3.2.31]{EGstacktheoreticimages}), and the scheme-theoretic
image of $\overline{{\mathcal C}}(\mathfrak{M},\mathfrak{N})$
under the morphism ${\mathcal C}^{\mathrm{dd},1} \to {\mathcal Z}^{\mathrm{dd},1}$.
(Note that Lemma~\ref{lem: scheme theoretic images coincide}
provides various other alternative descriptions of $\overline{{\mathcal C}}(\mathfrak{M},\mathfrak{N})$
(and therefore also~$\overline{{\mathcal Z}}(\mathfrak{M},\mathfrak{N})$) as
a scheme-theoretic image.)
\end{defn}
\begin{rem}
\label{rem: C(M,N) is reduced}Note that $\overline{{\mathcal C}}
(\mathfrak{M},\mathfrak{N})$
and $\overline{{\mathcal Z}}(\mathfrak{M},\mathfrak{N})$
are both reduced (because they are each defined as a scheme-theoretic
image of~$\Spec{B^{\operatorname{dist}}}$, which is reduced by definition).
\end{rem}
As well as scheme-theoretic images, as in the preceding Lemma and Definition,
we will need to consider images of underlying topological spaces. If
$\mathcal{X}$ is an algebraic stack we let $|\mathcal{X}|$ be its
underlying topological space, as defined in
\cite[\href{https://stacks.math.columbia.edu/tag/04Y8}{Tag 04Y8}]{stacks-project}.
\begin{lem}
\label{lem:ext images}
The image of the morphism on underlying topological spaces
$| \Spec {B^{\operatorname{twist}}} | \to | {\mathcal C}^{\mathrm{dd},1}|$ induced by $\xi$ is
a constructible subset of $| {\mathcal C}^{\mathrm{dd},1}|$, and is
independent of the choice of $V$.
\end{lem}
\begin{proof}
The fact that the image of $|\Spec {B^{\operatorname{twist}}}|$ is a constructible
subset of $|{\mathcal C}^{\mathrm{dd},1}|$ follows from the fact that
$\xi$ is a morphism of finite presentation between Noetherian
stacks; see~\cite[App.\ D]{MR2818725}.
Suppose now that $V'$ is another choice of finite rank projective
${\mathbb F}[x,x^{-1},y,y^{-1}]$-module surjecting onto~$T$. Since it
is possible to choose
a finite rank projective module surjecting onto the pullback of $V,V'$ with respect to
their maps to $T$,
we see that it suffices to prove the independence claim of the lemma
in the case when $V'$ admits a surjection onto $V$ (compatible
with the maps of each of $V$ and $V'$ onto $T$).
If we write $B' := {\mathbb F}[x^{\pm 1}, y^{\pm 1}][(V')^{\vee}],$
then the natural morphism $\Spec B' \to \Spec {B^{\operatorname{twist}}}$ is a surjection,
and the morphism $\xi': \Spec B' \to {\mathcal C}^{\mathrm{dd},1}$ is the composite
of this surjection with the morphism $\xi$. Thus indeed
the images of $|\Spec B'|$ and of $|\Spec {B^{\operatorname{twist}}}|$ coincide as
subsets of $|{\mathcal C}^{\mathrm{dd},1}|$.
\end{proof}
\begin{df}
\label{df:constructible images}
We write $|{\mathcal C}(\mathfrak{M},\mathfrak{N})|$ to denote the constructible subset
of $|{\mathcal C}^{\mathrm{dd},1}|$ described in Lemma~\ref{lem:ext images}.
\end{df}
\begin{remark}
We caution the reader that we don't define a substack ${\mathcal C}(\mathfrak{M},\mathfrak{N})$
of ${\mathcal C}^{\mathrm{dd},1}$. Rather, we have defined a closed substack
$\overline{{\mathcal C}}(\mathfrak{M},\mathfrak{N})$ of ${\mathcal C}^{\mathrm{dd},1}$, and a constructible subset
$|{\mathcal C}(\mathfrak{M},\mathfrak{N})|$ of $|{\mathcal C}^{\mathrm{dd},1}|$. It follows from
the constructions that $|\overline{{\mathcal C}}(\mathfrak{M},\mathfrak{N})|$ is the
closure in $|{\mathcal C}^{\mathrm{dd},1}|$ of $|{\mathcal C}(\mathfrak{M},\mathfrak{N})|$.
\end{remark}
As in Subsection~\ref{subsec:families of extensions},
there is a natural action of $\GG_m\times_{{\mathbb F}}\GG_m$ on $T$,
and hence on each of $\Spec {B^{\operatorname{dist}}}$, $\Spec {B^{\operatorname{k-free}}}$ and~$X$,
given by the action of $\GG_m$ as automorphisms on each of $\mathfrak{M}_{\GG_m\times_{{\mathbb F}} \GG_m,x}$
and $\mathfrak{N}_{\GG_m\times_{{\mathbb F}} \GG_m,y}$ (which induces a corresponding action on $T$,
hence on $T_{A^{\operatorname{dist}}}$ and $T_{A^{\operatorname{k-free}}}$, and hence on $\Spec {B^{\operatorname{dist}}}$ and
$\Spec {B^{\operatorname{k-free}}}$).
Thus we may form the corresponding quotient stacks $[\Spec {B^{\operatorname{dist}}} / \GG_m\times_{{\mathbb F}} \GG_m]$ and
$[X / \GG_m\times_{{\mathbb F}} \GG_m],$ each of which admits a natural morphism to~${\mathcal C}^{\mathrm{dd},1}$.
\begin{rem}
\label{rem: potential confusion of two lots of Gm times Gm}Note that
we are making use of two independent copies of $\GG_m\times_{{\mathbb F}}\GG_m$; one
parameterises the different unramified twists of $\mathfrak{M}$ and $\mathfrak{N}$, and the other the
automorphisms of (the pullbacks of) $\mathfrak{M}$ and $\mathfrak{N}$.
\end{rem}
\begin{defn}
\label{defn: strict situations}We say that the pair $(\mathfrak{M},\mathfrak{N})$ is
\emph{strict} if $\Spec{A^{\operatorname{dist}}}=\GG_m\times_{\mathbb F}\GG_m$.
\end{defn}
Before stating and proving the main result of this subsection,
we prove some lemmas (the first two of which amount to recollections
of standard --- and simple --- facts).
\begin{lem}
\label{lem: fibre products of finite type}If ${\mathcal X}\to{\mathcal Y}$ is a
morphism of stacks over~$S$, with ${\mathcal X}$ algebraic and of
finite type over~$S$, and with the diagonal of ${\mathcal Y}$
representable by algebraic spaces and of finite type, then
${\mathcal X}\times_{{\mathcal Y}}{\mathcal X}$ is an algebraic stack of finite type
over~$S$.
\end{lem}
\begin{proof}
The fact that ${\mathcal X}\times_{{\mathcal Y}} {\mathcal X}$ is an algebraic
stack follows
from~\cite[\href{http://stacks.math.columbia.edu/tag/04TF}{Tag 04TF}]{stacks-project}.
Since composites of morphisms of finite type are of finite type, in
order to show that ${\mathcal X}\times_{{\mathcal Y}} {\mathcal X}$ is of finite type over $S$,
it suffices to show that the natural morphism ${\mathcal X}\times_{{\mathcal Y}} {\mathcal X}
\to {\mathcal X}\times_S {\mathcal X}$ is of finite type. Since this
morphism is the base-change of the diagonal morphism ${\mathcal Y} \to {\mathcal Y}\times_S {\mathcal Y},$
this follows by assumption.
\end{proof}
\begin{lem}\label{lem:fibres of kExt}
The following conditions are equivalent:
\begin{enumerate}
\item $\kExt^1_{\K{\kappa(\mathfrak m)}}\bigl( \mathfrak{M}_{\kappa(\mathfrak m),\bar{x}},
\mathfrak{N}_{\kappa(\mathfrak m),\bar{y}}\bigr)= 0$
for all maximal ideals $\mathfrak m$ of ${A^{\operatorname{k-free}}}$.
\item $U_{A^{\operatorname{k-free}}} = 0$.
\item $\Spec C^{{{\operatorname{k-free}}}}$ is the trivial
vector bundle over $\Spec A^{{{\operatorname{k-free}}}}$.
\end{enumerate}
\end{lem}
\begin{proof}
Conditions~(2) and~(3) are equivalent by definition. Since the formation of
$\kExt^1_{\K{{A^{\operatorname{k-free}}}}}(\mathfrak{M}_{{A^{\operatorname{k-free}}},x},\mathfrak{N}_{{A^{\operatorname{k-free}}},y})$ is compatible with base change,
and since ${A^{\operatorname{k-free}}}$ is Jacobson, (1) is equivalent to
the assumption that
$$\kExt^1_{\K{{A^{\operatorname{k-free}}}}}(\mathfrak{M}_{{A^{\operatorname{k-free}}},x},\mathfrak{N}_{{A^{\operatorname{k-free}}},y})=0,$$
i.e.\ that~$U_{A^{\operatorname{k-free}}}=0$, as required.
\end{proof}
\begin{lemma}
\label{lem:C to R, with maps to Akfree}
If the equivalent conditions of Lemma~{\em \ref{lem:fibres of kExt}} hold,
then the natural morphism
\begin{multline*}
\Spec B^{{{\operatorname{k-free}}}} \times_{\Spec A^{{{\operatorname{k-free}}}} \times_{\mathbb F} {\mathcal C}^{\mathrm{dd},1}}
\Spec B^{{{\operatorname{k-free}}}}
\\
\to
\Spec B^{{{\operatorname{k-free}}}} \times_{\Spec A^{{{\operatorname{k-free}}}} \times_{\mathbb F} {\mathcal R}^{\mathrm{dd},1}}
\Spec B^{{{\operatorname{k-free}}}}
\end{multline*}
is an isomorphism.
\end{lemma}
\begin{proof}
Since ${\mathcal C}^{\mathrm{dd},1} \to {\mathcal R}^{\mathrm{dd},1}$ is separated (being
proper) and representable, the diagonal morphism
${\mathcal C}^{\mathrm{dd},1} \to {\mathcal C}^{\mathrm{dd},1}\times_{{\mathcal R}^{\mathrm{dd},1}} {\mathcal C}^{\mathrm{dd},1}$
is a closed immersion, and hence the morphism in the statement
of the lemma is
a closed immersion. Thus, in order to show that it is an
isomorphism, it suffices to show that it induces a surjection
on $R$-valued points, for any ${\mathbb F}$-algebra $R$. Since
the source and target are of finite type
over ${\mathbb F}$, by Lemma~\ref{lem: fibre products of finite type},
we may in fact restrict attention to finite type $R$-algebras.
A morphism $\Spec R\to \Spec{B^{\operatorname{k-free}}}
\times_{\Spec A^{{{\operatorname{k-free}}}} \times_{{\mathbb F}} {\mathcal C}^{\mathrm{dd},1}}\Spec{B^{\operatorname{k-free}}}$
corresponds to an isomorphism class
of tuples $(\alpha,\beta:\mathfrak{E}\to\mathfrak{E}',\iota,\iota',\pi,\pi')$, where
\begin{itemize}
\item $\alpha$ is a morphism
$\alpha:\Spec R\to\Spec {A^{\operatorname{k-free}}}$,
\item $\beta:\mathfrak{E}\to\mathfrak{E}'$ is an isomorphism of Breuil--Kisin modules
with descent data and coefficients in $R$,
\item $\iota:\alpha^* \mathfrak{N} \to \mathfrak{E}$, $\iota':\alpha^* \mathfrak{N} \to \mathfrak{E}'$, $\pi:\mathfrak{E} \to
\alpha^* \mathfrak{M}$ and $\pi':\mathfrak{E}' \to
\alpha^* \mathfrak{M}$ are morphisms
with the properties that $0 \to \alpha^*\mathfrak{N} \buildrel \iota
\over \to \mathfrak{E} \buildrel \pi \over \to \alpha^* \mathfrak{M} \to 0$ and $0 \to \alpha^*\mathfrak{N} \buildrel \iota'
\over \to \mathfrak{E}' \buildrel \pi' \over \to \alpha^* \mathfrak{M} \to 0$ are both
short exact.
\end{itemize}
Similarly,
a morphism $\Spec R\to \Spec{B^{\operatorname{k-free}}}
\times_{\Spec A^{{{\operatorname{k-free}}}} \times_{{\mathbb F}} {\mathcal R}^{\mathrm{dd},1}}\Spec{B^{\operatorname{k-free}}}$
corresponds to an isomorphism class
of tuples $(\alpha,\mathfrak{E},\mathfrak{E}',\beta,\iota,\iota',\pi,\pi')$, where
\begin{itemize}
\item $\alpha$ is a morphism
$\alpha:\Spec R\to\Spec {A^{\operatorname{k-free}}}$,
\item $\mathfrak{E}$ and $\mathfrak{E}'$ are Breuil--Kisin modules
with descent data and coefficients in $R$,
and $\beta$ is an isomorphism
$\beta:\mathfrak{E}[1/u]\to\mathfrak{E}'[1/u]$ of \'etale $\varphi$-modules with
descent data and coefficients in $R$,
\item $\iota:\alpha^* \mathfrak{N} \to \mathfrak{E}$, $\iota':\alpha^* \mathfrak{N} \to \mathfrak{E}'$, $\pi:\mathfrak{E} \to
\alpha^* \mathfrak{M}$ and $\pi':\mathfrak{E}' \to
\alpha^* \mathfrak{M}$ are morphisms
with the properties that $0 \to \alpha^*\mathfrak{N} \buildrel \iota
\over \to \mathfrak{E} \buildrel \pi \over \to \alpha^* \mathfrak{M} \to 0$ and $0 \to \alpha^*\mathfrak{N} \buildrel \iota'
\over \to \mathfrak{E}' \buildrel \pi' \over \to \alpha^* \mathfrak{M} \to 0$ are both
short exact.
\end{itemize}
Thus to prove the claimed surjectivity, we have to show that, given
a tuple $(\alpha,\mathfrak{E},\mathfrak{E}',\beta,\iota,\iota',\pi,\pi')$ associated
to a morphism $\Spec R\to \Spec{B^{\operatorname{k-free}}}
\times_{\Spec A^{{{\operatorname{k-free}}}} \times_{{\mathbb F}} {\mathcal R}^{\mathrm{dd},1}}\Spec{B^{\operatorname{k-free}}}$,
the isomorphism $\beta$ restricts to an isomorphism
$\mathfrak{E} \to \mathfrak{E}'.$
By
Lemma~\ref{lem:fibres of kExt}, the
natural map
$\Ext^1_{\K{R}}(\alpha^*\mathfrak{M},\alpha^*\mathfrak{N})\to\Ext^1_{\K{R}}(\alpha^*\mathfrak{M}[1/u],\alpha^*\mathfrak{N}[1/u])$
is injective; so the Breuil--Kisin modules $\mathfrak{E}$ and $\mathfrak{E}'$ are isomorphic. Arguing as in the proof of Corollary~\ref{cor:
monomorphism to Spec A times C}, we see that~$\beta$ is equivalent
to the data of an $R$-point of~$\GG_m\times_{{\mathbb F}}\GG_m$,
corresponding to the automorphisms of $\alpha^*\mathfrak{M}[1/u]$ and
$\alpha^*\mathfrak{N}[1/u]$ that it induces. These restrict to
automorphisms of $\alpha^*\mathfrak{M}$ and $\alpha^*\mathfrak{N}$, so that
(again by the proof of Corollary~\ref{cor:
monomorphism to Spec A times C})
$\beta$ indeed restricts to an
isomorphism $\mathfrak{E}\to\mathfrak{E}'$, as required.
\end{proof}
We now present the main result of this subsection.
\begin{prop}\label{prop: construction of family monomorphing to C and R}
{\em (1)} The morphism
$\xi^{{\operatorname{dist}}}$ induces a morphism
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:unramified morphism}
[\Spec {B^{\operatorname{dist}}} / \GG_m \times_{{\mathbb F}} \GG_m ] \to
{\mathcal C}^{\mathrm{dd},1},
\end{equation}which is representable by algebraic spaces, of finite type,
and unramified,
whose fibres over finite type points are of degree $\leq 2$.
In the strict case, this induced morphism is in fact a monomorphism,
while in general, the restriction $\xi_X$ of $\xi^{{\operatorname{dist}}}$
induces a finite type monomorphism
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:monomorphism}
[X / \GG_m \times_{{\mathbb F}} \GG_m ] \hookrightarrow
{\mathcal C}^{\mathrm{dd},1}.
\end{equation}
{\em (2)}
If
$\kExt^1_{\K{\kappa(\mathfrak m)}}\bigl( \mathfrak{M}_{\kappa(\mathfrak m),\bar{x}},
\mathfrak{N}_{\kappa(\mathfrak m),\bar{y}}\bigr)=0$
for all maximal ideals $\mathfrak m$ of ${A^{\operatorname{k-free}}}$,
then the composite morphism
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:second unramified morphism}
[\Spec B^{{{\operatorname{k-free}}}}/\GG_m\times_{\mathbb F}\GG_m]\to {\mathcal C}^{\mathrm{dd},1}\to{\mathcal R}^{\mathrm{dd},1}
\end{equation}
is representable by algebraic spaces, of finite type,
and unramified,
with fibres of degree $\leq 2.$
In the strict case, this induced morphism is in fact a monomorphism,
while in general,
the composite morphism
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:second monomorphism}
[X/\GG_m\times_{\mathbb F}\GG_m]\hookrightarrow {\mathcal C}^{\mathrm{dd},1}\to{\mathcal R}^{\mathrm{dd},1}
\end{equation}
is a finite type monomorphism.
\end{prop}
\begin{remark}\label{rem:explain-hypotheses}
The failure of~\eqref{eqn:unramified morphism} to be a monomorphism in
general is due, effectively, to the possibility that an extension $\mathfrak{E}$ of some
$\mathfrak{M}_{R,x}$ by $\mathfrak{N}_{R,y}$ and an extension $\mathfrak{E}'$ of some
$\mathfrak{M}_{R,x'}$ by $\mathfrak{N}_{R,y'}$ might be isomorphic as
Breuil--Kisin modules while nevertheless $(x,y)\neq (x',y')$. As we
will see in the proof,
whenever this happens the map $\mathfrak{N}_{R,y} \to \mathfrak{E}\to \mathfrak{E}'
\to \mathfrak{M}_{R,x'}$ is nonzero, and then
$\mathfrak{E}' \otimes_R \kappa(\mathfrak{m})[1/u]$ is split for some maximal ideal $\mathfrak{m}$
of $R$. This explains why, to obtain a monomorphism,
we can restrict either to the strict case or to the substack of extensions
that are non-split after inverting $u$.
\end{remark}
\begin{remark}
We have stated this proposition in the strongest form that we
are able to prove, but in fact its full strength is not required
in the subsequent applications.
In particular, we don't need the precise bounds on the
degrees of the fibres.
\end{remark}
\begin{proof}[Proof of Proposition~{\ref{prop:
construction of family monomorphing to C and R}}]
By Corollary~\ref{cor: monomorphism to Spec A times C}
(which we can apply because Assumption~\ref{assumption:vanishing} is
satisfied, by Lemma~\ref{lem: generically no Homs})
the natural morphism $[\Spec {B^{\operatorname{dist}}}/ \GG_m \times_{{\mathbb F}} \GG_m ] \to
\Spec {A^{\operatorname{dist}}}\times_{{\mathbb F}} {\mathcal C}^{\mathrm{dd},1}$ is a finite type monomorphism,
and hence so is its restriction to the open substack
$[X/\GG_m\times_{{\mathbb F}} \GG_m]$ of its source.
Let us momentarily write ${\mathcal X}$ to denote either $[\Spec B^{{\operatorname{dist}}}/
\GG_m\times_{{\mathbb F}} \GG_m]$ or $[X/\GG_m\times_{{\mathbb F}} \GG_m]$. To show that
the finite type morphism
${\mathcal X} \to {\mathcal C}^{\mathrm{dd},1}$ is representable by algebraic spaces,
resp.\ unramified, resp.\ a
monomorphism,
it suffices to show that the corresponding diagonal morphism
${\mathcal X} \to {\mathcal X} \times_{{\mathcal C}^{\mathrm{dd},1}} {\mathcal X}$ is a monomorphism, resp.\ \'etale, resp.\
an isomorphism.
Now since ${\mathcal X} \to \Spec A^{{\operatorname{dist}}} \times_{{\mathbb F}} {\mathcal C}^{\mathrm{dd},1}$ is a monomorphism,
the diagonal morphism ${\mathcal X} \to {\mathcal X} \times_{\Spec A^{{\operatorname{dist}}}\times_{{\mathbb F}} {\mathcal C}^{\mathrm{dd},1}}
{\mathcal X}$ {\em is} an isomorphism,
and so it is equivalent to show that the morphism of products
$${\mathcal X} \times_{\Spec A^{{\operatorname{dist}}}\times_{{\mathbb F}} {\mathcal C}^{\mathrm{dd},1}} {\mathcal X} \to
{\mathcal X}\times_{{\mathcal C}^{\mathrm{dd},1}} {\mathcal X}$$
is a monomorphism, resp.\ \'etale, resp.\ an isomorphism.
This is in turn equivalent to showing the corresponding properties
for the morphisms
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:first closed immersion}
\Spec{B^{\operatorname{dist}}}\times_{\Spec {A^{\operatorname{dist}}}\times {\mathcal C}^{\mathrm{dd},1}}\Spec{B^{\operatorname{dist}}} \to
\Spec{B^{\operatorname{dist}}} \times_{{\mathcal C}^{\mathrm{dd},1}}\Spec{B^{\operatorname{dist}}}
\end{equation}
or
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:second closed immersion}
X \times_{\Spec {A^{\operatorname{dist}}}\times {\mathcal C}^{\mathrm{dd},1}}X \to
X \times_{{\mathcal C}^{\mathrm{dd},1}} X.
\end{equation}
Now each of these morphisms is a base-change of the diagonal
$\Spec A^{{\operatorname{dist}}}\to \Spec A^{{\operatorname{dist}}} \times_{{\mathbb F}} \Spec A^{{\operatorname{dist}}},$
which is a closed immersion (affine schemes being separated),
and so is itself a closed immersion. In particular,
it is a monomorphism, and so we have proved the representability
by algebraic spaces
of each of~(\ref{eqn:unramified morphism}) and~(\ref{eqn:monomorphism}).
Since the source and target of each of these monomorphisms
is of finite type
over~${\mathbb F}$, by Lemma~\ref{lem: fibre products of finite type},
in order to show that either of these monomorphisms is
an isomorphism,
it suffices to show that it induces a surjection on
$R$-valued points, for arbitrary finite type ${\mathbb F}$-algebras $R$.
Similarly, to check that the
closed immersion~(\ref{eqn:first closed immersion}) is \'etale,
it suffices to verify that it is formally smooth,
and for this it suffices to verify that it satisfies the
infinitesimal lifting property
with respect to square zero thickenings of finite type
${\mathbb F}$-algebras.
A morphism $\Spec R\to \Spec{B^{\operatorname{dist}}}
\times_{{\mathcal C}^{\mathrm{dd},1}}\Spec{B^{\operatorname{dist}}}$ corresponds to an isomorphism class
of tuples $(\alpha,\alpha',\beta:\mathfrak{E}\to\mathfrak{E}',\iota,\iota',\pi,\pi')$, where
\begin{itemize}
\item $\alpha,\alpha'$ are morphisms
$\alpha,\alpha':\Spec R\to\Spec {A^{\operatorname{dist}}}$,
\item $\beta:\mathfrak{E}\to\mathfrak{E}'$ is an isomorphism of Breuil--Kisin modules
with descent data and coefficients in $R$,
\item $\iota:\alpha^* \mathfrak{N} \to \mathfrak{E}$, $\iota':(\alpha')^* \mathfrak{N} \to \mathfrak{E}'$, $\pi:\mathfrak{E} \to
\alpha^* \mathfrak{M}$ and $\pi':\mathfrak{E}' \to
(\alpha')^* \mathfrak{M}$ are morphisms
with the properties that $0 \to \alpha^*\mathfrak{N} \buildrel \iota
\over \to \mathfrak{E} \buildrel \pi \over \to \alpha^* \mathfrak{M} \to 0$ and $0 \to (\alpha')^*\mathfrak{N} \buildrel \iota'
\over \to \mathfrak{E}' \buildrel \pi' \over \to (\alpha')^* \mathfrak{M} \to 0$ are both
short exact.
\end{itemize}
We begin by proving that~(\ref{eqn:first closed immersion}) satisfies
the infinitesimal lifting criterion (when $R$ is a finite type ${\mathbb F}$-algebra).
Thus we assume given a square-zero ideal $I \subset R$,
such that the induced morphism
$$\Spec R/I \to \Spec B^{{\operatorname{dist}}} \times_{{\mathcal C}^{\mathrm{dd},1}} \Spec B^{{\operatorname{dist}}}$$
factors through $\Spec B^{{\operatorname{dist}}}\times_{\Spec A^{{\operatorname{dist}}} \times_{{\mathbb F}} {\mathcal C}^{\mathrm{dd},1}}
\Spec B^{{\operatorname{dist}}}$. In terms of the data
$(\alpha,\alpha',\beta:\mathfrak{E}\to\mathfrak{E}',\iota,\iota',\pi,\pi')$,
we are assuming that $\alpha$ and $\alpha'$ coincide when restricted
to~$\Spec R/I$, and
we must show that $\alpha$ and $\alpha'$ themselves coincide.
To this end, we consider the composite
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:key composite}
\alpha^*\mathfrak{N}\stackrel{\iota}{\to}\mathfrak{E}\stackrel{\beta}{\to}\mathfrak{E}'\stackrel{\pi'}{\to}(\alpha')^*\mathfrak{M}.
\end{equation}
If we can show the vanishing of this morphism,
then by reversing the roles of $\mathfrak{E}$ and $\mathfrak{E}'$,
we will similarly deduce the vanishing of
$\pi \circ \beta^{-1} \circ \iota'$,
from which we can conclude that $\beta$ induces an isomorphism between
$\alpha^*\mathfrak{N}$ and $(\alpha')^*\mathfrak{N}$. Consequently, it also induces an
isomorphism between~$\alpha^*\mathfrak{M}$ and~$(\alpha')^*\mathfrak{M}$, so it follows from
Lemma~\ref{lem: isomorphic twists are the same twist} that $\alpha=\alpha'$,
as required.
We show the vanishing of~(\ref{eqn:key composite}).
Suppose to the contrary that it doesn't vanish,
so that we have a non-zero morphism
$\alpha^*\mathfrak{N}\to (\alpha')^*\mathfrak{M}.$
It follows from Proposition~\ref{prop:vanishing of homs} that,
for some maximal ideal $\mathfrak{m}$ of $R$, there exists a non-zero morphism
\[\alpha^*(\mathfrak{N})\otimes_R\kappa(\mathfrak m)
{\to}(\alpha')^*(\mathfrak{M})\otimes_R\kappa(\mathfrak
m).\]
By assumption $\alpha$ and $\alpha'$ coincide modulo $I$. Since $I^2 = 0$,
there is an inclusion $I \subset \mathfrak m$,
and so in particular we find that
$$(\alpha')^*(\mathfrak{M}) \otimes_R \kappa(\mathfrak m)
\buildrel \sim \over \longrightarrow \alpha^*(\mathfrak{M})\otimes_R \kappa(\mathfrak m).$$
Thus there exists a non-zero morphism
\[\alpha^*(\mathfrak{N})\otimes_R\kappa(\mathfrak m)
{\to}\alpha^*(\mathfrak{M})\otimes_R\kappa(\mathfrak m).\]
Then, by Lemma~\ref{lem:rank one isomorphism over a field},
after inverting~$u$ we obtain an isomorphism
\[\alpha^*(\mathfrak{N})\otimes_R\kappa(\mathfrak m) [1/u]
{\buildrel \sim \over \longrightarrow}\alpha^*(\mathfrak{M})\otimes_R\kappa(\mathfrak m)[1/u],\]
contradicting the assumption that $\alpha$ maps $\Spec R$
into $\Spec A^{{\operatorname{dist}}}$.
This completes the proof that~(\ref{eqn:first closed immersion})
is formally smooth, and hence that~(\ref{eqn:unramified morphism})
is unramified.
We next show that, in the strict case,
the closed immersion~(\ref{eqn:first closed immersion})
is an isomorphism, and thus that~(\ref{eqn:unramified morphism})
is actually a monomorphism. As noted above,
it suffices to show that~(\ref{eqn:first closed immersion})
induces a surjection on $R$-valued points for finite type ${\mathbb F}$-algebras $R$,
which in terms of the data
$(\alpha,\alpha',\beta:\mathfrak{E}\to\mathfrak{E}',\iota,\iota',\pi,\pi')$,
amounts to showing that necessarily $\alpha = \alpha'$.
Arguing just as we did above,
it suffices to show the vanishing of~(\ref{eqn:key composite}).
Again, we suppose for the sake of contradiction that~(\ref{eqn:key composite})
does not vanish. It then follows
from Proposition~\ref{prop:vanishing of homs} that
for some maximal ideal $\mathfrak{m}$ of $R$ there exists a non-zero
morphism \[\alpha^*(\mathfrak{N})\otimes_R\kappa(\mathfrak m)
{\to}(\alpha')^*(\mathfrak{M})\otimes_R\kappa(\mathfrak
m).\]
Then, by Lemma~\ref{lem:rank one isomorphism over a field},
after inverting~$u$ we obtain an isomorphism
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:key isomorphism}
\alpha^*(\mathfrak{N})\otimes_R\kappa(\mathfrak
m)[1/u]
\buildrel \sim \over \longrightarrow (\alpha')^*(\mathfrak{M})\otimes_R\kappa(\mathfrak m)[1/u].
\end{equation}
In the strict case, such an isomorphism cannot exist by assumption,
and thus~(\ref{eqn:key composite}) must vanish.
We now turn to proving that~(\ref{eqn:second closed immersion})
is an isomorphism. Just as in the preceding arguments,
it suffices to show that~(\ref{eqn:key composite}) vanishes, and
if not
then we obtain an isomorphism~(\ref{eqn:key isomorphism}).
Since
we are considering points of $X\times X$,
we are given that the
induced extension $\mathfrak{E}'\otimes_R\kappa(\mathfrak m)[1/u]$ is non-split,
so that the base change of the morphism~(\ref{eqn:key composite})
from $R[[u]]$ to $\kappa(\mathfrak m)((u))$ must vanish. Consequently
the composite $\beta\circ \iota$ induces a non-zero morphism
$\alpha^*(\mathfrak{N})\otimes_R\kappa(\mathfrak m)[1/u] \to (\alpha')^*(\mathfrak{N})
\otimes_R\kappa(\mathfrak m)[1/u],$
which, by Lemma~\ref{lem:rank one isomorphism over a field},
must in fact be an isomorphism. Comparing this isomorphism
with the isomorphism~(\ref{eqn:key isomorphism}),
we find that
$(\alpha')^*(\mathfrak{N})\otimes_R\kappa(\mathfrak m)[1/u]$
and
$(\alpha')^*(\mathfrak{M})\otimes_R\kappa(\mathfrak m)[1/u]$
are isomorphic, contradicting the fact that $\alpha'$ maps
$\Spec R$ to $\Spec A^{{\operatorname{dist}}}$.
Thus in fact the composite~(\ref{eqn:key composite}) must vanish,
and we have completed the proof that~(\ref{eqn:monomorphism})
is a monomorphism.
To complete the proof of part~(1) of the proposition,
we have to show that the fibres of~(\ref{eqn:unramified morphism})
are of degree at most $2$. We have already observed that $[\Spec {B^{\operatorname{dist}}}/ \GG_m \times_{{\mathbb F}} \GG_m ] \to
\Spec {A^{\operatorname{dist}}}\times_{{\mathbb F}} {\mathcal C}^{\mathrm{dd},1}$ is a monomorphism, so it is
enough to check that given a finite extension~${\mathbb F}'/{\mathbb F}$ and an
isomorphism class of tuples $(\alpha,\alpha',\beta:\mathfrak{E}\to\mathfrak{E}',\iota,\iota',\pi,\pi')$, where
\begin{itemize}
\item $\alpha,\alpha'$ are distinct morphisms
$\alpha,\alpha':\Spec {\mathbb F}'\to\Spec {A^{\operatorname{dist}}}$,
\item $\beta:\mathfrak{E}\to\mathfrak{E}'$ is an isomorphism of Breuil--Kisin modules
with descent data and coefficients in ${\mathbb F}'$,
\item $\iota:\alpha^* \mathfrak{N} \to \mathfrak{E}$, $\iota':(\alpha')^* \mathfrak{N} \to \mathfrak{E}'$, $\pi:\mathfrak{E} \to
\alpha^* \mathfrak{M}$ and $\pi':\mathfrak{E}' \to
(\alpha')^* \mathfrak{M}$ are morphisms
with the properties that $0 \to \alpha^*\mathfrak{N} \buildrel \iota
\over \to \mathfrak{E} \buildrel \pi \over \to \alpha^* \mathfrak{M} \to 0$ and $0 \to (\alpha')^*\mathfrak{N} \buildrel \iota'
\over \to \mathfrak{E}' \buildrel \pi' \over \to (\alpha')^* \mathfrak{M} \to 0$ are both
short exact.
\end{itemize}
then~$\alpha'$ is determined by the data of~$\alpha$ and~$\mathfrak{E}$. To see
this, note that since we are assuming that~$\alpha'\ne\alpha$, the
arguments above show that~(\ref{eqn:key composite}) does not vanish,
so that (since~${\mathbb F}'$ is a field) we have an
isomorphism~$\alpha^*\mathfrak{N}[1/u]\stackrel{\sim}{\To}(\alpha')^*\mathfrak{M}[1/u]$. Since we are
over~${A^{\operatorname{dist}}}$, it follows that~$\mathfrak{E}[1/u]\cong\mathfrak{E}'[1/u]$ is split, and
that we also have an
isomorphism~$\alpha^*\mathfrak{M}[1/u]\stackrel{\sim}{\To}(\alpha')^*\mathfrak{N}[1/u]$. Thus
if~$\alpha''$ is another possible choice for~$\alpha'$, we have
$(\alpha'')^*\mathfrak{M}[1/u]\stackrel{\sim}{\To}(\alpha')^*\mathfrak{M}[1/u]$ and
$(\alpha'')^*\mathfrak{N}[1/u]\stackrel{\sim}{\To}(\alpha')^*\mathfrak{N}[1/u]$,
whence~$\alpha''=\alpha'$ by Lemma~\ref{lem: isomorphic twists are the
same twist}, as required.
We turn to proving~(2), and thus
assume that
$$\kExt^1_{\K{\kappa(\mathfrak m)}}\bigl( \mathfrak{M}_{\kappa(\mathfrak m),\bar{x}},
\mathfrak{N}_{\kappa(\mathfrak m),\bar{y}}\bigr)=0$$
for all maximal ideals $\mathfrak m$ of ${A^{\operatorname{k-free}}}$.
Lemma~\ref{lem:C to R, with maps to Akfree} shows that
$$\Spec B^{{{\operatorname{k-free}}}} \times_{\Spec A^{{{\operatorname{k-free}}}}\times_{{\mathbb F}}
{\mathcal C}^{\mathrm{dd},1}}\Spec B^{{{\operatorname{k-free}}}} \to
\Spec B^{{{\operatorname{k-free}}}} \times_{\Spec A^{{{\operatorname{k-free}}}}\times_{{\mathbb F}}
{\mathcal R}^{\mathrm{dd},1}}\Spec B^{{{\operatorname{k-free}}}}$$
is an isomorphism, from which we deduce
that
$$[\Spec B^{{{\operatorname{k-free}}}}/\GG_m\times_{{\mathbb F}} \GG_m] \to \Spec A^{{{\operatorname{k-free}}}}\times_{{\mathbb F}}
{\mathcal R}^{\mathrm{dd},1}$$
is a monomorphism.
Using this as input, the claims of~(2) may be proved in an essentially identical
fashion to those of~(1).
\end{proof}
\begin{cor}
\label{cor: dimension of families of extensions}
The dimension of $\overline{{\mathcal C}}(\mathfrak{M},\mathfrak{N})$
is equal to
the rank of $T_{A^{\operatorname{dist}}}$ as a projective ${A^{\operatorname{dist}}}$-module.
If
$$\kExt^1_{\K{\kappa(\mathfrak m)}}\bigl( \mathfrak{M}_{\kappa(\mathfrak m),\bar{x}},
\mathfrak{N}_{\kappa(\mathfrak m),\bar{y}}\bigr)=0$$
for all maximal ideals $\mathfrak m$ of $A^{{{\operatorname{k-free}}}}$,
then the dimension of $\overline{{\mathcal Z}}(\mathfrak{M},\mathfrak{N})$ is also equal to this rank,
while
if
$$\kExt^1_{\K{\kappa(\mathfrak m)}}\bigl( \mathfrak{M}_{\kappa(\mathfrak m),\bar{x}},
\mathfrak{N}_{\kappa(\mathfrak m),\bar{y}}\bigr) \neq 0$$
for all maximal ideals $\mathfrak m$ of $A^{{{\operatorname{k-free}}}}$,
then the dimension of $\overline{{\mathcal Z}}(\mathfrak{M},\mathfrak{N})$ is strictly less than this rank.
\end{cor}
\begin{proof} The dimension of $[\Spec{B^{\operatorname{dist}}}/\GG_m\times_{{\mathbb F}} \GG_m]$ is equal to
the rank of $T_{A^{{\operatorname{dist}}}}$ (it is the quotient
by a two-dimensional group of a vector bundle whose rank equals
that of $T_{A^{{\operatorname{dist}}}}$, over a two-dimensional base). By Lemma~\ref{lem: scheme theoretic images coincide},
$\overline{{\mathcal C}}(\mathfrak{M},\mathfrak{N})$ is the scheme-theoretic image
of the morphism
$[\Spec{B^{\operatorname{dist}}}/\GG_m\times_{{\mathbb F}}\GG_m] \to {\mathcal C}^{\mathrm{dd},1}$
provided by
Proposition~\ref{prop: construction of family monomorphing to
C and R}(1), which (by that proposition) is representable
by algebraic spaces and unramified.
Since such a morphism is locally quasi-finite
(in fact, in this particular case,
we have shown that the fibres of this morphism have degree at
most~$2$), \cite[\href{https://stacks.math.columbia.edu/tag/0DS6}{Tag 0DS6}]{stacks-project}
ensures
that $\overline{{\mathcal C}}(\mathfrak{M},\mathfrak{N})$ has the claimed dimension.
If
$\kExt^1_{\K{\kappa(\mathfrak m)}}\bigl( \mathfrak{M}_{\kappa(\mathfrak m),\bar{x}},
\mathfrak{N}_{\kappa(\mathfrak m),\bar{y}}\bigr) = 0$
for all maximal ideals $\mathfrak m$ of $A^{{{\operatorname{k-free}}}}$,
then an identical argument using Proposition~\ref{prop: construction of family monomorphing to
C and R}(2) implies the claim regarding the dimension
of $\overline{{\mathcal Z}}(\mathfrak{M},\mathfrak{N})$.
Finally, suppose that
$$\kExt^1_{\K{\kappa(\mathfrak m)}}\bigl( \mathfrak{M}_{\kappa(\mathfrak m),\bar{x}},
\mathfrak{N}_{\kappa(\mathfrak m),\bar{y}}\bigr) \neq 0$$
for all maximal ideals $\mathfrak m$ of $A^{{{\operatorname{k-free}}}}$.
Then the composite $[\Spec{B^{\operatorname{k-free}}}/\GG_m\times_{{\mathbb F}} \GG_m] \to {\mathcal C}^{\mathrm{dd},1} \to {\mathcal R}^{\mathrm{dd},1}$
has the property that for every point $t$ in the source, the fibre over the
image of $t$ is positive dimensional. \cite[\href{https://stacks.math.columbia.edu/tag/0DS6}{Tag 0DS6}]{stacks-project} then implies the remaining
claim of the present corollary.
\end{proof}
\section{Extensions of rank one Breuil--Kisin modules}
\label{sec:extensions-of-rank-one}
\subsection{Rank one modules over finite fields, and their extensions}
\label{subsec: Diamond--Savitt}
We now wish to apply the results of the previous section to study
the geometry of our various moduli stacks. In order to do this, it
will be convenient for us to have an explicit description of the
rank one Breuil--Kisin modules of height at most one with descent data over a
finite field of characteristic $p$, and of their possible extensions.
Many of the results in this section are proved (for $p>2$) in~\cite[\S 1]{DiamondSavitt} in the context of
Breuil modules, and in those cases it
is possible simply to translate the relevant statements to the Breuil--Kisin module context.
Assume from now on that $e(K'/K)$ is divisible by $p^{f}-1$, so
that we are in the setting of~\cite[Remark 1.7]{DiamondSavitt}.
(Note that the parallel in \cite{DiamondSavitt} of our field
extension $K'/K$, with ramification and inertial indices $e',f'$ and
$e,f$ respectively, is the extension $K/L$ with indices $e,f$ and
$e',f'$ respectively.)
Let ${\mathbb F}$ be a finite subfield of $\Fbar_p$ containing the image of some (so all)
embedding(s) $k'\hookrightarrow\Fbar_p$.
Recall that for each
$g\in\Gal(K'/K)$ we write $g(\pi')/\pi'=h(g)$ with $h(g)\in
\mu_{e(K'/K)}(K') \subset W(k')$. We abuse notation and
denote the image of $h(g)$ in $k'$ again by $h(g)$, so that we obtain
a map
$\hchar \colon \Gal(K'/K) \to (k')^{\times}$.
Note that~$\hchar$ restricts to a character on the inertia
subgroup $I(K'/K)$, and is itself a character when $e(K'/K) = p^f-1$.
\begin{lem}
\label{lem:rank one Kisin modules with descent data}Every rank one
Breuil--Kisin module of height at most one with descent data and ${\mathbb F}$-coefficients is isomorphic
to one of the modules $\mathfrak{M}(r,a,c)$ defined by:
\begin{itemize}
\item $\mathfrak{M}(r,a,c)_i={\mathbb F}[[u]]\cdot m_i$,
\item $\Phi_{\mathfrak{M}(r,a,c),i}(1\otimes m_{i-1})=a_{i} u^{r_{i}} m_{i}$,
\item $\hat{g}(\sum_i m_i)=\sum_i h(g)^{c_i} m_i$ for all $g\in\Gal(K'/K)$,
\end{itemize}
where $a_i\in{\mathbb F}^\times$, $r_i\in\{0,\dots,e'\}$ and $c_i\in{\mathbb Z}/e(K'/K)$ are sequences
satisfying
$pc_{i-1}\equiv c_{i}+r_{i}\pmod{e(K'/K)}$, the sums in the third
bullet point run from $0$ to $f'-1$, and the $r_i,c_i,a_i$ are periodic with
period dividing $f$.
Furthermore, two such modules $\mathfrak{M}(r,a,c)$ and $\mathfrak{M}(s,b,d)$ are
isomorphic if and only if $r_i=s_i$ and $c_i=d_i$ for all $i$, and $\prod_{i=0}^{f-1}a_i=\prod_{i=0}^{f-1}b_i$.
\end{lem}
\begin{proof}
The proof is elementary; see e.g.\ \cite[Thm.~2.1,
Thm.~3.5]{SavittRaynaud} for proofs of analogous results.
\end{proof}
We will sometimes refer to the element $m = \sum_i m_i \in
\mathfrak{M}(r,a,c)$ as the standard generator of $\mathfrak{M}(r,a,c)$.
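As a consistency check (not needed in what follows), the congruence
$pc_{i-1}\equiv c_{i}+r_{i}\pmod{e(K'/K)}$ in the statement above can be
recovered from the requirement that $\Phi_{\mathfrak{M}(r,a,c)}$ be compatible with
the descent data: recalling that $g$ acts on $u$ via $u\mapsto h(g)u$, and
that $\varphi(h(g))=h(g)^{p}$ (as $h(g)$ is a root of unity in $W(k')$, so
that $1\otimes h(g)^{c_{i-1}}m_{i-1}=h(g)^{pc_{i-1}}(1\otimes m_{i-1})$ in
$\varphi^{*}\mathfrak{M}$), we compute
\[
\Phi\bigl(\hat{g}(1\otimes m_{i-1})\bigr)=h(g)^{pc_{i-1}}\,a_{i}u^{r_{i}}m_{i},
\qquad
\hat{g}\bigl(\Phi(1\otimes m_{i-1})\bigr)=h(g)^{c_{i}+r_{i}}\,a_{i}u^{r_{i}}m_{i},
\]
and these agree for all $g\in\Gal(K'/K)$ precisely when
$pc_{i-1}\equiv c_{i}+r_{i}\pmod{e(K'/K)}$.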
\begin{rem}
When $p > 2$
many of the results in this section (such as the above) can
be obtained by translating \cite[Lem.\ 1.3, Cor.\
1.8]{DiamondSavitt} from the Breuil module context to the Breuil--Kisin module context.
We briefly recall the dictionary between these two categories
(\emph{cf.}\ \cite[\S 1.1.10]{kis04}). If $A$ is a finite local
$\Z_p$-algebra, write $S_A = S \otimes_{\Z_p} A$, where $S$ is Breuil's
ring. We regard $S_A$ as a $\mathfrak{S}_A$-algebra via
$u\mapsto u$, and we let $\varphi:\mathfrak{S}_A\to S_A$ be the composite of this map with
$\varphi$ on $\mathfrak{S}_A$. Then given a Breuil--Kisin module of height at most~$1$ with descent data $\mathfrak{M}$,
we
set ${\mathcal M}:=S_A\otimes_{\varphi,\mathfrak{S}_A}\mathfrak{M}$. We have a map $1\otimes\varphi_\mathfrak{M}:S_A\otimes_{\varphi,\mathfrak{S}_A}\mathfrak{M}\to S_A \otimes_{\mathfrak{S}_A}\mathfrak{M}$,
and we set \[\Fil^1{\mathcal M}:=\{x\in{\mathcal M}\ :\
(1\otimes \varphi_\mathfrak{M})(x)\in\Fil^1S_A\otimes_{\mathfrak{S}_A}\mathfrak{M}\subset
S_A\otimes_{\mathfrak{S}_A}\mathfrak{M}\}\]and define $\varphi_1:\Fil^1{\mathcal M}\to{\mathcal M}$ as the
composite \[\Fil^1{\mathcal M}\overset{1\otimes\varphi_\mathfrak{M}}{\longrightarrow}\Fil^1S_A\otimes_{\mathfrak{S}_A}\mathfrak{M}\overset{\varphi_1\otimes
1}{\longrightarrow}S_A\otimes_{\varphi,\mathfrak{S}_A}\mathfrak{M}={\mathcal M}.\]Finally, we define $\hat{g}$ on ${\mathcal M}$
via $\hat{g}(s\otimes m)=g(s)\otimes \hat{g}(m)$. One checks without difficulty
that this makes ${\mathcal M}$ a strongly divisible module with descent data
(\emph{cf.}\ the
proofs of~\cite[Proposition 1.1.11, Lemma 1.2.4]{kis04}).
In the correspondence described above, the Breuil--Kisin module $\mathfrak{M}((r_i),(a_i),(c_i))$ corresponds to the Breuil module ${\mathcal M}((e'-r_i),(a_i),(pc_{i-1}))$
of~\cite[Lem.\ 1.3]{DiamondSavitt}.
\end{rem}
\begin{defn}
If $\mathfrak{M} = \mathfrak{M}(r,a,c)$ is a rank one Breuil--Kisin module as described in the
preceding lemma, we set $\alpha_i(\mathfrak{M}) := (p^{f'-1} r_{i-f'+1} + \cdots +
r_{i})/(p^{f'} - 1)$ (equivalently, $(p^{f-1} r_{i-f+1} + \cdots
+ r_{i})/(p^f-1)$). We may abbreviate $\alpha_i(\mathfrak{M})$ simply as $\alpha_i$
when $\mathfrak{M}$ is clear from the context.
It follows easily from the congruence $r_i
\equiv pc_{i-1} - c_i \pmod{e(K'/K)}$ together with the hypothesis
that $p^f-1 \mid e(K'/K)$ that $\alpha_i \in {\mathbb Z}$ for all $i$. Note
that the $\alpha_i$'s are the unique solution to the system of
equations $p \alpha_{i-1} - \alpha_i = r_i$ for all $i$. Note also
that $(p^f-1)(c_i-\alpha_i) \equiv 0 \pmod{e(K'/K)}$, so that
$\hchar^{c_i-\alpha_i}$ is a character with image in $k^{\times}$.
\end{defn}
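For the reader's convenience, the claim that the $\alpha_i$ solve the displayed
system is the following telescoping computation: writing
$\alpha_{i}=\bigl(\sum_{m=0}^{f'-1}p^{m}r_{i-m}\bigr)/(p^{f'}-1)$, we have
\[
p\alpha_{i-1}-\alpha_{i}
=\frac{\sum_{m=1}^{f'}p^{m}r_{i-m}-\sum_{m=0}^{f'-1}p^{m}r_{i-m}}{p^{f'}-1}
=\frac{p^{f'}r_{i-f'}-r_{i}}{p^{f'}-1}
=r_{i},
\]
the last equality holding because $r_{i-f'}=r_{i}$.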
\begin{lem}
\label{lem: generic fibres of rank 1 Kisin modules}We have
$T(\mathfrak{M}(r,a,c))=\left(\sigma_i\circ\hchar^{c_i-\alpha_{i}}\cdot\mathrm{ur}_{\prod_{i=0}^{f-1}a_i}\right)|_{G_{K_\infty}}$,
where $\mathrm{ur}_\lambda$ is the unramified character of $G_K$ sending
geometric Frobenius to $\lambda$.
\end{lem}
\begin{proof}
Set $\mathfrak{N} = \mathfrak{M}(0,(a_i),0)$, so that $\mathfrak{N}$ is effectively a Breuil--Kisin module without
descent data. Then for $\mathfrak{N}$ this result follows from the second paragraph of the
proof \cite[Lem.~6.3]{MR3164985}. (Note that the functor $T_{\mathfrak{S}}$ of
\emph{loc.\ cit.} is dual to our functor $T$;\ \emph{cf}.~\cite[A\
1.2.7]{MR1106901}. Note also that the fact that the base field is
unramified in \emph{loc.\ cit.} does not change the calculation.) If
$n = \sum n_i$ is the standard generator of $\mathfrak{N}$ as in Lemma~\ref{lem:rank one Kisin modules with descent data}, let
$\gamma \in \Z_p^{\mathrm{un}} \otimes_{\Z_p} (k' \otimes_{\F_p} {\mathbb F})$ be an element
so that $\gamma n \in
({\mathcal O}_{\widehat{{\mathcal E}^{\text{nr}}}}\otimes_{\mathfrak{S}[1/u]} \mathfrak{N}[1/u])^{\varphi
= 1}$.
Now for $\mathfrak{M}$ as in the statement of the lemma it is straightforward to verify
that $$\gamma \sum_{i=0}^{f'-1} [\underline{\pi}']^{-\alpha_{i}} \otimes
m_i \in ({\mathcal O}_{\widehat{{\mathcal E}^{\text{nr}}}}\otimes_{\mathfrak{S}[1/u]}
\mathfrak{M}[1/u])^{\varphi=1},$$ and the result follows.
\end{proof}
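For the reader's convenience, we note that the verification in the last
sentence of the proof comes down to the computation (writing
$[\underline{\pi}']$ also for the image of $u$, so that $\varphi$ raises it
to the $p$-th power)
\[
\varphi\bigl([\underline{\pi}']^{-\alpha_{i-1}}\otimes m_{i-1}\bigr)
=a_{i}\,[\underline{\pi}']^{-p\alpha_{i-1}+r_{i}}\otimes m_{i}
=a_{i}\,[\underline{\pi}']^{-\alpha_{i}}\otimes m_{i},
\]
using $p\alpha_{i-1}-\alpha_{i}=r_{i}$; the coefficients $a_{i}$ are then
accounted for by the element $\gamma$, exactly as in the case of~$\mathfrak{N}$.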
One immediately deduces the following.
\begin{cor}
\label{cor: Kisin modules with the same generic fibre} Let $\mathfrak{M}=\mathfrak{M}(r,a,c)$ and
$\mathfrak{N}=\mathfrak{M}(s,b,d)$ be rank one Breuil--Kisin modules with descent data as
above. We have $T(\mathfrak{M})=T(\mathfrak{N})$ if and only if $c_i - \alpha_i(\mathfrak{M})
\equiv d_i - \alpha_i(\mathfrak{N}) \pmod{e(K'/K)}$ for some $i$ {\upshape(}hence for all $i${\upshape)} and $\prod_{i=0}^{f-1}a_i=\prod_{i=0}^{f-1}b_i$.
\end{cor}
\begin{lem}
\label{lem: maps between rank 1 Kisin modules} In the notation of the
previous Corollary, there is a nonzero map $\mathfrak{M}\to\mathfrak{N}$
\emph{(}equivalently, $\dim_{{\mathbb F}} \Hom_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})=1$\emph{)} if
and only if $T(\mathfrak{M})=T(\mathfrak{N})$ and $\alpha_i(\mathfrak{M}) \ge\alpha_i(\mathfrak{N})$ for each $i$.
\end{lem}
\begin{proof}
The proof is essentially the same as that of \cite[Lem.\
1.6]{DiamondSavitt}. (Indeed, when $p > 2$ this
lemma can once again be proved by translating directly from
\cite{DiamondSavitt} to the Breuil--Kisin module context.)
\end{proof}
Using the material of Section~\ref{subsec:ext
generalities},
one can compute $\Ext^1(\mathfrak{M},\mathfrak{N})$ for any pair of rank
one Breuil--Kisin modules $\mathfrak{M},\mathfrak{N}$ of height at most one. We begin with the
following explicit description of the complex $C^{\bullet}(\mathfrak{N})$ of Section~\ref{subsec:ext
generalities}.
\begin{defn}\label{notn:calh}
We write $\Czero_{u} = \Czero_{u}(\mathfrak{M},\mathfrak{N}) \subset {\mathbb F}((u))^{{\mathbb Z}/f{\mathbb Z}}$ for
the space of $f$-tuples $(\mu_i)$ such that each nonzero term of
$\mu_i$ has degree congruent to $c_i - d_i \pmod{e(K'/K)}$, and set
$\scrC^0 = \Czero_{u} \cap {\mathbb F}[[u]]^{{\mathbb Z}/f{\mathbb Z}}$.
We further define $\Cone_{u} = \Cone_{u}(\mathfrak{M},\mathfrak{N}) \subset
{\mathbb F}((u))^{{\mathbb Z}/f{\mathbb Z}}$ to be
the space of $f$-tuples $(h_i)$ such that each nonzero term of $h_i$
has degree congruent to $r_i + c_i - d_i \pmod{e(K'/K)}$, and set
$\scrC^1 = \Cone_{u} \cap {\mathbb F}[[u]]^{{\mathbb Z}/f{\mathbb Z}}$. There is a map $\partial \colon \Czero_{u} \to
\Cone_{u}$ defined by
\[ \partial(\mu_i) = (-a_i u^{r_i} \mu_i + b_i \varphi(\mu_{i-1}) u^{s_i}). \]
Evidently this restricts to a map
$\partial \colon \scrC^0 \to \scrC^1$.
\end{defn}
\begin{lemma}\label{lem:explicit-complex}
There is an isomorphism of complexes
\[ [ \scrC^0 \xrightarrow{\partial} \scrC^1 ] \buildrel\sim\over\to C^{\bullet}(\mathfrak{N})\]
in which $(\mu_i) \in \scrC^0$ is sent to the map $m_i \mapsto \mu_i n_i$
in $C^0(\mathfrak{N})$, and $(h_i) \in \scrC^1$ is sent to the map $(1\otimes
m_{i-1}) \mapsto h_i n_i$ in $C^1(\mathfrak{N})$.
\end{lemma}
\begin{proof}
Each element of $\Hom_{\mathfrak{S}_{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ has the form $m_i \mapsto \mu_i
n_i$ for some $f'$-tuple $(\mu_i)_{i \in {\mathbb Z}/f'{\mathbb Z}}$ of elements of ${\mathbb F}[[u]]$.
The condition that this map
is $\Gal(K'/K)$-equivariant
is easily seen to be
equivalent to the conditions that $(\mu_i)$ is periodic with period dividing $f$, and
that each nonzero term of $\mu_i$ has degree congruent to
$c_{i}-d_{i} \pmod{e(K'/K)}$. (For the former, consider the action
of a lift $g \in \Gal(K'/K)$, satisfying $h(g)=1$, of a generator of $\Gal(k'/k)$; for the
latter, consider the action of $I(K'/K)$;\ \emph{cf}.\ the proof of
\cite[Lem.~1.5]{DiamondSavitt}.) It follows that the map $\scrC^0
\to C^0(\mathfrak{N})$ in the statement of the Lemma is an isomorphism. An
essentially identical argument shows that the given map $\scrC^1 \to
C^1(\mathfrak{N})$ is an isomorphism.
To conclude, it suffices to observe that if $\alpha \in C^0(\mathfrak{N})$ is given by $m_i
\mapsto \mu_i n_i$ with $(\mu_i)_i \in \scrC^0$ then
$\delta(\alpha) \in C^1(\mathfrak{N})$ is the map
given by $$(1\otimes m_{i-1}) \mapsto (-a_i u^{r_i} \mu_i + b_i
\varphi(\mu_{i-1}) u^{s_i}) n_i,$$ which follows by a direct calculation.
\end{proof}
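Explicitly, with the conventions of Section~\ref{subsec:ext generalities}
(so that, up to the sign convention chosen there, $\delta(\alpha)$ is the
difference of the composites $\Phi_{\mathfrak{N}}\circ\varphi^{*}\alpha$ and
$\alpha\circ\Phi_{\mathfrak{M}}$), the direct calculation in question is
\[
\Phi_{\mathfrak{N}}\bigl(\varphi^{*}\alpha(1\otimes m_{i-1})\bigr)
=\Phi_{\mathfrak{N}}\bigl(1\otimes\mu_{i-1}n_{i-1}\bigr)
=b_{i}\varphi(\mu_{i-1})u^{s_{i}}n_{i},
\qquad
\alpha\bigl(\Phi_{\mathfrak{M}}(1\otimes m_{i-1})\bigr)
=a_{i}u^{r_{i}}\mu_{i}n_{i}.
\]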
It follows from Corollary~\ref{cor:complex computes Hom and Ext}
that $\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N}) \cong \coker \partial$.
If $h \in \scrC^1$, we write $\mathfrak{P}(h)$ for the element of
$\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ represented by $h$ under this isomorphism.
\begin{remark}
\label{prop: extensions of rank one Kisin modules}Let $\mathfrak{M}=\mathfrak{M}(r,a,c)$ and
$\mathfrak{N}=\mathfrak{M}(s,b,d)$ be rank one Breuil--Kisin modules with descent data as in
Lemma~{\em \ref{lem:rank one Kisin modules with descent data}}. It
follows from the proof of Lemma~\ref{lem: C computes Ext^1}, and in
particular the description of the map~\eqref{eqn:explicit
embedding} found there, that the extension $\mathfrak{P}(h)$ is given by
the formulas
\begin{itemize}
\item $\mathfrak{P}_i={\mathbb F}[[u]]\cdot m_i + {\mathbb F}[[u]]\cdot n_i$,
\item $\Phi_{\mathfrak{P},i}(1\otimes n_{i-1})=b_{i} u^{s_{i}}n_{i}$,
$\Phi_{\mathfrak{P},i}(1\otimes m_{i-1})=a_{i}u^{r_{i}}m_{i}+h_{i}
n_{i}$.
\item $\hat{g}(\sum_i m_i)=\sum_i h(g)^{c_i}m_i$,
$\hat{g}(\sum_i n_i)=h(g)^{d_i} \sum_i n_i$ for all $g\in\Gal(K'/K)$.
\end{itemize}
From this description it is easy to see that the extension $\mathfrak{P}(h)$
has height at most $1$ if and only if each $h_i$ is divisible by $u^{r_i+s_i-e'}$.
\end{remark}
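One way to make the last sentence explicit (a sketch, using that
$E(u)\equiv u^{e'}$ in ${\mathbb F}[[u]]$, so that height at most $1$ amounts to the
integrality of $u^{e'}\Phi_{\mathfrak{P},i}^{-1}$): in the bases
$(1\otimes m_{i-1},1\otimes n_{i-1})$ and $(m_{i},n_{i})$, the map
$\Phi_{\mathfrak{P},i}$ has matrix
\[
\begin{pmatrix}a_{i}u^{r_{i}}&0\\h_{i}&b_{i}u^{s_{i}}\end{pmatrix},
\qquad\text{so}\qquad
u^{e'}\Phi_{\mathfrak{P},i}^{-1}
=\frac{u^{e'}}{a_{i}b_{i}u^{r_{i}+s_{i}}}
\begin{pmatrix}b_{i}u^{s_{i}}&0\\-h_{i}&a_{i}u^{r_{i}}\end{pmatrix},
\]
and the only entry which can fail to lie in ${\mathbb F}[[u]]$ is
$-u^{e'}h_{i}/(a_{i}b_{i}u^{r_{i}+s_{i}})$; its integrality is exactly the
condition $u^{r_{i}+s_{i}-e'}\mid h_{i}$ (which is vacuous when
$r_{i}+s_{i}\le e'$).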
\begin{thm}\label{thm: extensions of rank one Kisin modules}
The dimension of $\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ is given by the formula
\[\Delta+\sum_{i=0}^{f-1}\#\biggl\{j\in[0,r_i):j\equiv r_i+c_{i}-d_{i}\pmod{e(K'/K)}\biggr\} \]
where $\Delta = \dim_{{\mathbb F}} \Hom_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ is $1$ if there is a nonzero map $\mathfrak{M}\to\mathfrak{N}$ and $0$
otherwise, while the subspace consisting of extensions of height at most $1$
has dimension
\[\Delta+\sum_{i=0}^{f-1}\#\biggl\{j\in[\max(0,r_i+s_i-e'),r_i):j\equiv r_i+c_{i}-d_{i}\pmod{e(K'/K)}\biggr\} .\]
\end{thm}
\begin{proof}
When $p > 2$, this result (for extensions of height at most $1$) can be obtained by translating
\cite[Thm.~1.11]{DiamondSavitt} from Breuil modules to Breuil--Kisin
modules.
We argue in the same spirit as \cite{DiamondSavitt} using the generalities of Section~\ref{subsec:ext generalities}.
Choose~$N$ as in
Lemma~\ref{lem:truncation argument used to prove f.g. of Ext Q
version}(2).
For brevity we
write $C^{\bullet}$ in lieu of $C^{\bullet}(\mathfrak{N})$. We now use the
description of~$C^{\bullet}$ provided by
Lemma~\ref{lem:explicit-complex}.
As we have noted, $C^0$ consists of the maps $m_i \mapsto \mu_i
n_i$ with $(\mu_i) \in \scrC^0$.
Since $(\varphi^*_{\mathfrak{M}})^{-1}(v^N C^1)$ contains precisely the maps $m_i
\mapsto \mu_i n_i$ in $C^0$ such that $v^{N} \mid u^{r_i} \mu_i$, we
compute that $\dim_{{\mathbb F}} C^0/\bigl((\varphi^*_{\mathfrak{M}})^{-1}(v^N C^1)\bigr)$
is the quantity
$$ Nf - \sum_{i=0}^{f-1} \#\biggl\{ j \in [e(K'/K) N-r_i, e(K'/K) N) : j \equiv c_i-d_i
\pmod{e(K'/K)}\biggr\}.$$ We have
$\dim_{{\mathbb F}} C^1/v^NC^1 = Nf$, so
our formula for the dimension of $\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ now follows
from Lemma~\ref{lem:truncation argument used to prove f.g. of Ext Q
version}.
\end{proof}
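As a toy numerical illustration of the first formula (ours, and not needed in
the sequel): take $p=3$, $f=f'=1$ and $e=3$, so that $e(K'/K)=2$ and $e'=6$.
If $r_{0}=5$ and $c_{0}=d_{0}$, then the formula counts the $j\in[0,5)$ with
$j\equiv 5\equiv 1\pmod{2}$, namely $j\in\{1,3\}$, so that
$\dim_{{\mathbb F}}\Ext^{1}_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})=\Delta+2$.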
\begin{remark}\label{rem:representatives-for-ext}
One can show exactly as in \cite{DiamondSavitt} that each element of $\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ can be written uniquely in
the form $\mathfrak{P}(h)$ for $h \in \scrC^1$
with $\deg(h_i) < r_i$, except that when there exists a nonzero morphism
$\mathfrak{M}\to\mathfrak{N}$, the polynomials $h_i$ for $f \mid i$ may also have a term of degree
$\alpha_0(\mathfrak{M})-\alpha_0(\mathfrak{N})+r_0$ in common. Since we will not need
this fact we omit the proof.
\end{remark}
\subsection{Extensions of shape $J$}
\label{sec:extensions-shape-J}
We now begin the work of showing, for each non-scalar tame type $\tau$,
that ${\mathcal C}^{\tau,\operatorname{BT},1}$ has $2^f$ irreducible
components, indexed by the subsets $J$ of~$\{0,1,\dots,f-1\}$. We will
also describe the irreducible
components of~${\mathcal Z}^{\tau,1}$.
The proof of this hinges on examining the extensions considered in
Theorem~\ref{thm: extensions of rank one Kisin modules},
and then applying the results of Subsection~\ref{subsec:universal families}.
We will show that
most of these families of extensions have
positive codimension in ${\mathcal C}^{\tau,\operatorname{BT},1}$, and are thus negligible from the
point of view of determining irreducible components. By a base change
argument, we will also be able to show that we can neglect the irreducible
Breuil--Kisin modules. The rest of Section~\ref{sec: extensions of rank one Kisin modules} is devoted to establishing the
necessary bounds on the dimension of the various families of
extensions, and to studying the map from ${\mathcal C}^{\tau,\operatorname{BT},1}$ to
${\mathcal R}^{\mathrm{dd},1}$.
We now introduce notation that we will use for the remainder of the
paper.
We fix a
tame inertial type $\tau=\eta\oplus\eta'$ with coefficients in $\Qbar_p$.
We allow the case of scalar
types (that is, the case $\eta=\eta'$).
Assume that the subfield ${\mathbb F}$ of $\Fbar_p$ is large enough so that the reductions modulo $\mathfrak{m}_{\Z_p}$ of $\eta$ and
$\eta'$ (which by abuse of notation we continue to denote $\eta,\eta'$) have image in ${\mathbb F}$.
We also fix a uniformiser $\pi$ of~$K$.
\begin{remark}\label{rk:ordering}
We stress that when we
write $\tau=\eta\oplus\eta'$, we are implicitly ordering
$\eta,\eta'$. Much of the notation in this section depends on
distinguishing $\eta,\eta'$, as do some of the constructions later in
paper (in particular, those using the
map to the Dieudonn\'e stack of Section~\ref{sec:dieudonne-stacks}).
\end{remark}
As in Subsection~\ref{sec:dieudonne-stacks}, we make
the following ``standard choice'' for the extension~$K'/K$: if $\tau$ is a tame
principal series type, we take $K'=K(\pi^{1/(p^f-1)})$, while
if~$\tau$ is a tame cuspidal type, we let $L$ be an unramified
quadratic extension of~$K$, and set $K'=L(\pi^{1/(p^{2f}-1)})$. In
either case $K'/K$ is a Galois extension and $\eta, \eta'$ both factor through
$I(K'/K)$. In the principal series
case, we have $e'=(p^f-1)e$, $f'=f$, and in the cuspidal case we have
$e'=(p^{2f}-1)e$, $f'=2f$. Either way, we have $e(K'/K) =
p^{f'}-1$.
In either case, it follows from Lemma~\ref{lem:rank one Kisin modules
with descent data} that a Breuil--Kisin module of rank one with descent data
from $K'$ to $K$ is described by the data of the quantities $r_i,a_i,c_i$ for $0\le i\le
f-1$, and similarly from Lemma~\ref{lem:explicit-complex} that extensions between two such Breuil--Kisin modules are
described by the $h_i$ for $0\le i\le
f-1$. This common description will enable us to treat the principal
series and cuspidal cases largely in parallel.
The character
$\hchar |_{I_K}$ of Section~\ref{subsec: Diamond--Savitt} is identified via the Artin map ${\mathcal O}_L^\times \to
I_L^{\mathrm{ab}} = I_K^{\mathrm{ab}} $ with the reduction map
${\mathcal O}_L^{\times} \to (k')^{\times}$. Thus for each $\sigma \in
\Hom(k',\Fbar_p)$ the map $\sigma \circ
\hchar|_{I_L}$ is the fundamental character $\omega_{\sigma}$ defined
in Section~\ref{subsec: notation}.
Define $k_i,k'_i\in {\mathbb Z}/(p^{f'}-1){\mathbb Z}$ for all $i$ by the formulas
$\eta=\sigma_i\circ\hchar^{k_i}|_{I(K'/K)}$ and
$\eta'=\sigma_i\circ\hchar^{k'_i}|_{I(K'/K)}$. In particular we have
$k_i=p^ik_0$, $k'_i=p^ik'_0$ for all $i$.
\begin{defn} Let $\mathfrak{M} = \mathfrak{M}(r,a,c)$ and $\mathfrak{N}=\mathfrak{M}(s,b,d)$ be Breuil--Kisin
modules of rank one with descent data. We say that the pair
$(\mathfrak{M},\mathfrak{N})$ has \emph{type $\tau$} provided that for all $i$:
\begin{itemize}
\item the multisets $\{c_i,d_i\}$ and $\{k_i,k'_i\}$ are equal, and
\item $r_i + s_i = e'$.
\end{itemize}
\end{defn}
\begin{lemma} The following are equivalent.
\begin{enumerate}
\item The pair $(\mathfrak{M},\mathfrak{N})$ has type~$\tau$.
\item Some element of $\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ of height at
most one satisfies the strong determinant condition and is of type~$\tau$.
\item Every element of $\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ has height at most
one,
satisfies the strong determinant condition, and is of type~$\tau$.
\end{enumerate}
(Accordingly, we will sometimes refer to the condition that
$r_i+s_i=e'$ for all~$i$ as the determinant condition.)
\end{lemma}
\begin{proof}
Suppose first that the pair $(\mathfrak{M},\mathfrak{N})$ has type $\tau$. The last
sentence of Remark~\ref{prop: extensions of rank one Kisin
modules} shows that every element of $\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$
has height at most one. Let $\mathfrak{P}$ be such an element. The
condition on the multisets $\{c_i,d_i\}$ guarantees that $\mathfrak{P}$ has
unmixed type $\tau$. By \cite[Prop.~4.2.12]{cegsB}\
we see that $\dim_{{\mathbb F}} (\im_{\mathfrak{P},i}/E(u)\mathfrak{P}_i)_{\tilde{\eta}}$ is
independent of $\tilde{\eta}$. From the condition that $r_i+s_i=e'$ we
know that the sum over all $\tilde{\eta}$ of these dimensions is equal to
$e'$; since they are all equal, each is equal to $e$, and
\cite[Lem.~4.2.11]{cegsB}\
tells
us that $\mathfrak{P}$ satisfies the
strong determinant condition. This proves that (1) implies (3).
Certainly (3)
implies (2), so it remains to check that (2) implies (1). Suppose
that
$\mathfrak{P} \in \Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ has height at most one,
satisfies the strong determinant condition, and has type $\tau$. The
condition that $\{c_i,d_i\}=\{k_i,k'_i\}$ follows from $\mathfrak{P}$ having
type $\tau$, and the condition that $r_i+s_i=e'$ follows from the last
part of \cite[Lem.~4.2.11]{cegsB}.
\end{proof}
\begin{df}
\label{df:extensions of shape $J$}
If $(\mathfrak{M},\mathfrak{N})$ is a pair of type $\tau$ (resp.\ $\mathfrak{P}$ is an extension
of type~$\tau$), we define the {\em shape} of $(\mathfrak{M},\mathfrak{N})$ (resp.\ of $\mathfrak{P}$) to
be the subset $J := \{ i \, | \, c_i = k_i\} \subseteq {\mathbb Z}/f'{\mathbb Z}$,
unless $\tau$ is scalar, in which case we define the shape to be the
subset~$\varnothing$.
(Equivalently, $J$ is in all cases the complement in
${\mathbb Z}/f'{\mathbb Z}$ of the set $\{i \, | \, c_i = k'_i\}.$)
Observe that in the cuspidal case the equality $c_i = c_{i+f}$ means
that $i \in J$ if and only if $i+f \not\in J$, so that the set $J$ is
determined by its intersection with any $f$ consecutive integers
modulo $f' = 2f$.
In the cuspidal case we will say that a subset $J \subseteq {\mathbb Z}/f'{\mathbb Z}$ is a
shape
if it satisfies $i \in J$ if and only if $i+f\not\in J$; in the
principal series case, we may refer to any subset $J \subseteq {\mathbb Z}/f'{\mathbb Z}$
as a shape.
We define the {\em refined shape} of the pair $(\mathfrak{M},\mathfrak{N})$ (resp.\ of $\mathfrak{P}$) to consist of its shape $J$,
together with the $f$-tuple of invariants $r:= (r_i)_{i = 0}^{f-1}$.
If $(J,r)$ is a refined shape that arises from some pair (or extension)
of type $\tau$, then we refer to $(J,r)$ as a refined shape for $\tau$.
We say the pair $(i-1,i)$ is a \emph{transition} for $J$ if $i-1 \in
J$, $i \not\in J$ or vice-versa. (In the first case we sometimes say
that the pair $(i-1,i)$ is a transition out of $J$, and in the latter case a transition
into $J$.) Implicit in many of our arguments below
is the observation that in the cuspidal case $(i-1,i)$ is a transition
if and only if $(i+f-1,i+f)$ is a transition.
\end{df}
\subsubsection{An explicit description of refined shapes}
\label{subsubsec:explicitly refined}
The refined shapes for $\tau$ admit an explicit description.
If $\mathfrak{P}$ is of shape $J$, for some fixed $J \subseteq {\mathbb Z}/f'{\mathbb Z}$
then, since $c_i$, $d_i$ are fixed, we see that the $r_i$ and $s_i$
appearing in $\mathfrak{P}$ are determined
modulo $e(K'/K)=p^{f'}-1$. Furthermore, we see that $r_i+s_i\equiv
0\pmod{p^{f'}-1}$, so that these values are consistent with the
determinant condition; conversely, if we make any choice of the~$r_i$
in the given residue class modulo $(p^{f'}-1)$, then the $s_i$ are
determined by the determinant condition, and the imposed values
are consistent with the descent data. There are of course only
finitely many choices for the~$r_i$, and so there are only finitely
many possible refined shapes for $\tau$.
To make this precise, recall that we have the congruence
$$r_i \equiv
pc_{i-1} - c_i \pmod{p^{f'}-1}.$$ We will write $[n]$ for the
least non-negative residue class of $n$ modulo $e(K'/K) =
p^{f'}-1$.
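(Purely by way of numerical illustration: if $p=3$ and $f'=2$, so that
$e(K'/K)=8$, then $[11]=[-5]=3$; and if, say, $c_{i-1}=2$ and $c_i=5$, then
the congruence above reads $r_i\equiv 3\cdot 2-5=1\pmod{8}$.)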
If both $i-1$ and $i$ lie in $J$,
then we have $c_{i-1} = k_{i-1}$ and $c_i = k_i$. The first of these
implies that $pc_{i-1} = k_i$, and therefore $r_i \equiv 0
\pmod{p^{f'}-1}$. The same conclusion holds if neither $i-1$ nor $i$
lie in $J$. Therefore if $(i-1,i)$ is not a transition we may write
\[ r_i =
(p^{f'}-1)y_i \quad \text{ and } \quad s_i = (p^{f'}-1)(e-y_i)\]
with $0 \le y_i \le e$.
Now suppose instead that $(i-1,i)$ is a transition. (In
particular the type $\tau$ is not scalar.) This
time $pc_{i-1} = d_i$ (instead of $pc_{i-1} = c_i$), so that $r_i
\equiv d_i - c_i \pmod{p^{f'}-1}$. In this case we write
\[ r_i =
(p^{f'}-1)y_i - [c_i-d_i] \quad \text{ and } \quad s_i = (p^{f'}-1)(e+1-y_i) -
[d_i-c_i]\]
with $1 \le y_i \le e$.
Conversely, for fixed shape $J$ one checks that each choice of integers $y_i$ in the ranges
described above gives rise to a refined shape for $\tau$.
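As a consistency check, note that in either case $r_i+s_i=e'$: this is clear
when $(i-1,i)$ is not a transition, while at a transition it follows from
the identity $[c_i-d_i]+[d_i-c_i]=p^{f'}-1$ (valid because $c_i\ne d_i$
there). In particular, for a fixed shape $J$ the number of refined shapes
$(J,r)$ for $\tau$ is exactly $(e+1)^{f-b}e^{b}$, where $b$ denotes the
number of $0\le i\le f-1$ for which $(i-1,i)$ is a transition.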
If $(i-1,i)$ is not a transition
and $(h_i) \in \Cone_{u}(\mathfrak{M},\mathfrak{N})$ then non-zero terms of $h_i$ have
degree congruent to $r_i + c_i - d_i \equiv c_i - d_i \pmod{p^{f'}-1}$.
If instead $(i-1,i)$ is a transition
and $(h_i) \in \Cone_{u}(\mathfrak{M},\mathfrak{N})$ then non-zero terms
of $h_i$ have degree congruent to $r_i + c_i - d_i \equiv 0 \pmod{p^{f'}-1}$.
In either case, comparing with the preceding paragraphs we see that $\#\{ j \in
[0,r_i) : j \equiv r_i + c_i - d_i \pmod{e(K'/K)}\}$ is exactly~$y_i$.
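(To verify this count in the transition case, note that the relevant $j$ are
the multiples of $p^{f'}-1$ lying in $[0,r_i)$; since
$r_i=(p^{f'}-1)y_i-[c_i-d_i]$ with $0<[c_i-d_i]<p^{f'}-1$, these are
precisely $0,p^{f'}-1,\dots,(y_i-1)(p^{f'}-1)$, of which there are $y_i$. In
the non-transition case the interval $[0,(p^{f'}-1)y_i)$ contains exactly
$y_i$ integers in any given residue class modulo $p^{f'}-1$.)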
By Theorem~\ref{thm: extensions of rank one Kisin modules}, we conclude
that for a fixed choice of the~$r_i$
the dimension of the
corresponding~$\Ext^1$ is $\Delta + \sum_{i=0}^{f-1} y_i$ (with
$\Delta$ as in the statement of \emph{loc.\ cit.}).
We say that the refined shape $\bigl(J, (r_i)_{i =0}^{f-1}\bigr)$
is \emph{maximal} if the $r_i$ are chosen to be maximal subject to the
above conditions, or equivalently if the $y_i$ are all chosen to be $e$; for each
shape~$J$, there is a unique maximal refined shape~$(J,r)$.
\subsubsection{The sets ${\mathcal P}_{\tau}$}
\label{sec:sets-cp_tau}
To each tame type $\tau$ we now associate a set ${\mathcal P}_{\tau}$, which
will be a subset of the set of shapes in ${\mathbb Z}/f'{\mathbb Z}$. (In Appendix~\ref{sec: appendix on tame
types} we will recall, following~\cite{MR2392355}, that the
set ${\mathcal P}_{\tau}$ parameterises the Jordan--H\"older factors of the
reduction mod~$p$ of
$\sigma(\tau)$.)
We write $\eta (\eta')^{-1} =
\prod_{j=0}^{f'-1} (\sigma_j \circ \hchar)^{\gamma_j}$ for uniquely defined integers $0
\le \gamma_j \le p-1$ not all equal to $p-1$, so that
\begin{equation}\label{eq:k-gamma}
[k_i - k'_i] = \sum_{j=0}^{f'-1} p^{j} \gamma_{i-j}
\end{equation}
with subscripts taken modulo $f'$.
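For instance, when $f'=2$ the formula \eqref{eq:k-gamma} reads
\[ [k_0-k'_0]=\gamma_0+p\gamma_1, \qquad [k_1-k'_1]=\gamma_1+p\gamma_0, \]
so that the $\gamma_j$ are simply the base-$p$ digits of the $[k_i-k'_i]$.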
If~$\tau$ is scalar then
we set ${\mathcal P}_\tau=\{\varnothing\}$. Otherwise we let
${\mathcal P}_{\tau}$ be the collection of shapes $J \subseteq {\mathbb Z}/f'{\mathbb Z}$
satisfying the conditions:
\begin{itemize}
\item if $i-1\in J$ and $i\notin J$ then $\gamma_{i}\ne p-1$, and
\item if $i-1\notin J$ and $i\in J$ then $\gamma_{i}\ne 0$.
\end{itemize}
When $\tau$ is a cuspidal type, so that $\eta' = \eta^q$, the integers
$\gamma_j$ satisfy $\gamma_{i+f} = p-1-\gamma_i$ for all $i$; thus the
condition that if $(i-1,i)$ is a transition out of $J$ then $\gamma_i
\neq p-1$ translates exactly into the condition that if $(i+f-1,i+f)$ is a
transition into $J$ then $\gamma_{i+f} \neq 0$.
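For example, suppose that $K=\Q_p$ and $\tau$ is cuspidal, so that $f=1$,
$f'=2$ and $\gamma_1=p-1-\gamma_0$. The shapes are then $J=\{0\}$ and
$J=\{1\}$, and unwinding the conditions above shows that
$\{0\}\in{\mathcal P}_\tau$ if and only if $\gamma_0\ne 0$, while
$\{1\}\in{\mathcal P}_\tau$ if and only if $\gamma_0\ne p-1$. Thus
${\mathcal P}_{\tau}$ has two elements unless $\gamma_0\in\{0,p-1\}$, in which
case it has exactly one.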
\subsubsection{Moduli stacks of extensions}\label{subsubsection: stacks of extensions}
We now apply the constructions of stacks and topological spaces of
Definitions~\ref{def:scheme-theoretic images}
and~\ref{df:constructible images} to the families of
extensions considered in Section~\ref{sec:extensions-shape-J}.
\begin{df}\label{defn: M(j,r)}
If $(J,r)$ is a refined shape for $\tau$, then
we let $\mathfrak{M}(J,r) := \mathfrak{M}(r,1,c)$ and let
$\mathfrak{N}(J,r) := \mathfrak{M}(s,1,d),$ where $c$, $d$, and $s$ are determined
from $J$, $r$, and $\tau$ according to the
discussion of~(\ref{subsubsec:explicitly refined}); for instance we
take $c_i
= k_i$ when $i \in J$ and $c_i = k'_i$ when $i \not\in J$.
For the unique maximal refined shape~$(J,r)$ refining~$J$, we write simply $\mathfrak{M}(J)$ and $\mathfrak{N}(J)$.
\end{df}
\begin{df}
If $(J,r)$ is a refined shape for $\tau$, then following
Definition~\ref{def:scheme-theoretic images}, we may construct the reduced
closed substack $\overline{{\mathcal C}}\bigl(\mathfrak{M}(J,r),\mathfrak{N}(J,r)\bigr)$ of ${\mathcal C}^{\tau,\operatorname{BT},1}$,
as well as the reduced closed substack $\overline{{\mathcal Z}}\bigl(\mathfrak{M}(J,r),
\mathfrak{N}(J,r)\bigr)$ of ${\mathcal Z}^{\tau,1}.$
We introduce the notation $\overline{{\mathcal C}}(J,r)$ and $\overline{{\mathcal Z}}(J,r)$
for these two stacks, and note that (by definition) $\overline{{\mathcal Z}}(J,r)$
is the scheme-theoretic image of $\overline{{\mathcal C}}(J,r)$ under
the morphism ${\mathcal C}^{\tau,\operatorname{BT},1} \to {\mathcal Z}^{\tau,1}$.
\end{df}
\begin{remark}\label{rem-all-pts} As noted in the final sentence of
Definition~\ref{def:scheme-theoretic images}, Lemma~\ref{lem: scheme theoretic
images coincide} shows that $\overline{{\mathcal C}}(J,r)$ contains all
extensions of refined shape $(J,r)$ over extensions of~${\mathbb F}$, and not
only those corresponding to a maximal ideal of $A^{{\operatorname{dist}}}$.
\end{remark}
\begin{thm}\label{thm: dimension of refined shapes}
If $(J,r)$ is any refined shape for $\tau$,
then $\dim \overline{{\mathcal C}}(J,r) \leq [K:\Q_p],$ with equality if and only if $(J,r)$ is
maximal.
\end{thm}
\begin{proof}
This follows from Corollary~\ref{cor:
dimension of families of extensions}, Theorem~\ref{thm:
extensions of rank one Kisin modules}, and Proposition~\ref{prop:base-change for exts}. (See also the discussion
following Definition~\ref{df:extensions of shape $J$}, and note that over
$\Spec {A^{\operatorname{dist}}}$, we have $\Delta=0$ by definition.)
\end{proof}
\begin{df} If $J \subseteq {\mathbb Z}/f'{\mathbb Z}$ is a shape, and if $r$ is chosen so that
$(J,r)$ is a maximal refined shape for $\tau$,
then we write $\overline{{\mathcal C}}(J)$ to denote the closed substack $\overline{{\mathcal C}}(J,r)$
of ${\mathcal C}^{\tau,\operatorname{BT},1}$,
and $\overline{{\mathcal Z}}(J)$ to denote the closed substack
$\overline{{\mathcal Z}}(J,r)$ of ${\mathcal Z}^{\tau,1}$.
Again, we note that by definition
$\overline{{\mathcal Z}}(J)$ is the scheme-theoretic
image of~$\overline{{\mathcal C}}(J)$ in~${\mathcal Z}^{\tau,1}$.
\end{df}
We will see later that the~$\overline{{\mathcal C}}(J)$ are precisely the
irreducible components of~${\mathcal C}^{\tau,\operatorname{BT},1}$; in particular, their
finite type points can correspond to irreducible Galois
representations. While we do not need it in the sequel, we note the
following definition and result, describing the underlying topological
spaces of the loci of reducible Breuil--Kisin modules of fixed refined shape.
\begin{defn}
For each refined shape~$(J,r)$, we write~$|{\mathcal C}(J,r)^\tau|$ for the
constructible subset~$|{\mathcal C}(\mathfrak{M}(J,r),\mathfrak{N}(J,r))|$
of~$|{\mathcal C}^{\tau,\operatorname{BT},1}|$ of Definition~\ref{df:constructible images}
(where $\mathfrak{M}(J,r)$, $\mathfrak{N}(J,r)$ are the Breuil--Kisin modules
of Definition~\ref{defn: M(j,r)}). We
write~$|{\mathcal Z}(J,r)^\tau|$ for the image of~$|{\mathcal C}(J,r)^\tau|$
in~$|{\mathcal Z}^{\tau,1}|$ (which is again a constructible
subset).
\end{defn}
\begin{lem}
\label{lem: closed points of C(J,r)}The $\Fbar_p$-points
of~$|{\mathcal C}(J,r)^\tau|$ are precisely the reducible Breuil--Kisin modules
with $\Fbar_p$-coefficients
of type~$\tau$ and refined shape~$(J,r)$.
\end{lem}
\begin{proof}
This is immediate from the definition.
\end{proof}
\section{Components of Breuil--Kisin and Galois moduli stacks}
\label{sec:Components}
Now that we have constructed the morphisms $\overline{{\mathcal C}}(J) \to \overline{{\mathcal Z}}(J)$ for each $J$, we can begin our study of the components of the stacks ${\mathcal C}^{\tau,\operatorname{BT},1}$ and ${\mathcal Z}^{\tau,1}$. The first step in Subsection~\ref{subsec:vertical-comps} is to determine precisely for which $J$ the scheme-theoretic image $\overline{{\mathcal Z}}(J)$ has dimension smaller than $[K:\Q_p]$, and hence is \emph{not} a component of ${\mathcal Z}^{\tau,1}$. In Subsection~\ref{subsec:irreducible} we study the irreducible locus in ${\mathcal C}^{\tau,\operatorname{BT},1}$ and prove that it lies in a closed substack of positive codimension. We are then ready to establish our main results in Subsections~\ref{subsec: irred components} and~\ref{subsec: map to
Dieudonne stack}.
\subsection{\texorpdfstring{$\kExt^1$}{ker-Ext} and vertical
components}\label{subsec:vertical-comps}
In this section we will establish some basic facts about $\kExt^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$,
and use these results to study the images of our
irreducible components in ${\mathcal Z}^{\tau,1}$.
Let $\mathfrak{M} = \mathfrak{M}(r,a,c)$ and $\mathfrak{N} = \mathfrak{M}(s,b,d)$ be Breuil--Kisin
modules as in Section~\ref{subsec:
Diamond--Savitt}.
Recall from \eqref{eqn: computing kernel of Ext groups} that the
dimension of $\kExt^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ is bounded above by the
dimension of $\Hom_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N}[1/u]/\mathfrak{N})$;\ more precisely, by
Lemma~\ref{lem: Galois rep is a functor if A is actually finite local}
we find in this setting that
\addtocounter{subsubsection}{1}\begin{equation}\label{eq:ker-ext-formula}\begin{split} \dim_{{\mathbb F}} \kExt^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N}) = \dim_{{\mathbb F}}
\Hom_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N}[1/u]/\mathfrak{N}) \\- (\dim_{{\mathbb F}} \Hom_{{\mathbb F}[G_K]}(T(\mathfrak{M}),T(\mathfrak{N})) -
\dim_{{\mathbb F}} \Hom_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})).
\end{split}
\end{equation}
A map $f : \mathfrak{M} \to \mathfrak{N}[1/u]/\mathfrak{N}$ has the form
$f(m_i) = \mu_i n_i$ for some $f'$-tuple of elements $\mu_i \in
{\mathbb F}((u))/{\mathbb F}[[u]]$. By the same argument as in the first paragraph of the proof of
Lemma~\ref{lem:explicit-complex}, such a map belongs to
$C^0(\mathfrak{N}[1/u]/\mathfrak{N})$ (i.e., it is $\Gal(K'/K)$-equivariant) if and only
if the $\mu_i$ are periodic with
period dividing $f$, and each nonzero term of $\mu_i$ has degree
congruent to $c_i-d_i \pmod{e(K'/K)}$. One computes that
$\delta(f)(1\otimes m_{i-1}) = (b_i u^{s_i} \varphi(\mu_{i-1}) - a_i u^{r_i}
\mu_i)n_i$, and so $f \in C^0(\mathfrak{N}[1/u]/\mathfrak{N})$ lies in $\Hom_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N}[1/u]/\mathfrak{N})$
precisely when
\addtocounter{subsubsection}{1}\begin{equation}\label{eq:phi-commute-ker-ext} a_i
u^{r_i} \mu_i = b_i \varphi(\mu_{i-1}) u^{s_i}\end{equation} for all
$i$.
\begin{remark}\label{rem:explicit-ker-ext}
Let $f \in \Hom_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N}[1/u]/\mathfrak{N})$ be given as above. Choose
any lifting $\tilde{\mu}_i$ of $\mu_i$ to ${\mathbb F}((u))$. Then (with
notation as in~Definition~\ref{notn:calh}) the tuple
$(\tilde{\mu}_i)$ is an element of $\Czero_{u}$, and we define $h_i =
\partial(\tilde{\mu}_i)$. Then
$h_i$ lies in ${\mathbb F}[[u]]$ for all $i$, so that $(h_i) \in \scrC^1$, and a
comparison with Lemma~\ref{lem:explicit-complex} shows that $f$ maps
to the extension class in $\kExt^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ represented by $\mathfrak{P}(h)$.
\end{remark}
Recall that Lemma~\ref{lem: bound on torsion in kernel of Exts}
implies that
nonzero terms appearing in $\mu_i$ have degree at least $-\lfloor
e'/(p-1) \rfloor$. From this we obtain the following trivial bound on $\kExt$.
\begin{lemma}\label{cor:bounds-on-ker-ext}
We have $\dim_{{\mathbb F}} \kExt^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N}) \le \lceil e/(p-1)
\rceil f$.
\end{lemma}
\begin{proof} The degrees of nonzero terms of $\mu_i$ all lie in a single
congruence class modulo $e(K'/K)$, and are bounded below by
$-e'/(p-1)$. Therefore
there are at most $\lceil e/(p-1) \rceil$ nonzero terms, and since
the $\mu_i$ are periodic with period dividing $f$ the lemma follows.
\end{proof}
\begin{remark}\label{rem:half}
It follows directly from Lemma~\ref{cor:bounds-on-ker-ext} that if
$p > 3$ and $e\neq 1$ then we have $\dim_{{\mathbb F}}
\kExt^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N}) \le [K:\Q_p]/2$, for then $\lceil e/(p-1)
\rceil \le e/2$. Moreover these inequalities are strict if $e >
2$.
\end{remark}
We will require a more precise computation of
$\kExt^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ in the setting of
Section~\ref{sec:extensions-shape-J} where the pair $(\mathfrak{M},\mathfrak{N})$ has
maximal refined shape $(J,r)$. We now return to that setting and its
notation.
Let $\tau$ be a tame type. We
will find the following notation to be helpful. We let $\gamma_i^* =
\gamma_i$ if $i-1 \not\in J$, and $\gamma_i^* = p-1-\gamma_i$
if $i-1 \in J$. (Here the integers $\gamma_i$ are as in
Section~\ref{sec:sets-cp_tau}. In the case of scalar types this means
that we have $\gamma^*_i = 0$ for all $i$.)
Since $p[k_{i-1}-k'_{i-1}] - [k_i - k'_i] = (p^{f'}-1)\gamma_i$,
an elementary but useful calculation
shows that
\addtocounter{subsubsection}{1}\begin{equation}\label{eq:gammastar}
p[d_{i-1}-c_{i-1}] - [c_i-d_i] = \gamma_i^* (p^{f'}-1),
\end{equation}
when $(i-1,i)$ is a transition, and that in this case $\gamma_i^*
=0$ if and only if $[d_{i-1}-c_{i-1}] < p^{f'-1}$. Similarly, if
$\tau$ is not a scalar type and $(i-1,i)$ is not a transition then
\addtocounter{subsubsection}{1}\begin{equation}\label{eq:gammastar-2}
p[d_{i-1}-c_{i-1}] + [c_i-d_i] = (\gamma_i^*+1) (p^{f'}-1).
\end{equation}
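To illustrate where these identities come from, suppose that $(i-1,i)$ is a
transition with $i-1\in J$ and $i\notin J$, so that $c_{i-1}=k_{i-1}$,
$d_{i-1}=k'_{i-1}$, $c_i=k'_i$ and $d_i=k_i$. Using
$[k'_j-k_j]=(p^{f'}-1)-[k_j-k'_j]$ (note that $k_j\ne k'_j$, as $\tau$ is
non-scalar) together with the identity
$p[k_{i-1}-k'_{i-1}]-[k_i-k'_i]=(p^{f'}-1)\gamma_i$ recalled above, we compute
\[ p[d_{i-1}-c_{i-1}]-[c_i-d_i] =
(p-1)(p^{f'}-1)-\bigl(p[k_{i-1}-k'_{i-1}]-[k_i-k'_i]\bigr) =
\gamma_i^*(p^{f'}-1), \]
as \eqref{eq:gammastar} asserts. When instead $i-1\notin J$ and $i\in J$, the
left-hand side of \eqref{eq:gammastar} is literally
$p[k_{i-1}-k'_{i-1}]-[k_i-k'_i]=(p^{f'}-1)\gamma_i=(p^{f'}-1)\gamma_i^*$.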
The main computational result of this section is the following.
\begin{prop}\label{prop:ker-ext-maximal}
Let $(J,r)$ be any maximal refined shape for $\tau$, and suppose that
the pair $(\mathfrak{M},\mathfrak{N})$ has refined shape $(J,r)$. Then
$\dim_{{\mathbb F}} \kExt^1_{\K{{\mathbb F}}} (\mathfrak{M},\mathfrak{N})$ is equal to
\[\# \{ 0 \le i < f \, : \, \text{the pair } (i-1,i) \text{ is a
transition and } \gamma_i^* = 0 \},\]
except that when $e=1$, $\prod_i a_i = \prod_i b_i$, and the quantity displayed above is $f$,
then
the dimension of $\kExt^1_{\K{{\mathbb F}}} (\mathfrak{M},\mathfrak{N})$ is equal to $f-1$.
\end{prop}
\begin{proof}
The argument has two parts. First we show that $\dim_{{\mathbb F}}
\Hom_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N}[1/u]/\mathfrak{N})$ is precisely the displayed quantity
in the statement of the Proposition;\ then we check that
$\dim_{{\mathbb F}} \Hom_{{\mathbb F}[G_K]} (T(\mathfrak{M}),T(\mathfrak{N})) - \dim_{{\mathbb F}} \Hom_{\K{{\mathbb F}}} (\mathfrak{M},\mathfrak{N})$ is
equal to $1$ in the exceptional case of the statement, and $0$
otherwise. The result then follows from~\eqref{eq:ker-ext-formula}.
Let $f : m_i \mapsto \mu_i n_i$ be an element of
$C^0(\mathfrak{N}[1/u]/\mathfrak{N})$.
Since $u^{e'}$ kills $\mu_i$, and all nonzero terms of $\mu_i$ have
degree congruent to
$c_i-d_i \pmod{p^{f'}-1}$, certainly all nonzero terms of $\mu_i$ have
degree at least $-e' + [c_i-d_i]$. On the other hand since the shape
$(J,r)$ is maximal we have $r_i = e' - [c_i-d_i]$ when $(i-1,i)$ is a
transition and $r_i = e'$ otherwise. In either case $u^{r_i}$ kills
$\mu_i$, so that \eqref{eq:phi-commute-ker-ext} becomes simply the
condition that $u^{s_i}$ kills $\varphi(\mu_{i-1})$.
If $(i-1,i)$ is not a transition then $s_i=0$, and we conclude that
$\mu_{i-1}=0$. Suppose instead that $(i-1,i)$ is a transition, so
that $s_i = [c_i-d_i]$. Then all nonzero terms of $\mu_{i-1}$ have
degree at least $-s_i/p > -p^{f'-1} > -e(K'/K)$. Since those terms must have
degree congruent to $c_{i-1}-d_{i-1} \pmod{p^{f'}-1}$, it follows
that $\mu_{i-1}$ has at most one nonzero term (of degree
$-[d_{i-1}-c_{i-1}]$), and this only if $[d_{i-1}-c_{i-1}] <
p^{f'-1}$, or equivalently $\gamma_i^* = 0$ (as noted
above). Conversely if $\gamma_i^*=0$ then
\[ u^{s_i} \varphi(u^{-[d_{i-1}-c_{i-1}]}) = u^{[c_i-d_i] -
p[d_{i-1}-c_{i-1}]} = u^{-\gamma_i^* (p^{f'}-1)}\] vanishes in
${\mathbb F}((u))/{\mathbb F}[[u]]$. We conclude that $\mu_{i-1}$ may have a single nonzero term if
and only if $(i-1,i)$ is a transition and $\gamma_i^*=0$, and this
completes the first part of the argument.
Turn now to the second part.
Looking at Corollary~\ref{cor: Kisin
modules with the same generic fibre} and Lemma~\ref{lem: maps
between rank 1 Kisin modules}, to compare
$\Hom_{{\mathbb F}[G_K]}(T(\mathfrak{M}),T(\mathfrak{N}))$ and $\Hom_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ we need
to compute the quantities $\alpha_i(\mathfrak{M})-\alpha_i(\mathfrak{N})$. By definition
this quantity is equal
to
\addtocounter{subsubsection}{1}\begin{equation}\label{eq:alpha-difference-1}\frac{1}{p^{f'}-1} \sum_{j=1}^{f'} p^{f'-j} \left( r_{i+j} - s_{i+j}
\right).\end{equation}
Suppose first that $\tau$ is non-scalar. When $(i+j-1,i+j)$ is a transition, we have $r_{i+j}-s_{i+j} = (e-1)(p^{f'}-1) +
[d_{i+j}-c_{i+j}] - [c_{i+j}-d_{i+j}]$, and otherwise we
have $r_{i+j}-s_{i+j} = e( p^{f'}-1)= (e-1)(p^{f'}-1) + [d_{i+j}-c_{i+j}] +
[c_{i+j}-d_{i+j}]$. Substituting these expressions into
\eqref{eq:alpha-difference-1}, adding and subtracting $\frac{1}{p^{f'}-1}
p^{f'} [d_i-c_i]$, and regrouping gives
\[ -[d_i-c_i] + (e-1) \cdot \frac{p^{f'}-1}{p-1} + \frac{1}{p^{f'}-1}\sum_{j=1}^{f'}
p^{f'-j} \left( p[d_{i+j-1} - c_{i+j-1}] \mp [c_{i+j} - d_{i+j}]
\right),\]
where the sign is $-$ if $(i+j-1,i+j)$ is a transition and $+$ if
not. Applying the formulas~\eqref{eq:gammastar}
and~\eqref{eq:gammastar-2} we conclude that
\addtocounter{subsubsection}{1}\begin{equation}\label{eq:alpha-difference}
\alpha_i(\mathfrak{M})-\alpha_i(\mathfrak{N}) = -[d_i-c_i] + (e-1) \cdot
\frac{p^{f'}-1}{p-1} + \sum_{j=1}^{f'} p^{f'-j} \gamma^*_{i+j} +
\sum_{j \in S_i} p^{f'-j}
\end{equation}
where the set $S_i$ consists of $1 \le j \le f'$ such that $(i+j-1,i+j)$ is not a transition.
Finally, a moment's inspection shows that the same formula still holds
if $\tau$ is scalar (recalling that $J = \varnothing$ in that case).
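(Indeed, for scalar $\tau$ we have $\gamma_j^*=0$ for all $j$, $[d_i-c_i]=0$,
and $S_i=\{1,\dots,f'\}$, so that \eqref{eq:alpha-difference} gives
$\alpha_i(\mathfrak{M})-\alpha_i(\mathfrak{N})=e(p^{f'}-1)/(p-1)$; this agrees with the direct
evaluation of \eqref{eq:alpha-difference-1}, since in this case $r_j=e'$ and
$s_j=0$ for all $j$.)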
Suppose that we are in the exceptional case of the proposition, so
that $e=1$, $\gamma_i^*=0$ for all $i$, and every pair $(i-1,i)$ is a
transition. The formula~\eqref{eq:alpha-difference} gives
$\alpha_i(\mathfrak{M})-\alpha_i(\mathfrak{N}) = -[d_i-c_i]$. Since also $\prod_i a_i =
\prod_i b_i$ the conditions of
Corollary~\ref{cor: Kisin modules with the same generic fibre} are
satisfied, so that $T(\mathfrak{M})=T(\mathfrak{N})$;\ but on the other hand
$\alpha_i(\mathfrak{M}) < \alpha_i(\mathfrak{N})$, so that by Lemma~\ref{lem: maps
between rank 1 Kisin modules} there are no nonzero maps $\mathfrak{M}\to
\mathfrak{N}$, and $\dim_{{\mathbb F}} \Hom_{{\mathbb F}[G_K]} (T(\mathfrak{M}),T(\mathfrak{N})) - \dim_{{\mathbb F}}
\Hom_{\K{{\mathbb F}}} (\mathfrak{M},\mathfrak{N}) = 1.$
If instead we are not in the exceptional case of the
proposition, then either $\prod_i a_i \neq \prod_i b_i$, or else~\eqref{eq:alpha-difference}
gives $\alpha_i(\mathfrak{M}) - \alpha_i(\mathfrak{N}) > -[d_i-c_i]$ for all
$i$. Suppose that $T(\mathfrak{M}) \cong T(\mathfrak{N})$. It follows from
Corollary~\ref{cor: Kisin modules with the same generic fibre} that
$\alpha_i(\mathfrak{M}) - \alpha_i(\mathfrak{N}) \equiv -[d_i-c_i]
\pmod{e(K'/K)}$. Combined with the previous inequality we deduce that
$\alpha_i(\mathfrak{M}) - \alpha_i(\mathfrak{N}) > 0$, and Lemma~\ref{lem: maps between
rank 1 Kisin modules} guarantees the existence of a nonzero map $\mathfrak{M}
\to \mathfrak{N}$. We deduce that in any event $\dim_{{\mathbb F}} \Hom_{{\mathbb F}[G_K]}
(T(\mathfrak{M}),T(\mathfrak{N})) = \dim_{{\mathbb F}} \Hom_{\K{{\mathbb F}}} (\mathfrak{M},\mathfrak{N})$, completing the proof.
\end{proof}
\begin{cor}\label{cor:ker-ext-maximal-nonzero}
Let $(J,r)$ be any maximal refined shape for $\tau$, and suppose that
the pair $(\mathfrak{M},\mathfrak{N})$ has refined shape $(J,r)$. If $J \in
{\mathcal P}_{\tau}$ then $\dim_{{\mathbb F}} \kExt^1_{\K{{\mathbb F}}} (\mathfrak{M},\mathfrak{N}) = 0$. Indeed
this is an if and only if except possibly when $K=\Q_p$, the type
$\tau$ is cuspidal, and $T(\mathfrak{M}(J,r)) \cong T(\mathfrak{N}(J,r))$.
\end{cor}
\begin{proof}
The first statement is immediate from Proposition~\ref{prop:ker-ext-maximal},
comparing the definition of $\gamma_i^*$ with the defining
condition on elements of ${\mathcal P}_{\tau}$;\ in fact this gives an if and
only if unless we are in the exceptional case in
Proposition~\ref{prop:ker-ext-maximal} and $f-1=0$. In that case
$e=f=1$, so $K=\Q_p$. In the principal series case for $K=\Q_p$ there
can be no transitions, so the type is cuspidal. Then $\gamma_i^*=0$
for $i=0,1$ and an elementary
analysis of \eqref{eq:gammastar} shows that there exists $x \in {\mathbb Z}/(p-1){\mathbb Z}$
such that $c_i = 1 +x(p+1)$, $d_i = p+x(p+1)$ for $i=0,1$. Then $r_i
= p-1$ and $s_i = p(p-1)$, and Lemma~\ref{lem: generic fibres of rank
1 Kisin modules} gives $T(\mathfrak{M}(J,r)) \cong T(\mathfrak{N}(J,r))$.
\end{proof}
Recall
that
$\overline{{\mathcal Z}}(J)$ is by definition the scheme-theoretic image of~$\overline{{\mathcal C}}(J)$ in
${\mathcal Z}^{\tau,1}$. In the remainder of this section, we show that the $\overline{{\mathcal Z}}(J)$
with $J\in{\mathcal P}_{\tau}$ are pairwise distinct irreducible components
of~${\mathcal Z}^{\tau,1}$. In Section~\ref{subsec: irred components} below we
will show that they in fact exhaust the irreducible components of~${\mathcal Z}^{\tau,1}$.
\begin{thm}\label{thm: identifying the vertical components}
$\overline{{\mathcal Z}}(J)$ has dimension at most $[K:\Q_p]$, with equality occurring if
and only if~$J\in{\mathcal P}_{\tau}$. Consequently, the~$\overline{{\mathcal Z}}(J)$ with
$J\in{\mathcal P}_{\tau}$ are irreducible components of~${\mathcal Z}^{\tau,1}$.
\end{thm}
\begin{proof}The first part is immediate from Corollary~\ref{cor: dimension of
families of extensions}, Proposition~\ref{prop:base-change for
exts},
Corollary~\ref{cor:ker-ext-maximal-nonzero} and Theorem~\ref{thm:
dimension of refined shapes} (noting that the exceptional case of
Corollary~\ref{cor:ker-ext-maximal-nonzero} occurs away from
$\textrm{max-Spec}\,
A^{{\operatorname{dist}}}$). Since~${\mathcal Z}^{\tau,1}$ is
equidimensional of dimension~$[K:\Q_p]$ by Theorem~\ref{prop:
dimensions of the Z stacks}, and the~$\overline{{\mathcal Z}}(J)$ are closed and
irreducible by construction, the second part follows from the first
together with~\cite[\href{https://stacks.math.columbia.edu/tag/0DS2}{Tag 0DS2}]{stacks-project}.
\end{proof}
We also note the following result.
\begin{prop}
\label{prop:C to Z mono}
If~$J\in{\mathcal P}_{\tau}$, then there is a dense open substack ${\mathcal U}$ of
$\overline{{\mathcal C}}(J)$ such that the canonical morphism $\overline{{\mathcal C}}(J)
\to \overline{{\mathcal Z}}(J)$ restricts to an open immersion on ${\mathcal U}$.
\end{prop}
\begin{proof}
This follows from
Proposition~\ref{prop: construction of family monomorphing to C and R}
and Corollary~\ref{cor:ker-ext-maximal-nonzero}.
\end{proof}
For later use, we note the following computation. Recall that we write
$\mathfrak{N}(J) = \mathfrak{N}(J,r)$ for the maximal refined shape $(J,r)$ refining $J$, and that~$\tau=\eta\oplus\eta'$.
\begin{prop}\label{prop:char-calculation}
For each shape $J$ we have
\[ T(\mathfrak{N}(J)) \cong \eta \cdot \left(\prod_{i=0}^{f'-1} (\sigma_i \circ
\hchar)^{t_i}\right)^{-1} |_{G_{K_{\infty}}} \]
where
\[ t_i =
\begin{cases}
\gamma_i + \delta_{J^c}(i) & \text{if } i-1 \in J \\
0 & \text{if } i-1\not\in J.
\end{cases}\]
Here $\delta_{J^c}$ is the characteristic function of the complement
of $J$ in ${\mathbb Z}/f'{\mathbb Z}$, and we are abusing notation by writing $\eta$ for
the function
$\sigma_i \circ \hchar^{k_i}$, which agrees with $\eta$ on $I_K$.
In particular the map $J \mapsto T(\mathfrak{N}(J))$ is injective on ${\mathcal P}_{\tau}$.
\end{prop}
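For example, if $\tau$ is a principal series type and $J=\varnothing$, then
$t_i=0$ for all $i$ and the proposition asserts simply that
$T(\mathfrak{N}(\varnothing))\cong\eta|_{G_{K_\infty}}$; this is consistent with the
fact that in this case $d_i=k_i$ for all $i$, while the maximal refined shape
has $r_i=e'$ and $s_i=0$ for all $i$.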
\begin{remark}\label{rk:cuspidal-char-niveau-1}
In the cuspidal case it is not \emph{a priori} clear that the formula in Proposition~\ref{prop:char-calculation} gives
a character of $G_{K_{\infty}}$ (rather than a character only when
restricted to $G_{L_{\infty}}$), but this is an elementary (if
somewhat painful) calculation using the
definition of the $\gamma_i$'s and the relation $\gamma_i +
\gamma_{i+f} = p-1$.
\end{remark}
\begin{proof}
We begin by explaining how the final statement follows from the rest
of the Proposition. First observe that if $J \in {\mathcal P}_\tau$ then $0 \le t_i \le p-1$ for all
$i$. Indeed the only possibility for a contradiction would be if
$\gamma_i = p-1$ and $i \not\in J$, but then the definition of
${\mathcal P}_\tau$ requires that we cannot have $i-1 \in J$. Next, note that
we never have $t_i = p-1$ for all $i$. Indeed, this would require $J
= {\mathbb Z}/f'{\mathbb Z}$ and $\gamma_i=p-1$ for all~$i$, but by definition the
$\gamma_i$ are not all equal to $p-1$.
The observations in the previous paragraph imply that (for $J \in {\mathcal P}_\tau$) the character
$T(\mathfrak{N}(J))$ uniquely determines the integers $t_i$, and so it remains
to show that the integers $t_i$ determine the set $J$. If $t_i = 0$
for all $i$, then either $J = \varnothing$ or $J = {\mathbb Z}/f'{\mathbb Z}$ (for
otherwise there is a transition out of $J$, and $\delta_{J^c}(i) \neq
0$ for some $i-1 \in J$). But if $J = {\mathbb Z}/f'{\mathbb Z}$ then $\gamma_i = 0$ for
all $i$ and $\tau$ is scalar;\ but for scalar types we have ${\mathbb Z}/f'{\mathbb Z}
\not\in {\mathcal P}_\tau$, a contradiction. Thus $t_i =0$ for all $i$ implies $J =
\varnothing$.
For the rest of this part of the argument, we may therefore suppose $t_i \neq 0$
for some $i$, which forces $i-1 \in J$. The entire set $J$ will then
be determined by recursion if we can show that knowledge of $t_i$ along with
whether or not $i \in J$, determines whether or not $i-1 \in J$. Given
the defining formula for $t_i$, the only possible ambiguity is if $t_i
= 0$ and $\gamma_i +
\delta_{J^c}(i) = 0$, so that $\gamma_i = 0$ and $i \in J$. But the definition of ${\mathcal P}_{\tau}$ requires $i-1
\in J$ in this case. This completes the proof.
We now turn to proving the formula for $T(\mathfrak{N}(J))$. We will use
Lemma~\ref{lem: generic fibres of rank 1 Kisin modules} applied at
$i=0$, for which we have to compute $\alpha_0 - d_0$ writing $\alpha_0
= \alpha_0(\mathfrak{N})$. Recall
that we have already computed $\alpha_0(\mathfrak{M}(J)) - \alpha_0(\mathfrak{N}(J))$ in
the proof of Proposition~\ref{prop:ker-ext-maximal}. Since
$\alpha_0(\mathfrak{M}(J)) + \alpha_0(\mathfrak{N}(J)) = e(p^{f'}-1)/(p-1)$, taking the
difference between these formulas gives
\[ 2 \alpha_0 = [d_0 - c_0] - \sum_{j=1}^{f'} p^{f'-j} \gamma_j^* +
\sum_{j \in S_0^c} p^{f'-j} \]
where $S_0^c$ consists of those $1\le j \le f'$ such that $(j-1,j)$ is
a transition. Subtract $2[d_0]$ from both sides, and add the expression
$-[k_0-k'_0] + \sum_{j=1}^{f'} p^{f'-j} \gamma_j$ (which vanishes by definition) to the
right-hand side. Note that $[d_0 - c_0] - [k_0-k'_0] - 2[d_0]$ is
equal to $-2[k_0]$ if $0 \not\in J$, and to
$e(K'/K)-2[k_0-k'_0]-2[k'_0]$ if $0 \in J$.
Since $\gamma_j -
\gamma_j^* = 2\gamma_j - (p-1)$ if $j-1 \in J$ and is $0$ otherwise,
the preceding expression rearranges to give (after dividing by $2$)
\[ \alpha_0 - [d_0] = -\kappa_0 + \sum_{j-1\in J} p^{f'-j}
\gamma_j + \sum_{j-1\in J, j\not\in J} p^{f'-j} = -\kappa_0 +
\sum_{j=1}^{f'} p^{f'-j} t_j\]
where $\kappa_0 = [k_0]$ if $0 \not\in J$ and $\kappa_0 = [k_0 - k'_0]
+[k'_0]$ if $0 \in J$. Since in either case $\kappa_0 \equiv k_0
\pmod{e(K'/K)}$ the result now follows from Lemma~\ref{lem: generic fibres of rank 1 Kisin modules}.
\end{proof}
\begin{defn}Let $\overline{r}:G_K\to\GL_2({\mathbb F}')$ be a representation. Then we
say that a Breuil--Kisin module~$\mathfrak{M}$ with ${\mathbb F}'$-coefficients is a
\emph{Breuil--Kisin model of~$\overline{r}$ of type~$\tau$} if~$\mathfrak{M}$ is an
${\mathbb F}'$-point of~${\mathcal C}^{\tau,\operatorname{BT},1}$, and
$T_{{\mathbb F}'}(\mathfrak{M})\cong\overline{r}|_{G_{K_\infty}}$.
\end{defn}
Recall that for each continuous representation $\overline{r}:G_K\to\GL_2(\Fbar_p)$, there
is an associated (nonempty) set of Serre weights~$W(\overline{r})$ whose
precise definition is recalled in Appendix~\ref{sec: appendix on
tame types}.
\begin{thm}
\label{thm: unique serre weight}The~$\overline{{\mathcal Z}}(J)$, with $J\in{\mathcal P}_\tau$, are pairwise distinct closed
substacks of ${\mathcal Z}^{\tau,1}$. For each $J\in{\mathcal P}_\tau$, there is a dense set of finite type
points of $\overline{{\mathcal Z}}(J)$ with the property that the corresponding Galois
representations have $\sigmabar(\tau)_J$ as a Serre weight, and which
furthermore admit a unique Breuil--Kisin model of type~$\tau$.
\end{thm}
\begin{proof}
Recall from Definition~\ref{def:scheme-theoretic images}
that $\overline{{\mathcal Z}}(J)$ is defined to be the scheme-theoretic image
of a morphism $\Spec {B^{\operatorname{dist}}} \to {\mathcal Z}^{\mathrm{dd},1}.$ As in the proof of
Lemma~\ref{lem:ext images}, since the source
and target of this morphism are of finite presentation over ${\mathbb F}$,
its image is a dense constructible subset of its scheme-theoretic image,
and so contains a dense open subset, which we may interpret as
a dense open substack ${\mathcal U}$ of $\overline{{\mathcal Z}}(J).$
From the definition of ${B^{\operatorname{dist}}},$
the finite type points of~${\mathcal U}$
correspond to
reducible Galois representations admitting a model of type~$\tau$
and refined shape~$(J,r)$, for which~$(J,r)$ is maximal.
That the~$\overline{{\mathcal Z}}(J)$ are pairwise distinct is immediate
from the above and~Proposition~\ref{prop:char-calculation}.
Combining this observation with Theorem~\ref{thm: dimension of refined shapes},
we see that by deleting the intersections of~$\overline{{\mathcal Z}}(J)$ with
the~$\overline{{\mathcal Z}}(J',r')$ for all refined shapes~$(J',r')\ne (J,r)$,
we obtain a dense open substack~${\mathcal U}'$ whose finite type points have
the property that every Breuil--Kisin model of type~$\tau$ of the
corresponding Galois representation has shape~$(J,r)$. The unicity of such a Breuil--Kisin model then follows
from Corollary~\ref{cor:ker-ext-maximal-nonzero}.
It remains to show that every such Galois representation $\overline{r}$ has $\sigmabar(\tau)_J$ as a
Serre weight.
Suppose first that $\tau$ is a principal series type. We claim that (writing $\sigmabar(\tau)_J =
\sigmabar_{\vec{t},\vec{s}} \otimes (\eta' \circ \det)$ as
in Appendix~\ref{sec: appendix on tame
types}) we
have \[T(\mathfrak{N}(J))|_{I_K}=\eta'|_{I_K} \prod_{i=0}^{f-1}\omega_{\sigma_i}^{s_i+t_i}.\]To see this, note that by
Proposition~\ref{prop:char-calculation} it is enough to show that
$\eta|_{I_K}=\eta'|_{I_K} \prod_{i=0}^{f-1}\omega_{\sigma_i}^{s_i+2t_i}$, which
follows by comparing the central characters of $\sigmabar(\tau)_J$
and $\sigmabar(\tau)$ (or from a direct computation with the
quantities $s_i,t_i$).
Since~$\det\overline{r}|_{I_K}=\eta\eta'\overline{\varepsilon}^{-1}$, we
have \[\overline{r}|_{I_K}\cong \eta'|_{I_K} \otimes \begin{pmatrix}
\prod_{i=0}^{f-1}\omega_{\sigma_i}^{s_i+t_i} &*\\ 0 & \overline{\varepsilon}^{-1}\prod_{i=0}^{f-1}\omega_{\sigma_i}^{t_i}
\end{pmatrix}.\] The result then
follows from Lemma~\ref{lem: explicit Serre weights with our
normalisations}, using Lemma~\ref{lem: list of things we need to
know about Serre weights}(2)
and the fact that the fibre of the morphism ${\mathcal C}^{\tau,\operatorname{BT},1} \to {\mathcal R}^{\mathrm{dd},1}$
above $\overline{r}$ is nonempty to see that $\overline{r}$ is not tr\`es ramifi\'ee.
The argument in the cuspidal case proceeds analogously, noting
that if the character $\theta$ (as in Appendix~\ref{sec: appendix on tame
types}) corresponds to $\widetilde{\theta}$ under local class field theory
then $\widetilde{\theta} |_{I_K} = \eta'
\prod_{i=0}^{f'-1} \omega_{\sigma'_i}^{t_i}$, and that from central
characters we have
$\eta\eta' = (\widetilde{\theta} |_{I_K})^2
\prod_{i=0}^{f-1} \omega_{\sigma_i}^{s_i}$.
\end{proof}
\begin{rem}
\label{rem: unique Serre weight isn't proved here but could be}With
more work, we could use the results of~\cite{gls13} and our
results on dimensions of families of extensions to strengthen
Theorem~\ref{thm: unique serre weight}, showing that there is a
dense set of finite type points of~$\overline{{\mathcal Z}}(J)$ with the property that the corresponding Galois
representations have $\sigmabar(\tau)_J$ as their \emph{unique} non-Steinberg Serre
weight. In fact, we will prove this as part of our work on
the geometric Breuil--M\'ezard conjecture in \cite{cegsA}
(which uses Theorem~\ref{thm: unique serre
weight} as an input).
\end{rem}
\subsection{Irreducible Galois representations}
\label{subsec:irreducible}
We now show
that the points of ${\mathcal C}^{\tau,\operatorname{BT},1}$ which are irreducible (that
is, cannot be written as an extension of rank one Breuil--Kisin modules) lie
in a closed substack of positive codimension. We begin with the
following useful observation.
\begin{lem}
\label{lem: closed points of irred}The rank two Breuil--Kisin modules with
descent data and $\Fbar_p$-coefficients which are irreducible (that
is, which cannot be written as an extension of rank~$1$ Breuil--Kisin
modules with descent data) are
precisely those whose corresponding \'etale $\varphi$-modules are
irreducible, or equivalently whose corresponding
$G_K$-representations are irreducible.
\end{lem}
\begin{proof}Let~$\mathfrak{M}$ be a Breuil--Kisin module with descent data
corresponding to a finite type point of~${\mathcal C}^{\tau,\operatorname{BT},1}_{\mathrm{dd}}$,
let~$M=\mathfrak{M}[1/u]$, and let~$\overline{\rho}$ be the $G_K$-representation
corresponding to~$M$. As noted in the proof of Lemma~\ref{lem: restricting to K_infty doesn't lose information about
rbar}, $\overline{\rho}$ is reducible if and only
if~$\overline{\rho}|_{G_{K_\infty}}$ is reducible, and by Lemma~\ref{lem:
Galois rep is a functor if A is actually finite local}, this is
equivalent to~$M$ being reducible. That this is in turn
equivalent to~$\mathfrak{M}$ being reducible may be proved in
the same way as~\cite[Lem.\ 5.5]{MR3164985}.
\end{proof}
Recall that~$L/K$ denotes the unramified quadratic extension; then the
irreducible representations~$\overline{\rho}:G_K\to\GL_2(\Fbar_p)$ are all
induced from characters of~$G_L$. Bearing in mind Lemma~\ref{lem:
closed points of irred}, this means that we can study
irreducible Breuil--Kisin modules via a consideration of base-change of Breuil--Kisin modules
from $K$ to~$L$, and our previous
study of reducible Breuil--Kisin modules.
Since this will require us to consider Breuil--Kisin modules (and moduli
stacks thereof) over both $K$ and $L$, we will have to introduce
additional notation in order to indicate over which of the two fields
we might be working. We do this simply by adding a subscript `$K$'
or `$L$' to our current notation. We will also
omit other decorations which are being held fixed
throughout the present discussion. Thus we write ${\mathcal C}_{K}^{\tau}$
to denote the moduli stack that was previously denoted ${\mathcal C}^{\tau,\operatorname{BT},1}$,
and ${\mathcal C}_{L}^{\tau_{|L}}$ to denote the corresponding
moduli stack for Breuil--Kisin modules over $L$, with the type taken
to be the restriction $\tau_{|L}$ of $\tau$ from $K$ to $L$.
(Note that whether $\tau$ is principal series or cuspidal,
the restriction $\tau_{| L}$ is principal series.)
As usual we fix
a uniformiser~$\pi$ of $K$, which we also take to be our fixed
uniformiser of $L$.
Also, throughout
this section we take $K' =
L(\pi^{1/(p^{2f}-1)})$, so that $K'/L$ is the standard choice of
extension for $\tau$ and $\pi$ regarded as a type and uniformiser for~$L$.
If~$\mathfrak{P}$ is a Breuil--Kisin module with descent data from~$K'$ to~$L$, then we
let
$\mathfrak{P}^{(f)}$ be the Breuil--Kisin module~$W(k')[[u]]\otimes_{\Gal(k'/k),W(k')[[u]]}\mathfrak{P}$,
where the pullback is given by the non-trivial automorphism
of~$k'/k$, and the descent data on $\mathfrak{P}^{(f)}$ is given by $\hat{g}(s
\otimes m) = \hat{g}(s) \otimes \hat{g}^{p^f}(m)$ for $s\in
W(k')[[u]]$ and $m \in \mathfrak{P}$.
In particular, we have $\mathfrak{M}(r,a,c)^{(f)} =
\mathfrak{M}(r',a',c')$ where $r'_i = r_{i+f}$, $a'_i=a_{i+f}$,
and $c'_i=c_{i+f}$.
We let $\sigma$ denote the non-trivial automorphism of $L$ over $K$,
and write $G := \Gal(L/K)= \langle \sigma \rangle$, a cyclic group
of order two.
There is an action $\alpha$ of $G$ on
${\mathcal C}_{L}$
defined via $\alpha_{\sigma}: \mathfrak{P} \mapsto \mathfrak{P}^{(f)}$.
More precisely, this induces an action
of $G$ on
${\mathcal C}_{L}^{\tau_{|L}}$
in the strict\footnote{From a $2$-categorical perspective,
it is natural to relax the notion
of group action on a stack so as to allow natural transformations,
rather than literal equalities, when relating multiplication
in the group to the compositions of the corresponding
equivalences of categories arising in the definition of an action.
An action
in which actual equalities hold is then called {\em strict}. Since
our action is strict, we are spared from having to consider
the various $2$-categorical aspects of the situation that would
otherwise arise.}
sense that
$$\alpha_{\sigma} \circ \alpha_{\sigma} =
\operatorname{id}_{{\mathcal C}_L^{\tau_{|L}}}.$$
We now define the fixed point stack for this action.
\begin{df}
\label{def:fixed points}
We let the fixed point stack
$({\mathcal C}_{L}^{\tau_{|L}})^G$ denote the stack
whose $A$-valued points consist of an $A$-valued point $\mathfrak{M}$
of ${\mathcal C}_L^{\tau_{|L}}$, together with an isomorphism
$\imath: \mathfrak{M} \buildrel \sim \over \longrightarrow \mathfrak{M}^{(f)}$ which satisfies the cocycle condition
that the composite
$$\mathfrak{M} \buildrel \imath \over \longrightarrow \mathfrak{M}^{(f)}
\buildrel \imath^{(f)} \over \longrightarrow (\mathfrak{M}^{(f)})^{(f)} = \mathfrak{M}$$
is equal to the identity morphism $\operatorname{id}_{\mathfrak{M}}$.
\end{df}
We now give another description of $({\mathcal C}_L^{\tau_{|L}})^G$, in terms
of various fibre products, which is technically useful.
This alternate description involves two steps. In the first step,
we define fixed points of the automorphism $\alpha_{\sigma}$,
without imposing the additional condition that the fixed point data
be compatible with the relation $\sigma^2~=~1$ in~$G$. Namely, we define
$$
({\mathcal C}_{L}^{\tau_{|L}})^{\alpha_{\sigma}}
:=
{\mathcal C}_{L}^{\tau_{|L}}
\underset
{{\mathcal C}_{L}^{\tau_{|L}}
\times
{\mathcal C}_{L}^{\tau_{|L}}
}
{\times}
{\mathcal C}_{L}^{\tau_{|L}}
$$where the first morphism ${\mathcal C}_{L}^{\tau_{|L}}\to{\mathcal C}_{L}^{\tau_{|L}}
\times
{\mathcal C}_{L}^{\tau_{|L}}$ is the diagonal, and the second is $\operatorname{id}\times\alpha_\sigma$.
Working through the definitions,
one finds
that an $A$-valued point of $({\mathcal C}_L^{\tau_{|L}})^{\alpha_{\sigma}}$
consists of a pair $(\mathfrak{M},\mathfrak{M}')$ of objects of ${\mathcal C}_L^{\tau_{|L}}$
over $A$, equipped with isomorphisms $\alpha: \mathfrak{M} \buildrel \sim \over \longrightarrow \mathfrak{M}'$
and $\beta: \mathfrak{M} \buildrel \sim \over \longrightarrow (\mathfrak{M}')^{(f)}$. The morphism
$$(\mathfrak{M},\mathfrak{M}',\alpha,\beta) \mapsto (\mathfrak{M}, \imath),$$
where $\imath := (\alpha^{-1})^{(f)} \circ \beta: \mathfrak{M} \to \mathfrak{M}^{(f)}$,
induces an isomorphism between $({\mathcal C}_L^{\tau_{|L}})^{\alpha_{\sigma}}$
and the stack classifying points $\mathfrak{M}$ of ${\mathcal C}_L^{\tau_{|L}}$
equipped with an isomorphism
$\imath: \mathfrak{M} \to \mathfrak{M}^{(f)}$.
(However, no cocycle condition has been imposed on $\imath$.)
Let $I_{{\mathcal C}_L^{\tau_{|L}}}$ denote the inertia stack
of ${\mathcal C}_L^{\tau_{|L}}.$
We define a morphism
$$({\mathcal C}_L^{\tau_{|L}})^{\alpha_{\sigma}} \to I_{{\mathcal C}_L^{\tau_{|L}}}$$
via $$(\mathfrak{M},\imath) \mapsto (\mathfrak{M}, \imath^{(f)}\circ \imath),$$
where, as in Definition~\ref{def:fixed points},
we regard the composite $\imath^{(f)}\circ \imath$ as an automorphism
of $\mathfrak{M}$ via the identity $(\mathfrak{M}^{(f)})^{(f)} = \mathfrak{M}.$
Of course, we also have the identity
section $e: {\mathcal C}_L^{\tau_{|L}} \to I_{{\mathcal C}_L^{\tau_{|L}}}$.
We now define
$$({\mathcal C}_L^{\tau_{|L}})^G :=
({\mathcal C}_L^{\tau_{|L}})^{\alpha_{\sigma}}
\underset{I_{{\mathcal C}_L^{\tau_{|L}}}}{\times}
{\mathcal C}_L^{\tau_{|L}}.
$$
If we use the description of $({\mathcal C}_L^{\tau_{|L}})^{\alpha_{\sigma}}$
as classifying
pairs $(\mathfrak{M},\imath),$ then (just unwinding definitions)
this fibre product classifies tuples $(\mathfrak{M},\imath,\mathfrak{M}',\alpha)$,
where $\alpha$ is an isomorphism $\mathfrak{M} \buildrel \sim \over \longrightarrow \mathfrak{M}'$ which furthermore
identifies $\imath^{(f)}\circ \imath$
with $\operatorname{id}_{\mathfrak{M}'}$. Forgetting $\mathfrak{M}'$ and $\alpha$ then induces
an isomorphism between~$({\mathcal C}_L^{\tau_{|L}})^G$, as defined
via the above fibre product, and the stack defined in
Definition~\ref{def:fixed points}.
To compare this fixed point stack to ${\mathcal C}^\tau_K$, we make the
following observations. Given a Breuil--Kisin module with descent
data from~$K'$ to~$K$, we obtain a Breuil--Kisin module with descent data
from~$K'$ to~$L$ via the obvious forgetful map. Conversely, given a
Breuil--Kisin module~$\mathfrak{P}$ with descent data
from~$K'$ to~$L$, the additional data required to enrich this to a
Breuil--Kisin module with descent data from~$K'$ to~$K$ can be described as follows: let $\theta \in \Gal(K'/K)$ denote the unique element which fixes
$\pi^{1/(p^{2f}-1)}$ and acts nontrivially on $L$. Then to enrich
the descent data on $\mathfrak{P}$ to descent data from $K'$ to $K$, it is
necessary and
sufficient to give an additive map $\hat\theta : \mathfrak{P} \to \mathfrak{P}$ satisfying
$\hat\theta(sm) = \theta(s)\hat\theta(m)$ for all $s \in \mathfrak{S}_{{\mathbb F}}$ and
$m \in \mathfrak{P}$, and such that $\hat\theta
\hat g \hat \theta = \hat g^{p^f}$ for all $g \in \Gal(K'/L)$.
In turn, the data of the additive map $\hat\theta:\mathfrak{P}\to\mathfrak{P}$ is
equivalent to giving the data of the map
$\theta(\hat\theta):\mathfrak{P}\to\mathfrak{P}^{(f)}$ obtained by
composing~$\hat\theta$ with the Frobenius on~$L/K$. The defining
properties of $\hat\theta$ are equivalent to asking that
this map is an isomorphism of Breuil--Kisin modules with descent data
satisfying the cocycle condition of Definition~\ref{def:fixed points};
accordingly, we have a natural morphism ${\mathcal C}_K^{\tau} \to
({\mathcal C}_L^{\tau_{|L}})^G$, and a restriction morphism
\addtocounter{subsubsection}{1}\begin{equation}\label{eqn: restriction morphism}{\mathcal C}_K^{\tau} \to
{\mathcal C}_L^{\tau_{|L}}. \end{equation}
The following simple lemma summarises the basic facts about
base-change in the situation we are considering.
\begin{lemma}\label{lem: fixed point stack iso}
There is an isomorphism ${\mathcal C}_K^{\tau} \buildrel \sim \over \longrightarrow
({\mathcal C}_L^{\tau_{|L}})^G$.
\end{lemma}
\begin{proof}
This follows immediately from the preceding discussion.
\end{proof}
\begin{rem}
\label{rem: the R version of the fixed point stack}In the proof of
Theorem~\ref{thm: irreducible Kisin modules can be ignored} we
will make use of the following analogue of Lemma~\ref{lem: fixed
point stack iso} for \'etale $\varphi$-modules. Write~${\mathcal R}_K$,
${\mathcal R}_L$ for the moduli stacks of Definition~\ref{defn: R^dd}, i.e.\
for the moduli stacks of rank~$2$ \'etale $\varphi$-modules with
descent data respectively to~$K$ or to~$L$. Then we have an action
of~$G$ on~${\mathcal R}_L$ defined
via~$M\mapsto M^{(f)}:=W(k')\otimes_{\Gal(k'/k),W(k')}M$, and we
define the fixed point stack~$({\mathcal R}_L)^G$ exactly as in
Definition~\ref{def:fixed points}: namely an $A$-valued point
of~$({\mathcal R}_L)^G$ consists of an $A$-valued point~$M$ of ${\mathcal R}_L$,
together with an isomorphism $\iota:M\stackrel{\sim}{\To} M^{(f)}$ satisfying the
cocycle condition. The preceding discussion goes through in this
setting, and shows that there is an isomorphism
${\mathcal R}_K\stackrel{\sim}{\To} ({\mathcal R}_L)^G$.
We also note that
the morphisms ${\mathcal C}_K^\tau \to {\mathcal C}_L^{\tau_{|L}}$ and
${\mathcal C}_K^\tau \to {\mathcal R}_K$
induce a monomorphism
\addtocounter{subsubsection}{1}\begin{equation}\label{eqn:C into R mono base change} {\mathcal C}_K^{\tau} \hookrightarrow {\mathcal C}_L^{\tau_{|L}} \times_{{\mathcal R}_L}
{\mathcal R}_K\end{equation}
One way to see this is to rewrite this morphism (using the previous discussion)
as a morphism
$$({\mathcal C}_L^{\tau_{|L}})^G \to {\mathcal C}_L^{\tau_{|L}} \times_{{\mathcal R}_L} ({\mathcal R}_L)^G,$$
and note that the descent data via $G$ on an object classified by
the source of this morphism is determined by the induced descent data on its
image in $({\mathcal R}_L)^G$.
\end{rem}
We now use Lemma~\ref{lem: fixed point stack iso} to study the locus of finite type points
of ${\mathcal C}_K^{\tau}$ which correspond to irreducible Breuil--Kisin modules.
Any irreducible Breuil--Kisin module over $K$ becomes reducible when restricted to $L$,
and so may be described as an extension
$$0 \to \mathfrak{N} \to \mathfrak{P} \to \mathfrak{M} \to 0,$$
where $\mathfrak{M}$ and $\mathfrak{N}$ are
Breuil--Kisin modules of rank one with descent data from $K'$ to $L$,
and $\mathfrak{P}$ is
additionally
equipped with an isomorphism $\mathfrak{P} \cong \mathfrak{P}^{(f)}$,
satisfying the cocycle condition of Definition~\ref{def:fixed
points}.
Note that the characters
$T(\mathfrak{M})$, $T(\mathfrak{N})$ of $G_{L_\infty}$ are distinct and cannot be
extended to characters of $G_K$. Indeed, this condition is plainly
necessary for an extension~$\mathfrak{P}$ to arise as the base change of an irreducible
Breuil--Kisin module
(see the
proof of Lemma~\ref{lem: restricting to K_infty doesn't lose information about
rbar}).
Conversely, if $T(\mathfrak{M})$, $T(\mathfrak{N})$ of $G_{L_\infty}$ are distinct and cannot be
extended to characters of $G_K$, then for any $\mathfrak{P} \in \Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$
whose descent data can be enriched to give descent data from $K'$ to $K$, this enrichment is necessarily irreducible.
In particular, the existence of such a~$\mathfrak{P}$ implies that
the descent data
on $\mathfrak{M}$ and $\mathfrak{N}$ cannot be enriched to give descent data from~$K'$ to~$K$.
We additionally have the following observation.
\begin{lemma}\label{lem:nonempty-then-map}
If $\mathfrak{M},\mathfrak{N}$ are such that there is an extension \[0 \to \mathfrak{N} \to \mathfrak{P}
\to \mathfrak{M} \to 0\] whose descent data can be enriched to give an irreducible
Breuil--Kisin module over~$K$, then there exists a
nonzero map $\mathfrak{N} \to \mathfrak{M}^{(f)}$.
\end{lemma}
\begin{proof}
The composition $\mathfrak{N}
\to \mathfrak{P} \xrightarrow{\hat\theta} \mathfrak{P} \to \mathfrak{M}$, in which first and
last arrows are the natural inclusions and projections, must be
nonzero (or else $\hat\theta$ would give descent data on $\mathfrak{N}$ from
$K'$ to $K$). It is not itself a map of Breuil--Kisin modules, because $\hat\theta$
is semilinear, but is a map of Breuil--Kisin modules when viewed as a map $\mathfrak{N} \to \mathfrak{M}^{(f)}$.
\end{proof}
We now consider (for our fixed~$\mathfrak{M}$, $\mathfrak{N}$, and working over~$L$
rather than over~$K$) the scheme $\Spec B^{{\operatorname{dist}}}$ as in
Subsection~\ref{subsec:universal families}. Following
Lemma~\ref{lem:nonempty-then-map}, we assume that there exists a
nonzero map $\mathfrak{N} \to \mathfrak{M}^{(f)}$. The observations made above show
that we are in the strict case, and thus that
$\Spec A^{{\operatorname{dist}}} = \GG_m\times \GG_m$ and that furthermore we may (and do)
set $V = T$. We consider the fibre product with the restriction morphism~\eqref{eqn: restriction morphism}
$$Y(\mathfrak{M},\mathfrak{N}):=\Spec B^{{\operatorname{dist}}} \times_{{\mathcal C}_L^{\tau_{|L}}}{\mathcal C}_K^{\tau}.$$
Let $\GG_m \hookrightarrow \GG_m\times\GG_m$ be the diagonal closed immersion,
and let $(\Spec B^{{\operatorname{dist}}})_{|\GG_m}$ denote the pull-back of $\Spec B^{{\operatorname{dist}}}$
along this closed immersion.
By Lemma~\ref{lem:nonempty-then-map}, the projection $Y(\mathfrak{M},\mathfrak{N}) \to \Spec B^{{\operatorname{dist}}}$
factors through
$(\Spec B^{{\operatorname{dist}}})_{|\GG_m},$
and combining this with Lemma~\ref{lem: fixed point stack iso} we see
that $Y(\mathfrak{M},\mathfrak{N})$ may also be described as the fibre product
$$(\Spec B^{{\operatorname{dist}}})_{|\GG_m} \times_{{\mathcal C}_L^{\tau_{|L}}} ({\mathcal C}_L^{\tau_{|L}})^G.$$
Recalling the warning of Remark~\ref{rem: potential confusion of two lots of Gm times
Gm}, Proposition~\ref{prop: construction of family monomorphing to C
and R}
now shows that there is a monomorphism
$$[ (\Spec B^{{\operatorname{dist}}})_{|\GG_m} / \GG_m\times\GG_m] \hookrightarrow {\mathcal C}_L^{\tau_{|L}},$$
and thus, by
Lemma~\ref{lem: morphism from quotient stack is a monomorphism},
that there is an isomorphism
$$
(\Spec B^{{\operatorname{dist}}})_{|\GG_m}
\times_{{\mathcal C}_L^{\tau_{|L}}}
(\Spec B^{{\operatorname{dist}}})_{|\GG_m}
\buildrel \sim \over \longrightarrow
(\Spec B^{{\operatorname{dist}}})_{|\GG_m} \times \GG_m\times \GG_m.$$
(An inspection of the proof of Proposition~\ref{prop: construction of family monomorphing to C and R} shows that in fact
this result is more-or-less proved directly,
as the key step in proving the proposition.)
An elementary manipulation with fibre products then shows that there
is an isomorphism
$$Y(\mathfrak{M},\mathfrak{N}) \times_{({\mathcal C}_L^{\tau_{|L}})^G} Y(\mathfrak{M},\mathfrak{N})
\buildrel \sim \over \longrightarrow Y(\mathfrak{M},\mathfrak{N})\times \GG_m\times\GG_m,$$
and thus, by another application of
Lemma~\ref{lem: morphism from quotient stack is a monomorphism},
we find that there is a monomorphism
\addtocounter{subsubsection}{1}\begin{equation}\label{eqn: mono from Y}[Y(\mathfrak{M},\mathfrak{N})/\GG_m\times\GG_m] \hookrightarrow ({\mathcal C}_L^{\tau_{|L}})^G.\end{equation}
We define ${\mathcal C}_{\operatorname{irred}}$ to be the union over all such pairs $(\mathfrak{M},\mathfrak{N})$ of
the scheme-theoretic images of the various projections
$Y(\mathfrak{M},\mathfrak{N}) \to ({\mathcal C}_L^{\tau_{|L}})^G$. Note that this
image
depends only on $(\mathfrak{M},\mathfrak{N})$ up to simultaneous
unramified twist of $\mathfrak{M}$ and $\mathfrak{N}$, and there are only
finitely many such pairs $(\mathfrak{M},\mathfrak{N})$ up to such unramified twist. By
definition, ${\mathcal C}_{\operatorname{irred}}$ is a closed substack of
${\mathcal C}^{\tau}_K$
which contains every finite
type point of ${\mathcal C}^{\tau}_K$ corresponding to an irreducible Breuil--Kisin
module.
The following is the main result of this section.
\begin{thm}
\label{thm: irreducible Kisin modules can be ignored} The
closed substack ${\mathcal C}_{\operatorname{irred}}$ of ${\mathcal C}_K^{\tau}={\mathcal C}^{\tau,\operatorname{BT},1}$, which contains
every finite type point of ${\mathcal C}^{\tau}_K$ corresponding
to an irreducible Breuil--Kisin module,
has dimension strictly less than $[K:\Q_p]$.
\end{thm}
\begin{proof}
As noted above, there are only finitely many pairs $(\mathfrak{M},\mathfrak{N})$ up to
unramified twist, so it is enough to show that for each of them, the
scheme-theoretic image of the monomorphism~\eqref{eqn: mono from Y}
has dimension less than $[K:{\mathbb Q}_p]$.
By \cite[\href{https://stacks.math.columbia.edu/tag/0DS6}{Tag 0DS6}]{stacks-project},
to prove the present theorem,
it then suffices to show that
$\dim Y(\mathfrak{M},\mathfrak{N}) \leq [K:{\mathbb Q}_p] + 1$
(since $\dim \GG_m\times\GG_m = 2$).
To establish this, it suffices to show, for each point
$x \in \GG_m({\mathbb F}'),$ where ${\mathbb F}'$ is a finite extension of~${\mathbb F}$,
that the dimension of the fibre $Y(\mathfrak{M},\mathfrak{N})_x$ is bounded by
$[K:{\mathbb Q}_p]$. After relabelling, as we may, the field ${\mathbb F}'$ as ${\mathbb F}$ and
the Breuil--Kisin modules $\mathfrak{M}_x$ and $\mathfrak{N}_x$ as $\mathfrak{M}$ and $\mathfrak{N}$, we may
suppose that in fact ${\mathbb F}'={\mathbb F}$ and~$x=1$.
Manipulating
the fibre product appearing in the definition of~$Y(\mathfrak{M},\mathfrak{N})$, we find
that
\addtocounter{subsubsection}{1}\begin{equation}
\label{eqn:Y fibre blahblah}
Y(\mathfrak{M},\mathfrak{N})_1 =
\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N}) \times_{{\mathcal C}_L^{\tau_{|L}}} {\mathcal C}_K^{\tau},
\end{equation}
where the fibre product is taken with respect to the
morphism $\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N}) \to {\mathcal C}_L^{\tau}$ that
associates the corresponding
rank two extension to an extension
of rank one Breuil--Kisin modules,
and the restriction
morphism~\eqref{eqn: restriction morphism}.
In order to bound the dimension of~$Y(\mathfrak{M},\mathfrak{N})_1$, it will be easier
to first embed it into another,
larger, fibre product, which we now introduce. Namely, the
monomorphism~\eqref{eqn:C into R mono base change}
induces a monomorphism
$$Y(\mathfrak{M},\mathfrak{N})_1 \hookrightarrow Y'(\mathfrak{M},\mathfrak{N})_1 :=
\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N}) \times_{{\mathcal R}_L} {\mathcal R}_K.$$
Any finite type point of this fibre product lies over a fixed isomorphism
class of finite type points
of ${\mathcal R}_K$ (corresponding to some fixed irreducible Galois
representation); we let $P$ be a choice of such a point. The
restriction of $P$ then lies in a fixed isomorphism class of finite
type points of ${\mathcal R}_L$ (namely, the isomorphism
class of the direct sum
$\mathfrak{M}[1/u]\oplus \mathfrak{N}[1/u] \cong \mathfrak{M}[1/u] \oplus \mathfrak{M}^{(f)}[1/u]$).
Thus the projection $Y'(\mathfrak{M},\mathfrak{N})_1 \to {\mathcal R}_K$ factors through
the residual gerbe of $P$, while the morphism $Y'(\mathfrak{M},\mathfrak{N})_1
\to {\mathcal R}_L$ factors through the residual gerbe
of
$\mathfrak{M}[1/u]\oplus \mathfrak{N}[1/u] \cong \mathfrak{M}[1/u] \oplus \mathfrak{M}^{(f)}[1/u]$.
Since $P$ corresponds via
Lemma~\ref{lem: Galois rep is a functor if A is actually finite local}
to an irreducible Galois representation,
we find that $\Aut(P) = \GG_m$.
Since
$\mathfrak{M}[1/u]\oplus \mathfrak{N}[1/u] $ corresponds via
Lemma~\ref{lem: Galois rep is a functor if A is actually finite local}
to the direct sum of two non-isomorphic Galois characters, we find
that $\Aut(\mathfrak{M}[1/u]\oplus \mathfrak{N}[1/u] ) = \GG_m \times \GG_m$.
Thus we obtain monomorphisms
\addtocounter{subsubsection}{1}\begin{multline}
\label{eqn:Y fibre}
Y(\mathfrak{M},\mathfrak{N})_1 \hookrightarrow
Y'(\mathfrak{M},\mathfrak{N})_1
\\
\hookrightarrow
\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})
\times_{[\Spec {\mathbb F}'/\GG_m\times \GG_m]} [\Spec {\mathbb F}'/\GG_m]
\cong
\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})
\times \GG_m.
\end{multline}
In Proposition~\ref{prop:irred-bound} we obtain a description of the image of $Y(\mathfrak{M},\mathfrak{N})_1$
under this monomorphism which allows us to bound its dimension
by~$[K:\Q_p]$, as required.
\end{proof}
We now prove the bound on the dimension of~$Y(\mathfrak{M},\mathfrak{N})_1$
that we used in the proof of Theorem~\ref{thm: irreducible Kisin
modules can be ignored}. Before establishing this bound, we make some further remarks.
To begin with, we remind the reader
that we are working with Breuil--Kisin modules, \'etale $\varphi$-modules, etc.,
over $L$ rather than $K$, so that e.g.\
the structure parameters of $\mathfrak{M}, \mathfrak{N}$ are periodic modulo $f' = 2f$
(not modulo $f$), and the pair $(\mathfrak{M},\mathfrak{N})$ has type $\tau|_L$.
We will readily apply various pieces of notation that were
introduced above in the
context of the field $K$, adapted in the obvious manner to the context
of the field $L$.
(This applies in particular to the notation $\Cone_{u}$, $\Czero_{u}$, etc.\
introduced in Definition~\ref{notn:calh}.)
We write
$m, n$ for the standard generators of $\mathfrak{M}$ and
$\mathfrak{N}$.
The existence of the nonzero map $\mathfrak{N} \to \mathfrak{M}^{(f)}$ implies that
$\alpha_i(\mathfrak{N}) \ge \alpha_{i+f}(\mathfrak{M})$ for all $i$, and also that
$\prod_i a_i = \prod_i b_i$. Thanks to the latter we will lose no
generality by assuming that $a_i = b_i =1 $ for all $i$. Let
$\tilde m$ be the standard generator for $\mathfrak{M}^{(f)}$. The map
$\mathfrak{N} \to \mathfrak{M}^{(f)}$ will (up to a scalar) have the form
$n_i \mapsto u^{x_i} \tilde m_{i}$ for integers $x_i$ satisfying
$px_{i-1}-x_i = s_i - r_{i+f}$ for all $i$; thus
$x_i = \alpha_{i}(\mathfrak{N}) - \alpha_{i+f}(\mathfrak{M})$ for all $i$. Since the
characters $T(\mathfrak{M})$ and $T(\mathfrak{N})$ are conjugate we must have
$x_i \equiv d_i - c_{i+f} \pmod{p^{f'}-1}$ for all $i$ (\emph{cf}.\
Lemma~\ref{lem: generic fibres of rank 1 Kisin modules}). Moreover,
the strong determinant condition $s_i + r_i = e'$ for all $i$ implies
that $x_i = x_{i+f}$.
We stress that we make no claims about the
optimality of the following result; we merely prove ``just what we
need'' for our applications. Indeed the estimates of
\cite{MR2562792,CarusoKisinVar} suggest that improvement should
be possible.
\begin{prop}\label{prop:irred-bound}
We have $\dim
Y(\mathfrak{M},\mathfrak{N})_1 \le [K:\Q_p]$.
\end{prop}
\begin{remark}
Since the image of $Y(\mathfrak{M},\mathfrak{N})_1$ in $\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ lies
in $\kExt^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ with fibres that can be seen to have dimension at most one,
many cases of Proposition~\ref{prop:irred-bound} will already follow from
Remark~\ref{rem:half} (applied with $L$ in place of~$K$).
\end{remark}
\begin{proof}[Proof of Proposition~\ref{prop:irred-bound}]
Let $\mathfrak{P} = \mathfrak{P}(h)$ be an element of $\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ whose
descent data can be enriched to give descent data from $K'$ to $K$, and
let $\widetilde{\gP}$ be such an enrichment.
By Lemma~\ref{lem:nonempty-then-map} (and the discussion preceding
that lemma)
the \'etale $\varphi$-module $\mathfrak{P}[\frac 1u]$ is
isomorphic to $\mathfrak{M}[\frac 1u] \oplus \mathfrak{M}^{(f)}[\frac 1u]$. All
extensions of the $G_{L_{\infty}}$-representation $T(\mathfrak{M}[\frac 1u] \oplus
\mathfrak{M}^{(f)}[\frac 1u])$ to a representation of $G_{K_{\infty}}$ are
isomorphic (and given by the induction of $T(\mathfrak{M}[\frac 1u])$ to~$G_{K_\infty}$),
so the same is true of the \'etale $\varphi$-modules with
descent data from $K'$ to $K$ that enrich the descent data on
$\mathfrak{M}[\frac 1u] \oplus \mathfrak{M}^{(f)}[\frac 1u]$. One such enrichment, which
we denote $P$, has $\hat\theta$ that interchanges $m$ and
$\tilde m$. Thus $\widetilde{\gP}[\frac 1u]$ is isomorphic to $P$.
As in the proof of Lemma~\ref{lem:nonempty-then-map}, the hypothesis that $T(\mathfrak{M}) \not\cong
T(\mathfrak{N})$ implies that any non-zero map (equivalently, isomorphism) of
\'etale $\varphi$-modules with descent data $\lambda : \widetilde{\gP}[\frac 1u] \to P$ takes the submodule $\mathfrak{N}[\frac
1u]$ to $\mathfrak{M}^{(f)}[\frac 1u]$. We may scale the map $\lambda$ so that
it restricts to the map $n_i \mapsto u^{x_i} \tilde m_i$ on $\mathfrak{N}$.
Then there is an element $\xi \in {\mathbb F}^\times$ so that
$\lambda$ induces multiplication by $\xi$ on the common quotients $\mathfrak{M}[\frac 1u]$.
That is, the map $\lambda$ may be assumed to have the form
\addtocounter{subsubsection}{1}\begin{equation}\label{eq:lambdamap}
\begin{pmatrix}
n_{i} \\ m_{i}
\end{pmatrix} \mapsto
\begin{pmatrix}
u^{x_i} & 0 \\ \nu_i & \xi
\end{pmatrix}
\begin{pmatrix}
\tilde m_{i} \\ m_{i}
\end{pmatrix}
\end{equation}
for some $(\nu_i) \in {\mathbb F}((u))^{f'}$. The condition that the map
$\lambda$ commutes with the descent data from $K'$ to $L$ is seen to be
equivalent to the condition that nonzero terms in $\nu_i$ have degree
congruent to $c_i -d_i + x_i \pmod{p^{f'}-1}$; or equivalently, if we
define $\mu_i := \nu_i u^{-x_i}$ for all $i$, that the tuple $\mu = (\mu_i)$
is an element of the set $\Czero_{u} = \Czero_{u}(\mathfrak{M},\mathfrak{N})$
of Definition~\ref{notn:calh}.
The condition that $\lambda$ commutes with $\varphi$ can be checked to give
\begin{equation*}
\varphi \begin{pmatrix}
n_{i-1} \\ m_{i-1}
\end{pmatrix}
= \begin{pmatrix}
u^{s_i} & 0 \\ \varphi(\nu_{i-1}) u^{r_{i+f}-x_i} - \nu_i u^{r_i-x_i}
& u^{r_i}
\end{pmatrix}\begin{pmatrix}
n_{i} \\ m_{i}
\end{pmatrix}.
\end{equation*}
The extension $\mathfrak{P}$ is of the form $\mathfrak{P}(h)$, for some $h \in \scrC^1$
as in Definition~\ref{notn:calh}.
The lower-left entry of the first matrix on the right-hand side of the
above equation must then be $h_i$. Since $r_{i+f}-x_i = s_i - px_{i-1}$,
the resulting condition can be rewritten as
\[ h_i= \varphi(\mu_{i-1}) u^{s_i} - \mu_i u^{r_i},\]
or equivalently that $h = \partial(\mu)$. Comparing with
Remark~\ref{rem:explicit-ker-ext}, we recover the fact that the
extension class of $\mathfrak{P}$ is an element
of~$\kExt^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$, and the tuple $\mu$ determines an
element of the space $\mathscr{H}$ defined as follows.
\begin{defn}\label{defn:calw} The map $\partial \colon \Czero_{u} \to \Cone_{u}$ induces a map
$\Czero_{u}/\scrC^0 \to \Cone_{u}/\partial(\scrC^0)$, which we also
denote
$\partial$. We let $\mathscr{H} \subset \Czero_{u}/\scrC^0$
denote the subspace consisting of elements $\mu$ such that
$\partial(\mu) \in \scrC^1/\partial(\scrC^0)$.
\end{defn}
By the discussion following
Lemma~\ref{lem:explicit-complex}, an element $\mu \in \mathscr{H}$ determines an
extension $\mathfrak{P}(\partial(\mu))$. Indeed,
Remark~\ref{rem:explicit-ker-ext} and the proof of \eqref{eqn:
computing kernel of Ext groups} taken together show that there is a natural isomorphism,
in the style of Lemma~\ref{lem:explicit-complex}, between the morphism
$\partial : \mathscr{H} \to \scrC^1/\partial(\scrC^0)$ and the connection map
$\Hom_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N}[1/u]/\mathfrak{N}) \to \Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$, with
$\im\partial$ corresponding to $\kExt^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$.
Conversely, let $h$ be an element of $\partial(\Czero_{u}) \cap \scrC^1$,
write $h = \partial(\mu)$ for some $\mu \in \Czero_{u}$, and set $\nu_i = u^{x_i} \mu_i$. The condition that there is a Breuil--Kisin module
$\widetilde{\gP}$ with descent data from $K'$ to $K$ and $\xi \in {\mathbb F}^{\times}$ such that $\lambda : \widetilde{\gP}[\frac1u]
\to P$ defined as above is an isomorphism is precisely the condition that
the map $\hat\theta$ on $P$ pulls back via $\lambda$ to a map that
preserves $\mathfrak{P}$. One computes that this pullback is
\begin{equation*}
\hat\theta \begin{pmatrix}
n_{i} \\ m_{i}
\end{pmatrix}
= \xi^{-1} \begin{pmatrix}
-\nu_{i+f} & u^{x_i} \\
(\xi^2-\nu_i \nu_{i+f}) u^{-x_i} & \nu_i
\end{pmatrix}
\begin{pmatrix}
n_{i+f} \\ m_{i+f}
\end{pmatrix}
\end{equation*}
recalling that $x_i =x_{i+f}$.
We deduce that $\hat\theta$ preserves $\mathfrak{P}$
precisely when the $\nu_i$ are
integral and $\nu_i \nu_{i+f} \equiv \xi^2 \pmod{u^{x_i}}$ for
all~$i$. For~$i$ with $x_i=0$
the latter condition is automatic given the former, which is
equivalent to the condition that $\mu_i$ and $\mu_{i+f}$ are both
integral. If instead $x_i > 0$, then we have the nontrivial
condition $\nu_{i+f} \equiv \xi^2 \nu_{i}^{-1} \pmod{u^{x_i}}$; in other
words that $\mu_i, \mu_{i+f}$ have $u$-adic valuation exactly $-x_i$,
and their principal parts determine one another via the equation
$\mu_{i+f} \equiv \xi^2 (u^{2x_i} \mu_i )^{-1}
\pmod{1}$.
Let
$\mathbf{G}_{m,\xi}$ be the multiplicative group with parameter
$\xi$. We now (using the notation of Definition~\ref{defn:calw}) define $\mathscr{H}' \subset \Czero_{u}/\scrC^0 \times
\mathbf{G}_{m,\xi}$ to be the subvariety
consisting of the pairs
$(\mu,\xi)$ with exactly the preceding properties; that is, we
regard~$\Czero_{u}/\scrC^0$ as an Ind-affine space in the obvious way, and
define~$\mathscr{H}'$ to be the pairs $(\mu,\xi)$ satisfying
\begin{itemize}
\item if $x_i=0$ then $\val_i \mu = \val_{i+f} \mu =\infty$,
and
\item if $x_i >0$ then $\val_i \mu = \val_{i+f} \mu = -x_i$ and $\mu_{i+f}
\equiv \xi^2 (u^{2x_i} \mu_i)^{-1} \pmod{u^0}$
\end{itemize}
where we write $\val_i \mu$ for the $u$-adic valuation of $\mu_i$, putting $\val_i \mu = \infty$ when $\mu_i$ is integral.
Putting all this together with~\eqref{eqn:Y fibre blahblah}, we find that the map
\[ \mathscr{H}' \cap (\mathscr{H} \times \mathbf{G}_{m,\xi}) \to
Y(\mathfrak{M},\mathfrak{N})_1 \]
sending $(\mu,\xi)$ to the pair $(\mathfrak{P},\widetilde{\gP})$ is a well-defined
surjection,
where $\mathfrak{P} =
\mathfrak{P}(\partial(\mu))$, $\widetilde{\gP}$ is the enrichment of $\mathfrak{P}$ to a Breuil--Kisin
module with descent data from $K'$ to $K$ in which $\hat\theta$ is
pulled back to $\mathfrak{P}$ from $P$ via the map $\lambda$ as in
\eqref{eq:lambdamap}.
(Note that~$Y(\mathfrak{M},\mathfrak{N})_1$ is reduced and
of finite type, for
example by~\eqref{eqn:Y fibre}, so the surjectivity can be checked on
$\Fbar_p$-points.)
In particular $\dim Y(\mathfrak{M},\mathfrak{N})_1 \le \dim \mathscr{H}'.$
Note that $\mathscr{H}'$ will be empty if for some $i$ we have $x_i > 0$ but
$x_i + c_i-d_i \not\equiv 0 \pmod{p^{f'}-1}$ (so that $\nu_i$ cannot
be a $u$-adic unit).
Otherwise, the dimension of $\mathscr{H}'$ is easily computed to be
$D = 1+\sum_{i=0}^{f-1} \lceil x_i/(p^{f'}-1) \rceil$ (indeed if~$d$ is the number of nonzero
$x_i$'s, then $\mathscr{H}' \cong \GG_m^{d+1} \times \GG_a^{D-d-1}$),
and since
$x_i \le e'/(p-1)$ we find that $\mathscr{H}'$ has dimension at most $1 + \lceil e/(p-1) \rceil f$.
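To spell out the count: for each of the $d$ indices $i$ with $x_i > 0$, the
principal part of $\mu_i$ is supported in the negative degrees
$-x_i, \, -x_i + (p^{f'}-1), \ldots$ permitted by the congruence condition on
its nonzero terms, and there are exactly $\lceil x_i/(p^{f'}-1) \rceil$ of
these; the leading coefficient is a unit (a factor of $\GG_m$), the remaining
coefficients are unconstrained (factors of $\GG_a$), and the principal part of
$\mu_{i+f}$ is then determined by $\mu_i$ and $\xi$. Together with the
parameter $\xi$ itself, this accounts for the description
$\mathscr{H}' \cong \GG_m^{d+1} \times \GG_a^{D-d-1}$.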
This establishes the bound $\dim
Y(\mathfrak{M},\mathfrak{N})_1 \le 1 + \lceil e/(p-1) \rceil f$.
Since $p > 2$ this bound already establishes the proposition when $e
> 1$.
If instead $e=1$ the above bound gives $\dim Y(\mathfrak{M},\mathfrak{N})_1 \le [K:\Q_p]
+ 1$. Suppose for the sake of
contradiction that equality holds. This is only possible if $\mathscr{H}'
\cong \GG_m^{f+1}$, $\mathscr{H}' \subset \mathscr{H} \times \mathbf{G}_{m,\xi}$,
and $x_i = [d_i - c_i] > 0$ for all
$i$.
For each $i$, define $\mu^{(i)} \in \Czero_{u}$ to be the element whose $i$th
component is $\mu_{i} = u^{-[d_{i}-c_{i}]}$ and whose other components
vanish. Let ${\mathbb F}''/{\mathbb F}$ be
any finite extension such that $\#{\mathbb F}'' > 3$. For each nonzero $z \in {\mathbb F}''$
define $\mu_z = \sum_{j \neq i,i+f} \mu^{(j)} + z \mu^{(i)} + z^{-1}
\mu^{(i+f)}$, so that $(\mu_z, 1)$ is an element of $\mathscr{H}'({\mathbb F}'')$.
Since $\mathscr{H}' \subset \mathscr{H} \times \mathbf{G}_{m,\xi}$ and $\mathscr{H}$ is
linear, the differences between the $\mu_z$ for varying $z$ lie in
$\mathscr{H}({\mathbb F}'')$, and (e.g.\ by considering $\mu_1 - \mu_{-1}$ and $\mu_1 -
\mu_{z}$ for any $z \in {\mathbb F}''$ with $z\neq z^{-1}$) we deduce that each
$\mu^{(i)}$ lies in $\mathscr{H}$. In particular
each
$\partial(\mu^{(i)})$ lies in $\scrC^1$.
If $(i-1,i)$ were not a transition then (since $e=1$) we would have
either $r_i =0 $ or $s_i = 0$. The former would contradict
$\partial(\mu^{(i)}) \in \scrC^1$ (since the $i$th component of
$\partial(\mu^{(i)})$ would be $u^{-[d_i-c_i]}$, of negative degree),
and similarly the latter would contradict $\partial(\mu^{(i-1)}) \in
\scrC^1$. Thus $(i-1,i)$ is a transition for all $i$. In fact the same
observations show more precisely that $r_i \ge x_i = [d_i-c_i]$ and $s_i
\ge p x_{i-1} = p [d_{i-1}-c_{i-1}]$. Summing these inequalities and subtracting
$e'$ we obtain $0 \ge p [d_{i-1}-c_{i-1}] - [c_i-d_i]$, and comparing
with \eqref{eq:gammastar}
shows that we must also have $\gamma_i^*=0$ for
all $i$. Since $e=1$ and $(i-1,i)$ is a transition for all $i$ the refined shape of the pair $(\mathfrak{M},\mathfrak{N})$ is
automatically maximal;\ but then we are in the exceptional case of
Proposition~\ref{prop:ker-ext-maximal},
which (recalling the proof of that Proposition) implies that $T(\mathfrak{M}) \cong T(\mathfrak{N})$.
This is the desired contradiction.
\end{proof}
\subsection{Irreducible components}\label{subsec: irred components}
We can now use our results on families of extensions of characters to
classify the irreducible components of the stacks~${\mathcal C}^{\tau,\operatorname{BT},1}$
and~${\mathcal Z}^{\tau,1}$. In the article \cite{cegsA}
we will combine
these results with results coming from Taylor--Wiles patching (in
particular the results of~\cite{geekisin,emertongeerefinedBM})
to describe the
closed points of each irreducible component of~${\mathcal Z}^{\tau,1}$ in terms
of the weight part of Serre's conjecture.
\begin{cor}
\label{cor: the C(J) are the components}Each irreducible component
of~${\mathcal C}^{\tau,\operatorname{BT},1}$ is of the form~$\overline{{\mathcal C}}(J)$ for some~$J$;
conversely, each~$\overline{{\mathcal C}}(J)$ is an irreducible component of~${\mathcal C}^{\tau,\operatorname{BT},1}$.
\end{cor}
\begin{rem}
\label{rem: haven't yet proved the C(J) are distinct}Note that at
this point we have not established that different sets~$J$ give
distinct irreducible components~$\overline{{\mathcal C}}(J)$; we will prove this in Section~\ref{subsec: map to
Dieudonne stack} below by a consideration of Dieudonn\'e
modules.
\end{rem}
\begin{proof}[Proof of Corollary~{\ref{cor: the C(J) are the
components}}]By~Theorem~\ref{cor: Kisin moduli consequences of local models}(2), ${\mathcal C}^{\tau,\operatorname{BT},1}$ is
equidimensional of dimension~$[K:\Q_p]$. By construction, the~$\overline{{\mathcal C}}(J)$ are irreducible
substacks of~${\mathcal C}^{\tau,\operatorname{BT},1}$, and by Theorem~\ref{thm:
dimension of refined shapes} they also have dimension~$[K:\Q_p]$, so they are in fact
irreducible components by~\cite[\href{https://stacks.math.columbia.edu/tag/0DS2}{Tag 0DS2}]{stacks-project}.
By Theorem~\ref{thm: irreducible Kisin
modules can be ignored} and Theorem~\ref{thm: dimension of refined
shapes}, we see that there is a closed substack
${\mathcal C}_{\mathrm{small}}$ of~${\mathcal C}^{\tau,\operatorname{BT},1}$ of dimension strictly
less than~$[K:\Q_p]$, with the property that every finite type point
of~${\mathcal C}^{\tau,\operatorname{BT},1}$ is a point of at least one of the~$\overline{{\mathcal C}}(J)$
or of~${\mathcal C}_{\mathrm{small}}$ (or both). Indeed, every extension of
refined shape~$(J,r)$ lies
on~$\overline{{\mathcal C}}(J,r)$, by Remark~\ref{rem-all-pts}, so we can
take~${\mathcal C}_{\mathrm{small}}$ to be the union of the
stack~${\mathcal C}_{\mathrm{irred}}$ of Theorem~\ref{thm: irreducible Kisin
modules can be ignored} and the stacks~$\overline{{\mathcal C}}(J,r)$ for
non-maximal shapes~$(J,r)$.
Since ${\mathcal C}^{\tau,\operatorname{BT},1}$ is
equidimensional of dimension~$[K:\Q_p]$, it follows
that the~$\overline{{\mathcal C}}(J)$ exhaust
the irreducible components of~${\mathcal C}^{\tau,\operatorname{BT},1}$, as required.
\end{proof}
We now deduce a classification of the
irreducible components of ${\mathcal Z}^{\tau,1}$. In the paper \cite{cegsA}
we will give
a considerable refinement of
this, giving a precise description of the finite type points of the
irreducible components in terms of the weight part of Serre's conjecture.
\begin{cor}
\label{cor: components of Z are exactly the Z(J)}The irreducible
components of~${\mathcal Z}^{\tau,1}$ are precisely the~$\overline{{\mathcal Z}}(J)$
for~$J\in{\mathcal P}_\tau$, and if $J\ne J'$ then~$\overline{{\mathcal Z}}(J)\ne\overline{{\mathcal Z}}(J')$.
\end{cor}
\begin{proof}
By Theorem~\ref{thm: identifying the vertical components}, if
$J\in{\mathcal P}_\tau$ then~$\overline{{\mathcal Z}}(J)$ is an irreducible component of
~${\mathcal Z}^{\tau,1}$.
Furthermore, these~$\overline{{\mathcal Z}}(J)$ are pairwise
distinct by Theorem~\ref{thm: unique serre weight}.
Since the morphism
${\mathcal C}^{\tau,\operatorname{BT},1}\to{\mathcal Z}^{\tau,1}$ is scheme-theoretically
dominant, it follows from Corollary~\ref{cor: the C(J) are the
components} that each irreducible component of ${\mathcal Z}^{\tau,1}$
is dominated by some~$\overline{{\mathcal C}}(J)$. Applying Theorem~\ref{thm:
identifying the vertical components} again, we see that if
$J\notin{\mathcal P}_\tau$ then~$\overline{{\mathcal C}}(J)$ does not dominate an irreducible
component, as required.
\end{proof}
\subsection{Dieudonn\'e modules and the morphism to the gauge stack}\label{subsec: map to
Dieudonne stack}
We now study the images of the irreducible components $\overline{{\mathcal C}}(J)$
in the gauge stack ${\mathcal G}_\eta$;
this amounts to computing
the Dieudonn\'e modules and Galois
representations associated to the extensions of Breuil--Kisin modules that we
considered in Section~\ref{sec: extensions of rank one Kisin
modules}.
Suppose throughout this subsection that $\tau$ is a non-scalar type,
and that $(J,r)$ is a maximal refined shape.
Recall that in the cuspidal case this entails that $i \in J$ if and
only if $i + f \not\in J$.
\begin{lemma}
\label{lem:Dieudonne modules}
Let $\mathfrak{P} \in \Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$ be an extension of type $\tau$
and refined shape $(J,r)$. Then for $i \in {\mathbb Z}/f'{\mathbb Z}$ we have $F=0$ on $D(\mathfrak{P})_{\eta,i-1}$ if $i\in
J$, while $V=0$ on $D(\mathfrak{P})_{\eta,i}$ if
$i\notin J$.
\end{lemma}
\begin{proof}
Recall that $D(\mathfrak{P}) = \mathfrak{P}/u\mathfrak{P}$. Let $w_i$ be the image of $m_i$ in
$D(\mathfrak{P})$ if $i \in J$, and let $w_i$ be the image of $n_i$ in $D(\mathfrak{P})$
if $i \not\in J$. It follows easily from the
definitions that $D(\mathfrak{P})_{\eta,i}$ is generated over~${\mathbb F}$ by $w_i$.
Recall that the actions of $F,V$ on $D(\mathfrak{P})$ are as specified in
Definition~\ref{def: Dieudonne module formulas}. In particular $F$ is
induced by $\varphi$, while $V$ is $c^{-1} \mathfrak{V}$ mod $u$ where $\mathfrak{V}$ is the
unique map on $\mathfrak{P}$ satisfying $\mathfrak{V} \circ \varphi =
E(u)$, and $c = E(0)$. For the Breuil--Kisin module $\mathfrak{P}$, we have
\[\varphi(n_{i-1}) = b_i u^{s_i} n_i,\qquad \varphi(m_{i-1}) = a_i u^{r_i}
m_i + h_i n_i,\] and so
one checks (using that $E(u) = u^{e'}$ in ${\mathbb F}$)
that
$$\mathfrak{V}(m_i) = a_i^{-1} u^{s_i} m_{i-1} - a_{i}^{-1} b_i^{-1}
h_i n_{i-1} , \qquad \mathfrak{V}(n_i) = b_i^{-1} u^{r_i} n_{i-1}.$$
From Definition~\ref{df:extensions of shape $J$} and the
discussion immediately following it, we recall that if $(i-1,i)$ is not a transition
then $r_i = e'$,
$s_i=0$, and $h_i$ is divisible by $u$ (the latter because nonzero
terms of $h_i$ have degrees congruent to $r_i+c_i-d_i
\pmod{p^{f'}-1}$, and $c_i \not\equiv d_i$ since $\tau$ is non-scalar).
On the other hand if $(i-1,i)$ is a transition, then $r_i , s_i >0$,
and nonzero terms of $h_i$ have degrees divisible by
$p^{f'}-1$; in that case we write $h_i^0$ for the constant coefficient
of $h_i$, and we remark that $h_i^0$ does not vanish identically on $\Ext^1_{\K{{\mathbb F}}}(\mathfrak{M},\mathfrak{N})$.
Suppose, for instance, that $i-1 \in J$ and $i \in J$. Then
$w_{i-1}$ and $w_i$ are the images in $D(\mathfrak{P})$ of $m_{i-1}$ and
$m_{i}$. From the above formulas we see that $u^{r_i} = u^{e'}$ and
$h_i$ are both divisible by $u$, while on the other hand $u^{s_i} = 1$. We
deduce that $F(w_{i-1}) = 0$ and $V(w_i) = c^{-1} a_i^{-1}
w_{i-1}$. Computing along similar lines, it is easy to check the following four
cases.
\begin{enumerate}
\item $i-1\in J,i\in J$. Then $F(w_{i-1}) = 0$ and $V(w_i) = c^{-1} a_i^{-1}
w_{i-1}$.
\item $i-1\notin J,i\notin J$. Then $F(w_{i-1})=b_{i}w_{i}$, $V(w_{i})=0$.
\item\label{item: interesting case} $i-1\in J$, $i\notin J$. Then $F(w_{i-1})=h_{i}^0w_{i}$, $V(w_{i})=0$.
\item $i-1\notin J$, $i\in J$. Then $F(w_{i-1})=0$,
$V(w_{i})=-c^{-1} a_{i}^{-1}b_{i}^{-1}h_{i}^0w_{i-1}$.
\end{enumerate}
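To illustrate, consider case~\eqref{item: interesting case}, in which
$(i-1,i)$ is a transition, so that $r_i, s_i > 0$, while $w_{i-1}$ and $w_i$
are the images of $m_{i-1}$ and $n_i$ respectively. Reducing the formulas
above modulo $u$, and using that $h_i \equiv h_i^0 \pmod{u}$, we find
$$F(w_{i-1}) = \overline{a_i u^{r_i} m_i + h_i n_i} = h_i^0 w_i, \qquad
V(w_{i}) = c^{-1}\,\overline{b_i^{-1} u^{r_i} n_{i-1}} = 0,$$
where the bars denote reduction modulo $u$; the remaining cases are checked
in the same way.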
In particular, if $i\in J$ then $F(w_{i-1})=0$, while if
$i\notin J$ then $V(w_{i})=0$.
\end{proof}
Since ${\mathcal C}^{\tau,\operatorname{BT}}$ is flat over ${\mathcal O}$ by Theorem~\ref{cor: Kisin
moduli
consequences of local models},
it follows from Lemma~\ref{lem: maps to gauge stack as Cartier
divisors} that the natural morphism ${\mathcal C}^{\tau,\operatorname{BT}} \to {\mathcal G}_{\eta}$
is determined by an $f$-tuple of effective Cartier divisors $\{{\mathcal D}_j\}_{0 \le j < f}$
lying in the special fibre ${\mathcal C}^{\tau,\operatorname{BT},1}$.
Concretely,
${\mathcal D}_j$ is the zero locus of~ $X_j$, which is the zero locus
of~$F:D_{\eta,j}\to D_{\eta,j+1}$.
The zero locus of $Y_j$ (which is the zero locus of~$V:D_{\eta,j+1}\to
D_{\eta,j}$) is another
Cartier divisor ${\mathcal D}_j'$.
Since ${\mathcal C}^{\tau,\operatorname{BT},1}$ is reduced,
we conclude that each of ${\mathcal D}_j$ and ${\mathcal D}_j'$ is simply a union of irreducible components
of ${\mathcal C}^{\tau,\operatorname{BT},1}$, each component appearing precisely once in
precisely one of either ${\mathcal D}_j$ or ${\mathcal D}_j'$.
\begin{prop}
\label{prop:Dieudonne divisors}
${\mathcal D}_j$ is equal to the union of the irreducible components~$\overline{{\mathcal C}}(J)$ of
${\mathcal C}^{\tau,\operatorname{BT},1}$ for those $J$ that contain
$j+1$.
\end{prop}
\begin{proof}
Lemma~\ref{lem:Dieudonne modules} shows
that if $j+1\in J$, then $X_j=0$, while
if $j+1\notin J$, then $Y_j=0$. In the latter case, by an inspection
of case~\eqref{item: interesting case} of the proof of Lemma~\ref{lem:Dieudonne modules}, we have
$X_j=0$ if and only if $j\in J$
and
$h_{j+1}^0=0$. Since~$h_{j+1}^0$ does not vanish identically on an
irreducible component, we see that the irreducible components on which $X_j$
vanishes identically are precisely those for which $j+1\in J$, as
claimed.
\end{proof}
\begin{thm}
\label{thm: components of C}The algebraic stack~${\mathcal C}^{\tau,\operatorname{BT},1}$
has precisely $2^f$ irreducible components, namely the irreducible substacks~$\overline{{\mathcal C}}(J)$.
\end{thm}
\begin{proof}
By Corollary~\ref{cor: the C(J) are the components}, we need only show
that if~$J\ne J'$ then $\overline{{\mathcal C}}(J)\ne\overline{{\mathcal C}}(J')$; but this is immediate
from Proposition~\ref{prop:Dieudonne divisors}.
\end{proof}
\renewcommand{\theequation}{\Alph{section}.\arabic{subsection}}
\section{Introduction}
Most multiagent systems rely intrinsically on collaboration among agents in order to accomplish a joint task.
Collaboration however means that exchange of information among the agents cannot be dispensed with.
If the information is sensitive, then questions like respecting the privacy of the individual agents naturally rise.
Several approaches exist to address this {\em conundrum} of exchanging information without revealing it.
One approach is called differential privacy \cite{Dwork:2006:DP:2097282.2097284,Dwork:2014:AFD:2693052.2693053} and consists, roughly speaking, in corrupting the information being transmitted with a noise from an appropriate distribution so that an observer accessing the transmitted signals can only reconstruct the original data up to a prespecified precision level.
Another approach relies on cryptography.
Encrypted messages can be exchanged among the agents in various ways, e.g. through trusted third parties \cite{Lazzeretti6855039}, obfuscation \cite{Ambrosin:2017:OOB:3155100.3137573}, or through distributed cryptography schemes \cite{Ruan19}.
In these approaches the messages from each agent (corrupted with noise or encrypted) are typically exchanged through a communication graph and hence they are available to the other agents of the network.
Only the protection mechanism (noise source or cryptographic scheme) is kept private by each agent.
Both approaches have been recently used for multiagent dynamical systems \cite{Cortes7798915,Hale8031339,Huang:2012:DPI:2381966.2381978,NOZARI2017221,LeNy6606817,Ruan19,Wang7833044}.
In this case the information to keep private is typically the initial state of the agents.
A problem that is often studied in this context is the consensus problem, because it can be used as a basic building block in many distributed algorithms in database computations, sensor fusion, load balancing, clock synchronization, etc.
Dynamically, a consensus scheme consists of a stable system in which the final value reached asymptotically is the (weighted) mean of the initial conditions of the agents.
A privacy protected consensus should render this value available to all agents while not disclosing the initial conditions themselves to the other agents.
For instance, differentially private average consensus schemes are proposed in \cite{GUPTA20179515,Huang:2012:DPI:2381966.2381978,NOZARI2017221}.
Clearly the addition of noise impacts also the performances of the consensus algorithm: convergence to the true value might be missing \cite{Huang:2012:DPI:2381966.2381978} or be guaranteed only in expectation \cite{NOZARI2017221}.
Many other variants are possible: for instance in \cite{He2019Consensus,Manitara6669251,Mo7465717,Rezazadeh2018Privacy}, a non-stochastic perturbation is injected at the nodes, with the constraint that the sum (or integral) over time vanishes.
A cryptography-based approach requires instead one or more layers of data encryption technology which must themselves be kept secure and protected \cite{Lazzeretti6855039,Ruan19}.
Other system-oriented approaches to privacy protection in distributed computations appear e.g. in \cite{Alaeddini7963642,Duan7402925,FAROKHI2016254,Kia:RNC3178,Liu2019Dynamical,Monshizadeh19Plausible,Pequito7039593,XUE2014852}.
The aim of this paper is to propose a conceptually different framework for privacy preservation of the initial states of multiagent dynamics, inspired by system-theoretic considerations.
Our framework is exact, and is developed for continuous-time dynamical systems. It relies on what we call {\em output masks}, i.e., local (in the sense of ``agent-local'', that is, decided and implemented independently by each agent) time-varying transformations of the states to be transmitted to the neighboring nodes, whose functional form and/or numerical parameters are kept hidden to the other agents.
In the privacy literature, the use of masking maps is widespread. For instance, non-invertible maps are used in homomorphic or semi-homomorphic encryption \cite{Kogiso,FAROKHI201713,Lazzeretti6855039,Ruan19}, as well as in secure wiretap channels \cite{Wiese2016Secure}.
In the present context, output masks are used to offset the initial condition in a way such that an eavesdropping (curious but not malicious) agent cannot reconstruct it, neither directly nor using a model of the system.
In fact, even when an eavesdropper has knowledge of the vector field used by an agent, reconstruction of the initial state of that agent requires setting up a state observer, which in turn requires identifying the functional form and the numerical parameters of the agent's output mask.
In the paper this joint ``system identification'' and ``initial state detection'' problem is called {\em discernibility}, and conditions are given that render the initial state indiscernible.
The approach we follow (offsetting the initial condition) is somewhat related to \cite{He2019Consensus,Mo7465717,Rezazadeh2018Privacy}. However, our use of output masks enables us to carry out a thorough analysis of the dynamical properties of the masked system, which is novel and insightful of the implications of preserving privacy on the dynamics.
When the original unmasked system is globally exponentially stable, or perhaps exponentially stable on ``slices'' of the state space if there is a continuum of equilibria, as in the consensus problem, we show in the paper that, under the assumption that no agent has an in-neighborhood that covers that of another agent \cite{Rezazadeh2018Privacy,Mo7465717}, the masked multiagent system globally uniformly converges to the same attractor as the unmasked system while guaranteeing the privacy of the initial conditions at any level of precision.
The price to pay for guaranteeing privacy is that the masked system is time-varying and has no fixed points.
However, as long as the output masks are constructed to converge asymptotically to the unmasked state, the masked time-varying system has the original system as its limit system \cite{Artstein1976,ARTSTEIN1977184}.
When the unmasked system is autonomous, the resulting masked time-varying system is a case of a so-called asymptotically autonomous system \cite{Artstein1976,Markus1956}.
In spite of the indiscernibility of the initial conditions, the asymptotic collapse of the masked dynamics to the original dynamics guarantees that the distributed computation is carried out correctly anyway.
Clearly, dealing with a distributed computation representable as a converging dynamical system is a key prerequisite of our method, hence we refer to it as {\em dynamical privacy}.
The system-theoretical framework for dynamical privacy developed in this paper is for continuous-time multiagent dynamics. Unlike \cite{Rezazadeh2018Privacy}, where a similar setting is chosen, we do not require the time integrals of the perturbations to be vanishing asymptotically, which gives us more freedom in the choice of the masks and leads to a framework applicable to a broad range of distributed multiagent scenarios.
In the paper we investigate the effect of output masks on three different case studies: a globally exponentially stable nonlinear system, an average consensus problem, and a system of diffusively coupled higher order ODEs achieving pinning synchronization \cite{Chen4232574,Yu-doi:10.1137/100781699,ZHOU2008996}.
In all three cases a privacy preserving version of the system based on output masks is shown to have the equilibrium point of the unmasked system as unique attractor.
However, as the masked system lacks fixed points, it cannot be stable at the attractor.
This behavior is designed in purpose.
Think for instance of a situation in which the initial conditions are all in a neighborhood of the (say, globally stable) equilibrium point of the unmasked system.
If the masked system is stable around that point, its trajectories remain confined in a neighborhood of the equilibrium for all times, leading to an approximate disclosure of the initial states.
In order to avoid such situations, a masked system cannot preserve neighborhoods of its attractor, or, in other words, the attractor cannot be also an equilibrium point.
To achieve this, our output masks have to be inhomogeneous in the state variables.
Such structure is reminiscent of the additive noise used e.g. in differential privacy.
Technically, to show global attractivity in the masked system (in the complement of the agreement subspace for consensus-like problems), we use Lyapunov arguments.
The Lyapunov function of the unmasked system is shown to lead to Lyapunov derivatives which are in general sign indefinite, but upper bounded by terms that decay to $0$ as $ t\to \infty $ \cite{MuASJC}.
The reasoning is fundamentally different from those used in stability analysis of time-varying systems \cite{Aeyels701102,Lee2001,Loria1393135}, but somewhat related to constructions used in input-to-state stability \cite{SONTAG1995351} and in the stability analysis of nonlinear systems in presence of additive exponentially decaying disturbances \cite{Sussmann1991Peaking}.
In particular, our masked system has a so-called converging-input converging-state property \cite{Sontag2003RemarkCICS}.
Boundedness of its trajectories is imposed by choosing Lyapunov functions with globally bounded gradients \cite{Sontag2003Example,Teel2004Examples}.
The argument is reminiscent of those used in cascade systems \cite{CHAILLET2008519,PANTELEY1998Global,saberi1990global} or in observer-based nonlinear control
\cite{Arcak2001Observer}.
While the importance of initial conditions is well-known in problems such as average consensus (the final value changes with the initial condition, hence privacy questions are self-evident), in the paper we show that similar privacy issues may arise also in other cases in which the unmasked system is globally exponentially stable.
In particular we show that in continuous-time Friedkin-Johnsen models of opinion dynamics \cite{PROSKURNIKOV201765}, the value of the equilibrium point is also a function of the initial conditions, because an inhomogeneous term, depending on the initial conditions, is added to an asymptotically stable linear system.
Clearly this is a context in which non-disclosure of the initial states could be of strong relevance.
The case of pinned synchronization is instead an example of an unmasked system which is time-varying (it depends on the pinning exosystem \cite{Yu-doi:10.1137/100781699,ZHOU2008996}).
Our privacy protection framework applies also to this case, the only difference being that the limit system of the masked system is itself time-varying.
The rest of the paper is organized as follows: a few preliminary results are outlined in Section~\ref{sec:prelim}, while the dynamical privacy problem and the properties of the output masks are formulated in Section~\ref{sec:prob-form}. In Section~\ref{sec:glob-as-stable} the case of a globally exponentially stable unmasked system (and the related case of Friedkin-Johnsen opinion dynamics model) is discussed. Sections~\ref{sec:av-consensus} and~\ref{sec:synchro} deal with privacy preservation respectively for the average consensus problem and for a pinning synchronization problem.
The proofs of all results are gathered in the Appendix.
In the conference version of this paper, \cite{Altafini2019Dynamical}, only the average consensus problem of Section~\ref{sec:av-consensus} is discussed. The material of Sections~\ref{sec:glob-as-stable} and~\ref{sec:synchro} is presented here for the first time.
\section{Preliminaries}
\label{sec:prelim}
A continuous function $ \alpha \, : \, [0, \infty) \to [ 0, \, \infty) $ is said to belong to class $ \mathcal{K}_\infty $ if it is strictly increasing, $ \alpha(0) =0$, and $ \lim_{r\to\infty} \alpha(r) = \infty$.
Subclasses of $ \mathcal{K}_\infty $ which are homogeneous polynomials of order $ i$ will be denoted $ \mathcal{K}_\infty^i $: $ \alpha(r) = a r^i $ for some constant $ a>0$.
A continuous function $ \zeta \, : \, [0, \infty) \to [ 0, \, \infty) $ is said to belong to class $ \mathcal{L} $ if it is decreasing and $ \lim_{t\to \infty} \zeta(t) =0$.
In particular, we are interested in $ \mathcal{L} $ functions that are exponentially decreasing: $ \zeta(t) = a e^{-\delta t} $ for some $ a>0$ and $ \delta>0$. We shall denote such subclass $ \mathcal{L}^e \subset \mathcal{L} $.
A continuous function $ \beta \, : \, [0, \infty) \times [0, \, \infty) \to [ 0, \, \infty) $ is said to belong to class $ \mathcal{KL}_\infty^{i,e} $ if the mapping $ \beta(r, \, t ) $ belongs to class $ \mathcal{K}_\infty^i $ for each fixed $ t $ and to class $ \mathcal{L}^e $ for each fixed $ r$, i.e., $ \beta(r, \, t) = a r^i e^{-\delta t }$ for some $ a>0$ and $\delta>0$.
Consider
\begin{equation}
\dot x = g(t, \, x ), \qquad x(t_o) =x_o
\label{eq:ode_f(t,x)}
\end{equation}
where $ g\, : \, \mathbb{R}_+ \times \mathbb{R}^n \to \mathbb{R}^n$ is Lipschitz continuous in $ x$, measurable in $ t$, and such that for each $ x_o \in \mathbb{R}^n $ and each $ t_o \in \mathbb{R}_+ $ the solution of \eqref{eq:ode_f(t,x)}, $ x(t, \, x_o ) $, exists in $[0, \, \infty)$. A point $ x^\ast \in \mathbb{R}^n $ is an equilibrium point of \eqref{eq:ode_f(t,x)} if $ g(t, \, x^\ast ) =0 $ for a.e.\footnote{almost every, i.e., except for at most a set of Lebesgue measure $0$.} $ t \geq t_o$.
A point $ x^\ast \in \mathbb{R}^n $ is {\em uniformly globally attractive} for \eqref{eq:ode_f(t,x)} if for each $ \nu >0 $ there exists $ T = T(\nu) > 0 $ such that for each solution $ x(t, x_o ) $ of \eqref{eq:ode_f(t,x)} it holds that $ \| x(t, \, x_o ) - x^\ast \| < \nu$ for each $ t > t_o + T $, each $ x_o \in \mathbb{R}^n $ and each $ t_o \geq 0$.
In particular, if $ x^\ast $ is a uniform global attractor for \eqref{eq:ode_f(t,x)}, then as $ t\to \infty $ all trajectories $ x(t, x_o) $ converge to $ x^\ast $
uniformly in $ t$ for all $ t_o \geq 0 $ and $ x_o $.
A point $ x^\ast $ can be attractive for \eqref{eq:ode_f(t,x)} without being an equilibrium of \eqref{eq:ode_f(t,x)} (we will use this fact extensively in the paper).
Given \eqref{eq:ode_f(t,x)}, denote $ g_s(t, \, x ) $ the translate of $ g(t, \, x)$: $ g_s(t, \, x ) = g(t+s, \, x )$.
A (possibly time-dependent) system $ \dot x = \tilde g (t, \, x) $ is called a {\em limit system} of \eqref{eq:ode_f(t,x)} if there exists a sequence $ \{ s_k\} $, $ s_k \to \infty $ as $ k\to \infty$, such that $ g_{s_k}(t, \, x ) $ converges to $ \tilde g (t, \, x)$ \cite{Artstein1976}.
An existence condition for a limit system $ \tilde g(t, \, x) $ is given in Lemma~1 of \cite{Lee2001}: when $ g(t, \, x ) $ is a uniformly continuous and bounded function, there exist increasing and diverging sequences $ \{ s_k \} $ such that, on compact subsets of $ \mathbb{R}^n$, $ g_{s_k} (t, \, x ) $ converges uniformly to a continuous limit function $ \tilde g(t, \, x) $ on every compact subset of $ [0, \, \infty)$, as $ k\to \infty$.
In general the limit system may not be unique nor time-invariant.
However, when it exists unique, then it must be autonomous \cite{Artstein1976,rouche2012stability} because all translates $ g_{s+s'} (t, \, x ) $ must have themselves a limit system hence the latter cannot depend on time.
The time-varying system \eqref{eq:ode_f(t,x)} is called {\em asymptotically autonomous} in this case.
The $ \omega$-limit set of $ x(t, \, x_o) $, denoted $ \Omega_{x_o} $, consists of all points $ x^\ast $ such that a sequence $ \{ t_k\}$, with $ t_k \to \infty $ when $ k\to\infty $, exists for which $ \lim_{k\to\infty} x(t_k, \, x_o) = x^\ast $.
For time-varying systems, if a solution is bounded then the corresponding $ \Omega_{x_o} $ is nonempty, compact and approached by $ x(t, x_o)$.
However, it need not be invariant.
The invariance property may hold for limit systems, although not necessarily (it may fail even for asymptotically autonomous systems, see \cite{Artstein1976}).
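For instance, the scalar system $ \dot x = -x + e^{-t} $ is asymptotically autonomous: every translate $ g_s(t, \, x ) = -x + e^{-(t+s)} $ converges, uniformly on compacts, to $ \tilde g (x) = -x $, so its (unique) limit system is $ \dot x = -x $.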
\section{Problem formulation}
\label{sec:prob-form}
Consider a distributed dynamical system on a graph with $n$ nodes:
\begin{equation} \dot x = f(x) ,\qquad x(0) = x_o ,
\label{eq:model}
\end{equation}
where $ x= \begin{bmatrix} x_1 & \ldots & x_n \end{bmatrix}^T \in \mathbb{R}^n $ is a state vector and $ f =[ f_1\, \ldots f_n ]^T : \, \mathbb{R}^n \to \mathbb{R}^n $ is a Lipschitz continuous vector field. Standing assumptions in this paper are that \eqref{eq:model} possesses a unique solution continuable on $ [0, \, \infty ) $ for all $ x_o \in \mathbb{R}^n $ and that information can be exchanged only between first neighbors on the graph, i.e.,
\begin{equation}
\dot x_i = f_i ( x_i, \, x_j, \, j \in \mathcal{N}_i ) , \qquad i=1, \ldots, n
\label{eq:model_i}
\end{equation}
with $ \mathcal{N}_i $ the in-neighborhood of node $ i$.
Furthermore, to avoid trivial situations, we impose that $ \mathcal{N}_i $ is the ``essential neighborhood'' of agent $i$ \cite{Mo7465717}, i.e.,
\begin{equation}
\begin{split}
f_i(x_i, \, x_j, \, j \in \tilde{\mathcal{N}}_i ) \neq f_i (x_i, \, x_j, \, j \in \mathcal{N}_i ) &\quad \forall \tilde{\mathcal{N}}_i \subsetneq \mathcal{N}_i, \\
& \forall i=1\ldots, n.
\end{split}
\label{eq:non-trivial-neigh}
\end{equation}
We are interested in cases in which the system \eqref{eq:model_i} has a globally exponentially stable equilibrium point, i.e., $ \lim_{t\to \infty } x(t) = x^\ast $ for all $ x_o$, but also in cases in which the presence of a conservation law (as in the consensus problem) leads to exponential stability on some submanifold depending on the initial conditions, i.e., $ \lim_{t\to \infty} x(t) = x^\ast(x_o) $.
The {\em privacy preservation problem} consists in using a system like \eqref{eq:model} to perform the computation of $ x^\ast $ in a distributed manner, while avoiding divulging the initial condition $ x_o $ to the other nodes.
Clearly this cannot be achieved directly on the system \eqref{eq:model}, which is based on exchanging the values $ x_i $ between the nodes.
It can, however, be achieved if we insert a mask on the value $ x(t) $ which preserves convergence to $ x^\ast$, at least asymptotically.
The masks we propose in this paper have the form of time-varying output maps.
\subsection{Output masks}
Consider a continuously differentiable time-varying output map
\begin{equation}
\begin{split}
h \, : \, \mathbb{R}_+ \times \mathbb{R}^n \times \mathbb{R}^m & \to \mathbb{R}^n \\
(t, \, x, \, \pi ) & \mapsto y(t) = h(t, x(t) , \pi)
\end{split}
\label{eq:output1}
\end{equation}
where $ y = \begin{bmatrix} y_1 & \ldots & y_n \end{bmatrix}^T \in \mathbb{R}^n $ is an output vector of the same size as $x$, and $ \pi\in \mathbb{R}^{m} $ is a vector of parameters splittable into $n$ subvectors (not necessarily of the same dimension), one for each node of the network: $ \pi =\{ \pi_1, \ldots , \pi_n \} $.
In the following we refer to $h(t, x(t) , \pi)$ as an {\em output mask} and to $ y $ as a {\em masked output}.
The state $ x$ of the system is first masked into $y$ and then sent to the first out-neighbors on the graph.
The original system \eqref{eq:model} can therefore be modified into the following {\em masked system:}
\begin{subequations}
\begin{align}
\dot x & = f(y) \label{eq:model_xy_a}\\
y & = h(t, x, \pi) .\label{eq:model_xy_b}
\end{align}
\label{eq:model_xy}
\end{subequations}
Denote $ y(t, x_o) $, of components $ y_i(t, x_{o,i})$, $ i=1, \ldots, n$, the output trajectory of \eqref{eq:model_xy} from the initial state $ x_o$, of components $ x_{o,i}$.
We assume in what follows that the vector field $ f(\cdot) $ is publicly known (i.e., each agent knows the shape of the functions $ f_1(\cdot), \ldots, f_n (\cdot) $) and that each node knows the output trajectories $ y_i(t, x_{o,i})$ of its in-neighbors. The state $ x$ and the output mask $ h(t, x, \pi) $ (functional form plus values of the parameters $ \pi$) are instead private to each agent, as explained in more detail next.
\begin{definition}
\label{def:local_mask}
A $ C^1 $ output map $h$ is said a {\em local mask} if it has components that are local, i.e.,
\begin{enumerate}
\item[P1:] $ h_i(t, x, \pi) = h_i(t, x_i, \pi_i ) \qquad i =1, \ldots, n$.
\eenu
\end{definition}
The property of locality guarantees that the output map $ h_i $ can be independently decided by node $i$.
Both the functional form chosen for $ h_i(\cdot)$ and the numerical value of the parameters $ \pi_i $ can therefore remain hidden to the other agents.
The output mask also needs to avoid mapping neighborhoods of a point $ x^\ast $ of \eqref{eq:model} (typically an equilibrium point) into themselves.
For that, we introduce the following definition.
\begin{definition}
A $ C^1 $ output map $ h $ is said to {\em preserve neighborhoods} of a point $ x^\ast $ if, for all small $ \epsilon >0$, $ \|x_o- x^\ast \| <\epsilon \;\; \Longrightarrow \;\; \|h (0, x_o, \pi)- x^\ast \| < \epsilon$. It is said {\em not to preserve neighborhoods} otherwise.
\end{definition}
These notions are used in the following definition.
\begin{definition}
\label{def:privacy_mask}
A $ C^1 $ output map $h$ is said a {\em privacy mask} if it is a local mask and in addition
\begin{enumerate}
\item[P2:] \label{def:mask1} $ h_i(0, x_i, \pi_i) \neq x_i $ $ \forall \; x_i \in \mathbb{R}^n$, $ i=1, \ldots, n$;
\item[P3:] \label{def:mask3} $ h(t,x, \pi) $ does not preserve neighborhoods of any $ x \in \mathbb{R}^n$;
\item[P4:] \label{def:mask4} $ h_i(t, x_i, \pi_i) $ is strictly increasing in $ x_i$ for each fixed $ t$ and $ \pi_i$, $ i=1, \ldots, n$.
\eenu
\end{definition}
Property P2 means that $ h_i (\cdot) $ has no fixed points.
Property P4
resembles the definition of a $ \mathcal{K}_\infty $ function, but it is in fact more general: $ x=0 $ is not a fixed point of $ h $ for any finite $ t$, and $ h$ need not be nonnegative in $ x$.
Monotonicity of $ h$ in $ x $ (for each fixed $ \pi$) follows from Property P4 combined with P1. It implies that $ h $ is a bijection in $ x $ for each fixed $ t$ and $ \pi$, although one that does not preserve the origin.
This is meant to avoid that the output mask introduces undesired behavior in the system, like spurious equilibrium points.
In many cases, it will be necessary to impose that the privacy mask converges asymptotically to the true state, i.e., that the perturbation induced by the mask is vanishing.
\begin{definition}
\label{def:vanishing_privacy_mask}
The output map $h$ is said a {\em vanishing privacy mask} if it is a privacy mask and in addition
\begin{enumerate}
\item[P5:] \label{def:mask5}$ | h_i(t, x_i, \pi_i ) - x_i | $ is decreasing in $ t$ for each fixed $ x_i$ and $ \pi_i$, and $ \lim_{t\to \infty } h_i(t, x_i, \pi_i ) = x_i $ for each fixed $ \pi_i$, $ i=1, \ldots, n$.
\eenu
\end{definition}
The difference between the true initial condition $ x_{o,i} $ and the masked output $ h_i(0, x_{o,i}, \pi_i ) $ can be used to quantify the level of privacy for agent $i$. More formally, if $ h_i (\cdot) $ is a privacy mask for agent $i$, we denote $ \rho_i (x_{o,i} ) = | h_i(0, x_{o,i}, \pi_i ) - x_{o,i} | $ the privacy metric of agent $i$ relative to the initial condition $ x_{o,i}$, and $ \rho(x_o) = \min_{i=1, \ldots, n} \rho_i(x_{o,i} ) $ the privacy metric of the system relative to the initial condition $ x_o $.
\subsection{Examples of output masks}
The following are examples of output masks.
\paragraph*{Linear mask}
\begin{equation}
h_i(t, x_i, \pi_i ) = (1 + \phi_i e^{-\sigma_i t } ) x_i , \qquad \phi_i \geq 0, \quad \sigma_i > 0
\label{eq:ex-linear-mask}
\end{equation}
(i.e., $ \pi_i = \{ \phi_i, \, \sigma_i \}$). This local vanishing mask is not a proper privacy mask, since $ h_i(0, 0, \pi_i ) =0 $, i.e., the origin is not masked. Notice that all homogeneous maps share this problem (in particular, the origin never escapes any of its neighborhoods).
\paragraph*{Additive mask}
\begin{equation}
h_i(t, x_i, \pi_i ) = x_i + \gamma_i e^{-\delta_i t }, \qquad \delta_i >0 , \quad \gamma_i \neq 0
\label{eq:ex-additive-mask}
\end{equation}
(i.e., $ \pi_i = \{ \delta_i, \, \gamma_i \}$) is a vanishing privacy mask.
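Indeed, $ h_i(0, x_i, \pi_i ) - x_i = \gamma_i \neq 0 $ (P2), $ | h_i(t, x_i, \pi_i ) - x_i | = |\gamma_i| e^{-\delta_i t } $ is decreasing in $ t$ and vanishes asymptotically (P5), and since the offset $ \gamma_i e^{-\delta_i t} $ does not depend on the state, the map $ h(0, \cdot, \pi) $ moves every point $ x $ at distance $ \| \gamma \| $ from itself, so that no neighborhood of radius $ \epsilon < \| \gamma \| $ is preserved (P3); P1 and P4 are immediate.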
\paragraph*{Affine mask}
\begin{equation}
h_i(t, x_i, \pi_i ) =c_i (x_i + \gamma_i e^{-\delta_i t }), \quad c_i > 1, \quad \delta_i>0, \quad \gamma_i\neq 0
\label{eq:ex-affine-mask}
\end{equation}
(i.e., $ \pi_i = \{ c_i, \, \delta_i, \, \gamma_i \}$) is also a privacy mask.
Since $ \lim_{t\to\infty} h_i (t, \, x_i, \, \pi_i ) = c_i x_i $, it is however not vanishing.
\paragraph*{Vanishing affine mask}
\begin{equation}
\begin{split}
h_i(t, x_i, \pi_i ) = & (1 + \phi_i e^{-\sigma_i t } ) (x_i + \gamma_i e^{-\delta_i t }), \\
& \phi_i > 0, \quad \sigma_i >0, \quad \delta_i>0, \quad \gamma_i\neq 0
\end{split}
\label{eq:van-aff-mask-scalar}
\end{equation}
(i.e., $ \pi_i = \{ \phi_i, \, \sigma_i , \, \delta_i, \, \gamma_i \}$). This privacy mask is also vanishing.
Notice that in vector form, assuming all nodes adopt it, the vanishing affine mask can be expressed as
\begin{equation}
h(t, \, x, \pi) = (I + \Phi e^{-\Sigma t} ) (x + e^{-\Delta t } \gamma)
\label{eq:output-mask-affine-II}
\end{equation}
where $ \Phi = {\rm diag}( \phi_1, \ldots, \phi_n ) $, $ \Sigma= {\rm diag}( \sigma_1, \ldots, \sigma_n ) $, $ \Delta = {\rm diag}( \delta_1, \ldots, \delta_n ) $, and $ \gamma = \begin{bmatrix} \gamma_1 & \ldots & \gamma_n \end{bmatrix}^T $.
The following proposition shows that for these masks the level of privacy (as defined by the metric $ \rho$) can be made arbitrarily large if each agent chooses $ \pi_i $ in an appropriate way.
\begin{proposition}
\label{prop:metric}
Given $ \lambda>0 $, for each of the privacy masks \eqref{eq:ex-additive-mask}, \eqref{eq:ex-affine-mask} and \eqref{eq:van-aff-mask-scalar} the parameters $ \pi_i$ can be chosen locally by each agent $ i$, $ i =1, \ldots, n $, so that $ \rho(x_o ) > \lambda $ for any $ x_o$.
\end{proposition}
By definition of $ \rho$, the choice of parameters $ \pi_i $ in Proposition~\ref{prop:metric} is compatible with the privacy of the maps $ h_i(\cdot)$ (each agent knows its own $ x_{o,i}$, hence can compute $ \rho_i (x_{o,i} ) $ without disclosing $ x_{o,i}$).
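For instance, for the additive mask \eqref{eq:ex-additive-mask} one has $ \rho_i (x_{o,i}) = |\gamma_i| $ independently of $ x_{o,i}$, so it suffices that each agent picks $ |\gamma_i| > \lambda $; for the affine mask \eqref{eq:ex-affine-mask}, $ \rho_i(x_{o,i}) = | (c_i - 1) x_{o,i} + c_i \gamma_i | $, and agent $ i$, which knows its own $ x_{o,i}$, can always choose $ \gamma_i $ so that this quantity exceeds $ \lambda$.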
\subsection{Dynamically private systems}
Consider the system \eqref{eq:model_xy}, rewritten here for convenience in components ($i =1, \ldots, n$):
\begin{subequations}
\begin{align}
\dot x_i & = f_i(y_i, \, y_k , \, k \in \mathcal{N}_i ) , \qquad x_i(0) = x_{o,i} \label{eq:model_xyi_a} \\
y_i & = h_i(t, \, x_i, \, \pi_i ) . \label{eq:model_xyi_b}
\end{align}
\label{eq:model_xyi}
\end{subequations}
We would like to understand when an eavesdropping agent $j$ can violate the privacy of agent $ i$ by estimating its initial condition $ x_{o,i}$.
Recall that we are assuming that the agent $ j$ knows:
\begin{enumerate}
\item[K1:] \label{list_known1} the form of the vector field $ f_i (\cdot ) $,
\item[K2:] \label{list_known2} the output trajectories of its incoming neighborhood: $ y_k(t, x_{o,k}) $, $ k \in \{ \mathcal{N}_j \cup \{ j \} \} $, $ t \in [t_o, \, \infty)$,
\eenu
while instead the following are unknown for agent $j$:
\begin{enumerate}
\addtocounter{enumi}{2}
\item[U1:] \label{list_unknown2} the form of the output mask $ h_i (\cdot) $ and the numerical values of the parameters $ \pi_i$,
\item[U2:] the output trajectories not in its incoming neighborhood: $ y_k(t, x_{o,k}) $, $ k \notin \{ \mathcal{N}_j \cup \{ j \} \} $.
\eenu
Because of item~U1 above, the problem of estimating $ x_{o,i}$ from the output in \eqref{eq:model_xyi} cannot be cast as a state observability problem, but rather it has to be treated as a joint system identification + observability problem. To characterize this unusual situation we introduce a new concept, discernibility.
\begin{definition}
The initial condition of agent $ i$, $ x_{o,i} $, is said {\em discernible for agent $ j$} ($j\neq i$) if agent $ j$ can estimate $ x_{o,i}$ from the knowledge of K1 and K2.
It is said {\em indiscernible for agent $j$} otherwise.
An initial condition $ x_o$ is said {\em indiscernible} if all of its components $ x_{o,i}$ are indiscernible for all agents $ j \in \{ 1, \ldots, n \} \setminus \{ i\}$.
\end{definition}
Indiscernibility refers to the impossibility of solving the joint identification+observation problem of estimating $ x_{o,i}$ in \eqref{eq:model_xyi}. It can be imposed using the properties of a privacy mask together with the following Assumption~\ref{ass1} (see \cite{Rezazadeh2018Privacy} and \cite{Mo7465717}, Corollary 1).
\begin{assumption}
\label{ass1} {\rm (No completely covering neighborhoods)}
The system \eqref{eq:model_xy} is such that $ \{ \mathcal{N}_i\cup \{ i \} \} \nsubseteq \{ \mathcal{N}_j \cup \{ j \}\} $, $ \forall \; i,\, j =1, \ldots, n$, $ i \neq j$.
\end{assumption}
Assumption~\ref{ass1} guarantees that no node has complete information about what is going on at the other nodes.
This is a condition on the topology of the graph, and therefore a {\em system} property, rather than simply a property of well-conceived output maps.
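For instance, Assumption~\ref{ass1} holds on a directed cycle, where $ \mathcal{N}_i = \{ i-1 \} $ and no set $ \mathcal{N}_i \cup \{ i \} $ is contained in another one, while it fails on the complete graph, where $ \mathcal{N}_j \cup \{ j \} = \{ 1, \ldots, n \} $ for every $ j$.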
Combining indiscernibility with privacy of the output masks, we can formulate the following definition.
\begin{definition}
\label{def:priv-sys}
The system \eqref{eq:model_xy} is called a {\em dynamically private} version of \eqref{eq:model} if
\begin{enumerate}
\item $ h $ is a privacy mask;
\item the solution of \eqref{eq:model_xy} exists, is unique on $ [0, \, \infty ) $, and is bounded $ \forall \; x_o \in \mathbb{R}^n$;
\item $ \lim_{t\to \infty} y(t) = \lim_{t\to\infty} x(t) $;
\item indiscernibility of the initial condition is guaranteed.
\eenu
\end{definition}
The next proposition relates indiscernibility to Assumption~\ref{ass1}.
\begin{proposition}
\label{prop:ass1:indiscern}
If the system \eqref{eq:model_xy} satisfies conditions~1-3 of Definition~\ref{def:priv-sys} and Assumption~\ref{ass1}, then it is a dynamically private version of \eqref{eq:model}.
\end{proposition}
\begin{remark}
From Proposition~\ref{prop:metric}, when a system is dynamically private with any of \eqref{eq:ex-additive-mask}, \eqref{eq:ex-affine-mask} and \eqref{eq:van-aff-mask-scalar} as mask, privacy can be made to hold at an arbitrary level of precision, i.e., given $ \lambda>0 $, $ \rho(x_o) >\lambda $ can be guaranteed for any $x_o$ only through local choices of the parameters $ \pi_i $ by each agent.
\end{remark}
The privacy property P3 of $ h(\cdot ) $ suggests that in a dynamically private system we cannot have equilibrium points and therefore we cannot talk about stability (of equilibria), while convergence of $ y(t)$ to $ x(t)$ suggests that as long as $ f(\cdot ) $ is autonomous, a dynamically private system is asymptotically autonomous with the unmasked system as limit system.
This can be shown to be always true if the output mask is vanishing.
\begin{proposition}
\label{prop:no-equil}
If \eqref{eq:model_xy} is a dynamically private version of \eqref{eq:model}, then it cannot have equilibrium points.
Furthermore, if $ h (\cdot ) $ is a vanishing privacy mask, then the system \eqref{eq:model_xy} is asymptotically autonomous with limit system \eqref{eq:model}.
\end{proposition}
The ``vanishing'' attribute of the second part of Proposition~\ref{prop:no-equil} is sufficient but not necessary. As we will see below, when \eqref{eq:model} is globally exponentially stable, the condition that the output mask must be vanishing can be dispensed with.
\section{Dynamical privacy in globally exponentially stable systems}
\label{sec:glob-as-stable}
{In this section we restrict ourselves to unmasked systems \eqref{eq:model} having a globally exponentially stable equilibrium point. Under Assumption~\ref{ass1}, any privacy mask (not necessarily vanishing) can guarantee privacy of the initial conditions. We will only show the simplest case of affine mask.
Since we rely on standard converse Lyapunov theorems, we also request \eqref{eq:model} to be globally Lipschitz. }
\begin{theorem}
\label{thm:glob-as-stab}
Consider the system \eqref{eq:model} with $ f\, : \, \mathbb{R}^n \to \mathbb{R}^n$ {globally} Lipschitz continuous, $ f(0) =0$, and the masked system \eqref{eq:model_xy} with the affine mask \begin{equation}
h(t,x,p)= C(x + e^{-\Delta t} \gamma),
\label{eq:glob-as-st-output-mask}
\end{equation}
$ C= {\rm diag}(c_1, \ldots, c_n )$, $ c_i > 1 $, $ \Delta = {\rm diag} (\delta_1, \ldots, \delta_n)$, $ \delta_i > 0$, and $ \gamma = \begin{bmatrix} \gamma_1 & \ldots & \gamma_n \end{bmatrix}^T $, $ \gamma_i\neq 0 $.
If Assumption~\ref{ass1} holds and the equilibrium $ x^\ast =0 $ is globally {exponentially} stable for \eqref{eq:model}, then $ x^\ast =0 $ is uniformly globally attractive for the masked system \eqref{eq:model_xy}. Furthermore, \eqref{eq:model_xy} is a dynamically private version of \eqref{eq:model}.
\end{theorem}
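For concreteness, the following minimal Python sketch evaluates the affine mask \eqref{eq:glob-as-st-output-mask}; the numerical ranges for $ C$, $ \Delta$ and $ \gamma $ are illustrative assumptions, chosen only to satisfy the constraints of the theorem.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 5
C = np.diag(1.0 + rng.uniform(0.1, 1.0, n))      # c_i > 1
delta = rng.uniform(0.5, 2.0, n)                 # delta_i > 0
gamma = rng.uniform(1.0, 2.0, n) * rng.choice([-1, 1], n)  # gamma_i != 0

def mask(t, x):
    # y = h(t, x) = C (x + exp(-Delta t) gamma)
    return C @ (x + np.exp(-delta * t) * gamma)

x0 = rng.standard_normal(n)
# Gap between masked and true initial condition (generically nonzero):
print(np.abs(mask(0.0, x0) - x0))
\end{verbatim}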
\begin{remark}
Even if \eqref{eq:model} has $ x^\ast =0 $ as equilibrium point, the masked system \eqref{eq:model_xy} does not, as can be seen from the expression \eqref{eq:doty} in the proof of Theorem~\ref{thm:glob-as-stab}.
This follows from the inhomogeneity of the output mask.
Since $ x^\ast =0 $ is not stationary, we cannot talk about stability of its neighborhoods.
Nevertheless, $ x^\ast $ remains an attractor for all trajectories of the system.
\end{remark}
The following corollary states that the dynamically private system is asymptotically autonomous with $ \omega$-limit set identical to that of the corresponding unmasked system.
\begin{corollary}
\label{cor:as-auton}
Under the assumptions of Theorem~\ref{thm:glob-as-stab}, the system \eqref{eq:model_xy} with the output mask \eqref{eq:glob-as-st-output-mask} is asymptotically autonomous with limit system
\begin{equation}
\dot x = f(C^{-1} x ) .
\label{eq:glob-as-st-limit}
\end{equation}
The $\omega $-limit set of each trajectory of \eqref{eq:model_xy} is given by $\{ 0 \} $ for each $ x_o \in \mathbb{R}^n$.
\end{corollary}
Notice that since the affine mask \eqref{eq:glob-as-st-output-mask} is not vanishing, \eqref{eq:glob-as-st-limit} differs from \eqref{eq:model} (yet $ x^\ast $ is the same).
\begin{remark}
The result of Theorem~\ref{thm:glob-as-stab} can be rephrased as a converging-input converging-state property \cite{Sontag2003RemarkCICS}: under the assumption of $ f$ (locally) Lipschitz continuous and $ x^\ast $ globally asymptotically stable, boundedness of the trajectories
is enough to guarantee that
$ x(t) \to 0 $ as $ t\to \infty$.
However, guaranteeing boundedness is a nontrivial task: a globally asymptotically stable system can be destabilized by an additive perturbation which is arbitrarily small in $ \mathcal{L}_1 $ norm \cite{Sontag2003Example}.
Similarly, a globally exponentially stable system with linear sector growth (as opposed to global Lipschitzianity) can be destabilized by arbitrarily small additive exponentially decaying disturbances \cite{Teel2004Examples,Astolfi2007Remark}.
The assumptions made in Theorem~\ref{thm:glob-as-stab} imply the boundedness of the gradient of the Lyapunov function, which in turn guarantees boundedness of the solutions.
\end{remark}
\begin{example}
\label{ex:globally-exp-stable}
Consider the following interconnected system with saturated nonlinearities
\begin{equation}
\dot x = -x + \kappa A \psi (x)
\label{eq:interconn-example}
\end{equation}
where the matrix $ A \geq 0\; $ (with zero diagonal) is a weighted adjacency matrix of spectral radius $ \rho(A)>0 $ describing the interactions among the agents and satisfying Assumption~\ref{ass1}, $ \kappa> 0 $ is a scalar coefficient, and $ \psi(x) = \begin{bmatrix} \psi_1(x_1) & \ldots & \psi_n(x_n) \end{bmatrix}^T $, $ \psi_i(x_i) = \tanh (x_i) $, is a vector of saturated sigmoidal functions depending only on the state of the sending node $ x_i $.
The system \eqref{eq:interconn-example} is used e.g. in \cite{FoAl2018} to describe collective distributed decision-making systems.
If we impose the condition $ \kappa< \frac{1}{\rho(A)} $, then $ x^\ast =0 $ is a globally exponentially stable equilibrium point for \eqref{eq:interconn-example}. In fact, in this case a simple quadratic Lyapunov function $ V= \frac{1}{2} \| x \|^2 $ leads to
\[
\dot V = - x^T x + \kappa x^T A \psi(x) \leq x^T ( -I + \kappa A ) x<0
\]
because $ \psi_i (x_i) $ obeys the sector inequality $ 0 \leq \psi_i(x_i)/x_i \leq 1 $, i.e., $ |\psi_i(x_i)| \leq |x_i| $.
Since the system is globally Lipschitz, Theorem~\ref{thm:glob-as-stab} is applicable to it if we choose an output mask like \eqref{eq:glob-as-st-output-mask}. Simulations for $ n= 100 $ are shown in Fig.~\ref{fig:exp-stable} for a privacy measure of $ \lambda=1 $.
Notice in panel (c) how the gap between $ x_{o,i} $ and $ y_{o,i} $ induced by this privacy level is clearly visible.
The initial conditions obey $ \rho_i (x_{o,i}) = |y_i(0) - x_i(0)|\geq 1$, but $ | y_i(t) - x_i(t) | $ necessarily decreases as $ t$ grows, and converges to 0 as $ t$ diverges, see panel (d).
\begin{figure*}[htb]
\begin{center}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_8a}}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_8b}}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_8c}}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_8e}}
\caption{Privacy-preserving globally exponentially stable system of Example~\ref{ex:globally-exp-stable}. (a): private state $x(t)$; (b): masked output $y(t)$; (c): initial conditions $ x(0) $ vs. $ y(0)$; (d): $ |y_i(t) - x_i(t)|$. The black dotted line in (d) represents $ \lambda $. The inset of panel (d) is a zoom in of the initial part.}
\label{fig:exp-stable}
\end{center}
\end{figure*}
\end{example}
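As a complement to the figures, here is a minimal simulation sketch of the masked version of \eqref{eq:interconn-example}, assuming (as in the masked systems used throughout the paper) that the vector field is evaluated at the masked output $ y$; the random graph and all numerical parameters are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 20
A = rng.uniform(0, 1, (n, n)) * (rng.random((n, n)) < 0.2)
np.fill_diagonal(A, 0.0)                         # zero-diagonal adjacency
kappa = 0.9 / np.abs(np.linalg.eigvals(A)).max() # kappa < 1/rho(A)

C = np.diag(1.0 + rng.uniform(0.1, 1.0, n))      # mask: c_i > 1
delta = rng.uniform(0.5, 2.0, n)                 # delta_i > 0
gamma = rng.uniform(1.0, 2.0, n) * rng.choice([-1, 1], n)

def mask(t, x):
    return C @ (x + np.exp(-delta * t) * gamma)

x = rng.standard_normal(n)
dt, T = 1e-3, 50.0
for step in range(int(T / dt)):                  # forward Euler integration
    y = mask(step * dt, x)
    x = x + dt * (-y + kappa * A @ np.tanh(y))   # masked dynamics: f evaluated at y
print(np.abs(x).max())  # x(t) approaches the attractor x* = 0
\end{verbatim}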
In Example~\ref{ex:globally-exp-stable} global exponential stability implies that the initial conditions are forgotten asymptotically. In these cases privacy protection might be considered less critical than when the equilibrium point is itself a function of the initial state, as it happens in the next sections.
\subsection{Application to continuous-time Friedkin-Johnsen model}
Let us consider a continuous-time Friedkin-Johnsen model (also known as Taylor model, see \cite{PROSKURNIKOV201765})
\begin{equation}
\dot x = - (L + \Theta ) x + \Theta x_o , \qquad x(0) =x_o,
\label{eq:FJ_cont}
\end{equation}
where $ L $ is an irreducible Laplacian matrix, and $ \Theta = {\rm diag}(\theta_1, \ldots, \theta_n ) $, $ \theta_i \in [0, \, 1 ] $, is a diagonal matrix of so-called susceptibilities, i.e., tendencies of the $ i$-th agent to remain attached to its own initial opinion $ x_{o,i}$.
The behavior of the system \eqref{eq:FJ_cont} is analyzed in \cite{PROSKURNIKOV201765}: when $ L $ is irreducible and some $ \theta_i \neq 0 $, it has a single equilibrium point $ x^\ast = (L+\Theta)^{-1} \Theta x_o
$ which is asymptotically stable for a solution starting in $ x_o $.
The system reduces to the usual consensus problem when $ \theta_i =0 $ $ \forall \, i $ (see Section~\ref{sec:av-consensus}).
Notice how in the affine model \eqref{eq:FJ_cont} the initial opinions (the initial condition of the system) also enter the vector field at time $ t$.
Hence protecting the privacy of the agents in \eqref{eq:FJ_cont} requires a `double mask', i.e., one needs to replace both $ x (t) $ and $ x_o $ with suitably masked versions $ y(t)$ and $ y_o = y(0)$ (since $ y_o $ is transmitted to the neighboring agents, it can be memorized and used whenever needed).
Denoting $ z = x - x^\ast = x - (L+\Theta)^{-1} \Theta x_o $, then \eqref{eq:FJ_cont} is expressed in $z $ as the linear system
\begin{equation}
\dot z = -(L + \Theta) z ,
\label{eq:FJ_cont-z}
\end{equation}
which has $ z^\ast =0 $ as globally asymptotically (and hence exponentially) stable equilibrium point, meaning that Theorem~\ref{thm:glob-as-stab} is applicable.
In the original $ x$ basis, a consequence of inhomogeneity of \eqref{eq:FJ_cont} is that the attractor $ x^\ast $ is a function of the initial condition $ x_o $, and it moves with it: $ x^\ast = x^\ast (x_o)$.
To talk rigorously about global asymptotic stability, we should use \eqref{eq:FJ_cont-z} in $ z$-coordinates.
However, for homogeneity of presentation, the next theorem is still formulated in terms of $ x $ and $ y $ variables, and global asymptotic stability / attractivity is referred to the ``moving'' point\footnote{Unlike for the consensus problem which we will study in Section~\ref{sec:av-consensus}, in this case there is no easy way to describe the orthogonal complement of the space in which $ x^\ast(x_o)$ moves.} $ x^\ast(x_o)$.
Another consequence of the inhomogeneous structure of \eqref{eq:FJ_cont} is that nonvanishing affine privacy masks like the one used in Theorem~\ref{thm:glob-as-stab} cannot be used. To obtain convergence to the correct $ x^\ast (x_o)$ we need to use a vanishing privacy mask.
\begin{theorem}
\label{thm:FJ}
If Assumption~\ref{ass1} holds, the masked system
\begin{equation}
\begin{split}
\dot x & = (-L - \Theta ) y + \Theta y_o \\
y & = h(t, \, x, \, \pi ) = \left(I + \Phi e^{-\Sigma t } \right) \left( x + e^{-\Delta t} \gamma \right)
\end{split}
\label{eq:FJ_cont_private}
\end{equation}
where $ y_o = h(t, \, x_o, \, \pi ) $ and $ \Theta\neq 0$, is a dynamically private version of \eqref{eq:FJ_cont}.
If $ x^\ast (x_o) = (L+\Theta)^{-1} \Theta x_o $ is the globally asymptotically stable equilibrium point of \eqref{eq:FJ_cont}, then $ x^\ast (x_o) $ is a globally uniform attractor of \eqref{eq:FJ_cont_private}.
\end{theorem}
\begin{corollary}
\label{cor:FJ-autonomous}
The masked system \eqref{eq:FJ_cont_private} is asymptotically autonomous with \eqref{eq:FJ_cont} as limit system.
The $ \omega$-limit set of \eqref{eq:FJ_cont_private} is given by $ \{ x^\ast(x_o) \} = \{ (L+\Theta)^{-1} \Theta x_o \} $ for each $ x_o $.
\end{corollary}
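A minimal numerical sketch of the double-masked dynamics \eqref{eq:FJ_cont_private} is given below; the random graph is assumed irreducible and all mask parameters are illustrative. Note that $ y_o $ is recomputed at each time as $ h(t, x_o, \pi)$, following the statement of Theorem~\ref{thm:FJ}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 15
W = rng.uniform(0, 1, (n, n)) * (rng.random((n, n)) < 0.3)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W                   # Laplacian (assumed irreducible)
Theta = np.diag(rng.uniform(0.1, 1.0, n))        # susceptibilities (all nonzero here)

Phi = rng.uniform(0.1, 1.0, n)                   # vanishing-mask parameters
Sigma = rng.uniform(0.5, 2.0, n)
Delta = rng.uniform(0.5, 2.0, n)
gamma = rng.uniform(1.0, 2.0, n)

def mask(t, x):
    return (1.0 + Phi * np.exp(-Sigma * t)) * (x + np.exp(-Delta * t) * gamma)

x0 = rng.uniform(-1, 1, n)
x_star = np.linalg.solve(L + Theta, Theta @ x0)  # unmasked equilibrium x*(x_o)

x, dt, T = x0.copy(), 1e-3, 60.0
for step in range(int(T / dt)):
    t = step * dt
    y, yo = mask(t, x), mask(t, x0)              # double mask: state and x_o
    x = x + dt * (-(L + Theta) @ y + Theta @ yo)
print(np.abs(x - x_star).max())                  # x(t) -> x*(x_o) despite the masks
\end{verbatim}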
\begin{example}
\label{ex:FJ}
An example of $ n=100 $ agents is shown in Fig.~\ref{fig:FJ_cont_private}. The introduction of $ h(\cdot) $ scrambles the initial conditions, as expected, see panel (c) of Fig.~\ref{fig:FJ_cont_private}. A level of privacy $ \lambda = 1$ is requested. Both $ x(t) $ and $ y(t) $ converge to the same $ x^\ast = (L+\Theta)^{-1} \Theta x_o $, see panel (d) of Fig.~\ref{fig:FJ_cont_private}, although neither now respects the rankings during the transient (i.e., unlike for \eqref{eq:FJ_cont}, for \eqref{eq:FJ_cont_private} it is no longer true that $ x_i(t_1) < x_j(t_1) $ $ \Longrightarrow $ $ x_i(t_2) < x_j(t_2) $ for all $ t_2> t_1$).
\begin{figure*}[htb]
\begin{center}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_7a}}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_7b}}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_7c}}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_7d}}
\caption{Privacy-preserving continuous-time Friedkin-Johnsen model of Example~\ref{ex:FJ}. (a): private state $ x(t)$; (b): masked output $ y(t)$; (c): initial condition $ x(0) $ vs. $ y(0)$; (d): final condition $ x(t_f) $ vs. $ y(t_f)$, where $ t_f = $ final time of the simulation.
}
\label{fig:FJ_cont_private}
\end{center}
\end{figure*}
\end{example}
\section{Dynamically private average consensus}
\label{sec:av-consensus}
In the average consensus problem, $ f(x) = -L x $, with $ L$ a weight-balanced Laplacian matrix: $ L \textbf{1} = L^T \textbf{1} =0 $, with $ \textbf{1} =\begin{bmatrix} 1 & \ldots & 1 \end{bmatrix}^T \in \mathbb{R}^n$.
When $L$ is irreducible, the equilibrium point is $ x^\ast(x_o) = (\textbf{1}^T x_o/n) \textbf{1} $. The system has a continuum of equilibria, described by $ {\rm span}(\textbf{1}) $, and each $ x^\ast (x_o) $ is globally asymptotically stable in $ {\rm span} (\textbf{1})^\perp$, see \cite{Olfati2003Consensus}.
\begin{theorem}
\label{thm:av-consensus}
Consider the system
\begin{equation}
\dot x = - L x , \qquad x(0) =x_o
\label{eq:consensus1}
\end{equation}
where $ L $ is an irreducible, weight-balanced Laplacian matrix, and denote $ \eta = \textbf{1}^T x_o /n $ its average consensus value.
Then $ x^\ast = \eta \textbf{1} $ is a global uniform attractor on $ {\rm span} (\textbf{1})^\perp$ for the masked system
\begin{equation}
\begin{split}
\dot x & = - L y \\
y & = h(t, \, x, \, \pi ) = \left(I + \Phi e^{-\Sigma t } \right) \left( x + e^{-\Delta t} \gamma \right) .
\end{split}
\label{eq:consensus_xy}
\end{equation}
Furthermore, if Assumption~\ref{ass1} holds, then \eqref{eq:consensus_xy} is a dynamically private version of \eqref{eq:consensus1}.
\end{theorem}
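The theorem can be checked numerically with the following sketch (random symmetric, hence weight-balanced, Laplacian; all parameters are illustrative and the graph is assumed connected):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n = 12
W = rng.uniform(0, 1, (n, n)) * (rng.random((n, n)) < 0.4)
W = (W + W.T) / 2                                # symmetric => weight-balanced
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

Phi = rng.uniform(0.1, 1.0, n)                   # vanishing affine mask parameters
Sigma = rng.uniform(0.5, 2.0, n)
Delta = rng.uniform(0.5, 2.0, n)
gamma = rng.uniform(1.0, 2.0, n)

def mask(t, x):
    return (1.0 + Phi * np.exp(-Sigma * t)) * (x + np.exp(-Delta * t) * gamma)

x = rng.uniform(-1, 1, n)
eta = x.mean()                                   # consensus value 1^T x_o / n
dt, T = 1e-3, 30.0
for step in range(int(T / dt)):
    # 1^T x(t) is preserved exactly since 1^T L = 0 (weight balance)
    x = x + dt * (-L @ mask(step * dt, x))
print(np.abs(x - eta).max())                     # x(t) -> eta * 1
\end{verbatim}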
Also in this case our masked system is an asymptotically autonomous time-varying system.
\begin{corollary}
\label{cor:consensus-autonomous}
The masked system \eqref{eq:consensus_xy} is asymptotically autonomous with \eqref{eq:consensus1} as limit system.
The $ \omega$-limit set of \eqref{eq:consensus_xy} is given by $ \{ \eta \textbf{1} \} $ for each $ x_o $.
\end{corollary}
\begin{remark}
Even if \eqref{eq:consensus1} has $ x^\ast =\eta \textbf{1} $ as a globally asymptotically stable equilibrium point in $ {\rm span}(\textbf{1})^\perp$, the masked system \eqref{eq:consensus_xy} does not have equilibria because of the extra inhomogeneous term on the right hand side; hence we cannot talk about stability of $ \eta \textbf{1} $.
Nevertheless, $ x^\ast =\eta \textbf{1} $ remains a global attractor for all trajectories of the system in $ {\rm span}(\textbf{1})^\perp$.
\end{remark}
\begin{remark}
Since the evolution of the masked system \eqref{eq:consensus_xy} is restricted to the $ n-1$ dimensional subspace $ {\rm span}(\textbf{1})^\perp$, our masked consensus problem (as any exact privacy preserving consensus scheme) makes sense only when $ n>2$. The case $ n=2 $ never satisfies Assumption~\ref{ass1} when $L$ is irreducible.
\end{remark}
\begin{example}
\label{ex:private-consensus}
In Fig.~\ref{fig:consensus1} a private consensus problem is run among $ n=100 $ agents.
Both $ x(t) $ and $ y(t) $ converge to the same consensus value $ \eta = \textbf{1}^T x(0)/n$, but the initial condition $ y(0) $ does not reflect $ x(0)$, not even when $ x_i(0) $ is already near $ \eta $ ($h(\cdot ) $ does not preserve neighborhoods, see panel (c) of Fig.~\ref{fig:consensus1}).
The level of privacy measure imposed in this simulation is $ \lambda=1 $.
Notice that $ \textbf{1}^T x(t)/n $ is constant over $t$, while $ \textbf{1}^T y(t)/n $ is not, i.e., the output mask also hides the conservation law.
Notice further that a standard Lyapunov function used for consensus, like $ V_{mm}(t) = \max_i(x_i(t)) - \min_i (x_i(t)) $, does not work in our privacy-preserving scheme (see panel (d) of Fig.~\ref{fig:consensus1}), which reflects the fact that the system \eqref{eq:consensus_xy} is not asymptotically stable in $ {\rm span}(\textbf{1})^\perp $.
The convergence speed of the time-dependent part can be manipulated by selecting the factors $ \sigma_i $ and $ \delta_i $ appropriately.
\begin{figure*}[htb]
\begin{center}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_6a}}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_6b}}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_6c}}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_6d}}
\caption{Privacy-preserving consensus of Example~\ref{ex:private-consensus}. (a): private state $ x(t)$; (b): masked output $ y(t)$; (c): initial conditions $ x(0) $ vs. $ y(0)$; (d): $ V_{mm}(t) = \max_i(x_i(t)) - \min_i (x_i(t)) $. The black dotted line in (a) resp. (b) represent $ \textbf{1}^T x(t)/n $, resp. $ \textbf{1}^T y(t)/n $.
}
\label{fig:consensus1}
\end{center}
\end{figure*}
\end{example}
\section{Privacy for higher order systems: the case of pinned synchronization}
\label{sec:synchro}
When instead of a scalar variable, at each node we have a vector of variables $ x_i \in \mathbb{R}^\nu$, $ \nu >1 $, then the definition of output mask can be straightforwardly extended by defining $ h_i(t, x_i, \pi_i ) $ as a $ \nu$-dimensional diagonal map.
For instance for the vanishing affine output mask, in place of \eqref{eq:van-aff-mask-scalar} at each node we can use
\[
h_i (t, x_i, \pi_i ) = (I + \Phi_i e^{-\Sigma_i t } ) ( x_i + e^{-\Delta_i t } \gamma_i )
\]
where $ \Phi_i = {\rm diag}( \phi_{i,1}, \ldots, \phi_{i,\nu} ) $, $ \Sigma_i= {\rm diag}( \sigma_{i,1}, \ldots, \sigma_{i,\nu} ) $, $ \Delta_i = {\rm diag}( \delta_{i,1}, \ldots, \delta_{i,\nu }) $, and $ \gamma_i = \begin{bmatrix} \gamma_{i,1} & \ldots & \gamma_{i,\nu} \end{bmatrix}^T $.
The formalism introduced in the paper extends unaltered.
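A nodewise sketch of this $ \nu$-dimensional diagonal mask (with illustrative, randomly drawn parameters) could look as follows:
\begin{verbatim}
import numpy as np

def node_mask(t, x_i, Phi_i, Sigma_i, Delta_i, gamma_i):
    # h_i(t, x_i): diagonal map applied componentwise to x_i in R^nu
    return (1.0 + Phi_i * np.exp(-Sigma_i * t)) * \
           (x_i + np.exp(-Delta_i * t) * gamma_i)

nu = 3
rng = np.random.default_rng(4)
x_i = rng.standard_normal(nu)
Phi_i, Sigma_i, Delta_i, gamma_i = rng.uniform(0.5, 1.5, (4, nu))
print(node_mask(0.5, x_i, Phi_i, Sigma_i, Delta_i, gamma_i))
\end{verbatim}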
We will now investigate privacy protection in a standard example of coordination of multivariable multiagent systems: synchronization via pinning control of identical nonlinear systems with diffusive couplings \cite{Chen4232574,Yu-doi:10.1137/100781699,ZHOU2008996}.
Other settings of multiagent coordination can be treated in an analogous way.
Consider a network of $n$ agents obeying the following set of coupled differential equations
\begin{align}
\dot x_i & = f(x_i) - \sum_{j=1}^n \ell_{ij} R x_j - p_i R (x_i - s) , \qquad i=1, \ldots, k
\label{eq:synchro1} \\
\dot x_i & = f(x_i) - \sum_{j=1}^n \ell_{ij} R x_j, \qquad i=k+1, \ldots, n
\label{eq:synchro2}
\end{align}
where $ x_i \in \mathbb{R}^\nu $, $ L = (\ell_{ij} ) $ is an irreducible Laplacian matrix, and $ R$ is a symmetric positive definite matrix of inner couplings.
The extra term in the first $ k$ equations expresses the coupling with a pinned node ($ p_i = $ pinning gain), acting as an exosystem for \eqref{eq:synchro1}-\eqref{eq:synchro2} and obeying the law
\begin{equation}
\dot s = f(s) .
\label{eq:synchro3}
\end{equation}
The system \eqref{eq:synchro3} can represent an equilibrium point, a periodic or a chaotic system \cite{Yu-doi:10.1137/100781699}.
Synchronization of \eqref{eq:synchro1}-\eqref{eq:synchro2} to the exosystem \eqref{eq:synchro3} corresponds to
\[
\lim_{t\to\infty} \| x_i(t) - s(t) \| =0 \quad \forall \, x_i(0) \in \mathbb{R}^\nu, \quad \forall \, i =1, \ldots, n.
\]
We need the following (standard) assumption:
\begin{assumption}
\label{ass2} {\rm (Global Lipschitzianity of the drift)}
$ f \,:\, \mathbb{R}^\nu \to \mathbb{R}^\nu $ is such that
\begin{equation}
\| f(x) - f(z) \| \leq q \big( ( x-z)^T R (x-z) \big)^{\frac{1}{2}} \qquad \forall \; x, \, z \in \mathbb{R}^\nu
\label{eq:glob-Lipsc-pinning}
\end{equation}
for some positive constant $q$.
\end{assumption}
Under Assumption~\ref{ass2}, a sufficient condition for global synchronization of \eqref{eq:synchro1}-\eqref{eq:synchro2} to \eqref{eq:synchro3} is given by the following matrix inequality
\begin{equation}
q \Xi \otimes R - \left( \frac{1}{2}(\Xi L + L^T \Xi ) + \Xi P \right) \otimes R <0
\label{eq:synchro_ass2}
\end{equation}
where $ \Xi = {\rm diag}(\xi ) $, with $ \xi = ( \xi_1, \ldots, \xi_n) $ the left eigenvector of $L$ relative to $0$, and $ P = {\rm diag}(p_1, \ldots, p_k, 0, \ldots, 0 ) $, see \cite{Yu-doi:10.1137/100781699} for more details.
\begin{theorem}
\label{thm:pinning}
Under the Assumptions~\ref{ass1} and~\ref{ass2}, if the solution $ s(t)$ of \eqref{eq:synchro3} is bounded $ \forall \, t \in [0, \, \infty) $, $ L $ is irreducible, and $ P $ is such that \eqref{eq:synchro_ass2} holds, then the exosystem \eqref{eq:synchro3} is a global attractor for the trajectories of the dynamically private system:
\begin{align}
\dot x_i & = f(y_i) - \sum_{j=1}^n \ell_{ij} R y_j - p_i R (y_i -s) , \quad i=1, \ldots, k
\label{eq:synchro_masked1} \\
\dot x_i & = f(y_i) - \sum_{j=1}^n \ell_{ij} R y_j, \quad i=k+1, \ldots, n
\label{eq:synchro_masked2} \\
y_i & = \left(I + \Phi_i e^{-\Sigma_i t } \right) \left( x_i + e^{-\Delta_i t} \gamma_i \right) , \quad i=1, \ldots, n.
\label{eq:synchro_masked3}
\end{align}
\end{theorem}
\begin{remark}
Notice that the masked system \eqref{eq:synchro_masked1}-\eqref{eq:synchro_masked3} is not asymptotically autonomous, as its limit system \eqref{eq:synchro1}-\eqref{eq:synchro2} is a function of the exosystem $ s(t)$ which also constitutes the $ \omega$-limit set of the system.
\end{remark}
\begin{example}
\label{ex:synchro}
Consider the case of an $ f(\cdot) $ representing a three dimensional chaotic attractor (here the model presented in \cite{ZHOU2008996} is used).
In Fig.~\ref{fig:synchro1} a system of $ n=50$ coupled agents synchronize to an exosystem $ s(t) $ obeying the same law. The privacy measure in this example is set to $ \lambda=10$. The convergence speed can be tuned by changing the $ \Sigma_i$ and $ \Delta_i $ parameters of the masks.
\begin{figure*}[htb]
\begin{center}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_9a}}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_9b}}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_9c}}
\subfigure[]{
\includegraphics[angle=0, trim=1cm 7.5cm 9cm 1cm, clip=true, width=4.2cm]{fig_privacy_9d}}
\caption{Privacy-preserving pinned synchronization of Example~\ref{ex:synchro}. (a): private state $ x(t)$; (b): masked output $ y(t)$; (c): initial condition $ x(0) $ vs. $ y(0)$; (d): error $ e(t)$.
}
\label{fig:synchro1}
\end{center}
\end{figure*}
\end{example}
\section{Conclusions}
The approach to privacy protection we have taken in this paper is exact and inspired by classical nonlinear systems techniques.
While most of the assumptions under which it holds are fairly simple and reasonable (only the internal state of an agent and the parameters of its output mask must be kept private), the need to have non completely covering neighborhoods (Assumption~\ref{ass1}) is more restrictive, yet difficult to dispense with without requiring some other form of restriction (for instance privacy of the vector fields themselves).
Assumption~\ref{ass1} is key to guarantee the impossibility for an eavesdropper to identify a model of the system, and hence to set up an observer for $ x_o$.
Notice that a breaching of the privacy at one node does not compromise the other nodes.
From a system-theoretical perspective, the most interesting fact described in the paper is that privacy seems incompatible with a point being a fixed point of a dynamical system, as in that case if all agents happen to have initial conditions already on the fixed point, privacy is compromised (an agent will see the same stationary messages being exchanged among its neighboring nodes for all $ t$).
By extension of the same argument, approximate privacy (at any level of accuracy) does not seem to be compatible with stability.
It is intriguing to investigate if concepts like $ \epsilon$-differential privacy \cite{Cortes7798915} can be rephrased in these more dynamical terms.
Several generalizations of our approach are possible.
First of all, an equivalent framework for discrete-time systems should be developed.
Then it is easy to think of output masks that vanish in finite time rather than asymptotically.
More complicated seems to be integrating the time dependence introduced by an output mask with a time-varying communication graph.
Even more challenging is the case in which, instead of global exponential stability (perhaps on ``slices'' of the state space if there is a continuum of equilibria) of the unmasked system, this last has multiple isolated locally exponentially stable equilibria.
In this case even a transient output mask may lead to tipping over from one basin of attraction to another, hence it should be used with care.
\section{Acknowledgments}
The author would like to thank Claudio De Persis for useful discussions on the topic of the paper and the anonymous reviewers for constructive criticisms. This paper is dedicated to the memory of the author's father.
\section{Introduction}
\label{sec:intro}
$K$-cores play an important role in revealing the higher-order organization of networks.
A $k$-core \cite{seidman1983network} is a maximal induced subgraph where all vertices have internal degree of at least $k$. These cohesive subgraphs have been applied to model users' engagement and viral marketing in social networks \cite{bhawalkar2015preventing,kitsak2010identification}. Other applications include anomaly detection \cite{shin2016corescope}, community discovery \cite{peng2014accelerating}, protein function prediction \cite{you2013prediction}, and visualization \cite{alvarez2006large,carmi2007model}. However, the $k$-core structure can be quite unstable under network modification. For instance, removing only a few edges from the graph might lead to the collapse of its core structure.
This motivates the $k$-core minimization problem
\textit{ Given a graph G and constant k, find a small set of $b$ edges for which the removal minimizes the size of the k-core structure \cite{zhu2018k}.}
We motivate $k$-core minimization using the following applications: (1) \textit{Monitoring:} Given an infrastructure or technological network, which edges should be monitored for attacks \cite{xiangyu2013identification,Laishram2018}? (2) \textit{Defense:} Which communication channels should be blocked in a terrorist network in order to destabilize its activities \cite{pedahzur2006changing,perliger2011social}? and (3) \textit{Design:} How to prevent unraveling in a social or biological network by strengthening connections between nodes \cite{bhawalkar2015preventing,morone2018k}?
Consider a specific application of $k$-cores to online social networks (OSNs). OSN users tend to perform activities (e.g., joining a group, playing a game) if enough of their friends do the same \cite{burke2009feed}. Thus, strengthening critical links between users is key to the long-term popularity, and even survival, of the network \cite{Farzan2011}. This scenario can be modeled using $k$-cores. Initially, everyone is engaged in the $k$-core. Removal of a few links (e.g., unfriending, unfollowing) might not only cause a couple of users to leave the network but produce a mass exodus due to cascading effects. This process can help us to understand the decline and death of OSNs such as Friendster \cite{garcia2013social}.
\begin{figure}[t]
\centering
\subfloat[Initial $G$]{\includegraphics[width=0.24\columnwidth]{Experiments/Example/intro_1.pdf}\label{fig:intro_1}}
\hspace{3em}
\subfloat[Modification $G'$]{\includegraphics[width=0.24\columnwidth]{Experiments/Example/intro_2.pdf}\label{fig:intro_2}}
\hspace{3em}
\subfloat[Modification $G''$]{\includegraphics[width=0.24\columnwidth]{Experiments/Example/intro_3.pdf}\label{fig:intro_3}}
\caption{\textbf{ K-core minimization for an illustrative example: (a) Initial graph, where all the vertices are in the $3$-core; (b) Removing $e_1$ causes all the vertices to leave the $3$-core; (c) Removing $e_2$ causes only six vertices to leave the $3$-core. \label{fig:core_intro}}}
\vspace{-2mm}
\end{figure}
$K$-core minimization (KCM) can be motivated both from the perspective of a centralized agent who protects the structure of a network or an adversary that aims to disrupt it. Moreover, our problem can also be applied to measure network resilience \cite{Laishram2018}.
We illustrate KCM in Figure \ref{fig:core_intro}. An initial graph $G$ (Figure \ref{fig:intro_1}), where all vertices are in the $3$-core, is modified by the removal of a single edge. Graphs $G'$ (Figure \ref{fig:intro_2}) and $G''$ (Figure \ref{fig:intro_3}) are the result of removing $e_1$ and $e_2$, respectively. While the removal of $e_1$ brings all the vertices into a $2$-core, deleting $e_2$ has a smaller effect---four vertices remain in the 3-core. Our goal is to identify a small set of edges whose removal minimizes the size of the $k$-core.
From a theoretical standpoint, for any objective function of interest, we can define a \textit{search} (e.g. the $k$-core decomposition) and a corresponding \textit{modification} problem, such as $k$-core minimization. In this paper, we show that, different from its search version \cite{batagelj2011fast}, KCM is NP-hard. Furthermore, there is no polynomial time algorithm that achieves a constant-factor approximation for our problem. Intuitively, the main challenge stems from the strong combinatorial nature of the effects of edge removals. While removing a single edge may have no immediate effect, the deletion of a small number of edges might cause the collapse of the k-core structure. This behavior differs from more popular problems in graph combinatorial optimization, such as submodular optimization, where a simple greedy algorithm provides constant-factor approximation guarantees.
The algorithm for $k$-core minimization proposed in this paper applies the concept of \textit{Shapley values} (SVs), which, in the context of cooperative game theory, measure the contribution of players in coalitions \cite{shapley1953value}. Our algorithm selects edges with largest Shapley value to account for the joint effect (or cooperation) of multiple edges. Since computing SVs is NP-hard, we approximate them in polynomial time via a randomized algorithm with quality guarantees.
Recent papers have introduced the KCM problem \cite{zhu2018k} and its vertex version \cite{zhang2017finding}, where the goal is to delete a few vertices such that the $k$-core structure is minimized. However, our work provides a stronger theoretical analysis and more effective algorithms that can be applied to both problems. In particular, we show that our algorithm outperforms the greedy approach proposed in \cite{zhu2018k}.
Our main contributions are summarized as follows:
\begin{itemize}
\item We study the $k$-core minimization (KCM) problem, which consists of finding a small set of edges whose removal minimizes the size of the $k$-core structure of a network.
\item We show that KCM is NP-hard, even to approximate by a constant for $k\geq 3$. We also discuss the parameterized complexity of KCM and show the problem is $W[2]$-hard for the same values of $k$.
\item Given the above inapproximability result, we propose a randomized Shapley Value based algorithm that efficiently accounts for the interdependence among the candidate edges for removal.
\item We show that our algorithm is both accurate and efficient using several datasets. Moreover, we illustrate how KCM can be applied to profile the structural resilience of real networks.
\end{itemize}
\subsection{Hardness and Approximability}
\label{sec:hardness}
The hardness of the KCM problem stems from two major facts: 1) There is a combinatorial number of choices of edges from the candidate set, and 2) there might be strong dependencies in the effects of edge removals (e.g. no effect for a single edge but cascading effects for subsets of edges). We show that KCM is NP-hard to approximate within any constant factor for $k\geq 3$.
\begin{thm} \label{thm:np_hard_k_1_2}
The KCM problem is NP-hard for $k=1$ and $k=2$.
\end{thm}
\begin{proof}
See the Appendix.
\end{proof}
Notice that a proof for Theorem \ref{thm:np_hard_k_1_2} is also given by \cite{zhu2018k}. However, our proof applies a different construction and was developed independently from (and simultaneously with) this previous work.
\begin{thm} \label{thm:hard_approx}
The KCM problem is NP-hard and it is also NP-hard to approximate within a constant-factor for all $k\geq 3$.
\end{thm}
\begin{proof}
We sketch the proof for $k\!=\!3$ (similar for $k\!>\!3$).
Let $SK(U,\mathcal{S},p,w,q)$ be an instance of the Set Union Knapsack Problem \cite{goldschmidt1994note}, where $U=\{u_1, \ldots, u_{n'}\}$ is a set of items, $\mathcal{S}=\{S_1, \ldots, S_{m'}\}$ is a set of subsets ($S_i \subseteq U$), $p:\mathcal{S}\to\mathbb{R}_{+}$ is a subset profit function, $w:U\to\mathbb{R}_+$ is an item weight function, and $q \in \mathbb{R}_+$ is the budget. For a subset $\mathcal{A} \subseteq \mathcal{S}$, the weighted union of set $\mathcal{A}$ is $W(\mathcal{A}) = \sum_{e\in \cup_{t\in \mathcal{A}} S_t} w_e$ and its profit is $P(\mathcal{A})=\sum_{t \in \mathcal{A}} p_t$. The problem is to find a subset $\mathcal{A}^*\subseteq \mathcal{S}$ such that $W(\mathcal{A}^*)\leq q$ and $P(\mathcal{A}^*)$ is maximized. SK is NP-hard to approximate within a constant factor \cite{arulselvan2014note}.
We reduce a version of $SK$ with equal profits and weights (also NP-hard to approximate) to the KCM problem. The graph $G'$ is constructed as follows. For each $u_j \in U$, we create a cycle of $m'$ vertices $Y_{j,1}, Y_{j,2}, \ldots, Y_{j,m'}$ in $V$ and add $(Y_{j,1},Y_{j,2})$, $(Y_{j,2},Y_{j,3}), \ldots, (Y_{j,m'-1}$ $,Y_{j,m'}),(Y_{j,m'},Y_{j,1})$ as edges between them. We also add $5$ vertices $Z_{j,1}$ to $Z_{j,5}$ with eight edges, where the four vertices $Z_{j,2}$ to $Z_{j,5}$ form a clique with six edges. The other two edges are $(Z_{j,1},Z_{j,2})$ and $(Z_{j,1},Z_{j,5})$. Moreover, for each subset $S_i$ we create a set of $O((m')^3)$ vertices (the sets $X_{i,*}$ are the red rectangles in Figure \ref{fig:hardness_ex1}), such that each node has exactly degree $3$, and add one more node $X_{i,1}$ together with two edges from $X_{i,1}$ to two vertices in $X_{i,*}$. In the edge set $E$, an edge $(X_{i,1},Y_{j,i})$ is added if $u_j\in S_i$. Additionally, if $u_j\notin S_i$, the edge $(Y_{j,i},Z_{j,1})$ is added to $E$. Figure \ref{fig:hardness_ex1} illustrates our construction for $S_1=\{u_1,u_2\},S_2=\{u_1,u_3\},S_3=\{u_2\}$.
In KCM, the number of edges to be removed is the budget, $b$. The candidate set of edges, $\Gamma$ is the set of all the edges with form $(Y_{j,1},Y_{j,2})$. Initially all the nodes in $G'$ are in the $3$-core. Our claim is, for any solution $\mathcal{A}$ of an instance of $SK$ there is a corresponding solution set of edges, $B$ (where $|B|=b$) in $G'$ of the KCM problem, such that $f_3(B)=P(\mathcal{A})+b(m'+1)$ if the edges in $\mathcal{A}$ are removed.
The $m'$ nodes in any $Y_j$ and the node $Z_{j,1}$ will be in the $2$-core if the edge $(Y_{j,1},Y_{j,2})$ gets removed. So, removal of any $b$ edges from $\Gamma$ forces $b(m'+1)$ nodes to go to the $2$-core. But the node $X_{i,1}$ and each node in $X_{i,*}$ ($O((m')^3)$ nodes) will be in the $2$-core iff all its neighbours in the $Y_{j,i}$s go to the $2$-core after the removal of the $b$ edges in $\Gamma$. Thus, an optimal solution $B^*$ will satisfy $f_3(B^*)=P(\mathcal{A^*})+b(m'+1)$, where $\mathcal{A^*}$ is the optimal solution for SK. For any non-optimal solution $B$, $f_3(B)=P(\mathcal{A})+b(m'+1)$, where $\mathcal{A}$ is also a non-optimal solution for SK. As $P(\mathcal{A^*})$ is at least $O((m')^3)$ by construction (i.e. $P(\mathcal{A^*})\gg b(m'+1)$), and $\frac{P(\mathcal{A^*})}{P(\mathcal{A})}$ cannot be within a constant factor, $\frac{f_3(B^*)}{f_3(B)}$ will also not be within any constant factor.
\end{proof}
Theorem \ref{thm:hard_approx} shows that there is no polynomial-time constant-factor approximation for KCM when $k\!\geq\!3$. This contrasts with well-known NP-hard graph combinatorial problems in the literature \cite{kempe2003maximizing}. In the next section, we explore the hardness of our problem further in terms of exact
exponential algorithms with respect to the parameters.
\begin{figure}[t]
\vspace{-1mm}
\centering
{\includegraphics[width=0.45\textwidth]{Experiments/Example/hard_ex1.pdf}}
\caption{\textbf{Example construction for hardness reduction from SK where $U=\{u_1,u_2,u_3\}, S=\{S_1,S_2,S_3\}, S_1=\{u_1,u_2\},S_2=\{u_1,u_3\},S_3=\{u_2\}$. \label{fig:hardness_ex1}}}
\end{figure}
\subsection{Parameterized Complexity}
There are several NP-hard problems with exact solutions via algorithms that run in exponential time in the size of the parameter. For instance, the NP-hard Vertex Cover can be solved via an exhaustive search algorithm in time $2^{b_1}{n_1}^{O(1)}$ \cite{balasubramanian1998improved}, where $b_1$ and $n_1$ are budget and the size of the graph instance respectively. Vertex cover is therefore fixed-parameter tractable (FPT) \cite{flum2006parameterized}, and if we are only interested in small $b_1$, we can solve the problem in polynomial time. We investigate whether the KCM problem is also in the FPT class.
A parameterized problem instance is comprised of an instance $X$ in the
usual sense, and a parameter $b$. A problem with parameter $b$ is called fixed parameter tractable (FPT) \cite{flum2006parameterized}
if it is solvable in time $g(b) \times p(|X|)$, where $g$ is an arbitrary function of $b$ and $p$ is a polynomial in the
input size $|X|$. Just as in NP-hardness,
there exists a hierarchy of
complexity classes above FPT. Being hard for one of these classes is an evidence that the problem is unlikely to be FPT. Indeed, assuming the
Exponential Time Hypothesis, a problem which is $W[1]$-hard does not belong to FPT.
The main classes in this hierarchy are: FPT$\subseteq W[1] \subseteq W[2]\subseteq\ldots W[P]\subseteq XP$. Generally speaking, the problem is harder when it belongs to a higher $W[.]$-hard class in terms of the parameterized complexity. For instance, \textit{dominating set} is in $W[2]$ and is considered to be harder than \textit{maximum independent set}, which is in $W[1]$.
\begin{definition}
\textbf{Parameterized Reduction \cite{flum2006parameterized}: }
Let $P_1$ and $P_2$ be parameterized problems. A parameterized reduction from $P_1$ to $P_2$ is an algorithm that, given an instance $(X_1, b_1)$ of $P_1$, outputs an instance $(X_2, b_2)$ of $P_2$ such that: (1) $(X_1, b_1)$ is a yes-instance of $P_1$ iff $(X_2, b_2)$ is a yes-instance of $P_2$; (2) $b_2\leq h(b_1)$ for some computable (possibly exponential) function $h$; and (3) the running time of the algorithm is $g(b_1)\cdot|X|^{O(1)}$ for a computable function $g$.
\end{definition}
\begin{thm} \label{thm: param_approx}
The KCM problem, parameterized by $b$, is $W[2]$-hard for $k\geq 3$; in particular, it is unlikely to be in FPT.
\end{thm}
\begin{proof}
We show a parameterized reduction from the Set Cover problem. The Set Cover problem is known to be $W[2]$-hard \cite{bonnet2016parameterized}. The details of the proof are given in the Appendix.
\end{proof}
\begin{thm}\label{cor:tmcv_para_d}
The KCM problem is para-NP-hard parameterized by $k$.
\end{thm}
This can be proven from the fact that our problem KCM is NP-hard even for constant $k$. Motivated by these strong hardness and inapproximability results, we next consider some practical heuristics for the KCM problem.
\section{Problem Definition}
\label{sec:prob_def}
We assume $G(V,E)$ to be an undirected and unweighted graph with sets of vertices $V$ ($|V|=n$) and edges $E$ ($|E|=m$). Let $d(G,u)$ denote the degree of vertex $u$ in $G$. An induced subgraph $H=(V_H,E_H)$ of $G$ is such that $V_H \subseteq V$ and, for all $u,v \in V_H$, if $(u,v)\in E$ then $(u,v)\in E_H$. The $k$-core \cite{seidman1983network} of a network is defined below.
\begin{definition} \textbf{$k$-Core:} The $k$-core of a graph $G$, denoted by $C_k(G)=(V_k(G),E_k(G))$, is defined as a maximal induced subgraph that has vertices with degree at least $k$.
\end{definition}
Figure \ref{fig:core_definitions} shows an example. The graphs in Figures \ref{fig:kcore_ex11} and \ref{fig:kcore_ex12} are the $2$-core and the $3$-core, respectively, of the initial graph in Figure \ref{fig:kcore_ex1}. Note that, $C_{k+1}(G)$ is a subgraph of $C_k(G)$. Let $\mathcal{E}_G(v)$ denote the core number of the node $v$ in $G$. If $v\in V_k(G)$ and $v\notin V_{k+1}(G)$ then $\mathcal{E}_G(v)=k$. $K$-core decomposition can be performed in time $O(m)$ by recursively removing vertices with degree lower than $k$ \cite{batagelj2011fast}.
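For reference, a minimal Python sketch of this peeling procedure is given below (adjacency as a dictionary of neighbor sets; the example graph is illustrative):
\begin{verbatim}
from collections import deque

def k_core(adj, k):
    """Vertex set of the k-core: repeatedly peel vertices of degree < k."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    queue = deque(v for v, d in deg.items() if d < k)
    removed = set()
    while queue:
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                if deg[u] < k:
                    queue.append(u)
    return set(adj) - removed

# A 4-cycle with a pendant vertex: the 2-core is the cycle.
adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3, 5}, 5: {4}}
print(k_core(adj, 2))   # {1, 2, 3, 4}
\end{verbatim}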
Let $G^B = (V,E\setminus B)$ be the modified graph after deleting a set $B$ with $b$ edges. Deleting an edge reduces the degree of two vertices and possibly their core numbers. The reduction in core number might propagate to other vertices. For instance, the vertices in a simple cycle are in the $2$-core but deleting any edge from the graph moves all the vertices to the $1$-core. Let $N_k(G) =|V_k(G)|$ and $M_k(G) =|E_k(G)|$ be the number of nodes and edges respectively in $C_k(G)$.
\begin{table} [t]
\centering
\small
\vspace{-2mm}
\begin{tabular}{| c | c |}
\hline
\textbf{Symbols} & \textbf{Definitions and Descriptions}\\
\hline
$G(V,E)$ & Given graph (vertex set $V$ and edge set $E$)\\
\hline
$n$ & Number of nodes in the graph\\
\hline
$m$ & Number of edges in the graph\\
\hline
$C_k(G)=(V_k(G),E_k(G))$ & The $k$-core of graph $G$\\
\hline
$N_k(G)$ & $|V_k(G)|$, $\#$nodes in the $k$-core of $G$\\
\hline
$M_k(G)$ & $|E_k(G)|$, $\#$edges in the $k$-core of $G$\\
\hline
$\Gamma$& Candidate set of edges\\
\hline
$b$& Budget \\
\hline
$\mathscr{V}(P)$ & The value of a coalition $P$ \\
\hline
$\Phi_e$ & The Shapley value of an edge $e$ \\
\hline
$P_e(\pi)$ & Set of edges before $e$ in permutation $\pi$ \\
\hline
\end{tabular}
\caption { Frequently used symbols}\label{tab:table_symbol}
\end{table}
\begin{definition} \textbf{Reduced $k$-Core:} A reduced $k$-core, $C_k(G^B)$ is the $k$-core in $G^B$, where $G^B = (V,E\setminus B)$.
\end{definition}
\begin{example}
Figures \ref{fig:kcore_ex2} and \ref{fig:kcore_ex3} show an initial graph, $G$ and modified graph $G^B$ (where $B=\{(a,c)\}$) respectively. In $G$, all the nodes are in the $3$-core. Deleting $(a,c)$ brings the vertices $a$ and $c$ to the $2$-core and thus $b$ and $d$ also go to the $2$-core.
\end{example}
\begin{definition} \textbf{$K$-Core Minimization (KCM):} Given a candidate edge set $\Gamma$, find the set $B \subset \Gamma$ of $b$ edges to be removed such that $C_k(G^B)$ is minimized, or, equivalently, $f_k(B)=N_k(G)-N_k(G^B)$ is maximized.
\label{def:kcm}
\end{definition}
\begin{example}
Figure \ref{fig:kcore_ex2} shows an initial graph $G$, where all the nodes are in the $3$-core. Deleting $(a,c)$ and $(e,g)$ brings all the vertices to the $2$-core, whereas deleting $(e,c)$ and $(d,f)$ has no effect on the $3$-core structure (assuming $b\!=\!2)$.
\end{example}
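Using the k\_core routine from the sketch above, the objective $f_k(B)$ can be evaluated naively as follows (illustrative; the example reuses the 4-cycle graph from before):
\begin{verbatim}
def f_k(adj, k, B):
    """KCM objective: #vertices leaving the k-core after deleting edge set B."""
    adj2 = {w: set(nbrs) for w, nbrs in adj.items()}
    for (u, v) in B:
        adj2[u].discard(v)
        adj2[v].discard(u)
    return len(k_core(adj, k)) - len(k_core(adj2, k))

# Deleting one edge of the 4-cycle destroys its whole 2-core:
print(f_k(adj, 2, [(1, 2)]))   # 4
\end{verbatim}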
Clearly, the importance of the edges varies in affecting the $k$-core upon their removal. Next, we discuss strong inapproximability results for the KCM problem along with its parameterized complexity.
\begin{figure}[t]
\vspace{-6mm}
\centering
\subfloat[Initial graph, $G$]{\includegraphics[width=0.12\textwidth]{Experiments/Example/kcore_ex1.pdf}\label{fig:kcore_ex1}}
\subfloat[The $2$-core of $G$]{\includegraphics[width=0.12\textwidth]{Experiments/Example/kcore_ex11.pdf}\label{fig:kcore_ex11}}
\subfloat[The $3$-core of $G$]{\includegraphics[width=0.12\textwidth]{Experiments/Example/kcore_ex12.pdf}\label{fig:kcore_ex12}}
\caption{\textbf{ Examples of (a) a graph $G$; (b) its $2$-core; and (c) its $3$-core structures. \label{fig:core_definitions}}}
\end{figure}
\begin{figure}[t]
\vspace{-2mm}
\centering
\subfloat[Initial, $G$]{\includegraphics[width=0.12\textwidth]{Experiments/Example/kcore_ex2.pdf}\label{fig:kcore_ex2}}
\hspace{12mm}
\subfloat[Modified, $G^B$]{\includegraphics[width=0.12\textwidth]{Experiments/Example/kcore_ex3.pdf}\label{fig:kcore_ex3}}
\caption{ Example of the changes in the core structure via deletion of an edge: (a) All the nodes are in the $3$-core. (b) In the modified graph, the nodes $\{a,b,c,d\}$ are in the $2$-core. \label{fig:core_examples}}
\end{figure}
\input{2_1_hardness}
\subsection{Baseline: Greedy Cut~\cite{zhu2018k}}
\label{sec:algo_greedy}
For KCM, only the current $k$-core of the graph, $\mathscr{G}(V_k,E_k)=C_k(G)$ ($|V_k|=N_k$,$|E_k|=M_k$), has to be taken into account. Remaining nodes will already be in a lower-than-$k$-core and can be removed. We define a vulnerable set $VS_k(e,\mathscr{G})$ as those nodes that would be demoted to a lower-than-$k$-core if edge $e$ is deleted from the current core graph $\mathscr{G}$. Algorithm \ref{alg:GC} (GC) is a greedy approach for selecting an edge set $B$ ($|B|=b$) that maximizes the $k$-core reduction, $f_k(B)$. In each step, it chooses the edge that maximizes $|VS_k(e,\mathscr{G})|$ (step $3$-$4$) among the candidate edges $\Gamma$. The specific procedure for computing $VS_k(e,\mathscr{G})$ (step $3$), $LocalUpdate$ and their running times ($O(M_k+N_k)$) are described in the Appendix. The overall running time of GC is $O(b|\Gamma| (M_k+N_k))$.
\begin{algorithm}[t]
\caption{Greedy Cut (GC)}
\label{alg:GC}
\KwInput{$G,k,b$}
\KwOutput{$B$: Set of edges to delete}
$B\leftarrow \emptyset,max \leftarrow -\infty, \mathscr{G}\leftarrow C_k(G)$\\
\While{$|B|<b$}{
$e^*\leftarrow \argmax_{e \in \mathscr{G}(E_k)\setminus B} |computeVS(e=(u , v),\mathscr{G},k)|$\\
$B\leftarrow B \cup \{e^*\}$ \\
LocalUpdate$(e,\mathscr{G},k)$ \\
}
\textbf{return} $B$
\end{algorithm}
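A compact sketch of GC follows; it reuses the k\_core function from the earlier sketch and recomputes the vulnerable set from scratch for clarity, whereas the paper's computeVS/LocalUpdate routines achieve the same in $O(M_k+N_k)$ time.
\begin{verbatim}
def vulnerable_set(adj, k, e):
    """VS_k(e, G): nodes demoted below the k-core if edge e is removed
    (naive recomputation for clarity)."""
    u, v = e
    adj2 = {w: set(nbrs) for w, nbrs in adj.items()}
    adj2[u].discard(v)
    adj2[v].discard(u)
    return k_core(adj, k) - k_core(adj2, k)

def greedy_cut(adj, k, b, candidates):
    """Greedy Cut: repeatedly delete the candidate edge demoting most nodes."""
    B, adj = [], {w: set(nbrs) for w, nbrs in adj.items()}
    for _ in range(b):
        pool = [e for e in candidates if e not in B]
        best = max(pool, key=lambda e: len(vulnerable_set(adj, k, e)))
        B.append(best)
        u, v = best
        adj[u].discard(v)
        adj[v].discard(u)
    return B
\end{verbatim}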
\subsection{Shapley Value Based Algorithm}
\label{sec:shapley_algo}
The greedy algorithm discussed in the last section is unaware of some dependencies between the candidates in the solution set. For instance, in Figure \ref{fig:kcore_ex2}, all the edges have the same importance (the marginal value is $0$) in destroying the $2$-core structure. In this scenario, GC will choose an edge arbitrarily. However, removing an optimal set of seven edges can make the graph a tree ($1$-core). To capture these dependencies, we adopt a cooperative game theoretic concept named Shapley Value \cite{shapley1953value}. Our goal is to form a coalition of edges (players) and divide the total gain of this coalition fairly among the edges inside it.
\subsubsection{Shapley Value}
The Shapley value of an edge $e$ in the context of KCM is defined as follows. Let the value of a coalition $P$ be $\mathscr{V}(P)= f_k(P)=N_k(G)-N_k(G^P)$. Given an edge $e\in \Gamma$ and a subset $P\subseteq \Gamma$ such that $e \notin P$, the marginal contribution of $e$ to $P$ is:
\begin{equation}
\mathscr{V}(P\cup \{e\}) - \mathscr{V}(P),\quad \forall P \subseteq \Gamma
\end{equation}
Let $\Omega$ be the set of all $|\Gamma|!$ permutations of all the edges in $\Gamma$ and $P_e(\pi)$ be the set of all the edges that appear before $e$ in a permutation $\pi$. The Shapley value of $e$ is the average of its marginal contributions to the edge set that appears before $e$ over all the permutations:
\begin{equation} \label{eq:SV_all}
\Phi_e=\frac{1}{|\Gamma|!} \sum_{\pi \in \Omega} \mathscr{V}(P_e (\pi)\cup \{e\}) - \mathscr{V}(P_e (\pi))
\end{equation}
Shapley values capture the importance of an edge inside a set (or coalition) of edges. However, computing exact Shapley values requires considering $O(|\Gamma|!)$ permutations. Next we show how to efficiently approximate the Shapley value for each edge via sampling.
\begin{algorithm}[h]
\caption{Shapley Value Based Cut (SV)}
\label{alg:SV}
\KwInput{$G,k,b$}
\KwOutput{$B$: Set of edges to delete}
Initialize all $\Phi'_e$ as $0$, $\forall e \in \Gamma$ \\
Generate a set $S$ of $O(\frac{\log{|\Gamma|}}{\epsilon^2})$ random permutations of the edges in $\Gamma$\\
$B\leftarrow \emptyset, \mathscr{G}\leftarrow C_k(G)$\\
\For{$\pi \in S$}{
\For{$e =(u,v) \in \Gamma$} {
$\Phi'_e \leftarrow \Phi'_e+(\mathscr{V}(P_e (\pi)\cup \{e\}) - \mathscr{V}(P_e (\pi)))$
}
}
$\Phi'_e\leftarrow \frac{\Phi'_e}{|S|}$, $\forall e \in \Gamma$ \\
$B \leftarrow$ top-$b$ edges of $\Gamma$ ranked by $\Phi'_e$\\
\textbf{return} $B$
\end{algorithm}
\subsubsection{Approximate Shapley Value Based Algorithm} \label{sec:approx_SV}
Algorithm \ref{alg:SV} (Shapley Value Based Cut, SV) selects the best $b$ edges according to their approximate Shapley values based on a sampled set of permutations, $S$.
For each permutation in $S$, we compute the marginal gains of all the edges. These marginal gains are normalized by the sample size, $s$. In terms of time complexity, steps 4-6 are the dominating steps and take $O(s|\Gamma|(N_k+M_k))$ time, where $N_k$ and $M_k$ are the number of nodes and edges in $C_k(G)$, respectively. Note that similar sampling based methods have been introduced for different applications \cite{castro2009polynomial, maleki2013bounding} (details are in Section \ref{sec:prev_work}).
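The sketch below mirrors Algorithm \ref{alg:SV}, again using the naive k\_core evaluation from the earlier sketches (the paper's implementation is considerably faster); the Shapley estimates are left unnormalized since dividing all of them by $s$ does not change the top-$b$ ranking.
\begin{verbatim}
import math, random

def shapley_cut(adj, k, b, candidates, eps=0.5, ell=1):
    """Shapley Value Based Cut: rank edges by sampled marginal contributions."""
    n_k0 = len(k_core(adj, k))
    s = math.ceil((ell + 1) * math.log(max(len(candidates), 2)) / (2 * eps**2))
    phi = {e: 0.0 for e in candidates}
    for _ in range(s):
        pi = list(candidates)
        random.shuffle(pi)                       # one sampled permutation
        adj2 = {w: set(nbrs) for w, nbrs in adj.items()}
        prev = 0.0                               # V(P_e(pi)) so far
        for (u, v) in pi:
            adj2[u].discard(v)
            adj2[v].discard(u)
            gain = n_k0 - len(k_core(adj2, k))   # V(P_e(pi) U {e})
            phi[(u, v)] += gain - prev           # marginal contribution of e
            prev = gain
    return sorted(candidates, key=lambda e: phi[e], reverse=True)[:b]
\end{verbatim}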
\subsubsection{Analysis} \label{sec:approx_SV_analysis}
In the previous section, we presented a fast sampling algorithm (SV) for $k$-core minimization using Shapley values. Here, we study the quality of the approximation provided by SV as a function of the number of samples. We show that our algorithm is nearly optimal with respect to each Shapley value with high probability. More specifically, given $\epsilon >0$ and $\delta < 1$, SV takes $p(\frac{1}{\epsilon},\frac{1}{\delta})$ samples, where $p$ is a polynomial in $\frac{1}{\epsilon},\frac{1}{\delta}$, to approximate the Shapley values within $\epsilon$ error with probability $1-\delta$.
We sample uniformly with replacement, a set of permutations $S$ ($|S|=s$) from the set of all permutations, $\Omega$. Each permutation is chosen with probability $\frac{1}{|\Omega|}$. Let $\Phi'_e$ be the approximate Shapley value of $e$ based on $S$. $X_i$ is a random variable that denotes the marginal gain in the $i$-th sampled permutation. So, the estimated Shapley value is $\Phi'_e = \frac{1}{s} \sum_{i=1}^s X_i$. Note that $\mathbb{E}[\Phi'_e]= \Phi_e$.
\begin{theorem} \label{thm:approx_SV}
Given $\epsilon$ $(0<\epsilon<1)$, a positive integer $\ell$, and a sample of independent permutations $S, |S|=s$, where
$s \geq \frac{(\ell+1)\log{|\Gamma|}}{2\epsilon^2}$; then $\forall e \in \Gamma$:
\begin{equation*}
Pr (|\Phi'_e- \Phi_e | < \epsilon \cdot N_k) \geq 1- 2|\Gamma|^{-\ell}
\end{equation*}
where $N_k$ denotes the number of nodes in $C_k (G)$.
\end{theorem}
\begin{proof}
We start by analyzing the Shapley value of one edge. Because the samples provide an unbiased estimate and are i.i.d., we can apply \emph{Hoeffding's inequality}~\cite{hoeff1963} to bound the error for edge $e$:
\begin{equation}
Pr[|\Phi'_e-\Phi_e| \geq \epsilon \cdot Q_e ]\leq \delta
\end{equation}
where $\delta=2\exp\left(-\frac{2s^2\epsilon^2 Q^2_e}{\mathcal{R}}\right)$, $\mathcal{R} =\sum\limits_{i=1}^{s}(b_i-a_i)^2$, and each $X_i$ is strictly bounded by the intervals $[a_i, b_i]$. Let $Q_e= \max\{\mathscr{V}(P_e (\pi)\cup \{e\}) - \mathscr{V}(P_e (\pi))\,|\, \pi \in \Omega\}$ be the maximum gain for $e$ in any permutation. Then, $\mathcal{R}< sQ^2_e$, as for any $X_i$ the minimum and maximum values are $0$ and $Q_e$ respectively. As a consequence:
\begin{equation*}
\delta =2\exp\left(-\frac{2s^2\epsilon^2 Q^2_e}{\mathcal{R}}\right) < 2\exp\left(-\frac{2s^2\epsilon^2 Q^2_e}{sQ^2_e}\right) = 2\exp\left(-2s\epsilon^2 \right)
\end{equation*}
Thus, the following holds for each edge $e$:
\begin{equation*}
Pr[|\Phi'_e-\Phi_e| \geq \epsilon \cdot Q_e ] < 2\exp\left(-2s\epsilon^2 \right)
\end{equation*}
Using the above equation we compute a joint sample bound for all edges $e\in \Gamma$. Let $\Gamma=\{e_1,e_2,...,e_{|\Gamma|}\}$ and $E_i$ be the event that $|\Phi'_{e_i}-\Phi_{e_i}| \geq \epsilon \cdot Q_{e_i}$. So,
$Pr[E_i]= Pr[|\Phi'_{e_i}-\Phi_{e_i}| \geq \epsilon \cdot Q_{e_i} ] < 2\exp\left(-2s\epsilon^2 \right) $. Similarly, one can prove that
$Pr[|\Phi'_{e_i}-\Phi_{e_i}| \geq \epsilon \cdot N_k ] \leq \delta'$,
where $\delta' =2\exp\left(-\frac{2s^2\epsilon^2 N^2_k}{\mathcal{R}}\right) < 2\exp\left(-2s\epsilon^2 \right)$, as $\mathcal{R}< sN^2_k$.
Applying union bound ($Pr(\cup_{i}E_i)\leq \sum_{i}Pr(E_i)$), for all edges in $ \Gamma$, i.e., $\forall i \in \{1,2,...|\Gamma|\}$, we get that:
\begin{equation*}
Pr[|\Phi'_{e_i}-\Phi_{e_i}| \geq \epsilon \cdot N_k ] < 2|\Gamma|\exp\left(-2s\epsilon^2 \right)
\end{equation*}
By choosing $s \geq \frac{(\ell+1)\log{|\Gamma|}}{2\epsilon^2} $, $\forall i \in \{1,2,...|\Gamma|\}$,
\begin{equation*}
Pr[|\Phi'_{e_i}-\Phi_{e_i}| \geq \epsilon \cdot N_k ] < \frac{2}{|\Gamma|^\ell}, \quad \text{or,}
\end{equation*}
\begin{equation*}
Pr[|\Phi'_{e_i}-\Phi_{e_i}| < \epsilon \cdot N_k) \geq 1- 2|\Gamma|^{-\ell}
\end{equation*}
This ends the proof.
\end{proof}
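As a quick numerical illustration of the sample bound (the instance size, $\epsilon$ and $\ell$ below are arbitrary choices):
\begin{verbatim}
import math

# s >= (ell + 1) log|Gamma| / (2 eps^2); e.g. |Gamma| = 10^4, eps = 0.1, ell = 1:
s = math.ceil((1 + 1) * math.log(10**4) / (2 * 0.1**2))
print(s)   # 922 sampled permutations suffice for the stated guarantee
\end{verbatim}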
Next, we apply Theorem \ref{thm:approx_SV} to analyze the quality of a set $B$ produced by Algorithm \ref{alg:SV} (SV), compared with the result of an exact algorithm (without sampling). Let the exact Shapley values of top $b$ edges be $\Phi^o_{B}=\{\Phi_{O1},\Phi_{O2},\Phi_{O3},...,\Phi_{Ob}\}$ where $\Phi_{O1}\geq \Phi_{O2}\geq...\geq\Phi_{Ob}$. The set produced by Algorithm \ref{alg:SV} (SV) has Shapley values, $\Phi^a_{B}=\{\Phi_{A1},\Phi_{A2},\Phi_{A3},...,\Phi_{Ab}\}$ where $\Phi_{A1}\geq \Phi_{A2}\geq...\geq\Phi_{Ab}$. We can prove the following result regarding the SV algorithm.
\begin{cor}\label{cor:anyset}
For any $i, \Phi_{Oi} \in \Phi^o_{B}$ and $\Phi_{Ai} \in \Phi^a_{B}$, $\epsilon$ $(0<\epsilon<1)$, positive integer $\ell$, and a sample of independent permutations $S, |S|=s$, where
$s \geq \frac{(\ell+1)\log{|\Gamma|}}{2\epsilon^2}$:
\begin{equation*}
Pr (|\Phi_{Oi}- \Phi_{Ai} | < 2\epsilon \cdot N_k) \geq 1- 2|\Gamma|^{-\ell}
\end{equation*}
where $N_k$ denotes the number of nodes in $C_k (G)$.
\end{cor}
\begin{proof} For all edges $e \in \Gamma$, Theorem \ref{thm:approx_SV} shows that $Pr (|\Phi'_e- \Phi_e | < \epsilon \cdot N_k) \geq 1- 2|\Gamma|^{-\ell}$. So, with probability $1- 2|\Gamma|^{-\ell}$, $|\Phi'_{Oi}- \Phi_{Oi} | < \epsilon \cdot N_k$ and $ |\Phi'_{Ai}- \Phi_{Ai} | < \epsilon \cdot N_k$. As $\Phi'_{Ai} > \Phi'_{Oi} $, $ |\Phi_{Oi}- \Phi_{Ai} | < 2\epsilon \cdot N_k$ with the same probability.
\end{proof}
At this point, it is relevant to revisit the hardness of approximation result from Theorem \ref{thm:hard_approx} in the light of Corollary \ref{cor:anyset}. First, SV does not directly minimize the KCM objective function (see Definition \ref{def:kcm}). Instead, it provides a score for each candidate edge $e$ based on how different permutations of edges including $e$ minimize the KCM objective under the assumption that such scores are divided fairly among the involved edges. Notice that such an assumption is not part of the KCM problem, and thus Shapley values play the role of a heuristic. Corollary \ref{cor:anyset}, which is a polynomial-time randomized approximation scheme (PRAS) type of guarantee instead of a constant-factor approximation, refers to the exact Shapley value of the top $b$ edges, and not the KCM objective function. We evaluate how SV performs regarding the KCM objective in our experiments.
\subsubsection{Generalizations} Sampling-based approximate Shapley values can also be applied to other relevant combinatorial problems on graphs for which the objective function is not submodular. Examples of these problems include $k$-core anchoring \cite{bhawalkar2015preventing}, influence minimization \cite{kimura2008minimizing}, and network design \cite{dilkina2011}.
\section{Algorithms}
\label{sec:algo}
According to Theorems \ref{thm:hard_approx} and \ref{thm: param_approx}, an optimal solution---or a constant-factor approximation---for $k$-core minimization requires enumerating all possible size-$b$ subsets from the candidate edge set, assuming $P\!\neq\!NP$. In this section, we propose efficient heuristics for KCM.
\input{3_1_algo_greedy}
\input{3_2_algo_shapley}
\subsection{Quality Evaluation}
\label{sec::effect_sv}
KCM algorithms are compared in terms of quality (DN(\%)) for varying budget ($b$), core value $k$, and the error of the sampling scheme applied by the SV algorithm ($\epsilon$).
\textbf{Varying budget (b):} Figure \ref{fig:SV_budget} presents the k-core minimization results for $k\!=\!5$---similar results were found for $k\!=\!10$---using four different datasets. SV outperforms the best baseline by up to six times. This is due to the fact that our algorithm can capture strong dependencies among sets of edges that are effective at breaking the k-core structure. On the other hand, GC, which takes into account only marginal gains for individual edges, achieves worse results than simple baselines such as JD and LD. We also compare SV and the optimal algorithm in small graphs and show that SV produces near-optimal results (Section \ref{sec:sv_opt}).
\textbf{Varying core value (k):} We evaluate the impact of $k$ over quality for the algorithms using two datasets (FB and WS) in Figures \ref{fig:sv_fb_k} and \ref{fig:sv_ws_k}. The budget ($b$) is set to $400$. As in the previous experiments, SV outperforms the competing approaches. However, notice that the gap between LD (the best baseline) and SV decreases as $k$ increases. This is due to the fact that the number of samples decreases for higher $k$, as the number of candidate edges also decreases, but this can be mitigated by a smaller $\epsilon$. Also, a larger $k$ will increase the level of dependency between candidate edges, which in turn makes it harder to isolate the impact of a single edge---e.g., independent edges are the easiest to evaluate.
On the other hand, a large value of $k$ leads to a less stable k-core structure that can often be broken by the removal of edges with low-degree endpoints.
LD is a good alternative for such extreme scenarios. Similar results were found for other datasets.
\textbf{Varying the sampling error ($\epsilon$):} The parameter $\epsilon$ controls the sampling error of the SV algorithm according to Theorem \ref{thm:approx_SV}. We show the effect of $\epsilon$ on the quality results for FB and WS in Figures \ref{fig:sv_fb_epsilon} and \ref{fig:sv_ws_epsilon}. The values of $b$ and $k$ are set to $400$ and $12$, respectively. The performance of the competing algorithms does not depend on this parameter and thus remains constant. As expected, DN(\%) is inversely proportional to the value of $\epsilon$ for SV. The trade-off between $\epsilon$ and the running time of our algorithm enables both accurate and efficient selection of edges for k-core minimization.
\subsection{Running Time}
\label{sec:running_time}
Here, we evaluate the running time of the GC and SV algorithms. In particular, we are interested in measuring the performance gains due to the pruning strategies described in the Appendix. LD and JD do not achieve good quality results in general, as discussed in the previous section, thus we omit them from this evaluation.
Running times for SV varying the sampling error ($\epsilon$) and the core parameter ($k$) using the FB dataset are given in Figures \ref{fig:fb_ep_time} and \ref{fig:fb_k_time}, respectively. Even for small error, the algorithm is able to process graphs with tens of thousands of vertices and millions of edges in, roughly, one minute. Running times decay as $k$ increases due to two factors: (1) the size of the $k$-core structure decreases; and (2) pruning gets boosted by a less stable core structure.
\begin{figure}[t]
\vspace{-4mm}
\centering
\subfloat[Varying $\epsilon$]{\includegraphics[width=0.22\textwidth]{Experiments/plots/SVexp/facebook_time_epsilon.pdf}\label{fig:fb_ep_time}}
\hspace{1mm}
\subfloat[Varying $k$]{\includegraphics[width=0.22\textwidth]{Experiments/plots/SVexp/facebook_time_k.pdf}\label{fig:fb_k_time}}
\vspace{-1mm}
\caption{Running times by SV using FB while varying (a) the sampling error $\epsilon$ and (b) the core parameter $k$; SV is efficient even for small values of sampling error and its running time decreases with $k$. \label{fig:gc_sv_time}}
\vspace{-2mm}
\end{figure}
\begin{figure}[t]
\vspace{-4mm}
\centering
\subfloat[ $b=5$]{\includegraphics[width=0.245\textwidth, trim={1.5cm 1cm .5cm .45cm},clip]{Experiments/visual/karate_b_5_k_3.pdf}\label{fig:karate_b5_k3}}
\subfloat[$b=10$]{\includegraphics[width=0.245\textwidth, trim={1.5cm 1cm .5cm .45cm},clip]{Experiments/visual/karate_b_10_k_3.pdf}\label{fig:karate_b10_k3}}
\caption{K-core ($k=3$) minimization on the Zachary's Karate network: (a) $b=5$ and (b) $b=10$. Unfilled circle nodes are not in the $3$-core of the original network. After removal of $b$ dashed (red) edges, filled (blue) circle nodes remain in the $3$-core and unfilled (red) square nodes are removed from the $3$-core. \label{fig:karate_k3}}
\vspace{-2mm}
\end{figure}
\begin{figure*}[ht]
\vspace{-3mm}
\centering
\subfloat[DB]{\includegraphics[width=0.23\textwidth]{Experiments/profiling/profile_dblp.png}\label{fig:SV_k_b_DB}}
\hspace{1mm}
\subfloat[WS]{\includegraphics[width=0.23\textwidth]{Experiments/profiling/profile_web.png}\label{fig:SV_k_b_WS}}
\hspace{1mm}
\subfloat[FB]{\includegraphics[width=0.23\textwidth]{Experiments/profiling/profile_facebook.png}\label{fig:SV_k_b_FB}}
\hspace{1mm}
\subfloat[ER]{\includegraphics[width=0.23\textwidth]{Experiments/profiling/profile_er.png}\label{fig:SV_k_b_ER}}
\hspace{1mm}
\vspace{-1mm}
\caption{\textbf{Core resilience for four different networks: (a) DB (co-authorship), (b) WS (Webgraph), (c) FB (social), (d) ER (random). ER and DB are the most and least stable networks, respectively. Tipping points are found for ER and DB. \label{fig:SV_b_k_two}}}
\end{figure*}
\subsection{SV and the optimal algorithm}
\label{sec:sv_opt}
In these experiments, we evaluate the approximation achieved by SV (Algorithm \ref{alg:SV}) compared to the optimal results using two small networks (Human and Yeast). A set of $50$ randomly chosen edges inside the $k$-core serves as the candidate set $\Gamma$, from which the optimal sets of $b\!=\!5$ and $b\!=\!10$ edges are selected. An optimal solution is computed based on all possible sets of size $b$ in $\Gamma$. Table \ref{tab:sv_opt} shows the $DN(\%)$ produced by the optimal solution (OPT) and SV. Notice that the SV algorithm produces near-optimal results.
\begin{table}[t]
\centering
\begin{tabular}{|c| c | c | c | c |}
\hline
&\multicolumn{2}{c|}{\textbf{Human}} & \multicolumn{2}{c|}{\textbf{Yeast}} \\
\hline
& $b=5$ & $b=10$ & $b=5$ & $b=10$\\
\hline
OPT & 2.88 & 3.24 & 11.16 & 12.05 \\
\hline
SV ($\epsilon =.1$) & 2.88 & 3.06 & 10.27 & 11.16 \\
\hline
SV ($\epsilon =.2$) & 2.8 & 3.06 & 8.48 & 10.71 \\
\hline
\end{tabular}
\caption{$DN(\%)$ by SV (approximate) and optimal algorithm using a small candidate set size ($|\Gamma|=50$), and $k=5$. \label{tab:sv_opt}}
\vspace{-9mm}
\end{table}
\subsection{Application: $k$-core Resilience}
\label{sec:others}
We show how KCM can be applied to profile the resilience or stability of real networks.
A profile provides a visualization of the resilience of the $k$-core structure of a network for different combinations of $k$ and budget. We apply $DN(\%)$ (Equation \ref{eq:DN}) as a measure of the percentage of the $k$-core removed by a certain amount of budget---relative to the immediately smaller budget value.
Figure \ref{fig:SV_b_k_two} shows the results for co-authorship (DB), Web (WS), social network (FB) and a random (ER) graph. We also discuss profiles for Human and Yeast in the Appendix. Each cell corresponds to a given $k$-$b$ combination and the color of cell $(X,Y)$ shows the difference in $DN(\%)$ between $b\!=\!Y$ and $b\!=\!Y\!-\!100$ for $k\!=\!X$.
As colors are relative, we also show the range of values associated with the color scheme. We summarize our main findings as follows:
\textbf{Stability:} ER (Figure \ref{fig:SV_k_b_ER}) is the most stable graph, as can be noticed by the range of values in the profile. The majority of nodes in ER are in the $19$-core. DB (Figure \ref{fig:SV_k_b_DB}) is the least stable, but only when $k\!>\!5$, which is due to its large number of small cliques. The high-core structure of DB is quite unstable, with less than $1$\% of the network in the $20$-core structure after the removal of $500$ edges.
\textbf{Tipping points:} We also look at large effects of edge removals within small variations in budget---for a fixed value of $k$. Such a behavior is not noticed for FB and WS (Figures \ref{fig:SV_k_b_FB} and \ref{fig:SV_k_b_WS}, respectively), for which profiles are quite smooth. This is mostly due to the presence of fringe nodes at different levels of $k$-core structure. On the other hand, ER produced the most prominent tipping points ($k\!=\!15$ and $k\!=\!20$). This pattern is also found for DB.
\subsection{$K$-core Minimization on the Karate Network}
In Figure \ref{fig:karate_k3}, we demonstrate the application of our algorithm for KCM using the popular Zachary's Karate network with two different budget settings, $b=5$ and $b=10$, and $k$ fixed to $3$. Unfilled circles are nodes initially out of the $3$-core. The dashed (red) edges are removed by our algorithm---often connecting fringe nodes. Filled (blue) circles and unfilled (red) squares represent nodes that remain and are removed from the $3$-core, respectively, after edge removals.
\section{Experiments}
\label{sec:exp}
In this section, we evaluate the proposed Shapley Value Based Cut (SV) algorithm for k-core minimization against baseline solutions. Sections \ref{sec::effect_sv} and \ref{sec:running_time} are focused on the quality results (k-core minimization) and the running time of the algorithms, respectively. Moreover, in Section \ref{sec:others}, we show how k-core minimization can be applied in the analysis of the structural resilience of networks.
\begin{figure*}[ht]
\vspace{-3mm}
\centering
\subfloat[DB]{\includegraphics[width=0.24\textwidth]{Experiments/plots/SVexp/vary_b/dblp_5_100to400.pdf}\label{fig:sv_db_budget}}
\hspace{1mm}
\subfloat[WS]{\includegraphics[width=0.24\textwidth]{Experiments/plots/SVexp/vary_b/stanford_5_100to400.pdf}\label{fig:sv_ws_budget}}
\hspace{1mm}
\subfloat[EE]{\includegraphics[width=0.24\textwidth]{Experiments/plots/SVexp/vary_b/email_5_100to400.pdf}\label{fig:sv_ee_budget}}
\hspace{1mm}
\subfloat[FB]{\includegraphics[width=0.24\textwidth]{Experiments/plots/SVexp/vary_b/facebook_k5_b100to400.pdf}\label{fig:sv_fb_budget}}
\hspace{1mm}
\subfloat[FB]{\includegraphics[width=0.24\textwidth]{Experiments/plots/SVexp/vary_k/facebook_k4to10_b400.pdf}\label{fig:sv_fb_k}}
\hspace{1mm}
\subfloat[WS]{\includegraphics[width=0.24\textwidth]{Experiments/plots/SVexp/vary_k/stanford_k4to10_b400.pdf}\label{fig:sv_ws_k}}
\subfloat[FB]{\includegraphics[width=0.24\textwidth]{Experiments/plots/SVexp/vary_ep/facebook_k12_b400_epsilon.pdf}\label{fig:sv_fb_epsilon}}
\hspace{1mm}
\subfloat[WS]{\includegraphics[width=0.24\textwidth]{Experiments/plots/SVexp/vary_ep/stanford_k12_b400_epsilon.pdf}\label{fig:sv_ws_epsilon}}
\vspace{-1mm}
\caption{\textbf{K-core minimization (DN(\%)) for different algorithms varying (a-d) the number of edges in the budget; (e-f) the core parameter $k$; (g-h) and the sampling error $\epsilon$. Some combinations of experiments and datasets are omitted due to space limitations, but those results are consistent with the ones presented here. The Shapley Value based Cut (SV) algorithm outperforms the best baseline (LD) by up to 6 times. On the other hand, the Greedy approach (GC) achieves worse results than the baselines, with the exception of RD, in most of the settings. SV error increases smoothly with $\epsilon$ and LD becomes a good alternative for large values of $k$. \label{fig:SV_budget}}}
\vspace{-2mm}
\end{figure*}
\subsection{Experimental Setup}
All the experiments were conducted on a $2.59$ GHz Intel Core i7-4720HQ machine with $16$ GB RAM running Windows 10. Algorithms were implemented in Java. The source-code of our implementations will be made open-source once this paper is accepted.
\textbf{Datasets:} The real datasets used in our experiments are available online and are mostly from SNAP\footnote{\url{https://snap.stanford.edu}}. The Human and Yeast datasets are available in \cite{moser2009mining}. In these datasets the nodes and the edges correspond to genes and interactions (protein-protein and genetic interactions), respectively. The Facebook dataset is from \cite{viswanath2009evolution}. Table \ref{table:data_description} shows dataset statistics, including the largest k-core (a.k.a. degeneracy). These are undirected and unweighted graphs from various applications: EE is from email communication; FB is an online social network; WS is a Web graph; DB is a collaboration network; and CA is a product co-purchasing network. We also apply a random graph (ER) generated using the Erd\H{o}s--R\'enyi model.
\textbf{Algorithms:} Our algorithm, \textit{Shapley Value Based Cut (SV)} is described in Section \ref{sec:shapley_algo}.
Besides the Greedy Cut (GC) algorithm~\cite{zhu2018k} (Section \ref{sec:algo_greedy}), we also consider three more baselines in our experiments. \textit{Low Jaccard Coefficient (JD)} removes the $b$ edges with the lowest Jaccard coefficient. Similarly, \textit{Low-Degree (LD)} deletes the $b$ edges whose adjacent vertices have the lowest degree. We also apply \textit{Random (RD)}, which simply deletes $b$ edges from the candidate set $\Gamma$ uniformly at random. Notice that while LD and JD are quite simple approaches for KCM, they often outperform GC.
\textbf{Quality evaluation metric:} We apply the percentage $DN(\%)$ of vertices from the initial graph $G$ that leave the $k$-core after the deletion of a set of edges $B$ (produced by a KCM algorithm):
\begin{equation} \label{eq:DN}
DN(\%)=\frac{N_k(G)-N_k(G^B)}{N_k(G)}\times 100
\end{equation}
\textbf{Default parameters:} We set the candidate edge set $\Gamma$ to those edges ($M_k(G)$) between vertices in the k-core $C_k(G)$. Unless stated otherwise, the value of the approximation parameter for SV ($\epsilon$) is $0.05$ and the number of samples applied is $\frac{\log{|\Gamma|}}{\epsilon^2}$ (i.e., Theorem~\ref{thm:approx_SV} with $\ell=1$).
\input{4_1_exp_SV}
\input{4_2_exp_other}
\section{Previous Work}
\label{sec:prev_work}
\noindent
\textbf{ $K$-core computation and applications: }
A $k$-core decomposition algorithm was first introduced by Seidman \cite{seidman1983network}.
A more efficient solution---with time complexity $O(|E|)$---was presented by Batagelj et al. \cite{batagelj2011fast} and its distributed version was proposed in \cite{montresor2013distributed}. Sariyuce et al. \cite{sariyuce2013streaming} and Bonchi et al. \cite{bonnet2016parameterized} proposed algorithms for $k$-core decomposition in streaming data and in uncertain graphs, respectively. $K$-cores are often applied in the analysis and visualization of large scale complex networks \cite{alvarez2006large}. Other applications include clustering and community detection \cite{giatsidis2014corecluster}, characterizing the Internet topology \cite{carmi2007model}, and analyzing the structure of software systems \cite{zhang2010using}. In social networks, $k$-cores are usually associated with models for user engagement. Bhawalkar et al. \cite{bhawalkar2015preventing} and Chitnis et al. \cite{chitnis2013preventing} studied the problem of increasing the size of the $k$-core by anchoring a few vertices initially outside of the $k$-core. Malliaros et al. \cite{malliaros2013stay} investigated user engagement dynamics via $k$-core decomposition.
\textbf{Network resilience/robustness:} Understanding the behavior of a complex system (e.g. the Internet, the power grid) under different types of attacks and failures has been a popular topic of study in network science \cite{callaway2000network,albert2004structural,cohen2000resilience}. This line of work is mostly focused on non-trivial properties of network models, such as critical thresholds and phase transitions, assuming random or simple targeted modifications. Najjar et al. \cite{najjar1990network} and Smith et al. \cite{smith2011network} apply graph theory to evaluate the resilience of computer systems, especially communication networks. An overview of different graph metrics for assessing robustness/resilience is given by \cite{ellens2013graph}. Malliaros et al. \cite{malliaros2012fast} proposed an efficient algorithm for computing network robustness based on spectral graph theory. The appropriate model for assessing network resilience and robustness depends on the application scenario and comparing different such models is not the focus of our work.
\textbf{Stability/resilience of $k$-core: } Adiga et al. \cite{adiga2013robust} studied the stability of high cores in noisy networks. Laishram et al. \cite{Laishram2018} recently introduced a notion of resilience in terms of the stability of $k$-cores against deletion of random nodes/edges. If the rank correlation of core numbers before and after the removal is high, the network is core-resilient. They also provided an algorithm to increase resilience via edge addition. Notice that this is different from our problem, as we search for edges that can destroy the stability of the $k$-core. Another related paper is the work by Zhang et al. \cite{zhang2017finding}. Their goal is to find $b$ vertices such that their deletion reduces the $k$-core maximally. The $k$-core minimization problem via edge deletion has been recently proposed by Zhu et al. \cite{zhu2018k}. However, here we provide stronger inapproximability results and a more effective algorithm for the problem, as shown in our experiments.
\textbf{Shapley Value (SV) and combinatorial problems:} A Shapley value based algorithm was previously introduced for influence maximization (IM) \cite{narayanam2011shapley}. However, IM can be approximated within a constant-factor by a simple greedy algorithm due to the submodular property \cite{kempe2003maximizing}. In this paper, we use Shapley value to account for the joint effect of multiple edges in the solution of the KCM problem, for which we have shown stronger inapproximability results.
\textbf{Shapley Value (SV) estimation via sampling:} Sampling techniques have been applied for the efficient computation of SV's \cite{castro2009polynomial, maleki2013bounding}. Castro et al. \cite{castro2009polynomial} introduced SV sampling
in the context of symmetric and non-symmetric voting games. Maleki et al. \cite{maleki2013bounding} provided analyses for stratified sampling, especially when the marginal contributions of players are similar. Our sampling results are specific to the KCM problem and analyze the effect of sampling on the top $b$ edges with the highest Shapley values (Corollary \ref{cor:anyset}).
\textbf{Other network modification problems:}
A set of network modification problems were introduced by Paik et al.~\cite{paik1995}. Recent work~\cite{lin2015, dilkina2011} addressed the shortest path distance optimization problem via improving edge or node weights on undirected graphs. A node version of the problem has also been studied \cite{dilkina2011,medya2018noticeable}.
Another related problem is to optimize node centrality by adding edges \cite{crescenzi2015,ishakian2012framework,medya2018group}. Boosting or containing diffusion processes in networks were studied under different well-known diffusion models such as Linear Threshold \cite{Khalil2014} and Independent Cascade \cite{kimura2008minimizing}.
\section{Conclusion}
We have studied the $k$-core minimization (KCM) problem, which consists of finding a set of edges, removal of which minimizes the size of the $k$-core structure. KCM was shown to be NP-hard, even to approximate within any constant when $k\!\geq\!3$. The problem is also not fixed-parameter tractable, meaning it cannot be solved efficiently even if the number of edges deleted is small. Given such inapproximability results, we have proposed an efficient randomized heuristic based on Shapley value to account for the interdependence in the impact of candidate edges. For the sake of comparison, we also evaluate a simpler greedy baseline, which cannot assess such strong dependencies in the effects of edge deletions.
We have evaluated the algorithms using several real graphs and shown that our Shapley value based approach outperforms competing solutions in terms of quality. The proposed algorithm is also efficient, enabling its application to graphs with hundreds of thousands of vertices and millions of edges in time in the order of minutes using a desktop PC. We have also illustrated how KCM can be used for profiling the resilience of networks to edge deletions.
\section*{Acknowledgment}
The authors would like to thank Neeraj Kumar for helpful discussions.
\section{Appendix}
\subsection{Proof for Theorem \ref{thm:np_hard_k_1_2}}
\begin{proof}
First, we sketch the proof for $k=1$.
Consider an instance of the NP-hard 2-MINSAT \cite{kohli1994minimum} problem which is defined by a set $U=\{u_1,u_2,...,u_{m'}\}$ of $m'$ variables and a collection $C'=\{c_1,c_2,...,c_{n'}\}$ of $n'$ clauses. Each clause $c\in C'$ has two literals ($|c|=2$). So, each $c_i\in C'$ is of the form $z_{i1}\vee z_{i2}$ where $z_{ij}$ is a literal and is either a variable or its negation. The problem is to decide whether there exists a truth assignment in $U$ that satisfies no more than $n^* < n'$ clauses in $C'$. To define a corresponding KCM instance, we construct the graph $G'$ as follows. We create a set of $n'$ vertices $X_c =\{v_i|\ c_i\in C'\}$. For each variable $u_i\in U$, we create two vertices: one for the variable ($w_{i1}$) and another for its negation ($w_{i2}$). Thus, a total of $2m'$ vertices, $Y_u=\{w_{11},w_{12},w_{21},w_{22}$ $, \ldots, w_{m'1},w_{m'2}\}$, are produced. Moreover, whenever $c_t = u_i\vee \bar{u}_j$, we add two edges, $(v_{t},w_{i1})$ and $(v_{t},w_{j2})$, to $G'$.
For $k=1$, KCM aims to maximize the number of isolated vertices ($0$-core, $k\!=\!0$) via removing $b$ edges. Each vertex $v_i$ in $G'$ corresponds to an edge in the KCM instance ($K_I$); it is connected to exactly two vertices in $Y_u$, the end points of that edge in $K_I$. Satisfying a clause is equivalent to removing the corresponding vertex (deleting the edge in $K_I$) from $G'$. A vertex in $Y_u$ will be isolated when all associated clauses (or vertices) in $X_c$ are satisfied (removed). If there is a truth assignment which satisfies no more than $b=n^*$ clauses in 2-MINSAT, that implies $m'$ vertices can be isolated in $G'$ by removing $\leq b$ vertices (or deleting $\leq b$ edges in KCM). If there is none, then $m'$ vertices cannot be isolated by removing $\leq b$ edges in KCM.
To prove for $k=2$, we can transform the $k=1$ version of KCM to the $k=2$ one (transformation is similar to the one in \cite{zhang2017finding}).
\end{proof}
\subsection{Proof for Theorem \ref{thm: param_approx} }
\begin{proof}
We sketch the proof for $k=3$. A similar construction can be applied for $k>3$. Consider an instance of the $W[2]$-hard Set Cover \cite{bonnet2016parameterized} problem, defined by a collection of subsets $S=\{S_{1},S_{2},...,S_{m}\}$ from a universal set of items $U=\{ u_{1},u_{2},...,u_{n} \}$. The problem is to decide whether there exist $b$ subsets whose union is $U$. We define a corresponding KCM instance (graph $G$) as follows.
For each $S_i \in S$ we create two cycles: one of $n$ vertices $X_{i,1},X_{i,2},\cdots,X_{i,n}$ in $V$ with edges $(X_{i,1},X_{i,2}), (X_{i,2},X_{i,3}),\cdots,(X_{i,n},X_{i,1})$. Another cycle of $n$ vertices $W_{i,1}$ to $W_{i,n}$, with the $n$ edges $(W_{i,1},W_{i,2}), (W_{i,2},W_{i,3}),\cdots,(W_{i,n},W_{i,1})$, is also added, along with $n$ more edges $(W_{i,j},X_{i,j})$, $j=1,2,\cdots,n$, for every $i$. Moreover, for each $u_j \in U$, we create a cycle of $m$ vertices $Y_{j,1}, Y_{j,2},\cdots, Y_{j,m}$ with the following edges: $(Y_{j,1},Y_{j,2}), \cdots,(Y_{j,m-1},Y_{j,m}), (Y_{j,m},Y_{j,1})$.
We add $5$ vertices $Z_{j,1}$ to $Z_{j,5}$ with cliques of four vertices $Z_{j,2}$ to $Z_{j,5}$ and two more edges: $(Z_{j,1},Z_{j,2}),(Z_{j,1},Z_{j,5})$.
Furthermore, edges $(X_{i,j},Y_{j,i})$ will be added to $E$ if $u_j\in S_i$.
Moreover, edges $(Y_{j,i},Z_{j,1})$ will be added to $E$ if $u_j\notin S_i$. Clearly the reduction is in FPT. The candidate set, $\Gamma = \{ (W_{i,1},W_{i,2})| \forall{i=1,2,...,m}\}$. Fig. \ref{fig:hardness_ex2} illustrates our construction for sets $S_1=\{u_1\},S_2=\{u_1,u_2,u_4\},$ and $S_3=\{u_3\}$.
Initially all nodes are in the $3$-core. We claim that a set $S'\subset S$, with $|S'|\leq b$, is a cover iff $f_3(B)\!=\! 2bn\!+\!n(m\!+\!1)$ where $B\!=\!\{(W_{i,1},W_{i,2})| S_i \in S' \}$.
For any $i$, if $(W_{i,1},W_{i,2})$ is removed, the $2n$ nodes $\{W_{i,1},\cdots,W_{i,n}\}$ and $\{X_{i,1},\cdots,X_{i,n}\}$ will go to the $2$-core (we exclude the trivial case where a single subset contains all the elements; thus there is some $X_{i,t}$ with degree exactly 3). If $u_j\in S_i$, then the $m+1$ nodes $\{Y_{j,1},\ldots,Y_{j,m}\}$ and $Z_{j,1}$ go to the $2$-core after $(W_{i,1},W_{i,2})$ is removed. If $S'$ is a set cover, all the $u_j$s will be in some $S_i\in S'$ and $n(m+1)$ nodes (all the $Y_{j,i}$s and $Z_{j,1}$s) will go into the $2$-core; so $f_3(B)\!=\!2bn\!+\!n(m+1)$. On the other hand, assume that $f_3(B)\!=\!2bn\!+\!n(m\!+\!1)$ after removing edges in $B= \{(W_{i,1},W_{i,2})| S_i \in S' \}$. The only way to have the $m+1$ nodes corresponding to $u_j$ removed is if $u_j \in S_i$ for some $S_i\in S'$. Thus, $n(m+1)$ nodes (all the $Y_{j,i}$s and $Z_{j,1}$s) along with $2bn$ nodes ($X_{i,j}$s and $W_{i,j}$s, where $S_i\in S'$) will be removed, making $S'$ a set cover.
\end{proof}
\begin{figure}[t]
\centering
{\includegraphics[width=0.4\textwidth]{Experiments/Example/hard_ex2.pdf}}
\caption{Example construction for parameterized hardness from Set Cover where $U=\{u_1,u_2,u_3,u_4\}, S=\{S_1,S_2,S_3\}, S_1=\{u_1\},S_2=\{u_1,u_2,u_4\},$ and $S_3=\{u_3\}$. \label{fig:hardness_ex2}}
\end{figure}
\subsection{Algorithm \ref{alg:computeVS}} \label{sec:algo_computevs}
This procedure computes the vulnerable set---i.e., the set of nodes that will leave the $k$-core upon deletion of the edge $e$ from $\mathscr{G}$. The size of the set is essentially the marginal gain of deleting $e$. If $e\!=\!(u,v)$ is deleted, $u$ will be removed iff $d(\mathscr{G},u)\!=\!k$ (the same for $v$). This triggers a cascade of node removals from the $k$-core (with the associated edges). Let $vul(w)$ be the set of nodes already removed from $\mathscr{G}$ that are neighbours of node $w$. We observe that $w$ will be removed if $d(\mathscr{G},w)- |vul(w)|<k$. Note that the procedure is similar to Algorithm \ref{alg:Local_update} (\textit{LocalUpdate}), having $O(M_k+N_k)$ running time.
\begin{algorithm}[h]
\small
\caption{computeVS}
\label{alg:computeVS}
\KwInput{$e=(u,v),\mathscr{G},k$}
\KwOutput{$X$}
\If{$d(\mathscr{G},u)= \mathcal{E}_\mathscr{G}(u)$}{
Queue $S\leftarrow S\cup \{u\}$, $X\leftarrow X\cup \{u\}$\\
}
\If{$d(\mathscr{G},v)= \mathcal{E}_\mathscr{G}(v)$}{
Queue $S\leftarrow S\cup \{v\}$, $X\leftarrow X\cup \{v\}$\\
}
\While{$S \not = \emptyset$ } {
Remove $y$ from $S$ \\
\For{$w \in N(y)$}{
$vul(w)\leftarrow \{z|z\in N(w)\cap X\}$ \\
\If{$w \not \in X \And d(\mathscr{G},w) - |vul(w)|<k $}{
Add $w$ to $X$, $S$
}
}
}
\textbf{return} $X$
\end{algorithm}
\subsection{Local Update (Algorithm \ref{alg:Local_update})} \label{sec:algo_local}
After the removal of the edge $e^*$ in each step, the current graph $\mathscr{G}$ is updated (step $9$). Recomputing the $k$ cores in $\mathscr{G}$ would take $O(M_k)$ time. Instead, a more efficient approach is to update only the affected region after deleting the $e^*$. If an edge $e^*=(u,v)$ is deleted, $u$ will be removed if $d(\mathscr{G},u)=k$ (the same for $v$). This triggers a cascade of node removals (with the associated edges). Let $vul(w)$ be a set of nodes already removed from $\mathscr{G}$ that are neighbours of node $w$. We observe that $w$ will be removed if $d(\mathscr{G},w)- |vul(w)|<k$.
\begin{algorithm}[t]
\small
\caption{LocalUpdate}
\label{alg:Local_update}
\KwInput{$e=(u,v),\mathscr{G},k$}
Remove $(u,v)$, update $d(\mathscr{G},u),d(\mathscr{G},v)$, $X\leftarrow \emptyset$, $Y\leftarrow \emptyset$\\
\If{$d(\mathscr{G},u)<k$}{
Queue $Y\leftarrow Y\cup \{u\}$, $X\leftarrow X\cup \{u\}$
}
\If{$d(\mathscr{G},v)<k$}{
Queue $Y\leftarrow Y\cup \{v\}$, $X\leftarrow X\cup \{v\}$
}
\While{$Y \not = \emptyset$} {
Remove $y$ from $Y$
\For{$w \in N(y)$}{
$vul(w)\leftarrow \{z|z\in N(w)\cap X\}$ \\
\If{$w \not \in X \And d(\mathscr{G},w) - |vul(w)|<k $}{
Add $w$ to $X$, $Y$
}
}
\If{$d(\mathscr{G},y) < k$}{
Remove $y$ from $\mathscr{G}$
}
}
\end{algorithm}
\subsection{Optimizations for GC and SV }
\label{sec:opt}
Here, we discuss optimizations for the Greedy (GC) and Shapley Value based (SV) algorithms. We propose a general pruning technique to speed up both Algorithms \ref{alg:GC} and \ref{alg:SV} (GC and SV). For GC, in each step, all the candidate edges are evaluated (step $3$). \textit{How can we reduce the number of evaluations in a single step?} In SV, in a single permutation, marginal gains are computed for all the candidate edges (step $5$). \textit{How can we skip edges that have $0$ marginal gain?} We answer these questions by introducing the concept of \textit{edge dominance}. Let $Z(e,\mathscr{G})$ be the set of vertices that would be removed if $e$ is deleted from $\mathscr{G}$ due to the $k$-core constraint. If $e'=(u,v)$ has one of the end points $u$ or $v$ in $Z(e,\mathscr{G})$, then $e'$ is dominated by $e$.
\begin{observation} \label{obs:dominance}
If $e'$ is dominated by $e$, then $Z(e',\mathscr{G}) \subseteq Z(e,\mathscr{G})$.
\end{observation}
In Algorithm \ref{alg:GC} (GC), while evaluating each edge in the candidate set (step $3$) if $e'$ comes after $e$, we skip the evaluation of $e'$, as $|X_e| \geq |X_{e'}|$ (Obs. \ref{obs:dominance}).
In Algorithm \ref{alg:SV} (SV), while computing the marginal gain of each edge in a coalition for a particular permutation $\pi$, assume that $e'$ appears after $e$. As $e \in P_{e'}(\pi)$ and using Observation \ref{obs:dominance}, $\mathscr{V}(P_{e'} (\pi)\cup \{e'\}) - \mathscr{V}(P_{e'} (\pi)) =0$. Thus, the computation of the marginal gain of $e'$ can be skipped. We evaluate the performance gains due to pruning in our experimental evaluation.
In Figures \ref{fig:gc_time} and \ref{fig:sv_time}, we look further into the effect of pruning for GC and SV by comparing versions of the algorithms with and without pruning using three datasets. GC becomes one order of magnitude faster using pruning. Gains for SV are lower but still significant (up to 50\%). We found in other experiments that the impact of pruning for SV increases with the budget, which is due to the larger number of permutations to be considered by the algorithm.
\begin{figure}[t]
\vspace{-2mm}
\centering
\subfloat[Pruning, GC]{\includegraphics[width=0.22\textwidth]{Experiments/plots/Highcore/time_bar_03.pdf}\label{fig:gc_time}}
\hspace{1mm}
\subfloat[Pruning, SV]{\includegraphics[width=0.22\textwidth]{Experiments/plots/SVexp/sv_01_time.pdf}\label{fig:sv_time}}
\hspace{1mm}
\caption{Impact of pruning for GC and SV algorithms using three datasets: GC is up to one order of magnitude faster with pruning, while SV is up to 50\% faster. \label{fig:gc_sv_time_1}}
\vspace{-2mm}
\end{figure}
\subsection{K-core Resilience: Human vs Yeast}
\label{sec:human_vs_yeast}
K-cores have been previously applied in the analysis of functional modules in protein-protein networks \cite{alvarez2006large,wuchty2005peeling}.
Here, we compare the $k$-core stability of Human and Yeast (Figs. \ref{fig:SV_k_b_human}, \ref{fig:SV_k_b_yeast}). Human is shown to be more stable, as can be inferred from the range of values in the profile---$1\%$ to $35\%$ for Human and $3.4\%$ to $100\%$ for Yeast. Moreover, the profile for Human is smoother than that for Yeast. These results confirm our intuition that proteins have a more complex functional structure in humans than in other organisms \cite{Zitnik454033}. We also show similar results for clustering coefficient and efficiency, which are other popular robustness measures for networks \cite{ellens2013graph}, within the same core set of vertices to facilitate the comparison. Both competing metrics fail to effectively assess robustness for varying values of $k$ and budget. In particular, the clustering coefficient of the networks remains mostly unchanged after edge deletions. The effect of network efficiency minimization over the core of the network does not necessarily increase with the budget, which is counter-intuitive. More specifically, efficiency minimization often fails to break dense substructures of the network, even for large values of budget.
\begin{figure}[t]
\centering
\subfloat[Human (Core resilience)]{\includegraphics[width=0.2\textwidth,height=0.16\textwidth]{Experiments/profiling/human_yeast/profile_human.png}\label{fig:SV_k_b_human}}
\hspace{1mm}
\subfloat[Yeast (Core resilience)]{\includegraphics[width=0.2\textwidth,height=0.16\textwidth]{Experiments/profiling/human_yeast/profile_yeast.png}\label{fig:SV_k_b_yeast}}\\
\vspace{-3mm}
\subfloat[Human (Clustering coefficient)]{\includegraphics[width=0.2\textwidth,height=0.16\textwidth]{Experiments/profiling/human_yeast/profile_human_cc.png}\label{fig:SV_k_b_human_cc}}
\hspace{1mm}
\subfloat[Yeast (Clustering coefficient)]{\includegraphics[width=0.2\textwidth,height=0.16\textwidth]{Experiments/profiling/human_yeast/profile_yeast_cc.png}\label{fig:SV_k_b_yeast_cc}}\\
\vspace{-3mm}
\subfloat[Human (Efficiency)]{\includegraphics[width=0.2\textwidth,height=0.16\textwidth]{Experiments/profiling/human_yeast/profile_human_sp.png}\label{fig:SV_k_b_human_sp}}
\hspace{1mm}
\subfloat[Yeast (Efficiency)]{\includegraphics[width=0.2\textwidth,height=0.16\textwidth]{Experiments/profiling/human_yeast/profile_yeast_sp.png}\label{fig:SV_k_b_yeast_sp}}
\vspace{-3mm}
\caption{Core resilience (a, b) and other robustness metrics, clustering coefficient (c, d) and efficiency (e, f) \cite{ellens2013graph}, for the Human and Yeast protein-protein interaction networks. \label{fig:SV_human_yeast}}
\end{figure}
\section{A Story}
Mr Holt, the kindergarten teacher, gives his class these instructions:
\begin{quote}
Hello class, The Metropolitan Museum of Art has a sudden shortage of
sculptures and needs several new ones to fill its shelves. Please break into
groups so that each group can build a Lego tower. The director of the museum
will be here in an hour to pick up the towers and put them in the museum with
your names on them. Please do the best job you can; you don't want to be
professionally embarrassed.
\end{quote}
Each kindergartener wants to be in a group with her friends, but she also wants
her friends to be happy in the group; she doesn't want her friends to be
miserable.
The graph below is a map of who is friends with whom in the small class. Notice
that $a$ would have more friends in the group $\{a, b, c, d, e\}$ than
$\{a, b, c, d\}$, but maybe $a$ doesn't want $e$ to be in the group because $a$
knows that would make $b$, $c$, and $d$ less happy. Strangely, $a$ prefers
$\{a, b, c, d\}$ to $\{a, b, c, d, e\}$.
You can imagine that the kindergarteners might try to choose the best group in
some other way. The class would split into groups one way, but then people
would be unhappy and keep changing their groups. How can we model all this? How
could we easily visualize all this?
\centerline{\includegraphics[width=2in]{graph1}}
\section{Hedonic Games}
Below is the original definition of a hedonic game. Hedonic games
\citep*{banerjee2001core} were invented to model the formation and reformation
of groups.
\begin{definition}
\citep*{banerjee2001core} A \textbf{coalition formation game} is a pair
$G = (N, (\succeq_i)_{i \in N})$, where $N$ is a finite set of players and
for every $i \in N$, $\succeq_i$ is a reflexive, complete, and transitive
binary relation on $\coals{i} = \{C \in 2^N : i \in C\}$.
If $C,D \in \coals{i}$ and $C \succeq_i D$ and $D \not\succeq_i C$, then we write $C \succ_i D$.
\end{definition}
\begin{definition}
\citep*{banerjee2001core} A \textbf{coalition structure}
$\Gamma = \{C_1, \dots, C_k\}$ is a partition of $N$.
The coalition containing a player $i \in N$ is denoted $\Gamma(i)$.
Any subset of $N$ is called a coalition.
\end{definition}
That's a very minimal definition, and these most general hedonic games don't
have many computationally useful properties. For that reason, several subclasses
of hedonic games have been invented and studied. First though, let's look at
stability.
\subsection{The Core}
If Mr Holt were assigning groups, instead of letting the kids form their own
groups, then he might want a way to predict if a given partition will stick
before he actually moves people around. ``Will the students stay in their groups
or will they form new ones?'' There are many ways you can ask the question ``Is
this coalition formation stable?'' Seven good ways are mentioned in
\citep*{nguyen2016altruistic}. One of the most important ways to ask the
question (and the focus of the survey \citep*{woeginger2013core}) is ``Is this
coalition formation core stable?''.
\begin{definition}
In a hedonic game $G$ with a partition $\Gamma$, if there is a nonempty set
$C \subseteq N$ where $\forall i \in C: C \succ_i \Gamma(i)$, then we say that
$C$ blocks $\Gamma$, or $C$ is a \textbf{blocking coalition} in $\Gamma$.
If $\Gamma$ cannot be blocked, then it is called \textbf{core stable}. The set
of core stable partitions for a game $G$ is called the \textbf{core} of $G$.
\end{definition}
\section{Varieties of Hedonic Games}
In the below paragraphs, $n = |N|$ is the number of players, $i$ is a player in
$N$, and $C,D \in \coals{i}$ are coalitions which contain $i$.
\subsection{Fractional Hedonic Games}
\citep*{aziz2014fractional} In \textbf{fractional hedonic games}, $i$ assigns some real value $v_i(j)$ to every
player $j \in N$. It's assumed that $v_i(i) = 0$.\footnote{
Raising your own score is equivalent to lowering everyone else's score.
Lowering your own score is equivalent to raising everyone else's score.}
We say $C \succeq^{\textup{FR}}_i D$ if $u^{\textup{FR}}_i(C) \geq u^{\textup{FR}}_i(D)$, where
$$u_i^{\textup{FR}}(C) = \frac{1}{|C|} \sum_{j \in C} v_i(j). $$
A fractional hedonic game is called \textbf{simple} if
$\forall i,j \in N: v_i(j) \in \{0,1\}$
and is called \textbf{symmetric} if
$\forall i,j \in N: v_i(j) = v_j(i)$.
\citeauthor*{aziz2014fractional} show that even in fractional hedonic games which are both simple
and symmetric, the core is sometimes
empty and that checking core emptiness is $\Sigma_2^p$-complete.
\subsection{Friend and Enemy Oriented Hedonic Games}
\citep*{dimitrov2006simple} In both of these kinds of games, $i$ splits the
other players in $N$ into a set of friends, $F_i$, and a set of enemies, $E_i$.
In \textbf{friend-oriented games}, $i$ prefers coalitions with more friends and breaks
ties by considering the number of enemies. In other words,
\begin{align*}
& C \succeq^{\textup{FO}}_i D \\
\iff & |C \cap F_i| > |D \cap F_i| ~\lor~ \left( |C \cap F_i| = |D \cap F_i| ~\land~ |C \cap E_i| \leq |D \cap E_i| \right) \\
\iff & u_i^{\textup{FO}}(C) \geq u_i^{\textup{FO}}(D), \\
\textup{where } & u_i^{\textup{FO}}(C) = n|C \cap F_i| - |C \cap E_i|.
\end{align*}
So if $C$ has 8 of $i$'s friends and 600 of $i$'s enemies and $D$ has 7 of
$i$'s friends and 0 of $i$'s enemies, then $i$ would still rather be in $C$.
In \textbf{enemy-oriented games}, $i$ tries to minimize enemies and only considers
friends to break a tie. In other words,
\begin{align*}
& C \succeq^{\textup{EO}}_i D \\
\iff & |C \cap E_i| < |D \cap E_i| ~\lor~ \left( |C \cap E_i| = |D \cap E_i| ~\land~ |C \cap F_i| \geq |D \cap F_i| \right) \\
\iff & u_i^{\textup{EO}}(C) \geq u_i^{\textup{EO}}(D), \\
\textup{where } & u_i^{\textup{EO}}(C) = |C \cap F_i| - n |C \cap E_i|.
\end{align*}
\citeauthor*{dimitrov2006simple} show that the core is guaranteed to be
non-empty in both kinds of games. However, finding a core stable partition is
NP-hard in enemy-oriented games\footnote{
More precisely, if you could always find a core stable coalition structure in
polynomial time, then you could also find the largest clique in any
(undirected, unweighted) graph in polynomial time.}
but polynomial time in friend-oriented games.
\subsection{Altruistic Hedonic Games}
\citep*{nguyen2016altruistic} As in friend and enemy oriented hedonic games,
$i$ divides the other players into friends, $F_i$, and enemies, $E_i$. The idea
is that a player wouldn't want to be in a coalition $C$ where his friends were
miserable, even if $C$ had all of his friends and none of his enemies.
Three levels of altruism are considered. Let $\textup{avg}(S) = \sum_{x \in S} x / |S|$
denote the average of a multiset of numbers. And, as above, the utilities $u_i$
are defined so that $C \succeq_i D \iff u_i(C) \geq u_i(D)$.
In \textbf{selfish-first altruistic games}, a player cares most about his own happiness
and uses his friends' preferences to break ties. `Happiness' here means the
friend-oriented score. This is distinct from friend-oriented games in that a
tightly connected coalition $C$ with 6 friends and 3 enemies is preferred to a
sparse coalition $D$ with 6 friends and 3 enemies, because $i$'s friends in $C$
are happier than $i$'s friends in $D$.
$$u_i^{\textup{SF}}(C) = n^5 u_i^{\textup{FO}}(C) + \textup{avg}(\{u_j^{\textup{FO}}(C) : j \in C \cap F_i\}).$$
In \textbf{equal-treatment altruistic games}, a player takes his and all his friends'
opinions into account equally when evaluating a partition:
$$u_i^{\textup{EQ}}(C) = \textup{avg}(\{u_j^{\textup{FO}}(C) : j \in (C \cap F_i) \cup \{i\}\}).$$
And in \textbf{altruistic-treatment altruistic games} (i.e., truly altruistic games), a
player prefers coalitions where his friends are happy and breaks ties by
considering his own happiness.
$$u_i^{\textup{AL}}(C) = u_i^{\textup{FO}}(C) + n^5 \textup{avg}(\{u_j^{\textup{FO}}(C) : j \in C \cap F_i\}).$$
\citeauthor*{nguyen2016altruistic} show that selfish-first altruistic games
always have a nonempty core. Whether equal-treatment altruistic games and
truly altruistic games ever have empty cores are open questions. I suspect
that the core is always nonempty in both games.
\section{The Simulator}
I wrote software to simulate hedonic games and put it on the internet.
You can draw graphs, choose partitions, choose several different player types,
and check the stability of the partition under several different measures.
Hopefully this will help others and myself quickly understand different
hedonic games and speed up the process of finding stable partitions.
\newpage
\begin{center}
\url{http://lukemiles.org/hedonic-games} \\
\includegraphics[width=3in]{graph} \\
\includegraphics[width=3in]{text} \\
\includegraphics[width=3in]{buttons} \\
The website works better on laptops than smartphones. Updates may have been
made to the website since this arXiv version was uploaded.
\end{center}
\bibliographystyle{plainnat}
\section{Introduction}
While Hardy and Wright are of course right in that ordinary generating functions of arithmetic
functions do not share the versatility and usefulness of their well-known Dirichlet counterparts,
several non-trivial results -- both old and new -- have been obtained for them. For instance, the analysis
of
\[
\sum_{n=1}^\infty \tau(n) z^n = \sum_{n=1}^\infty \frac{z^n}{1-z^n},
\]
where~$\tau(n)$ denotes the number of divisors of~$n$, goes back to Lambert~\cite{Kn13},
and the expansion
\begin{align}
& \sum_{n=1}^\infty \tau(n) \mathrm{e}^{-n t} \sim \frac1t \log \frac1t + \frac{\gamma}{t}
- \sum_{n=0}^\infty \frac{B_{n+1}^2}{(n+1)!(n+1)} t^n, \label{eq:tau exp} \\
& \text{where $t \to 0$},\quad |\arg(t)|<\tfrac12 \pi-\theta\quad \text{for some $\theta>0$}, \notag
\end{align}
involving Euler's constant and Bernoulli numbers, has been known for a long time~\cite{FlGoDu95,Sc74,Titchmarsh86,Wi17}.
Titchmarsh~\cite{Titchmarsh86} has applied~\eqref{eq:tau exp} in a result on mean values
of the Riemann zeta-function, and
Canfield et~al.~\cite{CaSaWi04} have extended~\eqref{eq:tau exp} to the case of the arithmetic
function that counts only divisors in some fixed residue class.
Another generalization has been obtained by Berndt and Evans~\cite{BeEv85}, who also proved the formula
\[
\sum_{n=1}^\infty p_n z^n \sim \frac{1}{(1-z)^2} \log \frac{1}{1-z}, \qquad z\to 1^-\ \text{in}\ \mathbb{R},
\]
where~$p_n$ is the $n$-th prime number.
Recently, the transcendence of number theoretic power series has been of interest to several authors.
Banks et~al.~\cite{BaLuSh05} have established the irrationality of $\sum \mu(n)z^n$, $\sum p_n z^n$, $\sum \tau(n)z^n$,
and several other similar series, over~$\mathbb{Q}(z)$.
Later it was noted~\cite{BeCo09,BoCo09} that the transcendence of these series follows easily from the fact that they
have the unit circle as a natural boundary.
This property even shows that they are not $D$-finite~\cite{BeGeKlLu08,FlGeSa05,St80}.
General results about the transcendence of $\sum f(n) z^n$ with~$f$ multiplicative have recently been
obtained by Borwein and Coons~\cite{BoCo09} and by Bell and Coons~\cite{BeCo09}.
The present note is concerned with asymptotic estimates for power series $\sum a_n z^n$,
where the Dirichlet generating function $\sum a_n n^{-s}$ has singularities at the zeros of the Riemann zeta-function.
For instance, $a_n=\mu(n)$ falls under this category.
Delange~\cite{De00} has noted that the prime number theorem in the form
\[
M(x) := \sum_{n\leq x} \mu(n) = \mathrm{o}(x), \qquad x\to\infty,
\]
where~$M(x)$ denotes the Mertens function, readily implies
\[
\sum_{n=1}^\infty \mu(n) z^n = \mathrm{o}\left( \frac{1}{1-z} \right), \qquad z\to1^-\ \text{in}\ \mathbb{R}.
\]
A quick way to improve this starts from Walfisz' deep result~\cite{Wa63}
\begin{equation}\label{eq:walfisz}
M(x) = \mathrm{O}\left(x \exp\left( -\frac{c(\log x)^{3/5}}{(\log\log x)^{1/5}} \right) \right).
\end{equation}
Recall the following basic Abelian theorem~\cite{BiGoTe89,FlGeSa05,Se95}:
\begin{lemma}\label{le:abelian}
Suppose that~$(a_n)$ is an ultimately monotone real sequence with $a_n\sim n^\alpha \ell(n)$,
where $\alpha>0$, and~$\ell$ is positive and varies slowly at infinity. Then
\[
\sum_{n=1}^\infty a_n z^n \sim \frac{\Gamma(\alpha+1)}{(1-z)^{\alpha+1}} \ell\left( \frac{1}{1-z}\right)
\]
as $z\to1$ in any sector
\begin{equation}\label{eq:sector}
S_\theta := \{ z \in\mathbb{C} : |\arg(1-z)| \leq \tfrac12 \pi -\theta \}, \qquad \theta >0.
\end{equation}
\end{lemma}
From the lemma (applied here only for real~$z$) and~\eqref{eq:walfisz} we obtain
\begin{align}
\sum_{n=1}^\infty \mu(n) z^n &= (1-z) \sum_{n=1}^\infty M(n) z^n \notag \\
&= \mathrm{O}\left(\frac1t \exp\left( -\frac{c(\log 1/t)^{3/5}}{(\log\log 1/t)^{1/5}} \right) \right),
\qquad t=-\log z \sim 1-z \to0^+\ \text{in}\ \mathbb{R}. \label{eq:from abel}
\end{align}
There seems to be no Tauberian result available to translate~\eqref{eq:from abel} back into an estimate
for the Mertens function~$M(x)$. This typical asymmetry suggests that we might be able to do a little better
than~\eqref{eq:from abel} by using dedicated methods. Indeed, our main result
(Theorem~\ref{thm:main} below) improves~\eqref{eq:from abel} to
\begin{equation}\label{eq:mu est}
\sum_{n=1}^\infty \mu(n) z^n = \mathrm{O}\left(\frac1t \exp\left(-\frac{0.0203\times \log (1/t)}
{(\log\log 1/t)^{2/3}(\log\log\log 1/t)^{1/3}}\right) \right),
\quad t=-\log z,
\end{equation}
where~$z\to1$ in an arbitrary sector~$S_\theta$, $\theta>0$.
The proof rests on the contour integral representation~\cite{Titchmarsh86}
\begin{equation}\label{eq:mu repr}
\sum_{n=1}^\infty \mu(n) \mathrm{e}^{-nt} =
\frac{1}{2\pi\mathrm{i}} \int_{\kappa-\mathrm{i}\infty}^{\kappa+\mathrm{i}\infty} \frac{\Gamma(s)}{\zeta(s)} t^{-s} \mathrm{d} s, \qquad \kappa>1.
\end{equation}
In a way that is familiar from the prime number theorem or the Selberg-Delange method~\cite{Tenenbaum95},
one can deform the integration contour a little bit into the critical strip~$0<\Re(s)<1$,
and then estimate the resulting integral.
The exponential decrease of the Gamma function along vertical lines is a convenient feature
of~\eqref{eq:mu repr}, which is not present in the Perron summation formula~\cite{Tenenbaum95}
\begin{equation}\label{eq:perron}
\sum_{n\leq x}\mu(n) = \frac{1}{2\pi\mathrm{i}} \int_{\kappa-\mathrm{i}\infty}^{\kappa+\mathrm{i}\infty} \frac{1}{\zeta(s)}\frac{x^s}{s} \mathrm{d} s,
\qquad \kappa>1,\ x \in \mathbb{R}^+\setminus\mathbb{Z}.
\end{equation}
Power series thus tend to be easier to estimate than summatory functions. The fact that~$x$
is real in~\eqref{eq:perron}, whereas in~\eqref{eq:mu repr} it is natural to consider also complex~$t$,
causes no great difficulties. (At least if~$|\arg(t)|$ stays bounded away from~$\tfrac12\pi$.)
In the following section we put~\eqref{eq:mu est} into perspective by relating the growth
of $\sum \mu(n) z^n$ to the Riemann Hypothesis.
Section~\ref{se:main} contains our main result, from which~\eqref{eq:mu est} follows.
A few related power series will be estimated in Section~\ref{se:fu ex}.
Section~\ref{se:open} collects some open problems.
\section{Connection to the Riemann Hypothesis}
In conjunction with~\eqref{eq:walfisz} and~\eqref{eq:mu est}, the following proposition shows that the gap
between the Riemann Hypothesis and what is provable today is slightly smaller in the power series
case than in the case of the summatory function~$M(x)$.
\begin{proposition}
Let $\tfrac12 \leq\eta <1$. Then the following are equivalent:
\begin{itemize}
\item[$(i)$]\label{it:i} $\zeta(s)$ has no zeros for $\Re(s)>\eta$,
\item[$(ii)$] $M(x)=\mathrm{O}(x^{\eta+\varepsilon})$,
\item[$(iii)$] $\sum_{n\geq1} \mu(n) z^n = \mathrm{O}((1-z)^{-\eta+\varepsilon})$ as $z\to 1^-$ in $\mathbb{R}$.
\end{itemize}
\end{proposition}
\begin{proof}
The equivalence of~$(i)$ and~$(ii)$ is classical for~$\eta=\tfrac12$, see Titchmarsh~\cite{Titchmarsh86},
and the proof of the more general case is an easy modification.
(The implication $(ii)\Rightarrow (i)$, which we actually do not require, is posed as Exercise~13.4
in Apostol's textbook~\cite{Apostol76}.)
If~$(ii)$ holds, then~$(iii)$ follows by Lemma~\ref{le:abelian}.
Finally, if we assume that~$(iii)$ is true, we have that
\[
F(t) := \sum_{n=1}^\infty \mu(n) \mathrm{e}^{-nt} =\mathrm{O}(t^{-(\eta+\varepsilon)}), \qquad t\to0^+\ \text{in}\ \mathbb{R}.
\]
Hence the Mellin transform~\cite{FlGoDu95}
\[
\int_0^\infty F(t) t^{s-1} \mathrm{d} t = \frac{\Gamma(s)}{\zeta(s)}
\]
defines an analytic function for~$\Re(s)>\eta$.
\end{proof}
Under the Riemann Hypothesis, one would expect that we can push the integration contour in~\eqref{eq:mu repr}
across the critical line~$\Re(s)=\tfrac12$ to obtain an expansion of the form
\begin{equation}\label{eq:mu rh}
\sum_{n=1}^\infty \mu(n) \mathrm{e}^{-nt} \stackrel{?}{=} t^{-1/2} H(\log(1/t)) -2 + \mathrm{o}(1),
\qquad t\to0,
\end{equation}
where~$H$, a bounded oscillating function, is a sum of infinitely many harmonics corresponding to
the non-trivial zeros of the zeta-function. The fast decrease of the Gamma function makes the residues
of $\Gamma(s)/\zeta(s)$ at these zeros rather small, so that the term~$-2$ will dominate in~\eqref{eq:mu rh}
unless~$1-z$ is very close to zero. Indeed, the $\Omega(t^{-1/2})$ term becomes numerically visible
only from about $1-z=10^{-10}$ onwards [P.~Flajolet, private communication].
This ``fake asymptotics'' property has also been noted by Bateman and Diamond~\cite{BaDi00}.
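This is easy to probe numerically. The following Python sketch (our own illustration, not taken from the cited communication; it uses a standard linear M\"obius sieve) evaluates the partial sums for moderate values of $1-z$, where, by the above discussion, one expects values close to $-2$:
\begin{verbatim}
# Numerical sketch of the "fake asymptotics": for moderate 1-z the
# power series of mu(n) is expected to hover near -2.
def mobius_sieve(N):
    mu, is_comp, primes = [1] * (N + 1), [False] * (N + 1), []
    for i in range(2, N + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > N:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    return mu

N = 10**6                      # z^N is negligible for the values below
mu = mobius_sieve(N)
for eps in (1e-2, 1e-3, 1e-4):
    z, s, zn = 1.0 - eps, 0.0, 1.0
    for n in range(1, N + 1):
        zn *= z                # zn = z^n, accumulated iteratively
        s += mu[n] * zn
    print(f"1-z = {eps:.0e}:  partial sum = {s:+.4f}")
\end{verbatim}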
Without assuming the Riemann Hypothesis, Delange~\cite{De00} has shown that
\[
\sum_{n=1}^\infty \mu(n) z^n = \Omega_\pm\left( \frac{1}{\sqrt{1-z}} \right), \qquad z\to 1^-\ \text{in} \ \mathbb{R},
\]
which is in line with~\eqref{eq:mu rh}, and shows that the left-hand side {\em does not} converge to~$-2$.
\section{Main Result}\label{se:main}
We write
\[
D(s) = \sum_{n=1}^\infty\frac{a_n}{n^s}, \qquad s = \sigma + \mathrm{i} \tau,
\]
for the Dirichlet generating function of a sequence~$a_n$. The following theorem gives an estimate
for the power series $\sum a_n z^n$ near $z=1$, assuming analyticity and growth conditions for~$D(s)$.
\begin{theorem}\label{thm:main}
Let~$a_n$ be a sequence of complex numbers such that~$D(s)$ is absolutely convergent for~$\Re(s)>1$
and has an analytic continuation to a set~$\Omega$ of the form
\begin{equation}\label{eq:domain}
\sigma \geq g(\tau) :=
\begin{cases}
1- b(\log|\tau|)^{-\alpha} (\log \log |\tau|)^{-\beta} & |\tau| \geq w \\
1- b(\log w)^{-\alpha} (\log \log w)^{-\beta} & |\tau| \leq w
\end{cases}
\end{equation}
for some positive parameters~$\alpha,\beta,b,w$. Assume furthermore that
\[
D(s) = \mathrm{O}(\tau^\nu),
\]
uniformly as $s\to\infty$ in~$\Omega$, for some~$\nu>0$. Then for any $\varepsilon>0$
\begin{equation*}\label{eq:gen est}
\sum_{n=1}^\infty a_n z^n = \mathrm{O}\left(\frac1t \exp\left(-\frac{(b-\varepsilon) \log (1/t)}
{(\log\log 1/t)^{\alpha}(\log\log\log 1/t)^{\beta}}\right) \right),
\qquad t=-\log z \sim 1-z.
\end{equation*}
The variable~$z$ may tend to~$1$ in an arbitrary sector of the form~\eqref{eq:sector}.
\end{theorem}
This result immediately implies the bound~\eqref{eq:mu est},
by noting that $D(s)=1/\zeta(s)$ for $a_n=\mu(n)$ and putting $\alpha=\tfrac23$ and $\beta=\tfrac13$.
The required analyticity and growth
of~$1/\zeta(s)$ are the content of Korobov and Vinogradov's famous theorem~\cite{Titchmarsh86}, which describes
the largest known zero-free region for the Riemann zeta function.
(Recall that it leads to the best known error term in the prime number theorem.)
For the constant~$b$ in~\eqref{eq:domain} one may take~$b=0.05507\times(4.45)^{-2/3}> 0.0203$ in this case, by
a result of Ford~\cite{Fo02}.
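Explicitly, the arithmetic behind this constant is
\[
0.05507\times(4.45)^{-2/3} \approx 0.05507 \times 0.3696 \approx 0.02035 > 0.0203.
\]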
\begin{proof}[Proof of Theorem~\ref{thm:main}]
The convergence assumption on~$D(s)$ clearly implies that the radius of convergence of
$\sum a_n z^n$ is at least one.
We assume that~$z$ stays inside~$S_{2\theta}$; then $t=-\log z$ satisfies
\[
|\arg (t)| \leq \tfrac12 \pi - \theta
\]
for small~$|t|$.
For~$\kappa>1$ we have~\cite[p.~151]{Titchmarsh86}
\begin{equation}\label{eq:int}
\sum_{n=1}^\infty a_n \mathrm{e}^{-nt} =
\frac{1}{2\pi\mathrm{i}} \int_{\kappa-\mathrm{i}\infty}^{\kappa+\mathrm{i}\infty} D(s)\Gamma(s)t^{-s} \mathrm{d} s.
\end{equation}
%
\begin{figure}[t]
\includegraphics[scale=1.0]{contour.pdf}
\caption{The deformed integration contour, extending into the strip $0<\Re(s)<1$.}
\label{fig:contour}
\end{figure}
%
Now we deform the integration contour as indicated in Figure~\ref{fig:contour}, where
\[
\kappa = 1 - 1/\log|t| >1,
\]
and~$T=T(t)>0$, to be fixed later, tends to infinity as~$t\to0$.
Between~$\pm\mathrm{i} T$, the contour is defined by the function~$g(\tau)$ from~\eqref{eq:domain}.
We will repeatedly apply the following version of Stirling's formula~\cite{Co35}: If~$\Re(s)=\sigma$
is confined to a finite interval, then
\[
|\Gamma(s)| \sim \sqrt{2\pi}\ \mathrm{e}^{-\pi |\tau|/2}|\tau|^{\sigma-1/2}, \qquad |\tau|\to\infty,
\]
uniformly w.r.t.~$\sigma$.
To bound the integral over the upper vertical line $[\kappa+\mathrm{i} T,\kappa+\mathrm{i}\infty[$, note that there we have
\begin{align*}
|\Gamma(s) t^{-s}| &\ll_t |t|^{-\kappa} \tau^{\kappa-1/2}\mathrm{e}^{-\tau(\pi/2-\arg(t))} \\
&\leq \mathrm{e} |t|^{-1} \tau^{\kappa-1/2}\mathrm{e}^{-\theta\tau}.
\end{align*}
(Here and in the following, we write~$A \ll_t B$ for $A=\mathrm{O}(B)$ as~$t\to0$, where
the estimate holds uniformly in~$\tau$, if~$\tau=\Im(s)$ is a free variable in the right-hand side~$B$.)
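In the second step we used that, with $\kappa=1-1/\log|t|$ and $|t|<1$,
\[
|t|^{-\kappa} = |t|^{-1}\,|t|^{1/\log|t|} = |t|^{-1}\exp\left(\frac{\log|t|}{\log|t|}\right) = \mathrm{e}\,|t|^{-1},
\]
together with the fact that $|\arg(t)|\leq\tfrac12\pi-\theta$ implies $\mathrm{e}^{-\tau(\pi/2-\arg(t))}\leq \mathrm{e}^{-\theta\tau}$ for $\tau>0$.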
Hence the integral over the upper vertical line satisfies
\begin{equation}\label{eq:vert line}
|I_{\rm{vert}}| \ll_t \frac{1}{|t|} \int_T^\infty \mathrm{e}^{-\theta \tau} \tau^{\kappa+\nu-1/2} \mathrm{d} \tau
\ll_t |t|^{-1} \mathrm{e}^{-\theta T} T^{\kappa+\nu-1/2}.
\end{equation}
We next estimate the contribution of the horizontal segment $[g(T)+\mathrm{i} T, \kappa +\mathrm{i} T[$
to the integral. In this range we have
\[
|t^{-s}| \leq |t|^{-\kappa} \mathrm{e}^{T(\pi/2-\theta)} = |t|^{-1} \mathrm{e}^{T(\pi/2-\theta)+1}
\]
and
\[
|\Gamma(s)| \ll_t T^{\kappa-1/2}\mathrm{e}^{-T\pi /2},
\]
hence this portion of the integral is
\begin{equation}\label{eq:hor}
|I_{\rm{hor}}| \ll_t |t|^{-1} \mathrm{e}^{-\theta T} T^{\kappa+\nu-1/2},
\end{equation}
so that we obtain the same estimate as in~\eqref{eq:vert line}.
Finally, we bound the integral over the arc $\sigma=g(\tau)$, which we call~$I_{\rm{arc}}$.
The integral from~$g(w)$
to~$g(w)+\mathrm{i} w$ is plainly~$\mathrm{O}(t^{\delta-1})$ for some positive~$\delta$, hence negligible
compared to~\eqref{eq:hor}. In the remaining range~$\tau>w$, we have
\[
|t^{-s}| = |t|^{-g(\tau)} \mathrm{e}^{\tau \arg(t)} \leq |t|^{-g(T)} \mathrm{e}^{\tau (\pi/2-\theta)}
\]
and
\[
|\Gamma(s)| \ll_t \tau^{g(\tau)-1/2} \mathrm{e}^{-\tau\pi/2} \leq \tau^{g(T)-1/2} \mathrm{e}^{-\tau\pi/2},
\]
so that we have the bound
\begin{align}
|I_{\rm{arc}}| &\ll_t |t|^{-g(T)} T^\nu
\int_w^T \mathrm{e}^{-\tau \theta} \tau^{g(T)-1/2} \mathrm{d} \tau \notag \\
&\ll_t |t|^{-g(T)} T^\nu \Gamma(g(T)+\tfrac12) \notag \\
&\ll_t |t|^{-g(T)} T^\nu. \label{eq:arc}
\end{align}
To complete the proof, we have to pick~$T$ wisely in order to balance the estimates~\eqref{eq:hor}
and~\eqref{eq:arc}. We would like to have~$T$ as large as possible in~\eqref{eq:hor},
whereas~\eqref{eq:arc} calls for a small~$T$.
We therefore choose
\[
T = \frac {\log (1/|t|)}{(\log \log 1/|t|)^\alpha},
\]
which makes~\eqref{eq:hor} and~\eqref{eq:arc} approximately equal. The former then implies
\[
|I_{\rm{hor}}| \ll_t \frac1t \exp\left(- \frac{(\theta-\varepsilon)\log (1/t)}{(\log \log 1/t)^\alpha} \right),
\]
whereas~\eqref{eq:arc} yields
\[
|I_{\rm{arc}}| \ll_t \frac1t \exp\left( -\frac{(b-\varepsilon)\log (1/t)}
{(\log \log 1/t)^\alpha (\log \log \log 1/t)^\beta} \right),
\]
both for arbitrarily small $\varepsilon>0$.
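In more detail: this choice of $T$ gives $\log T\sim\log\log(1/|t|)$ and $\log\log T\sim\log\log\log(1/|t|)$, so that
\[
|t|^{-g(T)} = \frac{1}{|t|}\exp\left(-\,b\,\frac{\log(1/|t|)}{(\log T)^{\alpha}(\log\log T)^{\beta}}\right),
\]
and the factor $T^\nu=\exp(\mathrm{O}(\log\log(1/|t|)))$ in~\eqref{eq:arc} is absorbed by replacing $b$ with $b-\varepsilon$. The first bound follows in the same way from $\mathrm{e}^{-\theta T}=\exp(-\theta\log(1/|t|)/(\log\log 1/|t|)^\alpha)$. Since the arc contribution is eventually the larger of the two, it determines the estimate of the theorem.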
\end{proof}
\section{Further Examples}\label{se:fu ex}
Besides~\eqref{eq:mu est}, Theorem~\ref{thm:main} yields also estimates for other number theoretic
power series. In what follows, we let $\Lambda,\lambda,\omega,$ and~$\tau$ denote, as usual, the von Mangoldt function,
the Liouville function, the number-of-distinct-prime-factors function, and the number-of-divisors function.
Applying Theorem~\ref{thm:main} to the Dirichlet generating functions~\cite{De00,Tenenbaum95}
\begin{align}
\sum_{n=1}^\infty \frac{(-1)^{n+1}\mu(n)}{n^s} &= \frac{1}{\zeta(s)} \frac{2^s+1}{2^s-1}, \label{eq:dgf1} \\
\sum_{n=1}^\infty \frac{\Lambda(n)-1}{n^s} &= -\frac{\zeta'(s)}{\zeta(s)} - \zeta(s), \\
\sum_{n=1}^\infty \frac{\lambda(n)}{n^s} &= \frac{\zeta(2s)}{\zeta(s)}, \\
\sum_{n=1}^\infty \frac{(-1)^{n+1}\lambda(n)}{n^s} &= (1+2^{1-s})\frac{\zeta(2s)}{\zeta(s)}, \\
\sum_{n=1}^\infty \frac{2^{\omega(n)}-\tau(n)}{n^s} &=
\frac{\zeta(s)^2}{\zeta(2s)} - \zeta(s)^2 \label{eq:dgf2}
\end{align}
yields the following result.
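As a sample verification of the first of these identities, the Euler product gives
\[
\sum_{n\ \mathrm{odd}}\frac{\mu(n)}{n^s} = \prod_{p>2}\left(1-p^{-s}\right) = \frac{1}{(1-2^{-s})\,\zeta(s)},
\]
and since $\mu(2m)=-\mu(m)$ for odd $m$ while $\mu(n)=0$ for $4\mid n$, the even-indexed part of the sum equals $-2^{-s}$ times the odd-indexed part, whence
\[
\sum_{n=1}^\infty \frac{(-1)^{n+1}\mu(n)}{n^s} = \left(1+2^{-s}\right)\sum_{n\ \mathrm{odd}}\frac{\mu(n)}{n^s}
= \frac{1}{\zeta(s)}\,\frac{1+2^{-s}}{1-2^{-s}} = \frac{1}{\zeta(s)}\,\frac{2^s+1}{2^s-1}.
\]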
\begin{corollary}\label{cor:other fu}
Let~$E(z)$ denote the function in the error term in~\eqref{eq:mu est}. Then we have
\begin{align}
\sum_{n=1}^\infty (-1)^n \mu(n) z^n &= \mathrm{O}(E(z)), \notag \\
\sum_{n=1}^\infty \Lambda(n) z^n &= \frac{1}{1-z} + \mathrm{O}(E(z)), \notag \\
\sum_{n=1}^\infty \lambda(n) z^n &= \mathrm{O}(E(z)), \notag \\
\sum_{n=1}^\infty (-1)^n \lambda(n) z^n &= \mathrm{O}(E(z)), \notag \\
\sum_{n=1}^\infty 2^{\omega(n)} z^n &= \frac{1}{1-z} \log \frac{1}{1-z} +
\frac{1+\gamma}{1-z} + \mathrm{O}(E(z)), \label{eq:2^omega}
\end{align}
as~$z$ tends to~$1$ in an arbitrary sector of the form~\eqref{eq:sector}.
\end{corollary}
\begin{proof}
The Dirichlet series~\eqref{eq:dgf1}--\eqref{eq:dgf2} satisfy the assumptions of Theorem~\ref{thm:main};
see, e.g., Titchmarsh~\cite{Titchmarsh86}.
As for the case of~$2^{\omega(n)}$, formula~\eqref{eq:tau exp} provides the required
expansion of $\sum \tau(n)z^n$.
\end{proof}
Recall that Selberg and Delange~\cite[II.5]{Tenenbaum95} established expansions for
summatory functions $\sum_{n\leq x}a_n$ in the scale
\begin{equation}\label{eq:log scale}
x (\log x)^{\rho-k}, \qquad k = 1,2,\dots,
\end{equation}
assuming that the corresponding Dirichlet series $\sum a_n n^{-s}$ is sufficiently
close to a power~$\zeta(s)^{-\rho}$ of the zeta-function, where~$\rho\in\mathbb{C}$.
This is proved from Perron's summation formula, using a contour akin to
Figure~\ref{fig:contour}, but circumventing the possible singularity at~$s=1$ by a narrow loop.
The same programme could be carried out for power series, too, but this seems not worthwhile.
Note that Dirichlet series with a pole at $s=1$ can be handled by Theorem~\ref{thm:main}
after subtracting a singular element, as we did in the proof of~\eqref{eq:2^omega}.
An algebraic singularity at $s=1$ leads to an infinite expansion in the scale~\eqref{eq:log scale},
which readily translates into an expansion for~$\sum a_n z^n$ at $z=1$ by
an Abelian theorem (Lemma~\ref{le:abelian}).
\section{Open Problems}\label{se:open}
As noted in the introduction, the unit circle is a natural boundary of~$\sum\mu(n)z^n$. Hence one would
expect that, if~$z$ tends to~$1$ along a path that comes very close to the unit circle,
the function picks up too much growth from neighboring singularities
to be bounded in any scale involving only $1/(1-z)$.
So the restriction of~$z$ to sectors in Theorem~\ref{thm:main} is presumably essential.
More precisely, we pose the following
question: If $f:\mathbb{R}^+\to\mathbb{R}^+$ is an arbitrary function, does it follow that
\[
\sum_{n=1}^\infty \mu(n) z^n = \Omega\left( f\left(\frac{1}{1-z}\right) \right)
\]
as~$z\to1$ in the unit disk?
On another register, a natural continuation of the transcendence results mentioned in the
introduction would be to investigate whether the power series $\sum f(n) z^n$, with~$f$
any of the classical arithmetic functions, can satisfy an algebraic differential equation~\cite{Ru89}.
\bigskip
{\bf Acknowledgement.} I thank Philippe Flajolet and Florian Luca for helpful comments.
\bibliographystyle{siam}
|
1,116,691,500,646 | arxiv | \section{Introduction}
Recent interest in topologically complex soft materials has led to the fabrication and characterization of molecules with non-covalent connectivity, including molecular knots and linked-ring networks known as catenanes.\cite{hart2021material} Molecular catenanes can be created synthetically with techniques such as metallo-organic complexation,\cite{wu2017poly} but several forms of catenated macromolecules or ``Olympic gels''\cite{deGennes_book,raphael1997progressive} are known to form naturally. These include the HK97 virus that has a capsid made of catenated proteins,\cite{zhou2015protein} and catenated DNA molecules that occur in the mitochondria of cancerous cells.\cite{clayton1967circular} The most extreme example of ``molecular chainmail'' is the kinetoplast. A kinetoplast is a complex DNA structure found in the mitochondria of trypanosome parasites, consisting of thousands of circular DNA molecules, known as minicircles, topologically linked in a two-dimensional network.\cite{englund2014passion} Kinetoplasts are part of an RNA editing mechanism that allows mutated metabolic genes to be expressed, and the network of minicircles is believed to have a honeycomb topology with an average ``valence'' of 3.\cite{chen1995topology}
Recently, kinetoplasts have been investigated as a model experimental system for studying the physics of 2D polymers and catenated materials. \cite{klotz2020equilibrium} It was observed that kinetoplasts from the \textit{Crithidia fasciculata} parasite in free solution exhibit the behavior expected of a thermalized elastic membrane but also have a strong intrinsic curvature. Subsequently, Soh et al. showed that stretched kinetoplasts form a metastable deformed state \cite{soh2020deformation} and undergo isotropic changes in size when the buffer ionic conditions are varied.\cite{soh2020ionic} One salient question is the degree to which their exotic catenated structure affects their physical properties, as distinct from their two-dimensional network topology: to what extent is a catenated membrane different from a covalent or tethered membrane? The lengthscale of minicircle catenation is typically below the lengthscale of optical microscopy and the deformations achieved through microfluidic stretching\cite{soh2020ionic} or confinement\cite{klotz2020equilibrium, soh2021equilibrium} may not be sufficient to distinguish their effects.
One predicted feature of self-avoiding two-dimensional polymers is their ``flatness,'' referring to the fact that the length-scale of their surface undulations is predicted to have a weaker dependence on molecular weight than the in-plane dimensions of the polymer, leading to an effectively infinite persistence length independent of molecular composition. While incorporating a local energetic bending rigidity into a model is sufficient to induce flatness, an effective bending rigidity of entropic origin can also arise solely through excluded-volume interactions. As noted by {Kantor} and Kremer, \cite{kantor1993excluded} local excluded-volume interactions provide the bending rigidity that leads to flatness, but then play no further role at larger distances. Flatness has been identified for various models of self-avoiding membranes in numerous simulations.\cite{plischke1988absence,abraham1989molecular,grest1991self,zhang1996molecular,popova2007structure,popova2008anomalous,mizuochi2014dynamical} While kinetoplasts have the appearance of a smooth but curved open membrane, it is not known from experiments whether catenated 2D materials exhibit the predicted flat phases of 2D polymers.
The apparent curvature of kinetoplasts in solution is not known to be an essential part of the trypanosome gene editing apparatus, nor is it known whether it arises due to ``purse-string'' effects at the edge,\cite{barker1980ultrastructure} the topology of the network or defects therein, or a subtler entropic mechanism. There has been recent interest in the spontaneous curvature of thermalized planar materials due to lattice impurities\cite{plummer2020buckling} as well as the influence of intrinsic curvature on defect dynamics,\cite{agarwal2020simple} but the spontaneous equilibrium curvature of membrane-like polymers without explicit curvature has not been investigated. Recently, Soh and Doyle showed that the apparent curvature of kinetoplasts vanishes when they are strongly confined.\cite{soh2021equilibrium}
In this study, we use Monte Carlo simulations to investigate the equilibrium statistical properties
of catenated membranes. {While other recent simulation studies have examined the statistical and dynamical
properties of similar systems, including catenane dimers,\cite{amici2019topologically,
caraglio2017mechanical} poly-catenanes,\cite{rauscher2018topological, wu2019mechanical, rauscher2020dynamics,
rauscher2020thermodynamics, dehaghani2020effects} Olympic gels,\cite{lang2014swelling, fischer2015formation,
lang2015olympic} as well as the linking statistics of ring polymers under confinement,\cite{michieletto2015kinetoplast,
dadamo2017linking, diao2015orientation} this is the first simulation study of a catenated membrane, to
our knowledge. The model membrane consists of identical rigid circular rings connected in 2D lattices.
Although the simulations in
Refs.~\citen{amici2019topologically, caraglio2017mechanical, rauscher2018topological,
wu2019mechanical, rauscher2020dynamics, rauscher2020thermodynamics, dehaghani2020effects, lang2014swelling,
fischer2015formation, lang2015olympic, michieletto2015kinetoplast, dadamo2017linking, diao2015orientation}
use flexible-chain models, we find the use of rigid rings to be a necessary simplification for
computational efficiency.}
We are interested in the growth of out-of-plane
fluctuations and spontaneous curvature with respect to the number of rings in the network, the extent
to which these features deviate from those found in covalently-connected membranes, as well as the
effects of ring thickness, lattice shape and linking topology. As observed in the case for covalent
membranes, we find that linked-ring membranes have a flat topology. Remarkably, we also find that they
form concave structures qualitatively similar to those observed in kinetoplasts.
\section{Model and Methods}
We use Monte Carlo (MC) simulations to study membranes composed of interlocking rigid circular rings. Figure~\ref{fig:illust} illustrates the three membrane structures examined in this study. The membrane of Fig.~\ref{fig:illust}(a) has a hexagonal shape and triangular lattice structure (HT). It is composed of two types of rings: those with a linking valence of 6 (except at the edges where the valence is either 3 or 4), and those with a valence of 2. This membrane resembles tethered membranes used in previous simulations studies,\cite{kantor1986statistical,abraham1989molecular,grest1991self,popova2007structure,popova2008anomalous,mizuochi2014dynamical} with the 6-valence rings analogous to vertices or particles and the 2-valence rings analogous to the connecting bonds. The membrane size $M$ is defined as the number of 6-valence rings that span the structure from one corner through the center to the opposite corner. As an illustration, Figure~\ref{fig:illust}(a) shows a membrane with $M$=9. Figure~\ref{fig:illust}(b) shows a square membrane with a square-lattice structure (SS) composed of rings with a linking valence of 4 (except at edges and corners) as well as those with a valence of 2. For this linking topology, we consider only membranes that are square when stretched out, as illustrated in the figure. The membrane size $M$ is the number of 4-valence rings along the side of the square. As an illustration, $M$=10 for the membrane in the figure. Figure~\ref{fig:illust}(c) shows the third membrane type examined: a membrane with triangular-lattice linking, as for HT membranes, but whose shape is (approximately) square, as for SS membranes. We call these ST membranes. This model is employed in some calculations to determine whether any observed differences in behavior between HT and SS membranes are caused by the linking topology or the membrane shape. Integer lattice sizes for the 6-valence rings $M_1$ and $M_2$ are chosen to best approximate a square shape. As an illustration, $M_1$=10 and $M_2$=11 for the membrane in Fig.~\ref{fig:illust}(c).
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{fig1.png}
\caption{Snapshots of linked-ring membranes illustrating the shapes and linking topologies examined in this study. (a) Hexagonal-shaped membrane with triangular-lattice linking (HT) and a size of $M$=9. (b) Square-shaped membrane with square-lattice linking (SS) and a size of $M$=10. (c) Approximately square membrane with triangular lattice linking (ST) with dimensions $M_1$=10 and $M_2$=11. (d) Close-up illustration of linking structure for HT and ST membranes. (e) Close-up illustration of linking structure for the SS membrane.}
\label{fig:illust}
\end{figure}
We examine membranes with rings of finite and zero thickness. The diameter of the rings, $D$, is chosen to be the unit of length, i.e. $D$=1. Rings of finite thickness have a circular cross section when bisected by a plane containing the ring normal. The diameter of this cross section, $w$, defines the thickness of the ring. For $w>0$, the ring diameter $D$ is measured between the centers of two diametrically opposite cross sections. We consider ring thicknesses in the range $w=0-0.2D$.
The MC simulations use the standard Metropolis methodology. For convenience, the initial positions and orientations of the rings are chosen to correspond to the structures shown in Fig.~\ref{fig:illust}. Two types of MC trial moves are carried out for randomly selected rings: random displacement and random rotation about a randomly chosen axis. Trial moves are accepted or rejected based on whether they preserve the original linking structure, i.e., rings that are originally linked must stay linked, and rings that were originally unlinked must remain so. Any move that violates these constraints is rejected. For rings with $w>0$, moves are also rejected if the volumes occupied by the rings overlap, as determined using the method described in Ref.~\citen{vranek2002fast}. (A detailed description of the algorithms used for testing for interlocking rings and overlap of finite-$w$ rings is presented in the ESI.\dag) Moves that preserve the link structure and do not result in such overlap are accepted. Maximum displacement and rotation angles are chosen to yield an acceptance ratio in the range 30 -- 50\%. Displacement and rotation moves are selected with equal probability. An MC cycle is defined as a sequence of $N$ consecutive trial moves, where $N$ is the total number of rings in the membrane. Thus, during each cycle {an attempt is made to either translate or rotate each ring once,} on average. Prior to data sampling, the system is equilibrated for a period chosen to ensure the complete decay of the initial transients in the histories of all measured quantities. Equilibration periods were typically ${\cal O}(10^6)$ MC cycles, and production runs were in the range of $3-5\times 10^6$ MC cycles in duration. Typically, the results of between 10 and 200 independent simulations were averaged to achieve reasonable statistical accuracy, with larger systems requiring more averaging.
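To make the move set concrete, the following Python sketch (ours, not the production code; \texttt{preserves\_topology} is a placeholder for the linking and overlap tests described in the ESI,\dag~and rings are represented minimally by their center and normal vectors) outlines a single MC cycle:
\begin{verbatim}
import numpy as np

def rotate(v, axis, angle):
    # Rodrigues' formula: rotate vector v about the unit vector 'axis'.
    return (v * np.cos(angle) + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

def preserves_topology(rings, i):
    # Placeholder for the link-preservation and overlap tests (see ESI).
    return True

def mc_cycle(rings, max_disp, max_angle, rng):
    # One MC cycle: N trial moves, each a random displacement or a random
    # rotation of a randomly chosen ring, rejected if the topology changes.
    N = len(rings)
    for _ in range(N):
        i = rng.integers(N)
        old = (rings[i]["center"].copy(), rings[i]["normal"].copy())
        if rng.random() < 0.5:                # random displacement
            rings[i]["center"] += max_disp * (2.0 * rng.random(3) - 1.0)
        else:                                 # random rotation
            axis = rng.normal(size=3)
            axis /= np.linalg.norm(axis)
            angle = max_angle * (2.0 * rng.random() - 1.0)
            rings[i]["normal"] = rotate(rings[i]["normal"], axis, angle)
        if not preserves_topology(rings, i):  # reject: restore old state
            rings[i]["center"], rings[i]["normal"] = old
\end{verbatim}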
The principal quantity of interest in this study is the shape tensor, whose components, $S_{\alpha\beta}$, are defined
\begin{eqnarray}
S_{\alpha\beta} = \frac{1}{N} \sum_{i=1}^N (r_{i,\alpha}-r_\alpha^{\rm CM})
(r_{i,\beta}-r_\beta^{\rm CM}),
\label{eq:shape}
\end{eqnarray}
where $r_{i,\alpha}$ is the $\alpha$-coordinate of the position of the center of the $i$th ring, and where $r_\alpha^{\rm CM}$ is likewise the $\alpha$-coordinate for the center of mass of the membrane. The instantaneous {eigenvalues} are denoted $R_1^2$, $R_2^2$, and $R_3^2$, where we choose $R_1^2\geq R_2^2\geq R_3^2$. The corresponding eigenvectors are denoted $\hat{n}_1$, $\hat{n}_2$ and $\hat{n}_3$. Note that the radius of gyration is related by $R_{\rm g}^2 = R_1^2+R_2^2+R_3^2$. We also introduce a measure of membrane concavity, $\zeta$ as follows:
\begin{eqnarray}
\zeta = \frac{1}{N}\sum_{n=1}^N \xi_n \rho_n
\label{eq:concavity}
\end{eqnarray}
Here, $\xi_n$ is a coordinate of the position of the $n$th ring measured along the $\xi$ axis, which is aligned with $\hat{n}_3$. The $\xi$ axis is also defined to pass through the center of mass, which defines the point where $\xi=0$. In addition, $\rho_n$ is the transverse distance of ring $n$ to the nearest point on the $\xi$ axis. Note that $\zeta$ tends to be appreciably non-zero when the rings close to the membrane center are on one side of the center of mass and rings further from the center are on the other side, i.e., where the membrane has a concave structure. The quantities used to define $\zeta$ are illustrated in Figure~\ref{fig:zeta_illust}. {Note that correctly distinguishing between positive and negative values of $\zeta$ requires resolving the ambiguity in choosing between two possible directions of $\hat{n}_3$. This is done by exploiting the fixed connectivity of the membrane rings. The details are discussed in the ESI.\dag}
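For concreteness, both quantities can be evaluated from the ring-center positions as in the following sketch (ours; it omits the sign convention for $\hat{n}_3$ discussed in the ESI,\dag~so only the magnitude of $\zeta$ is meaningful as written):
\begin{verbatim}
import numpy as np

def shape_and_concavity(r):
    # r: (N, 3) array of ring-center positions.
    dr = r - r.mean(axis=0)            # positions relative to center of mass
    S = dr.T @ dr / len(r)             # shape tensor S_{ab}
    evals, evecs = np.linalg.eigh(S)   # eigenvalues in ascending order
    n3 = evecs[:, 0]                   # eigenvector of the smallest eigenvalue
    xi = dr @ n3                       # coordinate along the xi axis
    rho = np.linalg.norm(dr - np.outer(xi, n3), axis=1)  # transverse distance
    zeta = np.mean(xi * rho)           # concavity zeta, up to the sign of n3
    return evals[::-1], zeta           # R1^2 >= R2^2 >= R3^2, and zeta
\end{verbatim}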
\begin{figure}[!ht]
\centering
\includegraphics[width=0.65\columnwidth]{fig2-eps-converted-to.pdf}
\caption{Illustration of quantities used in the definition of concavity $\zeta$ in Eq.~(\ref{eq:concavity}). $\xi_n$ and $\rho_n$ are the position coordinates of ring $n$ in a coordinate system with $\xi$ aligned along $\hat{n}_3$, the eigenvector for the smallest shape-tensor eigenvalue that passes through the membrane center of mass. The shaded region represents a cross section of the membrane in the plane containing $\hat{n}_3$ and the center of mass. {Note that the figure illustrates the case of a membrane with $\zeta<0$.} }
\label{fig:zeta_illust}
\end{figure}
Of particular interest is the concavity probability distribution, ${\cal P}(\zeta)$, and the related free-energy function, defined by $F(\zeta)/k_{\rm B}T=-\ln {\cal P}(\zeta)$, where $k_{\rm B}$ is Boltzmann's constant and $T$ is absolute temperature. In some systems the distributions obtained from simple sampling over 100--200 simulations are averaged to obtain reliable estimates of $F(\zeta)$ over the range of interest for $\zeta$. In other systems a sizeable free energy barrier precludes accurate estimates of $F$ in the barrier region. In such cases, we employ a multiple-histogram method that incorporates umbrella sampling.\cite{frenkel2002understanding} The method was used for the case where the barrier heights exceed approximately $3k_{\rm B}T$. As in previous studies where one of us has employed this method (see, e.g., Ref.~\citen{polson2013simulation}), we refer to the method as the Self-Consistent Histogram (SCH) method. To implement the SCH method, we carry out many independent simulations, each of which employs a unique ``window potential'' of the form:
\begin{eqnarray}
{W_i(\zeta)}=\begin{cases} \infty, \hspace{8mm} \zeta<\zeta_i^{\rm min} \cr 0,
\hspace{1cm} \zeta_i^{\rm min}<\zeta<\zeta_i^{\rm max} \cr \infty,
\hspace{8mm} \zeta>\zeta_i^{\rm max} \cr
\end{cases}
\label{eq:winPot}
\end{eqnarray}
where $\zeta_i^{\rm min}$ and $\zeta_i^{\rm max}$ are the limits that define the range of $\zeta$ for the $i$-th window. Within each window of $\zeta$, a probability distribution $p_i(\zeta)$ is calculated in the simulation. The window potential width, $\Delta \zeta \equiv \zeta_i^{\rm max} - \zeta_i^{\rm min}$, is chosen to be sufficiently small that the variation in $F$ does not exceed $\approx 2k_{\rm B}T$. The windows are chosen to overlap with half of the adjacent window, such that $\zeta^{\rm max}_{i} = \zeta^{\rm min}_{i+2}$. The window width was typically in the range $\Delta \zeta = 0.1D-0.2D$. The SCH algorithm was employed to reconstruct the unbiased distribution, ${\cal P}(\zeta)$, from the $p_i(\zeta)$ histograms. The free energy follows from the relation $F(\zeta) = -k_{\rm B}T\ln {\cal P}(\zeta)+{\rm const}$. We choose the constant such that $F=0$ at $\zeta=\zeta_{\rm min}$, where $\zeta_{\rm min}$ is the location of the free-energy minimum.
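As a small worked example of this layout (illustrative values only): with $\Delta\zeta=0.2$ and the first window starting at $\zeta=0$, the half-overlapping windows are
\begin{verbatim}
width = 0.2
windows = [(0.5 * i * width, 0.5 * i * width + width) for i in range(6)]
# conceptually: [(0.0, 0.2), (0.1, 0.3), (0.2, 0.4), (0.3, 0.5), ...]
# windows[i][1] equals windows[i + 2][0] (up to floating point),
# i.e. zeta_i^max = zeta_{i+2}^min, as required.
\end{verbatim}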
A derivation of the histogram reconstruction method is described in Ref.~\citen{frenkel2002understanding}. A detailed description of applying the methodology to measure the polymer translocation free-energy function is described in detail in Ref.~\citen{polson2013simulation}.
In the results presented below, distances are measured in units of the ring diameter, $D$, and free energy is measured in units of $k_{\rm B}T$.
\section{Results and Discussion}
Figure~\ref{fig:Rscale} shows the scaling of the shape-tensor eigenvalues, $R_1^2$, $R_2^2$ and $R_3^2$ with respect to $L\equiv\sqrt{N}$, where $N$ is the total number of rings in the membrane. Since $N$ is approximately proportional to the average surface area of the membrane, $L$ is a rough measure of its span measured along the surface. Figure~\ref{fig:Rscale}(a) shows the scaling results for the HT membrane depicted in Fig.~\ref{fig:illust}(a), while Fig.~\ref{fig:Rscale}(b) shows results for the SS membrane illustrated in Fig.~\ref{fig:illust}(b). In each case, results for ring thickness of $w$=0 and $w$=0.1 are shown. In all cases, the eigenvalues exhibit power-law scaling. The solid and dashed lines are the best-fit curves for $R_i^2\propto L^{2\nu_i}$. The values of the scaling exponents $\nu_i$ are presented in Table~\ref{tab:1}.
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{fig3.png}
\caption{Scaling of eigenvalues of the shape tensor as a function of membrane size, $L\equiv\sqrt{N}$, where $N$ is the total number of rings in the membrane. (a) Results for HT membranes for ring thickness of $w$=0 and $w$=0.1. (b) Results for SS membranes for ring thickness of $w$=0 and $w$=0.1.}
\label{fig:Rscale}
\end{figure}
\begin{table}[!htp]
\begin{center}
\begin{tabular}[t]{c c c c c}
\hline\hline
& \multicolumn{2}{c}{HT} & \multicolumn{2}{c}{SS} \\
\hline
& $w$=0 & $w$=0.1 & $w$=0 & $w$=0.1 \\
\hline
~$\nu_1$ ~~ & ~0.95$\pm$0.01 ~ & ~ 0.99$\pm$ 0.01~ & ~0.92$\pm$0.02 ~ & ~0.97$\pm$0.02~\\
~$\nu_2$ ~~ & ~0.99$\pm$0.01 ~ & ~ 1.03$\pm$ 0.01~ & ~0.97$\pm$0.01 ~ & ~1.02$\pm$0.03~\\
~$\nu_3$ ~~ & ~0.79$\pm$0.02 ~ & ~ 0.84$\pm$ 0.01 & ~0.73$\pm$0.02 ~ & ~0.84$\pm$0.03~\\
\hline \hline
\end{tabular}
\end{center}
\caption{Scaling exponents, $\nu_i$, for the power-law fits to the data shown in Fig.~\ref{fig:Rscale} for the HT and SS membranes depicted in Fig.~\ref{fig:illust}(a) and (b), respectively.}
\label{tab:1}
\end{table}
The exponents $\nu_1$ and $\nu_2$ are typically close to unity for both HT and SS membranes, though deviations from this value are evident for membranes of zero thickness. The exponent $\nu_3$ describing the scaling of the smallest shape-tensor eigenvalue is somewhat lower than unity, as expected for a ``flat'' configuration. As for the other exponents, $\nu_3$ is also somewhat lower for $w$=0 than for $w$=0.1. Assuming that the observed scaling persists for larger membranes, the observation that $\nu_3 < \nu_1\approx\nu_2\approx 1$ suggests that the membrane is flat. In this phase, the membrane thickness grows with $L$, but at a slower rate than that of the lateral dimensions. Such behavior also characterizes self-avoiding covalent membranes, in which local excluded-volume interactions give rise to an effective bending rigidity that promotes membrane flatness.\cite{kantor1993excluded} The ``roughness'' exponent $\nu_3$ observed for linked-ring membranes here tends to be somewhat larger than that measured for covalent membranes.\cite{zhang1996molecular,gompper1997network,popova2007structure,popova2008anomalous}
Figure~\ref{fig:Rscale_width}(a) shows the variation of $R_i^2$ with ring thickness, $w$, for HT membranes of size $M$=11, as well as for SS membranes of size $M$=10. Both of the large eigenvalues, $R_1^2$ and $R_2^2$, increase monotonically with increasing $w$. This is indicative of an overall increase in the size of the membrane. However, as evident in the close-up for $R_3^2$ in Fig.~\ref{fig:Rscale_width}(b), the scaling of this eigenvalue is qualitatively different. In the case of SS membranes, this quantity increases slightly, though it appears to level off around $w\approx 0.18$. In the case of HT membranes the variation is weaker and non-monotonic, displaying a maximum near $w\approx 0.1$. Figure~\ref{fig:Rscale_width}(c) shows the variation of the membrane shape anisometry, $\eta\equiv (R_1^2+R_2^2)/2R_3^2$, with $w$. For the SS membrane, $\eta$ is mostly constant, except at $w\geq 0.15$, where there is a small increase. By contrast, the shape anisometry of the HT membrane increases significantly over the entire range of $w$.
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{fig4.png}
\caption{(a) Scaling of eigenvalues of the shape tensor as a function of ring thickness, $w$. Results are shown for a SS membrane of size $M$=10 and a HT membrane of size $M$=11. The solid and dashed curves are guides for the eye. (b) Close up of the data for $R_3^2$ from panel (a). (c) Membrane anisometry $\eta$ (defined in the text) vs ring thickness. }
\label{fig:Rscale_width}
\end{figure}
Figure~\ref{fig:pdist} provides further insight into the effects of varying the ring thickness using the case of a HT membrane of size $M$=11. Figures~\ref{fig:pdist}(a) and (b) show the probability distributions for center-to-center ring distance and angle between normal vectors, respectively, for pairs of linked rings. Generally, as $w$ increases, the ring-distance distribution narrows, and the normal vectors of linked rings are increasingly likely to be perpendicular to each other (this is the relative ring orientation depicted in the illustration of Fig.~\ref{fig:illust}(d)). Figures~\ref{fig:pdist}(c) and (d) show the same two distributions except between pairs of neighboring 6-valence rings. As in Fig.~\ref{fig:pdist}(a), the distance distribution narrows, but unlike the previous case, there is an additional shift toward greater distances as $w$ increases. This key result explains the increase in $R_1^2$ and $R_2^2$ with $w$ in Fig.~\ref{fig:Rscale_width}. The sharpening of the distribution around $\theta=0$ (i.e., $\cos\theta =1$) with increasing $w$ in Fig.~\ref{fig:pdist}(d) indicates that the orientations of neighboring 6-valence rings of the HT membrane are becoming increasingly aligned as the rings become thicker, likely effecting a reduction in membrane roughness.
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{fig5.png}
\caption{(a) Probability distribution for distance between centers of pairs of linked rings. Results are shown for a HT membrane of size $M$=11 for various values of $w$. (b) Distribution for $\cos\theta$, where $\theta$ is the angle between normal vectors for pairs of linked rings. The legend is same as in panel (a). (c) As in (a), except distributions for neighboring 6-valence rings. (d) As in (b), except $\theta$ is the angle between normal vectors of neighboring 6-valence rings.}
\label{fig:pdist}
\end{figure}
The results presented in Figs.~\ref{fig:Rscale}, \ref{fig:Rscale_width} and \ref{fig:pdist} {\it appear} to suggest that linked-ring membranes behave comparably to self-avoiding covalent membranes in the following manner. Independent of the membrane shape (square or hexagonal) or linking topology (square or triangular), the membrane is flat. Increasing ring thickness principally affects the pair distribution function of linked rings. This increases the mean center-to-center distance, thus increasing the membrane size quantified by $R_i^2$ in a manner analogous to increasing the tethering range of bound particles in a covalent membrane.
A much more interesting picture emerges, however, when we examine the membrane concavity, $\zeta$, as defined in Eq.~(\ref{eq:concavity}). Figure~\ref{fig:concavity_hist}(a) shows the time dependence of $\zeta$ for a HT membrane of size $M$=5 and ring thickness $w$=0.15. Generally, $\zeta$ tends to fluctuate about the two values of $\pm 0.13$, between which it infrequently executes rapid jumps. The corresponding probability distribution measured from an average of many such histories is shown in Fig.~\ref{fig:concavity_hist}(b). As expected, the distribution is symmetric about $\zeta=0$. In addition, it features two sharp peaks with maxima at $\pm 0.13$, whose widths are a measure of the magnitude of the fluctuations about these values. The probability at zero concavity is very low relative to the value at the maxima, consistent with the observation of infrequent and rapid transitions between the two states. Physically, this behavior corresponds to the presence of a concave membrane that periodically transitions to a new state where the concave side switches from one face to the other. Figure~\ref{fig:concavity_hist}(c) shows a snapshot of a membrane of size $M$=13 that clearly illustrates the concave shape, qualitatively similar to the concave shapes observed in microscopy studies of kinetoplasts.\cite{klotz2020equilibrium}
\begin{figure}[!ht]
\centering
\vspace*{0.2in}
\includegraphics[width=\columnwidth]{fig6ab-eps-converted-to.pdf}
\includegraphics[width=0.8\columnwidth]{fig6c.png}
\caption{(a) Time dependence of $\zeta$ from a single simulation for a HT membrane of size $M$=5 and ring thickness $w$=0.15. (b) Concavity probability distribution for the same system as in (a). (c) Snapshot illustrating a HT membrane with concave shape for a system with $M$=11 and $w$=0.15.}
\label{fig:concavity_hist}
\end{figure}
Let us now examine the properties of the concavity free-energy functions, $F(\zeta)/k_{\rm B}T=-\ln {\cal P}(\zeta)$. Figure~\ref{fig:Fzeta}(a)--(f) shows a collection of free-energy functions that illustrate the effects of lattice type, membrane size, and ring thickness. Note the free energy is measured in units of $k_{\rm B}T$. Since $F(\zeta)=F(-\zeta)$, we plot functions for $\zeta\geq 0$ without any loss of information. Figure~\ref{fig:Fzeta}(a) shows functions for HT membranes of various sizes, each for a fixed ring width of $w$=0.15. Two trends are evident. First, the most probable concavity, defined by the minimum in the free energy, $\zeta_{\rm min}$, increases with size. Second, the free energy barrier at a concavity of $\zeta=0$ increases with $M$. Thus, as the membrane size increases, it becomes increasingly unlikely that the membrane will spontaneously flip to the state with the opposite concavity. For sufficiently large size, the membrane effectively becomes locked into whichever state the system randomly selected at the beginning of the simulation. Figure~\ref{fig:Fzeta}(b) reveals that both the most probable concavity, $\zeta_{\rm min}$, and the free energy barrier height increase with increasing ring thickness. Thus, like increasing system size, increasing $w$ stabilizes the system by reducing the likelihood of concavity ``flips''.
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{fig7.png}
\caption{(a) Concavity free-energy functions for a HT membrane of ring thickness $w$=0.15. Results for various $M$ are shown. (b) As in (a), except for fixed size of $M$=11 and various $w$. (c) Free-energy functions for a SS membrane of ring thickness $w$=0.15 for various $M$. (d) As in (c) except for fixed size of $M$=11 and various $w$. (e) Free-energy functions for an ST membrane with $w$=0.15 and various $M_1$ and $M_2$. (f) As in (e), except for fixed $M_1$=10 and $M_2$=11 and various $w$.}
\label{fig:Fzeta}
\end{figure}
Figures~\ref{fig:Fzeta}(c) and (d) show the effects of varying membrane size and ring thickness, respectively, for the SS membranes illustrated in Fig.~\ref{fig:illust}(b). Likewise, Figs.~\ref{fig:Fzeta}(e) and (f) show corresponding results for the ST membrane illustrated in Fig.~\ref{fig:illust}(c). The trends are mostly qualitatively consistent with those for HT membranes. In each case, there is a free-energy barrier centered at $\zeta$=0, as well as a free-energy minimum located at $\zeta_{\rm min}$, both of which tend to increase with membrane size and ring thickness. However, there are quantitative differences in the results for the HT and SS membranes. Most notably, the SS-membrane barrier height, $\Delta F\equiv F(0)-F(\zeta_{\rm min})$, is smaller than that of the HT membrane and appears to level off with membrane size and thickness. By contrast, the trends for the ST membrane are much closer to the those of the HT membrane. Specifically, $\Delta F$ increases monotonically with membrane size and ring thickness, with values considerably greater than those of SS membranes of comparable size.
Figures~\ref{fig:zeta_min}(a) and (b) show a comparison of the variation of $\zeta_{\rm min}$ and $\Delta F$ with membrane size for different membranes. As a rough measure of membrane size for HT membranes, we use the sum of the areas of the triangles formed by the 6-valence rings (see Fig.~\ref{fig:illust}(a)). The resulting membrane area is $A=c^2 l_{\rm b}^2(M-1)^2$, where $l_{\rm b}$ is the measured root-mean-square distance between neighboring 6-valence rings, and where $c^2=3\sqrt{3}/8$. A comparable measure of area for the SS membranes depicted in Fig.~\ref{fig:illust}(b) has the same form but with $c^2=1$. Likewise, for the ST membranes of Fig.~\ref{fig:illust}(c), $c^2=\sqrt{3}/2$.
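For the hexagonal case this prefactor follows from elementary geometry (a short check): a regular hexagon of side $s$ has area $\tfrac{3\sqrt{3}}{2}s^2$, and the corner-to-corner span of the membrane corresponds to $2s=(M-1)l_{\rm b}$, so that
\[
A = \frac{3\sqrt{3}}{2}\left(\frac{(M-1)\,l_{\rm b}}{2}\right)^2 = \frac{3\sqrt{3}}{8}\,l_{\rm b}^2 (M-1)^2,
\]
consistent with $c^2=3\sqrt{3}/8$.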
Figure~\ref{fig:zeta_min}(a) shows that $\zeta_{\rm min}$ varies linearly with $A$ for each of the three membrane types. At any given $A$, the values are comparable for the different membranes. Figure~\ref{fig:zeta_min}(b) shows the monotonic increase of $\Delta F$ with $A$ for HT and ST membranes. By contrast, the much smaller barrier for SS membranes increases only for $A\lesssim 150$, following which it levels off.
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{fig8.png}
\caption{(a) Variation of $\zeta_{\rm min}$ with membrane area $A$, and (b) variation of the barrier height $\Delta F$ with $A$ for the three types of membrane shown in Fig.~\ref{fig:illust}. The definition of $A$ is given in the text. In each case, the ring width is $w$=0.15. (c) Variation of $\zeta_{\rm min}$ with $w$, and (d) variation of $\Delta F$ with $w$. Results are shown for the HT and SS membranes, each of size $M=11$, and for ST membranes of size $M_1$=10 and $M_2$=11.}
\label{fig:zeta_min}
\end{figure}
Figures~\ref{fig:zeta_min}(c) and (d) show the variation of $\zeta_{\rm min}$ and $\Delta F$, respectively, with ring thickness $w$. Results are shown for HT and SS membranes, each of size $M$=11, as well as for ST membranes of size $M_1$=10 and $M_2$=11. For each membrane, $\zeta_{\rm min}$ varies linearly with $w$. Note that the areas of the membranes at any given $w$ differ slightly, which may account for the somewhat larger values for the ST membranes. The most notable trend in Fig.~\ref{fig:zeta_min}(d) is the qualitative difference between the results for the SS membrane and those for the other membrane types. The leveling off of the barrier height for $w\gtrsim$0.15 for SS membranes stands in contrast to the continuing increase for HT and ST membranes.
A crude explanation for the linear scaling of $\zeta_{\rm min}$ with $A$ follows from employing a simple {mathematical} model in which the membrane is approximated as a small portion of a spherical surface with a uniform mass density. As described in the appendix, this model predicts an approximately linear relation between $\zeta$ and $A$, the area of the concave surface. A linear fit to the predicted curve, shown in Fig.~\ref{fig:zeta_A_illust}, yields a slope that is comparable to, though slightly greater than the value measured for the HT membrane in Fig.~\ref{fig:zeta_min}(a) for the case of $w$=0.15. The roughness of the membrane (clearly not accounted for in the smooth surface depicted in the inset of Fig.~\ref{fig:zeta_A_illust}) likely accounts in part for the small quantitative discrepancy.
{A complementary measure of the degree of membrane concavity is the Gaussian curvature, $\kappa_{\rm G}$.
The Gaussian curvature is easily calculated at any node on a triangular mesh using the
method described by Meyer {\it et al.}\cite{meyer2003discrete} This can be conveniently
applied to an HT membrane, where each 6-valence ring is essentially a node in a triangular
mesh. We have carried out such calculations for an HT membrane and measured the mean $\kappa_{\rm G}$.
The details of the procedure are described in Section~IV of the ESI.\dag~ The insets of
Fig.~\ref{fig:gaussian}(a) and (b) show the variation of $\kappa_{\rm G}$ with ring thickness, $w$,
and with membrane size, $M$, respectively. The main part of each panel of the figure shows the
variation of the characteristic length $R_{\rm c}\equiv 1/\sqrt{\kappa_{\rm G}}$ with $w$ and $M$.
The results are illuminating. Perhaps the most notable point is the fact that $\kappa_{\rm G} > 0$
for all systems measured. The positive Gaussian curvature indicates that the membrane is indeed
concave, as suggested by the results for $\zeta$ above.
In Fig.~\ref{fig:gaussian}(a) we note that $\kappa_{\rm G}$ and
$R_{\rm c}$ are only weakly dependent on the ring thickness, except at $w=0$, where the curvature
is notably lower. By contrast, Fig.~\ref{fig:gaussian}(b) shows that the curvature rapidly decreases
with increasing membrane size. Remarkably, $R_{\rm c}$ increases linearly with $M$. This trend
has a straightforward interpretation if the membrane is modeled as a portion of a spherical
surface as in the inset of Fig.~\ref{fig:zeta_A_illust}. In this picture $M$ is proportional to the
diameter of the membrane as measured along its surface, and the quantity $R_{\rm c}$ is the radius
of the underlying sphere. The proportionality $R_{\rm c}\propto M$ then implies that the membrane
grows in a manner that fixes the angle $\theta_0$, defined in the figure. In this sense,
the membrane shape is preserved as both membrane diameter and radius of curvature co-increase.
}
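For reference, the node-based estimate has the familiar angle-deficit form (our paraphrase of the construction in Ref.~\citen{meyer2003discrete}):
\[
\kappa_{\rm G} = \frac{2\pi-\sum_j \theta_j}{A_{\rm mixed}},
\]
where the $\theta_j$ are the angles subtended at the node by its incident mesh triangles and $A_{\rm mixed}$ is the mixed area assigned to the node.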
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{fig9.pdf}
\caption{{(a) Variation of $R_{\rm c}$ ($\equiv 1/\sqrt{\kappa_{\rm G}}$) with ring
thickness for a HT membrane of fixed size $M=9$. The inset shows the
corresponding variation of the Gaussian curvature, $\kappa_{\rm G}$, with $w$.
(b) Variation of $R_{\rm c}$ with membrane size $M$ for a HT membrane of
fixed ring thickness $w=0.15$. The inset shows the
corresponding variation of the Gaussian curvature with $M$.}}
\label{fig:gaussian}
\end{figure}
Note that the presence of intrinsic curvature complicates the simple interpretation, based on the scaling of $R_i^2$ in Fig.~\ref{fig:Rscale}, that the membrane is flat, i.e. that the lateral dimensions of the membrane as measured along the surface grow faster than its thickness with increasing system size. As a result of this curvature the exponents $\nu_1$ and $\nu_2$ are expected to be somewhat lower than unity, while the roughness exponent, $\nu_3$, is expected to be larger than a value determined solely by out-of-plane fluctuations in the membrane shape. To estimate the magnitude of these effects we employ again the spherical-surface model described above. As shown in the appendix, this simple model predicts minimal effect on the values of $\nu_1$ and $\nu_2$. In addition, it suggests that the contribution to $R_3^2$ is small compared to that from the effects of membrane roughness. Consequently, it is reasonable to interpret the scaling results as implying linked-ring membranes are flat.
The trends evident in Figs.~\ref{fig:Fzeta} and \ref{fig:zeta_min} suggest that concavity appears to be an intrinsic property of the simple membranes composed of rigid interlocking circular rings examined here. The growth of the free-energy barrier $\Delta F$ with membrane area for membranes with triangular linking topology (HT and ST membranes) suggests that a concave configuration is increasingly preferred as membranes of this type grow in size. The situation is qualitatively different, however, for membranes with a square linking topology (SS membranes). In this case the barrier is much smaller and approaches a constant value as $A$ increases, and thus, the tendency toward concavity is much weaker. The similarity of the behavior for HT and ST membranes suggests the qualitatively different results for the SS membrane is not due to the shape of the membrane (square, rather than hexagonal), but rather its linking topology (triangular, rather than square). It seems that the ``tighter'' triangular linking topology (i.e. a higher linking valence) underlies the spontaneous formation of stable concave structures. Increasing the ring thickness has the combined effect of swelling the size of the membrane, as noted in Fig.~\ref{fig:Rscale_width}, and increasing the tendency toward concave shapes, as noted in Fig.~\ref{fig:zeta_min}(d).
It is instructive to compare the concavity of the linked-ring membranes to that of covalent membranes of a type examined in previous studies. As an example, consider a membrane composed of hard spherical particles of diameter $\sigma$ connected through a fixed network of tethers, each to a small number of neighboring particles. The interaction energy between tethered particles is zero, unless the particles overlap or the distance between them exceeds some limit, $b$, in which case it is infinite. Such self-avoiding athermal membranes are known to be flat. Using this model we have carried out simulations of hexagonal membranes with the triangular tethering network. This is analogous to the HT membrane shown in Fig.~\ref{fig:illust}(a), with the 6-valence rings replaced by hard spheres, and the 2-valence rings effectively replaced by the tethers. To make the comparison meaningful, we choose the parameter values of $\sigma$ and $b$ such that the mean and variance of the distance between tethered particles match those of the distance between the centers of the 6-valence rings in the linked-ring membrane. Figure~\ref{fig:Fzeta_covalent} shows concavity free-energy functions for linked-ring membranes of ring thickness $w$=0.15 and the corresponding covalent membranes for various $M$. As expected, $F(\zeta)$ for the covalent membrane shows a negligible barrier for any membrane size, in stark contrast to the linked-ring membranes. As well, the functions for covalent membranes rise steeply at much smaller $\zeta$ than those for linked-ring membranes.
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{fig10.png}
\caption{Concavity free-energy functions for hexagonal-shaped membranes. Results for various sizes are shown for a HT membrane with ring thickness $w$=0.15 and for an equivalent covalent membrane, as defined in the text. The dashed black curves show results for modified HT membranes where excluded-volume and linking interactions were considered only for pairs of rings that share a linking partner.}
\label{fig:Fzeta_covalent}
\end{figure}
The results shown in Fig.~\ref{fig:Fzeta_covalent} highlight the fact that the spontaneous concavity of small linked-ring membranes does not arise simply from the presence of excluded-volume interactions, which are also present for the covalent membrane. Instead, it seems to be a consequence of the form of the anisotropy in these interactions; that is, the excluded volume depends strongly on the relative orientation of any pair of non-linked rings, as does the range of the accessible center-to-center separation distance between linked rings. A loose analogy is the spontaneous entropy-driven orientational ordering in colloidal liquid crystals arising from the orientational anisotropy in the excluded-volume interaction between elongated colloidal particles.\cite{vroege1992phase}
As noted in the introduction, self-avoiding covalent membranes are flat as a result of {\it local} excluded-volume interactions, which give rise to an effective bending rigidity, while interaction sites separated by longer distances (as measured along the membrane) play no significant role.\cite{kantor1993excluded} To determine whether the concavity of linked-ring membranes likewise arises from {\it local} anisotropic excluded-volume interactions, we carry out a calculation for a linked-ring system modified as follows. For pairs of rings that are either linked or which have a common linking partner, the tests for linking and overlap are implemented as before. For all other pairs of rings, the tests for linking and overlap are ignored in the MC algorithm. This means that pairs of rings separated by two or more links are permitted to overlap, for example in a ``taco'' configuration. The simulations were carried out for HT membranes of various sizes for $w$=0.15. The calculated free-energy functions are overlaid as the dashed black curves in Fig.~\ref{fig:Fzeta_covalent}. The functions for the modified system are virtually identical to those for the original membranes, with only a small increase in $F$ for low $\zeta$. We conclude that membrane concavity does indeed arise from {\it local} interactions between rings.
As noted earlier, a key motivation for the present study is to provide some insight into the observed properties of kinetoplasts, structures consisting of thousands of interlinking circular DNA molecules. Two simulation results stand out as possibly relevant for kinetoplasts. The first is the monotonic increase in the membrane size with increasing ring thickness, evident in Figs.~\ref{fig:Rscale} and \ref{fig:Rscale_width}. By comparison, Soh {\it et al.} found that kinetoplasts increased in size with the effective width of the DNA rings, which is controlled by varying the ionic strength of the solvent.\cite{soh2020ionic} The second result is the emergence of intrinsic curvature giving rise to the concave structures, as seen for example in the snapshot of Fig.~\ref{fig:concavity_hist}, which is also a property of kinetoplasts. The other naturally occurring planar catenated network, the capsid of the HK97 virus, is also found in a strongly curved state.\cite{zhou2015protein} Although curved surfaces can easily be constructed from flat surfaces through various means, such as a tailor-like procedure of cutting and removing wedges\cite{grosberg2020human} or through a purse-string mechanism, it is surprising and notable that the simple linking networks such as those shown in Fig.~\ref{fig:illust} exhibit this property.
Some caveats are in order here. To manage the computational cost, our simulation model is necessarily simplistic and ignores numerous features of the real system whose effects may well be non-negligible. For example, the kinetoplast DNA mini-circles have a typical contour length of several Kuhn lengths and are thus flexible objects, in contrast to the rigid circular rings in the model. In addition, the membrane linking structures shown in Fig.~\ref{fig:illust} are somewhat arbitrary, though convenient, choices. An alternative and perhaps more realistic approach might be to select a random linking topology in the manner of Ref.~\citen{diao2015orientation}. Finally, we note the relatively small size of the membranes employed in the simulation. Even the largest model membranes contain far fewer rings than kinetoplasts. This number is also far smaller than the typical number of nodes or particles used in simulations of covalent membranes. Unfortunately, simulation of larger catenated networks is not computationally feasible at present. Consequently, it remains an open question as to whether the observed trends will persist for much larger membranes. Still, future simulations may eventually remedy these limitations by using more realistic models, and we view the present simulation study as an important first step toward understanding the behavior of catenated networks such as kinetoplasts.
\section{Conclusions}
In this study, we use Monte Carlo simulations to examine the statistical properties of ``membranes'' composed of 2D networks of linked rings. This work is largely inspired by recent experiments studying the physical properties of kinetoplasts, chain-mail-like structures found in the mitochondria of trypanosome parasites consisting of thousands of catenated circular DNA molecules. To keep the simulations computationally feasible, we employ a highly simplified model using hard, rigid, circular rings that are linked together in a regular lattice pattern, and consider membranes that are effectively much smaller than kinetoplasts. Generally, the scaling of the average membrane dimensions with system size suggests that the networks are flat, in the sense that the lateral dimensions grow much faster than the membrane thickness. Increasing ring thickness tends to swell the membrane, qualitatively consistent with observations from kinetoplast experiments. Remarkably, we find that the membranes tend to form concave structures that qualitatively resemble the shapes observed in kinetoplasts. This feature is of entropic origin and arises from {\it local} anisotropic excluded-volume interactions between rings. The degree of concavity increases with ring thickness and tends to be more pronounced in networks with a higher linking valence.
Future work will focus on refining the model to make it better resemble the experimental system. Two relevant features to incorporate are flexibility of the rings and a random linking topology with the ``correct'' mean valence. It will be of interest to determine whether the observed membrane concavity is affected by such changes. Another topic of interest is the effect of holes on the conformational properties of linked-ring membranes, a feature recently studied in the context of covalent membranes.\cite{yllanes2017thermal} A longer-term goal is developing a more computationally efficient coarse-grained membrane model using the measured properties of the present model system. Such a model would effectively facilitate simulations of much larger membranes that better resemble kinetoplasts.
\section{Introduction}
The next-generation Exascale supercomputers will be available to the community sometime in the 2021-2023 timeframe and will be capable of delivering up to $10^{18}$ floating point operations per second to support traditional HPC compute-intensive applications. With the emergence of new HPC data centric applications, such as workflows and data analytics workloads, the definition of Exascale computing is now broadening to include the storage and processing of on the order of an exabyte of data. In fact, Big Data analysis and HPC are converging, as massive data sets, such as very large volumes of data from scientific instruments and sensors, need to be processed, analyzed and integrated into simulations. For this reason, we envision Exascale supercomputers that can be exploited by applications and workflows for science and technological innovation.
Computing infrastructure innovation has been driven by Moore's law and the development of ever more parallelism with multi-core, many-core and accelerator processing to accommodate the increasing performance requirements of Exascale. However, I/O and storage have lagged far behind this improvement in computational power. Storage performance over the same period is predicted to have improved by only a factor of 100, according to early estimates provided by Vetter et al.~\cite{vetter2009hpc}. In fact, at the time of publication of this work, the performance of disk drives per unit capacity is actually decreasing, with new very high capacity disk drives on the horizon~\cite{vetter2009hpc}. Simultaneously, the landscape for storage is changing with the emergence of new storage device technologies, such as flash (available today) and the promise of non-volatile memory technologies available in the near future~\cite{peng2016exploring, peng2017exploring}. The optimal use of these devices (starting with flash) in the I/O hierarchy, combined with existing disk technology, is only beginning to be explored in HPC~\cite{FastForward} with burst buffers~\cite{liu2012role}.
The SAGE project is a European Commission funded project to investigate Exascale computing~\cite{narasimhamurthy2018sage}. It supports a storage centric view of Exascale computing by proposing hardware that supports multi-tiered I/O hierarchies, together with the associated software; together these provide a demonstrable path towards Exascale. Further, SAGE proposes a radical approach in extreme scale HPC by moving traditional computations, typically done in the compute cluster, to the storage system. This has the potential of significantly reducing the energy footprint of the overall system~\cite{reed2015exascale}.
The primary objective of this paper is to present the initial hardware and software architecture of the SAGE system. The paper is organized as follows. Section~\ref{sec-challenges} presents the initial development of the SAGE platform. Section~\ref{sec-architecture} describes the SAGE platform architecture and software stack, followed by an overview of the applications used to validate the system. Section~\ref{sec-relwork} discusses the related work. Finally, Section~\ref{sec-conclusions} summarizes the paper and outlines the future work.
\section{A Storage Centric Architecture}
\label{sec-challenges}
The SAGE storage system developed by the SAGE consortium provides a unique paradigm to store, access and process data in the realm of extreme-scale data centric computing.
The SAGE platform consists of multiple tiers of storage device technologies. At the bottom of the stack is the \emph{Unified Object-Based Storage Infrastructure}. The system does not require any specific storage device technology type and accommodates upcoming NVRAM, existing flash and disk tiers. For the NVRAM tier, we are using Intel 3D XPoint technology~\cite{bourzac2017has} in our \emph{Tier-1}. We will also use emulated NVDIMMs (Non-Volatile DIMMs) in Tier-1 because of the lack of NVDIMM availability in vendor roadmaps. We are using flash based solid state drives in \emph{Tier-2}. Serial Attached SCSI high performance drives are contained in \emph{Tier-3}, and archival grade, high capacity, slow disks (based on Serial ATA and Shingled Magnetic Recording) are contained in \emph{Tier-4}. These tiers are all housed in standard form factor enclosures that provide their own computing capability through standard x86 embedded processing components, which are connected through an FDR Infiniband network. Moving up the system stack, compute capability increases for the faster and lower latency device tiers. The storage system is also capable of performing computation in storage (either through a function shipping interface or a run-time supporting, e.g., pre-/post-processing of data) on behalf of the applications. This avoids the need to move data back and forth between compute subsystems and storage subsystems as in a typical HPC cluster.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{SAGEhardware.pdf}
\caption{SAGE supercomputer prototype that has been installed at the J\"ulich Supercomputing Center in 2017.}
\label{fig:Figure1}
\end{center}
\end{figure}
A SAGE prototype system has been developed by Seagate and other SAGE consortium partners and is installed at the J\"ulich Supercomputing Center. Figure \ref{fig:Figure1} shows the SAGE prototype consisting of the four tiers:
\begin{itemize}
\item Tier-1: PCIe-attached NVMe SSDs based on NAND Flash or 3D XPoint memory
\item Tier-2: SAS-attached SSDs based on NAND Flash
\item Tier-3: High performance disks
\item Tier-4: Archival grade disks.
\end{itemize}
\section{SAGE Software Stack}
\label{sec-architecture}
Together with the development of SAGE storage centric prototype platform, we designed and developed software capable of taking advantage of the SAGE infrastructure. A simplified diagram of the SAGE software stack is presented in Figure \ref{fig:Figure2}.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{SAGE-Software.pdf}
\caption{SAGE Software Stack.}
\label{fig:Figure2}
\end{center}
\end{figure}
\subsection{Mero}
Mero is at the base of the SAGE software stack, providing the Exascale-capable object storage infrastructure that drives the storage hardware \cite{danilov2016mero}. Mero is a distributed object storage platform. The Lustre file system, NFSv4 and database technology are the main sources of inspiration for the Mero design. An emerging consensus is that traditional file system properties (hierarchical directory namespace, strong POSIX consistency guarantees, etc.) pose a major obstacle for the performance and scalability of Exascale systems.
Mero controls a cluster of nodes connected by a network. Some nodes have persistent storage attached to them. Mero distinguishes various types of persistent stores including rotational drives, SAS-attached non-volatile memory devices and PCIe-/memory bus attached non-volatile memory devices. Mero presents a collection of services to applications.
The Mero object store has a ``core'' providing scalable, re-writable, fault-tolerant data objects; an index store with scalable key-value indices; and resource management capabilities for caches, locks, extents, etc.
\emph{Extreme scale} features such as containers and function shipping are built on top of the core. Mero also features a distributed transaction manager, another extreme scale feature built on top of the core, which makes it possible to use the other services in a consistent manner in the face of hardware and software failures.
Mero also has an extension mechanism, called the FDMI (File Data Manipulation Interface), which allows the addition of new functionality without modification to the core. New services such as Information Lifecycle Management~(ILM), indexing, search, etc.\ can hence be built as extensions by third parties leveraging the core.
A Mero application can be a traditional user space application running standalone on a node, a large MPI job running on multiple nodes, a cluster management utility monitoring the state of the system, an NFS/CIFS daemon (a front-end) exporting Mero objects to non-Mero clients, etc.
\textbf{Container abstraction.} Containers are the main way of grouping objects. They virtualize the object name space by providing the capability to label objects in various ways. There can be containers based on data formats (for example, HDF5 containers) and containers based purely on performance (for example, high performance containers whose objects are mapped to the higher tiers). It is possible to perform operations such as function shipping and pre-/post-processing on a given container.
\textbf{High Availability (HA) System.} Existing systems and data~\cite{failures} indicate that we can expect many hardware failures per second at Exascale, in addition to software failures resulting in crashed nodes. To maintain service availability in the face of expected failures, a global state (or configuration) of the cluster is maintained. This may need to be modified by means of repair procedures in response to failure events. The HA subsystem for SAGE will perform such automated repair activities within storage device tiers. The HA subsystem thus monitors failure events (inputs) throughout the storage tiers and then decides to take action based on the collected events.
\textbf{Distributed Transaction Management (DTM).} In Mero, all I/O and metadata operations are, ultimately, organized into transactions. Transactions are atomic with respect to failures. In other words, either all or none of the updates corresponding to a transaction are visible to other users. This property is known as atomicity. A related property is failure atomicity, i.e. either all or none of the updates corresponding to a transaction survive a failure. A group of updates that must atomically survive a failure is called a distributed transaction.
Mero implements a Distributed Transaction Manager (DTM) that guarantees efficient management of system state consistency in an environment in which dependent data and metadata are scattered over multiple nodes to provide fault tolerance and scalability. DTM provides the necessary interface to group a collection of object store updates into a distributed transaction and guarantees that, in the event of a server node failure and restart, the effects of distributed transactions that have updates for the affected server are either completely restored after restart or completely eliminated.
\textbf{Layouts.}
A layout is a mapping of different parts or regions of an object to storage tiers. Each object has a layout attribute that defines how the object is stored in the cluster. Mero provides a flexible, hierarchical description to map object subcomponents to physical locations, or storage tiers. This mapping allows for compact formulaic expressions, as well as data transformations, such as erasure coding, de-duplication, encryption and compression. Layouts also describe data redundancy models, like simple replication or Server Network Striping. As an example of a layout, an object can have some extents mapped to Tier-1, other extents mapped to Tier-2 and a few others mapped to Tier-3. Further, each set of extents mapped to a certain tier can have its own ``sub-layout''.
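As a purely illustrative sketch, one can picture a layout as a list of extent-to-tier mappings along the following lines (these are hypothetical C declarations for exposition, not Mero's actual data structures):
\begin{verbatim}
#include <stdint.h>

/* Illustrative only: a hypothetical encoding of a layout,
   not the actual Mero representation. */
enum tier { TIER1_NVM = 1, TIER2_SSD, TIER3_HDD, TIER4_ARCHIVE };
enum redundancy { REPLICATION, SERVER_NETWORK_STRIPING };

struct extent_map {
    uint64_t offset;            /* start of the extent in the object */
    uint64_t length;            /* extent length in bytes            */
    enum tier tier;             /* storage tier holding this extent  */
    enum redundancy redundancy; /* per-extent redundancy model       */
};

struct layout {
    uint64_t object_id;
    unsigned n_extents;
    struct extent_map extents[]; /* one entry per extent mapping     */
};
\end{verbatim}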
\textbf{Function Shipping.} Function shipping in Mero provides the ability to run application functions directly on storage nodes. This addresses one of the big bottlenecks foreseen for Exascale systems, which is the overhead of moving data to computations. Indeed, moving very large quantities of data from storage to compute is extremely energy intensive, and energy is one of the prime concerns to address for Exascale systems. Well defined functions within the use cases are registered on the storage nodes and are invoked by the use cases using remote procedure calls. Function shipping is accomplished through extensions of the Mero Clovis API (see next section).
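Schematically, function shipping amounts to registering a well-defined function on the storage side and invoking it by name over RPC, so that only a small result crosses the network. The following self-contained C sketch simulates this in-process; all names are hypothetical and do not correspond to the actual Clovis extension API:
\begin{verbatim}
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint64_t (*shipped_fn)(const unsigned char *data, size_t len);

static struct { const char *name; shipped_fn fn; } registry[8];
static int n_registered;

/* Registration on the "storage node" (hypothetical). */
static void storage_register(const char *name, shipped_fn fn)
{
    registry[n_registered].name = name;
    registry[n_registered++].fn = fn;
}

/* "RPC": look up the named function and run it where the data lives. */
static int storage_call(const char *name, const unsigned char *data,
                        size_t len, uint64_t *result)
{
    for (int i = 0; i < n_registered; i++)
        if (strcmp(registry[i].name, name) == 0) {
            *result = registry[i].fn(data, len);
            return 0;
        }
    return -1;
}

/* A well-defined use-case function: a trivial checksum reduction. */
static uint64_t checksum64(const unsigned char *data, size_t len)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < len; i++) sum += data[i];
    return sum;
}

int main(void)
{
    static unsigned char object_data[1 << 20]; /* a "large" object */
    uint64_t sum;
    storage_register("checksum64", checksum64);
    /* Only the 8-byte result is moved, not the megabyte of data. */
    if (storage_call("checksum64", object_data, sizeof object_data,
                     &sum) == 0)
        printf("checksum = %llu\n", (unsigned long long)sum);
    return 0;
}
\end{verbatim}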
\textbf{Advanced Views.} Modern storage systems are increasingly heterogeneous. This means not only a multitude of data sources and data formats, but also a multitude of applications accessing shared data sets with different access patterns, a multitude of storage policies (retention, access control, provenance, etc.) and a multitude of conflicting goals that the storage system must balance (e.g., latency vs. throughput across different storage hardware).
Historically, providing an interface that allowed different applications to access shared data often resulted in great benefits both for application and system developers, the two most famous examples being UNIX "everything-is-a-file" and relational models. Data stored in Mero are accessible to multiple applications using various existing storage interfaces (e.g., POSIX, pNFS, S3, HDF5). A component of Mero called \emph{Lingua Franca} (LF) implements common meta-data formats and interfaces that enables interoperability between multiple external interfaces and internal meta-data users.
LF is a mechanism to share the same sets of storage entities (objects, indices and containers) between multiple applications with different access interfaces. Its straightforward use is to provide interoperability between different front-ends. Together with other Mero features, like containers and function shipping, it can be used to implement a very flexible access arrangement to the data.
\subsection{Clovis}
Mero's application programming interface (API), known as Clovis, provides a library of functions that applications and front-end programs can use for accessing and manipulating storage resources in Mero. Access to storage resources by outside applications is strictly controlled via Clovis; no other interfaces exist. The Clovis API contains optimized functions to manage performance and scalability for modern extreme scale computing applications as well as legacy applications. We expect higher-level applications, as part of development work related to accessing Mero storage resources, to build their own APIs on top of Clovis. In other words, we do not expect computational scientists to directly interface with the Clovis interface. Higher-level interfaces include HDF5, POSIX via pNFS, and others.
The Clovis API is implemented in the C programming language, but equivalent versions of the API are planned in other popular programming languages such as Java, Python, etc. The Clovis API supports asynchronous transactions, i.e. an application can continue after starting a transaction and only later check for completion of the transaction. For clarity, the API is divided into three sub-APIs: the Clovis Access API, the Clovis Management API and the Clovis Extension API.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{Clovis.pdf}
\caption{Clovis abstractions.}
\label{fig:Figure3}
\end{center}
\end{figure}
Clovis provides the following abstractions, shown in Figure~\ref{fig:Figure3}. Short descriptions of the abstractions are provided below:
\begin{itemize}
\item Object is an array of fixed size blocks of data
\item Index is a key-value store
\item An entity is an object or an index
\item Realm is a spatial and temporal part of the system with a prescribed access discipline
\item Operation is a process of querying and/or updating the system state
\item Objects, indices and operations live in realms
\item Transaction is a collection of operations that are atomic in the face of failure
\item Epoch is a collection of operations done by an application that moves the system from one application-consistent state to another
\item Container, in the Mero context, is a collection of objects used by a particular application or group of applications
\end{itemize}
The Clovis Access API handles all I/O related functions for applications. In addition to standard initialization and finalization functions related to running a Clovis instance, the API provides \textsf{create()}, \textsf{write()}, \textsf{read()}, and \textsf{free()} functions that enable applications to create Mero objects, transfer data to and from them, and then delete them (completely freeing the resources the objects use).
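As a rough illustration of this access pattern, the following C fragment exercises a simplified, hypothetical Clovis-style interface, backed here by an in-memory stub so that the example runs end to end; the real Clovis declarations, asynchronous operation handling and error handling differ:
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Hypothetical, simplified object handle (illustrative only). */
typedef struct { uint64_t id; char *data; size_t size; } obj_t;

static int clovis_create(uint64_t id, size_t size, obj_t **out)
{
    obj_t *o = malloc(sizeof *o);          /* create the object     */
    o->id = id; o->size = size; o->data = calloc(1, size);
    *out = o;
    return 0;
}
static int clovis_write(obj_t *o, size_t off, const void *buf, size_t len)
{
    memcpy(o->data + off, buf, len);       /* data to the object    */
    return 0;
}
static int clovis_read(obj_t *o, size_t off, void *buf, size_t len)
{
    memcpy(buf, o->data + off, len);       /* data from the object  */
    return 0;
}
static int clovis_free(obj_t *o)
{
    free(o->data); free(o);                /* release all resources */
    return 0;
}

int main(void)
{
    obj_t *obj;
    char in[32];
    clovis_create(42, 4096, &obj);
    clovis_write(obj, 0, "hello, object store", 20);
    clovis_read(obj, 0, in, 20);
    printf("%s\n", in);
    return clovis_free(obj);
}
\end{verbatim}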
The Clovis Management API handles all management-related functions in Mero, including system configuration, management of services, and analytics as well as diagnostics functions.
The Clovis Extension API provides a list of compute functions that support the development of third-party Mero plug-ins without modifying the core. The FDMI (File Data Manipulation Interface) is an example for how to use this feature.
\subsection{Higher-Level I/O Interfaces}
At the top of the software stack, we further develop widely-used HPC legacy APIs, such as MPI and HDF5, to exploit the SAGE architecture.
\textbf{PGAS I/O.} The goal of the Partitioned Global Address Space (PGAS) programming model is to provide processes with a global view of the memory and storage space during the execution of a parallel application. This is similar to what a Shared Memory model provides in a multithreaded local environment. In the PGAS approach, remote processes from different nodes can easily collaborate and access memory addresses through load / store operations that do not necessarily belong to their own physical memory space. In SAGE, we propose an extension to the MPI one-sided communication model to support window allocations in storage: MPI storage windows~\cite{rivas2017extending,rivas2017mpi}. Our objective is to define a seamless extension to MPI to support current and future storage technologies without changing the MPI standard, allowing it to target either files (i.e., for local and remote storage through a parallel file system) or to address block devices directly (i.e., as in DRAM). We propose a novel use of MPI windows, a part of the MPI process memory that is exposed to other MPI remote processes, to simplify the programming interface and to support high-performance parallel I/O without requiring the use of MPI I/O. Files on storage devices appear to users as MPI windows (MPI storage windows) and are seamlessly accessed through familiar \textsf{PUT} and \textsf{GET} operations. Details about the semantics of operations on MPI storage windows and the implementation are provided in Ref.~\cite{rivas2017mpi}.
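A minimal C sketch of an MPI storage window follows. The window calls are standard MPI; the two info keys, which request that the window be backed by a file on storage, are illustrative of the kind of hints used by the prototype in Ref.~\cite{rivas2017mpi} and are not part of the MPI standard:
\begin{verbatim}
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Win  win;
    MPI_Info info;
    double  *base;   /* window memory, mapped to storage */

    MPI_Init(&argc, &argv);

    /* Hint that the window should live on storage rather than
       DRAM (the info keys are an assumption, see the text above). */
    MPI_Info_create(&info);
    MPI_Info_set(info, "alloc_type", "storage");
    MPI_Info_set(info, "storage_alloc_filename", "/mnt/tier2/win.dat");

    MPI_Win_allocate(1024 * sizeof(double), sizeof(double), info,
                     MPI_COMM_WORLD, &base, &win);

    /* Ordinary one-sided accesses now move data to/from storage. */
    double x = 42.0;
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
    MPI_Put(&x, 1, MPI_DOUBLE, 0, 0, 1, MPI_DOUBLE, win);
    MPI_Win_unlock(0, win);

    MPI_Win_free(&win);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
\end{verbatim}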
\textbf{MPI Streams for Post-Processing and Parallel I/O.} While PGAS I/O library addresses the challenge of heterogenous storage and memory, streams can be used to support function-shipping for post-processing and highly scalable parallel I/O. {\em Streams} are a continuous sequence of fine-grained data structures that move from a set of processes, called data {\em producers}, to another set of processes, called data {\em consumers}. These fine-grained data structures are often small in size and in a uniform format, called a {\em stream element}. A set of computations, such as post-processing and I/O operations, can be {\em attached} to a data stream. Stream elements in a stream are processed {\em online} such that they are discarded as soon as they are {\em consumed} by the attached computation.
In particular, our work in SAGE focuses on {\em parallel streams}, where data producers and consumers are distributed among processes that require communication to move data. To achieve this, we have developed a stream library, called MPIStream, to support post-processing and parallel I/O operations on MPI consumer processes~\cite{peng2017mpi, peng2017preparing,markidis2016performance}. More details about MPI streams operation semantics and MPIStream implementation are provided in Ref.~\cite{peng2015data}.
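For illustration only, the following self-contained C program mimics the stream abstraction with plain MPI point-to-point calls rather than the MPIStream API itself: rank 0 produces fixed-size stream elements, and rank 1 consumes each element online, discarding it as soon as the attached computation has processed it.
\begin{verbatim}
#include <mpi.h>
#include <stdio.h>

#define ELEM_DOUBLES 8    /* fixed-size stream element */
#define NUM_ELEMS    100

int main(int argc, char **argv)
{
    int rank, size;
    double elem[ELEM_DOUBLES];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) { MPI_Finalize(); return 1; } /* need 2 ranks */

    if (rank == 0) {                 /* data producer */
        for (int i = 0; i < NUM_ELEMS; i++) {
            for (int j = 0; j < ELEM_DOUBLES; j++)
                elem[j] = i + 0.1 * j;        /* generate element */
            MPI_Send(elem, ELEM_DOUBLES, MPI_DOUBLE, 1, 0,
                     MPI_COMM_WORLD);
        }
    } else if (rank == 1) {          /* data consumer */
        double sum = 0.0;
        for (int i = 0; i < NUM_ELEMS; i++) {
            MPI_Recv(elem, ELEM_DOUBLES, MPI_DOUBLE, 0, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            for (int j = 0; j < ELEM_DOUBLES; j++)
                sum += elem[j];               /* attached computation */
            /* element discarded here: it is never stored */
        }
        printf("consumed stream, sum = %f\n", sum);
    }
    MPI_Finalize();
    return 0;
}
\end{verbatim}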
\textbf{HDF5.} Typically, data formats in HPC provide their own libraries to describe data structures and their relations (including I/O semantics). The HDF5 data format needs to be supported in SAGE and is layered directly on top of Clovis. HDF5 will use its Virtual Object Layer infrastructure (used to interface HDF5 with various object formats) to interface with Clovis.
\subsection{Tools}
The following is a set of tools, sitting at the top of the SAGE software stack, for I/O profiling and optimized data movement across the different SAGE platform tiers.
\textbf{Data Analytics Tools.} Apache Flink is a framework for data analytics workloads. Flink connectors for Clovis are currently under development to enable the deployment of data analytics jobs on top of Mero.
\textbf{Parallel File System Access.} Parallel file system access is the traditional method of accessing storage in HPC. Many of the SAGE applications and use cases need the support of POSIX compliant storage access. This access is provided through the pNFS gateway built on top of Clovis. However, pNFS requires some POSIX semantics to be developed by leveraging Mero's KVS. For instance, an abstraction of namespaces on top of Mero objects is needed.
\textbf{Hierarchical storage management and Data Integrity.} In SAGE, Hierarchical Storage Management (HSM) is used to control the movement of data across the SAGE hierarchies based on data usage. Advanced integrity checking overcomes some of the drawbacks of well known and widely used file system consistency checking schemes.
\textbf{ARM Forge.} ADDB telemetry records from the Clovis management interface are directly fed to the ARM Forge performance report tools, which report overall system performance for SAGE.
\section{Validation with Applications}
As seen in the previous section, the SAGE platform supports appropriate scientific computing data formats and legacy application interfaces such as parallel file systems and POSIX. SAGE also needs to interface with emerging big data analytics applications (on top of the API) to access the rich features of these tools and to handle the (potentially) high volume, velocity and variety of data coming from sensors, instruments and simulations. We have created a portfolio of scientific data-centric applications that have been used to provide requirements to the development of the SAGE system and to validate the developments in the project. The applications we chose for the SAGE project are:
\begin{itemize}
\item iPIC3D is a parallel Particle-in-Cell Code for space physics simulations in support of NASA and ESA missions \cite{markidis2010multi, peng2015energetic, peng2015formation}.
\item NEST is a spiking neural network simulator to study brain science \cite{gewaltig2007nest}.
\item Ray is a parallel meta-genome assembler \cite{boisvert2012ray}.
\item JURASSIC is a fast radiative transfer model simulation code for the mid-infrared spectral region \cite{griessbach2013scattering}.
\item EFIT++ is a plasma equilibrium fitting code with application to nuclear fusion \cite{lupelli2015efit++}.
\item The ALF code performs analytics on data consumption log files.
\item Spectre is a visualization tool providing near real time feedback on plasma and other operational conditions in fusion devices.
\item Savu is a code for tomography reconstruction and processing pipeline \cite{wadeson2016savu}.
\end{itemize}
\section{Related Work}
\label{sec-relwork}
To the best of our knowledge, SAGE is the first HPC-enabled storage system to implement new NVRAM tiers, flash tiers and disk drive tiers as part of a single unified storage system. The SAGE architecture progresses the state of the art from Blue Gene Active Storage~\cite{fitch2010blue} and Dash~\cite{he2010dash}, which use flash for data staging. It also progresses the state of the art from Burst Buffer technologies as discussed earlier.
When compared to the FastForward Project~\cite{FastForward}, SAGE greatly simplifies storage, developing a solution for deep I/O hierarchies that includes NVRAM technologies. A major difference between SAGE and FastForward is that the FastForward solution is evolutionary, as it tries to make use of an existing storage solution, namely Lustre~\cite{schwan2003lustre}, which has been in use for the last 20 years or so. Lustre was really designed for the previous era, when use cases and architectural assumptions were different. On the other hand, SAGE and Mero are the product of a complete redesign in consideration of the new requirements arising out of the extreme scale computing community.
Mero, the object store in SAGE, extends the state of the art in existing object storage software such as Ceph~\cite{weil2006ceph} and OpenStack Swift~\cite{swift} by building the Exascale components required for extreme scale computing. While Ceph and OpenStack Swift are designed mainly to support cloud storage, Mero is built to meet the needs of the extreme scale computing community.
\section{Conclusions}
\label{sec-conclusions}
The SAGE project objective is to design and implement an I/O system capable of supporting the I/O workloads of Exascale supercomputers. The SAGE platform has recently been installed at the J\"ulich Supercomputing Center. It supports a multi-tiered I/O hierarchy and associated software stack, to provide a demonstrable path towards Big Data analytics and HPC convergence. The SAGE software stack consists of four main software layers: the Seagate Mero object storage, the Clovis API, high-level interfaces and tools.
Current ongoing work focuses on the performance characterization of various new NVRAM device technologies. We are also currently investigating the lower level software and Operating System~(OS) infrastructure requirements, below Mero in the SAGE stack, needed to exploit these new device types. We clearly recognize that the various NVRAM technologies have their own performance characteristics and limitations. New NVRAM technologies can become part of the SAGE hardware tiers based on where they ultimately fall on the performance and capacity curve. Indeed, the SAGE stack and Mero are designed to be agnostic of storage device types as long as adaptations are in place within the OS.
The next steps will be to quantify the benefits of the various features of the SAGE stack on the SAGE prototype system currently installed at the J\"ulich Supercomputing Center, with focus on providing results for the remaining SAGE components and the SAGE architecture as a whole. As a part of this, external organizations outside the SAGE consortium (e.g., from climate and weather, astronomy, etc.) will soon be granted access to study how their codes and workflows can exploit the features of the SAGE platform. We will then look at extrapolation studies of the benefits of the various SAGE features at Exascale through analytical and simulation models. These will be discussed separately. Porting of the SAGE stack to other sites and extensions of the SAGE prototype are also planned. We are targeting SAGE work to be a part of the European Extreme Scale Demonstrators~\cite{esd}, which will be pre-Exascale prototypes.
\section*{Acknowledgements}
The authors acknowledge that the SAGE work is being performed by a consortium of members consisting of Seagate (UK), Bull ATOS (France), ARM (UK), KTH (Sweden), STFC (UK), CCFE (UK), Diamond (UK), DFKI (Germany), Forschungszentrum J\"ulich (Germany) and CEA (France), which is being represented by the authors in this paper.
Funding for the work is received from the European Commission H2020 program, Grant Agreement No. 671500 (SAGE).
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
This is a paper about idealized truthful mechanical knowing agents
who know facts in a quantified arithmetic-based language that also includes a
connective for their own knowledge ($K(1+1=2)$ is read ``I (the agent)
know $1+1=2$'').
It is well known (\cite{benacerraf},
\cite{carlson2000}, \cite{lucas}, \cite{penrose}, \cite{putnam}, \cite{reinhardt})
that such an agent cannot simultaneously know its own truthfulness and its own code.
Reinhardt conjectured that,
while knowing its own truthfulness, such a machine can
know it has \emph{some} code, without knowing which.
This conjecture was proved by Carlson \cite{carlson2000}.
The proof uses sophisticated structural results from \cite{carlson1999} about the ordinals,
and involves transfinite induction up to $\epsilon_0\cdot\omega$.
We will give a proof of a weaker result, but will do so in an
elementary way, inducting only as far as $\omega\cdot\omega$.
Along the way, we will develop some machinery that is interesting in its own right.
Carlson's proof of Reinhardt's conjecture is based on stratifying knowledge
(see \cite{carlson2012} for a gentle summary).
This can be viewed as adding operators $K^\alpha$ for knowledge
after time $\alpha$ where $\alpha$ takes ordinal values.
Under certain assumptions, theories in such stratified language \emph{collapse}
at positive integer multiples of $\epsilon_0$, in the sense that if $\phi$
only contains superscripts $<\epsilon_0\cdot n$ ($n$ a positive integer)
then $K^{\epsilon_0\cdot n}\phi$ holds if and only if $K^{\epsilon_0\cdot(n+1)}\phi$ does.
In this paper, collapse occurs at positive integer multiples of $\omega$, hence the name:
\emph{Fast-collapsing theories}.
Our result is weakened in the sense that the background theory of knowledge is weakened.
The schema $K(\ucl{K(\phi\rightarrow\psi)\rightarrow K\phi\rightarrow K\psi})$ ($\mathrm{ucl}$ denotes
universal closure)
is weakened by adding the requirement that $K$ not be nested deeper in $\phi$ than in $\psi$
(the unrestricted schema $\ucl{K(\phi\rightarrow \psi)\rightarrow K\phi\rightarrow K\psi}$
is preserved, but the knower is not required to \emph{know} it);
the schema $\ucl{K\phi\rightarrow KK\phi}$ is forfeited entirely;
and a technical axiom called Assigned Validity (made up of valid formulas with numerals
plugged in to their free variables) is added to the background theory of knowledge.
On the bright side, our result is stated in a more general way (we mention in passing
how the full unweakened result could also be so generalized, but leave those details for later work).
Casually, our main theorem has the following form:
\begin{quote}
A truthful knowing agent whose knowledge is sufficiently ``generic''
can be taught its own truthfulness and still remain truthful.
\end{quote}
Here ``generic'' is a specific technical term, but it is
inclusive enough to include knowledge that one has some code,
thus the statement addresses Reinhardt's conjecture.
In Section \ref{prelimsect} we present some preliminaries.
In Section \ref{stratifierssectn} we develop \emph{stratifiers}, maps from unstratified language to
stratified language.
These are the key to fast collapse. They debuted in
\cite{alexanderdissert} and \cite{alexanderjsl}.
In Section \ref{uniformsect} we discuss \emph{uniform} stratified theories.
A key advantage of stratifiers is that
they turn unstratified theories into uniform stratified theories.
In Section \ref{genericaxiomssectn} we define some notions of genericity of an
axiom schema, and establish the genericity of some building blocks of background
theories of knowledge.
In Section \ref{mainresultsect} we state our main theorem and make closing remarks.
\section{Preliminaries}
\label{prelimsect}
\begin{definition}
\label{standarddefns}
(Standard Definitions)
Let $\L_{\mathrm{PA}}$ be the language $(0,S,+,\cdot)$ of Peano arithmetic and let $\mathscr{L}$ be an arbitrary language.
\begin{enumerate}
\item For any $e\in\mathbb{N}$, $W_e$ is the range of the $e$th partial computable function.
The binary predicate $\bullet\in W_\bullet$ is $\L_{\mathrm{PA}}$-definable so we will freely act
as if $\L_{\mathrm{PA}}$ actually contains this predicate symbol.
\item If an $\mathscr{L}$-structure $\mathscr{M}$ is clear from context, an \emph{assignment}
is a function taking variables into the universe of $\mathscr{M}$.
\item If $s$ is an assignment, $x$ is a variable, and $a\in\mathscr{M}$, $s(x|a)$ is the assignment
that agrees with $s$ except that $s(x|a)(x)=a$.
\item
We define $\L_{\mathrm{PA}}$-terms $\overline n$ ($n\in\mathbb{N}$), called \emph{numerals}, so that $\overline 0=0$
and
$\overline{n+1}=S(\overline n)$.
\item If $\phi$ is an $\mathscr{L}$-formula, $\mathrm{FV}(\phi)$ is the set of free variables of $\phi$.
If $\mathrm{FV}(\phi)=\emptyset$ then $\phi$ is a \emph{sentence}.
\item If $\phi$ is an $\mathscr{L}$-formula, $x$ is variable, and $u$ is an $\mathscr{L}$-term,
$\phi(x|u)$ is the result of substituting $u$ for all free occurrences of $x$ in $\phi$.
\item A \emph{universal closure} of an $\mathscr{L}$-formula $\phi$ is a sentence $\forall x_1\cdots \forall x_n\phi$.
We write $\ucl{\phi}$ to denote a universal closure of $\phi$.
\item We use the word \emph{theory} as synonym for \emph{set of sentences}.
\item If $T$ is an $\mathscr{L}$-theory and $\mathscr{M}$ is an $\mathscr{L}$-structure, $\mathscr{M}\models T$ means that $\mathscr{M}\models\phi$ for all $\phi\in T$.
\item If $T$ is an $\mathscr{L}$-theory, we say $T\models\phi$ if $\mathscr{M}\models\phi$ whenever $\mathscr{M}\models T$.
\item A \emph{valid} $\mathscr{L}$-formula is one that holds in every $\mathscr{L}$-structure.
\item For any formulas $\phi_1,\phi_2,\phi_3$, we write $\phi_1\rightarrow\phi_2\rightarrow\phi_3$ to abbreviate
$\phi_1\rightarrow(\phi_2\rightarrow\phi_3)$.
\end{enumerate}
\end{definition}
We will repeatedly use the following standard fact
without explicit mention: if $\psi$ is a universal closure of $\phi$,
then in order to prove $\mathscr{M}\models\psi$, it suffices to let $s$ be an arbitrary assignment and
show that
$\mathscr{M}\models\phi[s]$.
For quantified semantics we work in Carlson's \emph{base logic}, defined as follows.
\begin{definition}
\label{baselogicdefn}
(The Base Logic)
A \emph{language $\mathscr{L}$ in the base logic} is a first-order language $\mathscr{L}_0$ together with a set
of symbols called \emph{operators}.
Formulas of $\mathscr{L}$ are defined in the usual way, with the clause that whenever $\phi$ is an $\mathscr{L}$-formula
and $K$ is an $\mathscr{L}$-operator, $K\phi$ is also an $\mathscr{L}$-formula (and $\mathrm{FV}(K\phi)=\mathrm{FV}(\phi)$).
Syntactic parts of Definition \ref{standarddefns} extend to the base logic in obvious ways.
Given such an $\mathscr{L}$, an \emph{$\mathscr{L}$-structure} $\mathscr{M}$ is a first-order $\mathscr{L}_0$-structure $\mathscr{M}_0$
together with a function that takes one $\mathscr{L}$-formula $\phi$, one $\mathscr{L}$-operator $K$, and one assignment $s$,
and outputs True or False---in which case we write $\mathscr{M}\models K\phi[s]$ or $\mathscr{M}\not\models K\phi[s]$, respectively---satisfying
the following three conditions (where $\phi$ ranges over $\mathscr{L}$-formulas and $K$ ranges over operators):
\begin{enumerate}
\item Whether or not $\mathscr{M}\models K\phi[s]$ is independent of $s(x)$ if $x\not\in\mathrm{FV}(\phi)$.
\item (Alphabetic Invariance) If $\psi$ is an \emph{alphabetic variant} of $\phi$, meaning that it is obtained from $\phi$ by renaming bound
variables while respecting binding of the quantifiers, then $\mathscr{M}\models K(\phi)[s]$ if and only if $\mathscr{M}\models K(\psi)[s]$.
\item (Weak Substitution)\footnote{Note that the general
substitution law, where $y$ is replaced by an arbitrary term,
is not valid in modal logic.} If the variable $y$ is substitutable for the variable $x$ in $\phi$,
then $\mathscr{M}\models K\phi(x|y)[s]$ if and only if $\mathscr{M}\models K\phi[s(x|s(y))]$.
\end{enumerate}
\end{definition}
\begin{theorem}
\label{completenesscompactness}
(Completeness and compactness)
Let $\mathscr{L}$ be an r.e.~language in the base logic.
\begin{enumerate}
\item The set of valid $\mathscr{L}$-formulas is r.e.
\item For any r.e.~$\mathscr{L}$-theory $T$, $\{\phi\,:\,T\models\phi\}$ is r.e.
\item There is an effective algorithm, given (a G\"odel number for) an r.e.~$\mathscr{L}$-theory $T$, to find
(a G\"odel number for) $\{\phi\,:\,T\models\phi\}$.
\item If $T$ is an $\mathscr{L}$-theory and $T\models\phi$ ($\phi$ any $\mathscr{L}$-formula), there
are $\tau_1,\ldots,\tau_n\in T$ such that $\left(\bigwedge_i\tau_i\right)\rightarrow\phi$ is valid.
\end{enumerate}
\end{theorem}
\begin{proof}
By interpreting the base logic in first-order logic. For details,
see \cite{alexanderdissert}.
\end{proof}
\begin{definition}
Let $\L_{\mathrm{EA}}$ be the language of Epistemic Arithmetic from
\cite{shapiro1985}, so $\L_{\mathrm{EA}}$
extends $\L_{\mathrm{PA}}$ by a unary operator $K$.
An $\L_{\mathrm{EA}}$-structure (more generally an $\mathscr{L}$-structure
where $\mathscr{L}$ extends $\L_{\mathrm{PA}}$) has \emph{standard first-order part} if its first-order part has universe $\mathbb{N}$ and
interprets $0,S,+,\cdot$ in the intended ways.
\end{definition}
\begin{definition}
Suppose $\mathscr{L}$ extends $\L_{\mathrm{PA}}$ and $\phi$ is an $\mathscr{L}$-formula with $\mathrm{FV}(\phi)\subseteq\{x_1,\ldots,x_n\}$.
For any assignment $s$ into $\mathbb{N}$,
we define
\[
\phi^s\equiv \phi(x_1|\overline{s(x_1)})\cdots (x_n|\overline{s(x_n)}),
\]
the sentence obtained by replacing all free variables in $\phi$ by numerals according to $s$.
\end{definition}
\begin{definition}
\label{defnofintendedmodel}
For any $\L_{\mathrm{EA}}$-theory $T$,
the intended structure for $T$ is the $\L_{\mathrm{EA}}$-structure $\mathscr N_T$
that has standard first-order part and interprets $K$ so that for any $\L_{\mathrm{EA}}$-formula $\phi$ and assignment $s$,
\[
\mbox{$\mathscr N_T\models K\phi[s]$ if and only if $T\models\phi^s$.}
\]
We say $T$ is \emph{true} if $\mathscr N_T\models T$.
\end{definition}
It is easy to check that the structures $\mathscr N_T$ of Definition \ref{defnofintendedmodel}
really are $\L_{\mathrm{EA}}$-structures (they satisfy Conditions 1--3 of Definition \ref{baselogicdefn}).
The following lemma shows that they accurately interpret
quantified formulas in the way one would expect.
\begin{lemma}
For any $\L_{\mathrm{EA}}$-theory $T$, $\L_{\mathrm{EA}}$-formula $\phi$ and assignment $s$,
\[
\mbox{$\mathscr N_T\models\phi[s]$ if and only if $\mathscr N_T\models \phi^s$.}
\]
\end{lemma}
\begin{proof}
Straightforward induction.
\end{proof}
Armed with these definitions, we can make more precise some things
we suggested in the introduction.
Let $T_{\text{SMT}}$ be the following $\L_{\mathrm{EA}}$-theory
($\phi$ and $\psi$ range over $\L_{\mathrm{EA}}$-formulas):
\begin{enumerate}
\item ($E_1$) $\ucl{K\phi}$ whenever $\phi$ is valid.
\item ($E_2$) $\ucl{K(\phi\rightarrow\psi)\rightarrow K\phi\rightarrow K\psi}$.
\item ($E_3$) $\ucl{K\phi\rightarrow\phi}$.
\item ($E_4$) $\ucl{K\phi\rightarrow KK\phi}$.
\item The \emph{axioms of Epistemic Arithmetic}, by which we mean the axioms of Peano Arithmetic
with the induction schema extended to $\L_{\mathrm{EA}}$.
\item (Mechanicalness) $\ucl{\exists e \forall x(K\phi\leftrightarrow x\in W_e)}$
provided $e\not\in\mathrm{FV}(\phi)$.
\item $K\phi$ whenever $\phi$ is an instance of lines 1--6 or (recursively) 7.
\end{enumerate}
Combining lines 6 and 7 yields the \emph{Strong Mechanistic Thesis},
$K(\ucl{\exists e \forall x(K\phi\leftrightarrow x\in W_e)})$.
One of the main results of \cite{carlson2000} is that $T_{\text{SMT}}$ is true, that is, $\mathscr N_{T_{\text{SMT}}}\models T_{\text{SMT}}$.
To establish $\mathscr N_{T_{\text{SMT}}}\models E_3$,
Carlson uses transfinite recursion up to $\epsilon_0\cdot \omega$, as well
as deep structural properties (from \cite{carlson1999}) about the ordinals.
That $\mathscr N_{T_{\text{SMT}}}$ satisfies lines 2, 5, 6, and 7 is trivial; that it satisfies
line 4 follows from the fact that it satisfies lines 1--2.
Line 1 would be trivial if we added the following line to $T_{\text{SMT}}$:
\begin{itemize}
\item[1b.] (Assigned Validity) $\phi^s$, whenever $\phi$ is valid and $s$ is any assignment.
\end{itemize}
Theorems from \cite{carlson2000} imply Assigned Validity is already a consequence of $T_{\text{SMT}}$,
so this addition is not necessary; however, it becomes necessary if (say) line 2 is weakened.
The main result in this paper is that by weakening $E_2$, removing $E_4$, and adding Assigned Validity, we remove the
need to induct up to $\epsilon_0\cdot\omega$. Induction up to $\omega\cdot\omega$ suffices,
and the computations from \cite{carlson1999} can also be avoided. This is surprising because
we do not weaken $E_3$, the lone schema for which such sophisticated
methods were used before.
\begin{definition}
\label{depthdefn}
For any $\L_{\mathrm{EA}}$-formula $\phi$, let $\mathrm{depth}(\phi)$
denote the depth to which $K$ operators are nested in $\phi$, more formally:
\begin{itemize}
\item If $\phi$ is an $\L_{\mathrm{PA}}$-formula then $\mathrm{depth}(\phi)=0$.
\item If $\phi\equiv K(\phi_0)$ then $\mathrm{depth}(\phi)=\mathrm{depth}(\phi_0)+1$.
\item If $\phi\equiv (\rho\rightarrow\sigma)$ then $\mathrm{depth}(\phi)=\max\{\mathrm{depth}(\rho),\mathrm{depth}(\sigma)\}$.
\item If $\phi\in \{(\neg\phi_0), (\forall x\phi_0)\}$ then $\mathrm{depth}(\phi)=\mathrm{depth}(\phi_0)$.
\end{itemize}
\end{definition}
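For example, if $\phi\equiv K\forall x(K(x=0)\rightarrow x=0)$, then unwinding the definition gives
\begin{align*}
\mathrm{depth}(\phi)&=\mathrm{depth}(\forall x(K(x=0)\rightarrow x=0))+1\\
&=\max\{\mathrm{depth}(K(x=0)),\mathrm{depth}(x=0)\}+1\\
&=\max\{1,0\}+1=2.
\end{align*}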
Now let $T^w_{\text{SMT}}$ be the $\L_{\mathrm{EA}}$-theory containing the following schemas:
\begin{enumerate}
\item $E_1$ and $E_3$.
\item Assigned Validity: $\phi^s$ whenever $\phi$ is valid and $s$ is any assignment.
\item ($E'_2$) $\ucl{K(\phi\rightarrow\psi)\rightarrow K\phi\rightarrow K\psi}$
provided $\mathrm{depth}(\phi)\leq\mathrm{depth}(\psi)$.
\item The axioms of Epistemic Arithmetic.
\item Mechanicalness.
\item $K\phi$ whenever $\phi$ is an instance of lines 1--5 or (recursively) 6.
\end{enumerate}
Our main result (obtained by inducting only up to $\omega\cdot\omega$)
will imply $T^w_{\text{SMT}}$ is true.
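To illustrate the restriction in $E'_2$: taking $\phi\equiv(0=0)$ and $\psi\equiv K(0=0)$ gives the instance
\[
K(0=0\rightarrow K(0=0))\rightarrow K(0=0)\rightarrow KK(0=0),
\]
since $\mathrm{depth}(\phi)=0\leq 1=\mathrm{depth}(\psi)$; swapping $\phi$ and $\psi$ yields
\[
K(K(0=0)\rightarrow 0=0)\rightarrow KK(0=0)\rightarrow K(0=0),
\]
which is an instance of $E_2$ but not of $E'_2$, since $\mathrm{depth}(K(0=0))=1>0=\mathrm{depth}(0=0)$.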
\section{Stratifiers}
\label{stratifierssectn}
\begin{definition}
Let $\L_{\omega\cdot\omega}$ be the language obtained from $\L_{\mathrm{PA}}$ by adding operators $K^\alpha$ for all $\alpha\in\omega\cdot\omega$.
For any $\L_{\omega\cdot\omega}$-formula $\phi$, let
\[\mathrm{On}(\phi) = \{\alpha\in\omega\cdot\omega\,:\,\mbox{$K^\alpha$ occurs in $\phi$}\}.\]
\end{definition}
An example of an $\L_{\omega\cdot\omega}$-formula: $\forall x(K^{\omega}K^{\omega\cdot 7+2}K^{53}K^0(x=0)\rightarrow K^{\omega\cdot 7+3}(x=0))$.
\begin{definition}
\label{stratifierdefn}
(Stratifiers)
For any infinite subset $X\subseteq\omega\cdot\omega$, the \emph{stratifier given by $X$}
is the function $\bullet^+$ that takes $\L_{\mathrm{EA}}$-formulas to $\L_{\omega\cdot\omega}$-formulas in the following way.
\begin{enumerate}
\item If $\phi$ is atomic, $\phi^+\equiv\phi$.
\item If $\phi$ is $\phi_1\rightarrow\phi_2$, $\neg\phi_1$, or $\forall x\phi_1$, then
$\phi^+$ is $\phi^+_1\rightarrow\phi^+_2$, $\neg\phi^+_1$, or $\forall x\phi^+_1$, respectively.
\item If $\phi$ is $K\phi_0$, then $\phi^+\equiv K^\alpha\phi^+_0$ where
$\alpha$ is the smallest ordinal in $X\backslash\mathrm{On}(\phi^+_0)$.
\end{enumerate}
By a \emph{stratifier}, we mean a stratifier given by some $X$.
By the \emph{veristratifier}, we mean the stratifier given by $X=\{\omega\cdot 1,\omega\cdot 2,\ldots\}$.
If $\bullet^+$ is a stratifier and $T$ is an $\L_{\mathrm{EA}}$-theory,
$T^+$ denotes $\{\phi^+\,:\,\phi\in T\}$.
\end{definition}
For example, if $\bullet^+$ is the veristratifier, then
\[
\left(K(1=0)\rightarrow KK(1=0)\right)^+ \,\equiv\, K^\omega(1=0)\rightarrow K^{\omega\cdot 2}K^\omega(1=0).
\]
\begin{lemma}
\label{assignmentplayswellwithstratifier}
Suppose $\phi$ is an $\L_{\mathrm{EA}}$-formula, $s$ is an assignment into $\mathbb{N}$,
and $\bullet^+$ is a stratifier.
If $\alpha,\beta\in\omega\cdot\omega$ are such that $(K\phi)^+\equiv K^\alpha\phi^+$
and $(K\phi^s)^+\equiv K^\beta(\phi^s)^+$, then $\alpha=\beta$.
\end{lemma}
\begin{proof}
By induction.
\end{proof}
\begin{lemma}
\label{depthandstratifier}
Suppose $\phi$ and $\psi$ are $\L_{\mathrm{EA}}$-formulas and $\bullet^+$ is a stratifier.
Let $\alpha,\beta\in\omega\cdot\omega$ be such that $(K\phi)^+\equiv K^\alpha\phi^+$
and $(K\psi)^+\equiv K^\beta\psi^+$.
Then $\mathrm{depth}(\phi)<\mathrm{depth}(\psi)$ if and only if $\alpha<\beta$.
\end{lemma}
\begin{proof}
By induction.
\end{proof}
\begin{definition}
For any $\L_{\omega\cdot\omega}$-structure $\mathscr{M}$ and stratifier $\bullet^+$, let $\mathscr{M}^+$ be the $\L_{\mathrm{EA}}$-structure
that has the same universe and interpretation of $\L_{\mathrm{PA}}$ as $\mathscr{M}$, and that interprets $K$ so that
for any $\L_{\mathrm{EA}}$-formula $\phi$ and assignment $s$,
\[
\mbox{$\mathscr{M}^+\models K\phi[s]$ if and only if $\mathscr{M}\models (K\phi)^+[s]$.}
\]
\end{definition}
It is easy to check that if $\mathscr{M}$ is an $\L_{\omega\cdot\omega}$-structure then $\mathscr{M}^+$ really is an $\L_{\mathrm{EA}}$-structure (it satisfies
Conditions 1--3 of Definition \ref{baselogicdefn}).
From now on we will suppress this remark when defining new structures.
\begin{lemma}
\label{structuregrowingmagic}
Let $\mathscr{M}$ be an $\L_{\omega\cdot\omega}$-structure, $\bullet^+$ a stratifier. For any $\L_{\mathrm{EA}}$-formula $\phi$ and assignment $s$,
\[
\mbox{$\mathscr{M}^+\models\phi[s]$ if and only if $\mathscr{M}\models \phi^+[s].$}
\]
\end{lemma}
\begin{proof}
A straightforward induction.
\end{proof}
\begin{definition}
For any $\L_{\omega\cdot\omega}$-formula $\phi$, $\phi^-$ is the $\L_{\mathrm{EA}}$-formula obtained by changing every operator of the form $K^\alpha$ in $\phi$
into $K$.
If $T$ is an $\L_{\omega\cdot\omega}$-theory, $T^-=\{\phi^-\,:\,\phi\in T\}$.
\end{definition}
\begin{example}
$\left(K^{\omega\cdot 8+3}\forall x K^{17}(x=y)\right)^- \,\equiv\, K\forall x K(x=y).$
\end{example}
\begin{lemma}
\label{obviouslemma}
Let $\bullet^+$ be a stratifier.
For any $\L_{\mathrm{EA}}$-formula $\phi$, $(\phi^+)^-\equiv\phi$.
\end{lemma}
\begin{proof}
Straightforward.
\end{proof}
\begin{definition}
If $\mathscr{M}$ is an $\L_{\mathrm{EA}}$-structure, let $\mathscr{M}^-$ be the $\L_{\omega\cdot\omega}$-structure that has the same universe as $\mathscr{M}$,
agrees with $\mathscr{M}$ on $\L_{\mathrm{PA}}$, and interprets each $K^\alpha$
so that for any $\L_{\omega\cdot\omega}$-formula $\phi$ and assignment $s$,
\[
\mbox{$\mathscr{M}^-\models K^\alpha\phi[s]$ if and only if $\mathscr{M}\models K\phi^-[s]$.}
\]
\end{definition}
In \cite{carlson2000} (Definition 5.4), $\mathscr{M}^-$ is the \emph{stratification of $\mathscr{M}$ over $\omega\cdot\omega$}.
\begin{lemma}
\label{structureshrinkingmagic}
For any $\L_{\mathrm{EA}}$-structure $\mathscr{M}$, $\L_{\omega\cdot\omega}$-formula $\phi$ and assignment $s$,
\[
\mbox{$\mathscr{M}^-\models\phi[s]$ if and only if $\mathscr{M}\models\phi^-[s]$.}
\]
\end{lemma}
\begin{proof}
A straightforward induction.
\end{proof}
\begin{theorem}
\label{stratifiersrespectvalidity}
\begin{enumerate}
\item For any valid $\L_{\omega\cdot\omega}$-formula $\phi$, $\phi^-$ is valid.
\item
For any $\L_{\mathrm{EA}}$-formula $\phi$ and stratifier $\bullet^+$, $\phi$ is valid if and only if $\phi^+$ is valid.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) Let $\phi$ be a valid $\L_{\omega\cdot\omega}$-formula.
For any $\L_{\mathrm{EA}}$-structure $\mathscr{M}$ and assignment $s$, since $\phi$ is valid, $\mathscr{M}^-\models\phi[s]$
and so by Lemma \ref{structureshrinkingmagic}, $\mathscr{M}\models\phi^-[s]$.
By arbitrariness of $\mathscr{M}$ and $s$, $\phi^-$ is valid.
\par
(2, $\Rightarrow$)
Assume $\phi$ is a valid $\L_{\mathrm{EA}}$-formula.
For any $\L_{\omega\cdot\omega}$-structure $\mathscr{M}$ and assignment $s$, since $\phi$ is valid, $\mathscr{M}^+\models\phi[s]$,
and $\mathscr{M}\models\phi^+[s]$ by Lemma \ref{structuregrowingmagic}. By arbitrariness of $\mathscr{M}$ and $s$,
this shows $\phi^+$ is valid.
\par
(2, $\Leftarrow$)
Assume $\phi$ is an $\L_{\mathrm{EA}}$-formula and $\phi^+$ is valid.
For any $\L_{\mathrm{EA}}$-structure $\mathscr{M}$ and assignment $s$, since $\phi^+$ is valid, $\mathscr{M}^-\models\phi^+[s]$,
and $\mathscr{M}\models (\phi^+)^-[s]$ by Lemma \ref{structureshrinkingmagic}.
By Lemma \ref{obviouslemma}, $\mathscr{M}\models\phi[s]$. By arbitrariness of $\mathscr{M}$ and $s$,
$\phi$ is valid.
\end{proof}
\begin{definition}
\label{oplusdefn}
For any $\L_{\mathrm{EA}}$-theory $T$, let
\[
T^\oplus=\{\phi^+\,:\,\mbox{$\phi\in T$ and $\bullet^+$ is a stratifier}\}.
\]
\end{definition}
\begin{example}
Suppose $T$ is the $\L_{\mathrm{EA}}$-theory consisting of $K\phi\rightarrow KK\phi$ for all $\L_{\mathrm{PA}}$-sentences $\phi$.
Then $T^\oplus$ is the $\L_{\omega\cdot\omega}$-theory consisting of
$K^\alpha\phi\rightarrow K^\beta K^\alpha\phi$ for all $\L_{\mathrm{PA}}$-sentences $\phi$ and ordinals $\alpha<\beta<\omega\cdot\omega$.
\end{example}
\begin{theorem}
\label{proofstrat}
(Upward proof stratification)
For any $\L_{\mathrm{EA}}$-theory $T$, $\L_{\mathrm{EA}}$-sentence $\phi$, and stratifier $\bullet^+$,
the following are equivalent.
\begin{align*}
\mbox{1. $T\models\phi$.} && \mbox{2. $T^+\models\phi^+$.} && \mbox{3. $T^\oplus\models\phi^+$.}
\end{align*}
\end{theorem}
This theorem is so-named because it is an upside-down version of a harder theorem
that we called \emph{proof stratification} in \cite{alexanderdissert}.
In non-upward proof stratification, $T$ and $\phi$ are taken in the \emph{stratified} language
and the theorem states that $T\models\phi$ if and only if $T^-\models\phi^-$.
This uses complicated hypotheses on $T$ and $\phi$.
Versions of these hypotheses could be stated in an elementary way,
but \emph{a priori} they might imply $T$ is inconsistent (in which case Theorem \ref{proofstrat}
is trivial). The only way we know to exhibit consistent theories that satisfy such hypotheses
is to exploit the machinery from \cite{carlson1999} on the $\Sigma_1$-structure of the ordinals.
\begin{proof}[Proof of Theorem~\ref{proofstrat}]
Let $T$, $\phi$ and $\bullet^+$ be as in Theorem \ref{proofstrat}.
\par
($1\Rightarrow 2$) Assume $T\models\phi$. By Theorem \ref{completenesscompactness},
there are $\tau_1,\ldots,\tau_n\in T$ such that $\left(\bigwedge_i\tau_i\right)\rightarrow\phi$
is valid. By Theorem \ref{stratifiersrespectvalidity},
$\left(\bigwedge_i\tau^+_i\right)\rightarrow\phi^+$ is valid,
showing $T^+\models\phi^+$.
\par
($2\Rightarrow 3$) Trivial: $T^+\subseteq T^\oplus$.
\par
($3\Rightarrow 1$)
Assume $T^\oplus\models\phi^+$.
By Theorem \ref{completenesscompactness}
there are $\tau_1,\ldots,\tau_n\in T^\oplus$
such that $\left(\bigwedge_i\tau_i\right)\rightarrow\phi^+$ is valid.
By definition of $T^\oplus$ there are $\sigma_1,\ldots,\sigma_n\in T$
and stratifiers $\bullet^1,\ldots,\bullet^n$
such that each $\tau_i\equiv\sigma^i_i$.
By Lemma \ref{obviouslemma}
\[
\mbox{$\left(\left(\bigwedge_i\sigma^i_i\right)\rightarrow\phi^+\right)^- \,\equiv\,
\left(\bigwedge_i\sigma_i \right)\rightarrow\phi$},
\]
so Theorem \ref{stratifiersrespectvalidity} guarantees
$\left(\bigwedge_i\sigma_i\right)\rightarrow\phi$
is valid, and $T\models\phi$.
\end{proof}
\section{Uniform Theories and Collapsing Knowledge}
\label{uniformsect}
\begin{definition}
Suppose $X\subseteq \omega\cdot\omega$ and $h:X\to\omega\cdot\omega$.
For any $\L_{\omega\cdot\omega}$-formula $\phi$, we define $h(\phi)$ to be the $\L_{\omega\cdot\omega}$-formula
obtained by replacing $K^\alpha$ by $K^{h(\alpha)}$ everywhere $K^{\alpha}$
occurs in $\phi$ ($\alpha\in X$).
(If $\alpha\not\in X$, we do not change occurrences of $K^\alpha$ in $\phi$.)
\end{definition}
\begin{example}Suppose $\alpha_1<\cdots<\alpha_4$ are distinct ordinals in $\omega\cdot\omega$.
Let $X=\{\alpha_2,\alpha_3\}$, let $h(\alpha_2)=\alpha_3$, $h(\alpha_3)=\alpha_4$.
Then
\[
h\left(K^{\alpha_3}K^{\alpha_2}K^{\alpha_1}(1=1)\right) \,\equiv\,
K^{\alpha_4}K^{\alpha_3}K^{\alpha_1}(1=1).
\]
\end{example}
\begin{definition}
\label{uniformdefn}
An $\L_{\omega\cdot\omega}$-theory $T$ is \emph{uniform}
if the following statement holds.
For all $X\subseteq\omega\cdot\omega$, for all order-preserving $h:X\to\omega\cdot\omega$,
for all $\phi\in T$, if $\mathrm{On}(\phi)\subseteq X$ then $h(\phi)\in T$.
\end{definition}
\begin{example}If $T$ contains $K^1K^0(1=0)$ and $T$ is uniform, then $T$ must contain
$K^\beta K^\alpha(1=0)$ for all $\alpha<\beta\in\omega\cdot\omega$.
\end{example}
\begin{lemma}
\label{uniformityofstratifiers}
Suppose $\bullet^+$ is a stratifier, $X\subseteq\omega\cdot\omega$,
$h:X\to\omega\cdot\omega$ is order preserving,
and $\phi$ is an $\L_{\mathrm{EA}}$-formula with $\mathrm{On}(\phi^+)\subseteq X$.
There is a stratifier $\bullet^*$ such that $\phi^*\equiv h(\phi^+)$.
\end{lemma}
\begin{proof}
Let $Y_0=\{h(\alpha)\,:\,\alpha\in \mathrm{On}(\phi^+)\}$, $Y=Y_0\cup\{\beta\in\omega\cdot\omega\,:\,\beta>Y_0\}$,
and let $\bullet^*$ be the stratifier given by $Y$.
By induction, for every subformula $\phi_0$ of $\phi$, $\phi_0^*\equiv h(\phi_0^+)$.
\end{proof}
\begin{lemma}
\label{uniformitylemma}
(Uniformity lemma)
For any $\L_{\mathrm{EA}}$-theory $T$, $T^\oplus$ is uniform.
\end{lemma}
\begin{proof}
Let $X\subseteq\omega\cdot\omega$, let $h:X\to\omega\cdot\omega$ be order preserving, let $\phi\in T^\oplus$,
and assume $\mathrm{On}(\phi)\subseteq X$.
By definition of $T^\oplus$, $\phi\equiv\phi^+_0$ for some $\phi_0\in T$ and some stratifier $\bullet^+$.
By Lemma \ref{uniformityofstratifiers} there is a stratifier $\bullet^*$ such that $h(\phi^+_0)\equiv \phi^*_0$.
This shows $h(\phi)\in T^\oplus$.
\end{proof}
Unfortunately, the range of $\oplus$ does not include every uniform $\L_{\omega\cdot\omega}$-theory.
For example, suppose $T$ is the $\L_{\omega\cdot\omega}$-theory consisting of
\[K^\alpha(\phi^+\rightarrow\psi^+)\rightarrow K^\alpha\phi^+\rightarrow K^\alpha\psi^+\]
for all $\L_{\mathrm{EA}}$-sentences $\phi$ and $\psi$ and stratifiers $\bullet^+$
with $\mathrm{On}(\phi^+),\mathrm{On}(\psi^+)<\alpha\in\omega\cdot\omega$.
The reader may check that despite being uniform, $T$ is not $T^\oplus_0$ for any $\L_{\mathrm{EA}}$-theory $T_0$.
\begin{definition}
\label{structuremappingdefn}
If $\mathscr{M}$ is an $\L_{\omega\cdot\omega}$-structure, $X\subseteq \omega\cdot\omega$, and $h:X\to\omega\cdot\omega$,
we define an $\L_{\omega\cdot\omega}$-structure $h(\mathscr{M})$ that has the same universe as $\mathscr{M}$,
agrees with $\mathscr{M}$ on the interpretation of $\L_{\mathrm{PA}}$, and interprets
$K^\alpha$ so that for any $\L_{\omega\cdot\omega}$-formula $\phi$ and assignment $s$,
\[
\mbox{$h(\mathscr{M})\models K^\alpha\phi[s]$ if and only if $\mathscr{M}\models h(K^\alpha\phi)[s]$.}
\]
\end{definition}
\begin{lemma}
\label{structuremappingmagic}
Suppose $\mathscr{M}$, $X$, and $h$ are as in Definition \ref{structuremappingdefn}.
For any $\L_{\omega\cdot\omega}$-formula $\phi$ and assignment $s$,
\[
\mbox{$h(\mathscr{M})\models\phi[s]$
if and only if $\mathscr{M}\models h(\phi)[s]$.}
\]
\end{lemma}
\begin{proof}
By induction.
\end{proof}
We will only need part 1 of the next lemma; we state part 2 for completeness.
\begin{lemma}
\label{hpreservesvalidity}
Suppose $\mathscr{M}$, $X$, and $h$ are as in Definition \ref{structuremappingdefn} and $\phi$ is an $\L_{\omega\cdot\omega}$-formula.
\begin{enumerate}
\item
If $\phi$ is valid then $h(\phi)$ is valid.
\item
Assume $h$ is injective.
If $\mathrm{On}(\phi)\subseteq X$ and $h(\phi)$ is valid, then $\phi$ is valid.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) Similar to Theorem \ref{stratifiersrespectvalidity}.
\par
(2) If $h(\phi)$ is valid then $h^{-1}(h(\phi))$ is valid by part 1. Since $\mathrm{On}(\phi)\subseteq X$, $h^{-1}(h(\phi))\equiv\phi$.
\end{proof}
\begin{definition}
For any $\L_{\omega\cdot\omega}$-theory $T$ and $\alpha\in\omega\cdot\omega$,
let $T\cap\alpha=\{\phi\in T\,:\,\mathrm{On}(\phi)\subseteq\alpha\}$
be the subset of $T$ where all superscripts are strictly bounded by $\alpha$.
\end{definition}
\begin{example}
\begin{itemize}
\item
For any $\L_{\omega\cdot\omega}$-theory $T$, $T\cap 0=\{\phi\in T\,:\,\mbox{$\phi$ is an $\L_{\mathrm{PA}}$-sentence}\}$.
\item
For any $\L_{\omega\cdot\omega}$-theory $T$, $T\cap 1=\{\phi\in T\,:\,\mbox{$\phi$ is an
$\L_{\mathrm{PA}}\cup\{K^0\}$-sentence}\}$.
\item
For any $\L_{\mathrm{EA}}$-theory $T$,
$T^\oplus\cap\omega = \{\phi^+\,:\,\mbox{$\phi\in T$ and $\bullet^+$ is given by some $X\subseteq\omega$}\}$.
\end{itemize}
\end{example}
\begin{theorem}
\label{collapsethm}
(The collapse theorem)
Let $T$ be a uniform $\L_{\omega\cdot\omega}$-theory.
For any $0<n\in\mathbb{N}$ and $\L_{\omega\cdot\omega}$-formula $\phi$ with $\mathrm{On}(\phi)\subseteq\omega\cdot n$,
$T\models\phi$ if and only if $T\cap(\omega\cdot n)\models\phi$.
\end{theorem}
\begin{proof}
The $\Leftarrow$ direction is trivial: $T\cap(\omega\cdot n)\subseteq T$.
For $\Rightarrow$, assume $T\models \phi$.
By Theorem \ref{completenesscompactness} there are $\tau_1,\ldots,\tau_n\in T$
such that
\[
\mbox{$\Phi\equiv \left(\bigwedge_i \tau_i\right)\rightarrow\phi$}
\]
is valid.
Let $X=\mathrm{On}(\Phi)\cap(\omega\cdot n)$ and $Y=\mathrm{On}(\Phi)\cap[\omega\cdot n,\infty)$,
see Fig.~\ref{figcollapse}.
Then $|X|,|Y|<\infty$ and $X\cup Y=\mathrm{On}(\Phi)$.
\begin{figure}
\begin{center}
\includegraphics[scale=.65]{fig1.eps}
\caption{Collapse.}
\label{figcollapse}
\end{center}
\end{figure}
Since $|X|<\infty$ and $\omega\cdot n$ has no maximum element, there are infinitely many
ordinals above $X$ in $\omega\cdot n$.
Thus since $|Y|<\infty$ we can find $\widetilde{Y}\subseteq\omega\cdot n$ such that
$X<\widetilde{Y}$ and $|\widetilde{Y}|=|Y|$.
It follows there is an order preserving function $h:X\cup Y\to X\cup \widetilde{Y}$
such that $h(x)=x$ for all $x\in X$.
By Lemma \ref{hpreservesvalidity}, $h(\Phi)$ is valid.
Since $\mathrm{On}(\phi)\subseteq \omega\cdot n$, we have $\mathrm{On}(\phi)\subseteq X$ and $h(\phi)\equiv\phi$.
Thus
\[
\mbox{$h(\Phi) \,\equiv\,\left(\bigwedge_i h(\tau_i)\right)\rightarrow h(\phi) \,\equiv\,\left(\bigwedge_i h(\tau_i)\right)\rightarrow\phi$}.
\]
Since $T$ is uniform, each $h(\tau_i)\in T$.
In fact, since $\mathrm{range}(h)\subseteq\omega\cdot n$, each $h(\tau_i)\in T\cap(\omega\cdot n)$,
and the validity of $\left(\bigwedge_i h(\tau_i)\right)\rightarrow\phi$ witnesses $T\cap(\omega\cdot n)\models\phi$.
\end{proof}
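\begin{example}
(A toy illustration of the collapsing map in the above proof; the ordinals are invented.)
Take $n=1$ and suppose $\mathrm{On}(\Phi)=\{3,\,\omega+1,\,\omega\cdot2,\,\omega\cdot3+5\}$.
Then $X=\{3\}$, $Y=\{\omega+1,\omega\cdot2,\omega\cdot3+5\}$, and we may take $\widetilde{Y}=\{4,5,6\}$,
so that $h$ fixes $3$ and sends $\omega+1\mapsto4$, $\omega\cdot2\mapsto5$, $\omega\cdot3+5\mapsto6$.
All superscripts of $h(\Phi)$ then lie below $\omega\cdot n=\omega$.
\end{example}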
\begin{definition}
\label{stratifiedmodel}
If $T$ is an $\L_{\omega\cdot\omega}$-theory, its intended structure is the $\L_{\omega\cdot\omega}$-structure
$\mathscr{M}_T$ with standard first-order part that interprets the operators of $\L_{\omega\cdot\omega}$
so that for every $\L_{\omega\cdot\omega}$-formula $\phi$, assignment $s$, and $\alpha\in\omega\cdot\omega$,
\[
\mbox{$\mathscr{M}_T \models K^\alpha\phi[s]$ if and only if $T\cap\alpha\models \phi^s$.}
\]
\end{definition}
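For example, since $T\cap 0$ consists of the $\L_{\mathrm{PA}}$-sentences of $T$, we have $\mathscr{M}_T\models K^0\phi[s]$ if and only if $\phi^s$ is a semantic consequence of the purely arithmetical sentences of $T$.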
\begin{lemma}
\label{scriptmbehavesasintended}
Suppose $T$ is an $\L_{\omega\cdot\omega}$-theory.
For any $\L_{\omega\cdot\omega}$-formula $\phi$ and assignment $s$,
$\mathscr{M}_T\models\phi[s]$ if and only if $\mathscr{M}_T\models\phi^s$.
\end{lemma}
\begin{proof}
By induction.
\end{proof}
Recall from Definition \ref{stratifierdefn} that the veristratifier
is the stratifier given by $X=\{\omega\cdot1,\omega\cdot2,\ldots\}$.
\begin{theorem}
\label{upwardstratificationtheorem}
(The upward stratification theorem)
Let $\bullet^+$ be the veristratifier.
For any $\L_{\mathrm{EA}}$-theory $T$, $\L_{\mathrm{EA}}$-formula $\phi$, and assignment $s$,
$\mathscr N_T\models\phi[s]$ if and only if $\mathscr{M}_{T^\oplus}\models\phi^+[s]$.
\end{theorem}
Again, the theorem is so-named because it is an upside-down version of
a harder theorem that equates $\mathscr{M}_T\models\phi[s]$ with $\mathscr N_{T^-}\models\phi^-[s]$
for stratified $T$ and $\phi$ under more complicated
hypotheses.
\begin{proof}[Proof of Theorem \ref{upwardstratificationtheorem}]
By induction on $\phi$.
The only nontrivial case is when $\phi$ is $K\psi$.
Then $\phi^+\equiv K^\alpha\psi^+$ for some $\alpha$.
By definition of the veristratifier, $\alpha=\omega\cdot n$
for some $0<n\in\mathbb{N}$, and $\mathrm{On}(\psi^+)\subseteq\omega\cdot n$.
By Lemma \ref{uniformitylemma}, $T^\oplus$ is uniform,
so we can use the collapse theorem (Theorem \ref{collapsethm}).
The following are equivalent.
\begin{align*}
\mathscr N_T &\models K\psi[s]\\
T &\models \psi^s
&\mbox{(Definition \ref{defnofintendedmodel})}\\
T^\oplus &\models (\psi^s)^+
&\mbox{(Upward proof stratification---Theorem \ref{proofstrat})}\\
T^\oplus\cap(\omega\cdot n) &\models (\psi^s)^+
&\mbox{(The collapse theorem---Theorem \ref{collapsethm})}\\
T^\oplus\cap(\omega\cdot n) &\models (\psi^+)^s
&\mbox{(Clearly $(\psi^s)^+\equiv(\psi^+)^s$)}\\
\mathscr{M}_{T^\oplus} &\models K^{\omega\cdot n}\psi^+[s].
&\mbox{(Definition \ref{stratifiedmodel})}
\end{align*}
\end{proof}
\begin{corollary}
\label{revelatorycorollary}
For any $\L_{\mathrm{EA}}$-theory $T$,
in order to show $\mathscr N_T\models T$, it suffices
to show $\mathscr{M}_{T^\oplus}\models T^\oplus$.
\end{corollary}
Corollary \ref{revelatorycorollary} provides a foothold
for proving truth of self-referential theories
by transfinite induction up to $\omega\cdot\omega$:
in order to prove $\mathscr N_{T}\models T$,
one can attempt to prove $\mathscr{M}_{T^\oplus}\models T^\oplus\cap\alpha$
for all $\alpha\in\omega\cdot\omega$ by induction on $\alpha$.
\section{Upward Generic Axioms}
\label{genericaxiomssectn}
One way to state an epistemological consistency result, for example
that a truthful machine can know itself to be true and recursively enumerable, is to show that
the schemas in question are consistent with a particular background theory of knowledge.
We take a more general approach:
show that the doubted schemas are consistent with \emph{any} background theory satisfying certain
conditions.
We say that an $\L_{\mathrm{EA}}$-theory $T$ is \emph{$K$-closed} if $K\phi\in T$ whenever $\phi\in T$.
\begin{definition}
\label{baregenericdefn}
Suppose $T_0$ is an $\L_{\mathrm{EA}}$-theory.
\begin{enumerate}
\item $T_0$ is \emph{generic} if $\mathscr N_T\models T_0$ for every $\L_{\mathrm{EA}}$-theory $T\supseteq T_0$.
\item $T_0$ is \emph{closed-generic} if $T_0$ is $K$-closed and $\mathscr N_T\models T_0$ for every $K$-closed $\L_{\mathrm{EA}}$-theory $T\supseteq T_0$.
\item $T_0$ is \emph{r.e.-generic} if $T_0$ is r.e.~and $\mathscr N_T\models T_0$ for every r.e.~$\L_{\mathrm{EA}}$-theory $T\supseteq T_0$.
\item $T_0$ is \emph{closed-r.e.-generic} if $T_0$ is $K$-closed, r.e., and $\mathscr N_T\models T_0$ for every $K$-closed r.e.~$\L_{\mathrm{EA}}$-theory
$T\supseteq T_0$.
\end{enumerate}
\end{definition}
\begin{lemma}
\label{genericdiamond}
\begin{enumerate}
\item Generic$+$r.e.~implies r.e.-generic.
\item Generic$+K$-closed implies closed-generic.
\item Closed-generic$+$r.e.~implies closed-r.e.-generic.
\item R.e.-generic$+K$-closed implies closed-r.e.-generic.
\end{enumerate}
\end{lemma}
\begin{proof}
Straightforward.
\end{proof}
\begin{lemma}
\label{genericunions}
Let $T=\cup_{i\in I} T_i$ where each $T_i$ is an $\L_{\mathrm{EA}}$-theory.
\begin{enumerate}
\item If the $T_i$ are generic, then $T$ is generic.
\item If the $T_i$ are closed-generic, then $T$ is closed-generic.
\item If the $T_i$ are r.e.-generic and $T$ is r.e., then $T$ is r.e.-generic.
\item If the $T_i$ are closed-r.e.-generic and $T$ is r.e., then $T$ is closed-r.e.-generic.
\end{enumerate}
\end{lemma}
\begin{proof}
Straightforward.
\end{proof}
\begin{lemma}
\label{etwoisgeneric}
The $\L_{\mathrm{EA}}$-schema $E_2$, consisting of $\ucl{K(\phi\rightarrow\psi)\rightarrow K\phi\rightarrow K\psi}$, is generic.
\end{lemma}
\begin{proof}
Suppose $T\supseteq E_2$ is arbitrary.
For any $\L_{\mathrm{EA}}$-formulas $\phi$ and $\psi$ and assignment $s$, if
$\mathscr N_T\models K(\phi\rightarrow\psi)[s]$ and $\mathscr N_T\models K\phi[s]$,
then
\begin{align*}
T &\models \phi^s\rightarrow\psi^s
&\mbox{(Definition \ref{defnofintendedmodel})}\\
T &\models \phi^s
&\mbox{(Definition \ref{defnofintendedmodel})}\\
T &\models \psi^s
&\mbox{(Modus Ponens)}\\
\mathscr N_T &\models K\psi[s],\mbox{ as desired.}
&\mbox{(Definition \ref{defnofintendedmodel})}
\end{align*}
\end{proof}
\begin{definition}
Suppose $T_0$ is an $\L_{\mathrm{EA}}$-theory.
\begin{enumerate}
\item $T_0$ is \emph{upgeneric} if $\mathscr{M}_{T^\oplus}\models T^\oplus_0$ for every $\L_{\mathrm{EA}}$-theory $T\supseteq T_0$.
\item $T_0$ is \emph{closed-upgeneric} if $T_0$ is $K$-closed and $\mathscr{M}_{T^\oplus}\models T^\oplus_0$ for every $K$-closed $\L_{\mathrm{EA}}$-theory $T\supseteq
T_0$.
\item $T_0$ is \emph{r.e.-upgeneric} if $T_0$ is r.e.~and $\mathscr{M}_{T^\oplus}\models T^\oplus_0$ for every r.e.~$\L_{\mathrm{EA}}$-theory $T\supseteq T_0$.
\item $T_0$ is \emph{closed-r.e.-upgeneric} if $T_0$ is
$K$-closed, r.e., and $\mathscr{M}_{T^\oplus}\models T^\oplus_0$ for every $K$-closed r.e.~$\L_{\mathrm{EA}}$-theory $T\supseteq T_0$.
\end{enumerate}
\end{definition}
\begin{lemma}
(Compare Lemma \ref{genericdiamond})
\begin{enumerate}
\item Upgeneric$+K$-closed implies closed-upgeneric.
\item Upgeneric$+$r.e.~implies r.e.-upgeneric.
\item Closed-upgeneric$+$r.e.~implies closed-r.e.-upgeneric.
\item R.e.-upgeneric$+K$-closed implies closed-r.e.-upgeneric.
\end{enumerate}
\end{lemma}
\begin{proof}
Straightforward.
\end{proof}
\begin{lemma}
\label{upgenericunions}
Suppose $T=\cup_{i\in I}T_i$ where the $T_i$ are $\L_{\mathrm{EA}}$-theories.
\begin{enumerate}
\item If the $T_i$ are upgeneric, then $T$ is upgeneric.
\item If the $T_i$ are closed-upgeneric, then $T$ is closed-upgeneric.
\item If the $T_i$ are r.e.-upgeneric and $T$ is r.e., then $T$ is
r.e.-upgeneric.
\item If the $T_i$ are closed-r.e.-upgeneric and $T$ is r.e., then $T$ is
closed-r.e.-upgeneric.
\end{enumerate}
\end{lemma}
\begin{proof}
Straightforward.
\end{proof}
\begin{lemma}
\label{upgenericimpliesgeneric}
\begin{enumerate}
\item Upgeneric implies generic.
\item Closed-upgeneric implies closed-generic.
\item R.e.-upgeneric implies r.e.-generic.
\item Closed-r.e.-upgeneric implies closed-r.e.-generic.
\end{enumerate}
\end{lemma}
\begin{proof}
By the upward stratification theorem (Theorem \ref{upwardstratificationtheorem}).
\end{proof}
In light of Lemmas \ref{etwoisgeneric} and \ref{upgenericimpliesgeneric},
the following shows that upgeneric is strictly
stronger than generic.
\begin{lemma}
\label{etwonotupgeneric}
$E_2$ is not upgeneric. In fact $E_2$ is not even closed-r.e.-upgeneric.
\end{lemma}
\begin{proof}
Let $T$ be the smallest $K$-closed $\L_{\mathrm{EA}}$-theory containing the following schemata.
\begin{enumerate}
\item $E_2$.
\item $K(1=0)$.
\item $K(1=0)\rightarrow (1=0)$.
\end{enumerate}
Since $T\supseteq E_2$ is $K$-closed and r.e., it suffices to exhibit
some $\theta\in E_2$ and stratifier $\bullet^+$ such that $\mathscr{M}_{T^\oplus}\not\models \theta^+$.
If $\bullet^+$ is the stratifier given by $X=\{0,1,2,\ldots\}$,
the reader can check that
\[
\theta \,\equiv\,\,\, K(K(1=0)\rightarrow (1=0))\rightarrow KK(1=0)\rightarrow K(1=0)
\]
works.
\end{proof}
Lemma \ref{etwonotupgeneric} and
the following demystify our reason for weakening $E_2$ to $E'_2$.
\begin{lemma}
\label{etwoprimeisupgeneric}
The schema $E'_2$, consisting of $\ucl{K(\phi\rightarrow\psi)\rightarrow K\phi\rightarrow K\psi}$
whenever $\mathrm{depth}(\phi)\leq\mathrm{depth}(\psi)$ (Definition \ref{depthdefn}), is upgeneric.
\end{lemma}
\begin{proof}
Let $T\supseteq E'_2$ be arbitrary.
Suppose $\phi$ and $\psi$ are $\L_{\mathrm{EA}}$-formulas with $\mathrm{depth}(\phi)\leq\mathrm{depth}(\psi)$
and $\bullet^+$ is a stratifier,
say with
\begin{align*}
(K\phi)^+ &\equiv K^\alpha\phi^+\\
(K\psi)^+ &\equiv K^\beta\psi^+\\
(K(\phi\rightarrow \psi))^+ &\equiv K^\gamma(\phi^+\rightarrow \psi^+).
\end{align*}
We will show that $\mathscr{M}_{T^\oplus}$ satisfies
\[
(\ucl{K(\phi\rightarrow\psi)\rightarrow K\phi\rightarrow K\psi})^+
\equiv
\ucl{K^\gamma(\phi^+\rightarrow\psi^+)\rightarrow
K^\alpha\phi^+\rightarrow
K^\beta\psi^+}.
\]
Note that by Lemma \ref{depthandstratifier}, $\alpha\leq\beta=\gamma$.
Let $s$ be an arbitrary assignment such that
$\mathscr{M}_{T^\oplus}\models K^\gamma(\phi^+\rightarrow\psi^+)[s]$ and $\mathscr{M}_{T^\oplus}\models K^\alpha\phi^+[s]$.
Then
\begin{align*}
T^\oplus\cap\gamma &\models (\phi^+)^s \rightarrow (\psi^+)^s
&\mbox{(Definition \ref{stratifiedmodel})}\\
T^\oplus\cap\alpha &\models (\phi^+)^s
&\mbox{(Definition \ref{stratifiedmodel})}\\
T^\oplus\cap\beta &\models ((\phi^+)^s\rightarrow (\psi^+)^s)\,\wedge\, (\phi^+)^s
&\mbox{(Since $\alpha\leq\beta=\gamma$)}\\
T^\oplus\cap\beta &\models (\psi^+)^s
&\mbox{(Modus Ponens)}\\
\mathscr{M}_{T^\oplus} &\models K^\beta \psi^+[s],\mbox{ as desired.}
&\mbox{(Definition \ref{stratifiedmodel})}
\end{align*}
\end{proof}
\begin{lemma}
\label{assignedvalidityisupgeneric}
The Assigned Validity schema, consisting of $\phi^s$ whenever $\phi$ is valid and $s$ is any assignment,
is upgeneric.
\end{lemma}
\begin{proof}
Let $T\supseteq\mbox{(Assigned Validity)}$ be arbitrary.
Suppose $\phi$ is valid, $s$ is an assignment, and $\bullet^+$ is a stratifier.
By Theorem \ref{stratifiersrespectvalidity}, $\phi^+$ is also valid.
Thus $\mathscr{M}_{T^\oplus}\models\phi^+[s]$, and by Lemma \ref{scriptmbehavesasintended},
$\mathscr{M}_{T^\oplus}\models (\phi^+)^s$.
By arbitrariness of $\phi$, $s$, and $\bullet^+$, $\mathscr{M}_{T^\oplus}\models \mbox{(Assigned Validity)}^\oplus$.
\end{proof}
\begin{lemma}
\label{trivialgenericlemma}
Any set of true purely arithmetical sentences is upgeneric.
\end{lemma}
\begin{proof}
Trivial: purely arithmetical sentences are unchanged by stratifiers, and $\mathscr{M}_{T^\oplus}$ has standard first-order part.
\end{proof}
\begin{lemma}
\label{eaisupgeneric}
The schema consisting of the axioms of Epistemic Arithmetic (Peano Arithmetic with induction extended to
$\L_{\mathrm{EA}}$) is upgeneric.
\end{lemma}
\begin{proof}
Let $T\supseteq\mbox{(Epistemic Arithmetic)}$.
Let $\sigma$ be an axiom of Epistemic Arithmetic, $\bullet^+$ a stratifier.
If $\sigma$ is not an induction instance, then
$\mathscr{M}_{T^\oplus}\models \sigma^+$ by Lemma \ref{trivialgenericlemma}.
But suppose $\sigma$ is an instance
\[
\ucl{\phi(x|0)\rightarrow\forall x(\phi\rightarrow\phi(x|S(x)))\rightarrow\forall x\phi}
\]
of induction, so that $\sigma^+$ is
$\ucl{\phi^+(x|0)\rightarrow \forall x(\phi^+\rightarrow\phi^+(x|S(x)))\rightarrow\forall x\phi^+}$.
To show $\mathscr{M}_{T^\oplus}\models\sigma^+$, let $s$ be an assignment and assume
$\mathscr{M}_{T^\oplus}\models \phi^+(x|0)[s]$ and
$\mathscr{M}_{T^\oplus}\models\forall x(\phi^+\rightarrow\phi^+(x|S(x)))[s]$.
Then
\begin{align*}
\mathscr{M}_{T^\oplus} &\models \phi^+(x|0)^s
&\mbox{(Lemma \ref{scriptmbehavesasintended})}\\
\mathscr{M}_{T^\oplus} &\models (\phi^+)^{s(x|0)}
&\mbox{(Clearly $\psi(x|0)^s\equiv \psi^{s(x|0)}$)}\\
\forall n\in\mathbb{N},\mbox{ if }\mathscr{M}_{T^\oplus}\models \phi^+[s(x|n)], &\mbox{ then }
\mathscr{M}_{T^\oplus}\models \phi^+(x|S(x))[s(x|n)]
&\mbox{(First-order semantics of $\forall$ and $\rightarrow$)}\\
\forall n\in\mathbb{N},\mbox{ if }\mathscr{M}_{T^\oplus}\models (\phi^+)^{s(x|n)}, &\mbox{ then }
\mathscr{M}_{T^\oplus}\models (\phi^+(x|S(x)))^{s(x|n)}
&\mbox{(Lemma \ref{scriptmbehavesasintended})}\\
\forall n\in\mathbb{N},\mbox{ if }\mathscr{M}_{T^\oplus}\models (\phi^+)^{s(x|n)}, &\mbox{ then }
\mathscr{M}_{T^\oplus}\models (\phi^+)^{s(x|n+1)}
&\mbox{(Clearly $\psi(x|S(x))^{s(x|n)}\equiv \psi^{s(x|n+1)}$)}\\
\forall n\in\mathbb{N},\mbox{ }\mathscr{M}_{T^\oplus} &\models (\phi^+)^{s(x|n)}
&\mbox{(Mathematical induction)}\\
\forall n\in\mathbb{N},\mbox{ }\mathscr{M}_{T^\oplus} &\models (\phi^+)[s(x|n)]
&\mbox{(Lemma \ref{scriptmbehavesasintended})}\\
\mathscr{M}_{T^\oplus} &\models \forall x\phi^+[s]\mbox{, as desired.}
&\mbox{(First-order semantics of $\forall$)}
\end{align*}
\end{proof}
Armed with Lemmas \ref{genericunions} and \ref{upgenericunions},
computations such as Lemmas \ref{etwoisgeneric}, \ref{etwoprimeisupgeneric},
\ref{assignedvalidityisupgeneric},
\ref{trivialgenericlemma} and \ref{eaisupgeneric}
can be used as building blocks for background theories of knowledge.
Often, schemas we would like as building blocks are not (up)generic in isolation,
but become so when paired with other building blocks, as in the following three
lemmas.
\begin{lemma}
\label{eoneandassignedvalidityisupgeneric}
$E_1\cup (\mbox{Assigned Validity})$ is upgeneric
($E_1$ consists of $\ucl{K\phi}$ whenever $\phi$ is valid).
\end{lemma}
\begin{proof}
Let $T\supseteq E_1\cup (\mbox{Assigned Validity})$.
By Lemma \ref{assignedvalidityisupgeneric},
$\mathscr{M}_{T^\oplus}\models\mbox{(Assigned Validity)}^\oplus$,
so we need only show $\mathscr{M}_{T^\oplus}\models E^\oplus_1$.
Let $\phi$ be valid, $\bullet^+$ any stratifier, and $s$ any assignment.
Since $T\supseteq (\mbox{Assigned Validity})$,
$T^\oplus$ contains the instance
\[(\phi^s)^+\equiv (\phi^+)^s\] of $(\mbox{Assigned Validity})^\oplus$.
In fact, $T^\oplus\cap\alpha$ contains $(\phi^+)^s$,
where $\alpha$ is such that $(K\phi)^+\equiv K^\alpha\phi^+$.
Thus by Definition \ref{stratifiedmodel}, $\mathscr{M}_{T^\oplus}\models K^{\alpha}\phi^+[s]$,
that is, $\mathscr{M}_{T^\oplus}\models (K\phi)^+[s]$.
This shows $\mathscr{M}_{T^\oplus}\models E^\oplus_1$.
\end{proof}
\begin{lemma}
\label{kclosurelemma}
For any upgeneric $T_0$, $T_0\cup K(T_0)$ is upgeneric,
where $K(T_0)$ consists of $K\phi$ whenever $\phi\in T_0$.
Similarly with ``upgeneric'' replaced by ``r.e.-upgeneric'',
``closed-upgeneric'', ``closed-r.e.-upgeneric'', ``generic'',
``r.e.-generic'', ``closed-generic'', or ``closed-r.e.-generic'' throughout.
\end{lemma}
\begin{proof}
We prove the upgeneric statement.
Suppose $T_0$ is upgeneric
and $T\supseteq T_0\cup K(T_0)$.
Since $T_0$ is upgeneric and $T\supseteq T_0$, $\mathscr{M}_{T^\oplus}\models T^\oplus_0$.
It remains to show $\mathscr{M}_{T^\oplus}\models (K\phi)^+$
for any sentence $\phi\in T_0$ and stratifier $\bullet^+$.
Let $\alpha$ be such that $(K\phi)^+\equiv K^\alpha\phi^+$.
By Definition \ref{stratifierdefn}, $\mathrm{On}(\phi^+)\subseteq \alpha$
and thus $\phi^+\in T_0^\oplus\cap\alpha\subseteq T^\oplus\cap\alpha$.
Since $T^\oplus\cap\alpha\models \phi^+$, $\mathscr{M}_{T^\oplus}\models K^\alpha\phi^+$
as desired.
\end{proof}
We will not use the following lemma,
but it illuminates differences between this paper's
upward approach
and Carlson's original downward approach.
\begin{lemma}
\label{everythingworksgivenetwo}
$E_1\cup E_2\cup E_4\cup\mbox{(Epistemic Arithmetic)}$ is closed-generic.
\end{lemma}
\begin{proof}
Let $T$ be a $K$-closed theory containing $E_1$, $E_2$, $E_4$ and $\mbox{(Epistemic Arithmetic)}$.
By Lemma \ref{etwoisgeneric}, $\mathscr N_T\models E_2$.
By Lemmas \ref{eaisupgeneric} and \ref{upgenericimpliesgeneric}, $\mathscr N_T\models \mbox{(Epistemic Arithmetic)}$.
It remains to show $\mathscr N_T\models E_1\cup E_4$. We will show $\mathscr N_T\models E_4$
and sketch $\mathscr N_T\models E_1$.
The typical sentence in $E_4$ is $\ucl{K\phi\rightarrow KK\phi}$.
Let $s$ be an assignment and assume $\mathscr N_T\models K\phi[s]$.
Then
\begin{align*}
T &\models \phi^s
&\mbox{(Definition \ref{defnofintendedmodel})}\\
\exists \tau_1,\ldots,\tau_n\in T &\mbox{ s.t.}
\left(\wedge_{i=1}^n \tau_i\right)\rightarrow \phi^s
\mbox{ is valid}
&\mbox{(Theorem \ref{completenesscompactness})}\\
T &\models K\left(\left(\wedge_{i=1}^n \tau_i\right)\rightarrow \phi^s\right)
&\mbox{($T$ contains $E_1$)}\\
T &\models \left(\wedge_{i=1}^n K(\tau_i)\right)\rightarrow K\phi^s
&\mbox{(Repeated applications of $E_2$ in $T$)}\\
T &\models \wedge_{i=1}^n K(\tau_i)
&\mbox{($T$ is $K$-closed)}\\
T &\models K\phi^s
&\mbox{(Modus Ponens)}\\
\mathscr N_T &\models KK\phi[s].
&\mbox{(Definition \ref{defnofintendedmodel})}
\end{align*}
This shows $\mathscr N_T\models E_4$.
Because of the lack of Assigned Validity, showing $\mathscr N_T\models E_1$ is tricky.
We indicate a rough sketch.
Carlson's Lemmas 5.23 and 7.1 \cite{carlson2000} (pp.~69 \& 72)
imply $T\models (\mbox{Assigned Validity})$
(we invoke Lemma 7.1 with $\mathscr Q$ a singleton).
As written, Lemma 5.23 demands $T$ also contain $E_3$, but it can be shown this is unnecessary.
Thus we may assume $T$ contains Assigned Validity.
By Lemmas \ref{eoneandassignedvalidityisupgeneric} and \ref{upgenericimpliesgeneric},
$\mathscr N_T\models E_1$.
\end{proof}
Lemma \ref{everythingworksgivenetwo}
explains why weakening $E_2$ to $E'_2$ required two other
seemingly-unrelated weakenings: adding Assigned Validity, and removing
$E_4$ altogether.
\begin{lemma}
\label{smtisreupgeneric}
The Mechanicalness schema,
\[
\ucl{\exists e\forall x(K\phi\leftrightarrow x\in W_e)}\,\,\,\,\mbox{($e\not\in \mathrm{FV}(\phi)$)},
\]
is r.e.-upgeneric.
\end{lemma}
\begin{proof}
Let $T$ be any r.e.~$\L_{\mathrm{EA}}$-theory containing the Mechanicalness schema.
Let $\bullet^+$ be a stratifier and let $\alpha$ be such that $(K\phi)^+\equiv K^\alpha\phi^+$.
We must show
\[
\mathscr{M}_{T^\oplus}\models \ucl{\exists e\forall x(K^\alpha\phi^+\leftrightarrow x\in W_e)}.
\]
Let $s$ be any assignment and note
\begin{align*}
\{q\in\mathbb{N}\,:\,\mathscr{M}_{T^\oplus}\models K^\alpha\phi^+[s(x|q)]\}
&=
\{q\in\mathbb{N}\,:\,T^\oplus\cap\alpha\models (\phi^+)^{s(x|q)}\}.
&\mbox{(Definition \ref{stratifiedmodel})}
\end{align*}
By the Church--Turing Thesis, the latter set is r.e., so there is some $p\in\mathbb{N}$ such that
\[
W_p = \{q\in\mathbb{N}\,:\,\mathscr{M}_{T^\oplus}\models K^\alpha\phi^+[s(x|q)]\}.
\]
For all $q\in\mathbb{N}$, the following biconditionals are equivalent:
\begin{align*}
\mathscr{M}_{T^\oplus}\models (K^\alpha\phi^+ &\leftrightarrow x\in W_e)[s(e|p)(x|q)]\\
\mathscr{M}_{T^\oplus}\models K^\alpha\phi^+[s(e|p)(x|q)]
&\mbox{ iff } \mathscr{M}_{T^\oplus}\models x\in W_e[s(e|p)(x|q)]
&\mbox{(First-order semantics of $\leftrightarrow$)}\\
\mathscr{M}_{T^\oplus}\models K^\alpha\phi^+[s(x|q)]
&\mbox{ iff } \mathscr{M}_{T^\oplus}\models x\in W_e[s(e|p)(x|q)]
&\mbox{(Since $e\not\in\mathrm{FV}(\phi)$)}\\
\mathscr{M}_{T^\oplus}\models K^\alpha\phi^+[s(x|q)]
&\mbox{ iff } q\in W_p.
&\mbox{(Since $\mathscr{M}_{T^\oplus}$ has standard first-order part)}
\end{align*}
The latter is true by definition of $p$.
By arbitrariness of $q$, and taking $p$ as the witness for $e$, we get $\mathscr{M}_{T^\oplus}\models \exists e\forall x(K^\alpha\phi^+\leftrightarrow x\in
W_e)[s]$.
\end{proof}
\begin{corollary}
\label{evenweakerweak}
(Recall the definition of $T^w_{\text{SMT}}$ from the end of Section \ref{prelimsect})
Let $(T^w_{\text{SMT}})\backslash E_3$ be the smallest $K$-closed theory containing $E_1$,
Assigned Validity, $E'_2$, Epistemic Arithmetic, and Mechanicalness.
(Loosely speaking, $T^w_{\text{SMT}}$ minus $E_3$.) Then $(T^w_{\text{SMT}})\backslash E_3$
is r.e.-upgeneric.
\end{corollary}
\section{The Main Result}
\label{mainresultsect}
With the machinery of Section \ref{genericaxiomssectn},
we are able to state our main result in a generalized form.
Informally:
\begin{quote}
An r.e.-upgeneric theory
remains true
upon augmentation by knowledge of its own truthfulness.
\end{quote}
Reinhardt's conjecture (proved by Carlson) was that the Strong Mechanistic Thesis is consistent with
a particular background theory of knowledge.
We showed (Lemma \ref{smtisreupgeneric}) that Mechanicalness is r.e.-upgeneric.
By Lemma \ref{kclosurelemma}, the pair consisting of Mechanicalness and the Strong Mechanistic Thesis
is r.e.-upgeneric.
Thus as long as
the background theory of knowledge
is r.e.~and built of r.e.-upgeneric pieces along with truthfulness,
the corresponding conjecture is a special case of this main result.
Recall
(Definition \ref{defnofintendedmodel}) that an $\L_{\mathrm{EA}}$-theory $T$ is \emph{true} if $\mathscr N_T\models T$.
\begin{theorem}
\label{themaintheorem}
Let $T_0$ be an r.e.-upgeneric $\L_{\mathrm{EA}}$-theory.
Let $T_1$ be $T_0\cup E_3$, that is, $T_0$ along with all axioms of the form $\ucl{K\phi\rightarrow\phi}$.
Let $T$ be the smallest $K$-closed theory containing $T_1$.
Then $T$ is true.
\end{theorem}
\begin{proof}
By Corollary \ref{revelatorycorollary} it is enough to show
$\mathscr{M}_{T^\oplus}\models T^\oplus$.
We will use transfinite induction up to $\omega\cdot\omega$
to show that for all $\alpha\in\omega\cdot\omega$, $\mathscr{M}_{T^\oplus}\models T^\oplus\cap\alpha$.
Let $\sigma\in T^\oplus\cap\alpha$. Then $\sigma\equiv\theta^+$ for some $\theta\in T$ and some stratifier $\bullet^+$.
We will show $\mathscr{M}_{T^\oplus}\models\theta^+$.
\case1
$\theta\in T_0$.
Then $\mathscr{M}_{T^\oplus}\models \theta^+$ because $T\supseteq T_0$ is r.e.~and $T_0$ is r.e.-upgeneric.
\case2
$\theta$ is $K\phi$ for some sentence $\phi\in T$. Let $\alpha_0$ be such that $(K\phi)^+\equiv K^{\alpha_0}\phi^+$.
By Definition \ref{stratifierdefn}, $\mathrm{On}(\phi^+)\subseteq \alpha_0$
and thus $\phi^+\in T^\oplus \cap\alpha_0$, so $T^\oplus\cap\alpha_0\models \phi^+$,
so $\mathscr{M}_{T^\oplus}\models K^{\alpha_0}\phi^+$.
\case3
$\theta$ is $\ucl{K\phi\rightarrow\phi}$ for some $\phi$.
Let $\alpha_0$ be such that $(K\phi)^+\equiv K^{\alpha_0}\phi^+$,
so $\theta^+$ is $\ucl{K^{\alpha_0}\phi^+\rightarrow\phi^+}$.
Since $\theta^+\in T^\oplus\cap\alpha$, this forces $\alpha_0<\alpha$.
Let $s$ be any assignment and assume $\mathscr{M}_{T^\oplus}\models K^{\alpha_0}\phi^+[s]$.
Then:
\begin{align*}
\mathscr{M}_{T^\oplus} &\models K^{\alpha_0}\phi^+[s]
&\mbox{(Assumption)}\\
T^\oplus\cap\alpha_0 &\models (\phi^+)^s
&\mbox{(Definition \ref{stratifiedmodel})}\\
\mathscr{M}_{T^\oplus} &\models (\phi^+)^s
&\mbox{(By $\omega\cdot\omega$-induction, $\mathscr{M}_{T^\oplus}\models T^\oplus\cap\alpha_0$)}\\
\mathscr{M}_{T^\oplus} &\models \phi^+[s]\mbox{, as desired.}
&\mbox{(Lemma \ref{scriptmbehavesasintended})}
\end{align*}
\end{proof}
\begin{corollary}
$T^w_{\text{SMT}}$ is true.
\end{corollary}
\begin{proof}
By Theorem \ref{themaintheorem} and Corollary \ref{evenweakerweak}.
\end{proof}
If one is willing to induct up to $\epsilon_0\cdot\omega$ and
use machinery from \cite{carlson1999}, it is possible (without the grievous sacrifices
we have made in this paper) to generalize Reinhardt's conjecture
to a statement of the form:
\begin{quote}Any r.e.~theory that is generic in a very specific sense
(one that allows $E_2$ as building block)
remains true upon augmentation by knowledge of its own truthfulness. ($*$)\end{quote}
The specific notion of ``generic'' in order for this to work is somewhat complicated
and hinges on \cite{carlson1999}, putting it out of the present paper's scope.
It does admit Mechanicalness as building block,
so that ($*$) really is a generalization of Reinhardt's conjecture,
and the notion also admits full $E_2$, which in turn allows building blocks
containing $E_4$.
The main result of \cite{alexandercode} can also be generalized in this manner.
The methods of that paper are easily modified to prove:
\begin{quote}
For any r.e.~$\L_{\mathrm{EA}}$-theory $T$ that is generic (in the sense of Definition \ref{baregenericdefn}),
there is an $n\in\mathbb{N}$ such that $T'$ is true, where $T'$ is the smallest $K$-closed theory
containing $T$ along with the schema $\forall x(K\phi\leftrightarrow \langle
x,\overline{\ulcorner\phi\urcorner}\rangle\in W_{\overline n})$ ($\mathrm{FV}(\phi)\subseteq\{x\}$).
Less formally, any such generic knowing machine can be taught its own code and still remain true.
\end{quote}
One possible application of this paper is to reverse mathematics \cite{simpson}.
Since the results (except Lemma \ref{eaisupgeneric}) only use induction
up to $\omega\cdot\omega$,
suitable versions (minus Lemma \ref{eaisupgeneric} and
references to $\mathbb{N}$)
could be formalized and proved in weak subsystems of arithmetic.
\section{Introduction} \label{intro}
\subsection{Background and motivation}
Let $A$ be an Abelian variety defined over a number field $F$, and let $\Lambda$ be a subgroup of the Mordell-Weil group $A(F)$. For any prime $\mathfrak{p}$ (of $F$) of good reduction, we denote by $\Lambda_\mathfrak{p}$ the image of $\Lambda$ via the reduction map modulo $\mathfrak{p}$, and $F_\mathfrak{p}$ stands for the residue field of $F$ modulo $\mathfrak{p}$.
The following question was initiated in 2002 and was considered at the same time but independently by Gajda (in a letter to Kenneth Ribet in 2002, see~\cite{GaGo}) and Kowalski~\cite{Kowalski}, and it is now called \textit{detecting linear dependence}.
\begin{question}
\label{ques:WGquest}
Suppose that $P$ is a point of $A(F)$ such that for all but finitely many primes $\mathfrak{p}$ of $F$, as a point in $A(F_\mathfrak{p})$, we have $P\in \Lambda_\mathfrak{p}$. Does it then follow that $P\in \Lambda$?
\end{question}
An early result related to
this question is due to Schinzel~\cite{SchinS4}, who answered the analogous question affirmatively for
the multiplicative group in place of an Abelian variety.
Question~\ref{ques:WGquest} has been extensively studied in recent years and much progress has been made;
see~\cite{Bana,BGK,BK,GaGo,Jossen,JP,Perucca,Sadek,Weston} for more details and developments.
For example, Kowalski~\cite{Kowalski} has shown that the property in Question~\ref{ques:WGquest} holds for an elliptic curve and a cyclic subgroup, and Banaszak, Gajda and Kraso\'n~\cite{BGK} have established such a property for elliptic curves without complex multiplications and a finitely generated free subgroup.
In particular, Jossen~\cite{Jossen} has given an affirmative answer when $A$ is a simple Abelian variety (which automatically includes elliptic curves), in the stronger form that one only needs the hypothesis ``for a set of primes $\mathfrak{p}$ with natural density 1" instead of ``for all but finitely many primes $\mathfrak{p}$".
We remark that the answer to Question~\ref{ques:WGquest} is not always positive; see~\cite{JP} for a counterexample.
Here, motivated by Question~\ref{ques:WGquest}, we introduce and study what is, in some sense, its counterpart, called \textit{pseudolinear dependence}, in the case of elliptic curves.
Following the set up of~\cite{AGM}, which is crucial for some of our
approaches, we restrict ourselves to the case of
elliptic curves over the rational numbers, see Definitions~\ref{def1} and~\ref{def2}
below.
There is little doubt that one can extend~\cite{AGM}, and thus our results, to elliptic curves over number fields,
but this may require quite significant effort.
A result
of Banaszak and Kraso\'n~\cite[Theorem~7.7]{BK} replaces the
condition of linear dependence modulo all but finitely many primes by
linear dependence modulo a finite set of primes depending on all the initial data (including the point $P$);
recently, Sadek~\cite{Sadek} has given an explicit upper bound for the primes in such a set for a specific class of elliptic curves under the {\it Generalised Riemann Hypothesis\/} (GRH).
In fact, all results that are based on the Chebotarev Density Theorem involve only a finite set of primes depending on the initial data;
see also~\cite{GaGo}.
Here, using some new ideas, we show that any set of primes that detects the linear dependence should contain large primes; see
Theorems~\ref{rank0}, \ref{thm:uncond} and~\ref{thm:cond} below for more details.
We first fix some notation.
Let $E$ be an elliptic curve over $\mathbb{Q}$ of rank $r$ and of discriminant $\Delta_E$. We denote by $E(\mathbb{Q})$ the Mordell-Weil group of rational points on $E$. We also let $\Gamma$ be a subgroup of $E(\mathbb{Q})$ with rank $s$. We refer to~\cite{Silv} for a background on elliptic curves.
Similarly, for a prime $p$ of good reduction (that is, $p\nmid \Delta_E$), we let $E(\mathbb{F}_p)$ be the
group of $\mathbb{F}_p$-points in the reduction of $E$ to the finite field $\mathbb{F}_p$ of $p$ elements.
We also denote by $\Gamma_p$ the reduction of $\Gamma$ modulo $p$, which is a subgroup of $E(\mathbb{F}_p)$. In particular, $E(\mathbb{Q})_p$ stands for the reduction of $E(\mathbb{Q})$ modulo $p$.
\begin{defin}[$\mathbb{F}_p$-pseudolinear dependence]
\label{def1}
Given a prime $p$ of good reduction, we call a point $Q\in E(\mathbb{Q})$ an
\textit{$\mathbb{F}_p$-pseudolinearly dependent point} of
$\Gamma$ if $Q\not \in \Gamma$ but as a point in $E(\mathbb{F}_p)$ we have $Q\in \Gamma_p$.
\end{defin}
We remark that the $\mathbb{F}_p$-pseudolinear dependence equivalently means that such a point $Q \not\in \Gamma$ but $Q \in \Gamma + {\rm ker}_p$, where ${\rm ker}_p$ denotes the kernel of the reduction map modulo $p$.
\begin{defin}[$x$-pseudolinear dependence]
\label{def2}
We say that a point $Q \in E(\mathbb{Q})$ is an \textit{$x$-pseudolinearly dependent point} of
$\Gamma$ if $Q \not \in \Gamma$ but it is an
$\mathbb{F}_p$-pseudolinearly dependent point of $\Gamma$ for all primes $p\le x$ of good reduction.
\end{defin}
In particular, if $\Gamma$ is generated by a point $P$, we also say that such a point $Q$ is an \textit{$x$-pseudomultiple} of $P$.
This notion is an elliptic analogue of the notions
of $x$-pseudosquares and $x$-pseudopowers over the integers,
which dates back to the classical results of Schinzel~\cite{Schin1,SchinS2,SchinS3}
and has recently been studied in~\cite{BLSW,BKPS,KPS,PoSh}.
In this paper, we explicitly construct such an $x$-pseudolinearly
dependent point $Q$ of $\Gamma$ provided that $s<r$ and give upper
bounds for its canonical height,
and then we also deduce lower
bounds for the height of any $x$-pseudo\-linearly dependent
point in some special cases.
In more detail, we first briefly consider the existence problem of $x$-pseudolinearly dependent points. Further, essentially using a result of Gupta and Murty~\cite[Lemma~14]{GuMu}, we obtain
an unconditional upper bound on the height of an
$x$-pseudolinearly dependent point of $\Gamma$ if $s<r$. Then, using a result of Akbary,
Ghioca and Murty~\cite[Theorem~1.2]{AGM}, we obtain under GRH a
stronger conditional upper bound provided that $s \ge 19$ if $E$ is a non-CM curve, or $s \ge 7$ if $E$ is a CM curve. In addition, using the theory of the number fields generated by certain points of $E$ and applying the effective Chebotarev Density Theorem, we establish unconditional and conditional lower bounds for the height of such points in some special cases.
In the last section, we pose some problems which may merit further study.
\subsection{General notation}
Throughout the paper, we use the Landau symbols $O$ and $o$ and the Vinogradov symbol $\ll$ (sometimes written as $\gg$). We recall that the assertions $U=O(V)$ and $U\ll V$ are both equivalent to the inequality $|U|\le cV$ with some absolute constant $c$, while $U=o(V)$ means that $U/V\to 0$. In this paper, the constants implied in the symbols $O$ and $\ll$ depend only possibly on $E$ and $\Gamma$.
The letter $p$, with or without
subscripts, always denotes a prime.
As usual, $\pi(x)$ denotes the number of primes not exceeding $x$.
We use $\hat{h}$ to denote the canonical height of points on $E$,
see Section~\ref{const} for a precise definition.
For a finite set $S$, we use $\# S$ to denote its cardinality.
\subsection{Main results}
Here, we let $E$ be an elliptic curve of rank $r$ defined over $\mathbb{Q}$, and $\Gamma$ a subgroup of $E(\mathbb{Q})$ with rank $s$.
We first state several upper bounds for the height of pseudolinearly dependent points.
\begin{theorem} \label{rank0}
Suppose that $r\ge 1$ and $s=0$. Then
for a sufficiently large $x$, there is a rational
point $Q \in E(\mathbb{Q})$ of height
$$
\hat{h}(Q) \le \exp\(2x-2\log(\#\Gamma) \frac{x}{\log x}+ O(x/(\log x)^2)\)
$$
such that $Q$ is an $x$-pseudolinearly dependent point of
$\Gamma$.
\end{theorem}
\begin{theorem}
\label{thm:uncond}
Assume that $r\ge 2$ and $1\le s<r$. Then
for a sufficiently large $x$, there is a rational
point $Q \in E(\mathbb{Q})$ of height
$$
\hat{h}(Q) \le \exp\(\frac{4}{s+2}x +O(x/\log x)\)
$$
such that $Q$ is an $x$-pseudolinearly dependent point of
$\Gamma$.
\end{theorem}
\begin{theorem}
\label{thm:cond}
Suppose that either $19\le s <r$ if $E$ is a non-CM curve, or $7 \le s <r$ if $E$ is a CM curve. Then under GRH and for a sufficiently large $x$, there is a rational
point $Q \in E(\mathbb{Q})$ of height
$$
\hat{h}(Q) \le \exp\left( 4x(\log\log x)/\log x+O(x/\log x) \right)
$$
such that $Q$ is an $x$-pseudolinearly dependent point of
$\Gamma$.
\end{theorem}
Notice that by Definition~\ref{def2} the condition for $x$-pseudolinearly dependent points of $\Gamma$ becomes quite strong as $x$ tends to infinity. This suggests that there may exist lower bounds for the height of such points. Here, we establish some partial results.
\begin{theorem} \label{thm:lower0}
Suppose that $r\ge 1$ and $s=0$. Then
for any sufficiently large $x$ and any $x$-pseudolinearly dependent point $Q$ of $\Gamma$, we have
$$
\hat{h}(Q) \ge \frac{1}{\#\Gamma}x/\log x+O(x/(\log x)^2).
$$
\end{theorem}
\begin{theorem}
\label{thm:lower}
Assume that $r\ge 2$, $1\le s<r$, and $\Gamma$ is a free subgroup of $E(\mathbb{Q})$. Then for any sufficiently large $x$ and any $x$-pseudolinearly dependent point $Q$ of $\Gamma$, we have
$$
\hat{h}(Q) \ge \exp \( (\log x)^{1/(2s+6)+o(1)} \);
$$
and furthermore assuming GRH, we have
$$
\hat{h}(Q) \ge \exp \( x^{1/(4s+12)+o(1)} \).
$$
\end{theorem}
\section{Preliminaries}
\subsection{Jossen's result}
We want to highlight the following result, which is
implied by the main theorem of Jossen~\cite{Jossen}.
As mentioned before, the original result is about simple Abelian varieties over number fields.
\begin{lemma}
\label{lem:Jossen}
Let $E$ be an elliptic curve over $\mathbb{Q}$, and let $\Gamma$ be a subgroup of $E(\mathbb{Q})$ and $Q\in E(\mathbb{Q})$ a rational point. If the set of primes $p$ for which $Q\in \Gamma_p$ has asymptotic natural density 1, then $Q\in \Gamma$.
\end{lemma}
\subsection{Heights on elliptic curves}
We briefly recall the definitions of the Weil height and the canonical height of points in $E(\mathbb{Q})$, and the relation between them;
see~\cite[Chapter~VIII, Section~9]{Silv} for more details.
For a point $P=(x,y)\in E(\mathbb{Q})$ with $x=a/b$, where $a$ and $b$ are coprime integers, we define the Weil height of $P$ as
$$
{\mathfrak h}(P)=\log \max\{|a|,|b|\},
$$
and the canonical height of $P$ is defined as
$$
\hat{h}(P)=\lim_{n\to +\infty}\frac{{\mathfrak h}(2^nP)}{4^n}.
$$
These two heights are related by the following:
$$
\hat{h}(P)={\mathfrak h}(P)+O(1),
$$
where the implied constant depends only on $E$. In addition, for any $P\in E(\mathbb{Q})$ and $m\in \mathbb{Z}$, we have
$$
\hat{h}(mP)=m^2\hat{h}(P);
$$
furthermore, $\hat{h}(P)=0$ if and only if $P$ is a torsion point.
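These facts will be used as follows: the points constructed in Section~\ref{const} below have the form $L_xR$ for a suitable integer $L_x$ and a fixed point $R$ of infinite order, so their canonical height is exactly $L_x^2\hat{h}(R)$; bounding the integer $L_x$ is therefore the heart of the matter for our upper bounds.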
\subsection{Two useful facts about elliptic curves}
First, for any prime $p$ of good reduction, the reduction map modulo $p$ from $E(\mathbb{Q})$ to $E(\mathbb{F}_p)$ is injective when restricted to the torsion subgroup; see~\cite[Chapter~VII, Proposition~3.1]{Silv}. Hence, if $E(\mathbb{Q})$ has rank 0, then there is no $\mathbb{F}_p$-pseudolinear dependence, and thus there is no $x$-pseudolinear dependence in $E(\mathbb{Q})$.
Second, every rational point $P$ in $E(\mathbb{Q})$ has a representation of the form
\begin{equation}
\label{coordinate}
P=\left( \frac{m}{k^2},\frac{n}{k^3} \right),
\end{equation}
where $m,n$ and $k$ are integers with $k\ge 1$ and $\gcd(m,k)=\gcd(n,k)=1$; see~\cite[page~68]{Tate}.
So, for any prime $p$ of good reduction, $P \equiv O_E$ modulo $p$ if and only if $p\mid k$. In particular, given a point $P\in E(\mathbb{Q})$, there are only finitely many primes $p$ such that $P \equiv O_E$ modulo $p$.
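As a toy illustration (the curve and point here serve only as an example and are not used elsewhere), consider $E: y^2=x^3-2$, for which $\Delta_E=-1728$, so the primes of bad reduction are $2$ and $3$. The point $P=(3,5)$ has $k=1$ in~\eqref{coordinate}, while a direct computation gives
$$
2P=\left(\frac{129}{10^2},\,\frac{-383}{10^3}\right),
$$
so $2P$ has $k=10$, and hence $2P\equiv O_E$ modulo the good prime $p=5$ (the other prime divisor of $k$, namely $2$, is a prime of bad reduction); consistently, $P$ reduces modulo $5$ to the $2$-torsion point $(3,0)$.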
From the above fact, it is easy to see that if $E(\mathbb{Q})\ne \{O_E\}$, then there are at most finitely many primes $p$ of good reduction such that $E(\mathbb{Q})_p=\{O_E\}$. This ensures that our definitions and considerations are meaningful.
Indeed, if $E(\mathbb{Q})$ has more than one torsion point, then by the injectivity of the reduction map restricted to the torsion subgroup, we know that $E(\mathbb{Q})_p\ne \{O_E\}$ for any prime $p$ of good reduction. Otherwise, if $E(\mathbb{Q})$ is a free abelian group of rank $r$ generated by $P_1,\ldots,P_r$, then by the above discussion there exists a prime $\ell$ such that for any prime $p>\ell$ of good reduction, at least one $P_i$ ($1\le i \le r$) satisfies $P_i\not \equiv O_E$ modulo $p$, that is, $E(\mathbb{Q})_p\ne \{O_E\}$.
\subsection{Number fields derived from elliptic curves}
\label{preliminary2}
Following~\cite{AGM,GuMu,Lang1977}, we recall some basic facts about the number fields generated by division points and points of infinite order on $E$.
Here, we assume that $E(\mathbb{Q})$ is of rank $r\ge 1$.
Let $\ell$ be a prime, and let $P_1,P_2,\ldots,P_n\in E(\mathbb{Q})$ be independent points of infinite order on $E$. Consider the number field
$$
L=\mathbb{Q}(E[\ell],\ell^{-1}P_1,\ldots, \ell^{-1}P_n),
$$
where $E[\ell]$ is the set of $\ell$-torsion points on $E$, and each $\ell^{-1}P_i$ ($1\le i \le n$) is a fixed point whose multiplication by $\ell$ is the point $P_i$. Moreover, we denote $K=\mathbb{Q}(E[\ell])$ and $K_i=\mathbb{Q}(E[\ell],\ell^{-1}P_i)$ for every $1\le i \le n$.
Now, both the extensions $K/\mathbb{Q}$ and $L/\mathbb{Q}$ are Galois extensions. For the Galois groups, ${\rm Gal}(K/\mathbb{Q})$ is a subgroup of ${\rm GL}_2(\mathbb{F}_\ell)$, ${\rm Gal}(L/K)$ is a subgroup of $E[\ell]^n$, and ${\rm Gal}(L/\mathbb{Q})$ is a subgroup of the semi-direct product
$$
{\rm GL}_2(\mathbb{F}_\ell) \ltimes E[\ell]^n,
$$
which implies that for any $i \ne j$ with $1\le i,j \le n$, we have $K_i \cap K_j=K$. In particular, we have
\begin{equation}
\label{field deg}
[K:\mathbb{Q}]< \ell^4 \qquad\mbox{and}\qquad [L:K]\le \ell^{2n}.
\end{equation}
Furthermore, assuming that $E$ is a CM curve, Ribet~\cite{Ribet} has shown that for sufficiently large $\ell$,
the Galois group ${\rm Gal}(L/K)$ is isomorphic to $E[\ell]^n$ via the map
$$
(\ell^{-1}P_1,\ldots,\ell^{-1}P_n) \mapsto (\ell^{-1}P_1+A_1,\ldots,\ell^{-1}P_n+A_n),
$$
where $(A_1,\ldots,A_n)\in E[\ell]^n$.
If $E$ is a non-CM curve, this is still true by the theorems of Bachmakov (see~\cite{Bachmakov}
or~\cite[Chapter~V, Theorem~5.2]{Lang1978}).
In addition, the primes which ramify in the extension $L/\mathbb{Q}$ are exactly those primes dividing $\ell \Delta_E$.
Now, fix a number field $K_i$ with $1\le i \le n$, and note that every point $P$ in $E(K_i)$ has homogeneous coordinates of the form $[x,y,z]$ with $x,y,z\in O_{K_i}$ and at least one of $x,y,z$ in $O_{K_i}^*$, where $O_{K_i}$ is the ring of integers and $O_{K_i}^*$ is its group of units. Pick a prime $p\nmid \ell\Delta_E$ which splits completely in $K$, and let $\mathfrak{p}_i$ be a prime ideal of $O_{K_i}$ above $p$. Then, the reduction map modulo $\mathfrak{p}_i$ is defined by
$$
E(K_i) \to E(O_{K_i}/\mathfrak{p}_i), \quad P=[x,y,z] \mapsto [x,y,z] \mod \mathfrak{p}_i.
$$
So, by the construction of $K_i$ and noticing the choice of $p$, the equation
\begin{equation}
\label{point eq}
\ell X= P_i
\end{equation}
in the unknown $X$ has a solution in $E(\mathbb{F}_p)$ if and only if $[O_{K_i}/\mathfrak{p}_i:\mathbb{F}_p]=1$, that is, if and only if $p$ splits completely in $K_i$.
In particular, if we indeed have some $K_i$ with $K_i\neq K$, then by the above discussion we can choose a conjugacy class $C$ in the Galois group ${\rm Gal}(L/\mathbb{Q})$ with the following properties: each of its corresponding primes $p$ is unramified in $L/\mathbb{Q}$ and is a prime of good reduction, every $\sigma \in C$ is the identity map when restricted to $K$, and $p$ splits completely in some of the fields $K_i$ but does not split completely in the remaining fields $K_j$ (these latter fields must not be trivial extensions of $K$). This means that for some points $P_i$ the equation~\eqref{point eq} has a solution in $E(\mathbb{F}_p)$, while for the other points there is no such solution.
\subsection{The Chebotarev Density Theorem}
For the convenience of the reader, we restate two useful results as follows. The first one is due to Hensel, see~\cite[Proposition~6]{Serre1981}; while the second is about the least prime ideal in the Chebotarev Density Theorem, see~\cite{Lagarias1977,Lagarias1979}.
\begin{lemma}
\label{Hensel}
Let $L/\mathbb{Q}$ be a Galois extension of degree $n$ which is ramified only at the primes $p_1,\ldots,p_m$. Then, we have
$$
\log |D_L|\le n\log n + n\sum_{i=1}^{m}\log p_i,
$$
where $D_L$ is the discriminant of $L/\mathbb{Q}$.
\end{lemma}
\begin{lemma}
\label{Chebotarev}
There exists an effectively computable positive absolute constant $c_1$ such that for any number field $K$, any finite Galois extension $L/K$ and any conjugacy
class $C$ of ${\rm Gal}(L/K)$, there exists a prime ideal $\mathfrak{p}$ of $K$ which is unramified in $L$, for which the Artin symbol $\left[\frac{L/K}{\mathfrak{p}}\right] = C$ and the norm $N_{K/\mathbb{Q}}(\mathfrak{p})$ is a rational prime, and which satisfies the bound
$$
N_{K/\mathbb{Q}}(\mathfrak{p}) \le 2 |D_L|^{c_1};
$$
furthermore, under GRH, there is an effectively computable absolute constant $c_2$ such that
$$
N_{K/\mathbb{Q}}(\mathfrak{p}) \le c_2 (\log |D_L|)^2.
$$
\end{lemma}
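For orientation, substituting the bound of Lemma~\ref{Hensel} into Lemma~\ref{Chebotarev} (taking $K=\mathbb{Q}$) yields, by direct computation, the explicit form in which such results are typically applied: for a Galois extension $L/\mathbb{Q}$ of degree $n$ which is ramified only at the primes $p_1,\ldots,p_m$, there is a rational prime $p$, unramified in $L$, with $\left[\frac{L/\mathbb{Q}}{p}\right] = C$ and
$$
p \le 2\(n\prod_{i=1}^{m}p_i\)^{c_1 n},
\qquad\mbox{and, under GRH,}\qquad
p \le c_2\, n^2\(\log n+\sum_{i=1}^{m}\log p_i\)^2.
$$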
\section{The Existence and Construction of $x$-Pseudolinearly Dependent Points}
\subsection{Cases of existence and non-existence} \label{exist}
Before proving our main results, we want to first consider the existence problem of pseudolinearly dependent points. In this section, $E$ is a fixed elliptic curve of rank $r$ over $\mathbb{Q}$,
and $\Gamma$ is a fixed subgroup of $E(\mathbb{Q})$ with rank $s$.
If the ranks of $E$ and $\Gamma$ satisfy $s<r$, then $x$-pseudolinearly dependent points of $\Gamma$ do exist. Indeed, since $s<r$, we can take a point $R\in E(\mathbb{Q})$ of infinite order such that
$\gen{R} \cap \Gamma = \{O_E\}$, where $O_E$ is the point at infinity of $E$.
Pick an arbitrary point $P\in \Gamma$; it is easy to see that the following point
\begin{equation}
\label{eq:pseudolin}
Q=P+{\mathrm{lcm}}\,\{\# E(\mathbb{Q})_p / \# \Gamma_p~:~\textrm{$p\le x$ of good reduction}\}R,
\end{equation}
where, as usual, ``${\mathrm{lcm}}\,$'' means the least common multiple,
is an $x$-pseudolinearly dependent point of $\Gamma$ for any
sufficiently large $x>0$ (that is, there exists at least one prime of good reduction not greater than $x$).
In the construction~\eqref{eq:pseudolin}, we can see that $\gen{Q} \cap \Gamma = \{O_E\}$. Actually,
when $x$ is sufficiently large, any $x$-pseudolinearly dependent point of $\Gamma$ must satisfy this property.
\begin{prop} \label{independent}
There exists a constant $M$ depending on $E$ and $\Gamma$ such that for any $x>M$, every $x$-pseudolinearly dependent point $Q$ of $\Gamma$ satisfies $\gen{Q} \cap \Gamma = \{O_E\}$.
\end{prop}
\begin{proof}
Consider the subgroup
$$
\tilde{\Gamma}=\{P\in E(\mathbb{Q}): \textrm{$mP\in \Gamma$ for some $m\in \mathbb{Z}$}\}.
$$
Notice that $\tilde{\Gamma}$ is also a finitely generated group, and by construction each element in the quotient group $\tilde{\Gamma}/\Gamma$ is of finite order. So, $\tilde{\Gamma}/\Gamma$ is a finite group.
Then, we let
$n=[\tilde{\Gamma}:\Gamma]$ and assume that $\tilde{\Gamma}/\Gamma=\{P_0=O_E,P_1,\ldots,P_{n-1}\}$. If $n=1$, there is nothing to prove. Now, we assume that $n>1$.
For any $P_i$, $1\le i \le n-1$, since $P_i\not\in \Gamma$, by Lemma~\ref{lem:Jossen} there exists a prime $p_i$ of good reduction such that $P_i\not \in \Gamma_{p_i}$. Then, we choose a constant, say $M$, such that $M\ge p_i$ for any $1\le i \le n-1$.
Thus, when $x>M$, any $P_i$ ($1\le i\le n-1$) is not an $x$-pseudolinearly dependent point of $\Gamma$, and then any point $P\in \tilde{\Gamma}$ is also not such a point. This in fact completes the proof.
\end{proof}
The following result shows that the case $s<r$ considered in~\eqref{eq:pseudolin} is the only meaningful case for $x$-pseudolinearly dependent points when $x$ is sufficiently large.
\begin{prop}
If $\Gamma$ is a full rank subgroup of $E(\mathbb{Q})$ (that is $s=r$), then there exists a constant $M$ depending on $E$ and $\Gamma$ such that for any $x>M$, there is no $x$-pseudolinearly dependent point of $\Gamma$.
\end{prop}
\begin{proof}
Since $\Gamma$ is of full rank, the index $[E(\mathbb{Q}):\Gamma]$ is finite. Let $n=[E(\mathbb{Q}):\Gamma]$.
We can assume that $n>1$.
Now, we suppose that $E(\mathbb{Q})/\Gamma=\{P_0=O_E,P_1,\ldots,P_{n-1}\}$. So, $P_i\not \in \Gamma$ for any $1\le i \le n-1$.
For any $P_i$ ($1\le i \le n-1$), since $P_i\not\in \Gamma$, by Lemma~\ref{lem:Jossen} there exists a prime $p_i$ of good reduction such that $P_i\not \in \Gamma_{p_i}$. Then, we choose a constant, say $M$, such that $M\ge p_i$ for any $1\le i \le n-1$.
Pick an arbitrary point $Q\in E(\mathbb{Q})\setminus \Gamma$; then there is exactly one $P_i$ ($1\le i \le n-1$) such that $Q-P_i \in \Gamma$. By the choice of $p_i$, we deduce that $Q\not\in \Gamma_{p_i}$. Thus, $Q$ is not an $x$-pseudolinearly dependent point of $\Gamma$ for any $x> M$.
\end{proof}
We remark that directly by Lemma~\ref{lem:Jossen}, any given point in $E(\mathbb{Q})$ is not an $x$-pseudolinearly dependent point of $\Gamma$ for $x$ sufficiently large.
Note that $E(\mathbb{Q})$ has finitely many torsion points,
so by choosing large enough $x$, none of the torsion points in $E(\mathbb{Q})$ is an $x$-pseudolinearly dependent point of $\Gamma$.
As an example, we present the following explicit result.
Note that by definition, if a point in $E(\mathbb{Q})$ is not an $\mathbb{F}_p$-pseudolinearly dependent point of $\Gamma$, then it is not an $x$-pseudolinearly dependent point of $\Gamma$ for any $x\ge p$.
\begin{prop}
Suppose that $E(\mathbb{Q})$ is of rank 1 and $E(\mathbb{Q})=\gen{P}$.
Fix a prime $p$ of good reduction, let $m$ be a positive divisor
of $\# E(\mathbb{Q})_p$, and set $\Gamma=\gen{mP}$. Then, there is no $\mathbb{F}_p$-pseudolinearly dependent point of $\Gamma$.
\end{prop}
\begin{proof}
If $m=1$, then nothing needs to be done. Now we assume that $m>1$.
Suppose that there exists a rational point $Q=nP\in E(\mathbb{Q})$ such that $Q\not\in \Gamma$ but $Q\in \Gamma_p$, that is $Q$ is an $\mathbb{F}_p$-pseudolinearly dependent point of $\Gamma$. Then, we have $m\nmid n$, and $Q\equiv kmP$ modulo $p$ for some integer $k$. Thus, we have $(km-n)P\equiv O_E$ modulo $p$. Then, noticing the choice of $m$, we must have $m\mid (km-n)$, and so $m\mid n$. This leads to a contradiction. So, there is no such point $Q$, and the desired result follows.
\end{proof}
\subsection{Construction} \label{const}
In the sequel, $E$ is a fixed elliptic curve of rank $r\ge 1$ over $\mathbb{Q}$,
and $\Gamma$ is a given subgroup of $E(\mathbb{Q})$ with rank $s<r$.
In order to get upper bounds on the height of pseudolinearly dependent points, the following construction is slightly different from what we give in Section~\ref{exist}.
For any prime $p$ of good reduction related to $E$, we let
$$
\qquad N_p = \# E(\mathbb{F}_p) \qquad\mbox{and}\qquad T_p = \#\Gamma_p,
$$
and set $N_p = T_p = 1$ for all other primes $p$. Given a sufficiently large $x>0$ (at least one prime of good reduction is not greater than $x$), we also define
$$
L_x= {\mathrm{lcm}}\,\{N_p/T_p~:~ p \le x\}.
$$
Take a point $R\in E(\mathbb{Q})$ of infinite order such that
$\gen{R} \cap \Gamma = \{O_E\}$; then pick an arbitrary point $P\in \Gamma$ and set
$$
Q = P + L_xR.
$$
It is easy to see that $Q\not\in \Gamma$ but as a point in $E(\mathbb{F}_p)$, $Q \in \Gamma_p$ for every prime $p\le x$ of good reduction.
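To make the construction concrete, here is a toy instance (the rank and generator stated below are classical, and the point counts can be checked by direct enumeration; none of this is used elsewhere). For the curve $E: y^2=x^3-2$ from our earlier toy example, $E(\mathbb{Q})$ has trivial torsion and rank $1$ with generator $(3,5)$. Take $\Gamma=\{O_E\}$, so that $s=0$ and $T_p=1$ for every $p$. The primes of good reduction up to $x=7$ are $5$ and $7$, and direct counting gives $N_5=6$ and $N_7=7$; hence
$$
L_7={\mathrm{lcm}}\,\{6,7\}=42,
$$
and with, say, $P=O_E$ and $R=(3,5)$ we obtain the point $Q=42\,(3,5)$.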
Since the coordinates of points in $E(\mathbb{Q})$ are rational numbers, for any subset $S \subseteq E(\mathbb{Q})$ there exists a point with the smallest Weil height among all the points in $S$. So, noticing $s<r$, we choose a point with smallest Weil height in the subset consisting of non-torsion points $R$ in $E(\mathbb{Q})\setminus \Gamma$ with $\gen{R} \cap \Gamma = \{O_E\}$; we denote this point by $R_{\rm min}$. Thus, ${\mathfrak h}(R_{\rm min})$ is fixed once $E$ and $\Gamma$ are given.
Now, we define a point $Q_{\rm min} \in E(\mathbb{Q})$ as follows:
\begin{equation}\label{Qmin}
Q_{\rm min}=L_xR_{\rm min}.
\end{equation}
As before, $Q_{\rm min}\not\in \Gamma$ but $Q_{\rm min} \in \Gamma_p$ for every prime $p\le x$ of good reduction.
We also have
\begin{equation} \label{cheight}
\hat{h}(Q_{\rm min})=L_x^2\hat{h}(R_{\rm min})= L_x^2({\mathfrak h}(R_{\rm min})+O(1)) \ll L_x^2,
\end{equation}
which comes from the fact that ${\mathfrak h}(R_{\rm min})$ is fixed when $E$ and $\Gamma$ are given.
Finally, we want to give a trivial upper bound for $\hat{h}(Q_{\rm min})$, which can serve as a benchmark for our main results.
Recalling the Hasse bound
$$
|N_p - p-1| \le 2p^{1/2}
$$
for any prime $p$ of good reduction (see~\cite[Chapter~V, Theorem~1.1]{Silv}), we derive the inequality
\begin{equation}
\begin{split}
\label{eq:prod Np 1}
\prod_{p\le x}N_p & \le \prod_{p\le x}(p+2p^{1/2} + 1) = \prod_{p\le x}p (1+p^{-1/2})^2\\
&= \exp\(\sum_{p\le x}\log p+2\sum_{p\le x}\log(1+p^{-1/2}) \)\\
&\le \exp\(\sum_{p\le x}\log p+ 2\sum_{p\le x} p^{-1/2} \)\\
&= \exp\(O\(\sqrt{x}/\log x\)\)\prod_{p\le x} p .
\end{split}
\end{equation}
Now using the prime number theorem
with the currently best known error term:
\begin{equation}
\label{eq:PNT psi}
\sum_{p\le x} \log p = x + O\(x \exp\(-c (\log x)^{3/5} (\log \log x)^{-1/5}\)\)
\end{equation}
with $x\ge 3$ and some absolute constant $c > 0$,
see~\cite[Corollary~8.30]{IwKow}, we obtain
\begin{equation}
\label{eq:prod Np 2}
\prod_{p\le x}N_p \le \exp\(x+O\(x \exp\(-c (\log x)^{3/5} (\log \log x)^{-1/5}\)\)\).
\end{equation}
Combining~\eqref{eq:prod Np 2} with~\eqref{cheight},
we derive the following trivial upper bound for $\hat{h}(Q_{\rm min})$:
\begin{equation}
\begin{split}
\label{trivial}
\hat{h}(Q_{\rm min})& \ll L_x^2 \le \prod_{p\le x}N_p^2 \\
&\le \exp\(2x+O\(x \exp\(-c (\log x)^{3/5} (\log \log x)^{-1/5}\)\)\).
\end{split}
\end{equation}
Next, we give some better upper bounds for $\hat{h}(Q_{\rm min})$, which automatically provide proofs of our main theorems on the upper bounds.
\section{Proofs of upper bounds: Theorems~\ref{rank0}, \ref{thm:uncond} and~\ref{thm:cond}}
\label{sec upper}
As mentioned above, to achieve our purpose, it suffices to bound the canonical height of $Q_{\rm min}$,
given by~\eqref{Qmin}, that is, $\hat{h}(Q_{\rm min})$.
Here, we also use the notation and some results of Section~\ref{const},
in particular, the bound~\eqref{cheight}.
By definition, we get
$$
L_x\le \prod_{p\le x}N_p / T_p.
$$
Our approach is to get upper and lower bounds for
$$\prod_{p\le x}N_p \qquad\mbox{and}\qquad \prod_{p\le x}T_p,
$$
respectively.
We also need the following result from~\cite[Proposition~5.4]{AGM}
(see~\cite[Lemma~14]{GuMu} for a previous result).
\begin{lemma}
\label{lem:Tp1}
For any real $z> 1$, we have
$$
\#\{p~:~T_p < z\} \ll z^{1+2/s}/\log z.
$$
\end{lemma}
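Note that the threshold $z=x^{s/(s+2)}$ makes $z^{1+2/s}=x$, so Lemma~\ref{lem:Tp1} bounds the number of primes $p$ with $T_p<x^{s/(s+2)}$ by $O(x/\log x)$; this is what motivates the choice of $Z_0$ in the proof of Lemma~\ref{lem:height Qm1} below.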
\subsection{Proof of Theorem~\ref{rank0}}
Since $\Gamma$ has rank zero, by the injectivity of the reduction map restricted to the torsion subgroup, we can see that $T_p=\# \Gamma$ for any prime $p$ of good reduction.
We also recall the prime
number theorem in the following simplified form
\begin{equation}
\label{eq:PNT pi}
\pi(x) = \frac{x}{\log x} + O(x/(\log x)^2),
\end{equation}
which follows immediately from~\eqref{eq:PNT psi}.
Now, using~\eqref{eq:prod Np 2} and~\eqref{eq:PNT pi} we have
\begin{align*}
L_x & \le (\# \Gamma)^{-\pi(x)+O(1)}\prod_{p\le x}N_p \\
& \le \exp\(x-\log(\#\Gamma)\frac{x}{\log x} + O(x/(\log x)^2)\).
\end{align*}
From~\eqref{cheight}
we conclude that for a sufficiently large $x>0$, we have
$$
\hat{h}(Q_{\rm min})\le \exp\(2x-2\log(\#\Gamma) \frac{x}{\log x}+ O(x/(\log x)^2)\),
$$
which completes the proof.
\subsection{Proof of Theorem~\ref{thm:uncond}}
The desired result follows from the following estimate
on the canonical height of $Q_{\rm min}$.
\begin{lemma}
\label{lem:height Qm1}
If $s\ge 1$, then for a sufficiently large $x>0$, we have
$$
\hat{h}(Q_{\rm min})\le \exp\(\frac{4}{s+2}x +O(x/\log x)\).
$$
\end{lemma}
\begin{proof}
For a sufficiently large $x$, we define
$$
J = \fl{\frac{s}{s+2} \log x}\ge 1 \qquad\mbox{and}\qquad Z_j = x^{s/(s+2)}e^{-j}, \qquad j= 0, \ldots, J.
$$
Here $e$ is the base of the natural logarithm. Note that $1\le Z_J < e$.
Since $s\ge 1$, the number of primes $p$ such that $T_p=1$ or $2$ is finite; we denote this number by $N$, which depends only on $\Gamma$. Let $N_0$ be the number of primes $p \le x$ with $T_p\ge Z_0$. Furthermore, for $j =1, \ldots, J$ we define
$N_j$ as the number of primes $p \le x$ with $Z_{j-1} > T_p\ge Z_j$.
Clearly
$$
N+\sum_{j=0}^J N_j \ge \pi(x).
$$
So, noticing $Z_0=x^{s/(s+2)}$ we now derive
$$
\prod_{p\le x}T_p \ge \prod_{j=0}^J Z_j^{N_j}
\ge Z_0^{\pi(x)-N} \prod_{j=0}^J e^{-j N_j}
= Z_0^{\pi(x)-N} \exp(-\Lambda),
$$
where
$$
\Lambda = \sum_{j=1}^J j N_j.
$$
Recalling the definition of $Z_0$, and using~\eqref{eq:PNT pi}, we obtain
\begin{equation}
\label{eq:prod Tp}
\prod_{p\le x}T_p \ge \exp\(\frac{s}{s+2} x -\Lambda + O(x/\log x)\).
\end{equation}
To estimate $\Lambda$, we note that by Lemma~\ref{lem:Tp1}, for any positive integer $I \le J$
we have
\begin{align*}
\sum_{j=I}^J N_j & \le
\#\{p~:~T_p < Z_0 e^{-I+1} \} \ll \frac{\(Z_0 e^{-I+1}\)^{1+2/s}}{\log Z_0 -I+1}.
\end{align*}
Thus for $I \le J/2$, noticing $J\le \log Z_0$ we obtain
\begin{equation}
\label{eq:small I}
\sum_{j=I}^J N_j \ll \frac{\(Z_0 e^{-I}\)^{1+2/s}}{\log Z_0} \ll e^{-I(1+2/s)}\frac{x}{\log x},
\end{equation}
while for any $J/2 < I \le J$ we use the bound
\begin{equation}
\label{eq:large I}
\sum_{j=I}^J N_j \ll \(Z_0 e^{-I+1}\)^{1+2/s} \ll \(\sqrt{Z_0}\)^{1+2/s} = x^{1/2}.
\end{equation}
Hence, via partial summation, combining~\eqref{eq:small I} and ~\eqref{eq:large I},
we derive
\begin{align*}
\Lambda & = \sum_{I=1}^J \sum_{j=I}^J N_j \ll
\frac{x}{\log x} \sum_{1 \le I \le J/2} e^{-I(1+2/s)} + x^{1/2} \sum_{J/2 < I \le J} 1\\
& \ll \frac{x}{\log x} + J x^{1/2} \ll \frac{x}{\log x} .
\end{align*}
This bound on $\Lambda$, together with~\eqref{eq:prod Tp}, implies
$$
\prod_{p\le x}T_p \ge \exp\(\frac{s}{s+2} x +O(x/\log x)\).
$$
Hence, using~\eqref{eq:prod Np 2}, we obtain
$$
L_x \le \prod_{p\le x}N_p / T_p \le \exp\(\frac{2}{s+2} x +O(x/\log x)\).
$$
Therefore, the desired result follows from the bound~\eqref{cheight}.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:cond}}
We first restate two general results from~\cite[Theorems~1.2 and~1.4]{AGM} in a form convenient for our applications.
\begin{lemma}\label{lem:nonCM}
Assume that $E$ is a non-CM curve and $s \ge 19$. Under GRH, for $x\ge 2$ we have
$$
\#\{p\le x~:~T_p < p/(\log p)^2\} \ll x/(\log x)^2.
$$
\end{lemma}
\begin{proof}
Since there are only finitely many primes which do not yield good reductions related to $E$,
we can only consider primes $p$ of good reduction (that is $p \nmid \Delta_E$).
Here, we directly use the notation and follow the arguments in the
proof of~\cite[Theorem~1.2, Part~(a)]{AGM}, where
we choose the function $f(x)$ as $f(x)=(\log x)^2$.
Let
$\mathcal{B}_1$ and $\mathcal{B}_2$ be two sets defined in~\cite{AGM} such that
$$
\#\{p\le x~: p\nmid \Delta_E, T_p<p/(\log p)^2\}\le \#\mathcal{B}_1 + \#\mathcal{B}_2 +O(x/(\log x)^2),
$$
where $O(x/(\log x)^2)$ comes from $\pi(x/\log x)=O(x/(\log x)^2)$. In particular, we have
$$
\#\mathcal{B}_1 \ll \frac{x}{(\log x)^{(s+2)/s}\cdot (s(s+2)^{-1} \log x-\log \log x)}
$$
and
$$
\#\mathcal{B}_2 \ll \frac{x}{\log x \cdot g(x)^{1-\alpha}}+
O\( x^{1/2+\alpha+\(5+\alpha/2\)\cdot\(2/(s+2)+\alpha\)} \),
$$
where $g(x)=f(x/\log x)/3$, and the positive real number $\alpha$ is chosen such that
$$
\frac{1}{2}+\alpha+\(5+\frac{\alpha}{2}\)\cdot\(\frac{2}{s+2}+\alpha\)<1,
$$
which at least requires that $1/2+6\alpha<1$, that is $\alpha<1/12$.
Note that such $\alpha$ indeed exists because $s\ge 19$.
It is easy to see that
$$
\#\mathcal{B}_1 \ll x/(\log x)^2 \qquad\mbox{and}\qquad \#\mathcal{B}_2 \ll x/(\log x)^2,
$$
where the second upper bound comes from $2(1-\alpha)>1$.
Collecting these estimates, we get the required upper bound.
\end{proof}
\begin{lemma}\label{lem:CM}
Assume that $E$ is a CM curve and $s \ge 7$.
Under GRH, for $x\ge 2$ we have
$$
\#\{p\le x~:~T_p < p/(\log p)^2\} \ll x/(\log x)^2.
$$
\end{lemma}
\begin{proof}
The proof here almost follows the arguments in the proof of~\cite[Theorem~1.4]{AGM} only with
a few minor changes, where as in Lemma~\ref{lem:nonCM} we again
choose the function $f(x)$ as $f(x)=(\log x)^2$.
For any prime $p$ of good reduction, let $i_p=[E(\mathbb{F}_p):\Gamma_p]$.
The following can be derived from~\cite{AGM}:
$$
\#\{p\le x~: p\nmid \Delta_E, T_p<p/(\log p)^2\}\le
\#\widetilde{{\mathcal B}}_1 + \#\widetilde{{\mathcal B}}_2 +O(x/(\log x)^2),
$$
where
\begin{align*}
\widetilde{{\mathcal B}}_1&=\{p\le x~:~p\nmid \Delta_E,\ i_p\in (x^\kappa,3x]\},\\
\widetilde{{\mathcal B}}_2&=\{p\le x~:~p\nmid m\Delta_E,\ m\mid i_p,\ \textrm{for some $m \in (g(x),x^\kappa]$}\}
\end{align*}
with $g(x)=f(x/\log x)/3$ and some real number $\kappa>0$ to be chosen later on.
Applying Lemma~\ref{lem:Tp1}, we have
\begin{align*}
\#\widetilde{{\mathcal B}}_1&
= \# \{p\le x~:~p\nmid \Delta_E,\ T_p<N_p/x^\kappa\}\\
&\le \# \{p\le x~:~p\nmid \Delta_E,\ T_p<3x^{1-\kappa}\}\ll \frac{x^{(1-\kappa)(s+2)/s}}{(1-\kappa)\log x}.
\end{align*}
For any positive integer $m$, let $\omega(m)$ and $d(m)$ denote, respectively, the number of distinct
prime divisors of $m$ and the number of positive integer divisors of $m$.
Now, $\#\widetilde{{\mathcal B}}_2$ can be estimated as in~\cite{AGM} as follows:
$$
\#\widetilde{{\mathcal B}}_2
\ll \frac{x}{\log x \cdot g(x)^{1-\alpha}}+O\left(x^{1/2}\log x
\cdot\sum_{1\le m\le x^\kappa} ma^{\omega(m)/2}d(m)\right).
$$
where $a$ is the absolute constant of~\cite[Proposition~6.7]{AGM}.
Now, using~\cite[Equation~(6.21)]{AGM} we obtain
\begin{align*}
\#\widetilde{{\mathcal B}}_2
&\ll \frac{x}{\log x \cdot g(x)^{1-\alpha}}
+O\(x^{1/2+2\kappa}(\log x)^{1+\beta} \)\\
&\ll \frac{x}{(\log x)^2}+O\(x^{1/2+2\kappa}\
(\log x)^{1+\beta}\),
\end{align*}
where
$\alpha$ is an arbitrary real number in the interval $(0,1)$ such that $2(1-\alpha)>1$, and $\beta>2$ is
some positive integer.
Moreover, we choose the real number $\kappa$ such that
$$
(1-\kappa)(s+2)/s<1 \qquad\mbox{and}\qquad \frac{1}{2}+2\kappa<1.
$$
Thus, we get
\begin{equation}
\label{eq:cond}
\frac{2}{s+2}<\kappa<\frac{1}{4}.
\end{equation}
Since $s\ge 7$, such real number $\kappa$ indeed exists.
Therefore, for any fixed real number $\kappa$ satisfying~\eqref{eq:cond}
(for example, $\kappa =11/45$) we obtain
$$
\#\{p\le x~:~p\nmid \Delta_E, T_p<p/(\log p)^2\}\ll x/(\log x)^2,
$$
which completes the proof of this lemma.
\end{proof}
Finally, the following estimate completes our proof.
\begin{lemma}
\label{lem:height Qm2}
Suppose that either $s\ge 19$ if $E$ is a non-CM curve, or $s\ge 7$ if $E$ is a CM curve. Under GRH, for a sufficiently large $x>0$, we have
$$
\hat{h}(Q_{\rm min})\le \exp\(4x(\log\log x)/\log x+O(x/\log x)\).
$$
\end{lemma}
\begin{proof}
First, we have
\begin{align*}
\prod_{p\le x}T_p
&\ge \prod_{\substack{p\le x \\ T_p\ge p/(\log p)^2}}\frac{p}{(\log p)^2} \cdot \prod_{\substack{p\le x \\ T_p< p/(\log p)^2}}T_p\\
& = \prod_{p\le x}\frac{p}{(\log p)^2} \prod_{\substack{p\le x \\ T_p< p/(\log p)^2}}\frac{T_p (\log p)^2}{p}.
\end{align*}
Using the trivial lower bound $T_p \ge 1$ and Lemma~\ref{lem:nonCM} and Lemma~\ref{lem:CM},
we derive
\begin{align*}
\prod_{p\le x}T_p
&\ge \prod_{p\le x}p \cdot \prod_{p\le x} (\log p)^{-2} \cdot \prod_{\substack{p\le x \\ T_p< p/(\log p)^2}}(\log p)^2/p\\
&\ge \(\frac{(\log x)^2}{x}\)^{O(x/(\log x)^2)}\prod_{p\le x}p \cdot \prod_{p\le x} (\log p)^{-2},
\end{align*}
where the last inequality follows from Lemma~\ref{lem:nonCM} and Lemma~\ref{lem:CM}.
Thus, using~\eqref{eq:prod Np 1}, we obtain
\begin{align*}
L_x \le \prod_{p\le x}N_p / T_p
&\le \exp\(O(x/\log x)\)\prod_{p\le x} (\log p)^2\\
&\le \exp\(2\frac{x \log\log x}{\log x}+O(x/\log x)\),
\end{align*}
where the last inequality is derived from~\eqref{eq:PNT pi} and the
trivial estimate
$$
\sum_{p\le x} \log\log p \le \pi(x)\log\log x.
$$
Therefore, the desired result follows from the bound $\hat{h}(Q_{\rm min})\ll L_x^2$.
\end{proof}
\section{Proofs of lower bounds: Theorems~\ref{thm:lower0} and~\ref{thm:lower}}
\label{sec lower}
\subsection{Proof of Theorem~\ref{thm:lower0}}
\label{pf lower0}
Now, assume that $\Gamma$ is a torsion subgroup of $E(\mathbb{Q})$, and let $Q\in E(\mathbb{Q})$ be an $x$-pseudolinearly dependent point of $\Gamma$ for a sufficiently large $x$. Let $m$ be the number of primes of bad reduction. Then, since $Q\in \Gamma_p$ for any prime $p\le x$ of good reduction, there exists a rational point $P\in \Gamma$ such that at least $(\pi(x)-m)/\#\Gamma$ primes $p\le x$ of good reduction let the point $Q-P$ become the point at infinity modulo $p$. In view of~\eqref{coordinate}, this implies that
\begin{align*}
{\mathfrak h}(Q-P) &\ge 2\log \prod_{p\le (\pi(x)-m)/\#\Gamma} p\\
& \ge \frac{2}{\#\Gamma}x/\log x+O(x/(\log x)^2),
\end{align*}
where the last inequality follows from~\eqref{eq:PNT psi} and~\eqref{eq:PNT pi}. Note that $P$ is a torsion point, then using~\cite[Chapter VIII, Theorem 9.3]{Silv} we obtain
\begin{equation}
\begin{split}
\label{eq:lower0}
\hat{h}(Q)=\hat{h}(Q)+\hat{h}(P)&=\frac{1}{2}\(\hat{h}(Q+P)+\hat{h}(Q-P)\) \\
&\ge \frac{1}{2}\hat{h}(Q-P)\ge \frac{1}{2}{\mathfrak h}(Q-P)+O(1)\\
&\ge \frac{1}{\#\Gamma}x/\log x+O(x/(\log x)^2),
\end{split}
\end{equation}
which gives the claimed lower bound for the height of the point $Q$.
\subsection{Proof of Theorem~\ref{thm:lower}}
Here, we assume that $\Gamma$ is a free subgroup of rank $s$ generated by $P_1,P_2,\ldots,P_s$.
This assumption comes from the discussions in Section~\ref{preliminary2}.
We first prove a result, which can be viewed as an effective version of Lemma~\ref{lem:Jossen} in some sense.
\begin{lemma} \label{effective}
Let $Q\in E(\mathbb{Q})\setminus \Gamma$ be a point of infinite order such that $\gen{Q} \cap \Gamma = \{O_E\}$. Then, there exists a prime $p$ of good reduction satisfying
$$
\log p \ll (\log \hat{h}(Q))^{2s+6}\log\log \hat{h}(Q)
$$
such that $Q\not\in \Gamma_p$. If furthermore assuming GRH, we even have
$$
p\ll (\log \hat{h}(Q))^{4s+12}(\log\log \hat{h}(Q))^2.
$$
\end{lemma}
\begin{proof}
Let $Q_1,Q_2,\ldots,Q_r$ be a fixed basis of the free part of $E(\mathbb{Q})$. Since the point $Q$ is of infinite order, it can be represented as
$$
Q=Q_0+m_1Q_1+m_2Q_2+\cdots+m_rQ_r,
$$
where $Q_0$ is a torsion point of $E(\mathbb{Q})$, and there is at least one $m_i\ne 0$ ($1\le i \le r$).
By~\cite[Chapter~IX, Exercise~9.8~(e)]{Silv}, we immediately have
$$
\hat{h}(Q-Q_0)\gg \max_{1\le i \le r} m_i^2.
$$
Noticing that $Q_0$ is a torsion point, as~\eqref{eq:lower0} we obtain
\begin{equation}
\label{eq:height}
\hat{h}(Q)\ge \frac{1}{2}\hat{h}(Q-Q_0)\gg \max_{1\le i \le r} m_i^2.
\end{equation}
Now, take any $m_i\ne 0$ and let $\ell$ be the smallest prime such that $\ell \nmid m_i$.
Since the number $\omega(m)$ of distinct prime factors of an integer $m\ge 2$ satisfies
$$
\omega(m) \ll \frac{\log m}{\log\log m}
$$
(because we obviously have $\omega(m)! \le m$), using the prime number theorem we get
$$
\ell \ll \log |m_i|,
$$
which together with~\eqref{eq:height} yields that
\begin{equation}
\label{smallest}
\ell \ll \log \hat{h}(Q).
\end{equation}
By the choice of $\ell$, we see that there is no point $R\in E(\mathbb{Q})$ such that $Q=\ell R$.
This implies that the number field $\mathbb{Q}(E[\ell],\ell^{-1}Q)$ is not a trivial extension of $\mathbb{Q}(E[\ell])$.
Consider the number field
$$
L=\mathbb{Q}(E[\ell],\ell^{-1}Q,\ell^{-1}P_1,\ldots, \ell^{-1}P_s),
$$
and set $K=\mathbb{Q}(E[\ell])$.
By the discussions in Section~\ref{preliminary2}, we can choose a conjugation class $C$ in the Galois group ${\rm Gal}(L/\mathbb{Q})$ such that each of its corresponding primes $p$ is unramified in $L/\mathbb{Q}$, $p$ is a prime of good reduction, every $\sigma \in C$ is the identity map when restricted to $K$, and especially each equation $\ell X=P_i$ has solution in $E(\mathbb{F}_p)$ for $1\le i\le s$ but the equation $\ell X=Q$ has no such solution, which implies that
$$
Q \not\in \Gamma_p.
$$
By Lemma~\ref{Chebotarev}, we can choose such a prime $p$ such that
\begin{equation} \label{Cheb1}
\log p \ll \log |D_L|;
\end{equation}
if under GRH, we even have
\begin{equation} \label{Cheb2}
p\ll (\log |D_L|)^2.
\end{equation}
From Lemma~\ref{Hensel} and noticing that only the primes dividing $\ell\Delta_E$ ramify in $L$, we get
\begin{equation} \label{dis L}
\log |D_L|\le n\log n + n\log (\ell\Delta_E)\ll n\log n + n\log \ell,
\end{equation}
where $n=[L:\mathbb{Q}]$. Using~\eqref{field deg}, we obtain
\begin{equation} \label{deg L}
n\le \ell^{2s+6}.
\end{equation}
Combining~\eqref{smallest}, \eqref{Cheb1}, \eqref{Cheb2}, \eqref{dis L} with~\eqref{deg L}, we unconditionally have
$$
\log p \ll (\log \hat{h}(Q))^{2s+6}\log\log \hat{h}(Q),
$$
and conditionally we have
$$
p \ll (\log \hat{h}(Q))^{4s+12}(\log\log \hat{h}(Q))^2,
$$
which concludes the proof.
\end{proof}
Now, we are ready to prove Theorem~\ref{thm:lower}.
For a sufficiently large $x$, by Proposition~\ref{independent}, any $x$-pseudolinearly dependent point $Q$ of $\Gamma$ satisfies $\gen{Q} \cap \Gamma = \{O_E\}$. Then from Lemma~\ref{effective}, there is an unconditional prime $p$ of good reduction satisfying
$$
\log p \ll (\log \hat{h}(Q))^{2s+6}\log\log \hat{h}(Q)
$$
such that $Q\not\in \Gamma_p$. Since $x<p$ by definition, we obtain
$$
\log x \ll (\log \hat{h}(Q))^{2s+6}\log\log \hat{h}(Q),
$$
which implies that
$$
\hat{h}(Q) \ge \exp \( (\log x)^{1/(2s+6)+o(1)} \).
$$
Similarly, if assuming GRH, we can obtain
$$
\hat{h}(Q) \ge \exp(x^{1/(4s+12)+o(1)}),
$$
which completes the proof.
\section{Comments}
We remark that the upper bound of Theorem~\ref{rank0} is only slightly
better than the trivial bound~\eqref{trivial}, although
the ratio between the two estimates tends to zero whenever $\# \Gamma>1$.
In Section~\ref{sec lower}, we get some partial results on the lower bound for the height of $x$-pseudolinearly dependent points. In fact, the height of such points certainly tends to infinity as $x \to +\infty$.
Indeed, let $E$ be an elliptic curve over $\mathbb{Q}$ of rank $r\ge 1$,
and let $\Gamma$ be a subgroup of $E(\mathbb{Q})$ with rank $s<r$.
We have known that for any sufficiently large $x$, there exist
infinitely many $x$-pseudolinearly dependent points of $\Gamma$.
For any $x>0$, if such points exist,
as before we can choose a point, denoted by $Q_x$, with smallest Weil
height among all these points; otherwise if there are no such points, we let $Q_x=O_E$.
Thus, we get a subset $S=\{Q_x: x>0\}$ of $E(\mathbb{Q})$, and for
any $x<y$ we have ${\mathfrak h}(Q_x)\le {\mathfrak h}(Q_y)$. By
Lemma~\ref{lem:Jossen}, we know that for any fixed point $Q\in E(\mathbb{Q})$,
it can not be an $x$-pseudolinearly dependent point of $\Gamma$ for
any sufficiently large $x$. So, $S$ is an infinite set.
Since it is well-known that there are only finitely many rational points of $E(\mathbb{Q})$
with bounded height, we obtain
$$
\lim_{x\to +\infty}{\mathfrak h}(Q_x)=+\infty,
$$
which implies that $\lim_{x\to +\infty}\hat{h}(Q_x)=+\infty$.
This immediately implies that for the point $Q_{\rm min}$ constructed in
Section~\ref{const}, its height $\hat{h}(Q_{\rm min})$ also tends to infinity as $x \to +\infty$. Moreover,
let $p_n$ denote the $n$th prime, that is $p_1=2,\ p_2=3,\ p_3=5, \ldots$.
For any $n\ge 1$, denote by $T_n$ the set of $p_n$-pseudolinearly dependent points of $\Gamma$.
Obviously, $T_{n+1}\subseteq T_n$ and ${\mathfrak h}(Q_{p_{n+1}})\ge {\mathfrak h}(Q_{p_n})$
for any $n\ge 1$. For any sufficiently large $n$, we conjecture that $T_{n+1}\subsetneq T_n$.
If furthermore one could prove that ${\mathfrak h}(Q_{p_{n+1}})>{\mathfrak h}(Q_{p_n})$ for any
sufficiently large $n$, this would lead to a lower bound of the form
$$
{\mathfrak h}(Q_x)\ge \log x + O(\log\log x),
$$
as the values of ${\mathfrak h}(Q_x)$ are logarithms of integer numbers and
there are about $x/\log x$ primes not greater than $x$.
In Lemma~\ref{effective}, if we choose $\Gamma$ as a torsion subgroup, we can also get a similar unconditional upper bound. Indeed, for a prime $p$ of good reduction, suppose that $Q\in \Gamma_p$. Then, $Q-P\equiv O_E$ modulo $p$ for some $P\in \Gamma$. According to~\eqref{coordinate}, we have $p\le \exp(0.5 {\mathfrak h}(Q-P))$. Since $P$ is a torsion point, as~\eqref{eq:lower0} we get $p\le \exp(\hat{h}(Q)+O(1))$. Thus, we can choose a prime $p$ of good reduction satisfying
$$
p \le \exp(\hat{h}(Q)+O(1))
$$
such that $Q\not \in \Gamma_p$.
Finally, we want to remark that the definition of pseudolinearly dependent point can be generalized to many settings where there exist reduction maps modulo ``primes" (which can be prime numbers, prime ideals, monic irreducible polynomials, and so on), such as number fields, function fields, curves of higher genus, Abelian varieties, and so on.
\section*{Acknowledgement}
The authors would like to thank Wojciech Gajda for
very stimulating discussions which led to the idea of
this work and also for his valuable comments on an early version of the paper. These discussions took place at
the Max Planck Institute for Mathematics, Bonn, whose
support and hospitality are gratefully acknowledged.
This work was also supported in part by the Australian Research
Council Grants~DP130100237 and~DP140100118.
{\it
|
1,116,691,500,650 | arxiv | \section{Model introduction}
To understand the way the higher organized species emerge
during evolution we consider very simple model of evolving food
chain consisting of $N$ species. In the model, each species is represented by
nucleotide composition of their DNA sequence and the substitution rates between the nucleotides. There are four possible nucleotides, A, T, G, C,
in a DNA sequence. In our model, the DNA sequence is represented, simply, by four reals, $F_A, F_T, F_G, F_C$, being the nucleotide fractions and
\begin{equation}
F_A+F_T+F_G+F_C=1.
\label{fractions}
\end{equation}
\noindent
The nucleotide fractions depend on time due to mutations and selection.
Our model is originating from the Bak-Sneppen model of co-evolution \cite{BS1} and Kimura's neutral mutation hypothesis (\cite{Motoo_Kimura1},\cite{Wen-HsiungLi1}).
According to Kimura's hypothesis, neutral mutations are responsible for molecular diversity of species.
In 1980, Kimura introduced two-parameter model \cite{Motoo_Kimura2}, \cite{Wen-HsiungLi2}, where the transitional substitution rate
(substitutions $A \leftrightarrow G$ and $C \leftrightarrow T$) is different from the transversional rate (substitutions $A \leftrightarrow T$, $G \leftrightarrow T$, $A \leftrightarrow C$, $G \leftrightarrow C$) . If we use Markov chain notation, with discrete time $t$, then the transition matrix, ${\bf M}_{\rm nucl}$,
\begin{eqnarray}
{\bf M}_{\rm nucl} &=& \left(
\begin{array}{llll}
1-uW_{A} & u~W_{AT} & u~W_{AG} & u~W_{AC} \\
u~W_{TA} & 1-uW_{T} & u~W_{TG} & u~W_{TC}\\
u~W_{GA} & u~W_{GT} & 1-uW_{G} & u~W_{GC}\\
u~W_{CA} & u~W_{CT} & u~W_{CG} & 1-uW_{C}
\end{array}
\right)\\
&=&
\left(
\begin{array}{llll}
1-u(2v+s)& uv& us & uv \\
uv & 1-u(2v+s) & uv & us\\
us & uv & 1-u(2v+s) & uv\\
uv & us & uv & 1-u(2v+s) \\
\end{array}
\right),
\label{macierz1}
\nonumber
\end{eqnarray}
\noindent
representing rates of nucleotide substitutions in the two-parameter Kimura model
fulfills the following equation
\begin{equation}
{\overrightarrow F(t+1)} = {\bf M}_{\rm nucl} {\overrightarrow F(t)}
\label{evolution}
\end{equation}
\noindent
where {\overrightarrow {F(t)}=\{$F_A(t),F_T(t),F_G(t),F_C(t)\}^T$}
denotes nucleotide fractions at time $t$, $u$ represents substitution rate and the
symbols $W_{ij}=s$ for transitions and $W_{ij}=v$ for transversions ($i,j=A,T,G,C$) represent relative
substitution probability of nucleotide $j$ by
nucleotide $i$. $W_{ij}$ satisfy the equation
\begin{equation}
\sum_{i,j=A,T,G,C} W_{ij}=1,
\end{equation}
\noindent
which in the case of the two-parameter Kimura model is converted into the following
\begin{equation}
4s+8v=1,
\label{suma}
\end{equation}
\noindent
and $W_j=\sum_{i\neq j} W_{ij}$.
Evolution described by Eq.(\ref{macierz1}) has the property that starting from some initial value of $\overrightarrow F(t_0)$ at $t=t_0$ the solution tends to an equilibrium in which $F_A=F_T=F_G=F_C=0.25$. The example of this type of behavior has been presented in Fig.\ref{fig1}. The two-parameter Kimura approximation is one of the simplest models of nucleotide substitutions. For example, in reconstructing the phylogenetic trees, one should use a more general form of the transition matrix in Eq.(\ref{macierz1}) (\cite{Wen-HsiungLi2},\cite{Lobry1},\cite{Rzhetsky},\cite{Lobry2}). This is not necessary in our model, where we need only the property that the nucleotide frequencies are evolving to their equilibrium values.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{Figure1.eps}
\end{center}
\caption{Dependence of nucleotide fractions on time in two-parameter Kimura model. Here, the initial fractions take the following values: $F_A=0.320964$, $F_T=0.246541$, $F_G=0.0252434$, $F_C=0.407252$. Besides, there has been plotted the maximum absolute deviation from the difference $\vert F_A-F_T \vert$ and
$\vert F_G-F_C \vert$ (the dashed curve).}
\label{fig1}
\end{figure}
More complicated prey-predator relations were simulated with a $5 \times 5$
Chowdhury lattice \cite{Stauffer2} with a fixed number of six food levels. Each
lower (prey) level contains twice as many possible species as the upper
(predator) level. Also this model does not contain an explicit bit-string as
genome. We now
introduced a composition vector \overrightarrow {F(t)} as above, different for
each different species, and let it evolve according to Eq.(3). Again, after
many iterations all four fractions approached 0.25. This result, as we will show below, is qualitatively different from that in the model defined below, where we observe fluctuations of nucleotide frequency, instead.
Our model consists of $N$ species and for each species we define the set of random parameters, {$F_A$, $F_T$, $F_G$, $F_C$, $u$, $s$, $v$}, which satisfy
only two equations, Eq.(\ref{fractions}) and Eq.(\ref{suma}), and we assume that $4s>8v$ to fulfill the condition that transitions ($s$) dominate transversions ($v$).
The nucleotide fractions, representing each species, change in time according to Eq.(\ref{evolution}).
The species are related according to food-chain. In the case of the nearest-neighbor relation the species $i+1$ preys on species $i$. The extension to further neighbors follows the same manner.
The food-chain has the same dynamics as in Bak-Sneppen model (BS) \cite{BS1}, i.e.,
every discrete time step $t$, we choose the species $i$ with minimum fitness $B_i$ and the chosen species is replaced by a new one together with the species linked to it with respect to food-chain relation. In the original BS model the nearest neighborhood of species $i$ is symmetrical, e.g. $\{B_{i-1},B_i,B_{i+1}\}$. The asymmetrical (directional) neighborhood applied for food-chain modeling has been discussed by Stauffer and Jan \cite{Stauffer1} and their results were qualitatively the same as in the BS model. The generalizations of food-chain onto more complex food-web structures are also available \cite{Ito}, \cite{Stauffer2}.
The new species, substituting the old ones, obtain new random values {$F_A$, $F_T$, $F_G$, $F_C$, $u$, $s$, $v$}. In our model the fitness $B_i$ of the species $i=1, 2, \ldots, N$ is represented by the parameter
\begin{equation}
B_i=1-D, \quad D=\max (\vert {F_A-F_T}\vert, \vert {F_G-F_C}\vert),
\label{selectionrule}
\end{equation}
\noindent
where $B_i \in [0,1]$ is a measure of the deviation from equilibrium
of the nucleotide numbers $F_A-F_T$ and $F_G-F_C$. Thus, the species with the smallest value of $B_i$ (largest compositional deviation from equilibrium) are eliminated
together with their right-hand neighbors with respect to food-chain. This elimination mechanism leads to self-organization. Namely, in the case of finite value of $N$
the statistically stationary distribution of the values of $B_i$ ($i=1, 2, \ldots, N$)
is achieved after finite number of time steps with the property that the selected species with the minimum value $B_{min}$ is always below some threshold value $B_c$ or it is equal to the value. The typical snapshot, at transient time, of the distribution of the values of $B_i$ is presented in Fig.\ref{fig2}.
So, if Fig.\ref{fig2} looks much the same as it had been resulted from the simulation of pure BS model,
then what are the new results in our model? In the following, we will show that
the higher value of the threshold fitness, during the evolution course, it is often the case that the winners of the food-chain competition become also species with specific nucleotide composition, which is generating long genes.
\section{Discussion of results}
We know, from Eq.(\ref{evolution}) (see also Fig.\ref{fig1}), that a single species tends to posses equilibrium nucleotide
composition, which in this simple two-parameter Kimura model means asymptotically the same nucleotide composition $F_A=F_T=F_G=F_C=0.25$.
The only distinction, which we could observe, if we had used a more general form of the substitution table, could be the resulting equilibrium nucleotide composition different from the uniform one. This would bring nothing new to the qualitative behavior of our model.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{Figure2.eps}
\end{center}
\caption{Snapshot of the distribution of the species fitness at the transient time $t=5000$. In the example, $N=200$ and the substitution rate is a random real $u=0.01*rnd$, number of the nearest-neighbors $n=1$. The horizontal line is representing the value of threshold fitness.}
\label{fig2}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{Figure3.eps}
\end{center}
\caption{Few examples of time dependence of the threshold fitness $B$ for different values of the upper bound of the applied substitution rate $u$.}
\label{fig3}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{Figure4.eps}
\end{center}
\caption{
Distribution $P(k)$ of gene length $k$ in Chromosome IV of {\it Saccharomyces cerevisiae} genome and in the case of the approximate analytic formula (Eq.(\ref{sizeofgene})), where the nucleotide fractions take the values as in Chr. IV, i.e., $F_A=0.31121$, $F_T=0.309727$, $F_G=0.190188$, $F_C=0.188875$. Parameter $k$ is representing number of codons (nucleotide triplets).}
\label{fig4}
\end{figure}
Once, in the model under consideration, nucleotide composition of species is changing according to Eq.(\ref{macierz1}), the species fitness $B_i$ depends on time. It is not the case in the BS model \cite{BS1}, where the fitness of the evolving species is constant in time unless it is extincted. Although $B_i$ depends on time, the food-chain selection rule introduces mechanism, which forbids to achieve the equilibrium nucleotide composition ($B_i=1$). Instead, there appears a threshold value of $B_c$, below which the species become extinct. In our model the threshold value depends on substitution rate $u$. The examples of this dependence for transient time of $10^9$ generations have been plotted in Fig.\ref{fig3}.
Similarly, as in BS model, the SOC phenomenon disappears if the number of nearest neighbors $n=0$. Then, all species tend to the state with $B=1$.
We will discuss the influence of threshold fitness optimization on nucleotide composition of species and, in consequence, its influence on the possible maximum length of gene in species genome.
To this aim, we assume that a gene has continuous structure (no introns) and it always starts from codon START (ATG) and ends with codon STOP (TGA, TAG or TAA). Then, the probability of generating any gene consisting of $k$ nucleotide triplets in a random genome with the fractions $F_A$, $F_T$, $F_G$, $F_C$ could be approximated by the following formulae (see also \cite{Cebrat}):
\begin{equation}
P(k)=\alpha F_A F_T F_G (2 F_A F_T F_G+F_A^2F_T ) (1-2 F_A F_T F_G-F_A^2F_T)^{k-1},
\label{sizeofgene}
\end{equation}
\noindent
where $\alpha$ is a normalization constant, which can be derived from the normalization condition
\begin{equation}
\sum_{k=1}^{k_{\rm cutoff}} P(k) = 1.
\label{normalization}
\end{equation}
\noindent
The value of $k_{\rm cutoff}$ in Eq.(\ref{normalization}) could be associated with genome size.
In Fig.\ref{fig4}, there has been shown the relation between the empirical distribution of gene length $k$ in chromosome IV of {\it Saccharomyces cerevisiae} genome and the distribution $P(k)$ in Eq.(\ref{sizeofgene}). Similar results we could obtain for other genomes.
One can observe, that the approximation in Eq.(\ref{sizeofgene}) is acceptable for small gene size, whereas it becomes wrong for large gene size. Generally, it is accepted that there is direct selective pressure on gene size for the effect. Examples of papers discussing the problem could be found \cite{WentianLi},\cite{Cebrat},\cite{proteomesize} together with analyses of rich experimental data.
The lowest frequency of gene size, $k$, in {\it Saccharomyces cerevisiae} genome is equal to $P_0 \approx 2.8 \times 10^{-5}$ (Fig.\ref{fig4}).
In many natural genomes $P_0$ takes value of the same order of magnitude, e.g., in the {\it B.burgdorferi genome}
$P_0 \approx 5.7 \times 10^{-5}$. In our model, we have assumed that for all species holds $P_0=1 \times 10^{-6}$.
We have also introduced maximum gene length, $k_{\max}$, which is the largest value of $k$ for which $P(k) \ge P_0$.
In the particular case of the same fractions of nucleotides in genome ($F_A=F_T=F_G=F_C=0.25$) the limiting value
$k=k_{\max}$ for which $P(k) \ge P_0$ is equal to $k_{\max}=225$ nucleotide triplets ($675$ nucleotides). Thus, in our model, we could expect that for the oldest species the maximum gene length $k_{\max}$ should not exceed the value of $675$ nucleotides. The reason for that is that ageing species should approach equilibrium composition (Fig.\ref{fig1}). However, surprisingly, we
found that the self-organization phenomenon enforces a state, in which the oldest species may have much longer gene sizes than in genome with nucleotide composition corresponding to equilibrium composition. Actually, there start to appear fluctuations in the number of species with very short genes and very long ones.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{Figure5.eps}
\end{center}
\caption{Time dependence of maximum gene size, $k_{\max}$, of the oldest species and the Guanine content in their genome in the evolving ecosystem when $N=500$, $u=0.1*rnd$, $n=1$. The data in the figure have been decimated.}
\label{fig5}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{Figure6.eps}
\end{center}
\caption{The same parameters as in Fig.\ref{fig5} but $n=0$.}
\label{fig6}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{Figure7.eps}
\end{center}
\caption{Maximum gene length, $k_{\max}$, versus genomic fraction of nucleotide in the oldest species. The vertical lines correspond to equilibrium genome. In our model this means $F_A=F_T=F_G=F_C=0.25$. The same parameters as in Fig.\ref{fig5} have been used.}
\label{fig7}
\end{figure}
The selection towards the species with the smallest deviation from equilibrium nucleotide composition (the largest value of $B$) implicates that the species, which survive the selection, may have specific bias in nucleotide composition, which makes possible generating long genes. In our model, we have observed abundances of G+C content in the species with long genes. During simulation run, in each time step $t$, we have collected in a file data representing age of the oldest species, the corresponding gene size $k$ and nucleotide frequency. We have observed, that the closer the species fitness $B_i$ is to the threshold fitness the older might be the species and also the species might posses longer genes in its genome. There is no such effect in the case, when $n=0$. Even if there could appear, at some early time interval, a tendency to generate longer genes, this property would have disappeared after longer evolution time of the system of $N$ species. In Fig.\ref{fig5}, we have plotted time dependence of the recorded maximum length of gene in the oldest species and the corresponding Guanine fraction. One can compare this figure with Fig.\ref{fig6}, where there are no prey-predator relation in the ecosystem ($n=0$). In the latter case, the system is ageing in accordance with the
Eq.(\ref{evolution}) and $B_i \rightarrow 1$ ($i=1, 2, \ldots, N$) and self-organization has not been observed.
The observed property of the competing species has an analogy
with the behavior of the model of evolution of evolving legal rules in Anglo-American court, introduced by Yee \cite{KentonKYee} (see Fig. 3 in his paper).
The relation between nucleotide fraction of genome and the possible maximum length of gene in such genome has been shown in the histogram in Fig.\ref{fig7}. The presented data address, solely, the oldest species. Notice, $A \approx T$ and $G \approx C$ for genomes both with short genes and long ones, whereas $A \approx T \approx G \approx C \approx 0.25$ for genomes with nucleotide composition near equilibrium understood in terms of the two-parameter Kimura model.
We should remember, that the substitution table for the two-parameter Kimura model (Eq.\ref{macierz1}) is a symmetric matrix and the observed compositional asymmetry results directly from the predator-prey self-organization phenomenon. The right-hand wings, evident in the structure in Fig.\ref{fig7}, do vanish in the case when $n=0$ in spite of the same fitness parameter in Eq.\ref{selectionrule} applied for selection.
We have not included strand structure in species genome, in our model, since it is represented only with the help of nucleotide fraction. Lobry and Sueoka, in their paper \cite{LobrySueoka}, concluded that if there are no bias in mutation and selection between two DNA strands, then it is expected $A \approx T$ and $G \approx C$ within each strand, and that the observed variation in G+C content between species is an effect of another phenomenon than, simply, asymmetric mutation pressure. Here, we have shown, that such compositional fluctuations of genome could result from ecosystem optimization - no direct selection on genes length is present in our model.
The predator-prey rule, in the model under consideration, introduces large fluctuations in nucleotide frequency in the ageing ecosystem, if it is sufficiently old. However, we have not observed this frequency phenomenon in modeling speciation on the Chowdhury lattice \cite{Stauffer2}, as we stated in the beginning. After we have introduced a small change of our model, in such a way, that new species arising in the speciation process were taken always from among the survived species, and we only slightly were modifying their nucleotide frequency by introducing $d\%$ of changes in their values, then the observed by us fluctuations ceased to exist in the limit $d \rightarrow 0$, as found in the Chowdhury model.
\section{Conclusions}
The specific result of the food-chain self-organization of the competing
species is that the oldest survivors of the competition might posses strong compositional bias in nucleotides, the abundance of G+C content. In our model, this resulting asymmetry makes possible generating long genes. There was no direct selection applied on the gene length, in the model. The fluctuation in number of species with long genes and short genes represents rather undirectional noise,
the amplitude of which is increasing while the ecosystem is ageing.
The effect ceases to exist if there is no species competition. The same is if we allow only $d\%$ changes of nucleotide frequency in the new formed species, in the limit $d \rightarrow 0$.
It could be, that the observed self-organization is an attribute of genes in genome evolution. Typically, many genes are coupled together in genome in a hierarchical dynamical structure, which resembles complex food-web structure. Some genes may be duplicated but also you can observe fusion of genes or even genomes.
\vspace*{0.2cm}
{\bf Acknowledgments}\\
We thank geneticist S. Cebrat for bringing us physicists together at a
GIACS/COST meeting, September 2005.
One of us, M.D., thanks A. Nowicka for useful discussion.
|
1,116,691,500,651 | arxiv | \section*{ACKNOWLEDGMENTS}
This work was supported by the National Climb Project ``Nonlinear
Science'' of China. We would like to thank Professor S.L. Wan for helpful
discussions.
|
1,116,691,500,652 | arxiv | \section{Introduction}
Let $M$ be an integral matrix, and let $\mathop{\mathrm{diag}}(f_1, f_2, \dots, f_{r})$ be its Smith normal form, so that $f_1, f_2, \dots, f_{r}$ are positive integers such that $f_i \mid f_j$ for all $i\leq j$.
These integers are called the {\it invariant factors} of $M$.
Computing the Smith normal form of matrices has been of interest in combinatorics.
For instance, computing the Smith normal form of the adjacency or Laplacian matrix is a standard technique used to determine the Smith group and the critical group of a graph; see \cite{alfaval0,merino,rushanan}.
The critical group of a connected graph is especially interesting since, by Kirchoff's matrix-tree theorem, its order is equal to the number of spanning trees of the graph.
The study of the invariant factors of combinatorial matrices seems to have started in \cite{N} and was soon continued in \cite{WW}.
We refer the reader to \cite{stanley} for a survey on the Smith normal forms in combinatorics for more details in the topic.
Let $G=(V,E)$ be a connected graph.
The {\it distance} $d_G(u,v)$ between the vertices $u$ and $v$ is the number of edges in a shortest path between them. The {\it distance matrix} $D(G)$ of $G$ is the matrix with rows and columns indexed by the vertices of $G$ with the $uv$-entry equal to $d_G(u,v)$.
Distance matrices were introduced by Graham and Pollack in the study of a data communication problem in \cite{GP}.
This problem involved finding appropriate addresses so that a message can move efficiently through a series of loops from its origin to its destination, choosing the best route at each switching point.
Little is known about the Smith normal forms of distance matrices.
In \cite{HW}, the Smith normal forms of the distance matrices were determined for trees, wheels, cycles, and complements of cycles and were partially determined for complete multipartite graphs.
In \cite{BK}, the Smith normal form of the distance matrices of
unicyclic graphs and of the wheel graph with trees attached to each
vertex were obtained.
It is well known that the Smith normal form of a matrix over a principal ideal domain ({\it p.i.d.}) can be computed using row and column operations.
In fact, in \cite{KB}, Kannan and Bachem found polynomial algorithms for computing the Smith normal form of an integer matrix.
An alternative way of determining the Smith normal form is as follows.
Let $\Delta_i(G)$ denote the {\it greatest common divisor} of the $i$-minors of the distance matrix $D(G)$. Then the $i$-{\it th} invariant factor $d_i$ is equal to $\Delta_i(G)/ \Delta_{i-1}(G)$, where $\Delta_0(G)=1$.
We will generalize on this method to develop the notion of distance ideals.
The paper is organized as follows.
In Section \ref{section:DefinitionDistanceIdeals}, we define distance ideals and explore their varieties, as well as their behaviour under taking induced subgraphs.
We finish this section by giving a description of the distance ideals of the complete graphs and star graphs.
In Section~\ref{section:classification}, we will give a classification of the graphs which have exactly 1 trivial distance ideal over $\mathbb{Z}$ and $\mathbb{R}$ in terms of forbidden induced subgraphs.
\section{Distance ideals}\label{section:DefinitionDistanceIdeals}
Through the paper, we will assume all graphs are connected.
Given a connected graph $G=(V,E)$ and a set of indeterminates $X_G=\{x_u \, : \, u\in V(G)\}$, let $\mathop{\mathrm{diag}}(X_G)$ denote the diagonal matrix with the indeterminates in the diagonal and zeroes elsewhere.
The {\it generalized distance matrix} $D(G,X_G)$ of $G$ is the matrix with rows and columns indexed by the vertices of $G$ defined as $\mathop{\mathrm{diag}}(X_G)+D(G)$.
Note we can recover the distance matrix from the generalized distance matrix by evaluating $X_G$ at the zero vector, that is, $D(G)=D(G,\bf{0})$.
Let $\mathcal{R}[X_G]$ be the polynomial ring over a commutative ring $\mathcal{R}$ in the variables $X_G$.
For all $i\in[n]:=\{1,..., n\}$, the $i${\it-th distance ideal} $I^\mathcal{R}_i(G,X_G)$ of $G$ is the determinantal ideal given by
\[
\langle {\rm minors}_i(D(G,X_G))\rangle\subseteq \mathcal{R}[X_G],
\]
where $n$ is the number of vertices of $G$ and ${\rm minors}_i(D(G,X_G))$ is the set of the determinants of the $i\times i$ submatrices of $D(G,X_G)$.
Computing the Gr\"obner basis of the distance ideals gives us a compact description of these ideals.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=.7]
\tikzstyle{every node}=[minimum width=0pt, inner sep=2pt, circle]
\draw (0,0) node[draw] (0) {\tiny $0$};
\draw (30:1.5) node[draw] (1) {\tiny $2$};
\draw (150:1.5) node[draw] (2) {\tiny $1$};
\draw (270:1.5) node[draw] (3) {\tiny $3$};
\draw (0) -- (1);
\draw (0) -- (2);
\draw (0) -- (3);
\end{tikzpicture}
\end{center}
\caption{Claw graph $K_{1,3}$.}
\label{figure:lambda}
\end{figure}
\begin{example}
The generalized distance matrix of the claw graph $K_{1,3}$ is the following.
\[
D(K_{1,3},X_{K_{1,3}})=
\begin{bmatrix}
x_0 & 1 & 1 & 1\\
1 & x_1 & 2 & 2\\
1 & 2 & x_2 & 2\\
1 & 2 & 2 & x_3\\
\end{bmatrix}
\]
For this example, we will consider the distance ideals over $\mathbb{Z}[X_{K_{1,3}}]$.
It is obvious that $I^\mathbb{Z}_1(K_{1,3},X_{K_{1,3}})=\langle 1 \rangle$, since the $(0,1)$-entry of the generalized distance matrix is equal to 1.
The Gr\"obner basis of the second distance ideal $I^\mathbb{Z}_2(K_{1,3},X_{K_{1,3}})$ is
\[
\langle 2x_0 - 1, x_1 - 2, x_2 - 2, x_3 - 2\rangle.
\]
The Gr\"obner basis of $I^\mathbb{Z}_3(K_{1,3},X_{K_{1,3}})$ is equal to
\begin{eqnarray*}
\langle 2x_0x_1 - 4x_0 - x_1 + 2, 2x_0x_2 - 4x_0 - x_2 + 2, 2x_0x_3 - 4x_0 - x_3 + 2, \\
x_1x_2 - 2x_1 - 2x_2 + 4, x_1x_3 - 2x_1 - 2x_3 + 4, x_2x_3 - 2x_2 - 2x_3 + 4\rangle.
\end{eqnarray*}
Finally, the Gr\"obner basis of $I^\mathbb{Z}_4(K_{1,3},X_{K_{1,3}})$ is
\[
\langle x_0x_1x_2x_3 - 4x_0x_1 - 4x_0x_2 - 4x_0x_3 + 16x_0 - x_1x_2 - x_1x_3 + 4x_1 - x_2x_3 + 4x_2 + 4x_3 - 12\rangle.
\]
\end{example}
At the end of this section, we will compute the distance ideals of the star graphs, which is a family of graphs containing the claw.
An ideal is said to be {\it unit} or {\it trivial} if it is equal to $\langle1\rangle$.
Let $\Phi_\mathcal{R}(G)$ denote the maximum integer $i$ for which $I^\mathcal{R}_i(G,X_G)$ is trivial.
Note that every graph with at least one non-loop edge has at least one trivial distance ideal.
It has been of interest to study graphs whose Smith normal form of its associated matrix (say Laplacian matrix or adjacency matrix) has a particular number of invariant factors equal to 1.
This is because this number is related to the cyclicity of the group obtained from cokernel of the matrix.
Let $\phi_\mathcal{R}(G)$ denote the number of invariant factors over a p.i.d. $\mathcal{R}$ of the distance matrix of $G$ equal to 1.
The following observation will give us the relation between the Smith normal form of the distance matrix and the distance ideals over a p.i.d.
\begin{proposition}\label{teo:eval1}
Let $\mathcal{R}$ be a p.i.d. and ${\bf d}\in \mathcal{R}^{V(G)}$.
If $f_1\mid\cdots\mid f_{r}$ are the invariant factors of the matrix $D(G,{\bf d})$ over $\mathcal{R}$, then
\[
I^{\mathcal{R}}_i(G,{\bf d})=\left\langle \prod_{j=1}^{i} f_j \right\rangle\text{ for all }1\leq i\leq r.
\]
\end{proposition}
\begin{proof}
Let $\Delta^\mathcal{R}_i$ denote the {\it g.c.d.} over $\mathcal{R}$ of ${\rm minors}_i(D(G,{\bf d}))$.
We have $\langle {\rm minors}_i(D(G,{\bf d}))\rangle = \langle \Delta^\mathcal{R}_i \rangle$.
Since $f_i=\Delta^\mathcal{R}_i/\Delta^\mathcal{R}_{i-1}$ with $\Delta^\mathcal{R}_0=1$, then $I^{\mathcal{R}}_i(G,{\bf d})=\left\langle \prod_{j=1}^{i} f_j \right\rangle$.
\end{proof}
In this way,
to recover the Smith normal form of $D(G)$ from the distance ideals, we just need to evaluate them $X_G$ at ${\bf 0}$.
Moreover,
if the $i$-th invariant factor, computed over $\mathcal{R}$, of $D(G,{\bf d})$ is not equal to $1$, then the ideal $I^\mathcal{R}_i(D, X_D)$ is not trivial.
Another consequence of Proposition~\ref{teo:eval1} is the following.
\begin{corollary}
For any graph $G$, $\Phi_\mathcal{R}(G)\leq \phi_\mathcal{R}(G)$.
In particular, for any positive integer $k$, the family of graphs with $\Phi_\mathcal{R}(G)\leq k$ contains the family of graphs with $\phi_\mathcal{R}(G)\leq k$.
\end{corollary}
\begin{proof}
The inequality follows by observing that if the distance ideal $I^\mathcal{R}_i(G,X_G)$ is trivial, then $\Delta^\mathcal{R}_i(D(G))=1$, and thus the $i$-th invariant factor is equal to $1$.
Now, let $G$ be a graph with $\phi_\mathcal{R}(G)\leq k$.
Then by previous equation, $\Phi_\mathcal{R}(G)\leq\phi_\mathcal{R}(G)\leq k$.
\end{proof}
In Section~\ref{section:classification}, we will give some characterizations of graphs with 1 trivial distance ideals.
Meanwhile, it is not difficult to see that the family of graphs with $\phi_\mathcal{R}(G)\leq 1$ consists only of the graph with one vertex.
In fact, there is no graph with $\phi_\mathcal{R}(G)=1$.
\subsection{Varieties of distance ideals}
Let $I\subseteq \mathcal{R}[X]$ be an ideal in $\mathcal{R}[X]$.
The variety associated to the ideal $I$ is
\[
V_\mathcal{R}(I)=\left\{ {\bf a}\in \mathcal{R}^n : g({\bf a}) = 0 \text{ for all } g\in I \right\}.
\]
Note that if $I$ is trivial, then $V_\mathcal{R}(I)=\emptyset$.
Let $M$ be an $(i+1)\times (i+1)$-matrix with entries in $\mathcal{R}[X_G]$.
We have
\[
\det M = \sum_{j=1}^{i+1}M_{j,1}\det M[j;1],
\]
where $M_{j,1}$ denotes the $(j,1)$ entry of the matrix and $M[j;1]$ denotes the submatrix of $M$ whose $j$-th row and $1$st column were deleted.
More general, $M[\mathcal{I;J}]$ denote the sumbratix of a matrix $M$ generated by eliminating the rows and columns with indices in $\mathcal{I}$ and $\mathcal{J}$, respectively.
For simplicity, when $\mathcal{I=J}$, we just write $M[\mathcal{I}]$.
This gives that $I^\mathcal{R}_{i+1}(G,X_G)\subseteq I^\mathcal{R}_i(G,X_G)$.
Thus, distance ideals satisfy the condition that
\[
\langle 1\rangle \supseteq I^\mathcal{R}_1(G,X_G) \supseteq \cdots \supseteq I^\mathcal{R}_n(G,X_G) \supseteq \langle 0\rangle.
\]
Therefore
\[
\emptyset=V_\mathcal{R}(\langle 1\rangle) \subseteq V_\mathcal{R}(I^\mathcal{R}_1(G,X_G)) \subseteq \cdots \subseteq V_\mathcal{R}(I^\mathcal{R}_n(G,X_G)) \subseteq V_\mathcal{R}(\langle 0\rangle)=\mathcal{R}^n.
\]
If $V_\mathcal{R}(I^\mathcal{R}_k(G,X_G))\neq\emptyset$ for some $k$, then there exists ${\bf a}\in\mathcal{R}^n$ such that, for all $t \geq k$, $I^\mathcal{R}_{t}(G,{\bf a})=\langle 0\rangle$; that is, all $t$-minors of $D(G,{\bf a})$ have determinant equal to 0.
In particular, these varieties can be regarded as a generalization of the distance spectra of $G$.
Distance spectra of graphs have been widely studied; see for example the recent surveys \cite{AH,SI}.
Let $I^\mathcal{R}_i(G,\lambda)$ denote the $i$-{\it th} distance ideal where each $x_i=\lambda$ for all $i\in[n]$.
Therefore, we have that $I^\mathcal{R}_n(G,-\lambda)=\langle \det(-\lambda {\sf I}_n + D(G))\rangle$, and the variety of this ideal is the negative of the distance spectra of $G$.
In particular, if $\lambda$ is a graph eigenvalue of the distance matrix, then $I^\mathcal{R}_n(G,-\lambda)=\langle 0\rangle$.
\begin{example}
For the complete graph $K_3$ with 3 vertices, the Gr\"obner basis of the second distance ideal $I^\mathbb{R}_2(K_3,X_{K_3})$ is equal to $\langle x_0 - 1, x_1 - 1, x_2 - 1\rangle$, and the third distance ideal $I^\mathbb{R}_3(K_3,X_{K_3})$ is equal to $\langle x_0 x_1 x_2 - x_0 - x_1 - x_2 + 2\rangle$.
The variety of $I^\mathbb{R}_2(K_3,X_{K_3})$ consists only of the vector $(1,1,1)$, but the variety of $I^\mathbb{R}_3(K_3,X_{K_3})$ is more interesting; see Figure~\ref{fig:varietyK3}.
By evaluating, we have that $I^\mathbb{R}_2(K_3,-\lambda)=\langle \lambda+1\rangle$, whose variety consists only of $-1$, and the ideal $I^\mathbb{R}_3(K_3,-\lambda)=\langle \lambda^3-3\lambda-2\rangle$ has variety $V_\mathbb{R}(I^\mathbb{R}_3(K_3,\lambda))=\{2,-1\}$.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.5]{K3.jpg}
\end{center}
\caption{Partial view of the variety of $I^\mathbb{R}_2(K_3,X_{K_3})$ in $\mathbb{R}^3$.}
\label{fig:varietyK3}
\end{figure}
\end{example}
\subsection{Distance ideals of induced subgraphs}
In general, distance ideals are not monotone under taking induced subgraphs.
A counterexample can be constructed, for example, from $P_5$ considered as induced subgraph of $C_6$, since the distance of the leaves of $P_5$ in $C_6$ is 2.
However, we have the following result:
\begin{lemma}\label{lemma:inducemonotone}
Let $H$ be an induced subgraph of $G$ such that
for every pair of vertices $v_i,v_j$ in $V(H)$, there is a shortest path from $v_i$ to $v_j$ in $G$ which lies entirely in $H$.
Then $I^\mathcal{R}_i(H,X_H)\subseteq I^\mathcal{R}_i(G,X_G)$ for all $i\leq |V(H)|$ and $\Phi_\mathcal{R}(H)\leq \Phi_\mathcal{R}(G)$.
\end{lemma}
\begin{proof}
Since any $i\times i$ submatrix of $D(H,X_H)$ is an $i\times i$ submatrix of $D(G,X_G)$, we have $I^\mathcal{R}_i(H,X_H)\subseteq I^\mathcal{R}_i(G,X_G)$ for all $i\leq |V(H)|$.
Thus if $I^\mathcal{R}_i(H,X_H)$ is trivial for some $i$, then $I^\mathcal{R}_i(G,X_G)$ is trivial.
\end{proof}
In particular we have the following.
\begin{lemma}\label{lemma:distance2inducedmonotone}
Let $H$ be an induced subgraph of $G$ with diameter is 2, that is the distance between any pair of vertices in $H$ is at most 2.
Then $I^\mathcal{R}_i(H,X_H)\subseteq I^\mathcal{R}_i(G,X_G)$ for all $i\leq |V(H)|$.
\end{lemma}
A related family of graphs, defined in \cite{H77}, are distance-hereditary graphs.
A graph $G$ is {\it distance-hereditary} if for each connected induced subgraph $H$ of $G$ and every pair $u$ and $v$ of vertices in $H$, $d_H(u,v)=d_G(u,v)$.
\begin{proposition}
Let $G$ be a distance hereditary graph.
If $H$ is a connected induced subgraph of $G$, then $I^\mathcal{R}_i(H,X_H)\subseteq I^\mathcal{R}_i(G,X_G)$ for all $i\leq |V(H)|$, and $\Phi_\mathcal{R}(H)\leq \Phi_\mathcal{R}(G)$.
\end{proposition}
There are other interesting examples not considered in Lemma~\ref{lemma:inducemonotone}.
\begin{example}\label{example:P4inducemonotone}
Let $P_4$ be the path with $V(P_4)=\{v_1,v_2,v_3,v_4\}$ and $E(P_4)=\{v_1v_2, v_2v_3, v_3v_4\}$.
Let $G$ be a graph containing $P_4$ as induced subgraph.
The only way to reduce the distance between any two vertices in $P_4$ is that $G$ has a vertex adjacent to $v_1$ and $v_4$.
Assume $u\in V(G)$ such that $u$ is adjacent at least with $v_1$ and $v_4$. Then $D(G,X_G)$ has the following submatrix
\[
M=D(G,X_G)[V(P_4)\cup \{u\},V(P_4)\cup \{u\}]=
\begin{bmatrix}
x_1 & 1 & 2 & 2 & 1\\
1 & x_2 & 1 & 2 & a\\
2 & 1 & x_3 & 1 & b\\
2 & 2 & 1 & x_4 & 1\\
1 & a & b & 1 & u
\end{bmatrix}
\]
Note that since $\det(M[\{v_2,v_4\},\{v_1,v_3\}])=-1$, we have that $I^\mathcal{R}_2(G,X_{G})$ is trivial.
Therefore, $P_4$ and any graph containing $P_4$ as an induced subgraph have trivial second distance ideal.
\end{example}
\subsection{Distance ideals of complete graphs and star graphs}
Another interpretation of the distance matrix is the following.
Given a connected graph $G$, the \textit{complete multigraph} $\mathcal{K}(G)$ is a multigraph whose underlying simple graph is a complete graph with $V(G)$ as vertex set, and the number of edges between two vertices $u$ and $v$ is $d_G(u,v)$.
Note that the distance matrix of $G$ is equal to the adjacency matrix of the complete multigraph $\mathcal{K}(G)$.
The converse is not always possible. That is, for an arbitrary complete multigraph, it is not always possible to find a graph whose distance matrix is equal to the adjacency matrix of this complete multigraph.
The torsion part of the cokernel of the adjacency matrix of a graph $G$ is known as the {\it Smith group} of $G$ and is denoted $S(G)$; see \cite{rushanan}.
Another interesting group is the {\it critical group} which is computed through the Smith normal form of the Laplacian matrix of $G$; see \cite{merino}.
In this way, by computing the Smith normal form of the distance matrix of a graph $G$, we are also computing the Smith group of $\mathcal{K}(G)$.
Furthermore, the critical ideals of a complete multigraph $\mathcal{K}(G)$ coincide with the distance ideals of $G$ evaluated at $-x_v$
for all $v\in V(G)$.
The {\it generalized Laplacian matrix} $L(G,X_G)$ of $G$ is the matrix $\mathop{\mathrm{diag}}(X_G)-A(G)$, where $A(G)$ is the adjacency matrix of $G$.
The {\it critical ideals} of $G$ are the ideals $\langle minors_i(L(G,X_G)) \rangle$ for $i\in [n]$.
These ideals were defined in \cite{CV} and further studied in \cite{alfacorrval,AVV,alfalin,AV,AV1}, from which our study was originally inspired.
We finish this section by giving a description of the distance ideals of the complete graphs and the star graphs.
In what follows $\mathcal{R}$ will be a commutative ring containing the integers.
The only case when $G$ and $\mathcal{K}(G)$ are the same is when $G$ is the complete graph.
Therefore for this case, distance ideals and critical ideals are similar.
Since the description of the distance ideals of the complete graph will be used later, we give this description.
\begin{remark}
In the following, we are going to consider $\prod_{\emptyset}=1$.
\end{remark}
\begin{theorem}\cite[Proposition 3.15 and Theorem 3.16]{CV}\label{distanceidealscompletegraphs}
The $i$-th distance ideal of the complete graph $K_n$ with $n$ vertices is generated by
\[
\begin{cases}
\prod_{j=1}^n(x_j-1) + \sum_{k=1}^n\prod_{j\neq k}(x_j-1) & \text{if } i = n,\\
\left\{\prod_{j\in \mathcal{I}}(x_j-1) : \mathcal{I} \subset [n] \text{ and } |\mathcal{I}|=i-1\right\} & \text{if } i < n.
\end{cases}
\]
\end{theorem}
Following Proposition~\ref{teo:eval1}, by evaluating the distance ideals at $x_v=0$ for each $v\in V$, we obtain the Smith normal form of distance matrix over the integers of the complete graph.
\begin{corollary}
The Smith normal form of the distance matrix of the complete graph with $n$ vertices is ${\sf I}_{n-1}\oplus (n-1)$.
\end{corollary}
\begin{proof}
After the evaluation, we have $\Delta_i=1$, for $i\in [n-1]$.
And $\Delta_n=|(-1)^n+n(-1)^{n-1}|=n-1$.
\end{proof}
Furthermore, the Smith normal form of other variants of the distance matrix can be computed from Theorem~\ref{distanceidealscompletegraphs}.
Let $tr(u)$ denote {\it transmission} of a vertex $u$, which is defined as $\sum_{v\in V}d_G(u,v)$.
The distance Laplacian matrix is defined as $-D(G,X_G)|_{x_u=-tr(u)}$.
Thus by evaluating the distance ideals at $x_v=-n+1$ for each $v\in V$ we can obtain the Smith normal form of distance Laplacian matrix of the complete graph.
As explained before, this case coincides with the invariant factors of the critical group of the complete graph.
\begin{corollary}
The Smith normal form of the distance Laplacian matrix of the complete graph with $n$ vertices is $1\oplus n{\sf I}_{n-2}\oplus 0$.
\end{corollary}
Now, let us compute the distance ideals of the star graphs.
For this, we first give a more general result than Theorem~\ref{distanceidealscompletegraphs}.
\begin{proposition}\label{propo:detcompletemultipartite}
Let $M_n(m)=\mathop{\mathrm{diag}}(X_n)-m{\sf I}_n+m{\sf J}_n$.
Then
\[
\det(M_n(m))=\prod_{i=1}^n(x_i-m)+m\sum_{i=1}^n\prod_{j\neq i}(x_j-m)
\]
\end{proposition}
\begin{proof}
For $n=2$, the result follows since $(x_1-m)(x-2-m)+m(x_1-m+x_2-m)=x_1x_2-m^2$.
Assume $n\geq 2$.
For simplicity, $M_n$ will denote $M_n(m)$.
\begin{eqnarray*}
\det(M_{n+1}) & = & x_{n+1}\cdot\det(M_n) + \sum_{i=1}^n(-1)^{(n+1)+i+(n-i)}\cdot m\cdot\det(M_n|_{x_i=m}) \\
& = & x_{n+1}\left( \prod_{i=1}^n(x_i-m)+m\sum_{i=1}^n\prod_{j\neq i}(x_j-m)\right)\\
& & -m\sum_{i=1}^n\left.\left( \prod_{k=1}^n(x_k-m)+m\sum_{k=1}^n\prod_{j\neq k}(x_j-m)\right)\right|_{x_i=m}\\
& = & x_{n+1} \prod_{i=1}^n(x_i-m) + m\cdot(x_{n+1}-m)\sum_{i=1}^n\prod_{j\neq i}(x_j-m)\\
& = & \prod_{i=1}^{n+1}(x_i-m)+m\sum_{i=1}^{n+1}\prod_{j\neq i}(x_j-m)
\end{eqnarray*}
\end{proof}
Thus we have the following result.
\begin{proposition}\label{prop:detgencompletegraph}
Let $M_n(m)=\mathop{\mathrm{diag}}(X_n)-m{\sf I}_n+m{\sf J}_n$.
Let
\[
A_k=\left\{ m\prod_{i\in \mathcal{I}}(x_i-m) : \mathcal{I}\subset [n] \text{ and } |\mathcal{I}| = k-1\right\}
\]
and
\[
B_k=\left\{ \prod_{i\in \mathcal{I}}(x_i-m)-m\sum_{i\in \mathcal{I}}\prod_{j\in \mathcal{I}\setminus{i}}(x_j-m) : \mathcal{I}\subset [n] \text{ and } |\mathcal{I}| = k \right\}.
\]
Then, for $k\in [n-1]$, $\left\langle {\rm minors}_k(M_n(m)) \right\rangle$ is equal to $\langle A_k\cup B_k\rangle$.
\end{proposition}
\begin{proof}
Let $M$ be a $k\times k$ submatrix of $M_n(m)$.
Then, there exist subsets $\mathcal{I}$ and $\mathcal{J}$ of $[n]$ with $\mathcal{J}\subseteq \mathcal{I}$ and $|\mathcal{I}|=k$ such that $M$ is equivalent to $M_n(m)[\mathcal{I}]|_{\{x_j=m \text{ for all } j \in \mathcal{J}\}}$.
If $|\mathcal{J}|\geq 2$, then $\det(M)=0$.
If $|\mathcal{J}|=1$, then, by Proposition~\ref{propo:detcompletemultipartite}, $\det(M)=\pm m\prod_{i\in \mathcal{I}\setminus \mathcal{J}}(x_i-m)$.
And if $|\mathcal{J}|=0$, then, by Proposition~\ref{propo:detcompletemultipartite}, $\det(M)=\prod_{i\in \mathcal{I}}(x_i-m)-m\sum_{i\in \mathcal{I}}\prod_{j\in \mathcal{I}\setminus{i}}(x_j-m)$.
Thus, we have one containment.
The other one follows since by an appropriate selection of the indices $\mathcal{I}$ and $\mathcal{J}$, we can obtain any element in $A_k\cup B_k$ as the determinant of $M_n(m)[\mathcal{I},\mathcal{J}]$.
\end{proof}
\begin{theorem}\label{teo:detK1m}
Let $m\geq 1$ and
\[
M(m)=
\begin{bmatrix}
\mathop{\mathrm{diag}}(X_m)-2{\sf I}_m+2{\sf J}_m & {\sf J}_{m,1}\\
{\sf J}_{1,m} & y\\
\end{bmatrix}.
\]
Then, for $i\in [m]$, $\det(M(m)[m+1,i])$ is equal to
\begin{equation}
(-1)^{m-i}\prod\limits^m_{\substack{j=1\\ j\neq i}}(x_j-2)
\end{equation}
And, $\det(M(m))$ is equal to
\begin{equation}
y\prod\limits_{i=1}^m(x_i-2) + (2y-1)\sum\limits_{i=1}^m\prod\limits^m_{\substack{j=1\\ j\neq i}}(x_j-2)
\end{equation}
\end{theorem}
\begin{proof}
For simplicity, let $N$ denote $\mathop{\mathrm{diag}}(X_m)-2{\sf I}_m+2{\sf J}_m$.
Since $(M(m)[m+1,i])[j,m]$ is equivalent to $N[j]|_{x_i=2}$, up to $|i-j|-1$ column switchings when $|i-j|\geq 2$, then
\[
\det(M(m)[m+1,i])=\sum_{j=1}^m(-1)^{m+j}\theta(i,j)\det N[j]|_{x_i=2},
\]
where
\[
\theta(i,j)=
\begin{cases}
(-1)^{|i-j|-1} & \text{ if }|i-j|\geq 2\\
1 & \text{otherwise.}
\end{cases}
\]
From which follows that $\det(M(m)[m+1,i])$ is equal to
\[
(-1)^{m-i-1}\sum_{j=1}^m\delta(i,j)\det N[j]|_{x_i=2},
\]
where
\[
\delta(i,j)=
\begin{cases}
-1 & \text{ if }i=j\\
1 & \text{otherwise.}
\end{cases}
\]
By Proposition~\ref{propo:detcompletemultipartite}, $\det(M(m)[m+1,i])$ is equal to
\[
(-1)^{m-i-1}\sum_{j=1}^m\delta(i,j) \left[ \prod_{\substack{k=1\\ k\neq j}}^m(x_k-2)+2\sum_{\substack{k=1\\ k\neq j}}^m\prod_{\substack{l\neq k\\ l\neq j}}(x_l-2) \right]_{x_i=2},
\]
which is also equal to
\[
(-1)^{m-i-1}\left(-\prod_{\substack{k=1\\ k\neq i}}^m(x_k-2) + 2\sum_{j=1}^m\delta(i,j) \left[ \sum_{\substack{k=1\\ k\neq j}}^m\prod_{\substack{l\neq k\\ l\neq j}}(x_l-2) \right]_{x_i=2}\right).
\]
The result now follows from the following claim.
\begin{claim}
\[
\sum_{j=1}^m\delta(i,j) \left[ \sum_{\substack{k=1\\ k\neq j}}^m\prod_{\substack{l\neq k\\ l\neq j}}(x_l-2) \right]_{x_i=2}=0.
\]
\end{claim}
\begin{proof}
For a fixed $i$, if $j\neq i$, then $
\left[ \sum_{\substack{k=1\\ k\neq j}}^m\prod_{\substack{l\neq k\\ l\neq j}}(x_l-2) \right]_{x_i=2}
=
\prod_{\substack{l\neq i\\ l\neq j}}(x_l-2)$.
From this, it follows that
\[
\sum_{j=1}^m\delta(i,j) \left[ \sum_{\substack{k=1\\ k\neq j}}^m\prod_{\substack{l\neq k\\ l\neq j}}(x_l-2) \right]_{x_i=2}
=
-\sum_{\substack{k=1\\ k\neq i}}^m\prod_{\substack{l\neq k\\ l\neq i}}(x_l-2)
+
\sum_{\substack{j=1\\ j\neq i}}^m\prod_{\substack{l\neq i\\ l\neq j}}(x_l-2)
=
0.
\]
\end{proof}
Finally,
\begin{eqnarray*}
\det M(m) & = & y\det N + \sum_{i=1}^m(-1)^{m+1+i}\det(M(m)[m+1,i])\\
& = & y\left( \prod_{i=1}^m(x_i-2)+2\sum_{i=1}^m\prod_{j\neq i}(x_j-2) \right) - \sum_{i=1}^m\prod\limits^m_{\substack{j=1\\ j\neq i}}(x_j-2)\\
& = & y\prod\limits_{i=1}^m(x_i-2)+(2y-1)\sum\limits_{i=1}^m\prod\limits^m_{\substack{j=1\\ j\neq i}}(x_j-2).
\end{eqnarray*}
\end{proof}
\begin{theorem}
Let
\[
C_k=\left\{ \prod\limits_{i\in \mathcal{I}}(x_i-2) : \mathcal{I}\subset [m] \text{ and } |\mathcal{I}| = k-1\right\}
\]
and
\[
D_k=\left\{ (2y-1)\prod\limits_{i\in \mathcal{I}}(x_i-2) : \mathcal{I}\subset [m] \text{ and } |\mathcal{I}| = k-2 \right\}.
\]
For $k \in [n-1]$, the $k$-th distance ideal of the star graph $K_{m,1}$ is equal to $\langle C_k \cup D_k\rangle$.
\end{theorem}
\begin{proof}
Let $M(m)$ be a matrix defined as in Theorem~\ref{teo:detK1m}, and $N=M(m)[\mathcal{I},\mathcal{J}]$ with $|\mathcal{I}|=|\mathcal{J}|=k$.
There are 3 possible cases:
\begin{enumerate}[a.]
\item Both sets $\mathcal{I}$ and $\mathcal{J}$ contain $m+1$,
\item Only one of the sets $\mathcal{I}$ or $\mathcal{J}$ contains $m+1$, and
\item Neither $\mathcal{I}$ nor $\mathcal{J}$ contains $m+1$.
\end{enumerate}
In case (a), $N$ is equivalent to $M(m)[\mathcal{I}]|_{\{x_i = 2 \,:\, i \in \mathcal{I}\setminus\mathcal{J} \}}$.
Note that if $|\mathcal{I}\setminus\mathcal{J}|\geq 2$, then $\det(N)=0$.
If $|\mathcal{I}\setminus\mathcal{J}|=1$, then by adequately applying Equation 2 of Theorem~\ref{teo:detK1m} we obtain
\begin{eqnarray*}
\det(N) & = & \left[y\prod_{i\in \mathcal{I}\setminus\{ m+1\}}(x_i-2)+(2y-1)\sum_{i\in \mathcal{I}\setminus \{m+1\}}\prod_{j\in\mathcal{I}\setminus \{i,m+1\}}(x_j-2)\right]_{\{x_i = 2 \,:\, i \in \mathcal{I}\setminus\mathcal{J} \}}\\
& = & (2y-1)\prod_{ j\in\mathcal{I}\cap\mathcal{J}\setminus\{ m+1 \} }(x_j-2) \in D_k.
\end{eqnarray*}
If $\mathcal{I}=\mathcal{J}$, then
by applying Equation 2 of Theorem~\ref{teo:detK1m} we obtain
\begin{eqnarray*}
\det(N) & = & y\prod_{i\in \mathcal{I}\setminus\{ m+1\}}(x_i-2)+(2y-1)\sum_{i\in \mathcal{I}\setminus \{m+1\}}\prod_{j\in\mathcal{I}\setminus \{i,m+1\}}(x_j-2),
\end{eqnarray*}
which is in $\langle C_k \cup D_k\rangle$.
In case (b), let us assume, without loss of generality, $m+1\in \mathcal{I}$.
We have $N$ is equivalent to $M(m)[\mathcal{I}]|_{\{x_i = 2 \,:\, i \in \mathcal{J} \setminus \mathcal{I} \}}$.
Note that if $|\mathcal{J} \setminus \mathcal{I}|\geq 2$, then $\det(N)=0$, and $|\mathcal{J} \setminus \mathcal{I}|\neq 0$ since otherwise $m+1$ would be in $\mathcal{J}$.
Thus $|\mathcal{J} \setminus \mathcal{I}|=1$, and
by applying Equation 1 of Theorem~\ref{teo:detK1m} we obtain that $\det(N)$ is equal, up to sign, to $\prod_{i\in\mathcal{J} \cap \mathcal{I}}(x_i-2)$, which is in $C_k$.
Finally, in case (c), we have that $\det(N)$ is in $A_k$ or $B_k$ of Proposition~\ref{prop:detgencompletegraph}.
The result now follows since $\langle A_k\cup B_k\rangle\subset \langle C_k\rangle$.
The other containment can be derived from cases (a) and (b).
\end{proof}
As in the case of complete graphs, this description could be used to give the Smith normal form of the distance matrix and distance Laplacian matrix of the star graphs over the integers.
\begin{corollary}
The Smith normal form of the distance matrix of the star graph with $m$ leaves is ${\sf I}_{2}\oplus 2 {\sf I}_{m-2}\oplus 2m$.
\end{corollary}
\begin{proof}
After evaluating the distance ideals at $X_G={\bf 0}$, we have $\Delta_i=1$ for $i\in [2]$; $\Delta_i=2^{i-2}$ for $i\in \{3,\dots,m\}$; and $\Delta_{m+1}=2^{m-1}m$.
\end{proof}
\begin{corollary}
The Smith normal form of the distance Laplacian matrix of the star graph with $m$ leaves is $1\oplus (2m+1){\sf I}_{m-1}\oplus 0$.
\end{corollary}
\begin{proof}
Since the distance Laplacian matrix is equivalent to the generalized distance matrix evaluated at minus the transmissions, after evaluating the distance ideals at $x_i=-(2m-1)$ for $i\in[m]$ and $y=-m$, we obtain $\Delta_1=1$; $\Delta_i=(2m+1)^{i-1}$ for $i\in\{2,\dots,m\}$; and $\Delta_{m+1}=0$, from which the invariant factors can be easily obtained.
\end{proof}
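Both corollaries are easy to confirm for small stars; the following SymPy snippet (our own verification, in the same spirit as the code in the Appendix) recomputes the two Smith normal forms directly.
\begin{verbatim}
from sympy import Matrix, ZZ, diag
from sympy.matrices.normalforms import smith_normal_form

def star_distance(m):
    # distance matrix of K_{m,1}: vertices 0..m-1 are the leaves, m is the center
    return Matrix(m + 1, m + 1,
                  lambda i, j: 0 if i == j else (1 if m in (i, j) else 2))

for m in (2, 3, 4):
    D = star_distance(m)
    Tr = diag(*[sum(D.row(i)) for i in range(m + 1)])
    print(smith_normal_form(D, domain=ZZ))       # I_2 (+) 2 I_{m-2} (+) 2m
    print(smith_normal_form(Tr - D, domain=ZZ))  # 1 (+) (2m+1) I_{m-1} (+) 0
\end{verbatim}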
\section{Graphs with at most one trivial distance ideal}\label{section:classification}
Despite the fact that distance ideals are, in general, not monotone under taking induced subgraphs, we will be able to classify the graphs which have exactly 1 trivial distance ideal over $\mathbb{Z}$ and $\mathbb{R}$ in terms of forbidden induced subgraphs.
Let $\Lambda^{\mathcal{R}}_k$ denote the family of graphs with at most $k$ trivial distance ideals over $\mathcal{R}$.
A graph $G$ is {\it forbidden} for $\Lambda^{\mathcal{R}}_k$ if the $(k+1)$-th distance ideal, over ${\mathcal{R}}$, of $G$ is trivial.
The set of forbidden graphs for $\Lambda^{\mathcal{R}}_k$ will be denoted by ${\sf Forb}^{\mathcal{R}}_k$.
In addition, a graph $G\in {\sf Forb}^{\mathcal{R}}_k$ is {\it minimal} if no proper induced subgraph of $G$ is in ${\sf Forb}^{\mathcal{R}}_k$, and for any graph $H$ containing $G$ as an induced subgraph, $H\in{\sf Forb}^{\mathcal{R}}_k$.
First we consider the case over $\mathbb{Z}$.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{c@{\extracolsep{10mm}}c@{\extracolsep{10mm}}c@{\extracolsep{10mm}}c@{\extracolsep{10mm}}c}
\begin{tikzpicture}[scale=.7]
\tikzstyle{every node}=[minimum width=0pt, inner sep=2pt, circle]
\draw (126+36:1) node (v1) [draw] {};
\draw (198+36:1) node (v2) [draw] {};
\draw (270+36:1) node (v3) [draw] {};
\draw (342+36:1) node (v4) [draw] {};
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\draw (v4) -- (v3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.7]
\tikzstyle{every node}=[minimum width=0pt, inner sep=2pt, circle]
\draw (-.5,-.9) node (v1) [draw] {};
\draw (.5,-.9) node (v2) [draw] {};
\draw (0,0) node (v3) [draw] {};
\draw (0,.9) node (v4) [draw] {};
\draw (v1) -- (v2);
\draw (v1) -- (v3);
\draw (v2) -- (v3);
\draw (v3) -- (v4);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.7]
\tikzstyle{every node}=[minimum width=0pt, inner sep=2pt, circle]
\draw (-.5,0) node (v2) [draw] {};
\draw (0,-.9) node (v1) [draw] {};
\draw (.5,0) node (v3) [draw] {};
\draw (0,.9) node (v4) [draw] {};
\draw (v1) -- (v2);
\draw (v1) -- (v3);
\draw (v2) -- (v3);
\draw (v2) -- (v4);
\draw (v3) -- (v4);
\end{tikzpicture}
\\
$P_4$
&
$\sf{paw}$
&
$\sf{diamond}$
\end{tabular}
\end{center}
\caption{The graphs $P_4$, $\sf{paw}$ and $\sf{diamond}$.}
\label{fig:forbiddendistance1}
\end{figure}
\begin{lemma}\label{lemma:P4PawDiamondForbidden}
The graphs $P_4$, $\sf{paw}$ and $\sf{diamond}$ are minimal forbidden graphs for graphs with 1 trivial distance ideal over $\mathbb{Z}$.
\end{lemma}
\begin{proof}
The fact that these are forbidden graphs follows from the observation that $P_4$, $\sf{paw}$ and $\sf{diamond}$ have exactly 2 trivial distance ideals over $\mathbb{Z}$; this can be verified with the code in the Appendix.
The minimality follows from Lemma \ref{lemma:distance2inducedmonotone} and Example \ref{example:P4inducemonotone}, and the fact that no proper induced subgraph of these graphs has 2 trivial distance ideals over $\mathbb{Z}$.
\end{proof}
\begin{figure}[h!]
\begin{center}
\begin{tabular}{c@{\extracolsep{10mm}}c@{\extracolsep{10mm}}c@{\extracolsep{10mm}}c@{\extracolsep{10mm}}c}
\begin{tikzpicture}[scale=.7]
\tikzstyle{every node}=[minimum width=0pt, inner sep=2pt, circle]
\draw (126-36:1) node (v1) [draw] {};
\draw (198-36:1) node (v2) [draw] {};
\draw (270-36:1) node (v3) [draw] {};
\draw (342-36:1) node (v4) [draw] {};
\draw (414-36:1) node (v5) [draw] {};
\draw (v1) -- (v2);
\draw (v1) -- (v5);
\draw (v2) -- (v3);
\draw (v2) -- (v4);
\draw (v2) -- (v5);
\draw (v3) -- (v4);
\draw (v3) -- (v5);
\draw (v4) -- (v5);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.7]
\tikzstyle{every node}=[minimum width=0pt, inner sep=2pt, circle]
\draw (0:1) node (v1) [draw] {};
\draw (60:1) node (v2) [draw] {};
\draw (120:1) node (v3) [draw] {};
\draw (180:1) node (v4) [draw] {};
\draw (240:1) node (v5) [draw] {};
\draw (300:1) node (v6) [draw] {};
\draw (v1) -- (v2);
\draw (v1) -- (v3);
\draw (v1) -- (v5);
\draw (v1) -- (v6);
\draw (v2) -- (v4);
\draw (v2) -- (v5);
\draw (v2) -- (v6);
\draw (v3) -- (v4);
\draw (v3) -- (v5);
\draw (v3) -- (v6);
\draw (v4) -- (v5);
\draw (v4) -- (v6);
\draw (v5) -- (v6);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.7]
\tikzstyle{every node}=[minimum width=0pt, inner sep=2pt, circle]
\draw (-.5,-.9) node (v1) [draw] {};
\draw (.5,-.9) node (v2) [draw] {};
\draw (0,0) node (v3) [draw] {};
\draw (-.5,.9) node (v4) [draw] {};
\draw (.5,.9) node (v5) [draw] {};
\draw (v1) -- (v2);
\draw (v1) -- (v3);
\draw (v2) -- (v3);
\draw (v3) -- (v4);
\draw (v3) -- (v5);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.7]
\tikzstyle{every node}=[minimum width=0pt, inner sep=2pt, circle]
\draw (-.5,0) node (v2) [draw] {};
\draw (0,-.9) node (v1) [draw] {};
\draw (.5,0) node (v3) [draw] {};
\draw (1.5,0) node (v5) [draw] {};
\draw (0,.9) node (v4) [draw] {};
\draw (v1) -- (v2);
\draw (v1) -- (v3);
\draw (v2) -- (v3);
\draw (v2) -- (v4);
\draw (v3) -- (v4);
\draw (v3) -- (v5);
\end{tikzpicture}
\\
$K_5\setminus P_2$
&
$K_6\setminus M_2$
&
$\ltimes$
&
\sf{dart}
\end{tabular}
\end{center}
\caption{The graphs $K_5\setminus P_2$, $K_6\setminus M_2$, $\ltimes$ and $\sf dart$.}
\label{fig:forbiddencritical2}
\end{figure}
Given a family $\mathcal{F}$ of graphs, a graph $G$ is $\mathcal{F}$-free if no induced subgraph of $G$ is isomorphic to a graph in $\mathcal{F}$.
\begin{lemma}\cite[Theorem 4.3]{AV}\label{lem:classificationgamma2}
A simple connected graph is $\{P_4, K_5\setminus P_2, K_6\setminus M_2, \ltimes, \sf{dart}\}$-free if and only if it is an induced subgraph of $K_{m,n,o}$ or $\overline{K_n} \vee (K_m+K_o)$.
\end{lemma}
\begin{proposition}\label{prop:P4PawDiamondIsInKKL}
If a simple connected graph is $\{P_4, \sf{paw}, \sf{diamond}\}$-free, then it is an induced subgraph of $K_{m,n,o}$ or $\overline{K_n} \vee (K_m+K_o)$.
\end{proposition}
\begin{proof}
First note that $\sf{paw}$ is an induced subgraph of $K_5\setminus P_2$, $\ltimes$ and $\sf dart$, and $\sf{diamond}$ is an induced subgraph of $K_6\setminus M_2$.
Therefore, if $G$ is $\{P_4, \sf{paw}, \sf{diamond}\}$-free, then $G$ is $\{P_4, K_5\setminus P_2, K_6\setminus M_2, \ltimes, \sf{dart}\}$-free.
The result then follows by Lemma \ref{lem:classificationgamma2}.
\end{proof}
Now, we have the following characterization.
\begin{theorem}\label{teo:classification}
For $G$ a simple connected graph, the following are equivalent:
\begin{enumerate}
\item $G$ has only 1 trivial distance ideal over $\mathbb{Z}$.
\item $G$ is $\{P_4,\sf{paw},\sf{diamond}\}$-free.
\item $G$ is an induced subgraph of $K_{m,n}$ or $K_{n}$.
\end{enumerate}
\end{theorem}
\begin{proof}
$(1)\implies (2)$ follows from Lemma \ref{lemma:P4PawDiamondForbidden}.
$(2)\implies (3)$:
By Proposition \ref{prop:P4PawDiamondIsInKKL}, $G$ is an induced subgraph of $K_{m,n,o}$ or $\overline{K_n} \vee (K_m+K_o)$.
However, for some values of the parameters, $K_{m,n,o}$ and $\overline{K_n} \vee (K_m+K_o)$ contain induced subgraphs isomorphic to $\sf{paw}$ or $\sf{diamond}$.
By inspecting the possible values of the parameters, we now determine that $G$ must in fact be an induced subgraph of $K_{m,n}$ or $K_{n}$.
If $m,n\geq 1$ and $o\geq 2$, then $K_{m,n,o}$ contains $\sf{diamond}$ as induced subgraph.
Therefore, $o\leq 1$.
For simplicity, we assume $m\geq n\geq o$.
Thus, we have two cases:
\begin{enumerate}
\item $o=0$, or
\item $o=1$.
\end{enumerate}
In the first case, $G=K_{m,n}$.
In the second case, $K_{1,1,1}$ is the unique possibility, because if $m\geq 2$ and $n\geq 1$, then $K_{m,n,1}$ would contain $\sf{diamond}$ as induced subgraph.
Indeed, $K_{2,1,1}$ is isomorphic to $\sf{diamond}$.
If $m\geq 2$ and $n\geq 2$, then $\overline{K_n} \vee (K_m+K_o)$ contains $\sf{diamond}$ as induced subgraph.
For simplicity, we assume $m\geq o$.
Thus, we have two cases:
\begin{enumerate}
\item $m\leq 1$, or
\item $n\leq 1$.
\end{enumerate}
For case 1, we have that $o\leq m\leq 1$ and $n\geq 2$, thus $G$ is an induced subgraph of the complete bipartite graph $K_{2,n}$.
And for case 2, we have two cases: either $m\geq 2$ or $m=1$.
In the first case, if $n=1$, then $o=0$, since otherwise $\sf{paw}$ would be an induced subgraph of $\overline{K_1} \vee (K_m+K_o)$.
But $\overline{K_1} \vee (K_m)$ is isomorphic to a complete graph with $m+1$ vertices.
In the second case, $G$ is an induced subgraph of $\overline{K_1} \vee (K_1+K_1)\cong P_3$.
$(3)\implies (1)$: Note that any non-trivial connected graph has trivial first distance ideal.
For an isolated vertex we have $I^\mathbb{Z}_1(K_1,\{ x\}) =\left< x \right>$.
Now we have to compute the second distance ideals of $K_{n}$ and $K_{m,n}$.
The 2-minors of the generalized distance matrix of a complete graph are of the forms $x_ix_j-1$ and $x_i-1$.
Since $x_ix_j-1\in \left< x_1-1, \dots, x_n-1 \right>$,
\begin{equation}\label{eqn:gamma1}
I^\mathbb{Z}_2(K_n,X_{K_n})=
\begin{cases}
\left< x_1x_2-1 \right> & \text{if } n=2, \text{ and,}\\
\left< x_1-1, \dots, x_n-1 \right> & \text{if } n\geq 3.
\end{cases}
\end{equation}
Thus complete graphs have at most one trivial distance ideal.
If $m\geq2$ and $n=1$, then the 2-minors of $D(K_{m,1},\{x_1, \dots, x_m, y\})$ have one of the following forms: $x_ix_j-4$, $2x_i-4$, $x_i-2$, $x_iy-1$ and $2y-1$.
Thus
\[
I^\mathbb{Z}_2(K_{m,1},\{x_1, \dots, x_m, y\})=\langle x_1-2, \dots, x_m-2,2y-1\rangle.
\]
If $m\geq2$ and $n\geq2$, then the 2-minors of $D(K_{m,n},\{x_1, \dots, x_m, y_1, \dots, y_n\})$ have one of the following forms: $x_ix_j-4$, $2x_i-4$, $x_i-2$, $x_iy_j-1$, $2x_i-1$,
$y_iy_j-4$, $2y_i-4$, $y_i-2$, $2y_i-1$ and 3.
Thus
\[
I^\mathbb{Z}_2(K_{m,n},\{x_1, \dots, x_m, y_1, \dots, y_n\})=\langle x_1-2, \dots, x_m-2, y_1-2, \dots, y_n-2,3\rangle.
\]
Therefore complete bipartite graphs have at most one trivial distance ideal.
\end{proof}
We finish this section by classifying graphs which have exactly 1 trivial distance ideal over $\mathbb{R}$.
\begin{lemma}\label{lemma:P4PawDiamondC4Forbidden}
The graphs $P_4$, $\sf{paw}$, $\sf{diamond}$ and $C_4$ are minimal forbidden graphs for graphs with 1 trivial distance ideal over $\mathbb{R}$.
\end{lemma}
\begin{proof}
The graphs $P_4$, $\sf{paw}$, $\sf{diamond}$ and $C_4$ have exactly 2 trivial distance ideals over $\mathbb{R}$, which can be verified with the code in the Appendix.
The minimality of $\sf{paw}$, $\sf{diamond}$ and $C_4$ follows from Lemma \ref{lemma:distance2inducedmonotone}. Minimality of $P_4$ follows from Example \ref{example:P4inducemonotone} and the fact that no proper induced subgraph of these graphs has 2 trivial distance ideals over $\mathbb{R}$.
\end{proof}
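For instance, the triviality of the second distance ideal of $P_4$ can be checked over $\mathbb{Q}$ (which suffices for $\mathbb{R}$) with a short Gr\"obner-basis computation; the following SymPy snippet is a sketch of ours, analogous to the code in the Appendix.
\begin{verbatim}
from itertools import combinations
from sympy import symbols, Matrix, groebner

x = symbols('x1:5')
# generalized distance matrix D(P_4, X): d(u,v) = |u - v| off the diagonal
D = Matrix(4, 4, lambda i, j: x[i] if i == j else abs(i - j))
minors2 = [D[list(r), list(c)].det()
           for r in combinations(range(4), 2)
           for c in combinations(range(4), 2)]
G = groebner(minors2, *x)
print(G.exprs)  # [1]: the ideal of 2-minors is trivial over Q, hence over R
\end{verbatim}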
\begin{theorem}\label{teo:classification2}
For $G$ a simple connected graph, the following are equivalent:
\begin{enumerate}
\item $G$ has only 1 trivial distance ideal over $\mathbb{R}$.
\item $G$ is $\{P_4,\sf{paw},\sf{diamond}, C_4\}$-free.
\item $G$ is an induced subgraph of $K_{1,n}$ or $K_{n}$.
\end{enumerate}
\end{theorem}
\begin{proof}
The statement can be derived from Lemma \ref{lemma:P4PawDiamondC4Forbidden}, Theorem~\ref{teo:classification} and the observation that
$I^\mathbb{R}_2\left(K_{m,n},X_{K_{m,n}}\right)$ is trivial when $m\geq n\geq 2$.
\end{proof}
A graph is {\it trivially perfect} if for every induced subgraph the stability number equals the number of maximal cliques.
In \cite[Theorem 2]{G}, Golumbic characterized trivially perfect graphs as $\{P_4,C_4\}$-free graphs. There are other equivalent characterizations of this family; see \cite{B,CCY,Rubio}.
Therefore, from Theorem~\ref{teo:classification2}, graphs with 1 trivial distance ideal over $\mathbb{R}$ are a subclass of trivially perfect graphs.
A related family of graphs comes from the {\it graph sandwich problem} for property $\Pi$, which is defined as follows.
Given two graphs $G_1 = (V, E_1)$ and $G_2 = (V, E_2)$ such that $E_1 \subseteq E_2$, is there a graph $G = (V, E)$ such that $E_1 \subseteq E \subseteq E_2$ which satisfies property $\Pi$?
In the literature there are several results showing that the graph sandwich problem, when restricted to graph families related to those found in Theorem~\ref{teo:classification2}, lies in a certain complexity class.
For instance, it was proved in \cite{DFMT} that the {\sf paw}-free graph sandwich problem is in ${\sf P}$.
See also \cite{G1}.
In \cite[Theorem 3]{HW} it was proved that the distance matrices of trees have exactly 2 invariant factors equal to 1.
This differs from the critical group, since the Laplacian matrix of any tree has all invariant factors equal to 1.
An interesting and difficult question would be to characterize the graphs whose distance matrix has at most 2 invariant factors equal to 1.
\section{Acknowledgements}
C.A. Alfaro was partially supported by SNI and CONACyT.
\section{Introduction}
It is a well known fact that the classical general relativity cannot
be trusted when curvature of the manifold approaches the Planck
regime. It means that understanding the nature of any classical
singularity that resides in the black hole interior as well as its
closest vicinity requires profound changes of the standard theory.
It is therefore natural that a great deal of effort has been concentrated on
construction of the singularity-free models.
(See for example Refs.~\cite{Sakharov,Gliner,Bardeen:68,frolov1,frolov2,Irina1,Borde,Mars,
abg,kiryll1,Dymnikova} and the references cited therein).
One of the most intriguing solutions of this type has been constructed
by Ayon-Beato and Garcia~\cite{abg} and by Bronnikov~\cite{kiryll1}
(ABGB). This solution to the system of coupled equations of the
nonlinear electrodynamics and gravity describes a regular static and
spherically-symmetric black hole characterized by the total mass
$\mathcal{M}$ and the magnetic charge $Q.$ The status of the nonlinear
electrodynamics in this model is to provide a matter source. The
casual structure of the solution is governed by the null geodesics
(``ordinary photons") rather than the photons of the nonlinear theory.
The latter move along geodesics of the effective
geometry~\cite{Salim1,Salim2}.
The recent popularity of the models constructed within the framework
of the nonlinear electrodynamics has been stimulated by the fact that the latter
appears naturally in the low-energy limit of certain
formulations of the string and M-theory~\cite{nat,cajtlin}.
The Lagrangian of the nonlinear electrodynamics adopted by Ayon-Beato and
Garcia and by Bronnikov has Maxwell asymptotic in a weak field limit, and,
consequently, far from the ABGB black hole as well as for $Q/\mathcal{M} \ll 1$ the
line element resembles the Reissner-Nordstr\"om (RN) solution;
noticeable differences appear near the extremality limit.
On the other hand, as $r \to 0$ one has the asymptotic behaviour
\begin{equation}
-g_{tt} =\frac{1}{g_{rr} }\sim 1-\frac{4\mathcal{M}}{r}\exp \left( \frac{-Q^{2}}
{\mathcal{M}r}\right)
\end{equation}
and this very information suffices to demonstrate the finiteness
of the curvature invariants.
Although
more complicated than the RN geometry the ABGB solution allows exact
analytical treatment. Indeed, the location of the horizons can be
expressed in terms of known special functions that certainly
simplifies investigations of its causal structure. The ABGB geometry
has been studied by a number of authors and from various points of
view. See, for example~\cite{BretonN,Radinschi,Yang,MatMPLA,Bur1,Bur2,Kocio1,Berej,
JM2004,JaSpin,Try}, where the stability of the ABGB spacetime, its
gravitational energy, generalizations and
the stress-energy tensor of the quantized field propagating in such a
geometry have been analyzed.
Especially interesting are the thermodynamic considerations
presented in Refs.~\cite{Kor,JaActa}.
In this paper we shall generalize the ABGB solution to the cosmological
background. Although the solution is valid for any $\Lambda$ we shall
restrict ourselves to the positive cosmological constant.
Consequently, an interesting group of related solutions describing
topological black holes are not considered here. The paper is
organized as follows: In section 2 we construct the solution
describing the ABGB black hole in the de Sitter geometry and give its
qualitative discussion. The quantitative discussion of its main
features is contained in Sec.3. The near-horizon geometry of the
degenerate configurations is constructed and discussed in Sec.4
whereas the analysis of the lukewarm black holes
is presented in Sec.5. Finally, Sec.6 contains short
discussion and suggestions for extending this work. Throughout the
paper the geometric system of units is used and our conventions follow
the conventions of MTW.
\section{Cosmological solutions of the coupled system of equations
of nonlinear electrodynamics and gravity}
In the presence of the cosmological constant the coupled system of
the nonlinear electrodynamics and gravity is described by the action
\begin{equation}
S=\frac{1}{16\pi G}\int \left( R-2\Lambda \right) \sqrt{-g}\,d^{4}x+S_{m},
\end{equation}
where
\begin{equation}
S_{m}=-\dfrac{1}{16\pi }\int \mathcal{L}\left( F\right) \sqrt{-g}\,d^{4}x.
\end{equation}
Here $\mathcal{L}\left( F\right) $ is some functional of $F=F_{ab}F^{ab}$
(its exact form will be given later) and all symbols have their usual
meaning. The tensor $F_{ab}$ and its dual $^{*}F^{ab}$ satisfy
\begin{equation}
\nabla _{a}\left( \dfrac{d\mathcal{L}\left( F\right) }{dF}F^{ab}\right) =0
\end{equation}
and
\begin{equation}
\nabla _{a}\,^{\ast }F^{ab}=0.
\end{equation}
The stress-energy tensor defined as
\begin{equation}
T^{ab}=\frac{2}{\sqrt{-g}}\frac{\delta }{\delta g_{ab}}S_{m} \label{tensep}
\end{equation}
is given therefore by
\begin{equation}
T_{a}^{b}=\dfrac{1}{4\pi }\left( \dfrac{d\mathcal{L}\left( F\right) }{dF}%
F_{ca}F^{cb}-\dfrac{1}{4}\delta _{a}^{b}\mathcal{L}\left( F\right) \right) .
\end{equation}
Let us consider the spherically symmetric and static configuration described
by the line element of the form
\begin{equation}
ds^{2}=-e^{2\psi \left( r\right) }f(r)dt^{2}+\frac{dr^{2}}{f(r)}%
+r^{2}d\Omega ^{2}, \label{el_gen}
\end{equation}
where $f(r)$ and $\psi(r)$ are unknown functions.
The spherical symmetry places restrictions on the components of $F_{ab}$
tensor and its only nonvanishing components compatible with the assumed
symmetry are $F_{01}$ and $F_{23}$. Simple calculations yield
\begin{equation}
F_{23}=Q\sin \theta
\end{equation}
and
\begin{equation}
r^{2}e^{-2\psi }\dfrac{d\mathcal{L}\left( F\right) }{dF}F_{10}=Q_{e},
\end{equation}
where $Q$ and $Q_{e}$ are the integration constants interpreted as the
magnetic and electric charge, respectively. In what follows we shall assume
that the electric charge vanishes, and, consequently, $F$ is given by
\begin{equation}
F=\dfrac{2Q^{2}}{r^{4}}. \label{postacF}
\end{equation}
The stress-energy tensor (\ref{tensep}) calculated for this configuration
is
\begin{equation}
T_{t}^{t}=T_{r}^{r}=-\dfrac{1}{16\pi }\mathcal{L}\left( F\right) \label{t1}
\end{equation}
and
\begin{equation}
T_{\theta }^{\theta }=T_{\phi }^{\phi }=\dfrac{1}{4\pi }\dfrac{d\mathcal{L}%
\left( F\right) }{dF}\dfrac{Q^{2}}{r^{4}}-\dfrac{1}{16\pi }\mathcal{L}\left(
F\right),
\end{equation}
which reduces to its Maxwell form for $\mathcal{L}(F) = F.$
With the substitution
\begin{equation}
f(r)=1-\frac{2M(r)}{r}
\label{fM}
\end{equation}
the left hand side of the time and radial components of the Einstein field equations
\begin{equation}
L_{a}^{b}\equiv G_{a}^{b}+\Lambda \delta_{a}^{b} = 8 \pi T_{a}^{b}
\end{equation}
assume simple and transparent form
\begin{equation}
L_{t}^{t}=-\frac{2}{r^{2}}\frac{dM}{dr}+\Lambda ,\hspace{0.4cm}
L_{r}^{r}=L_{t}^{t}+\frac{2}{r}\left( 1-\frac{2M}{r}\right) \frac{d\psi }{dr},
\label{1st}
\end{equation}
and the resulting equations can be easily (formally) integrated.
Further considerations require specification of the Lagrangian $\mathcal{L}
\left( F\right) .$
We demand that it should have the proper asymptotics, i.e., in a weak field limit it should approach
$F.$
Following Ay\'on-Beato, Garc\'\i a~\cite{abg} and Bronnikov~\cite{kiryll1}
let us
choose it in the form
\begin{equation}
\mathcal{L}\left( F\right) \,=F\left[ 1-\tanh ^{2}\left( s\,\sqrt[4]{\frac{
Q^{2}F}{2}}\right) \right] , \label{labg}
\end{equation}
where
\begin{equation}
s=\frac{\left| Q\right| }{2b}, \label{sabg}
\end{equation}
and the free parameter $b$ will be adjusted to guarantee regularity at the
center.
Inserting Eq.~(\ref{sabg}) into (\ref{labg}) and making use of Eq.~(\ref
{postacF}) one has
\begin{equation}
8\pi T_{t}^{t}=8\pi T_{r}^{r}=-\frac{Q^{2}}{r^{4}}\left( 1-\tanh ^{2}\frac{
Q^{2}}{2br}\right) . \label{tep}
\end{equation}
Now the equations can easily be integrated in terms of the
elementary functions:
\begin{equation}
M\left( r\right) =C_{1}-b\tanh \frac{Q^{2}}{2br}+\frac{\Lambda r^{3}}{6},
\hspace{0.4cm}\psi \left( r\right) =C_{2} \label{mm0}
\end{equation}
where $C_{1}$ and $C_{2}$ are integration constants. Making use of the
conditions
\begin{equation}
M(\infty)={\mathcal M}, \hspace{0.4cm} \psi(\infty)=0
\end{equation}
gives $C_{1}=\mathcal{M}$ and $C_{2} = 0.$ On the other hand, demanding
the regularity of the line element as $r\rightarrow 0$ yields $b=\mathcal{M},$
and, consequently, the resulting line element has
the form (\ref{el_gen}) with $\psi(r)=0$ and
\begin{equation}
f(r) = 1-\frac{2 \mathcal{M}}{r}\left(1-\tanh\frac{Q^{2}}{2 \mathcal{M} r}
\right)-\frac{\Lambda r^{2}}{3}.
\label{el_gen1}
\end{equation}
We shall call this solution the Ay\'on-Beato-Garc\'\i a-Bronnikov-de Sitter
solution (ABGB-dS).
It could be easily shown that putting $Q=0$ yields the Schwarzschild-de Sitter
(Kottler) solution, whereas for $\Lambda=0$ one gets the Ay\'on-Beato, Garc\'\i a
line element as reinterpreted by Bronnikov (ABGB).
To study ABGB-dS line element it is convenient to introduce the
dimensionless quantities $x=r/M$, $q=\left| Q\right| /M$ and $\lambda
=\Lambda M^{-2}$. For $\lambda > 0$ the equation
\begin{equation}
1-\frac{2}{x}\left( 1-\tanh\frac{q^2}{2x}\right) -\frac{1}{3}\lambda x^2 =0
\label{eqq}
\end{equation}
has, in general, four
roots; the case $\lambda=0$ has been treated analytically
in Refs~\cite{Kocio1,Berej,JM2004}.
Unfortunately, Eq.~(\ref{eqq}) cannot be solved in terms of known
transcendental functions and one is forced to refer to some
approximations or employ numerical methods. Simple analysis indicates that
for $x>0$ it can have, depending on the
values of the parameters, three, two or one distinct real solutions.
The above configurations can, therefore,
have three distinct horizons located at zeros of $f\left( r\right) $, a
degenerate and a nondegenerate horizon, and, finally, one triply
degenerate horizon.
Let us consider each of the configuration more closely
and denote the inner, the event and the cosmological horizon by $r_{-}$, $
r_{+}$ and $r_{c},$ respectively. The first configuration is characterized
by $r_{-}<r_{+}<r_{c}$. The second configuration can be realized in
two different ways depending on which horizons do merge and is characterized
either by \ $r_{-}=r_{+}<r_{c}$ (degenerate horizons of the first type, referred to as
the cold black hole)
or $r_{-}<r_{+}=r_{c}$ (degenerate horizons of the
second type sometimes referred to
as the charged Nariai black hole~\footnote{It must not be confused with
the charged Nariai solution which will be discussed in section 4.}).
Finally, for the third (ultracold)
configuration one has $r_{-}=r_{+}=r_{c}.$ The degenerate horizons are
located at simultaneous zeros of $f\left( r\right) $ and $f^{\prime }\left(
r\right) $ for the cold or the Nariai black hole and $f\left( r\right) =f^{\prime }\left( r\right)=
f^{\prime \prime }\left( r\right)=0 $ for the ultracold black hole.
Additionally one can single out the lukewarm configuration, for which the
Hawking temperature of the black hole equals the temperature of the
cosmological horizon.
The Penrose diagrams visualizing two-dimensional sections of the
conformally transformed ABGB-dS geometry are precisely of the type considered earlier
for the Reissner-Nordstr\"om-de Sitter black hole~\cite{Mellor}, with one notable distinction:
the central singularity must be replaced by a regular region.
Although the line element (\ref{el_gen}) with (\ref{el_gen1})
is rather complicated and cannot be studied
analytically one can easily analyze its main features simply by referring to
its important limits. First, let us observe that for small $q$ $\left(
q\ll 1\right) $ as well as at great distances from the center $\left(
x\gg 1\right) $ the ABGB-dS solution closely resembles that of RN-dS. Indeed,
expanding $f\left( r\right) $ one obtains
\begin{equation}
f\,=1-\frac{2\mathcal{M}}{r}+\frac{Q^{2}}{r^{2}}\,-\frac{\Lambda r^{2}}{3}-\,%
\frac{Q^{6}}{12\mathcal{M}^{2}r^{4}}\,+\,....
\end{equation}
On the other hand, as $r\rightarrow 0$, one has
\begin{equation}
f\sim 1-\frac{4\mathcal{M}}{r}\exp \left( \frac{-Q^{2}}{\mathcal{M}r}\right)
-\frac{\Lambda r^{2}}{3}
\end{equation}
and the metric in the vicinity of the center may by approximated by the
de Sitter line element. One concludes, therefore, that the solution is regular
at $r=0$ and, in view of the asymptotic behaviour of the line element,
it is unnecessary to calculate the curvature
invariants explicitly to demonstrate this. For example, at $r=0$ the Kretschmann scalar is equal to $8\Lambda^{2}/3,$
as expected. Further, observe that for small $\lambda $ the
structure of the ABGB-dS geometry is to a certain extent qualitatively similar
to the ABGB spacetime. Simple analysis indicates that there are, at most,
three positive roots of the equation $f\left( r\right) =0.$ Two of them
are located closely to the
inner and event horizons of the ABGB black hole whereas the third one,
located approximately at
\begin{equation}
x_{c}\simeq \left( \frac{3}{\lambda }\right) ^{1/2}
\label{cosmo}
\end{equation}
is to be interpreted as the cosmological horizon.
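In practice the roots of Eq.~(\ref{eqq}) are found numerically. A minimal sketch of such a computation (the sample values of $q$ and $\lambda$ and the root brackets below are ours) reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def f(x, q, lam):
    return 1.0 - (2.0/x)*(1.0 - np.tanh(q**2/(2.0*x))) - lam*x**2/3.0

q, lam = 0.9, 0.02                            # sample values with three horizons
x_in  = brentq(f, 0.1,  1.0, args=(q, lam))   # inner horizon
x_ev  = brentq(f, 1.0,  3.0, args=(q, lam))   # event horizon
x_cos = brentq(f, 3.0, 12.0, args=(q, lam))   # cosmological horizon
print(x_in, x_ev, x_cos)   # x_cos lies somewhat below sqrt(3/lam) ~ 12.25
\end{verbatim}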
\section{Horizon structure of the ABGB-dS black holes}
Having established the
main features of the ABGB-dS solution qualitatively let us study it in
more detail and consider the approximate solutions for $\lambda \ll 1$ first.
We shall start, however, with a brief discussion of the $\lambda =0$ case
and present the results that will be needed in the subsequent calculations.
In Ref.~\cite{Kocio1} it has been shown that for $\lambda =0$
the location of the inner,
$r_{-}^{(0)},$ and event horizon, $r _{+}^{(0)},$ of the ABGB
black holes can be
expressed in terms the real branches of the Lambert functions, $W_{\left(
\pm \right) }\left( s\right) $:
\begin{equation}
\rho _{\pm }=r^{(0)}_{\pm}/{\mathcal M}=-\frac{4q^{2}}{4W_{\left( \pm \right) }\left( -\frac{q^{2}}{4}%
\exp \left( q^{2}/4\right) \right) -q^{2}}.
\end{equation}
Here $W_{+}$ (the principal branch) and $W_{-}$ are the only real branches
of the Lambert function.
Simple manipulations show that $\rho _{+}$ and $\rho _{-}$ for
\begin{equation}
q=q_{c}=2\sqrt{W_{+}\left( 1/e\right) }\equiv 2\sqrt{w_{0}}
\label{abg_q}
\end{equation}
merge at
\begin{equation}
\rho _{c}=\frac{4w_{0}}{1+w_{0}}.
\label{abg_x}
\end{equation}
For $q^{2}>q_{c}^{2}$ the degenerate solution bifurcates into a pair of
complex roots.
For small $\lambda$ one expects that the inner and the event horizon
lie close to $\rho_{-}$ and $\rho_{+},$ respectively, and the position
of the cosmological horizon can always be approximated by Eq. (\ref{cosmo}).
Depending on $q$
there will be two horizons located at the roots $x_{-}$ and $x_{+}$, which
for $q^{2}=q_{cr}^{2}$ coalesce into the degenerate horizon $x_{cr}.$ For $%
q^{2}>q_{cr}^{2}$ there are no real solutions for $x_{\pm}$ and $x_{c}$ tends
to $(3/\lambda)^{1/2}$ with increasing $q.$
Now, let us consider the cold black hole, i.e. the one for which $
x_{-}=x_{+}=x_{cr}.$ Taking $\lambda $ to be a small parameter of the
expansion one obtains
\begin{eqnarray}
q_{cr}^{2} &=&4w_{0}+\frac{64w_{0}^{3}}{3\left( 1+w_{0}\right) ^{2}}\lambda +%
\frac{1024w_{0}^{5}\left( 5+3w_{0}\right) }{9\left( 1+w_{0}\right) ^{5}}%
\lambda ^{2} \nonumber\\
&&+\frac{32768w_{0}^{7}}{81\left( 1+w_{0}\right) ^{8}}\left(
59+65w_{0}+18w_{0}^{2}\right) \lambda ^{3}+O\left( \lambda ^{4}\right)
\end{eqnarray}
and
\begin{eqnarray}
x_{cr} &=&\frac{4w_{0}}{1+w_{0}}+\frac{64w_{0}^{3}\left( 3+w_{0}\right) }{%
3\left( 1+w_{0}\right) ^{4}}\lambda +\frac{1024w_{0}^{5}\left(
25+18w_{0}+3w_{0}^{2}\right) }{9\left( 1+w_{0}\right) ^{7}}\lambda ^{2}\nonumber \\
&&+\frac{32768w_{0}^{7}}{81\left( 1+w_{0}\right)^{10} }\left(
413+461w_{0}+162w_{0}^{2}+18w_{0}^{3}\right) \lambda ^{3}+O\left( \lambda
^{4}\right) .
\label{bb}
\end{eqnarray}
For $q^{2}<q_{cr}^{2},$ following Romans~\cite{Romans}, we shall introduce the
dimensionless parameter $\Delta =\sqrt{q_{cr}^{2}-q^{2}}$ and look
for solutions of Eq. (\ref{eqq}) of the form
\begin{equation}
x_{\pm }=\rho_{\pm} +x_{1}^{\left( \pm \right) }\lambda
+x_{2}^{\left( \pm \right) }\lambda ^{2}+O\left( \lambda ^{3}\right) .
\label{aa}
\end{equation}
where
\begin{equation}
\rho_{\pm}=\frac{4\left( q_{c}^{2}-\Delta ^{2}\right) }{
4W_{\pm }\left( \eta \right) -q_{c}^{2}+\Delta ^{2}}
\end{equation}
and
\begin{equation}
\eta = \frac{\Delta^{2}-q_{c}^{2}}{4}\exp\left(\frac{q^{2}_{c}-\Delta^{2}}{4}\right).
\end{equation}
Now, solving a chain of equations of ascending complexity, one has
\begin{equation}
x_{1}^{\left( \pm \right) }=\frac{4\rho_{\pm}\left[ 64w_{0}^{3}-16w_{0}^{3}\rho_{\pm}-\left( 1+w_{0}\right)
^{2}\rho_{\pm}^{3}\right] }{3\left( 1+w_{0}\right) ^{2}\left[ \left(
4-\rho_{\pm}\right) \left( 4w_{0}-\Delta ^{2}\right) -4\rho_{\pm}\right] }
\end{equation}
and
\begin{eqnarray}
x_{2}^{\left( \pm \right) } &=&\frac{4}{\left( 4-\rho_{\pm}\right) \left[ \left( 4-\rho_{\pm}\right) \left(
4w_{0}-\Delta ^{2}\right) -4\rho_{\pm}\right] }\left[ 2 \left(x_{1}^{\left( \pm \right) }\right)^{2}+\frac{1}{9}
\left( \rho_{\pm}-2\right) \rho_{\pm}^{6}\right. \nonumber \\
&&\left. +\frac{2}{3}\left( \rho_{\pm}-6\right) \rho_{\pm}^{3}x_{1}^{\left( \pm \right) }+\frac{
256\rho_{\pm}w_{0}^{5}}{9\left( 1+w_{0}\right) ^{5}}\left( 5+3w_{0}\right) \left(
\rho_{\pm}-4\right) ^{2}\right].
\end{eqnarray}
It could be easily shown that putting $\Delta =0$ and taking the limit of (\ref{aa}) as $
\rho_{\pm}\rightarrow 4w_{0}/\left( 1+w_{0}\right) $ one
obtains (\ref{bb}).
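These expansions are easy to cross-check: for a given small $\lambda$ one may solve $f=f^{\prime}=0$ directly and compare with the series. In the following SymPy sketch (ours; the sample value of $\lambda$ and the initial guesses, taken from the $\lambda=0$ limit, are assumptions) the exact degenerate radius is compared with the $O(\lambda)$ truncation of the expansion for $x_{cr}$:
\begin{verbatim}
import sympy as sp

lam = 0.01                                   # sample small value
x, q = sp.symbols('x q', positive=True)
f = 1 - (2/x)*(1 - sp.tanh(q**2/(2*x))) - lam*x**2/3
print(sp.nsolve([f, sp.diff(f, x)], (x, q), (0.87, 1.06)))  # exact (x_cr, q_cr)
w0 = sp.LambertW(sp.exp(-1))
print(sp.N(4*w0/(1 + w0) + 64*w0**3*(3 + w0)/(3*(1 + w0)**4)*lam))
\end{verbatim}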
In the situations when the cosmological constant cannot be regarded as
small, the analytical treatment of the horizon structure of the ABGB-dS black holes is
impossible. However, although we are unable
to calculate the exact location of horizons in the spacetime of ABGB-dS black holes,
one can use a simple trick. Indeed, the form of the equation~(\ref{eqq})
suggests that it can be
solved easily with respect to $q$ yielding
\begin{equation}
q =\pm \sqrt{x \ln \frac{12-3x+ \lambda x^{3}}{x(3 - \lambda x^{2})}}.
\label{inv}
\end{equation}
This allows one to draw curves $q=q(x)$ for various (constant) $\lambda .$
The extrema of the curves represent either the degenerate horizons of the cold black holes
or the charged Nariai black holes. A closer examination shows that
for $q>0$ one has minima for the configurations with $r_{+}=r_{c}$ and maxima
for the cold black hole. Drawing, on the other hand, $\lambda = \lambda(x)$ curves
for constant values of $q$ one has minima for the configurations with $r_{-} =r_{+}$
and maxima for the charged Nariai black holes.
Except for the ultracold black hole, the
configurations with the cosmological horizon only are not considered in this paper.
The results of such a calculation are displayed in Fig.~1.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{pierwszy.eps}
\caption{The curves in this figure display values of $q$ as function of $x$
(where $x$ denotes the positive roots of Eq.(\ref{eqq})) for constant $\lambda$
(see Eq.(\ref{inv})).
The curves are drawn for $\lambda=0.02\,i$, $i=0,\dots,15$. The extrema
of the curves represent degenerate configurations.}
\end{figure}
Now, rotating the diagram counter-clockwise by ninety degrees
and subsequently reflecting the curves thus obtained in
the vertical axis, one gets the desired result.
On the other hand one can employ numerical methods and the results
of such calculations are presented in Fig. 2,
where, for better clarity, only $q>0$ region has been displayed.
To investigate the horizon structure it is useful to focus attention on
$x=x(q)$ curves of constant $\lambda,$ where $x$ is, depending on
its position on the curve, one of the three horizons.
For each $\lambda \geq \lambda_{0}$ (where $\lambda_{0}$ denotes some critical value
of the cosmological constant to be given later) the (rescaled) radii of the inner
horizon (the lower branch), the event horizon (the middle branch) and the
cosmological horizon (the upper branch) comprise an S-shaped curve
and the turning points of each curve represent degenerate horizons.
On the other hand, for $\lambda <\lambda_{0}$ there is only one turning point representing
cold black hole with $r_{+}=r_{-}$ and the cosmological horizon branch is separated form the rest
of the curve. The degenerate horizon of the second
type appears precisely for $\lambda_{0} = 1/9.$
\begin{figure}
\centering
\includegraphics[width=8cm]{horyzonty.eps}
\caption{The positive roots of Eq. (\ref{eqq}) as a function of $q.$
Bottom to top the curves are drawn for $\lambda =0.008, 0.02, 0.05, 0.08,
0.1, 0.11, 0.12, 0.13, 0.14, 0.15, 0.16.$
The lower branches represent the inner horizon, which, for $q<1$
is practically insensitive to the changes of the cosmological constant.
For $\lambda >1/9$ there are two additional branches representing the event
and the cosmological horizon comprising S-shaped curves. For $\lambda <1/9$
the upper branch (the cosmological horizon) is disjoint from the rest of the curve
(consisting of the lower and middle branches), and,
for small $\lambda,$ it is located approximately at $\sqrt{3/\lambda}.$}
\end{figure}
Although the qualitative behaviour of the degenerate horizons as a function of the
cosmological constant, such as the rather weak dependence of the location of the
degenerate horizon of the first type, may easily be inferred from the above
analysis, we shall discuss it in more detail.
The dependence of the location of the degenerate horizons as functions
of $\lambda$ can be calculated from Eq.~(\ref{inv}) and the results
are displayed in Fig 3. The lower branch represents
degenerate horizons of the first type whereas the upper one represents
degenerate horizons of the second kind, and, finally, the branch point represents the
triply degenerated horizon. Such a configuration occurs at
$x=1.34657$ for $\lambda =0.246019$ and $|q|= 1.1082.$
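These values are easily reproduced by solving the system $f=f^{\prime}=f^{\prime\prime}=0$ numerically; a SymPy sketch (ours, with a hand-picked initial guess) is:
\begin{verbatim}
import sympy as sp

x, q, lam = sp.symbols('x q lam', positive=True)
f = 1 - (2/x)*(1 - sp.tanh(q**2/(2*x))) - lam*x**2/3
eqs = [f, sp.diff(f, x), sp.diff(f, x, 2)]
print(sp.nsolve(eqs, (x, q, lam), (1.3, 1.1, 0.25)))
# approximately (1.34657, 1.1082, 0.246019)
\end{verbatim}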
\begin{figure}
\centering
\includegraphics[width=8cm]{double_2.eps}
\caption{The radii of the extremal horizons as function of $\lambda.$ The lower branch
represents the extreme horizons of the cold black hole $(x_{-} =x_{+})$ whereas the upper branch
represents the extreme horizons of the second type $(x_{+}=x_{c}).$
The branch point represents ultracold black hole $(x_{-} =x_{+}=x_{c})$ and the point (1/9, 3)
of the upper branch represents the extreme Schwarzschild-de Sitter solution.}
\end{figure}
\section{Extreme configurations}
The ABGB-dS solution gives rise to a number of solutions constructed
by applying some limiting procedure in the near extreme geometry. For
example, it is a well known fact that the near horizon geometry of the
Reissner-Nordstr\"om solution is properly described by the Bertotti-
Robinson line element~\cite{Brandon,Bert,Rob,Paul1} whereas Ginsparg
and Perry~\cite{Ginsparg} demonstrated that the extreme Schwarzschild-
de Sitter black hole is naturally connected with the Nariai
solution~\cite{Nariai1,Nariai2}. The procedure of Ginsparg and Perry
has been subsequently generalized and employed in a number of
physically interesting cases, such as C-metrics~\cite{Lemos0}, D-
dimensional black holes~\cite{Cald,Lemos1} and in construction of
various instantons~\cite{MannRo,Hawk}, to name a few. In this section we shall
construct the exact solutions to
the Einstein field equations which are asymptotically congruent to the
near horizon geometry of the ABGB-dS black holes.
First let us
consider the situation when the inner horizon is close to the event
horizon. For $r_{-} \leq r \leq r_{+}$ the function $f$ can be approximated
by a parabola $\alpha (x-x_{+})(x-x_{-})$ and the degenerate horizon, $x_{d},$
by $(x_{+}+x_{-})/2.$ Putting $q^{2} =q_{d}^{2}-\varepsilon^{2} \Delta^{2},$
where $\varepsilon$ is a small parameter that measures deviation from
the extremal configuration and $\Delta$ should be chosen in such a way as
to guarantee $x_{\pm} =x_{d}\pm \varepsilon,$ one can easily determine $\alpha.$
Indeed, it can be shown that for a given $\lambda$ one has
\begin{equation}
\alpha = 4 \frac{\partial f}{\partial q^{2}_{|q_{d}}}
\frac{\Delta^{2}\varepsilon^{2}}{(x_{+}-x_{-})^{2}}
= \frac{\partial f}{\partial q^{2}_{|q_{d}}}\Delta^{2} > 0.
\end{equation}
Similarly, for $r_{+}\leq r \leq r_{c},$ one can approximate the function $f$ by
a parabola $\beta (x-x_{+})(x-x_{c})$ and the degenerate horizon by $(x_{+}+x_{c})/2.$
Putting $q^{2} = q^{2}_{d} +\varepsilon^{2} \tilde{\Delta}^{2}$ one obtains
\begin{equation}
\beta = - \frac{\partial f}{\partial q^{2}_{|q_{d}}}\tilde{\Delta}^{2} <0.
\end{equation}
We shall illustrate the procedure by the example of the ABGB black hole.
First, observe that making use of the expansions of the Lambert functions $W_{+}$ and $W_{-}$
\cite{Lambert1,Lambert2}
\begin{equation}
W_{\pm}(z) =-1 +p -\frac{1}{3}p^{2} +...,
\end{equation}
where $p=\sqrt{2 (ez+1)}$ for the principal branch and $p=-\sqrt{2 (ez+1)}$ for $W_{-}(z),$
one has
\begin{equation}
x_{\pm} =\frac{4w_{0}}{1+w_{0}} \pm \frac{\sqrt{8 w_{0}}}{(1+w_{0})^{3/2}}\Delta\varepsilon +...,
\end{equation}
and, consequently,
\begin{equation}
\Delta =\frac{(1+w_{0})^{3/2}}{\sqrt{8 w_{0}}}.
\label{ddd}
\end{equation}
(Notation has been slightly modified as compared to Sec. 3).
Further observe that
\begin{equation}
\frac{\partial f}{\partial q^{2}_{|q_{d}}} = \frac{1}{4w_{0}}
\label{ppp}
\end{equation}
and $f$ may be approximated by
\begin{equation}
f=\frac{(w_{0}+1)^{3}}{32 w_{0}^{2}}(x-x_{-})(x-x_{+})=A (r-r_{-})(r-r_{+}).
\end{equation}
Finally, introducing new coordinates $\tilde{t} = T/(\varepsilon A),$
$r = r_{d}+\varepsilon \cosh y,$ and taking the limit $\varepsilon \to 0$
one obtains the line element in the form~\cite{JM2004}
\begin{equation}
ds^{2} = \frac{1}{A}\left(- \sinh^{2}y\, dT^{2} + dy^{2} \right)
+r_{d}^{2}\left(d\theta^{2} + \sin^{2}\theta \,d\phi^{2}\right).
\label{ads2xs2}
\end{equation}
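It is instructive to trace the limit explicitly: in the units used here, with $f=A(r-r_{-})(r-r_{+}),$ $r_{\pm }=r_{d}\pm \varepsilon $ and $r=r_{d}+\varepsilon \cosh y,$ one has
\[
f=A\varepsilon ^{2}\sinh ^{2}y,\qquad -f\,d\tilde{t}^{2}=-\frac{\sinh ^{2}y}{A}\,dT^{2},\qquad \frac{dr^{2}}{f}=\frac{dy^{2}}{A},
\]
so that only the angular sector, $r^{2}d\Omega ^{2}\rightarrow r_{d}^{2}d\Omega ^{2},$ requires the limit $\varepsilon \rightarrow 0.$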
Since the moduli of the curvature
radii of the maximally symmetric subspaces are different,
this solution does not belong to the Bertotti-Robinson~\cite{Bert,Rob} class.
Topologically it is a product of the round two sphere of a constant radius and
the two dimensional anti-de Sitter geometry. We will call this solution
a generalized Bertotti-Robinson solution.
Now, let us return to the ABGB-dS geometry
and observe that $\frac{\partial f}{\partial q^{2}_{|q=q_{d}}}$
is always nonnegative. Since there are no analytical expressions describing
the exact location of the horizons we shall employ the perturbative approach.
Starting with the configurations with $r_{-}$ close to $r_{+},$
and repeating the steps above, one obtains the line element (\ref{ads2xs2}) with
\begin{equation}
A = \frac{\partial f}{\partial q^{2}_{|q_{d}}}\frac{\Delta^{2}}{M^{2}},
\label{AA}
\end{equation}
where
\begin{equation}
\Delta^{2} = 2\frac{q_{d}^{2}}{x_{d}^{2}} - \cosh^{2}\frac{q_{d}^{2}}{2 x_{d}}
- \frac{q_{d}^{4}}{2 x_{d}^{3}}
\tanh\frac{q_{d}^{2}}{2 x_{d}}
\label{DeltSq}
\end{equation}
and
\begin{equation}
\frac{\partial f}{\partial q^{2}_{|q_{d}}}=\frac{1}{x_{d}^{2}
\cosh^{2}\frac{q_{d}^{2}}{2 x_{d}}}.
\label{poch}
\end{equation}
The curvature scalar
of the geometry (\ref{ads2xs2}) is a sum of curvatures of the maximally
symmetric subspaces
\begin{equation}
R = R_{AdS_{2}} + R_{S^{2}},
\end{equation}
where
$R_{AdS_{2}}=-2A$ and $R_{S^{2}}=2/r_{d}^{2}.$
We shall call (\ref{ads2xs2}) a generalized cosmological Bertotti-Robinson solution.
It can easily be checked that putting $q_{d}=q_{c}$ and $x_{d}=\rho_{c}$
as given by Eqs. (\ref{abg_q}) and (\ref{abg_x}), respectively, one obtains (\ref{ddd}) and (\ref{ppp}).
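Indeed, since $w_{0}$ satisfies $w_{0}e^{w_{0}}=e^{-1},$ one has $e^{(1+w_{0})/2}=w_{0}^{-1/2},$ and therefore
\[
\frac{q_{c}^{2}}{2\rho _{c}}=\frac{1+w_{0}}{2},\qquad \cosh \frac{1+w_{0}}{2}=\frac{w_{0}^{-1/2}+w_{0}^{1/2}}{2}=\frac{1+w_{0}}{2\sqrt{w_{0}}},
\]
so that Eq.~(\ref{poch}) gives
\[
\frac{\partial f}{\partial q^{2}_{|q_{d}}}=\frac{(1+w_{0})^{2}}{16w_{0}^{2}}\,\frac{4w_{0}}{(1+w_{0})^{2}}=\frac{1}{4w_{0}},
\]
in agreement with Eq.~(\ref{ppp}); the same substitution in Eq.~(\ref{DeltSq}) reproduces Eq.~(\ref{ddd}).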
On the other hand, for the near extreme configurations with $r_{+}$ close to $r_{c}$
we shall put $q^{2} =q_{d}^{2}+\varepsilon^{2} \tilde{\Delta}^{2}.$
Repeating calculations one has $\tilde{\Delta}^{2} =-\Delta^{2}.$ It should be noted however
that for $x_{d}=x_{+}=x_{c}$ the parameter $\tilde{\Delta}^{2}$ is positive and hence
$\beta$ is negative, as required. Introducing new coordinates
$\tilde{t} = T/(\varepsilon B)$
and $r = r_{d}+\varepsilon \cos y,$ in the limit $\varepsilon \to 0,$ one obtains
\begin{equation}
ds^{2} = \frac{1}{B}\left(- \sin^{2}y\, dT^{2} + dy^{2} \right)
+r_{d}^{2}\left(d\theta^{2} + \sin^{2}\theta \,d\phi^{2}\right),
\label{ds2xs2}
\end{equation}
where $ B =-\beta.$
Topologically it is a product of the round two sphere and the two dimensional de Sitter spacetime
and the curvature scalar is given by
\begin{equation}
R = R_{dS_{2}} + R_{S^{2}},
\end{equation}
where $ R_{dS_{2}}=2 B.$
We shall call this solution a generalized Nariai solution.
Differentiating the function $f$ with respect to $x$ twice,
subtracting $2 f(x)/x^{2} =0$, and dividing the result thus obtained
by 2, one concludes that
\begin{equation}
A =\frac{1}{2} f''(r_{d}).
\end{equation}
It follows then that $A$ vanishes at the ultraextremal horizon.
Finally observe that putting $y =\xi A^{1/2}$ in (\ref{ads2xs2}) and taking the limit
$A \to 0,$ one obtains
\begin{equation}
ds^{2} = -\xi^{2} dT^{2} +d\xi^{2} + r_{d}^{2} \left(d\theta^{2} + \sin^{2}\theta \,d\phi^{2}\right).
\end{equation}
Topologically it is a product of the two-dimensional Minkowski space and the round
two-sphere of fixed radius
and can be identified with the Pleba\'nski-Hacyan~\cite{Plebanski,Ortaggio,OrtaggioP}
solution.
Although we have adopted the point of view that the cosmological constant
is not a parameter in the solutions space but, rather, the parameter
of the space of theories, one can equally well keep $q$ constant and
change $\lambda.$ Indeed, repeating the calculations with
$\lambda = \lambda_{d}+\varepsilon^{2}\Delta^{2}$ for $r_{-}$
close to $r_{+}$ and $\lambda = \lambda_{d}-\varepsilon^{2}\Delta^{2}$
for $r_{+}$ close to $r_{c}$ one obtains precisely (\ref{ads2xs2}) and (\ref{ds2xs2}),
respectively. The sign can be deduced form the analysis of the $\lambda=\lambda(x)$ curves
obtained from Eq. (\ref{eqq}), and,
as before, the subscript $d$ denotes degenerate configurations.
Since the $AdS_{2}\times S^{2}$ and $dS_{2}\times S^{2}$ (with arbitrary radii of
the maximally symmetric subspaces) appear to describe universally the geometry of
the vicinity of the (doubly) degenerate horizons, one can use this information in the construction
of the coefficients $A$ and $B.$ Indeed, observe that $f''(r_{d})>0$ at the degenerate
horizon of the cold black hole, whereas $f''(r_{d})<0$ at the degenerate horizon
of the second type. At the degenerate horizons the Einstein field equations reduce to
\begin{equation}
-\frac{1}{r_{d}^{2}}+ \Lambda = 8\pi T_{t}^{t}
\end{equation}
and
\begin{equation}
\frac{1}{2}f''(r_{d}) + \Lambda = 8\pi T_{\theta}^{\theta}
\end{equation}
Consequently, $A =f''(r_{d})/2$ and $B=-f''(r_{d})/2$
and at the degenerate horizon of the ultracold configuration
one has $f''(r_{d}) =0.$
\section{Lukewarm configuration}
Finally, let us consider the important case of the lukewarm black holes,
i.e. the black holes with the Hawking temperature of the event horizon
equal to that associated with the cosmological horizons.
From the point of view of the quantum field theory in curved background the lukewarm
black holes are special. It has been shown that for the two-dimensional models
it is possible to construct a regular thermal state~\cite{Lizzie1}. Moreover, recent
calculations of the vacuum polarization indicate that it is regular on both
the event and cosmological horizons of the D=4 lukewarm RN-dS black holes~\cite{Lizzie2}.
\begin{figure}
\centering
\includegraphics[width=8cm]{lukewarm.eps}
\caption{The radii of horizons of the lukewarm black hole as functions of $\lambda.$
The branch point should be excluded as it represents the degenerate configuration
of the second type. The almost horizontal line displays the values of $q.$
}
\end{figure}
As the lukewarm black holes are characterized by the condition $T_{H} = T_{C}$
the radii of the horizons with this property satisfy the system of equations
\begin{equation}
f(r_{+})=f(r_{c})=0,\hspace{5mm} f'(r_{+})+f'(r_{c})=0.
\end{equation}
Since one expects that the structure of the horizons of the ABGB-dS black hole is qualitatively
similar to its Maxwellian counterpart, it is worthwhile to analyze briefly
the lukewarm Reissner-Nordstr\"om-de Sitter black holes. Simple calculations indicate that such configurations
are possible for $q=1$ and the locations of the event and the cosmological horizons are
given by
\begin{equation}
x_{+} =\frac{{\it l}}{2}\left(1-\sqrt{1-\frac{4}{{\it l}} } \right)
\end{equation}
and
\begin{equation}
x_{c} =\frac{{\it l}}{2}\left(1+\sqrt{1-\frac{4}{{\it l}} } \right),
\end{equation}
where ${\it l}=\sqrt{3/\lambda}.$ For $\lambda =3/16$ $r_{+}$ and $r_{c}$
coalesce into the degenerate horizon of the second type.
One expects, therefore, that for the lukewarm ABGB-dS black holes $q$ should always
be close to 1. Results of our numerical calculations are presented in Fig.~4, where
the (rescaled) radii of the event horizon (the lower branch) and the cosmological
horizon (the upper branch) are drawn. The function $q=q(\lambda)$ is almost horizontal.
The branch point should be excluded as it represents the degenerate configuration of the second type.
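For completeness we note that the lukewarm configurations can be obtained directly from the system given at the beginning of this section; a SymPy sketch (ours; the sample $\lambda$ and the initial guesses, borrowed from the lukewarm RN-dS formulas, are assumptions) reads:
\begin{verbatim}
import sympy as sp

xp, xc, q = sp.symbols('xp xc q', positive=True)
lam = 0.1                                     # sample value
def f(x):
    return 1 - (2/x)*(1 - sp.tanh(q**2/(2*x))) - lam*x**2/3
eqs = [f(xp), f(xc), sp.diff(f(xp), xp) + sp.diff(f(xc), xc)]
print(sp.nsolve(eqs, (xp, xc, q), (1.3, 4.2, 1.0)))
\end{verbatim}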
\section{Final Remarks}
In this paper we have constructed the regular solution to the system of coupled equations
of the nonlinear electrodynamics and gravity in the presence of the cosmological
constant. We have restricted to the positive $\Lambda$ and concentrated on the
classical issues, such as location of the horizons, degenerate configurations
and various solutions constructed form the two dimensional maximally symmetric
subspaces. The discussion of the solutions with $\Lambda <0,$ both spherically symmetric
and topological, has been intentionally omitted.
Outside the event horizon the ABGB-dS solution closely resembles RN-dS; important differences appear,
as usual, for the near extreme configurations.
At large distances as well as in the closest vicinity of the center
the line element may be approximated by the de Sitter solution.
We indicate a few possible directions of investigation. First, it would
be interesting to examine the vacuum polarization effects in the ABGB-dS
geometries and compare them with the analogous results calculated in
the RN-dS spacetime. It should be noted in this regard that the geometries
naturally connected with the ABGB-dS geometry, namely the generalized Bertotti-Robinson and
the cosmological charged Nariai metric are the exact solutions of the semiclassical
Einstein field equations~\cite{Sahni1,Sahni2,Sahni3,Ollo,Solodukhin,Ja2000,JaO1}.
Moreover, the interior of the ultraextremal ABGB-dS
black hole provides a natural background for studies initiated in Ref.~\cite{JaO2}.
This group of problems is actively investigated and the results will be published
elsewhere.
\section{Introduction}
The aim of this paper is to reconsider again the old pre-QCD
Nambu-Jona-Lasinio(NJL) ideas, but this time in the light of the new
insights gained in the last 10 years. We shall make appropriate use of
contemporary methods and ideas on boson and fermion condensates that proved
to be so useful in both nuclear physics and condensed matter physics. The
phenomena associated with broken chiral-flavour symmetries and chiral
anomalies are indeed to be ultimately related to the physical nature and
structure of a presumed quark-gluon quantum vacuum.
We assume the existence of a u,d fermion condensate and work from there,
using mathematical methods more or less common to all fermionic systems at
zero temperature. Our basic degrees of freedom are chiral QCD $u,d,s$-quarks
but only in colour singlet combinations. Their couplings to the leptons and
gammas are assumed to be primarily as prescribed by the Standard Model (SM).
Updating the NJL idea, we design an effective Hamiltonian to be worked out
under the assumption that it has a stable Dirac-Hartree-Fock-Bogoliubov
(DHFB) state as its (approximate) ground state. Necessary
renormalizations are carried out using experimental meson masses and
electroweak data. Applications and detailed numerical results will be
published elsewhere.
\section{The relevant effective Hamiltonian}
We begin by making the standard symmetry assumptions[1,2,3]. The internal
symmetry that phenomenology suggests as the most relevant to the analysis of
the lightest quark sector is the chiral-flavour symmetry of the form
\[
G:SU_{NL}\otimes SU_{NR}\otimes U_{1L}\otimes U_{1R}\qquad (1)
\]
with N=2,3. The chiral-flavour symmetry G undergoes various quantum
mechanical symmetry breakings [2,3] that reduce it down to the flavour group
\[
G\Longrightarrow H:SU_{NL+R}\otimes U_{1L+R}\qquad (2)
\]
Thus G is a symmetry of the Hamiltonian (or equations of motion) whereas H
is a symmetry of the quantum vacuum. There are two well-understood kinds of
symmetry breakings that achieve this, both traceable to the quantum
vacuum[2,3,4]. The first kind is due to the existence of a specific
classically definable Landau-like vacuum long range order (LRO)[5], to be
defined more precisely in the following sections. It breaks the chiral $%
SU_{N}$ ($SU_{N}$) part of the algebra (1), leading to the emergence of the
right set of Goldstones(viz. \ 3 and 8). The other kind of quantum
mechanical symmetry breaking removes the axial $U_{1L-R}$ symmetry of the
Hamiltonian, as a consequence of the axial anomaly. The latter can be
interpreted as simply a consequence of the physical existence of the Dirac
vacuum[4]. This implies that there are no chiral doublets in the hadronic
phase of QCD in which we live. This information is the input used in
designing the effective Hamiltonian of this paper.
We choose our basic degrees of freedom to be included in this effective
Hamiltonian: these are (we use Bjorken and Drell notations, definitions and
conventions [6])
\[
(u_{L},d_{L})\oplus (u_{R},d_{R})\oplus (s_{L},s_{R})\qquad (3)
\]
and their antiparticles. We assume the input mass matrix:
\[
M_{0n}=\left(
\begin{array}{ccc}
m_{u} & 0 & 0 \\
0 & m_{d} & 0 \\
0 & 0 & m_{s}
\end{array}
\right) \qquad (4)
\]
Thus the simplest non-trivial relevant effective Hamiltonian can be assumed
to be
\[
{\bf H}_{st}=U_{0}+{\bf H}_{0}+{\bf V}^{(1)}+{\bf V}^{(2)}\qquad (5)
\]
\[
{\bf H}_{0}=\sum_{n=cu,cd,cs}\int d^{3}\vec{x}\bar{q}_{nR}(\vec{x})(-i\vec{%
\gamma}.\vec{\nabla})q_{nR}(\vec{x})+L\Longleftrightarrow R\qquad (6)
\]
\[
{\bf V}^{(1)}=\sum_{n=cu,cd,cs}\int d^{3}\vec{x}\bar{q}_{nR}(\vec{x}%
)M_{0n}q_{nR}(\vec{x})+L\Longleftrightarrow R\qquad (7)
\]
\[
{\bf V}^{(2)}=\frac{g_{SP}}{8\pi \Lambda _{\chi }^{2}}\sum_{n,n^{\prime
}=cu,cd,cs}\int d^{3}\vec{x}\bar{q}_{nL}(\vec{x})q_{nR}(\vec{x})\bar{q}%
_{n^{\prime }R}(\vec{x})q_{n^{\prime }L}(\vec{x})+L\Longleftrightarrow R\
\qquad (8)
\]
\[
+\frac{g_{VA}\ }{8\pi \Lambda _{\chi }^{2}}\sum_{n,n^{\prime }=cu,cd,cs}\int
d^{3}\vec{x}\bar{q}_{nL}(\vec{x})\gamma ^{\mu }q_{nL}(\vec{x})\bar{q}%
_{n^{\prime }L}(\vec{x})\gamma _{\mu }q_{n^{\prime }L}(\vec{x}%
)+L\Longleftrightarrow R\ \qquad (9)
\]
The $q_{nL,R}$ are chiral field operators [6]. The necessary inputs for this
model are:
(i) a real flavour-diagonal $3\times 3$ ``input mass matrix'' $M_{0n}$;
(ii) two real independent effective dimensionless couplings $g_{SP}$ and $g_{VA}$;
(iii) two fundamental mass scales, provided by an explicit $\Lambda _{\chi }\sim 1$~GeV
(related to the scale where massless quarks become massive quarks) and an
implicit $\Lambda _{QCD}\sim 300$~MeV (a parameter basically fixing the
overall size of physical hadrons). In the context of a non-chiral quark
model this region would give the medium-range q\={q} potential.
\section{The relevant LRO}
Let us {\it define} the Landau-like long range order (LRO) parameter
appropriate to a ``u,d-quark condensate'' by {\it assuming} the existence
of a {\it robust} spinless, colourless and flavourless non-vanishing scalar
LRO [2,3]
\[
\Delta =\frac{g_{SP}}{8\pi \Lambda _{\chi }^{2}}\sum_{n=cu,cd,cs}<0|(\bar{q}%
_{nL}(\vec{x})q_{nR}(\vec{x}))|0>+L\Longleftrightarrow R\qquad (10)
\]
Translation invariance ensures independence on space coordinates. This {\it defines} $g_{SP}$. The notation $|0>$ as used here should be explained. In a
many-body context the symbol $|0>$
would usually mean that one is referring to a certain state vector in
Hilbert space representing the true ground state of the many-body system. In
the context of this paper, such a true ground state of QCD-quark-gluon
coupled fields is of course not only unknown but it is also irrelevant. This
notation is nevertheless adopted here, but $|0>$ merely defines a ``no
particle state'', which is just a tautology for ``normal operator
products'' [7]. It has therefore nothing to do with any true physical ground
state. We shall sometimes refer to it rather loosely as the ``quark
vacuum''.
We work exclusively with approximate Heisenberg operators that play the role
of ``physical states'' [7]. It is established that only colour singlets would
qualify as such. We try to find stationary solutions to the Heisenberg
equations of motion for these ``physical states''. Thus our ``no particle
state'' is nothing but a tautology for normal ordering of DHFB quasiparticle
operators.
In order to include this assumption in the effective Hamiltonian we begin by
making a straightforward Bogoliubov transformation using the Nambu-Gorkov
representation for the u,d QCD-quarks[5]:
\[
\left(
\begin{array}{c}
\alpha _{n\lambda }(\vec{p}) \\
\beta _{\bar{n}\bar{\lambda}}^{+}(-\vec{p})
\end{array}
\right) =\sum_{h=\pm \frac{1}{2}}\left(
\begin{array}{cc}
\sin \varphi _{nh\lambda }(p) & \cos \varphi _{nh\lambda }(p) \\
-\cos \varphi _{nh\lambda }(p) & \sin \varphi _{nh\lambda }(p)
\end{array}
\right) \left(
\begin{array}{c}
b_{nh}(\vec{p}) \\
d_{\bar{n}\bar{h}}^{+}(-\vec{p})
\end{array}
\right) \quad \qquad (11)
\]
where $\lambda =L,R$ are chiralities and $h$ are helicities. The ``Bogoliubov
angles'' $\varphi _{nh\lambda }(p)$ serve as adjustable variational
parameters. By separating out the bilinear from the non-bilinear terms we
find that
\[
{\bf H}_{st}(u,d,s)=U_{0}^{\prime }+{\bf H}_{0}^{\prime }+{\bf V}^{\prime
}\qquad (12)
\]
where
\[
{\bf H}_{0}^{\prime }=\sum_{n=cu,cd,cs}\int d^{3}\vec{p}:\left(
\begin{array}{cc}
\bar{q}_{nR}(\vec{p}) & \bar{q}_{nL}(\vec{p})
\end{array}
\right) \left(
\begin{array}{cc}
|\vec{p}|\ & -\Delta _{n} \\
-\Delta _{n} & -|\vec{p}|
\end{array}
\right) \left(
\begin{array}{c}
q_{nL}(\vec{p}) \\
q_{nR}(\vec{p})
\end{array}
\right) \qquad (13)
\]
and ${\bf V}^{\prime }$ represents the remaining (quadrilinear) terms. The Bogoliubov angles
are chosen so that ${\bf H}_{0}^{\prime }$ is fully diagonalized:
\[
{\bf H}_{0}^{\prime }=-U_{0}^{\prime }+\sum_{n=cu,cd,cs}\sum_{h=\pm \frac{1}{%
2}}\int d^{3}\vec{p}E_{n}(p)[b_{nh}^{+}(\vec{p})b_{nh}(\vec{p})+d_{\bar{n}%
\bar{h}}^{+}(-\vec{p})d_{\bar{n}\bar{h}}(-\vec{p})]\qquad (14)
\]
\[
\sin ^{2}\varphi (p)=\frac{1}{2}\left(1-|\vec{p}|/E_{u,d}(p)\right)\qquad n=u,d\qquad (15)
\]
\[
E_{u,d}(p)=\sqrt{|\vec{p}|^{2}+(m_{u,d}+\Delta )^{2}}\qquad (16)
\]
\[
E_{s}(p)=\sqrt{|\vec{p}|^{2}+m_{s}^{2}}\qquad (17)
\]
where by definition
\[
b_{nh}(\vec{p})|0>=0=d_{\bar{n}\bar{h}}(-\vec{p})|0>\qquad (18)
\]
Note the independence of the Bogoliubov angles on helicities/chiralities
that follows from the definition of the LRO. We shall refer to this as the
Dirac-Hartree-Fock-Bogoliubov (DHFB) static approximation.
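As a quick numerical orientation (ours, not part of the original derivation), the quasiparticle spectrum (16) and the Bogoliubov angle (15) can be evaluated directly; the parameter values below are illustrative assumptions only, roughly $\Delta \sim \Lambda _{QCD}$ together with current-quark input masses.
\begin{verbatim}
import numpy as np

# Assumed illustrative values in MeV (NOT fitted parameters):
Delta = 300.0                  # u,d condensate gap, Eq. (10)
m_ud  = 5.0                    # input u,d mass from Eq. (4)

p = np.linspace(0.0, 1000.0, 6)            # momenta in MeV
E_ud = np.sqrt(p**2 + (m_ud + Delta)**2)   # Eq. (16)
sin2 = 0.5 * (1.0 - p / E_ud)              # Eq. (15), n = u,d

for pi, E, s2 in zip(p, E_ud, sin2):
    print(f"p = {pi:6.1f}   E_ud = {E:6.1f}   sin^2(phi) = {s2:.3f}")
\end{verbatim}
At $p=0$ the particle-hole mixing is maximal, $\sin ^{2}\varphi =1/2$, and it vanishes for $|\vec{p}|\gg m_{u,d}+\Delta $, the expected BCS-like behaviour.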
\section{Renormalizations}
We shall have to renormalize the above theory, selecting experimental data
on pions and the etas for doing so [1]. These mesons, just like any physical
mesons, are here considered to be just complex poles of the physical
S-matrix amplitudes.
The electromagnetic sector of this model is represented by the Hamiltonian
\[
{\bf H}_{ew}={\bf H}_{0l\gamma }+{\bf V}_{ew}\qquad (19)
\]
and will be treated perturbatively. ${\bf H}_{0l\gamma }$ stands for free
leptons and gammas. The ${\bf V}_{ew}$ is given by the SM [2,3].
(i) Consider the $\pi ^{\pm }$ main decay channel in the rest frame,
defining the decay constant $f_{\pi ^{\pm }}$
\[
A(\ \pi ^{\pm }\longrightarrow \mu ^{\pm }+\nu _{\mu }(\bar{\nu}_{\mu
}))\equiv <0|J_{weak}|\pi ^{\pm }(M_{\pi ^{\pm }})>\qquad (20)
\]
where
\[
|\pi ^{\pm }(M_{\pi ^{\pm }})>=|\pi ^{+}>\equiv \frac{1}{\sqrt{6}}%
\sum_{c}\sum_{h}\int d^{3}\vec{p}\Psi _{\pi ^{\pm }}(\vec{p})\ast
b_{cuh}^{+}(\vec{p})d_{\bar{c}\bar{d}\bar{h}}^{+}(-\vec{p})|0>\ (21)
\]
\[
|<0|J_{weak}|\pi ^{\pm }(M_{\pi ^{\pm }})>|=|f_{\pi ^{\pm }}|\sqrt{\frac{%
M_{\pi ^{\pm }}}{2(2\pi )^{3}}}\ (22)
\]
we find the condition
\[
\sqrt{\frac{M_{\pi ^{\pm }}}{3(2\pi )^{3}}}\frac{|f_{\pi ^{\pm }}|}{4\cos
\theta _{C}}=|\int \frac{d^{3}\vec{p}}{(2\pi )^{3}}\Psi _{\pi ^{\pm }}(\vec{p%
})\ast \cos 2\varphi (p)|\ (23)
\]
(ii) Consider the $\pi ^{\pm }$ charge radius, defined through
\[
<\pi ^{+}(\vec{p}_{2})|J_{em}^{\mu }(0)|\pi ^{+}(\vec{p}_{1})>=G_{\pi
}((p_{1}-p_{2})^{2})\frac{(p_{1}+p_{2})^{\mu }}{\sqrt{(2\pi )^{3}2E_{\pi
}(p_{2})(2\pi )^{3}2E_{\pi }(p_{1})}}\ (24)
\]
where
\[
G_{\pi }((p_{1}-p_{2})^{2})=1+\frac{1}{6}<r_{\pi }^{2}>(p_{1}-p_{2})^{2}\
(25)
\]
A simple estimate of the theoretical charge radius can be obtained by making
a reasonable ansatz for the (common) normalized internal wavefunction of $%
\pi ^{\pm }$:
\[
\Psi _{\pi ^{\pm }}(\vec{\rho};x,y)=\frac{1}{\sqrt{4\pi }}N(x,y)\exp \left(-\frac{1}{2}(\rho -x\Lambda _{\chi }^{-1})^{2}/(y\Lambda _{QCD}^{-1})^{2}\right)\ (26)
\]
where $x,y$ are dimensionless variational parameters. So
\[
<r_{\pi }^{2}>=\Lambda _{\chi }^{2}\frac{F_{4}(x,y;\Lambda _{\chi }/\Lambda
_{QCD})}{F_{2}(x,y;\Lambda _{\chi }/\Lambda _{QCD})}\qquad (27)
\]
\[
F_{j}(x,y;\Lambda _{\chi }/\Lambda _{QCD})=\int_{-x/y}^{\infty }du\exp
(-(\Lambda _{\chi }/\Lambda _{QCD})^{2}u^{2})(x+yu)^{j}\ (28)
\]
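The ratio $F_{4}/F_{2}$ in (27)-(28) is easily evaluated numerically. The sketch below (ours) uses placeholder values of the variational parameters $x,y$, since the actual fit is deferred to the forthcoming numerical paper; on dimensional grounds we read the prefactor in (27) as $\Lambda _{\chi }^{-2}$, which is our assumption.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

x, y  = 1.0, 0.5               # assumed variational parameters
ratio = 1000.0 / 300.0         # Lambda_chi/Lambda_QCD, assumed scales

def F(j):
    # Eq. (28): F_j(x, y; Lambda_chi/Lambda_QCD)
    g = lambda u: np.exp(-(ratio * u)**2) * (x + y * u)**j
    val, _ = quad(g, -x / y, np.inf)
    return val

# Eq. (27), with the prefactor read as Lambda_chi^(-2), in MeV^(-2):
r2 = F(4) / F(2) / 1000.0**2
print("sqrt(<r_pi^2>) = %.3f fm" % (np.sqrt(r2) * 197.327))
\end{verbatim}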
(iii) Next, consider the main decay channel of the $\pi ^{0}$ at rest:
\[
A(\pi ^{0}\longrightarrow \gamma (\vec{k})+\gamma (-\vec{k}))=<\gamma (\vec{k}),\gamma (-\vec{k})|{\bf V}_{em}|\pi ^{0}>\qquad (29)
\]
\[
|\pi ^{0}>=\frac{1}{\sqrt{12}}\sum_{c}\sum_{q=cu,cd}\sum_{h}\int d^{3}\vec{p}%
\Psi _{\pi ^{0}}(\vec{p})b_{cqh}^{+}(\vec{p})d_{\bar{c}\bar{q}\bar{h}}^{+}(-%
\vec{p})|0>\qquad (30)
\]
But assuming the initial pseudoscalar at rest, we find that the absorptive
part of the (dominant) underlying Lorentz and gauge invariant q\={q}
annihilation amplitude $a$ (with massive quarks) is:
\[
k^{2}%
\mathop{\rm Im}%
a(\pi ^{0}(\Longleftrightarrow q+\bar{q})\longrightarrow \gamma (\vec{k}%
)+\gamma (-\vec{k}))\sim
\]
\[
\sim \left( \Delta /k\right) ^{2}(1-\left( \Delta /k\right) ^{2})^{-\frac{1}{%
2}}\ln [1-(1-\left( \Delta /k\right) ^{2})^{\frac{1}{2}}/(1+(1-\left( \Delta
/k\right) ^{2})^{\frac{1}{2}})]\qquad (31)
\]
As $\Delta ,k\longrightarrow 0$ it can be shown that the lhs of (31) tends
to $\delta (k^{2})$ [8], contrary to na\"{i}ve expectations. The deep reason
for this is the existence of the fixed anomaly pole at $k=0$, a necessary
feature if the U$_{1em}$ gauge invariance is to be maintained. This pole
lives below the physical threshold (at about $k=2\Delta $), which can be
related to the physical mass of the $\pi ^{0}$ through the
Gell-Mann-Oakes-Renner formula [2,3]. The existence of this pole could be
guessed from (31) as the rise of $k^{2}\mathop{\rm Im}a$ as $k$ descends
from infinity towards the physical threshold. This
provides another condition on our parameters.
(iv) In order to calculate the value of the $\eta _{0}-\eta _{0}^{\prime }$
anomaly [2,3], we shall have to include further interaction terms from (12).
We shall consider the effect only to leading order and ignore all isospin
mixings. Let us define
\[
|\eta _{i}>=\frac{1}{\sqrt{3}}\sum_{c}\sum_{h}\sum_{f_{i}}\int d^{3}\vec{p}\,\Psi _{\eta _{i}}(\vec{p})b_{cf_{i}h}^{+}(\vec{p})d_{\bar{c}\bar{f}_{i}\bar{h}}^{+}(-\vec{p})|0>\qquad i=1,2\qquad (32)
\]
with $f_{1}=u,d$ and $f_{2}=s$.
So by diagonalizing the $2\times 2$ matrix
\[
M_{ij}=<\eta _{i}|{\bf H}_{st}(u,d,s)-U_{0}|\eta _{j}>\qquad (33)
\]
we find that the eigenvectors (in the approximation of keeping only the
q\={q} pair exchange diagrams) are
\[
|\eta _{0}>=\cos \theta |\eta _{1}>+\sin \theta |\eta _{2}>\qquad (34)
\]
\[
|\eta _{0}^{\prime }>=\sin \theta |\eta _{1}>-\cos \theta |\eta _{2}>\qquad
(35)
\]
where the mixing angle is given by
\[
\tan \theta =(M_{\eta ^{0}}-M_{11})/M_{12}=M_{21}/(M_{\eta _{0}^{\prime
}}-M_{22})\qquad (36)
\]
\[
M_{11}=2E_{u,d}+|F_{1}|^{2}\qquad M_{22}=2E_{s}+|F_{2}|^{2}\qquad
M_{12}=F_{1}^{\ast }F_{2}+F_{1}F_{2}^{\ast }=M_{21}\qquad (37)
\]
\[
F_{1}\equiv \frac{1}{\sqrt{12}}\int_{0}^{\infty }d^{3}\vec{p}\Psi _{\eta
_{1}}(\vec{p})A_{1}(\Delta ,\vec{p})\quad F_{2}\equiv \frac{1}{\sqrt{6}}%
\int_{0}^{\infty }d^{3}\vec{p}\Psi _{\eta _{2}}(\vec{p})A_{2}(M_{s},\vec{p}%
)\qquad (38)
\]
\[
A_{1}(\Delta ,\vec{p})\equiv 12\Delta /\sqrt{|\vec{p}|^{2}+\Delta ^{2}}\qquad (39)
\]
\[
A_{2}(M_{s},\vec{p})=6M_{s}/\sqrt{|\vec{p}|^{2}+M_{s}^{2}}\qquad (40)
\]
The angle $\theta $ is thus part of our renormalized parameters.
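For orientation only, the diagonalization (33)-(36) is a standard $2\times 2$ problem; the sketch below (ours) uses placeholder matrix elements standing in for (37)-(38).
\begin{verbatim}
import numpy as np

# Placeholder matrix elements in MeV; the real M11, M22, M12 follow
# from Eq. (37) via the overlap integrals F_1, F_2 of Eq. (38).
M11, M22, M12 = 700.0, 1100.0, 350.0
M = np.array([[M11, M12],
              [M12, M22]])

masses, vecs = np.linalg.eigh(M)             # eigenvalues ascending
theta = np.arctan2(vecs[1, 0], vecs[0, 0])   # cf. Eqs. (34)-(36)

print("M_eta0, M_eta0' =", masses)
print("mixing angle theta = %.3f rad" % theta)
\end{verbatim}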
\section{Conclusions}
We presented a Hamiltonian field theory in a DHFB-static approximation as a
complement/alternative to the conventional theory of pions and etas.
However, our theory (based on an updated version of the NJL field theory [9])
can be easily extended and has a much broader scope. It can also easily be
linked to non-chiral quark models and QCD in the deep infrared. All
renormalization procedures are built into the theory itself and of course
have meaning only in the context of this theory, as every renormalization
scheme does in its own context. The issue of confinement, though not
essential to our case, is bypassed in a natural way. Detailed numerical fits
to experimental data in order to get the renormalized parameters will be
published elsewhere.
REFERENCES
[1] Particle Data Group, http://pdg.lbl.gov/.
[2] S. Weinberg, {\it The Quantum Theory of Fields}, Vol.~1 (pp.~62-81, 213-229) and Vol.~2 (pp.~182-192), Cambridge University Press, U.K., 1995-6.
[3] J. Donoghue, E. Golowich and B. R. Holstein, {\it Dynamics of the Standard Model}, Cambridge University Press, U.K., 1992, pp.~157-183.
[4] R. Jackiw, hep-th/9903255 preprint, 1999.
[5] G. Volovik, {\it The Universe in a Helium Droplet} (pp.~65-70), Clarendon Press, Oxford, 2003.
[6] J. Bjorken and S. Drell, {\it Relativistic Quantum Fields}, McGraw-Hill Book Company, 1965, pp.~43-67.
[7] P. A. M. Dirac, {\it Lectures on Quantum Field Theory}, Belfer Graduate School of Science, Yeshiva University, New York, 1966.
[8] A. Dolgov and V. I. Zakharov, Nucl. Phys. B27 (1971) 525.
[9] M. Buballa, hep-ph/0402234 preprint, 2004.
\end{document}
\section{Introduction}
Quantum computers promise dramatic speedups in a variety of disciplines~\cite{Jordan2012,AspuruGuzik2005,Abrams1999,Jaksch2003,Lidar1999}, but remain challenging to scale up in practice. A major obstacle to executing elaborate quantum algorithms is the need for gates that act conditionally on a large number of qubits. The prototypical example of such a gate is the $N$-qubit Toffoli gate, which flips a single `target' qubit if and only if all $N-1$ `control' qubits are in the state $\ket{1}$. Even though quantum devices with over 50 qubits have been reported~\cite{Zhang2017,Arute2019}, the largest Toffoli gate ever performed is, to our best knowledge, the case $N=4$~\cite{Figgatt2017}. This gap is surprising, because Toffoli gates (or equivalents) are essential ingredients of many basic computation steps, such as elementary arithmetic~\cite{Vedral1996,Cuccaro2004,vanMeter2005}, error correction~\cite{Paetznick2013}, and the Grover diffusion operator~\cite{Grover1996}.
Two different strategies exist to implement Toffoli gates. The first consists of decomposing a single $N$-qubit Toffoli gate into a circuit consisting of one- and two-qubit gates~\cite{Maslov2003,Shende2009,Linke2017} or multiqubit gates, such as the M\o{}lmer-S\o{}rensen gate in trapped ions~\cite{Sorensen1999,Maslov2018, Groenland2020}. The second approach is to perform the gate in a single step using interactions that are native to the specific platform~\cite{Isenhower2011,Khazali2020,Molmer2011,Rasmussen2020}. In particular, a recent proposal~\cite{Rasmussen2020} has demonstrated that by exploiting systems with an all-to-all Ising interaction in combination with a drive field on a single target qubit, an $i$-Toffoli gate can be implemented. This gate differs from the regular Toffoli only by a phase $+i$ on the target qubit.
Trapped ions are a natural candidate to implement this proposal, as intrinsic Ising interactions have been demonstrated in increasingly large ion crystals~\cite{Sorensen1999,Leibfried2003,Roos2008,Kim2009,Zhang2017}. Moreover, quantum operations have been demonstrated~\cite{Ballance2016,Gaebler2016} with fidelities higher than 99.9\%. Ising interactions generally arise from qubit-phonon couplings $\hat{H}_\text{q-ph}$ generated from state-dependent laser-induced forces on the ions. Combining this mechanism with the driving field \(\hat{H}_\text{drive}\) required for an $i$-Toffoli gate poses a problem, as both processes do not commute, i.e.~$[\hat{H}_\text{q-ph}, \hat{H}_{\text{drive}}]\neq 0$. As a result, the qubit states and the motion of the ions remain entangled at the end of the gate sequence, which leads to fidelity loss. This effect could be mitigated by restricting the strength of the spin-phonon coupling such that the phonons are only virtually excited~\cite{Kim2009}. However, limiting the strength of the Ising interactions leads to undesirably long gate times.
In this work, we show that this residual qubit-phonon entanglement can be suppressed by adiabatic ramping of $\hat{H}_\text{q-ph}$. In this way, the $i$-Toffoli gate operates on the dressed eigenstates of $\hat{H}_\text{q-ph}$, which are adiabatically connected to the Fock eigenstates of the non-interacting system. The benefit of this approach is that the effective Ising interaction strength does not have to be limited to the regime of virtual phonon excitation. We show that high-fidelity ($\bar{F}>99\%$), single-step $i$-Toffoli gates should be possible with up to 7 ions at gate times $\sim$ 600~\(\upmu\)s.
We start in Sec.~\ref{sec:model} with the derivation of the model for an $N$-qubit $i$-Toffoli gate for a system of trapped ions and introduce our proposal for adiabatic preparation of dressed states. In Sec.~\ref{sec:linear_crystal} we analyze the results of numerical simulations for a linear 3-ion crystal and consider the role of inhomogeneous Ising interactions mediated by multiple phonon modes. We discuss the implementation of a method based on multi-frequency laser fields~\cite{Shapira2020} to eliminate undesired phases originating from these inhomogeneous interactions. Finally, in Sec.~\ref{sec:fidelities} we calculate the fidelities for 3--9 qubit gates and discuss sources of errors and ways to mitigate them. We also consider the effects of imperfect ground state cooling.
\section{Model of a $N$-qubit Toffoli gate in trapped ions}\label{sec:model}
\subsection{Single step $N$-qubit $i$-Toffoli gate}
Briefly, the proposal~\cite{Rasmussen2020} requires qubits coupled via an Ising interaction of the form $\hat{H}_\text{Ising} = \sum^N_{ij}J^{(i,j)}\hat{\sigma}_z^{(i)}\hat{\sigma}_z^{(j)}$ with \(\hat{\sigma}^{(i)}_r\) the Pauli matrix acting on ion $i$, and $J^{(i,j)}$ the strength of the interaction field \footnote{We define $\hbar=1$ and thus omit it from all the Hamiltonians in this text}. Including a drive field of frequency \(\omega_g\) with strength \(g\) acting on the target qubit, $\hat{H}_{\text{drive}}=g\hat{\sigma}_x^{(\text{t})}\cos{(\omega_g t)}$, and the energy of the non-interacting qubits, $\hat{H}_0=\omega_0/2\sum_i\hat{\sigma}_z^{(i)}$, a simple Hamiltonian is obtained:
\begin{equation}\label{eq:Htot}
\hat{H}_\text{T} = - \frac{\nu}{2}\sum_i\hat{\sigma}_z^{(i)} + \sum_{i \neq j}J^{(i,j)}\hat{\sigma}_z^{(i)}\hat{\sigma}_z^{(j)} + \frac{g}{2}\hat{\sigma}_x^{(\text{t})},
\end{equation}
where we transformed into the interaction picture with respect to \(\omega_g\), using $\hat{U}=\exp\Big(-i \frac{\omega_g t}{2}\sum_i\hat{\sigma}_z^{(i)}\Big)$. We also define $\nu = \omega_g - \omega_0$ with $\omega_0$ the energy spacing between qubit states (or eigenstates of $\hat{\sigma}_z$). These eigenstates and their energies (Fig.~\ref{fig:Spectrum}) can be labeled as $\ket{x_\text{t},\vec{x}_c}$ and \(E_{|x_t,\vec{x}_c\rangle}\), with $x_\text{t}$ describing the state of the target qubit and $\vec{x}_c$ the string describing the state of the control qubits. In particular, the two target states $|0,1^{N_c}\rangle, |1,1^{N_c}\rangle$, where $N_c$ is the number of control qubits, are those coupled by the action of the Toffoli gate.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth,scale=1.0]{fig1.pdf}
\vspace{10pt}
\caption{\label{fig:Spectrum} Energies of non-interacting eigenstates (\(J=0\)) and interacting (dressed) states (\(J>0\)). The two target states \(\ket{111},\;\ket{011}\) are highlighted. Because their energy gap is unique, an appropriate drive field can couple the states resonantly.}
\end{figure}
The driving field frequency (\(\omega_g\)) is chosen such that it resonantly couples these two states, i.e. $\Delta_{1^{N_c}} = E_{|0,1^{N_c}\rangle}-E_{|1,1^{N_c}\rangle} = \omega_g$. According to Eq.~\ref{eq:Htot} the energy gap for any pair of states with equal control bits can be written as:
\begin{align}\label{eq:E_gap}
\Delta_{\vec{x}_c}=4\sum_{i=1}^{N_c} J^{(\text{t},i)}(-1)^{x_i} + \omega_0,
\end{align}
where $x_i$ denotes the state of the $i$-th control qubit. The resonance condition then becomes \(\nu = 4\sum_{i=1}^{N_c} J^{(\text{t},i)}(-1)^{x_i}\), which for the target states implies $\nu = -4\sum_{i=1}^{N_c} J^{(\text{t},i)}$.
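As an illustration of this resonance condition (our sketch, with assumed coupling values), the gaps of Eq.~\ref{eq:E_gap} can be tabulated for every control string:
\begin{verbatim}
import numpy as np
from itertools import product

J = np.array([2.0, 2.0])     # assumed J^(t,i) in kHz, N_c = 2

# Eq. (2), gap measured relative to omega_0:
for xc in product([0, 1], repeat=len(J)):
    gap = 4.0 * sum(Ji * (-1)**xi for Ji, xi in zip(J, xc))
    print("x_c =", xc, "  Delta - omega_0 =", gap, "kHz")

# Only x_c = (1, 1) yields Delta - omega_0 = -4*sum(J) = -16 kHz,
# so nu = -4*sum(J) drives |0,11> <-> |1,11> resonantly and leaves
# all other control strings off resonance.
\end{verbatim}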
Evolution under the Hamiltonian of Eq.~\ref{eq:Htot} for a (gate) time \(\tau_g=\pi/g\) leads to the desired $i$-Toffoli gate. To prevent accumulation of unwanted dynamical phases during the gate, timing restrictions can be considered, or an echo pulse can be applied. Both will be discussed later in this text.
\subsection{Implementation in trapped ions}
To achieve the required Ising interaction in trapped ions, a qubit state-dependent force is generated with two non-copropagating bichromatic lasers with beatnote frequency $\mu$, which excites phonons in the ion crystal. For a homogeneous laser field extending over the full ion crystal, the laser-ion interaction Hamiltonian is \(\hat{H}_\text{q-ph} = \sum_i F_i \exp(i \vec{k}\cdot\hat{\vec{r}}^{\,(i)}) + \text{h.c.}\). Here \(F_i=(\Omega/2) e^{-i\mu t}\hat{\sigma}_z^{(i)}\) is a state-dependent interaction \footnote{The dependence on the qubit state arises from a differential Stark shift set by proper choice of laser polarizations~\cite{Leibfried2003}} with $\Omega$ the interaction strength, \(\vec{k}\) the resulting wavevector of the interfering laser fields, and \(\hat{\vec{r}}^{\,(i)}\) the position operator of ion $i$. With \(\vec{k}\cdot\hat{\vec{r}}^{\,(i)}=\sum_m \eta_m^{(i)} ( \hat{a}^\dagger_m + \hat{a}_m ) \) the Hamiltonian can be written as:
\begin{equation}\label{eq:HI}
\hat{H}_\text{q-ph} = \frac{\Omega}{2}\sum_{i}\left(e^{i\sum_{m}\eta^{(i)}_m\left(\hat{a}_m^{\dagger}+\hat{a}_m \right)-i\mu t} + \text{h.c.}\right)\hat{\sigma}_z^{(i)},
\end{equation}
where the creation and annihilation operators for the $m$-th phonon mode are denoted by $\hat{a}_m^\dagger$ and $\hat{a}_m$. The Lamb-Dicke parameter $\eta^{(i)}_{m}$ is scaled with the motion amplitude of the $i$-th ion on the $m$-th phonon mode ($\vec{b}^{\,(i)}_m$), i.e. $\eta^{(i)}_m= \vec{b}^{\,(i)}_m\cdot \vec{k} \sqrt{\hbar/(2M\omega_m)}$ with $M$ the ion mass and $\omega_m$ the phonon mode frequency.
Including again the drive field ($\hat{H}_\text{drive}$) and the energy of the non-interacting system (\(\hat{H}_0\)), the total Hamiltonian in the interaction picture of $\omega_g$ becomes:
\begin{align}\label{eq:Htot_ph}
\hat{H}_\text{T} =& -\frac{\nu}{2}\sum_i\hat{\sigma}_z^{(i)} + \sum_m \omega_m \hat{a}_m^\dagger\hat{a}_m \nonumber \\
+& \frac{\Omega}{2}\sum_{i}\left(e^{i\sum_{m}\eta_m^{(i)}\left(\hat{a}_m^{\dagger}+\hat{a}_m \right)- i\mu t} + \text{h.c.} \right)\hat{\sigma}_z^{(i)} \nonumber\\
+& \frac{g}{2}\hat{\sigma}_x^{(\text{t})},
\end{align}
which includes a new (second) term for the motional energy of the system. Now the eigenstates of the non-interacting system have the form \({\ket{\Psi}}=\ket{\Phi}\otimes\ket{x_\text{t},\vec{x}_c}\), with \(\ket{\Phi}=\bigotimes_m \ket{n_m}\) the motional wavefunction of the system in the Fock space of the $m$ phonon modes of the crystal. For this system we define the target states for the $i$-Toffoli gate as the ones corresponding to an ion crystal cooled to its ground state, that is the two target states are \(\ket{\Psi_1}=\bigotimes_m \ket{n_m=0}\otimes\ket{1,\vec{x}_c}\) and \(\ket{\Psi_0}=\bigotimes_m \ket{n_m=0}\otimes\ket{0,\vec{x}_c}\) \footnote{In the following we will drop the motional component from states in its ground state and label them only by their electronic part, e.g. \(\ket{\Psi_0}\rightarrow\ket{0,\vec{x}_c}\)}.
Next, we simplify this Hamiltonian by going into the interaction picture of the phonon mode frequencies with the transformation $\hat{U}=\exp\Big(-i t\sum_m\omega_m \hat{a}_m^\dagger\hat{a}_m\Big) $:
\begin{align}\label{eq:drive}
\tilde{H}_\text{T} =&- \frac{\nu}{2}\sum_i\hat{\sigma}_z^{(i)} + \frac{\Omega}{2}\sum_{i} \Big{(}e^{i\sum_m \eta_m^{(i)}\left(\hat{a}_m^\dagger e^{i\omega_m t}+\hat{a}_m e^{-i\omega_m t}\right)-i\mu t} \nonumber\\
+& \text{h.c.} \Big{)} \hat{\sigma}_z^{(i)} + \frac{ g}{2}\hat{\sigma}_x^{(\text{t})},
\end{align}
where high frequency terms ($2 \omega_g$) were ignored. We now consider a system within the Lamb-Dicke limit and transform the Hamiltonian into a new interaction picture \footnote{We use the rotating wave approximation and ignore frequencies higher than \(|\delta_m|\)} with respect to $\delta_m = \mu -\omega_m$ using $\hat{U}=\exp (-i t \sum_m \delta_m\hat{a}^\dagger_m \hat{a}_m)$:
\begin{align}\label{eq:HT_mm}
\tilde{H}_\text{T,mm} =& -\frac{\nu}{2}\sum_i\hat{\sigma}_z^{(i)} +\frac{i\Omega}{2}\sum_m\sum_i\left(\hat{a}_m^\dagger -\hat{a}_m \right)\eta_m^{(i)}\hat{\sigma}_z^{(i)} \nonumber \\
-&\sum_m\delta_m \hat{a}^\dagger_m \hat{a}_m +\frac{g}{2}\hat{\sigma}_x^{(\text{t})}.
\end{align}
To recover a Hamiltonian having the desired Ising interaction as in Eq.~\ref{eq:Htot}, we apply a Lang-Firsov transformation~\cite{Porras2004,Deng2005, Lang1968} to introduce a dressed-state picture of qubits entangled with phonon modes of the crystal. The transformation, \(\hat{U}_\text{I}=\exp \Big[-i\sum_{i,m} \alpha_m^{(i)}(\hat{a}_m^\dagger+\hat{a}_m)\Big]\), with \(\alpha_m^{(i)} = (\Omega\eta_m^{(i)}/2\delta_m)\hat{\sigma}_z^{(i)}\), has the form of a displacement operator that displaces the state of the system in phase space by a state dependent magnitude of \(\alpha_{m,\Psi} = \sum_i\alpha_m^{(i)}\). The result of the transformation is:
\begin{align}\label{eq:HT_ss}
\tilde{H}_\text{T,I} = \hat{U}^\dagger_\text{I}\tilde{H}_\text{T,mm}\hat{U}_\text{I} = &-\frac{\nu}{2}\sum_i\hat{\sigma}_z^{(i)} + \sum_{i\neq j} J^{(i,j)}\, \hat{\sigma}_z^{(i)}\hat{\sigma}_z^{(j)} \nonumber \\ &-\sum_m \delta_m \hat{a}^\dagger_m \hat{a}_m +\frac{\tilde{g}}{2}\tilde{\hat{\sigma}}_x^{(\text{t})},
\end{align}
\noindent with $J^{(i,j)}=\Omega^2 \sum_m\eta^{(i)}_m\eta^{(j)}_m/4\delta_m$, a corrected drive strength, \(\tilde{g}\), and a transformed drive term, \(\tilde{\hat{\sigma}}_x^{(\text{t})}=\hat{U}^\dagger_\text{I}\hat{\sigma}_x^{(\text{t})}\hat{U}_\text{I}\). Because the drive and the Ising terms do not commute, this transformation introduces a term \(\propto \alpha_m^{(\text{t})}\hat\sigma_y^{(\text{t})}\) which couples the drive to ion motion and can cause a gate error \(\propto \alpha_m^{(\text{t})}\). For weak (virtual) phonon excitation, \(\alpha_{\Psi}\ll 1\), such that \(\tilde{\hat{\sigma}}_x^{(\text{t})}\approx\hat{\sigma}_x^{(\text{t})}\), this error is small. However, this regime corresponds to very slow gates and we are here interested instead in the regime in which the corrections to \(\hat{\sigma}_x^{(\text{t})}\) have to be taken into account, i.e. \(\alpha_{\Psi} \gtrapprox 1\).
The corrected drive strength \(\tilde{g}=g/\lambda^{\Psi^\prime,\Psi}_c\) accounts for the non-unitary overlap of the motional part of the (dressed) eigenstates of Eq.~\ref{eq:HT_ss}. These states are displaced Fock states, i.e. $|\Phi\rangle_\text{I}=\prod_m \hat{D}(\alpha_{m,\Psi})\ket{n_m}$, which can be produced adiabatically from the Fock states of the non-interacting system. The correction factor \(\lambda^{\Psi^\prime,\Psi}_c\) is equal to the overlap between the displaced states of any pair of states \(\ket{\Psi^\prime}, \ket{\Psi}\). The overlap is dependent on their initial phonon occupation number \(\ket{n_m}\) and can be written as~\cite{Cahill1969}:
\begin{align}\label{eq:lambda_c_mm}
\lambda^{\Psi^\prime,\Psi}_c &= \prod_m \langle n_m^\prime|\hat{D}^\dagger(\alpha_{m,\Psi^\prime}) \hat{D}(\alpha_{m,\Psi})|n_m \rangle \nonumber \\ &= \prod_m e^{-\beta_m^2/2}\beta_m^{|\Delta n|_m} \left(\frac{n_m!}{n_m^\prime!}\right)^{\text{sign}(\Delta n_m)/2} L^{|\Delta n_m|}_{n_m}\left(\beta_m^2\right),
\end{align}
where \(\Delta n_m = n_m^\prime -n_m \) and \(\beta_m = \alpha_{m,\Psi^\prime} - \alpha_{m,\Psi}\), and $L^{(\gamma)}_n(\beta)$ is the associated Laguerre polynomial. Note that the drive strength needed for implementing the correct gate therefore depends explicitly on the motional input state. For the target states in their ground states of motion, \(\ket{\Psi_0}, \ket{\Psi_1}\), the overlap simplifies to \(\lambda_c^{\Psi_0,\Psi_1} = \prod_m e^{-\beta_m^2/2}\) with \(\beta_m = \Omega \eta^\text{(t)}_m/\delta_m\) and where \(L_0\left(\beta_m^2\right)=1\).
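A minimal numerical sketch (ours, with assumed mode data rather than measured values) of the couplings $J^{(i,j)}$ and of the ground-state correction \(\lambda_c^{\Psi_0,\Psi_1}\) for a 3-ion crystal:
\begin{verbatim}
import numpy as np

# Toy 3-ion axial modes (CM, tilt, zigzag); all values are assumed.
b = np.array([[1.,  1.,  1.],
              [1.,  0., -1.],
              [1., -2.,  1.]])                 # rows = modes
b /= np.linalg.norm(b, axis=1, keepdims=True)
eta = 0.1 * b.T                                # eta[i, m], assumed scale

omega = 2*np.pi*np.array([1000., 1732., 2408.])  # mode freqs (kHz)
mu    = 2*np.pi*1020.0                           # beatnote (kHz)
delta = mu - omega                               # detunings
Omega = 2*np.pi*126.0                            # coupling strength (kHz)

# J^(i,j) = Omega^2 * sum_m eta_m^(i) eta_m^(j) / (4 delta_m):
J = Omega**2 * (eta / (4.0 * delta)) @ eta.T
print("J/2pi (kHz):\n", np.round(J / (2*np.pi), 3))

# Ground-state drive correction for the central target ion (index 1);
# note its tilt-mode amplitude vanishes by symmetry:
beta = Omega * eta[1] / delta
print("lambda_c =", np.exp(-0.5 * np.sum(beta**2)))
\end{verbatim}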
\subsection{Adiabatic Preparation of States}
\begin{figure*}[ht!]%
\includegraphics[width=0.9\textwidth,scale=1.0]{fig2.pdf}
\caption{\label{fig:3ionUnitaries} Time evolution of states under the action of \(\tilde{H}_\text{T,sm}\). (a) Phase space trajectories (zoomed in) of the motional wavefunction during evolution with \(\hat U_\text{T}\). Note that the adiabatic ramp ensures that dynamics take place along the momentum axis in this frame, as explained in more detail in Appendix \ref{app:ASE}. (b) Real and (c) imaginary part of the process unitary matrix for the motional ground state $\left(|n=0\rangle\right)$ subspace. (d) Evolution on the Bloch sphere of the two resonant states, and (e) the projections along x (\(\mathbf{-\cdot}\)), y (\(\mathbf{--}\)) and z (\(\mathbf{-}\)) of the trajectory of the initial state \(|111\rangle\). Time is indicated with the color intensity from light ($t=0$) to dark ($t=\tau_g$) in (d). The gate parameters are $\delta_\text{CM}/2 \pi = 20$ kHz, $J/2 \pi = 2$ kHz ($\Omega/2 \pi = 126.491$ kHz), and $g/2 \pi = 1$ kHz for a gate time of \(\tau_g=\pi/g=500\;\upmu\)s.}
\end{figure*}
To guarantee a complete inversion of the target qubit, the system has to be prepared in a pure dressed eigenstate $|\Psi\rangle_\text{I}$ of the interacting system such that the drive strength can be exactly corrected using Eq.~\ref{eq:lambda_c_mm}. In the case of a sudden quench (diabatic activation) of Eq.~\ref{eq:HT_ss}, a superposition of dressed eigenstates will result. In contrast, by adiabatically switching on (see Appendix \ref{app:ASE}) the qubit-phonon interaction, \(\hat{H}_\text{q-ph}\), and thus \(\hat{H}_\text{Ising}\), pure (dressed) eigenstates are obtained for which an appropriate drive strength can be chosen.
Adiabatic ramping also makes our gate robust against residual phonon-qubit entanglement, which in turn makes it less sensitive to timing errors. For quenched gates, this residual entanglement occurs if the total gate time \(t_\text{T} \neq 2k_1\pi/\delta_m \) (\(k_1 \in \mathbb{N} \)), as in this case the evolution of the states does not describe closed trajectories in phase space. In contrast, the adiabatic ramp assures that the system remains in an eigenstate during the laser-ion interaction. Therefore, the exact timing is not crucial as long as the ramp time is long enough to assure adiabaticity. In practice, however, setting \(t_\text{T} = 2k_1\pi/\delta_m\) still proves to be useful to reduce errors due to off-resonant drive field coupling between dressed states and to reduce errors caused by non-adiabaticity.
The gate sequence then consists of ramping up the interaction for a time \(t_a\), performing the $i$-Toffoli gate (Eq.~\ref{eq:HT_ss}) for a time \(\tau_g\), and finally ramping down the interaction to transform the system back to the non-interacting or computational basis. This complete $i$-Toffoli process has a total length \(t_\text{T}=2 t_a + \tau_g\) and is described by:
\begin{equation}\label{eq:Tof_ad}
\hat{U}_{i\text{Tof}} = \hat{U}^\text{d}_\text{eg}\hat{U}_\text{T}\hat{U}^\text{a}_\text{eg},
\end{equation}
where $\hat{U}^\text{a(d)}_\text{eg}$ is the unitary of the adiabatic activation (deactivation) of $\hat{H}_\text{Ising}$ and $\hat{U}_\text{T} = \exp(-i\tau_g \tilde{H}_\text{T,I})$.
\section{Simulations of a \emph{N}-qubit Toffoli gate in a linear ion crystal}\label{sec:linear_crystal}
\subsection{Single mode coupling}
The main features of our model can first be studied by considering an ideal system. This consists of a ground-state cooled linear ion crystal and an interaction laser coupling only to the axial modes of the crystal, with a beatnote $\mu$ tuned close to the center-of-mass phonon mode frequency $\omega_\text{CM}$ of the crystal, i.e. \(\delta_\text{CM}\ll\delta_{m \neq \text{CM}}\). We assume that the coupling with the remaining phonon modes can be ignored, i.e. \(J^{(i,j)}_\text{CM} \gg \sum_{m\neq \text{CM}}J_m^{(i,j)}\). This results in a homogeneous Ising coupling strength $J^{(i,j)} = \Omega^2 \eta_\text{CM}^2 / 4\delta_\text{CM} \equiv J $ and the simplified Hamiltonian:
\begin{align}\label{eq:CoM_Tof}
\tilde{H}_\text{T,sm} = 2 N_c J\sum_i\hat{\sigma}_z^{(i)} + J \sum_{i\neq j} \, \hat{\sigma}_z^{(i)}\hat{\sigma}_z^{(j)} + \frac{\tilde{g}}{2}\tilde{\hat{\sigma}}_x^{(\text{t})} \nonumber \\ -\delta_\text{CM}\hat{a}_\text{CM}^\dagger\hat{a}_\text{CM}.
\end{align}
The resulting $i$-Toffoli process unitary for a 3-ion crystal is shown in Figs.~\ref{fig:3ionUnitaries}(b) and \ref{fig:3ionUnitaries}(c). We have chosen a ramp time (\(t_a\)) that ensures the adiabaticity of the process and the disappearance of dynamical phases. These phases have the form \(\phi_{t_\text{T}} = \exp\left(-i E_{\ket{x_t,\vec{x}_c}} \tilde{t}_\text{T}\right)\), where the total effective process time is \(\tilde{t}_T = 2\tilde{t}_a + \tau_g\) and $\tilde{t}_a$ is the effective ramp time (see Appendix~\ref{app:ASE}). Because the Ising couplings are homogeneous in this particular case, the phases vanish if $\tilde{t}_\text{T} J = 2k_2\pi$ (\(k_2 \in \mathbb{N}\)). For a modulation of the form \(\Omega(t<t_a)=\Omega \sin^2(\pi t/(2t_a))\) and these parameters, both criteria are fulfilled by setting \(t_{a} = \tau_g \).
To illustrate the dynamics under the action of Eq.~\ref{eq:CoM_Tof}, we have plotted the phase space (Fig.~\ref{fig:3ionUnitaries}(a)) \footnote{The phase space shown in this work is in a rotating frame with frequency \(\mu\) and the values of \(\langle x\rangle\) are in units of the ground state wavepacket.} and Bloch sphere trajectories (Fig.~\ref{fig:3ionUnitaries}(d)) of the (target) dressed states $|\Psi\rangle_\text{I} = \hat{U}^\text{a}_\text{eg}\ket{n=0}\otimes |x_t,\vec{x}_c\rangle$. As expected for the two target states, the motional and electronic components are transformed from one to the other, i.e. \(\hat{D}(\alpha_{\Psi_0})\ket{n=0}\leftrightarrow\hat{D}(\alpha_{\Psi_1})\ket{n=0}\) and \(\ket{0,1^2}\leftrightarrow\ket{1,1^2}\). For the off-resonant states, closed trajectories are obtained, indicating that motion is disentangled from the electronic component of the states. Finally, in Fig.~\ref{fig:3ionUnitaries}(e) we observe that the coupling of the drive with the ion motion leads to a small drive error, reflected as small oscillations of \(\langle \hat\sigma_x\rangle\).
\subsection{Multi-mode coupling}
In experiments, due to the finite spacing between phonon frequencies, the laser field will couple to multiple phonon modes, as described in Eq.~\ref{eq:HT_mm}. Although the dynamics of the gate will still be dominated by the coupling to the center-of-mass mode, the contributions of nearby modes, \(\sum_{m\neq \text{CM}}J_m^{(i,j)}\), will lead to two additional sources of error. The first is additional terms \(\propto \alpha_m^\text{(t)}\hat\sigma_y^\text{(t)}\) which increase the drive error, and the second is state-dependent dynamical phases. The latter occur because the Ising interactions are inhomogeneous, \(J^{(i,j)}\neq J^{(i,k)}\), thus the state energies are no longer proportional to a single value of $J$. As a consequence, no single gate time can be chosen such that they vanish at the end of the gate (Fig.~\ref{fig:MultiBeatnote}(a)).
\begin{figure}[ht!]%
\smallskip
\includegraphics[width=0.48\textwidth,scale=1.0]{fig3.pdf}
\caption{\label{fig:MultiBeatnote} Multimode unitaries and spectrum of multiple beatnotes for the ``echo'' step for phase cancellation. (a) $i$-Toffoli unitary for a 3 ion crystal considering all-mode couplings without and (inset) with the ``echo'' step. Frequency and amplitude of beatnotes for (b) 3 and (c) 7 ion gates with detunings \(\delta_\text{CM}/2\pi=-20\) kHz and \(\delta_\text{CM}/2\pi=-50\) kHz respectively. The phonon mode frequencies are indicated by dashed red lines. The parameters of (a) are \(\omega_\text{CM}/2\pi = 1\) MHz, \(\delta_\text{CM}/2\pi=-20\) kHz, and \(g/2\pi=1\) kHz, and for (b,c) the interaction time is \(t_\text{mb}=5\;\upmu\)s.}
\end{figure}
The first error can be minimized by using a linear crystal with an odd number of ions and by addressing the central ion with the drive field. In this way, the largest contribution, coming from the next nearest phonon mode, disappears. To cancel the second error, dynamical phases are removed with an additional ``echo'' step. During this step, the sign of all coupling strengths is inverted, \(J^{(i,j)}\rightarrow-J^{(i,j)}\), for a duration \(t_\text{T}\). To realize this echo, we follow a recent proposal~\cite{Shapira2020} in which a combination of multiple beatnotes coupling to all the phonon modes is used to generate couplings with arbitrary magnitude and sign.
In short, the method uses beatnotes with frequencies \(\mu_k\) that are harmonics of the inverse of the interaction time (\(t_\text{mb}\)) between the crystal and a multi-beatnote laser field, i.e. \(\mu_k= 2\pi k/t_\text{mb}\) for \(k \in \mathbb{N}\). Their amplitudes \(\Omega_{\mu_k}\) (Figs.~\ref{fig:MultiBeatnote}(b) and \ref{fig:MultiBeatnote}(c)) are calculated such that after a time \(t_\text{mb}\) the entanglement phase of each mode matches a target value \(\varphi_m\), and both dynamical phases and the entanglement with the phonon modes disappear. The entanglement phases are obtained by expressing the matrix of couplings for the echo step, \(\mathbf{\tilde{J}}_{i,j}=-J^{(i,j)}\), in terms of the phonon modes (\(\vec{b}_m\)) and the target entanglement phase:
\begin{equation}
\mathbf{\tilde{J}} \approxeq \sum_{m=1}^N \varphi_m \vec{b}_m \otimes \vec{b}_m.
\end{equation}
To reduce the number of beatnotes required, we choose an interaction time \(t_\text{mb} \sim 2k_1\pi/\omega_\text{CM}\) for a small integer \(k_1\) that also satisfies \(t_\text{T}=k_2 t_\text{mb}\) (\(k_2 \in \mathbb{N}\)). The ``echo'' is obtained by sequentially applying \(k_2\) multi-beatnote field pulses with the same modulation of the amplitudes \(\Omega_{\mu_k}\) as for the laser-ion coupling strength \(\Omega\) (see Appendix \ref{app:SE}).
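Since the mode vectors \(\vec{b}_m\) are orthonormal, the target phases follow from the projection \(\varphi_m = \vec{b}_m^{\,T}\mathbf{\tilde{J}}\vec{b}_m\), which is also the least-squares solution; a minimal sketch (ours, with assumed homogeneous couplings):
\begin{verbatim}
import numpy as np

B = np.array([[1.,  1.,  1.],
              [1.,  0., -1.],
              [1., -2.,  1.]])
B /= np.linalg.norm(B, axis=1, keepdims=True)   # rows: phonon modes

J = 2.0 * (np.ones((3, 3)) - np.eye(3))         # assumed couplings
Jtilde = -J                                     # echo target

phi = np.array([b @ Jtilde @ b for b in B])     # entanglement phases
recon = sum(p * np.outer(b, b) for p, b in zip(phi, B))
print("phi_m =", phi)
print("residual =", np.linalg.norm(Jtilde - recon))   # -> 0 here
\end{verbatim}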
\section{Gate fidelities and error sources}\label{sec:fidelities}
\begin{figure}%
\includegraphics[width=0.48\textwidth,scale=1.0]{fig4.pdf}
\caption{\label{fig:FidvsSize_delta} Process error as a function of Ising strength and gate time assuming single-mode coupling. The results are for an $i$-Toffoli gate of 3, 5, 7, and 9 ions for detunings of (a) 50 and (b) 200 kHz. We also show the result (\(7^*\)) for a 7 ion crystal including an ``echo'' step. In this case, the total process time corresponds to \(2t_\text{T}\).}
\end{figure}
We have shown that an $i$-Toffoli gate ($\hat{U}_{i\text{Tof}}$) can be implemented in a linear crystal of ions in realistic conditions where the effective Ising interaction is generated by coupling to multiple phonon modes of the crystal. In this section, we will compare this gate against an ideal $i$-Toffoli gate ($\hat{U}_\text{Ideal}$) for different numbers of qubits and find conditions for fast gates with high fidelities. Additionally, we are interested in identifying and estimating the effect of other error sources.
To characterize the gate, we use as figure-of-merit the average fidelity \(\bar{F}\)~\cite{Nielsen2002}:
\begin{equation}
\bar{F}(\hat{U}_{i\text{Tof}},\hat{U}_{\text{Ideal}})=\frac{\sum_j \text{tr}[\hat{U}_{\text{Ideal}}U_j^\dagger\hat{U}_{\text{Ideal}}^\dagger\hat{U}_{i\text{Tof}}(U_j)]+d^2}{d^2(d+1)}
\end{equation}
\noindent where \(\hat{U}_{i\text{Tof}}(U_j)\equiv\text{tr}_\text{FS}\big(\hat{U}_{i\text{Tof}}[\hat{P}_0 \otimes U_j]\hat{U}^\dagger_{i\text{Tof}}\big)\), \(U_j\) are generalized Pauli matrices in the qubit Hilbert space with dimension \(d=2^N\), \(\hat{P}_0=\bigotimes_m |0\rangle\langle 0|_m\) is a projector onto the $n_m=0$ Fock subspace and \(\text{tr}_\text{FS}\) is the partial trace over the phonon Fock space.
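For reference, the qubit-only limit of this figure of merit (a perfect unitary channel with the phonons already traced out) reduces to Nielsen's formula and fits in a few lines; the sketch below (ours) is this generic version, not the phonon-resolved computation used for our figures.
\begin{verbatim}
import numpy as np
from itertools import product

I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]]); Z = np.diag([1., -1.])

def avg_fidelity(U_ideal, U_actual):
    # Nielsen's average-fidelity formula for two unitary channels
    # on n qubits, summing over all 4^n tensor products of Paulis.
    d = U_ideal.shape[0]; n = int(np.log2(d))
    total = 0.0
    for ps in product([I2, X, Y, Z], repeat=n):
        Uj = ps[0]
        for P in ps[1:]:
            Uj = np.kron(Uj, P)
        EUj = U_actual @ Uj @ U_actual.conj().T
        total += np.trace(U_ideal @ Uj.conj().T
                          @ U_ideal.conj().T @ EUj).real
    return (total + d**2) / (d**2 * (d + 1))

# Sanity check: the 3-qubit i-Toffoli vs. the plain Toffoli.
Tof  = np.eye(8, dtype=complex); Tof[6:, 6:]  = X
iTof = np.eye(8, dtype=complex); iTof[6:, 6:] = 1j * X
print(avg_fidelity(iTof, iTof))   # -> 1.0
print(avg_fidelity(Tof,  iTof))   # -> 2/3: the +i phase matters
\end{verbatim}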
We start again by assuming single-mode coupling and calculate faster gates by increasing both $\Omega$ and $g$ and setting \(t_a=\tau_g\) to avoid phase accumulation. By increasing the interaction strengths and reducing gate and ramp times, three types of gate errors will have to be accounted for: couplings between off-resonant states, drive errors, and non-adiabatic couplings during ramping of the Ising interaction. To mitigate the first one, we require \(J > g\); therefore we keep the ratio $J/g=2$ for all the gates we will study. The last two errors can be minimized either by extending the duration of the adiabatic ramp or by increasing the detuning of the laser beatnote \(\delta_m\), both reducing the amplitudes \(\alpha_{m,\Psi}\) and thus the final error. Because our goal is a faster gate, we have chosen the latter.
Fidelities higher than 99\% with gate times below 500 \(\upmu\)s are obtained when \(\delta_\text{CM}/2\pi=200\) kHz (Fig.~\ref{fig:FidvsSize_delta}(b)) for gates with 3-9 qubits. As a consequence of the reduction of the ramp time with increasing $J$, the activation of the interaction becomes less adiabatic and the crystal motion is excited. This leads to coupling of motional excited states in the form of \(\ket{n>0}\ket{1,1^{N_c}}\leftrightarrow\ket{n>0}\ket{0,1^{N_c}}\) during the drive step. The larger drops in the fidelity observed for particular interaction strengths, e.g. \(J/4\pi = 3.1\) kHz for \(\delta_\text{CM}/2\pi=50\) kHz, also originate from undesired couplings between states of the type $|n=0\rangle\otimes|1,\vec{x}_c\rangle, \, |n=k\rangle\otimes|0,\vec{x}_c\rangle$, which become degenerate when \(\Delta_{\vec{x}_c}\sim k\delta_\text{CM}\).
These errors more strongly affect gates with a larger number of qubits, as the number of states and the occurrence of degeneracies increase. Furthermore, the drive and non-adiabaticity errors also increase, as the displacement amplitude \(\alpha_{m,\Psi} \propto N\). However, by choosing appropriate gate parameters, these undesired couplings can be avoided.
\subsection{Multi-mode coupling with residual crystal motion}
From the single-mode coupling analysis we have identified conditions for high fidelity gates for ion crystals in their ground state. We can use this information to calculate high-fidelity gates for systems where all axial phonon modes participate. We will also take into account residual ion motion such that the average number of phonons in the crystal \(\bar{n}_m\) is not zero. In particular, we consider the cases where \(\bar{n}_\text{CM}>0\) and \(\bar{n}_\text{m $\neq $ CM}=0\).
To illustrate, we choose gates with the largest detuning (\(\delta_\text{CM}/2\pi=-200\) kHz) to minimize drive errors and select two drive strength values (\(g/2\pi = 1.0;\; 4.762\) kHz) for which no large drop in fidelity was observed in the single-mode model. As a result, we obtain multi-mode coupled gates with fidelities better than 99\% for both fast (Fig.~\ref{fig:FidvsN_ave}(a)) and slow gates (Fig.~\ref{fig:FidvsN_ave}(b)). Even in the presence of residual motion up to \(\bar{n}=1\), the fidelities always exceed 90\%.
Importantly, the addition of the ``echo'' step leads to fidelities that, in most cases, are better than those for the single-mode model. Clearly, this step also compensates phases due to Stark shifts originating from couplings of states \(|1,\vec{x}_c\rangle \leftrightarrow |0,\vec{x}_c\rangle\), which remained uncorrected in Fig.~\ref{fig:FidvsSize_delta}.
Moreover, in the absence of these phases, higher fidelities are obtained for larger gates (compare with Fig.~\ref{fig:FidvsSize_delta}). The increasing gaps between states, \(\Delta_{\vec{x}_c}\), for larger systems reduce off-resonant couplings of any type. In particular, couplings with excited motional states at \(\Delta_{\vec{x}_c}\sim k\delta_\text{CM}\) are reduced, as the ratio \(\Delta_{1^{N_c}}/\delta_m\) increases. Furthermore, not only do these gaps increase, there are also vastly more states with large gaps than with small gaps as $N$ increases. Thus state-specific errors weigh less in the calculation of the average fidelity for larger qubit gates.
\begin{figure}[h!]%
\centering
\includegraphics[width=0.48\textwidth,scale=1.0]{fig5.pdf}
\caption{\label{fig:FidvsN_ave} Effect of the average phonon number on the process fidelity for gates with multi-mode coupling. The Ising and drive strengths (\(J/4\pi=g/2\pi\)) are (a) 4.762 kHz and (b) 1 kHz. The detuning, \(\delta_\text{CM}/2\pi\), and the center-of-mass frequency, \(\omega_\text{CM}/2\pi\), are -200 kHz and 1 MHz respectively.}
\end{figure}
\section{Discussion and conclusions}
We have presented a high-fidelity method to implement a single-step $i$-Toffoli gate in trapped ions. Our method allows operating in a regime of strong Ising interactions between qubits, necessary for fast gate operations. Although the adiabatic ramping of these interactions extends the total length of the process, the long coherence times offered by trapped ions~\cite{Wang2017} should allow the experimental implementation of this gate with high fidelities. Furthermore, recent methods of shortcut to adiabaticity~\cite{An2016, Baksic2016, Yan2019} may be applied to speed up the adiabatic preparation of states.
We have shown that, when the Ising interactions are mediated by multiple phonon modes, the residual dynamical phases can be effectively removed by using an ``echo'' step exploiting a recent non-adiabatic method for multiple qubit entanglement~\cite{Shapira2020}. A natural next step would be to combine our model and this method to generate homogeneous Ising interactions, which should allow us to avoid the ``echo'' step.
A feature of our method is that the appropriate drive strength $\tilde{g}$ depends on the initial phonon state. Pure phonon input states can be assured by ground state cooling the ion crystal. The necessity of ground state cooling sets the implementation apart from a decomposition in e.g. M\o{}lmer-S\o{}rensen gates~\cite{Maslov2018,Groenland2020} that are more robust with respect to the phonon states~\cite{Sorensen1999,Kirchmair2009}. On the other hand, reaching the ground state via sideband cooling is an established technique in trapped ions and is used extensively.
Taking these considerations into account, our single step implementation of the $i$-Toffoli gate offers a competitive advantage compared to the gate-based decomposition, in particular for large $N$ when accumulated gate errors start to dominate.
\acknowledgements
We thank Georg Jacob for providing code for the multiple beatnote calculations, Arghavan Safavi-Naini, Philippe Corboz and Thomas Feldker for fruitful discussions. This work was supported by the Netherlands Organization for Scientific Research (Grant No. 680.91.120, R.G. and M.M.) and by the QM\&QI grant of the University of Amsterdam (K.G.).
\section{Introduction}
A number of applications in learning and autonomy take the form of distributed optimization problems in which a network of agents minimizes a global objective function $f$. As these problems grow in size, asynchrony may result from delays in computations and communications between agents. For many problems (i.e., those
such that~$\nabla^2 f$ is not block-diagonally dominant~\cite[Theorem 4.1(c)]{frommer2000asynchronous}), arbitrarily long delays
may cause the system to fail to converge~\cite[Chapter 7, Example 1.3]{bertsekas1989parallel}. Synchrony can be enforced by making faster agents idle while waiting for communications from slower agents, though the network will suffer from ``straggler'' slowdown, where the progress of the network is restricted by its slowest agent. This has led to interest in partially asynchronous algorithms, which converge to a solution when all delays in communications and computations are bounded by a known upper limit $B$~\cite{tseng1991rate,zhou2018distributed}.
Partially asynchronous algorithms avoid requiring agents to idle by instead ``damping'' the dynamics of the system based on knowledge of $B$. For gradient-based algorithms, this is achieved by reducing agents' stepsize~$\gamma$ as~$B$ grows.
While this method (along with mild assumptions on $f$) ensures convergence when all delays are bounded by $B$, straggler slowdown is still present.
Specifically, in existing block coordinate descent algorithms,
if just one agent's delays have length up to~$B$, then
every agent's stepsize is~$O(1/B)$, even if the delays experienced by the other agents
are much shorter than~$B$~\cite{cannelli21eb, zhou2018distributed, tseng1991rate}.
When $B$ is large, this method leads to
excessively small stepsizes, which significantly slow convergence. This stepsize rule also requires agents
to have knowledge of $B$, which may be difficult to gain. For example, agents in a
large network may not know the lengths of all delays experienced by all agents.
In this paper, we will instead show that, under the same standard assumptions on $f$
in seminal work in~\cite{tseng1991rate}, a gradient-based partially asynchronous algorithm converges to a solution
while allowing agents to choose uncoordinated stepsizes using only local information.
That is, agent $i$ may choose its own stepsize $\gamma_i$ as a function of only a few entries
of $\nabla f$ and only the communication delays between itself and its neighbors.
We analyze block coordinate descent because it is widely used and because it is a building block
for many other algorithms. In this and related algorithms,
the stepsize is the only free parameter and it has a substantial
impact on convergence rate, which makes the use of larger values essential
when possible.
We prove that agents still converge to a stationary point under this new stepsize rule, and comparisons in simulation
validate the significant speedup that we attain. To the best of the authors' knowledge, this is the first proof of convergence of a partially asynchronous algorithm with uncoordinated stepsizes chosen using only local information.
Related work in~\cite{nedic2017geometrically,xu2015augmented,xu2017convergence,latafat2018multi,lu2018geometrical,li2020distributed} allows for uncoordinated stepsizes that differ across agents, though
they must still obey a bound computed with global information. In contrast, in this paper each agent's stepsize bound can be computed using only local information, i.e., global Lipschitz constants and global delay bounds are not required, hence the ``locally chosen'' label. Existing literature with locally chosen stepsizes either requires a synchronous setting~\cite{li2019decentralized}, diminishing stepsizes~\cite{tian2018asy,tian2020achieving}, or for $\nabla^2 f$ to be block diagonally-dominant~\cite{ubl2021totally}, whereas
we do not require any of these.
The rest of the paper is organized as follows. Section~\ref{sec:preliminaries} gives the problems and algorithm we study. Then
Section~\ref{sec:convergenceproof} proves convergence under the local stepsize rule we develop and
gives a detailed discussion of our developments in relation to recent work.
Section~\ref{sec:simulations} empirically verifies the speedup we attain, and finally Section~\ref{sec:conclusions} concludes.
\section{Problem Statement and Preliminaries} \label{sec:preliminaries}
This section establishes the problems we solve, the assumptions placed
on them, and the algorithm we use. Below, we use the notation~$[d] = \{1, \ldots, d\}$
for~$d \in \mathbb{N}$.
\subsection{Problem Statement and Assumptions}
We solve problems of the following form:
\begin{problem}
Given~$N$ agents, a function~$f : \mathbb{R}^n \to \mathbb{R}$, and a set~$X \subseteq \mathbb{R}^n$,
asynchronously solve
\begin{equation}
\underset{x \in X}{\textnormal{minimize}} \,\, f(x). \tag*{$\triangle$}
\end{equation}
\end{problem}
We first make the following assumption about $X$:
\begin{assumption} \label{a.setseperable}
There exist sets~$X_1, \ldots, X_N$ such that
$X = X_1 \times X_2 \times \dots \times X_N$, where $X_i \subseteq \mathbb{R}^{n_i}$ is nonempty, closed, and convex for all $i \in [N]$,
and~$n = \sum_{i \in [N]} n_i$.
\hfill $\triangle$
\end{assumption}
We emphasize that~$X$ need not be compact, e.g., it can be all of~$\mathbb{R}^n$.
This decomposition will allow each agent to execute a projected gradient update
law asynchronously and still ensure set constraint satisfaction.
For any closed, convex set~$\Omega$, we use~$\Pi_{\Omega}[y]$ to denote
the Euclidean projection of~$y$ onto~$\Omega$.
In our analysis, we will divide
$n$-dimensional vectors into $N$ blocks.
Given a vector $v\in\mathbb{R}^{n}$,
where $n=\sum_{i=1}^{N}n_{i}$, the $i^{th}$ block of $v$, denoted
$v^{[i]}$, is the $n_{i}$-dimensional vector formed by entries of $v$
with indices $\sum_{k=1}^{i-1}n_{k}+1$ through $\sum_{k=1}^{i}n_{k}$.
In other words, $v^{[1]}$
is the first $n_{1}$ entries of $v$, $v^{[2]}$ is the next $n_{2}$
entries, etc.
Thus, for~$x \in X$, we have~$x^{[k]} \in X_k$ for all~$k \in [N]$.
For~$\nabla f(x)$, we write~$\nabla^{[1]} f(x)$ for its first~$n_1$ entries,~$\nabla^{[2]} f(x)$ for
its next~$n_2$ entries, etc.
We assume the following about~$f$.
\begin{assumption} \label{a.boundbelow}
$f$ is bounded from below on~$X$. \hfill $\triangle$
\end{assumption}
\begin{assumption} \label{a.lsmooth}
$f$ is $L^i_j$-smooth on $X$. That is, for all $i,j \in [N]$ and for any $x,y \in X$ with $x^{[k]} = y^{[k]}$ for all $k \neq j$, there exists a constant $L^i_j \geq 0$ such that $\|\nabla^{[i]}f(x)-\nabla^{[i]}f(y)\| \leq L^i_j \|x^{[j]}-y^{[j]}\|$. \hfill $\triangle$
\end{assumption}
In words, each block of~$\nabla f$ must be Lipschitz in each block of its argument.
We note that any $L$-smooth function~$f$ in the traditional sense (i.e., satisfying $\|\nabla f(x)-\nabla f(y)\| \leq L \|x-y\|$ for all $x,y \in X$) trivially satisfies Assumption~\ref{a.lsmooth} by setting~$L^i_j = L$ for all $i,j \in [N]$.
Thus, Assumption~\ref{a.lsmooth} is no stronger than the standard $L$-smooth assumption,
but it will allow us to leverage more fine-grained information from the problem. Note also from this construction that~$L^i_j = L^j_i$.
\subsection{Algorithm Setup}
For all~$i \in [N]$,
agent~$i$ stores a local copy of~$x$, denoted~$x_i$.
Due to asynchrony, we can have~$x_i \neq x_j$ for~$i \neq j$.
Agent~$i$ is tasked with updating the~$i^{th}$ block of the decision variable, and thus
it performs computations on its own block $x^{[i]}_i$. For~$j\neq i$, agent $i$'s copy of agent $j$'s
block, denoted~$x_i^{[j]}$, only changes when it receives a communication from agent~$j$.
Due to asynchrony in communications, at time~$t$ we expect
$x^{[j]}_i(t) \neq x^{[j]}_j(t)$. We define the term $\tau^j_i(t)$
to be the largest time index such that $\tau^j_i(t) \leq t$ and $x^{[j]}_i(t) = x^{[j]}_j(\tau^j_i(t))$.
In words,~$\tau^j_i(t)$ is the
most recent time at which $x^{[j]}_j$ equaled the value of~$x^{[j]}_i(t)$.
Note that~$\tau^i_i(t) = t$ for all~$i \in [N]$.
Using this notation, for all~$i \in [N]$, we may write agent~$i$'s local copy of~$x$ as
${x_i(t) = (x^{[1]}_1(\tau^1_i(t)),\dots,x^{[N]}_N(\tau^N_i(t)))}$.
Defining~$T^i$ as the set of all time indices for which agent~$i$ computes an update
to~$x^{[i]}_i$, we formalize the partially asynchronous block coordinate
descent algorithm as follows.
\textit{Algorithm 1:}
Let~$f$,~$X$,~$x_1(0), \ldots, x_N(0)$, and~$\gamma_1, \ldots, \gamma_N > 0$ be given.
For all $i\in[N]$ and $j\in[N]\backslash\{i\}$,
execute
\begin{align*}
x_{i}^{[i]}(t+1) & =\begin{cases}
\Pi_{X_i}\left[x_{i}^{[i]}(t)-\gamma_{i}\nabla^{[i]}f(x_i(t))\right] & t\in T^{i}\\
x_{i}^{[i]}(t) & t\notin T^{i}
\end{cases}\\
x_{j}^{[i]}(t\!+\!1) & =\begin{cases}
x_{j}^{[j]}\left(\tau_{i}^{j}(t\!+\!1)\right) & \hspace{-0.3em}\text{$i$ receives~$x^{[j]}_j$ at time~$t\!+\!1$}\\
x_{j}^{[i]}(t) & \hspace{-0.3em}\text{otherwise.} \hfill\diamond
\end{cases}
\end{align*}
We emphasize that agents do not need to know~$T^i$ or~$\tau^j_i$ for any~$i$ or~$j$; these are
only used in our analysis.
Additionally, communications in Algorithm~1 are generally not all-to-all; agents $i$ and $j$ only need to communicate if $\nabla^{[i]} f$ has an explicit dependence on agent $j$'s block (i.e., if $L^i_j \neq 0$).
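To make the update law concrete, the following minimal simulation sketch (our illustration, not part of the formal development) runs Algorithm~1 on an unconstrained quadratic $f(x) = \frac{1}{2}x^\top A x$ with scalar blocks, using the local stepsize rule established in Theorem~\ref{t.squaresum} below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 5
A = rng.standard_normal((N, N)); A = A.T @ A + np.eye(N)
L = np.abs(A)                    # L^i_j for the quadratic case
D = rng.integers(0, 4, (N, N)); np.fill_diagonal(D, 0)  # D[i,j] = D^j_i

# Locally chosen stepsizes (rule of Theorem 1, with margin 0.9):
gamma = 0.9 * 2.0 / np.array(
    [(L[i] * (1 + D[i] + D[:, i])).sum() for i in range(N)])

hist = [rng.standard_normal(N)]  # x(0); X = R^n so no projection
for t in range(2000):
    x = hist[-1].copy()
    for i in range(N):           # every agent updates each step (G_i = 0)
        # agent i sees block j with a random delay <= D[i, j]:
        xi = np.array([hist[max(0, t - rng.integers(0, D[i, j] + 1))][j]
                       for j in range(N)])
        x[i] -= gamma[i] * (A[i] @ xi)   # nabla^[i] f evaluated at x_i(t)
    hist.append(x)

print("||x(T)|| =", np.linalg.norm(hist[-1]))   # -> near 0
\end{verbatim}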
Below, we will analyze the ``true'' state of the network, denoted
$x(t) = (x^{[1]}_1(t),\dots,x^{[N]}_N(t))$, which contains each
agent's current value of its own block.
For clarity we will write $x^{[i]}(t)$ when discussing the~$i^{th}$ block of the global state $x(t)$, and
we will write $x^{[i]}_i(t)$ when discussing the~$i^{th}$ block of
agent $i$'s local copy $x_i(t)$, though we note that $x^{[i]}(t) = x^{[i]}_i(t)$ by definition.
Partial asynchrony is enforced by the next two assumptions:
\begin{assumption} \label{a.bounddelay}
For every $i,j \in [N]$, there exists an integer $D^j_i \geq 0$ such that $0 \leq t^i - \tau^j_i(t^i) \leq D^j_i$ for all $t^i \in T^i$.
\hfill $\triangle$
\end{assumption}
Assumption~\ref{a.bounddelay} states that when agent $i$ computes an update, its value of agent $j$'s block
equals some value that~$x^{[j]}_j$ had at some point in the last $D^j_i+1$ timesteps.
Note that $D^i_i = 0$, and we allow $D^j_i \neq D^i_j$, i.e., delays
need not be symmetric for any pair of agents. For completeness, if two agents $i$ and $j$ do not
communicate (i.e., $L^i_j = 0$), then $D^i_j = D^j_i = 0$.
\begin{assumption} \label{a.boundupdate}
For each $i \in [N]$, there exists an integer $G_i \geq 0$ such that for every $t$, $T^i \cap \{t,t+1,\dots,t+G_i\} \neq \emptyset$.~$\triangle$
\end{assumption}
Assumption~\ref{a.boundupdate} simply states that agent $i$ updates at least once every $G_i+1$
timesteps. Note that in the existing partially
asynchronous literature, the single constant~$B = \max_{i,j\in [N]} \{D^j_i,D^i_j,G_i\}$ is used to calibrate stepsizes. We show in the
next section that a finer-grained analysis leads to local stepsize rules that still ensure convergence.
\section{Convergence Results} \label{sec:convergenceproof}
The goal of Algorithm~1 is to find an element of the solution set
$X^* := \{x \in X : x = \Pi_{X}\left[x-\nabla f(x)\right]\}$.
That is, we wish to show $\lim_{t \rightarrow \infty}\|x(t)-x^*\| = 0 $, where $x^*$ is some element of $X^*$.
Our proof strategy is to first establish that the sequence $\{x(t)\}^{\infty}_{t = 0}$ has square summable successive differences, then show
that its limit point is indeed an element of~$X^*$.
\subsection{Analysis of Algorithm~1}
The forthcoming theorem uses the following lemma.
\begin{lemma} \label{l.lij}
Let Assumption~\ref{a.lsmooth} hold. For all $i \in [N]$ and $x,y \in X$,
$\|\nabla^{[i]}f(x)-\nabla^{[i]}f(y)\| \leq \sum_{j=1}^N L^i_j \|x^{[j]}-y^{[j]}\|$.
\end{lemma}
\textit{Proof:}
Fix~$x,y \in X$.
For all $k \in \{0,\dots,N\}$, define a vector~$z_k \in \mathbb{R}^n$
as $z^{[j]}_k = x^{[j]}$ if $j > k$ and $z^{[j]}_k = y^{[j]}$ if $j \leq k$.
By this definition, $z_0 = x$ and $z_N = y$. Then
$\|\nabla^{[i]}f(x) - \nabla^{[i]}f(y)\| = \|\sum_{k=1}^{N}\nabla^{[i]}f(z_{k-1}) - \nabla^{[i]}f(z_{k})\|
\leq \sum_{k=1}^{N}\|\nabla^{[i]}f(z_{k-1}) - \nabla^{[i]}f(z_{k})\|.$
We note that~$z_{k-1}$ and~$z_k$
differ in only one block,
i.e.,~$z_{k-1}^{[j]} = z_k^{[j]}$
for all~$j \neq k$,
and~$z_{k-1}^{[k]} \neq z_k^{[k]}$.
Then each element of the sum
satisfies the conditions of Assumption~\ref{a.lsmooth},
and applying it to each element of the sum completes the proof. $\hfill\blacksquare$
For conciseness, we define $s(t) = x(t+1) - x(t)$. The following theorem shows that the sequence $\{s(t)\}^{\infty}_{t=0}$ decays to zero.
\begin{theorem} \label{t.squaresum}
Let Assumptions~1-5 hold. If for all~${i \in [N]}$ we have $\gamma_i \in \left(0, \frac{2}{\sum_{j=1}^N L^i_j(1+D^j_i+D^i_j)}\right)$, then under Algorithm~1 we have $\lim_{t \rightarrow \infty} \|x(t+1) - x(t)\| = 0$ and,
for all~$i \in [N]$, $\lim_{t \rightarrow \infty} \|x(t) - x_i(t)\| = 0$.
\end{theorem}
\textit{Proof:} See Appendix~\ref{app.theorem1}. $\hfill\blacksquare$
\subsection{Convergence of Algorithm~1 to a Stationary Point}
Theorem~\ref{t.squaresum} on its own does not necessarily guarantee that Algorithm~1 converges to an element of $X^*$, and in order to do so we must impose additional assumptions on~$f$.
The first is the error bound condition.
\begin{assumption}[\cite{luo1992error}] \label{a.errorbound}
For every $\alpha > 0$, there exist $\delta, \kappa > 0$ such that for all $x \in X$ with $f(x) \leq \alpha$ and $\|x-\Pi_{X}\left[x-\nabla f(x)\right]\| \leq \delta$,
\begin{equation}
\min_{\bar{x}\in X^*} \|x-\bar{x}\| \leq \kappa \|x-\Pi_{X}\left[x-\nabla f(x)\right]\|. \tag*{$\triangle$}
\end{equation}
\end{assumption}
Assumption~\ref{a.errorbound} is satisfied by a number of problems,
including several classes of non-convex problems~\cite{drusvyatskiy2018error,zhang2017restricted}. It also holds
when $f$ is strongly convex on $X$ or satisfies the quadratic growth condition on $X$~\cite{drusvyatskiy2018error,zhang2017restricted}, and when $X$ is polyhedral and $f$ is either quadratic~\cite{luo1992error} or the dual functional associated with minimizing a strictly convex function subject to linear constraints~\cite{luo1993convergence}.
Additionally, we make the following assumption on $X^*$, which simply states that the elements of $X^*$ are isolated and sufficiently separated from each other.
\begin{assumption} \label{a.isolated}
There exists a scalar $\epsilon > 0$ such that for every distinct $x, y \in X^*$ we have $\|x-y\| \geq \epsilon$. \hfill $\triangle$
\end{assumption}
In addition to Assumptions~\ref{a.errorbound} and~\ref{a.isolated}, we will utilize the following lemma.
\begin{lemma} \label{l.rbound}
For any $x \in X$, any~$i \in [N]$, and any~$\gamma_i > 0$,
\begin{equation}
\left\| x^{[i]}(t)-\Pi_{X_i}\left[ x^{[i]}(t)-\nabla^{[i]}f(x(t))\right]\right\|
\leq \max\left\{ 1,\frac{1}{\gamma_i}\right\} \left\| x^{[i]}(t)-\Pi_{X_i}\left[ x^{[i]}(t)-\gamma_i\nabla^{[i]}f(x(t))\right] \right\|.
\end{equation}
\end{lemma}
\textit{Proof:}
This follows from~\cite[Lemma 3.1]{tseng1991rate} with $\gamma_i$, $x^{[i]}(t)$, $\nabla^{[i]} f (x(t))$, and $X_i$ replacing $\gamma, x, \nabla f$, and $X$. \hfill $\blacksquare$
\begin{theorem}
Let the conditions of Theorem~\ref{t.squaresum} and Assumptions~6 and 7 hold. Then, for some~$x^* \in X^*$,
\begin{equation}
\lim_{t \rightarrow \infty} \|x(t) - x^*\| = 0.
\end{equation}
\end{theorem}
\textit{Proof:} For every $t$ and $i \in [N]$, define $k_i(t) = \hat{t}_i$, where $\hat{t}_i$ is the largest element of $T^i$ such that $\hat{t}_i \leq t$. By Assumption~\ref{a.boundupdate}, $k_i(t) \geq t-G_i$ for all $t$. Therefore, as $t \rightarrow \infty$, $k_i(t) \rightarrow \infty$, which, under Theorem~\ref{t.squaresum}, gives $\lim_{t \rightarrow \infty} \|s^{[i]}(k_i(t))\| = 0$ for all $i \in [N]$. The definition of $s^{[i]}$ and Algorithm 1 give
\begin{equation}
s^{[i]}(k_i(t))\hspace{-0.2em} =\hspace{-0.2em} \Pi_{X_i}\hspace{-0.4em}\left[x^{[i]}(k_i(t))\hspace{-0.2em}-\hspace{-0.2em}\gamma_{i}\hspace{-0.2em}\nabla^{[i]}\hspace{-0.2em}f(x_i(k_i(t)))\right]\hspace{-0.2em} - x^{[i]}(k_i(t)).
\end{equation}
We now define the residual vector $r^{[i]}(k_i(t))$ as
\begin{equation}
r^{[i]}(k_i(t))\hspace{-0.2em} =\hspace{-0.2em} \Pi_{X_i}\hspace{-0.4em}\left[x^{[i]}(k_i(t))\hspace{-0.2em}-\hspace{-0.2em}\gamma_{i}\hspace{-0.2em}\nabla^{[i]}\hspace{-0.2em}f(x(k_i(t)))\right]\hspace{-0.2em} - x^{[i]}(k_i(t))
\end{equation}
for all $i \in [N]$. Note that the arguments of the gradient term differ between
$s^{[i]}$ and $r^{[i]}$.
Here, $s^{[i]}$ represents the update performed by agent $i$ with its asynchronous information, while $r^{[i]}$ represents the update that agent $i$ would take if it had completely
up to date information from its neighbors.
The non-expansive property of~$\Pi_{X_i}$ gives
\begin{align}
\|s^{[i]}(k_i(t)) - r^{[i]}(k_i(t))\| & \leq \gamma_i\|\nabla^{[i]}f(x(k_i(t)))-\nabla^{[i]}f(x_i(k_i(t)))\| \\
& \leq \gamma_i\sum_{j=1}^N L^i_j \|x^{[j]}(k_i(t)) - x^{[j]}_i(k_i(t))\|, \label{e.srbound}
\end{align}
where the last line follows from Lemma~\ref{l.lij}.
Theorem~\ref{t.squaresum} gives $\lim_{t \rightarrow \infty} \|x^{[j]}(t) - x^{[j]}_i(t)\| = 0$ for all $i,j \in [N]$, implying $\lim_{t \rightarrow \infty} \|x^{[j]}(k_i(t)) - x^{[j]}_i(k_i(t))\| = 0$. Combined with~\eqref{e.srbound}, this
gives $\lim_{t \rightarrow \infty} \|s^{[i]}(k_i(t)) - r^{[i]}(k_i(t))\| = 0$. Because $\lim_{t \rightarrow \infty} \|s^{[i]}(k_i(t))\| = 0$, we have $\lim_{t \rightarrow \infty} \|r^{[i]}(k_i(t))\| = 0$ for all $i \in [N]$ and therefore $\lim_{t \rightarrow \infty} \|r^{[i]}(t)\| = 0$. Using Lemma~\ref{l.rbound}, we have
\begin{align}
\|x(t)-\Pi_{X}\left[x(t)-\nabla f(x(t))\right]\| & \leq \sum^N_{i=1}\left\|x^{[i]}(t)-\Pi_{X_i}\left[x^{[i]}(t)-\nabla^{[i]}f(x(t))\right]\right\| \\
& \leq \hspace{-0.2em}\sum^N_{i=1}\max\hspace{-0.1em}\left\{\hspace{-0.2em}1,\frac{1}{\gamma_i}\hspace{-0.2em}\right\}\hspace{-0.2em} \left\|x^{[i]}(t)\hspace{-0.em}-\hspace{-0.1em}\Pi_{X_i}\hspace{-0.2em}\left[x^{[i]}(t)\hspace{-0.2em}-\hspace{-0.2em}\gamma_i\nabla^{[i]}f(x(t))\right]\hspace{-0.2em}\right\| \\
& = \sum^N_{i=1}\max\left\{1,\frac{1}{\gamma_i}\right\} \|r^{[i]}(t)\|, \label{e.yrbound}
\end{align}
implying $\lim_{t \rightarrow \infty}\|x(t)-\Pi_{X}\left[x(t)-\nabla f(x(t))\right]\| = 0$. Since $\{f(x(t))\}_{t=1}^{\infty}$ is bounded by Theorem~\ref{t.squaresum},
then by Assumption~\ref{a.errorbound} there exists a threshold $\bar{t} \geq 0$ and scalar $\kappa > 0$ such that
\begin{equation}
\min_{\bar{x}\in X^*} \|x(t)-\bar{x}\| \leq \kappa \|x(t)-\Pi_{X}\left[x(t)-\nabla f(x(t))\right]\| \label{e.finalerrorbound}
\end{equation}
for all $t \geq \bar{t}$. For each $t$, let $\bar{x}(t) = \arg\min_{\bar{x}\in X^*} \|x(t)-\bar{x}\|$. Then, combining \eqref{e.finalerrorbound} with \eqref{e.yrbound}
gives $\lim_{t \rightarrow \infty}\|x(t)-\bar{x}(t)\| = 0$, which along with Theorem~\ref{t.squaresum} implies $\lim_{t \rightarrow \infty}\|\bar{x}(t+1)-\bar{x}(t)\| = 0$. Then Assumption~\ref{a.isolated} implies that there exists a $\hat{t} \geq \bar{t}$ such that $\bar{x}(t) = x^*$ for all $t \geq \hat{t}$, where $x^* = \bar{x}(\hat{t})$. This gives
$\lim_{t \rightarrow \infty}\|x(t)-x^*\| = 0$,
as desired. $\hfill \blacksquare$
\subsection{Comparison to Existing Works}
We make a few remarks on the two preceding theorems.
\begin{remark}
Our locally chosen stepsize rule given in Theorem~\ref{t.squaresum} improves on the one provided in~\cite{zhou2018distributed},
which is the most relevant work,
in a few ways. For clarity, our rule is
\begin{equation} \label{e.localstep}
\gamma_i \in \left(0, \frac{2}{\sum_{j=1}^N L^i_j(1+D^j_i+D^i_j)}\right) \text{ for all }i \in [N],
\end{equation}
while the global, coordinated rule in~\cite{zhou2018distributed} is
\begin{equation} \label{e.globalstep}
\gamma \in \left(0, \frac{2}{L(1+2 \sqrt{N} B)}\right).
\end{equation}
First, while the similarity in structure between~\eqref{e.localstep} and~\eqref{e.globalstep} is evident,~\eqref{e.localstep} only requires agent~$i$ to know
$\nabla^{[i]} f$ and the inward and outward communication delays to and from its neighbors to compute $\gamma_i$. Second, the $\sqrt{N}$ in \eqref{e.globalstep} is eliminated. The elimination of this explicit dependence on $B$ and $N$ is significant, especially when $B$ is large compared to the communication delays experienced by a particular agent, and $N$ is large compared to the number of neighbors a particular agent communicates with, in which case the upper bound in~\eqref{e.localstep} will be significantly larger than in~\eqref{e.globalstep}.
\begin{comment}
as demonstrated by the following scenario. First, it can be seen that the uncoordinated stepsize for a particular agent
(chosen using the results of this paper)
can only be smaller than the corresponding coordinated stepsize
from~\eqref{e.globalstep}
if $\sum^N_{j=1} L^i_j \geq L$. Indeed, this will be true for at least one $i \in [N]$~\cite{feingold1962block}. Assuming this holds for $i$, assume additionally that $D^j_i,D^i_j = B$ for all $j \in [N]$. That is, agent $i$ has the largest value of $\sum^N_{j=1} L^i_j$ in the network and has the maximum possible communication delay with every one of its neighbors. Even in this extreme worst-case scenario, the uncoordinated stepsize for agent $i$ will still only be smaller than the coordinated version if $\frac{\sum^N_{j=1} L^i_j}{L} \geq \frac{2\sqrt{N}B+1}{2B+1}$. That is, the gap between $\sum^N_{j=1} L^i_j$ and $L$ must be large enough to dominate a term depending on $\sqrt{N}$, which is unlikely for the large networks (and thus large~$N$)
that motivate this work.
\end{comment}
\end{remark}
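To quantify the gap discussed in the preceding remark, the following sketch (ours, for illustration only; Python with NumPy, scalar blocks, and updates at every timestep so that $B=\max_{i,j\in[N]}\{D^j_i,D^i_j\}$) evaluates the upper bounds in~\eqref{e.localstep} and~\eqref{e.globalstep} on a random instance.
\begin{verbatim}
# Sketch (illustration only): comparing the local stepsize bounds with
# the global one on a random instance with scalar blocks and G_i = 0.
import numpy as np

rng = np.random.default_rng(1)
N = 20
Q = rng.standard_normal((N, N)); Q = (Q + Q.T) / 2
L = np.abs(Q)                               # [L^i_j]
D_in = rng.integers(0, 21, size=(N, N))     # D_in[i, j] stands for D^j_i
np.fill_diagonal(D_in, 0)
D_out = D_in.T                              # D^i_j: delay of i's data at j
B = int(D_in.max())                         # worst case, used globally

local = 2.0 / (L * (1.0 + D_in + D_out)).sum(axis=1)   # local rule bounds
glob = 2.0 / (np.linalg.norm(Q, 2) * (1.0 + 2.0 * np.sqrt(N) * B))
print(local.min(), local.max(), glob)       # local bounds typically larger
\end{verbatim}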
\begin{remark}
Under Assumptions~1-7, our stepsize rule can be shown to provide geometric convergence by following a similar argument to~\cite{tseng1991rate} and~\cite{zhou2018distributed}.
However, (as seen in~\cite{tseng1991rate} and~\cite{zhou2018distributed}) a convergence rate proof
is quite involved, and due to space constraints is deferred
to a future publication.
Thus, to reiterate, the contribution of this paper is
providing, to the best of the authors' knowledge, the first proof of convergence of a partially asynchronous algorithm with
uncoordinated stepsizes chosen using only local information.
\end{remark}
\section{Simulations} \label{sec:simulations}
We compare the performance of the locally chosen stepsize rule~\eqref{e.localstep} with the globally coordinated rule~\eqref{e.globalstep} on a set-constrained quadratic program of the form $f(x) = \frac{1}{2}x^T Q x + r^T x$.
There are~$N = 20$ agents, each of which updates a scalar variable.
$Q$ and $r$ are generated such that $Q \nsucceq 0$,
$n = 20$, $L = 100$, and $X_i = \{ x \in \mathbb{R} : |x| \leq 10,000\}$ for all $i \in [N]$. Under this setup, $f$ is a nonconvex quadratic function on a polyhedral constraint set $X$, which satisfies Assumption~\ref{a.errorbound}~\cite{luo1992error}.
Each communication bound $D^j_i$ is randomly chosen from $\{0,\dots,20\}$.
Since the effect of asynchronous communications is maximized when communications are less frequent than computations, we have every agent compute an update at every timestep,
i.e., $T^i = \mathbb{N}$ for all $i \in [N]$. In this simulation, communications between agents are instantaneous; asynchrony arises from them being infrequent, with
the number of timesteps between communications from agent $j$ to agent $i$ bounded by $D^j_i$. If agent $j$ communicates with agent $i$ at time $t$, the next such communication will occur at $t+1+\delta^j_i(t)$, where $\delta^j_i(t)$ is a randomly chosen element of $\{0,\dots,D^j_i\}$. This simulation is run from $t = 0$ to $t = 500$, and every agent is initialized with $x_i(0) = 0$.
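The following sketch (ours, an illustrative reconstruction in Python with NumPy; the MATLAB code used for the figure is linked in the footnote below) generates an instance of this setup, including the communication times $t+1+\delta^j_i(t)$.
\begin{verbatim}
# Illustrative reconstruction of the setup (not the code used for the
# figure): indefinite Q with ||Q||_2 = 100, delay bounds in {0,...,20},
# and communication times t + 1 + delta^j_i(t).
import numpy as np

rng = np.random.default_rng(2)
N = 20
Q = rng.standard_normal((N, N)); Q = (Q + Q.T) / 2
Q *= 100.0 / np.linalg.norm(Q, 2)           # enforce L = 100
print(np.linalg.eigvalsh(Q).min() < 0)      # indefinite (true a.s.)
r = rng.standard_normal(N)
D = rng.integers(0, 21, size=(N, N)); np.fill_diagonal(D, 0)

def communication_times(j, i, horizon=500):
    # times at which agent j's block reaches agent i
    times, t = [], 0
    while t <= horizon:
        times.append(t)
        t += 1 + rng.integers(D[j, i] + 1)  # delta^j_i(t) in {0,...,D^j_i}
    return times
\end{verbatim}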
To ensure a fair comparison,
both simulations are run using the same communication
and computation
time indices; one using a global coordinated stepsize, and the other using locally chosen stepsizes. The global coordinated stepsize is chosen to be the upper bound in~\eqref{e.globalstep} multiplied by 0.95 (to satisfy the strict inequality), which gives $\gamma = 2.1 \times 10^{-4}$. The local stepsizes are chosen as the upper bounds in~\eqref{e.localstep} multiplied by 0.95, and range from $4.9 \times 10^{-4}$ to $2.8 \times 10^{-3}$.
The values of~$f(x(t))$ for each simulation are plotted against $t$ in Figure~\ref{f.cost}\footnote{MATLAB code for both simulations is available at \url{https://github.com/MattUbl/asynch-local-stepsizes}}, where a clear speedup in convergence can be seen.
\begin{figure}[!tp]
\centering
\includegraphics[draft = false,width=3.4in]{NonconvexSim.eps}
\caption{Convergence comparison of $f(x(t))$ for algorithms using globally chosen~\eqref{e.globalstep} (orange dashed line) and locally chosen~\eqref{e.localstep} (blue solid line) stepsizes. \eqref{e.globalstep} is to the best of the authors' knowledge the best available result in the literature, and the stepsize rule
developed in this paper is shown to significantly accelerate convergence beyond it.
}
\label{f.cost}
\end{figure}
In Figure~\ref{f.cost}, we can see that both stepsize schemes appear to achieve geometric convergence, with our locally chosen scheme reaching a solution significantly faster. In particular, the algorithm using locally chosen stepsizes converges to a stationary point and stops updating at $t = 239$, while the algorithm using a global stepsize is still updating as of $t = 500$. This illustrates better performance when using the stepsize rule presented in this paper compared to the current state of the art, in addition to allowing the agents to implement this rule using only local information.
\section{Conclusions} \label{sec:conclusions}
We have presented, to the best of the authors' knowledge, the first proof of convergence of a partially asynchronous algorithm with uncoordinated stepsizes chosen using only local information.
The local stepsize selection rule in this paper generally allows for larger stepsizes than the current state of the art and is empirically shown to significantly
accelerate convergence. Future work will develop a full proof of geometric convergence of Algorithm~1 and extend this stepsize rule to other algorithms.
\appendices
\section{Proof of Theorem~\ref{t.squaresum}} \label{app.theorem1}
In addition to Lemma~\ref{l.lij}, the proof of Theorem~\ref{t.squaresum} will use the following lemmas:
\begin{lemma} \label{l.orthogbound}
Under Assumption~\ref{a.setseperable}, for all $t$ and all $i \in [N]$
in Algorithm~1 we have
$\langle s^{[i]}(t) , \nabla^{[i]}f(x_i(t)) \rangle \leq -\frac{1}{\gamma_i}\|s^{[i]}(t)\|^2$.
\end{lemma}
\textit{Proof:} This is a property of orthogonal projections~\cite{tseng1991rate}. \hfill $\blacksquare$
\begin{lemma} \label{l.sum}
Consider the set $\{0,\dots,M\}$, with $M \leq \infty$. Then $\sum_{i=0}^M\sum_{j=0}^M a^j_i = \sum_{i=0}^M\sum_{j=0}^M a^i_j$.
\end{lemma}
\textit{Proof:} This follows by re-labeling indices. \hfill $\blacksquare$
\textit{Proof of Theorem~\ref{t.squaresum}:} The identities~$x(t+1) = x(t) + s(t)$ and~$f(a)-f(b) = \int^1_0 \langle (a-b) , \nabla f(b + \tau (a-b)) \rangle d\tau$
give
\begin{align}
f(x(t+1)) - f(x(t)) & = \int^1_0 \langle s(t) , \nabla f (x(t) + \tau s(t)) \rangle d\tau \\
& = \sum_{i=1}^N \int^1_0 \langle s^{[i]}(t) , \nabla^{[i]} f (x(t) + \tau s(t)) \rangle d\tau \\
& = \sum_{i=1}^N \left( \langle s^{[i]}(t) , \nabla^{[i]} f (x_i(t))\rangle + H_i(t) \right) \\
&\leq \sum_{i=1}^N \left( -\frac{1}{\gamma_i}\|s^{[i]}(t)\|^2 + H_i(t) \right), \label{e.yH}
\end{align}
where the last line uses Lemma~\ref{l.orthogbound} and $H_i(t) = \int^1_0 \langle s^{[i]}(t) , \nabla^{[i]} f (x(t)+ \tau s(t)) - \nabla^{[i]} f (x_i(t)) \rangle d\tau$. Next,
\begin{align}
H_i(t) & \leq\hspace{-0.2em} \int^1_0\hspace{-0.2em} \|s^{[i]}(t)\| \|\nabla^{[i]} f (x(t)+ \tau s(t)) \hspace{-0.2em}-\hspace{-0.2em} \nabla^{[i]} f (x_i(t))\| d\tau \\
& \leq \|s^{[i]}(t)\| \sum_{j=1}^N L^i_j \int^1_0 \|x^{[j]}(t)+ \tau s^{[j]}(t) - x^{[j]}_i(t)\| d\tau \\
& \leq \|s^{[i]}(t)\| \sum_{j=1}^N L^i_j \hspace{-0.2em}\int^1_0\hspace{-0.2em} \left(\tau \|s^{[j]}(t)\|+\|x^{[j]}(t) - x^{[j]}_i(t)\|\right) d\tau \\
& = \|s^{[i]}(t)\| \sum_{j=1}^N L^i_j \left(\frac{1}{2}\|s^{[j]}(t)\| + \|x^{[j]}(t) - x^{[j]}_i(t)\| \right), \label{e.Hbound}
\end{align}
where the~$2^{nd}$ line uses Lemma~\ref{l.lij}. Using~$ab \leq \frac{a^2+b^2}{2}$ gives
\begin{equation}
\|s^{[i]}(t)\| \sum_{j=1}^N L^i_j \frac{1}{2}\|s^{[j]}(t)\|
\leq \frac{1}{2} \sum_{j=1}^N L^i_j \left(\frac{1}{2} \|s^{[i]}(t)\|^2 + \frac{1}{2}\|s^{[j]}(t)\|^2\right). \label{e.Hbound1}
\end{equation}
To bound the term $\|s^{[i]}(t)\| \sum_{j=1}^N L^i_j \|x^{[j]}(t) - x^{[j]}_i(t)\|$ in~\eqref{e.Hbound}, recall that $x^{[j]}(t) = x^{[j]}_j(t)$ and $x^{[j]}_i(t) = x^{[j]}_j(\tau^j_i(t))$. Then
\begin{align}
\|x^{[j]}(t) - x^{[j]}_i(t)\| & = \|x^{[j]}_j(t) - x^{[j]}_j(\tau^j_i(t))\| \\
& = \left\|\sum^{t-1}_{k = \tau^j_i(t)} s^{[j]}(k)\right\| \\
&\leq \sum^{t-1}_{k = \tau^j_i(t)} \|s^{[j]}(k)\|, \label{e.ssum}
\end{align}
where the sum is understood to be~$0$ if $\tau^j_i(t) = t$. Using~\eqref{e.ssum}
and~$ab \leq \frac{a^2 + b^2}{2}$,
we have
\begin{align}
\|s^{[i]}(t)\| \sum_{j=1}^N L^i_j \|x^{[j]}(t) - x^{[j]}_i(t)\|
& \leq \|s^{[i]}(t)\| \sum_{j=1}^N L^i_j \sum^{t-1}_{k = \tau^j_i(t)} \|s^{[j]}(k)\| \\
& \leq \sum_{j=1}^N L^i_j \sum^{t-1}_{k = \tau^j_i(t)} \frac{1}{2}\left( \|s^{[i]}(t)\|^2 + \|s^{[j]}(k)\|^2\right) \\
& = \frac{1}{2}\sum_{j=1}^N L^i_j \left(\!(t-\tau^j_i(t))\|s^{[i]}(t)\|^2 + \!\!\!\sum^{t-1}_{k = \tau^j_i(t)} \!\! \|s^{[j]}(k)\|^2\!\right) \\
& \leq \frac{1}{2}\sum_{j=1}^N L^i_j \left(D^j_i\|s^{[i]}(t)\|^2 + \sum^{t-1}_{k = \tau^j_i(t)} \|s^{[j]}(k)\|^2\right), \label{e.Hbound2}
\end{align}
where the last line follows from Assumption~\ref{a.bounddelay}.
Using~\eqref{e.Hbound1} and~\eqref{e.Hbound2} in~\eqref{e.Hbound} gives
\begin{equation}
H_i(t) \leq \frac{1}{2} \sum_{j=1}^N L^i_j \left(\frac{1}{2} +D^j_i\right)\|s^{[i]}(t)\|^2
+ \frac{1}{2}\sum_{j=1}^N L^i_j \left(\frac{1}{2}\|s^{[j]}(t)\|^2 + \sum^{t-1}_{k = \tau^j_i(t)} \|s^{[j]}(k)\|^2\right),
\end{equation}
which combined with~\eqref{e.yH} gives
\begin{equation}
f(x(t \!+\! 1)) - f(x(t))
\leq \sum_{i=1}^N\! \left(\!-\frac{1}{\gamma_i} \!+\! \frac{1}{2} \sum_{j=1}^N L^i_j \left(\frac{1}{2} \!+\! D^j_i \right)\right)\|s^{[i]}(t)\|^2
+ \sum^N_{i=1}\frac{1}{2}\sum_{j=1}^N L^i_j \hspace{-0.2em}\left(\frac{1}{2}\|s^{[j]}(t)\|^2 + \hspace{-0.6em}\sum^{t-1}_{k = \tau^j_i(t)} \hspace{-0.2em}\|s^{[j]}(k)\|^2\hspace{-0.2em}\right)\hspace{-0.2em}.
\end{equation}
From Lemma~\ref{l.sum}, we see
\begin{equation}
\sum^N_{i=1}\frac{1}{2}\sum_{j=1}^N L^i_j \left(\frac{1}{2}\|s^{[j]}(t)\|^2 + \sum^{t-1}_{k = \tau^j_i(t)} \|s^{[j]}(k)\|^2\right) = \sum^N_{i=1}\frac{1}{2}\sum_{j=1}^N L^j_i \left(\frac{1}{2}\|s^{[i]}(t)\|^2 + \sum^{t-1}_{k = \tau^i_j(t)} \|s^{[i]}(k)\|^2\right),
\end{equation}
which, using the fact that $L^i_j = L^j_i$, gives
\begin{equation}
f(x(t+1)) - f(x(t))
\leq \sum_{i=1}^N \left(-\frac{1}{\gamma_i} + \frac{1}{2} \sum_{j=1}^N L^i_j \left(1 +D^j_i \right)\right)\|s^{[i]}(t)\|^2
+ \sum^N_{i=1}\frac{1}{2}\sum_{j=1}^N L^i_j \sum^{t-1}_{k = \tau^i_j(t)} \|s^{[i]}(k)\|^2.
\end{equation}
Summing this inequality over $t$ from $0$ to $m-1$ and rearranging gives
\begin{equation}
f(x(m)) - f(x(0))
\leq \sum_{i=1}^N \left(-\frac{1}{\gamma_i} + \frac{1}{2} \sum_{j=1}^N L^i_j \left(1 +D^j_i \right)\right)\sum^{m-1}_{t=0}\|s^{[i]}(t)\|^2
+ \sum^N_{i=1}\sum_{j=1}^N \frac{1}{2} L^i_j \sum^{m-1}_{t=0}\sum^{t-1}_{k = \tau^i_j(t)} \|s^{[i]}(k)\|^2. \label{e.msum1}
\end{equation}
Using Lemma~\ref{l.sum} to exchange the roles of $t$ and $k$, and noting that $\tau^i_j(k) \leq t \leq k-1$ together with Assumption~\ref{a.bounddelay} forces $k \in \{t+1,\dots,t+D^i_j\}$, we see
\begin{align}
\sum^{m-1}_{t=0} \sum^{t-1}_{k = \tau^i_j(t)} \|s^{[i]}(k)\|^2 & = \sum^{m-1}_{t=0} \|s^{[i]}(t)\|^2 \,\#\big\{k \leq m-1 \,:\, \tau^i_j(k) \leq t \leq k-1\big\} \\
&\leq \sum^{m-1}_{t=0}D^i_j \|s^{[i]}(t)\|^2,
\end{align}
which combined with \eqref{e.msum1} and rearranging gives
\begin{align}
f&(x(m)) - f(x(0)) \leq -\sum_{i=1}^N C_i\sum^{m-1}_{t=0}\|s^{[i]}(t)\|^2,
\end{align}
where $C_i \!=\! \frac{1}{\gamma_i} \!-\! \frac{1}{2} \sum_{j=1}^N L^i_j \left(1 \!+\! D^j_i \!+\! D^i_j \right)$. Next, $C_i > 0$ if
\begin{equation}
0 < \gamma_i < \frac{2}{\sum_{j=1}^N L^i_j \left(1 +D^j_i + D^i_j \right)}.
\end{equation}
Choosing~$\gamma_i$ this way for each $i \in [N]$ and taking~$m \rightarrow \infty$ gives
\begin{equation}
\limsup\limits_{m \rightarrow \infty} f(x(m)) \leq f(x(0)) - \sum^N_{i=1} C_i\sum^\infty_{t = 0}\|s^{[i]}(t)\|^2.
\end{equation}
Rearranging gives
\begin{align}
\sum^N_{i=1} C_i\sum^\infty_{t = 0}\|s^{[i]}(t)\|^2 & \leq f(x(0)) - \limsup\limits_{m \rightarrow \infty} f(x(m)) \\
& \leq f(x(0)) - \inf_{z\in X}f(z),
\end{align}
and rearranging once more gives
\begin{equation}
\sum^\infty_{t = 0}\|s^{[i]}(t)\|^2 \leq \frac{f(x(0)) - \inf_{z\in X}f(z)}{C_i} < \infty,
\end{equation}
for all~$i \in [N]$, where the final inequality follows from Assumption~\ref{a.boundbelow} and the fact that each $C_i$ is positive. The final inequality implies
$\lim_{t \rightarrow \infty} \|s^{[i]}(t)\| = 0$
for all $i \in [N]$. Following from the definition of $s^{[i]}(t)$ this in turn implies
$\lim_{t \rightarrow \infty} \|x^{[i]}(t+1) - x^{[i]}(t)\| = 0$
for all $i \in [N]$ and therefore
$\lim_{t \rightarrow \infty} \|x(t+1) - x(t)\| = 0$.
We now wish to show $\lim_{t \rightarrow \infty} \|x(t) - x_i(t)\| = 0$ for all $i \in [N]$. To do so, consider $x^{[j]}(t) - x^{[j]}_i(t)$. Using \eqref{e.ssum} and Assumption~\ref{a.bounddelay} gives
\begin{align}
\|x^{[j]}(t) - x^{[j]}_i(t)\| \leq \sum^{t-1}_{k = t-D^j_i} \|s^{[j]}(k)\|.
\end{align}
Then the fact that~$\lim_{t \rightarrow \infty} \|s^{[j]}(t)\| = 0$ implies
$\lim_{t \rightarrow \infty} \|x^{[j]}(t) - x^{[j]}_i(t)\| = 0$
for all $i,j \in [N]$, which gives
$\lim_{t \rightarrow \infty} \|x(t) - x_i(t)\| = 0$
for all $i \in [N]$. \hfill$\blacksquare$
\bibliographystyle{IEEEtran}
\section*{Introduction}
\addcontentsline{toc}{section}{Introduction}
An axiomatic approach to the theory of Sobolev spaces over abstract metric
measure spaces has been proposed by V.\ Gol'dshtein and M.\ Troyanov in
\cite{GoldTroy01}. Their construction covers many important notions:
the weighted Sobolev space on a Riemannian manifold,
the Haj{\l}asz Sobolev space \cite{Hajlasz1996} and the
Sobolev space based on the concept of upper gradient
\cite{heinonen1998,Cheeger00,Shanmugalingam00,AmbrosioGigliSavare11}.
A key concept in \cite{GoldTroy01} is the so-called \emph{$D$-structure}:
given a metric measure space $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ and an exponent $p\in(1,\infty)$,
we associate to any function $u\in L^p_{loc}({\rm X})$ a family $D[u]$ of non-negative Borel
functions called \emph{pseudo-gradients}, which exert some control from above
on the variation of $u$. The pseudo-gradients are not explicitly specified,
but they are rather supposed to fulfil a list of axioms.
Then the space $W^{1,p}({\rm X},{\sf d},{\mbox{\boldmath$m$}},D)$ is defined as the set of all
functions in $L^p({\mbox{\boldmath$m$}})$ admitting a pseudo-gradient in $L^p({\mbox{\boldmath$m$}})$.
By means of standard functional analytic techniques, it is possible
to associate to any Sobolev function $u\in W^{1,p}({\rm X},{\sf d},{\mbox{\boldmath$m$}},D)$ a
uniquely determined minimal object $\underline D u\in D[u]\cap L^p({\mbox{\boldmath$m$}})$,
called \emph{minimal pseudo-gradient} of $u$.
More recently, the first author of the present paper introduced a differential
structure on general metric measure spaces (cf.\ \cite{Gigli14,Gigli17}).
The purpose was to develop a second-order differential calculus on spaces
satisfying lower Ricci curvature bounds (or briefly, $\sf RCD$ spaces;
we refer to \cite{Ambrosio18,Villani2016,Villani2017} for a presentation
of this class of spaces).
The fundamental tools for this theory are the $L^p$-normed $L^\infty$-modules,
among which a special role is played by the \emph{cotangent module}, denoted
by $L^2(T^*{\rm X})$. Its elements can be thought of as `measurable $1$-forms on ${\rm X}$'.
The main result of this paper -- namely Theorem \ref{thm:cot_mod} -- says that
any $D$-structure (satisfying suitable locality properties) gives rise to a natural
notion of cotangent module $L^p(T^*{\rm X};D)$, whose properties are analogous to the ones
of the cotangent module $L^2(T^*{\rm X})$ described in \cite{Gigli14}.
Roughly speaking, the cotangent module allows us to represent minimal
pseudo-gradients as pointwise norms of suitable linear objects.
More precisely, this theory provides the existence of an abstract differential
$\d:\,W^{1,p}({\rm X},{\sf d},{\mbox{\boldmath$m$}},D)\to L^p(T^*{\rm X};D)$, which is a linear operator
such that the pointwise norm $|\d u|\in L^p({\mbox{\boldmath$m$}})$ of $\d u$ coincides with
$\underline D u$ in the ${\mbox{\boldmath$m$}}$-a.e.\ sense for any
function $u\in W^{1,p}({\rm X},{\sf d},{\mbox{\boldmath$m$}},D)$.
\section{General notation}
For the purpose of the present paper, a \emph{metric measure space}
is a triple $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$, where
\begin{equation}\begin{split}
({\rm X},{\sf d})&\qquad\text{is a complete and separable metric space,}\\
{\mbox{\boldmath$m$}}\neq 0&\qquad\text{is a non-negative Borel measure on }{\rm X}\text{, finite on balls.}
\end{split}\end{equation}
Fix $p\in[1,\infty)$. Several functional spaces over ${\rm X}$
will be used in the forthcoming discussion:
\begin{align*}
L^0({\mbox{\boldmath$m$}}):&\quad\text{ the Borel functions }u:\,{\rm X}\to\mathbb{R}
\text{, considered up to }{\mbox{\boldmath$m$}}\text{-a.e.\ equality.}\\
L^p({\mbox{\boldmath$m$}}):&\quad\text{ the functions }u\in L^0({\mbox{\boldmath$m$}})\text{ for which }
|u|^p\text{ is integrable.}\\
L^p_{loc}({\mbox{\boldmath$m$}}):&\quad\text{ the functions }u\in L^0({\mbox{\boldmath$m$}})
\text{ with }u\restr B\in L^p\big({\mbox{\boldmath$m$}}\restr B\big)\text{ for any }
B\subseteq{\rm X}\text{ bounded Borel.}\\
L^\infty({\mbox{\boldmath$m$}}):&\quad\text{ the functions }u\in L^0({\mbox{\boldmath$m$}})\text{ that are
essentially bounded.}\\
L^0({\mbox{\boldmath$m$}})^+:&\quad\text{ the Borel functions }u:\,{\rm X}\to[0,+\infty]
\text{, considered up to }{\mbox{\boldmath$m$}}\text{-a.e.\ equality.}\\
L^p({\mbox{\boldmath$m$}})^+:&\quad\text{ the functions }u\in L^0({\mbox{\boldmath$m$}})^+\text{ for which }
|u|^p\text{ is integrable.}\\
L^p_{loc}({\mbox{\boldmath$m$}})^+:&\quad\text{ the functions }u\in L^0({\mbox{\boldmath$m$}})^+
\text{ with }u\restr B\in L^p\big({\mbox{\boldmath$m$}}\restr B\big)^+\text{ for any }
B\subseteq{\rm X}\text{ bounded Borel.}\\
{\rm LIP}({\rm X}):&\quad\text{ the Lipschitz functions }u:{\rm X}\to\mathbb{R},
\text{ with Lipschitz constant denoted by }{\rm Lip}(u).\\
{\sf Sf}({\rm X}):&\quad\text{ the functions }u\in L^0({\mbox{\boldmath$m$}})\text{ that are simple,
i.e.\ with a finite essential image.}
\end{align*}
Observe that for any $u\in L^p_{loc}({\mbox{\boldmath$m$}})^+$ it holds that $u(x)<+\infty$
for ${\mbox{\boldmath$m$}}$-a.e.\ $x\in{\rm X}$.
We also recall that the space ${\sf Sf}({\rm X})$ is strongly dense in $L^p({\mbox{\boldmath$m$}})$
for every $p\in[1,\infty]$.
\begin{remark}{\rm
In \cite[Section 1.1]{GoldTroy01} a more general notion of $L^p_{loc}({\mbox{\boldmath$m$}})$ is considered,
based upon the concept of \emph{$\mathcal K$-set}.
We chose the present approach for simplicity, but the following discussion would remain
unaltered if we replaced our definition of $L^p_{loc}({\mbox{\boldmath$m$}})$ with
the one of \cite{GoldTroy01}.
\fr}\end{remark}
\section{Axiomatic theory of Sobolev spaces}
We begin by briefly recalling the axiomatic notion of Sobolev space that has been
introduced by V.\ Gol'dshtein and M.\ Troyanov in \cite[Section 1.2]{GoldTroy01}:
\begin{definition}[$D$-structure]
Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a metric measure space. Let $p\in[1,\infty)$ be fixed.
Then a \emph{$D$-structure} on $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ is any map $D$ associating to each function $u\in L^p_{loc}({\mbox{\boldmath$m$}})$
a family $D[u]\subseteq L^0({\mbox{\boldmath$m$}})^+$ of \emph{pseudo-gradients} of $u$,
which satisfies the following axioms:
\begin{itemize}
\item[\textbf{\emph{A1}}]\textbf{\emph{(Non triviality)}} It holds that
${\rm Lip}(u)\,{\raise.3ex\hbox{$\chi$}}_{\{u>0\}}\in D[u]$ for every $u\in L^p_{loc}({\mbox{\boldmath$m$}})^+\cap{\rm LIP}({\rm X})$.
\item[\textbf{\emph{A2}}]\textbf{\emph{(Upper linearity)}} Let $u_1,u_2\in L^p_{loc}({\mbox{\boldmath$m$}})$ be fixed.
Consider $g_1\in D[u_1]$ and $g_2\in D[u_2]$. Suppose that the inequality
$g\geq|\alpha_1|\,g_1+|\alpha_2|\,g_2$ holds ${\mbox{\boldmath$m$}}$-a.e.\ in ${\rm X}$ for some $g\in L^0({\mbox{\boldmath$m$}})^+$
and $\alpha_1,\alpha_2\in\mathbb{R}$. Then $g\in D[\alpha_1\,u_1+\alpha_2\,u_2]$.
\item[\textbf{\emph{A3}}]\textbf{\emph{(Leibniz rule)}} Fix a function $u\in L^p_{loc}({\mbox{\boldmath$m$}})$ and
a pseudo-gradient $g\in D[u]$ of $u$. Then for every $\varphi\in{\rm LIP}({\rm X})$
bounded it holds that $g\,\sup_{\rm X}|\varphi|+{\rm Lip}(\varphi)\,|u|\in D[\varphi\,u]$.
\item[\textbf{\emph{A4}}]\textbf{\emph{(Lattice property)}} Fix $u_1,u_2\in L^p_{loc}({\mbox{\boldmath$m$}})$.
Given any $g_1\in D[u_1]$ and $g_2\in D[u_2]$, one has that
$\max\{g_1,g_2\}\in D\big[\max\{u_1,u_2\}\big]\cap D\big[\min\{u_1,u_2\}\big]$.
\item[\textbf{\emph{A5}}]\textbf{\emph{(Completeness)}} Consider two sequences
$(u_n)_n\subseteq L^p_{loc}({\mbox{\boldmath$m$}})$ and $(g_n)_n\subseteq L^p({\mbox{\boldmath$m$}})$ that
satisfy $g_n\in D[u_n]$ for every $n\in\mathbb{N}$. Suppose that
there exist $u\in L^p_{loc}({\mbox{\boldmath$m$}})$ and $g\in L^p({\mbox{\boldmath$m$}})$ such that $u_n\to u$ in $L^p_{loc}({\mbox{\boldmath$m$}})$
and $g_n\to g$ in $L^p({\mbox{\boldmath$m$}})$. Then $g\in D[u]$.
\end{itemize}
\end{definition}
\begin{remark}\label{rmk:conseq_D-structure}{\rm
It follows from axioms \textbf{A1} and \textbf{A2} that $0\in D[c]$ for every constant map $c\in\mathbb{R}$.
Moreover, axiom \textbf{A2} grants
that the set $D[u]\cap L^p({\mbox{\boldmath$m$}})$ is convex and that $D[\alpha\,u]=|\alpha|\,D[u]$
for every $u\in L^p_{loc}({\mbox{\boldmath$m$}})$ and $\alpha\in\mathbb{R}\setminus\{0\}$,
while axiom \textbf{A5} implies that each set $D[u]\cap L^p({\mbox{\boldmath$m$}})$ is closed
in the space $L^p({\mbox{\boldmath$m$}})$.
\fr}\end{remark}
Given any Borel set $B\subseteq{\rm X}$, we define the \emph{$p$-Dirichlet energy} of a map
$u\in L^p({\mbox{\boldmath$m$}})$ on $B$ as
\begin{equation}\label{eq:p-Dirichlet_energy}
\mathcal{E}_p(u|B):=\inf\left\{\int_B g^p\,\d{\mbox{\boldmath$m$}}\;\bigg|\;g\in D[u]\right\}\in[0,+\infty].
\end{equation}
For the sake of brevity, we shall use the notation $\mathcal{E}_p(u)$ to indicate $\mathcal{E}_p(u|{\rm X})$.
\begin{definition}[Sobolev space]
Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a metric measure space. Let $p\in[1,\infty)$ be fixed. Given a $D$-structure
on $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$, we define the \emph{Sobolev class} associated to $D$ as
\begin{equation}\label{eq:Sobolev_class_associated_to_D-structure}
{\rm S}^p({\rm X})={\rm S}^p({\rm X},{\sf d},{\mbox{\boldmath$m$}},D):=
\big\{u\in L^p_{loc}({\mbox{\boldmath$m$}})\;:\;\mathcal{E}_p(u)<+\infty\big\}.
\end{equation}
Moreover, the \emph{Sobolev space} associated to $D$ is defined as
\begin{equation}\label{eq:Sobolev_space_associated_to_D-structure}
W^{1,p}({\rm X})=W^{1,p}({\rm X},{\sf d},{\mbox{\boldmath$m$}},D):=L^p({\mbox{\boldmath$m$}})\cap{\rm S}^p({\rm X},{\sf d},{\mbox{\boldmath$m$}},D).
\end{equation}
\end{definition}
\begin{theorem}
The space $W^{1,p}({\rm X},{\sf d},{\mbox{\boldmath$m$}},D)$ is a Banach space if endowed with the norm
\begin{equation}\label{eq:Sobolev_space_associated_to_D-structure_norm}
{\|u\|}_{W^{1,p}({\rm X})}:=\left({\|u\|}^p_{L^p({\mbox{\boldmath$m$}})}+\mathcal{E}_p(u)\right)^{1/p}
\quad\text{ for every }u\in W^{1,p}({\rm X}).
\end{equation}
\end{theorem}
For a proof of the previous result, we refer to \cite[Theorem 1.5]{GoldTroy01}.
\begin{proposition}[Minimal pseudo-gradient]
Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a metric measure space and let $p\in(1,\infty)$.
Consider any $D$-structure on $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$. Let $u\in{\rm S}^p({\rm X})$ be given.
Then there exists a unique element $\underline{D}u\in D[u]$,
which is called the \emph{minimal pseudo-gradient} of $u$, such that
$\mathcal{E}_p(u)={\|\underline{D}u\|}^p_{L^p({\mbox{\boldmath$m$}})}$.
\end{proposition}
Both existence and uniqueness of the minimal pseudo-gradient follow from the
fact that the set $D[u]\cap L^p({\mbox{\boldmath$m$}})$ is convex and closed by Remark
\ref{rmk:conseq_D-structure} and that the space $L^p({\mbox{\boldmath$m$}})$ is uniformly convex;
see \cite[Proposition 1.22]{GoldTroy01} for the details.
In order to associate a differential structure to an axiomatic Sobolev space,
we need to be sure that the pseudo-gradients of a function depend only on the local
behaviour of the function itself, in a suitable sense. For this reason, we
propose various notions of locality:
\begin{definition}[Locality]
Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a metric measure space. Fix $p\in(1,\infty)$.
Then we define five notions of locality for $D$-structures on $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$:
\begin{itemize}
\item[\textbf{\emph{L1}}]
If $B\subseteq{\rm X}$ is Borel and $u\in{\rm S}^p({\rm X})$ is ${\mbox{\boldmath$m$}}$-a.e.\ constant
in $B$, then $\mathcal{E}_p(u|B)=0$.
\item[\textbf{\emph{L2}}]
If $B\subseteq{\rm X}$ is Borel and $u\in{\rm S}^p({\rm X})$ is ${\mbox{\boldmath$m$}}$-a.e.\ constant
in $B$, then $\underline{D}u=0$ ${\mbox{\boldmath$m$}}$-a.e.\ in $B$.
\item[\textbf{\emph{L3}}]
If $u\in{\rm S}^p({\rm X})$ and $g\in D[u]$, then ${\raise.3ex\hbox{$\chi$}}_{\{u>0\}}\,g\in D[u^+]$.
\item[\textbf{\emph{L4}}]
If $u\in{\rm S}^p({\rm X})$ and $g_1,g_2\in D[u]$,
then $\min\{g_1,g_2\}\in D[u]$.
\item[\textbf{\emph{L5}}]
If $u\in{\rm S}^p({\rm X})$ then $\underline Du\leq g$ holds ${\mbox{\boldmath$m$}}$-a.e.\ in ${\rm X}$
for every $g\in D[u]$.
\end{itemize}
\end{definition}
\begin{remark}{\rm
In the language of \cite[Definition 1.11]{GoldTroy01}, the properties \textbf{L1} and
\textbf{L3} correspond to \emph{locality} and \emph{strict locality},
respectively.
\fr}\end{remark}
We now discuss the relations among the several notions of locality:
\begin{proposition}\label{prop:implications_locality}
Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a metric measure space. Let $p\in(1,\infty)$.
Fix a $D$-structure on $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$. Then the following
implications hold:
\begin{equation}\begin{split}
\textbf{\emph{L3}}&\quad\Longrightarrow\\
\textbf{\emph{L4}}&\quad\Longleftrightarrow\\
\textbf{\emph{L1}}+\textbf{\emph{L5}}&\quad\Longrightarrow
\end{split}\begin{split}
&\quad\textbf{\emph{L2}}\quad\Longrightarrow\quad\textbf{\emph{L1}},\\
&\quad\textbf{\emph{L5}}\\
&\quad\textbf{\emph{L2}}+\textbf{\emph{L3}}.
\end{split}\end{equation}
\end{proposition}
\begin{proof}\\
{\color{blue}\textbf{L2} $\Longrightarrow$ \textbf{L1}.}
Simply notice that $\mathcal E_p(u|B)\leq\int_B(\underline D u)^p\,\d{\mbox{\boldmath$m$}}=0$.\\
{\color{blue}\textbf{L3} $\Longrightarrow$ \textbf{L2}.}
Take a constant $c\in\mathbb{R}$ such that the equality $u=c$ holds ${\mbox{\boldmath$m$}}$-a.e.\ in $B$.
Given that $\underline{D}u\in D[u-c]\cap D[c-u]$ by axiom \textbf{A2} and Remark
\ref{rmk:conseq_D-structure}, we deduce from \textbf{L3} that
\[\begin{split}
&{\raise.3ex\hbox{$\chi$}}_{\{u>c\}}\,\underline{D}u\in D\big[(u-c)^+\big],\\
&{\raise.3ex\hbox{$\chi$}}_{\{u<c\}}\,\underline{D}u\in D\big[(c-u)^+\big].
\end{split}\]
Given that $u-c=(u-c)^+ -(c-u)^+$, by applying again axiom \textbf{A2} we see that
$${\raise.3ex\hbox{$\chi$}}_{\{u\neq c\}}\,\underline{D}u=
{\raise.3ex\hbox{$\chi$}}_{\{u>c\}}\,\underline{D}u+{\raise.3ex\hbox{$\chi$}}_{\{u<c\}}\,\underline{D}u
\in D[u-c]=D[u].$$
Hence the minimality of $\underline{D}u$ grants that
$$\int_{\rm X}(\underline{D}u)^p\,\d{\mbox{\boldmath$m$}}\leq\int_{\{u\neq c\}}(\underline{D}u)^p\,\d{\mbox{\boldmath$m$}},$$
which implies that $\underline{D}u=0$ holds ${\mbox{\boldmath$m$}}$-a.e.\ in $\{u=c\}$, thus also ${\mbox{\boldmath$m$}}$-a.e.\ in $B$.
This means that the $D$-structure satisfies the property \textbf{L2}, as required.\\
{\color{blue}\textbf{L4} $\Longrightarrow$ \textbf{L5}.} We argue by contradiction:
suppose the existence of $u\in{\rm S}^p({\rm X})$ and $g\in D[u]$ such that
${\mbox{\boldmath$m$}}\big(\{\underline Du>g\}\big)>0$, whence $h:=\min\{\underline D u,g\}\in L^p({\mbox{\boldmath$m$}})$
satisfies $\int h^p\,\d{\mbox{\boldmath$m$}}<\int(\underline D u)^p\,\d{\mbox{\boldmath$m$}}$. Since $h\in D[u]$ by
\textbf{L4}, we deduce that $\mathcal E_p(u)<\int(\underline D u)^p\,\d{\mbox{\boldmath$m$}}$,
getting a contradiction.\\
{\color{blue}\textbf{L5} $\Longrightarrow$ \textbf{L4}.} Since $\underline D u\leq g_1$
and $\underline D u\leq g_2$ hold ${\mbox{\boldmath$m$}}$-a.e., we see that $\underline D u\leq\min\{g_1,g_2\}$ holds ${\mbox{\boldmath$m$}}$-a.e.\ as well. Therefore $\min\{g_1,g_2\}\in D[u]$ by \textbf{A2}.\\
{\color{blue}\textbf{L1}+\textbf{L5} $\Longrightarrow$ \textbf{L2}+\textbf{L3}.}
Property \textbf{L1} grants the existence of $(g_n)_n\subseteq D[u]$
with $\int_B(g_n)^p\,\d{\mbox{\boldmath$m$}}\to 0$. Hence \textbf{L5} tells us that
$\int_B(\underline D u)^p\,\d{\mbox{\boldmath$m$}}\leq\lim_n\int_B(g_n)^p\,\d{\mbox{\boldmath$m$}}=0$,
which implies that $\underline D u=0$ holds ${\mbox{\boldmath$m$}}$-a.e.\ in $B$, yielding \textbf{L2}.
We now prove the validity of \textbf{L3}: it holds that $D[u]\subseteq D[u^+]$,
because we know that $h=\max\{h,0\}\in D\big[\max\{u,0\}\big]=D[u^+]$ for every $h\in D[u]$
by \textbf{A4} and $0\in D[0]$, in particular $u^+\in{\rm S}^p({\rm X})$.
Given that $u^+=0$ ${\mbox{\boldmath$m$}}$-a.e.\ in the set $\{u\leq 0\}$, one has that $\underline D u^+=0$
holds ${\mbox{\boldmath$m$}}$-a.e.\ in $\{u\leq 0\}$ by \textbf{L2}. Hence for any $g\in D[u]$
we have $\underline D u^+\leq{\raise.3ex\hbox{$\chi$}}_{\{u>0\}}\,g$ by \textbf{L5}, which implies
that ${\raise.3ex\hbox{$\chi$}}_{\{u>0\}}\,g\in D[u^+]$ by \textbf{A2}. Therefore \textbf{L3} is proved.
\end{proof}
\begin{definition}[Pointwise local]
Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a metric measure space and $p\in(1,\infty)$.
Then a $D$-structure on $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ is said to be \emph{pointwise local}
provided it satisfies \textbf{\emph{L1}} and \textbf{\emph{L5}} (thus
also \textbf{\emph{L2}}, \textbf{\emph{L3}} and \textbf{\emph{L4}} by Proposition
\ref{prop:implications_locality}).
\end{definition}
We now recall two other notions of locality for $D$-structures that have
appeared in the literature:
\begin{definition}[Strong locality]
Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a metric measure space and $p\in(1,\infty)$.
Consider a $D$-structure on $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$. Then we give the following definitions:
\begin{itemize}
\item[$\rm i)$] We say that $D$ is \emph{strongly local in the sense
of Timoshin} provided
\begin{equation}\label{eq:loc_Timosh}
{\raise.3ex\hbox{$\chi$}}_{\{u_1<u_2\}}\,g_1+{\raise.3ex\hbox{$\chi$}}_{\{u_2<u_1\}}\,g_2+{\raise.3ex\hbox{$\chi$}}_{\{u_1=u_2\}}\,(g_1\wedge g_2)
\in D[u_1\wedge u_2]
\end{equation}
whenever $u_1,u_2\in{\rm S}^p({\rm X})$, $g_1\in D[u_1]$ and $g_2\in D[u_2]$.
\item[$\rm ii)$] We say that $D$ is \emph{strongly local in the sense
of Shanmugalingam} provided
\begin{equation}
{\raise.3ex\hbox{$\chi$}}_B\,g_1+{\raise.3ex\hbox{$\chi$}}_{{\rm X}\setminus B}\,g_2\in D[u_2]
\quad\text{ for every }g_1\in D[u_1]\text{ and }g_2\in D[u_2]
\end{equation}
whenever $u_1,u_2\in{\rm S}^p({\rm X})$ satisfy $u_1=u_2$ ${\mbox{\boldmath$m$}}$-a.e.\ on some Borel
set $B\subseteq{\rm X}$.
\end{itemize}
\end{definition}
\medskip
The above two notions of strong locality have been proposed in \cite{Timosh}
and \cite{Shanm}, respectively.
We now prove that they are actually both equivalent to our pointwise locality property:
\begin{lemma}
Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a metric measure space and $p\in(1,\infty)$.
Fix any $D$-structure on $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$. Then the following are equivalent:
\begin{itemize}
\item[$\rm i)$] $D$ is pointwise local.
\item[$\rm ii)$] $D$ is strongly local in the sense of Shanmugalingam.
\item[$\rm iii)$] $D$ is strongly local in the sense of Timoshin.
\end{itemize}
\end{lemma}
\begin{proof}
\\
{\color{blue}${\rm i)}\Longrightarrow{\rm ii)}$} Fix $u_1,u_2\in{\rm S}^p({\rm X})$
such that $u_1=u_2$ ${\mbox{\boldmath$m$}}$-a.e.\ on some Borel set $B\subseteq{\rm X}$.
Pick $g_1\in D[u_1]$ and $g_2\in D[u_2]$. Observe that
$\underline D(u_2-u_1)+g_1\in D\big[(u_2-u_1)+u_1\big]=D[u_2]$ by \textbf{A2},
so that we have $\big(\underline D(u_2-u_1)+g_1\big)\wedge g_2\in D[u_2]$ by \textbf{L4}.
Since $\underline D(u_2-u_1)=0$ ${\mbox{\boldmath$m$}}$-a.e.\ on $B$ by \textbf{L2}, we see that
${\raise.3ex\hbox{$\chi$}}_B\,g_1+{\raise.3ex\hbox{$\chi$}}_{{\rm X}\setminus B}\,g_2\geq\big(\underline D(u_2-u_1)+g_1\big)\wedge g_2$
holds ${\mbox{\boldmath$m$}}$-a.e.\ in ${\rm X}$, whence accordingly we conclude that
${\raise.3ex\hbox{$\chi$}}_B\,g_1+{\raise.3ex\hbox{$\chi$}}_{{\rm X}\setminus B}\,g_2\in D[u_2]$ by \textbf{A2}.
This shows the validity of ii).\\
{\color{blue}${\rm ii)}\Longrightarrow{\rm i)}$} First of all, let us prove \textbf{L1}.
Let $u\in{\rm S}^p({\rm X})$ and $c\in\mathbb{R}$ satisfy $u=c$ ${\mbox{\boldmath$m$}}$-a.e.\ on some Borel
set $B\subseteq{\rm X}$. Given any $g\in D[u]$, we deduce from ii) that
${\raise.3ex\hbox{$\chi$}}_{{\rm X}\setminus B}\,g\in D[u]$, thus accordingly
$\mathcal E_p(u|B)\leq\int_B({\raise.3ex\hbox{$\chi$}}_{{\rm X}\setminus B}\,g)^p\,\d{\mbox{\boldmath$m$}}=0$. This proves
the property \textbf{L1}.
To show property \textbf{L4}, fix $u\in{\rm S}^p({\rm X})$ and $g_1,g_2\in D[u]$.
Let us denote $B:=\{g_1\leq g_2\}$. Therefore ii) grants that
$g_1\wedge g_2={\raise.3ex\hbox{$\chi$}}_B\,g_1+{\raise.3ex\hbox{$\chi$}}_{{\rm X}\setminus B}\,g_2\in D[u]$, thus
obtaining \textbf{L4}. By recalling Proposition \ref{prop:implications_locality},
we conclude that $D$ is pointwise local.\\
{\color{blue}${\rm i)}+{\rm ii)}\Longrightarrow{\rm iii)}$} Fix $u_1,u_2\in{\rm S}^p({\rm X})$,
$g_1\in D[u_1]$ and $g_2\in D[u_2]$.
Recall that $g_1\vee g_2\in D[u_1\wedge u_2]$ by axiom \textbf{A4}.
Hence by using property ii) twice we obtain that
\begin{equation}\label{eq:equiv_ptwse_local_aux}\begin{split}
{\raise.3ex\hbox{$\chi$}}_{\{u_1\leq u_2\}}\,g_1+{\raise.3ex\hbox{$\chi$}}_{\{u_1>u_2\}}\,(g_1\vee g_2)&\in D[u_1\wedge u_2],\\
{\raise.3ex\hbox{$\chi$}}_{\{u_2\leq u_1\}}\,g_2+{\raise.3ex\hbox{$\chi$}}_{\{u_2>u_1\}}\,(g_1\vee g_2)&\in D[u_1\wedge u_2].
\end{split}\end{equation}
The pointwise minimum between the two functions that are written in
\eqref{eq:equiv_ptwse_local_aux} -- namely given by
${\raise.3ex\hbox{$\chi$}}_{\{u_1<u_2\}}\,g_1+{\raise.3ex\hbox{$\chi$}}_{\{u_2<u_1\}}\,g_2+{\raise.3ex\hbox{$\chi$}}_{\{u_1=u_2\}}\,(g_1\wedge g_2)$
-- belongs to the class $D[u_1\wedge u_2]$ as well by property \textbf{L4},
thus showing iii).\\
{\color{blue}${\rm iii)}\Longrightarrow{\rm i)}$} First of all, let us prove \textbf{L1}.
Fix a function $u\in{\rm S}^p({\rm X})$ that is ${\mbox{\boldmath$m$}}$-a.e.\ equal to some constant $c\in\mathbb{R}$
on a Borel set $B\subseteq{\rm X}$, and pick any $g\in D[u]=D[u-c]$. By using iii) with $u_2:=0$
and the fact that $0\in D[0]$, we have that
\begin{equation}\label{eq:equiv_ptwse_local_aux2}\begin{split}
{\raise.3ex\hbox{$\chi$}}_{\{u<c\}}\,g&\in D\big[(u-c)\wedge 0\big]=D\big[-(c-u)^+\big]
=D\big[(c-u)^+\big],\\
{\raise.3ex\hbox{$\chi$}}_{\{u>c\}}\,g&\in D\big[(c-u)\wedge 0\big]=D\big[-(u-c)^+\big]
=D\big[(u-c)^+\big].
\end{split}\end{equation}
Since $u-c=(u-c)^+ -(c-u)^+$, we know from \textbf{A2} and
\eqref{eq:equiv_ptwse_local_aux2} that
\[{\raise.3ex\hbox{$\chi$}}_{\{u\neq c\}}\,g={\raise.3ex\hbox{$\chi$}}_{\{u<c\}}\,g+{\raise.3ex\hbox{$\chi$}}_{\{u>c\}}\,g\in D[u-c]=D[u],\]
whence $\mathcal E_p(u|B)\leq\int_B({\raise.3ex\hbox{$\chi$}}_{\{u\neq c\}}\,g)^p\,\d{\mbox{\boldmath$m$}}=0$.
This proves the property \textbf{L1}.
To show property \textbf{L4}, fix $u\in{\rm S}^p({\rm X})$ and $g_1,g_2\in D[u]$.
Hence \eqref{eq:loc_Timosh} with $u_1=u_2:=u$ simply reads as $g_1\wedge g_2\in D[u]$,
which gives \textbf{L4}. This proves that $D$ is pointwise local.
\end{proof}
\begin{remark}[\textbf{L1} does not imply \textbf{L2}]{\rm
In general, as we are going to show in the following example, it can happen that
a $D$-structure satisfies \textbf{L1} but not \textbf{L2}.
Let $G=(V,E)$ be a locally finite connected graph. The distance ${\sf d}(x,y)$ between two
vertices $x,y\in V$ is defined as the minimum length of a path joining $x$ to $y$,
while as a reference measure ${\mbox{\boldmath$m$}}$ on $V$ we choose the counting measure. Notice that
any function $u:\,V\to\mathbb{R}$ is locally Lipschitz and that any bounded subset of $V$
is finite. We define a $D$-structure on the metric measure space $(V,{\sf d},{\mbox{\boldmath$m$}})$ in the following way:
\begin{equation}\label{eq:D-structure_graphs}
D[u]:=\Big\{g:\,V\to [0,+\infty]\;\Big|\;\big|u(x)-u(y)\big|\leq g(x)+g(y)
\text{ for any }x,y\in V\text{ with }x\sim y\Big\}
\end{equation}
for every $u:\,V\to\mathbb{R}$, where the notation $x\sim y$ indicates that $x$ and $y$ are adjacent vertices,
i.e.\ that there exists an edge in $E$ joining $x$ to $y$.
We claim that $D$ fulfills \textbf{L1}. To prove it, suppose that some function $u:\,V\to\mathbb{R}$ is constant on some
set $B\subseteq V$, say $u(x)=c$ for every $x\in B$. Define the function $g:\,V\to[0,+\infty)$ as
$$g(x):=\left\{\begin{array}{ll}
0\\
|c|+\big|u(x)\big|
\end{array}\quad\begin{array}{ll}
\text{ if }x\in B,\\
\text{ if }x\in V\setminus B.
\end{array}\right.$$
Hence $g\in D[u]$ and $\int_B g^p\,\d{\mbox{\boldmath$m$}}=0$, so that $\mathcal{E}_p(u|B)=0$.
This proves the validity of \textbf{L1}.
On the other hand, if $V$ contains more than one vertex, then \textbf{L2} is not satisfied.
Indeed, consider any non-constant function $u:\,V\to\mathbb{R}$. Clearly any
pseudo-gradient $g\in D[u]$ of $u$ is not identically zero, thus there exists
$x\in V$ such that $\underline D u(x)>0$. Since $u$ is trivially constant
on the set $\{x\}$, we then conclude that property \textbf{L2} does not hold.
\fr}\end{remark}
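The construction in the previous remark can also be checked numerically. The following minimal sketch (ours, for illustration only; Python with NumPy, $p=2$) verifies on a $4$-cycle that the function $g$ defined above is a pseudo-gradient of $u$ in the sense of \eqref{eq:D-structure_graphs} and that it vanishes on $B$, whence $\mathcal{E}_2(u|B)=0$.
\begin{verbatim}
# Minimal sketch (illustration only, p = 2): the candidate g from the
# remark is a pseudo-gradient on a 4-cycle and vanishes on B.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]    # 4-cycle: x ~ y iff edge
u = np.array([1.0, 1.0, 2.0, -3.0])         # u = c = 1 on B = {0, 1}
B, c = {0, 1}, 1.0

def is_pseudo_gradient(g):
    # defining condition: |u(x) - u(y)| <= g(x) + g(y) for all x ~ y
    return all(abs(u[x] - u[y]) <= g[x] + g[y] for x, y in edges)

g = np.array([0.0 if x in B else abs(c) + abs(u[x]) for x in range(4)])
print(is_pseudo_gradient(g))                # True: g belongs to D[u]
print(sum(g[x]**2 for x in B))              # 0.0: hence E_2(u|B) = 0
\end{verbatim}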
Hereafter, we shall focus our attention on the pointwise local $D$-structures.
Under these locality assumptions, one can show the following calculus
rules for minimal pseudo-gradients, whose proof is suitably adapted from
analogous results that have been proved in \cite{AmbrosioGigliSavare11}.
\begin{proposition}[Calculus rules for $\underline D u$]\label{prop:properties_Du}
Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a metric measure space and let $p\in(1,\infty)$.
Consider a pointwise local $D$-structure on $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$.
Then the following hold:
\begin{itemize}
\item[$\rm i)$] Let $u\in{\rm S}^p({\rm X})$ and let $N\subseteq\mathbb{R}$ be a Borel
set with $\mathcal L^1(N)=0$. Then the equality $\underline D u=0$ holds ${\mbox{\boldmath$m$}}$-a.e.\ in
$u^{-1}(N)$.
\item[$\rm ii)$] \textsc{Chain rule}. Let $u\in{\rm S}^p({\rm X})$ and
$\varphi\in{\rm LIP}(\mathbb{R})$. Then $|\varphi'|\circ u\,\underline D u\in D[\varphi\circ u]$.
More precisely, $\varphi\circ u\in{\rm S}^p({\rm X})$ and
$\underline D(\varphi\circ u)=|\varphi'|\circ u\,\underline D u$ holds ${\mbox{\boldmath$m$}}$-a.e.\ in ${\rm X}$.
\item[$\rm iii)$] \textsc{Leibniz rule}. Let $u,v\in{\rm S}^p({\rm X})\cap L^\infty({\mbox{\boldmath$m$}})$.
Then $|u|\,\underline D v+|v|\,\underline D u\in D[uv]$. In other words,
$uv\in{\rm S}^p({\rm X})\cap L^\infty({\mbox{\boldmath$m$}})$ and $\underline D(uv)\leq|u|\,\underline D v+|v|\,\underline D u$
holds ${\mbox{\boldmath$m$}}$-a.e.\ in ${\rm X}$.
\end{itemize}
\end{proposition}
\begin{proof}\\
{\color{blue}\textsc{Step 1.}} First, consider $\varphi$ affine,
say $\varphi(t)=\alpha\,t+\beta$. Then
$|\varphi'|\circ u\,\underline D u=|\alpha|\,\underline D u\in D[\varphi\circ u]$
by Remark \ref{rmk:conseq_D-structure} and \textbf{A2}.
Now suppose that the function $\varphi$ is piecewise affine, i.e.\ there exists
a sequence $(a_k)_{k\in\mathbb{Z}}\subseteq\mathbb{R}$, with $a_k<a_{k+1}$
for all $k\in\mathbb{Z}$ and $a_0=0$, such that each $\varphi\restr{[a_k,a_{k+1}]}$ is an affine function.
Let us denote $A_k:=u^{-1}\big([a_k,a_{k+1})\big)$ and $u_k:=(u\vee a_k)\wedge a_{k+1}$ for every index $k\in\mathbb{Z}$.
By combining \textbf{L3} with the axioms \textbf{A2} and \textbf{A5}, we can see that
${\raise.3ex\hbox{$\chi$}}_{A_k}\,\underline D u\in D[u_k]$ for every $k\in\mathbb{Z}$. Called $\varphi_k:\,\mathbb{R}\to\mathbb{R}$
that affine function coinciding with $\varphi$ on $[a_k,a_{k+1})$, we deduce from
the previous case that
$|\varphi'_k|\circ u_k\,\underline D u_k\in D[\varphi_k\circ u_k]=D[\varphi\circ u_k]$,
whence we have that $|\varphi'|\circ u_k\,{\raise.3ex\hbox{$\chi$}}_{A_k}\,\underline D u\in D[\varphi\circ u_k]$
by \textbf{L5}, \textbf{A2} and \textbf{L2}.
Let us define $(v_n)_n\subseteq{\rm S}^p({\rm X})$ as
\[v_n:=\varphi(0)+\sum_{k=0}^n\big(\varphi\circ u_k-\varphi(a_k)\big)
+\sum_{k=-n}^{-1}\big(\varphi\circ u_k-\varphi(a_{k+1})\big)
\quad\text{ for every }n\in\mathbb{N}.\]
Hence $g_n:=\sum_{k=-n}^n|\varphi'|\circ u_k\,{\raise.3ex\hbox{$\chi$}}_{A_k}\,\underline D u\in D[v_n]$
for all $n\in\mathbb{N}$ by \textbf{A2} and Remark \ref{rmk:conseq_D-structure}.
Given that one has $v_n\to\varphi\circ u$ in $L^p_{loc}({\mbox{\boldmath$m$}})$ and $g_n\to|\varphi'|\circ u\,\underline D u$ in $L^p({\mbox{\boldmath$m$}})$ as $n\to\infty$, we finally conclude that
$|\varphi'|\circ u\,\underline D u\in D[\varphi\circ u]$, as required.\\
{\color{blue}\textsc{Step 2.}} We aim to prove the chain rule for $\varphi\in C^1(\mathbb{R})\cap{\rm LIP}(\mathbb{R})$. For any $n\in\mathbb{N}$, let us denote by $\varphi_n$ the piecewise
affine function interpolating the points $\big(k/2^n,\varphi(k/2^n)\big)$
with $k\in\mathbb{Z}$. We call $D\subseteq\mathbb{R}$ the countable set
$\big\{k/2^n\,:\,k\in\mathbb{Z},\,n\in\mathbb{N}\big\}$. Therefore $\varphi_n$ uniformly converges to
$\varphi$ and $\varphi'_n(t)\to\varphi'(t)$ for all $t\in\mathbb{R}\setminus D$.
In particular, the functions $g_n:=|\varphi'_n|\circ u\,\underline D u$
converge ${\mbox{\boldmath$m$}}$-a.e.\ to $|\varphi'|\circ u\,\underline D u$ by \textbf{L2}.
Moreover, ${\rm Lip}(\varphi_n)\leq{\rm Lip}(\varphi)$ for every $n\in\mathbb{N}$ by construction,
so that $(g_n)_n$ is a bounded sequence in $L^p({\mbox{\boldmath$m$}})$. This implies that
(up to a not relabeled subsequence) $g_n\rightharpoonup|\varphi'|\circ u\,\underline D u$
weakly in $L^p({\mbox{\boldmath$m$}})$. Now apply Mazur lemma: for any $n\in\mathbb{N}$, there exists
$(\alpha^n_i)_{i=n}^{N_n}\subseteq[0,1]$ such that $\sum_{i=n}^{N_n}\alpha^n_i=1$
and $h_n:=\sum_{i=n}^{N_n}\alpha^n_i\,g_i\overset{n}\to|\varphi'|\circ u\,\underline D u$
strongly in $L^p({\mbox{\boldmath$m$}})$.
Given that $g_n\in D[\varphi_n\circ u]$ for every $n\in\mathbb{N}$ by
\textsc{Step 1}, we deduce from axiom \textbf{A2} that $h_n\in D[\psi_n\circ u]$
for every $n\in\mathbb{N}$, where $\psi_n:=\sum_{i=n}^{N_n}\alpha^n_i\,\varphi_i$.
Finally, it clearly holds that $\psi_n\circ u\to\varphi\circ u$ in $L^p_{loc}({\mbox{\boldmath$m$}})$,
whence $|\varphi'|\circ u\,\underline D u\in D[\varphi\circ u]$ by \textbf{A5}.\\
{\color{blue}\textsc{Step 3.}} We claim that
\begin{equation}\label{eq:properties_Du_claim}
\underline D u=0\;\;\;{\mbox{\boldmath$m$}}\text{-a.e.\ in }u^{-1}(K),
\quad\text{ for every }K\subseteq\mathbb{R}\text{ compact with }\mathcal L^1(K)=0.
\end{equation}
For any $n\in\mathbb{N}\setminus\{0\}$, define $\psi_n:=n\,{\sf d}(\cdot,K)\wedge 1$
and denote by $\varphi_n$ the primitive of $\psi_n$ such that $\varphi_n(0)=0$.
Since each $\psi_n$ is continuous and bounded, any function $\varphi_n$ is
of class $C^1$ and Lipschitz. By applying the dominated convergence theorem
we see that the $\mathcal L^1$-measure of the $\eps$-neighbourhood of $K$
converges to $0$ as $\eps\searrow 0$, thus accordingly $\varphi_n$ uniformly converges
to ${\rm id}_\mathbb{R}$ as $n\to\infty$. This implies that $\varphi_n\circ u\to u$
in $L^p_{loc}({\mbox{\boldmath$m$}})$. Moreover, we know from \textsc{Step 2} that
$|\psi_n|\circ u\,\underline D u\in D[\varphi_n\circ u]$, thus also
${\raise.3ex\hbox{$\chi$}}_{{\rm X}\setminus u^{-1}(K)}\,\underline D u\in D[\varphi_n\circ u]$.
Hence ${\raise.3ex\hbox{$\chi$}}_{{\rm X}\setminus u^{-1}(K)}\,\underline D u\in D[u]$ by \textbf{A5},
which forces the equality $\underline D u=0$ to hold ${\mbox{\boldmath$m$}}$-a.e.\ in $u^{-1}(K)$,
proving \eqref{eq:properties_Du_claim}.\\
{\color{blue}\textsc{Step 4.}} We are in a position to prove i). Choose any
${\mbox{\boldmath$m$}}'\in\mathscr P({\rm X})$ such that ${\mbox{\boldmath$m$}}\ll{\mbox{\boldmath$m$}}'\ll{\mbox{\boldmath$m$}}$ and call $\mu:=u_*{\mbox{\boldmath$m$}}'$.
Then $\mu$ is a Radon measure on $\mathbb{R}$, in particular it is inner regular.
We can thus find an increasing sequence of compact sets $K_n\subseteq N$
such that $\mu\big(N\setminus\bigcup_n K_n\big)=0$. We already know from
\textsc{Step 3} that $\underline D u=0$ holds ${\mbox{\boldmath$m$}}$-a.e.\ in $\bigcup_n u^{-1}(K_n)$.
Since $u^{-1}(N)\setminus\bigcup_n u^{-1}(K_n)$ is ${\mbox{\boldmath$m$}}$-negligible
by definition of $\mu$, we conclude that $\underline D u=0$
holds ${\mbox{\boldmath$m$}}$-a.e.\ in $u^{-1}(N)$. This shows the validity of property i).\\
{\color{blue}\textsc{Step 5.}} We now prove ii). Let us fix $\varphi\in{\rm LIP}(\mathbb{R})$.
Choose some convolution kernels $(\rho_n)_n$ and define $\varphi_n:=\varphi*\rho_n$
for all $n\in\mathbb{N}$. Then $\varphi_n\to\varphi$ uniformly and $\varphi'_n\to\varphi'$
pointwise $\mathcal L^1$-a.e., whence accordingly $\varphi_n\circ u\to\varphi\circ u$ in
$L^p_{loc}({\mbox{\boldmath$m$}})$ and $|\varphi'_n|\circ u\,\underline D u\to|\varphi'|\circ u\,\underline D u$
pointwise ${\mbox{\boldmath$m$}}$-a.e.\ in ${\rm X}$. Since $|\varphi'_n|\circ u\,\underline D u
\leq{\rm Lip}(\varphi)\,\underline D u$ for all $n\in\mathbb{N}$, there exists a (not relabeled)
subsequence such that $|\varphi'_n|\circ u\,\underline D u\rightharpoonup|\varphi'|\circ u\,\underline D u$
weakly in $L^p({\mbox{\boldmath$m$}})$. We know that $|\varphi'_n|\circ u\,\underline D u\in D[\varphi_n\circ u]$ for all $n\in\mathbb{N}$ because the chain rule holds for all $\varphi_n\in C^1(\mathbb{R})\cap{\rm LIP}(\mathbb{R})$,
hence by combining Mazur lemma and \textbf{A5} as in \textsc{Step 2} we obtain that
$|\varphi'|\circ u\,\underline D u\in D[\varphi\circ u]$, so that
$\varphi\circ u\in{\rm S}^p({\rm X})$ and the inequality
$\underline D(\varphi\circ u)\leq|\varphi'|\circ u\,\underline D u$
holds ${\mbox{\boldmath$m$}}$-a.e.\ in ${\rm X}$.\\
{\color{blue}\textsc{Step 6.}} We conclude the proof of ii) by showing that one
actually has $\underline D(\varphi\circ u)=|\varphi'|\circ u\,\underline D u$.
We can suppose without loss of generality that ${\rm Lip}(\varphi)=1$.
Let us define the functions $\psi_\pm$ as $\psi_\pm(t):=\pm t-\varphi(t)$
for all $t\in\mathbb{R}$; note that $|\varphi'|+|\psi'_\pm|=1$ holds $\mathcal L^1$-a.e.\ on $\{\pm\varphi'\geq 0\}$, because ${\rm Lip}(\varphi)=1$. Then it holds ${\mbox{\boldmath$m$}}$-a.e.\ in $u^{-1}\big(\{\pm\varphi'\geq 0\}\big)$ that
$$\underline D u=\underline D(\pm u)\leq
\underline D(\varphi\circ u)+\underline D(\psi_\pm\circ u)
\leq\big(|\varphi'|\circ u+|\psi'_\pm|\circ u\big)\,\underline D u=\underline D u,$$
which forces the equality $\underline D(\varphi\circ u)=\pm\varphi'\circ u\,\underline D u$
to hold ${\mbox{\boldmath$m$}}$-a.e.\ in the set $u^{-1}\big(\{\pm\varphi'\geq 0\}\big)$.
This grants the validity of $\underline D(\varphi\circ u)=|\varphi'|\circ u\,\underline D u$,
thus completing the proof of item ii).\\
{\color{blue}\textsc{Step 7.}} We show iii) for the case in which
$u,v\geq c$ is satisfied ${\mbox{\boldmath$m$}}$-a.e.\ in ${\rm X}$, for some $c>0$.
Call $\eps:=\min\{c,c^2\}$, so that $u$, $v$ and $uv$ all take values in
$[\eps,+\infty)$ ${\mbox{\boldmath$m$}}$-a.e., and note that the function $\log$ is Lipschitz on
the interval $[\eps,+\infty)$; then choose any Lipschitz function $\varphi:\,\mathbb{R}\to\mathbb{R}$
that coincides with $\log$ on $[\eps,+\infty)$.
Now call $C$ the constant $\log\big({\|uv\|}_{L^\infty({\mbox{\boldmath$m$}})}\big)$
and choose a Lipschitz function $\psi:\,\mathbb{R}\to\mathbb{R}$ such that $\psi=\exp$ on the interval
$[\log\eps,C]$. By applying twice the chain rule ii), we thus deduce that
$uv\in{\rm S}^p({\rm X})$ and the ${\mbox{\boldmath$m$}}$-a.e.\ inequalities
\[\begin{split}
\underline D(uv)&\leq|\psi'|\circ\varphi\circ(uv)\,\underline D
\big(\varphi\circ(uv)\big)\leq|uv|\,\big(\underline D\log u+\underline D\log v\big)\\
&=|uv|\left(\frac{\underline D u}{|u|}+\frac{\underline D v}{|v|}\right)=
|u|\,\underline D v+|v|\,\underline D u.
\end{split}\]
Therefore the Leibniz rule iii) is verified under the additional assumption that $u,v\geq c>0$.\\
{\color{blue}\textsc{Step 8.}}
We conclude by proving item iii) for general $u,v\in{\rm S}^p({\rm X})\cap L^\infty({\mbox{\boldmath$m$}})$.
Given any $n\in\mathbb{N}$ and $k\in\mathbb{Z}$, let us denote $I_{n,k}:=\big[k/n,(k+1)/n\big)$.
Call $\varphi_{n,k}:\,\mathbb{R}\to\mathbb{R}$ the continuous function that is the identity on $I_{n,k}$ and constant elsewhere.
For any $n\in\mathbb{N}$, let us define
\begin{align*}
u_{n,k}&:=u-\frac{k-1}{n},&\tilde u_{n,k}&:=\varphi_{n,k}\circ u-\frac{k-1}{n}
&\text{ for all }k\in\mathbb{Z},\\
v_{n,\ell}&:=v-\frac{\ell-1}{n},&\tilde v_{n,\ell}
&:=\varphi_{n,\ell}\circ v-\frac{\ell-1}{n}&\text{ for all }\ell\in\mathbb{Z}.
\end{align*}
Notice that the equalities $u_{n,k}=\tilde u_{n,k}$ and $v_{n,\ell}=\tilde v_{n,\ell}$
hold ${\mbox{\boldmath$m$}}$-a.e.\ in $u^{-1}(I_{n,k})$ and $v^{-1}(I_{n,\ell})$, respectively. Hence
$\underline D u_{n,k}=\underline D\tilde u_{n,k}=\underline D u$ and
$\underline D v_{n,\ell}=\underline D\tilde v_{n,\ell}=\underline D v$
hold ${\mbox{\boldmath$m$}}$-a.e.\ in $u^{-1}(I_{n,k})$ and $v^{-1}(I_{n,\ell})$, respectively; for the same reason,
we also have that
\[\underline D(u_{n,k}\,v_{n,\ell})=\underline D(\tilde u_{n,k}\,\tilde v_{n,\ell})
\quad\text{ is verified }{\mbox{\boldmath$m$}}\text{-a.e.\ in }u^{-1}(I_{n,k})\cap v^{-1}(I_{n,\ell}).\]
Moreover, we have the ${\mbox{\boldmath$m$}}$-a.e.\ inequalities
$1/n\leq\tilde u_{n,k},\tilde v_{n,\ell}\leq 2/n$ by construction. Therefore for
any $k,\ell\in\mathbb{Z}$ it holds ${\mbox{\boldmath$m$}}$-a.e.\ in $u^{-1}(I_{n,k})\cap v^{-1}(I_{n,\ell})$ that
\[\begin{split}
\underline D(uv)&\leq\underline D(\tilde u_{n,k}\,\tilde v_{n,\ell})
+\frac{|k-1|}{n}\,\underline D v_{n,\ell}+\frac{|\ell-1|}{n}\,\underline D u_{n,k}\\
&\leq|\tilde v_{n,\ell}|\,\underline D\tilde u_{n,k}+
|\tilde u_{n,k}|\,\underline D\tilde v_{n,\ell}+
\frac{|k-1|}{n}\,\underline D v_{n,\ell}+\frac{|\ell-1|}{n}\,\underline D u_{n,k}\\
&\leq\left(|v|+\frac{4}{n}\right)\underline D u
+\left(|u|+\frac{4}{n}\right)\underline D v,
\end{split}\]
where the second inequality follows from the case $u,v\geq c>0$, treated in \textsc{Step 7},
while the third one uses the ${\mbox{\boldmath$m$}}$-a.e.\ bounds $|k-1|/n\leq|u|+2/n$ on $u^{-1}(I_{n,k})$
and $|\ell-1|/n\leq|v|+2/n$ on $v^{-1}(I_{n,\ell})$.
This implies that the inequality
$\underline D(uv)\leq|u|\,\underline D v+|v|\,\underline D u+4\,(\underline D u
+\underline D v)/n$ holds ${\mbox{\boldmath$m$}}$-a.e.\ in ${\rm X}$.
Given that $n\in\mathbb{N}$ is arbitrary, the Leibniz rule iii) follows.
\end{proof}
\section{Cotangent module associated to a \texorpdfstring{$D$}{D}-structure}
It is shown in \cite{Gigli14} that any metric measure space possesses a
first-order differential structure, whose construction relies upon the
notion of \emph{$L^p({\mbox{\boldmath$m$}})$-normed $L^\infty({\mbox{\boldmath$m$}})$-module}. For completeness,
we briefly recall its definition and we refer to \cite{Gigli14,Gigli17} for
a comprehensive exposition of this topic.
\begin{definition}[Normed module]
Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a metric measure space and $p\in[1,\infty)$.
Then an \emph{$L^p({\mbox{\boldmath$m$}})$-normed $L^\infty({\mbox{\boldmath$m$}})$-module} is any quadruplet
$\big(\mathscr M,{\|\cdot\|}_{\mathscr M},\,\cdot\,,|\cdot|\big)$ such that
\begin{itemize}
\item[$\rm i)$] $\big(\mathscr M,{\|\cdot\|}_{\mathscr M}\big)$ is a Banach space,
\item[$\rm ii)$] $(\mathscr M,\cdot)$ is an algebraic module over the
commutative ring $L^\infty({\mbox{\boldmath$m$}})$,
\item[$\rm iii)$] the \emph{pointwise norm} operator
$|\cdot|:\,\mathscr M\to L^p({\mbox{\boldmath$m$}})^+$ satisfies
\begin{equation}\label{eq:ptwse_norm}\begin{split}
|f\cdot v|=|f||v|\;\;\;{\mbox{\boldmath$m$}}\text{-a.e.}&\quad\text{ for every }f\in L^\infty({\mbox{\boldmath$m$}})\text{ and }v\in\mathscr M,\\
{\|v\|}_{\mathscr M}={\big\||v|\big\|}_{L^p({\mbox{\boldmath$m$}})}&\quad\text{ for every }v\in\mathscr M.
\end{split}\end{equation}
\end{itemize}
\end{definition}
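The simplest example to keep in mind is $\mathscr M=L^p({\mbox{\boldmath$m$}})$ itself:
endowed with its usual norm, with the pointwise multiplication by
$L^\infty({\mbox{\boldmath$m$}})$-functions and with the absolute value as pointwise norm,
it clearly fulfills the requirements i), ii) and iii) above.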
A key role in \cite{Gigli14} is played by the \emph{cotangent module} $L^2(T^*{\rm X})$,
which has a structure of $L^2({\mbox{\boldmath$m$}})$-normed $L^\infty({\mbox{\boldmath$m$}})$-module;
see \cite[Theorem/Definition 1.8]{Gigli17} for its characterisation.
The following result shows that a generalised version of such object can be
actually associated to any $D$-structure,
provided the latter is assumed to be pointwise local.
\begin{theorem}[Cotangent module associated to a $D$-structure]\label{thm:cot_mod}
Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be any metric measure space and let $p\in(1,\infty)$.
Consider a pointwise local $D$-structure on $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$.
Then there exists a unique couple $\big(L^p(T^*{\rm X};D),\d\big)$, where $L^p(T^*{\rm X};D)$
is an $L^p({\mbox{\boldmath$m$}})$-normed $L^\infty({\mbox{\boldmath$m$}})$-module and $\d:\,{\rm S}^p({\rm X})\to L^p(T^*{\rm X};D)$
is a linear map, such that the following hold:
\begin{itemize}
\item[$\rm i)$] the equality $|\d u|=\underline{D}u$ is satisfied ${\mbox{\boldmath$m$}}$-a.e.\ in
${\rm X}$ for every $u\in{\rm S}^p({\rm X})$,
\item[$\rm ii)$] the vector space $\mathcal{V}$ of all elements of the form $\sum_{i=1}^n{\raise.3ex\hbox{$\chi$}}_{B_i}\,\d u_i$,
where $(B_i)_i$ is a Borel partition of ${\rm X}$ and $(u_i)_i\subseteq{\rm S}^p({\rm X})$, is dense in the space $L^p(T^*{\rm X};D)$.
\end{itemize}
Uniqueness has to be understood up to unique isomorphism: given another such couple $(\mathscr{M},\d')$,
there is a unique isomorphism $\Phi:\,L^p(T^*{\rm X};D)\to\mathscr{M}$
such that $\Phi(\d u)=\d' u$ for all $u\in{\rm S}^p({\rm X})$.
The space $L^p(T^*{\rm X};D)$ is called \emph{cotangent module},
while the map $\d$ is called \emph{differential}.
\end{theorem}
\begin{proof}\\
{\color{blue}\textsc{Uniqueness.}}
Consider any element $\omega\in\mathcal{V}$ written as
$\omega=\sum_{i=1}^n{\raise.3ex\hbox{$\chi$}}_{B_i}\,\d u_i$,
with $(B_i)_i$ Borel partition of ${\rm X}$ and $u_1,\ldots,u_n\in{\rm S}^p({\rm X})$. Notice that the requirements
that $\Phi$ is $L^\infty({\mbox{\boldmath$m$}})$-linear and $\Phi\circ\d=\d'$
force the definition $\Phi(\omega):=\sum_{i=1}^n{\raise.3ex\hbox{$\chi$}}_{B_i}\,\d'u_i$. The ${\mbox{\boldmath$m$}}$-a.e.\ equality
$$\big|\Phi(\omega)\big|=\sum_{i=1}^n{\raise.3ex\hbox{$\chi$}}_{B_i}\,|\d' u_i|=
\sum_{i=1}^n{\raise.3ex\hbox{$\chi$}}_{B_i}\,\underline{D}u_i=\sum_{i=1}^n{\raise.3ex\hbox{$\chi$}}_{B_i}\,|\d u_i|=|\omega|$$
grants that $\Phi(\omega)$ is well-defined, in the sense that it does not depend on the particular way of
representing $\omega$, and that $\Phi:\,\mathcal{V}\to\mathscr{M}$ preserves the pointwise norm.
In particular, one has that the map $\Phi:\,\mathcal{V}\to\mathscr{M}$ is (linear and) continuous.
Since $\mathcal{V}$ is dense in $L^p(T^*{\rm X};D)$, we can uniquely extend $\Phi$
to a linear and continuous map $\Phi:\,L^p(T^*{\rm X};D)\to\mathscr{M}$, which also preserves the pointwise
norm. Moreover, we deduce from the very definition of $\Phi$ that the identity $\Phi(h\,\omega)=h\,\Phi(\omega)$
holds for every $\omega\in\mathcal{V}$ and $h\in{\sf Sf}({\rm X})$, whence the $L^\infty({\mbox{\boldmath$m$}})$-linearity
of $\Phi$ follows by an approximation argument. Finally, the image $\Phi(\mathcal{V})$ is dense
in $\mathscr{M}$, which implies that $\Phi$ is surjective. Therefore $\Phi$ is the unique isomorphism
satisfying $\Phi\circ\d=\d'$.\\
{\color{blue}\textsc{Existence.}}
First of all, let us define the \emph{pre-cotangent module} as
\[{\sf Pcm}:=\left\{\big\{(B_i,u_i)\big\}_{i=1}^n\;\bigg|\begin{array}{ll}
\;n\in\mathbb{N},\;u_1,\ldots,u_n\in{\rm S}^p({\rm X}),\\
(B_i)_{i=1}^n\text{ Borel partition of }{\rm X}
\end{array}\right\}.\]
We define an equivalence relation on $\sf Pcm$ as follows: we declare that
$\big\{(B_i,u_i)\big\}_i\sim\big\{(C_j,v_j)\big\}_j$ provided $\underline{D}(u_i-v_j)=0$
holds ${\mbox{\boldmath$m$}}$-a.e.\ on $B_i\cap C_j$ for every $i,j$. The equivalence class of
an element $\big\{(B_i,u_i)\big\}_i$ of $\sf Pcm$ will be denoted by $[B_i,u_i]_i$.
We can endow the quotient ${\sf Pcm}/\sim$ with a vector space structure:
\begin{equation}\label{eq:def_vector_sp_operations}\begin{split}
[B_i,u_i]_i+[C_j,v_j]_j&:=[B_i\cap C_j,u_i+v_j]_{i,j},\\
\lambda\,[B_i,u_i]_i&:=[B_i,\lambda\,u_i]_i,
\end{split}\end{equation}
for every $[B_i,u_i]_i,[C_j,v_j]_j\in{\sf Pcm}/\sim$ and $\lambda\in\mathbb{R}$.
We only check that the sum operator is well-defined; the proof of the well-posedness of
the multiplication by scalars follows along the same lines.
Suppose that $\big\{(B_i,u_i)\big\}_i\sim\big\{(B'_k,u'_k)\big\}_k$
and $\big\{(C_j,v_j)\big\}_j\sim\big\{(C'_\ell,v'_\ell)\big\}_\ell$,
in other words $\underline D(u_i-u'_k)=0$ ${\mbox{\boldmath$m$}}$-a.e.\ on $B_i\cap B'_k$
and $\underline D(v_j-v'_\ell)=0$ ${\mbox{\boldmath$m$}}$-a.e.\ on $C_j\cap C'_\ell$ for every
$i,j,k,\ell$, whence accordingly
\[\underline D\big((u_i+v_j)-(u'_k+v'_\ell)\big)\overset{\textbf{L5}}\leq
\underline D(u_i-u'_k)+\underline D(v_j-v'_\ell)=0\quad\text{ holds }
{\mbox{\boldmath$m$}}\text{-a.e.\ on }(B_i\cap C_j)\cap(B'_k\cap C'_\ell).\]
This shows that $\big\{(B_i\cap C_j,u_i+v_j)\big\}_{i,j}\sim
\big\{(B'_k\cap C'_\ell,u'_k+v'_\ell)\big\}_{k,\ell}$, thus proving
that the sum operator defined in \eqref{eq:def_vector_sp_operations} is well-posed.
Now let us define
\begin{equation}\label{eq:def_norm}
{\big\|[B_i,u_i]_i\big\|}_{L^p(T^*{\rm X};D)}:=
\bigg(\sum_{i=1}^n\int_{B_i}(\underline{D}u_i)^p\,\d{\mbox{\boldmath$m$}}\bigg)^{1/p}
\quad\text{ for every }[B_i,u_i]_i\in{\sf Pcm}/\sim.
\end{equation}
Such definition is well-posed: if $\big\{(B_i,u_i)\big\}_i\sim\big\{(C_j,v_j)\big\}_j$
then for all $i,j$ it holds that
\[|\underline D u_i-\underline D v_j|\overset{\textbf{L5}}\leq
\underline D(u_i-v_j)=0\quad{\mbox{\boldmath$m$}}\text{-a.e.\ on }B_i\cap C_j,\]
i.e.\ that the equality $\underline D u_i=\underline D v_j$ is satisfied
${\mbox{\boldmath$m$}}$-a.e.\ on $B_i\cap C_j$. Therefore one has that
\[\begin{split}
\sum_i\int_{B_i}(\underline D u_i)^p\,\d{\mbox{\boldmath$m$}}
&=\sum_{i,j}\int_{B_i\cap C_j}(\underline D u_i)^p\,\d{\mbox{\boldmath$m$}}
=\sum_{i,j}\int_{B_i\cap C_j}(\underline D v_j)^p\,\d{\mbox{\boldmath$m$}}\\
&=\sum_j\int_{C_j}(\underline D v_j)^p\,\d{\mbox{\boldmath$m$}},
\end{split}\]
which grants that ${\|\cdot\|}_{L^p(T^*{\rm X};D)}$ in \eqref{eq:def_norm} is well-defined.
The fact that it is a norm on ${\sf Pcm}/\sim$ easily follows from standard verifications.
Hence let us define
\[\begin{split}
&L^p(T^*{\rm X};D):=\text{completion of }\big({\sf Pcm}/\sim,{\|\cdot\|}_{L^p(T^*{\rm X};D)}\big),\\
&\d:\,{\rm S}^p({\rm X})\to L^p(T^*{\rm X};D),\;\;\;\d u:=[{\rm X},u]\text{ for every }u\in{\rm S}^p({\rm X}).
\end{split}\]
Observe that $L^p(T^*{\rm X};D)$ is a Banach space and that $\d$ is a linear operator.
Furthermore, given any $[B_i,u_i]_i\in{\sf Pcm}/\sim$ and
$h=\sum_j\lambda_j\,{\raise.3ex\hbox{$\chi$}}_{C_j}\in{\sf Sf}({\rm X})$,
where $(\lambda_j)_j\subseteq\mathbb{R}$ and $(C_j)_j$ is a Borel partition of ${\rm X}$, we set
\[\begin{split}
\big|[B_i,u_i]_i\big|&:=\sum_i{\raise.3ex\hbox{$\chi$}}_{B_i}\,\underline{D}u_i,\\
h\,[B_i,u_i]_i&:=[B_i\cap C_j,\lambda_j\,u_i]_{i,j}.
\end{split}\]
One can readily prove that such operations, which are well-posed again by the pointwise
locality of $D$, can be uniquely extended to a pointwise norm
$|\cdot|:\,L^p(T^*{\rm X};D)\to L^p({\mbox{\boldmath$m$}})^+$ and to a multiplication by $L^\infty$-functions
$L^\infty({\mbox{\boldmath$m$}})\times L^p(T^*{\rm X};D)\to L^p(T^*{\rm X};D)$, respectively.
Therefore the space $L^p(T^*{\rm X};D)$ turns out to be an $L^p({\mbox{\boldmath$m$}})$-normed $L^\infty({\mbox{\boldmath$m$}})$-module
when equipped with the operations described so far. In order to conclude,
it suffices to notice that
$$|\d u|=\big|[{\rm X},u]\big|=\underline{D}u\;\;\;\text{holds }{\mbox{\boldmath$m$}}\text{-a.e.}
\quad\text{ for every }u\in{\rm S}^p({\rm X})$$
and that $[B_i,u_i]_i=\sum_i{\raise.3ex\hbox{$\chi$}}_{B_i}\,\d u_i$ for all $[B_i,u_i]_i\in{\sf Pcm}/\sim$,
giving i) and ii), respectively.
\end{proof}
\bigskip
In full analogy with the properties of the cotangent module that is
studied in \cite{Gigli14}, we can show that the differential $\d$ introduced
in Theorem \ref{thm:cot_mod} is a closed operator, which satisfies both
the chain rule and the Leibniz rule.
\begin{theorem}[Closure of the differential]\label{thm:closure_d}
Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a metric measure space and let $p\in(1,\infty)$.
Consider a pointwise local $D$-structure on $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$.
Then the differential operator $\d$ is \emph{closed}, i.e.\ if a sequence $(u_n)_n\subseteq{\rm S}^p({\rm X})$
converges in $L^p_{loc}({\mbox{\boldmath$m$}})$ to some $u\in L^p_{loc}({\mbox{\boldmath$m$}})$ and $\d u_n\rightharpoonup\omega$
weakly in $L^p(T^*{\rm X};D)$ for some $\omega\in L^p(T^*{\rm X};D)$, then $u\in{\rm S}^p({\rm X})$
and $\d u=\omega$.
\end{theorem}
\begin{proof}
Since $\d$ is linear, we can assume with no loss of generality that
$\d u_n\to\omega$ in $L^p(T^*{\rm X};D)$ by Mazur lemma, so that
$\d(u_n-u_m)\to\omega-\d u_m$ in $L^p(T^*{\rm X};D)$ for any $m\in\mathbb{N}$.
In particular, one has $u_n-u_m\to u-u_m$ in $L^p_{loc}({\mbox{\boldmath$m$}})$ and
$\underline D(u_n-u_m)=\big|\d(u_n-u_m)\big|\to|\omega-\d u_m|$
in $L^p({\mbox{\boldmath$m$}})$ as $n\to\infty$ for all $m\in\mathbb{N}$,
whence $u-u_m\in{\rm S}^p({\rm X})$ and $\underline D(u-u_m)\leq|\omega-\d u_m|$
holds ${\mbox{\boldmath$m$}}$-a.e.\ for all $m\in\mathbb{N}$ by \textbf{A5} and \textbf{L5}. Therefore
$u=(u-u_0)+u_0\in{\rm S}^p({\rm X})$ and
\[\begin{split}
\varlimsup_{m\to\infty}{\|\d u-\d u_m\|}_{L^p(T^*{\rm X};D)}
&=\varlimsup_{m\to\infty}{\big\|\underline D(u-u_m)\big\|}_{L^p({\mbox{\boldmath$m$}})}
\leq\varlimsup_{m\to\infty}{\|\omega-\d u_m\|}_{L^p(T^*{\rm X};D)}\\
&=\varlimsup_{m\to\infty}\lim_{n\to\infty}{\|\d u_n-\d u_m\|}_{L^p(T^*{\rm X};D)}=0,
\end{split}\]
which grants that $\d u_m\to\d u$ in $L^p(T^*{\rm X};D)$ as $m\to\infty$ and
accordingly that $\d u=\omega$.
\end{proof}
\begin{proposition}[Calculus rules for $\d u$]
Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be any metric measure space and let $p\in(1,\infty)$.
Consider a pointwise local $D$-structure on $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$.
Then the following hold:
\begin{itemize}
\item[$\rm i)$] Let $u\in{\rm S}^p({\rm X})$ and let $N\subseteq\mathbb{R}$ be a Borel
set with $\mathcal L^1(N)=0$. Then ${\raise.3ex\hbox{$\chi$}}_{u^{-1}(N)}\,\d u=0$.
\item[$\rm ii)$] \textsc{Chain rule}. Let $u\in{\rm S}^p({\rm X})$ and
$\varphi\in{\rm LIP}(\mathbb{R})$ be given. Recall that $\varphi\circ u\in{\rm S}^p({\rm X})$
by Proposition \ref{prop:properties_Du}. Then $\d(\varphi\circ u)=\varphi'\circ u\,\d u$.
\item[$\rm iii)$] \textsc{Leibniz rule}. Let $u,v\in{\rm S}^p({\rm X})\cap L^\infty({\mbox{\boldmath$m$}})$
be given. Recall that $uv\in{\rm S}^p({\rm X})\cap L^\infty({\mbox{\boldmath$m$}})$
by Proposition \ref{prop:properties_Du}. Then $\d(uv)=u\,\d v+v\,\d u$.
\end{itemize}
\end{proposition}
\begin{proof}\\
{\color{blue}$\rm i)$} We have that $|\d u|=\underline D u=0$ holds ${\mbox{\boldmath$m$}}$-a.e.\ on
$u^{-1}(N)$ by item i) of Proposition \ref{prop:properties_Du},
thus accordingly ${\raise.3ex\hbox{$\chi$}}_{u^{-1}(N)}\,\d u=0$, as required.\\
{\color{blue}$\rm ii)$} If $\varphi$ is an affine function, say $\varphi(t)=\alpha\,t+\beta$,
then $\d(\varphi\circ u)=\d(\alpha\,u+\beta)=\alpha\,\d u=\varphi'\circ u\,\d u$.
Now suppose that $\varphi$ is a piecewise affine function. Say that $(I_n)_n$ is a sequence
of intervals whose union covers the whole real line $\mathbb{R}$ and that $(\psi_n)_n$ is a sequence of
affine functions such that $\varphi\restr{I_n}=\psi_n$ holds for every $n\in\mathbb{N}$.
Since $\varphi'$ and $\psi'_n$ coincide $\mathcal L^1$-a.e.\ in the interior of $I_n$, we have that
$\d(\varphi\circ u)=\d(\psi_n\circ u)=\psi'_n\circ u\,\d u=\varphi'\circ u\,\d u$ holds ${\mbox{\boldmath$m$}}$-a.e.\ on
$u^{-1}(I_n)$ for all $n$, so that $\d(\varphi\circ u)=\varphi'\circ u\,\d u$ is verified
${\mbox{\boldmath$m$}}$-a.e.\ on $\bigcup_n u^{-1}(I_n)={\rm X}$.
To prove the case of a general Lipschitz function $\varphi:\,\mathbb{R}\to\mathbb{R}$, we want to approximate
$\varphi$ with a sequence of piecewise affine functions: for any $n\in\mathbb{N}$, let us denote
by $\varphi_n$ the function that coincides with $\varphi$ at $\big\{k/2^n\,:\,k\in\mathbb{Z}\big\}$
and that is affine on the interval $\big[k/2^n,(k+1)/2^n\big]$ for every $k\in\mathbb{Z}$.
It is clear that ${\rm Lip}(\varphi_n)\leq{\rm Lip}(\varphi)$ for all $n\in\mathbb{N}$.
Moreover, one can readily check that, up to a not relabeled subsequence,
$\varphi_n\to\varphi$ uniformly on $\mathbb{R}$ and $\varphi'_n\to\varphi'$ pointwise
$\mathcal L^1$-almost everywhere. The former grants that
$\varphi_n\circ u\to\varphi\circ u$ in $L^p_{loc}({\mbox{\boldmath$m$}})$. Given that
$|\varphi'_n-\varphi'|^p\circ u\,(\underline D u)^p
\leq 2^p\,{\rm Lip}(\varphi)^p\,(\underline D u)^p\in L^1({\mbox{\boldmath$m$}})$ for all $n\in\mathbb{N}$
and $|\varphi'_n-\varphi'|^p\circ u\,(\underline D u)^p\to 0$ pointwise
${\mbox{\boldmath$m$}}$-a.e.\ by the latter above together with i), we obtain
$\int|\varphi'_n-\varphi'|^p\circ u\,(\underline D u)^p\,\d{\mbox{\boldmath$m$}}\to 0$ as $n\to\infty$
by the dominated convergence theorem. In other words,
$\varphi'_n\circ u\,\d u\to\varphi'\circ u\,\d u$ in the strong topology
of $L^p(T^*{\rm X};D)$. Hence Theorem \ref{thm:closure_d} ensures that
$\d(\varphi\circ u)=\varphi'\circ u\,\d u$, thus proving the chain rule ii)
for any $\varphi\in{\rm LIP}(\mathbb{R})$.\\
{\color{blue}$\rm iii)$} In the case $u,v\geq 1$, we argue as in
the proof of Proposition \ref{prop:properties_Du} to deduce from ii) that
\[\frac{\d(uv)}{uv}=\d\log(uv)=\d\big(\log(u)+\log(v)\big)=\d\log(u)+\d\log(v)=
\frac{\d u}{u}+\frac{\d v}{v},\]
whence we get $\d(uv)=u\,\d v+v\,\d u$ by multiplying both sides by $uv$.
In the general case $u,v\in L^\infty({\mbox{\boldmath$m$}})$, choose a constant $C>0$ so big that
$u+C,v+C\geq 1$. By the case treated above, we know that
\begin{equation}\label{eq:calc_rules_d_aux1}\begin{split}
\d\big((u+C)(v+C)\big)&=(u+C)\,\d(v+C)+(v+C)\,\d(u+C)\\
&=(u+C)\,\d v+(v+C)\,\d u\\
&=u\,\d v+v\,\d u+C\,\d(u+v),
\end{split}\end{equation}
while a direct computation yields
\begin{equation}\label{eq:calc_rules_d_aux2}
\d\big((u+C)(v+C)\big)=\d\big(uv+C(u+v)+C^2\big)
=\d(uv)+C\,\d(u+v).
\end{equation}
By subtracting \eqref{eq:calc_rules_d_aux2} from \eqref{eq:calc_rules_d_aux1},
we finally obtain that $\d(uv)=u\,\d v+v\,\d u$, as required.
This completes the proof of the Leibniz rule iii).
\end{proof}
\bigskip
\noindent{\bf Acknowledgements.}
\noindent This research has been supported by the MIUR SIR-grant `Nonsmooth Differential Geometry' (RBSI147UG4).
\section{Introduction}
Rotation velocity sensing is an important part of inertial navigation,
the aim of which is to measure the rotation velocity of a non-inertial
system; it is realized by an instrument named the gyroscope \cite{key-1gr,key-9gr2}.
The earliest gyroscopes utilized the precession of a mechanical rotor,
and the goals of gyroscope development have since been high precision
and miniaturization. With the development of lasers and microelectronics,
optoelectronic gyroscopes have been built, such as Ring Laser Gyroscopes
\cite{key-11fig1,key-16fig2,key-7fig3} and Micro-Electro-Mechanical
System (MEMS) gyroscopes \cite{key-9gr2}. In recent years, the development
of nuclear magnetic resonance technology and cold atom technology
has led to continuous improvement in the accuracy of quantum gyroscopes
\cite{key-2qgr1,key-8qgr2,key-3qgr3,key-4qgr4,key-5qgr5,key-qgr6},
the new members of the gyroscope family. Different types of quantum
gyroscopes are also being proposed, such as the gyroscope based on nitrogen-vacancy
(NV) color centers \cite{key-1NV} and the gyroscope \cite{key-23TFIM}
that utilizes the decoherence of the transverse field Ising model
(TFIM) in a non-inertial system.
The Interferometric Fiber Optic Gyroscope (IFOG) and the Atom Interference
Gyroscope (AIG) \cite{key-15AIG,key-2qgr1} are based on the Sagnac
effect \cite{key-10sagnac,key-13sagnac3}. In 1980 \cite{key-14Lorenz},
Sakurai gave an approach to derive the Sagnac effect by using the similarity
between the Coriolis force and the Lorentz force. With this in mind,
a natural question arises: while the Lorentz force on charged
particles in a magnetic field leads to the Hall effect, can the
Coriolis force lead to a similar effect? In order to answer this question,
we study the motion of charged particles between conductor
plates in a non-inertial system. It is found that the Coriolis force
induces a stable voltage between the conductor plates, which
is proportional to the rotation velocity of the system. Thus we use
this effect to design a new kind of gyroscope, which we call the charging
capacitor gyroscope (CCG).
This paper is organized as follows. The general equation of motion of
charged particles in a non-inertial reference frame is given in Sec.
II. In Sec. III, we obtain the voltage difference between two conductor
plates when charged particles move through them, and thus the charging
capacitor gyroscope is put forward. Then, the resolution of the CCG with
different structures is discussed in Sec. IV, and conclusions are given
in Sec. V.
\section{Motion equation for a charged particle in a non-inertial system}
We first consider a charged particle with mass $m$ and charge $q$ moving
in a non-inertial reference frame with rotation velocity $\overrightarrow{\varOmega}$.
The equation of motion of such a particle reads
\begin{equation}
m\ddot{\overrightarrow{r}}=q\left(\vec{E}+\overrightarrow{v}\times\overrightarrow{B}\right)+2m\overrightarrow{v}\times\overrightarrow{\varOmega}+m\overrightarrow{\varOmega}\times\left(\overrightarrow{\varOmega}\times\overrightarrow{r}\right)\label{eq:motion}
\end{equation}
where $\overrightarrow{r}$ is the particle's coordinate, and $\overrightarrow{E}$
and $\overrightarrow{B}$ are the electric and magnetic fields. Obviously,
Eq. (\ref{eq:motion}) can be re-written in the Cartesian coordinates
as
\begin{equation}
\begin{cases}
\ddot{x} & =\frac{q}{m}\left(E_{x}+v_{y}B_{z}-v_{z}B_{y}\right)+2\left(v_{y}\varOmega_{z}-v_{z}\varOmega_{y}\right)\\
\ddot{y} & =\frac{q}{m}\left(E_{y}+v_{z}B_{x}-v_{x}B_{z}\right)+2\left(v_{z}\varOmega_{x}-v_{x}\varOmega_{z}\right)\\
\ddot{z} & =\frac{q}{m}\left(E_{z}+v_{x}B_{y}-v_{y}B_{x}\right)+2\left(v_{x}\varOmega_{y}-v_{y}\varOmega_{x}\right)
\end{cases},\label{eq:decompose}
\end{equation}
where the second-order terms in $\varOmega$ (the centrifugal contribution) have
been neglected in Eq. (\ref{eq:decompose}), which is justified in the limit
$v\gg\varOmega r$, with $r$ the distance from the rotation axis.
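As a quick illustration of Eq. (\ref{eq:decompose}), consider the configuration
that will be used in the next section: $\overrightarrow{v}=v_{0}\hat{x}$,
$\overrightarrow{\varOmega}=\varOmega_{0}\hat{y}$ and no electromagnetic field,
with $\hat{x}$, $\hat{y}$, $\hat{z}$ the Cartesian unit vectors.
The Coriolis acceleration is then
\[
2\overrightarrow{v}\times\overrightarrow{\varOmega}=2v_{0}\varOmega_{0}\left(\hat{x}\times\hat{y}\right)=2v_{0}\varOmega_{0}\,\hat{z},
\]
so only the $z$-component of Eq. (\ref{eq:decompose}) is nontrivial in this case.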
\section{Charged particles moving between conductor plates}
\begin{center}
\begin{figure}
\includegraphics[scale=0.2]{Figure1}
Figure 1. Charged particles flow between conductor plates. The particles
move in the $x$ direction with velocity $v_{0}$ between two
conductor plates separated by a distance $d$; the
lower conductor plate is grounded to maintain its potential at zero.
The system is assumed to rotate around the $y$-direction with rotation velocity
$\varOmega_{0}$, and there is no external electromagnetic field in
the system.
\end{figure}
\par\end{center}
As shown in Fig. 1, it is obvious that the particles will shift
towards the upper plate due to the Coriolis force, thereby charging the
upper conductor plate. In this case, the equation of motion for each
particle is
\begin{equation}
\ddot{z}=\frac{q}{m}E_{z}+2v_{0}\varOmega_{0},
\end{equation}
whose steady-state solution is
\begin{equation}
E_{z}=\frac{2mv_{0}\varOmega_{0}}{q}.
\end{equation}
Since the electric field between the two plates can be approximated
as uniform, the stable voltage between the two plates is obtained as
\begin{equation}
U_{z}=E_{z}d=\frac{2mv_{0}\varOmega_{0}d}{q}.\label{eq:voltage}
\end{equation}
This voltage is induced by the Coriolis force on the moving charged
particles in the non-inertial reference frame, and is formally similar
to the Hall effect. Once the voltage between the conductor plates
is measured, the rotation velocity of the non-inertial system is given
by Eq. (\ref{eq:voltage}) as
\begin{equation}
\varOmega_{0}=\frac{qU_{z}}{2mv_{0}d}.\label{eq:rotation}
\end{equation}
Thus, the measurement of rotation velocity is achieved by this system,
which can be used as a new type of gyroscope that we call the charging
capacitor gyroscope (CCG).
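For orientation, we give a rough numerical estimate of Eq. (\ref{eq:voltage})
with illustrative values (the same orders of magnitude are used in the next
section): taking $m\sim10^{-27}\,$kg, $v_{0}\sim10^{6}\,$m/s, $d\sim1\,$m,
$q\sim10^{-19}\,$C and the Earth rotation rate
$\varOmega_{0}\approx7.3\times10^{-5}\,$rad/s, we get
\[
U_{z}=\frac{2mv_{0}\varOmega_{0}d}{q}\approx\frac{2\times10^{-27}\times10^{6}\times7.3\times10^{-5}\times1}{10^{-19}}\,\mathrm{V}\approx1.5\times10^{-6}\,\mathrm{V},
\]
i.e.\ a microvolt-scale signal.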
\section{Resolution of the CCG}
It follows from Eq. (\ref{eq:rotation}) that the resolution of CCG
is
\begin{equation}
\triangle\varOmega=\frac{q}{2mv_{0}d}\triangle U,\label{eq:resolution}
\end{equation}
where $\triangle U$ is the resolution of the voltmeter used to detect
the voltage between the two conductor plates. For $\triangle U\sim1\,\mu$V,
$m\sim10^{-27}\,$kg, $d\sim1\,$m, $q\sim10^{-19}\,$C and $v_{0}\sim10^{6}\,$m/s,
this gives $\triangle\varOmega\sim10^{-4}\,$rad/s. In order to improve the resolution
of the CCG, we put forward the cascade arrangement shown in
Fig. 2.
\begin{center}
\begin{figure}
\includegraphics[scale=0.2]{Figure2}
Figure 2. Charging capacitor gyroscope (CCG) with a linear structure.
The upper plate of the $i$th conductor-plate pair is connected to
the lower plate of the $(i+1)$th pair with a wire to make them have the
same potential.
\end{figure}
\par\end{center}
As shown in Fig. 2, we have
\begin{equation}
U_{i}^{u}=U_{i+1}^{l}.\label{eq:ii+1}
\end{equation}
Here, $U_{i}^{u}$ ($U_{i}^{l}$) is the potential of the upper (lower)
plate belonging to the $i$th conductor-plate pair. On the other
hand, for the $i$th pair, the difference in potential of the
two conductor plates is the same as that in Sec. III, thus
\begin{equation}
U_{i}^{u}-U_{i}^{l}=\frac{2mv_{0}\varOmega_{0}d}{q}.\label{eq:difference}
\end{equation}
Combining Eqs. (\ref{eq:ii+1}) and (\ref{eq:difference}), the potential gain
$2mv_{0}\varOmega_{0}d/q$ of each pair accumulates along the cascade, and we have
\begin{equation}
U_{n}^{u}-U_{1}^{l}=\frac{2nmv_{0}\varOmega_{0}d}{q},
\end{equation}
where $n$ is the number of conductor-plate pairs. If we measure
the potential difference between the lower plate of the first pair
of conductor plates and the upper plate of the $n$th pair, the resolution
of $\varOmega_{0}$ will be
\begin{equation}
\triangle\varOmega=\frac{q}{2nmv_{0}d}\triangle U,
\end{equation}
which is decreased by a factor $1/n$ compared with the result in
Eq. (\ref{eq:resolution}). This indicates that the resolution of the
CCG with this structure is $n$ times better than that of the original
one. In actual production, we can follow the design of the cyclotron,
so that the above linear cascade structure changes to a spiral structure,
as shown in Fig. 3.
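As a concrete illustration of this gain, with the same illustrative orders of
magnitude as above, a cascade of $n\sim10^{2}$ pairs would already improve the
resolution from $\triangle\varOmega\sim10^{-4}\,$rad/s to
$\triangle\varOmega\sim10^{-6}\,$rad/s.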
\begin{figure}
\includegraphics[scale=0.25]{Figure3}
Figure 3. Charging capacitor gyroscope with a helical structure. The distance
and length of each conductor-plate pair are $d$ and $l$, and the
radius of the periphery of the disc structure is $R$.
\end{figure}
When $l\ll R$ and $d\ll R$, the disc can be loaded with $N$ layers of
conductor-plate pairs, where $N=R/d$. So the total length of the conductor is
\begin{equation}
L=\sum_{i=1}^{i=R/d}2\pi id=2\pi d\frac{\left(1+R/d\right)R/d}{2}=\pi R+\frac{S}{d}\approx\frac{S}{d},
\end{equation}
where $S$ is the area of the disc. So the number of conductor-plate
pairs in this disc structure is $n=L/l=S/(dl)$. Thus, the
resolution of the CCG with the disc structure in Fig. 3 is
\begin{equation}
\triangle\varOmega=\frac{ql}{2Smv_{0}}\triangle U.
\end{equation}
For $S\sim1\,$m$^{2}$ and $l\sim1\,\mu$m, $\triangle\varOmega\sim10^{-10}\,$rad/s,
which theoretically reaches the resolution of ultra-high-precision
gyroscopes [1]. However, the structures discussed above are still essentially
two-dimensional, so further optimization is possible.
By stacking the above-mentioned disc structures in layers,
a three-dimensional structure can be formed. For conductor plates
with height $h$, there are $n'=H/h$ layers of the helical structure
in a three-dimensional structure with height $H$. The voltage connection
conditions in the three-dimensional CCG are thus
\begin{equation}
\begin{array}{c}
U_{i,j}^{u}=U_{i+1,j}^{l}\\
U_{n,j}^{u}=U_{1,j+1}^{l}
\end{array},\label{eq:3-dimention}
\end{equation}
where $i$ and $j$ denote the index of a conductor-plate pair within
a layer and the index of a layer of the spiral structure, respectively.
It follows from Eqs. (\ref{eq:difference}) and (\ref{eq:3-dimention})
that
\begin{equation}
U_{n,n'}^{u}-U_{1,1}^{l}=\frac{2nn'mv_{0}\varOmega_{0}d}{q}=\frac{2HSmv_{0}\varOmega_{0}}{qlh}.
\end{equation}
Now we obtain the resolution of the CCG with the helical columnar
structure as
\begin{equation}
\triangle\varOmega=\frac{qs}{2Vp}\triangle U.
\end{equation}
Here, $s=hl$ is the surface area of each conductor plate, $V=HS$
is the volume of the entire CCG structure, and $p=mv_{0}$ is
the momentum of each charged particle. Then, for $l,h\sim1\,\mu$m,
$s\sim10^{-12}\,$m$^{2}$ and $V\sim1\,$m$^{3}$, we get
$\triangle\varOmega\sim10^{-16}\,$rad/s$\,\sim10^{-11}\,\textdegree$/h.
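For clarity, the unit conversion behind the last estimate is
$1\,\mathrm{rad/s}=(180/\pi)\times3600\,\textdegree/\mathrm{h}\approx2.1\times10^{5}\,\textdegree/\mathrm{h}$,
so that $10^{-16}\,\mathrm{rad/s}\approx2\times10^{-11}\,\textdegree/\mathrm{h}$.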
\section{Conclusion}
In summary, we found that charged particles moving between
conductor plates in a non-inertial system induce a voltage between
the plates, by measuring which we can obtain the rotational velocity
of the system. This effect gives a new design of microelectronic
gyroscope, which is named the charging capacitor gyroscope (CCG). Inspired
by the analogy between the Coriolis force and the Lorentz force, we have
proposed a sensing scheme for rotational velocity. This protocol is
similar to magnetic field measurement based on the Hall effect.
By optimizing the structure of the CCG, the best resolution we have
achieved in this paper is $\triangle\varOmega\sim10^{-11}\,\textdegree$/h.
\section{Introduction}
Let $f:{\mathbb R}_+\to{\mathbb R}_+$ be a convex, nondecreasing function that satisfies the linear growth condition
\[
mt\le f(t)\le M(1+t)
\]
with some constants $0<m\le M<\infty$.
Let $\Omega$ be an open set in a metric measure space $(X,d,\mu)$. Throughout the work we assume that the measure is doubling and that the space supports a Poincar\'e inequality. For $u\in L^1_{\mathrm{loc}}(\Omega)$, we define the functional of linear growth via relaxation by
\begin{align*}
&\mathcal F(u,\Omega)\\
&\quad=\inf\left\{\liminf_{i\to\infty}\int_{\Omega}f(g_{u_i})\,d\mu:\,u_i\in \mathrm{Lip}_{\mathrm{loc}}(\Omega),\,u_i\to u\text{ in }L^1_{\mathrm{loc}}(\Omega)\right\},
\end{align*}
where $g_{u_i}$ is the minimal 1-weak upper gradient of $u_i$.
For $f(t)=t$, this is the definition of functions of bounded variation, or $\mathrm{BV}$ functions, on metric measure spaces, see \cite{A2}, \cite{AMP} and \cite{M}.
For $f(t)=\sqrt{1+t^2}$, we get the generalized surface area functional, see \cite{HKL}.
Our first result shows that if $\mathcal F(u,\Omega)<\infty$, then $\mathcal F(u,\cdot)$ is a Borel regular outer measure on $\Omega$.
This result is a generalization of \cite[Theorem 3.4]{M}.
For corresponding results in the Euclidean case with the Lebesgue measure, we refer to \cite{AmbFP00}, \cite{ButGH98}, \cite{GiaMS98I}, \cite{GiaMS98II}, and \cite{GiaMS79}.
Our second goal is to study whether the relaxed functional $\mathcal F(u,\cdot)$ can be represented as an integral.
To this end, let $u\in L_{\mathrm{loc}}^1(\Omega)$ with $\mathcal F(u,\Omega)<\infty$. Then the growth condition implies that $u\in\mathrm{BV}(\Omega)$.
We denote the decomposition of the variation measure $\Vert Du\Vert$ into the absolutely continuous and singular parts
by $d\Vert Du\Vert=a\,d\mu+d\Vert Du\Vert^s$, where $a \in L^1(\Omega)$.
Similarly, we denote by $\mathcal F^a(u,\cdot)$ and $\mathcal F^s(u,\cdot)$ the absolutely continuous and singular parts of $\mathcal F(u,\cdot)$ with respect to $\mu$.
For the singular part, we obtain the integral representation
\[
\mathcal F^s(u,\Omega)=f_{\infty}\Vert Du\Vert^s(\Omega),
\]
where $f_{\infty}=\lim_{t\to\infty}f(t)/t$. This is analogous to the Euclidean case.
However, for the absolutely continuous part we only get an integral representation up to a constant
\[
\int_{\Omega}f(a)\,d\mu \le \mathcal F^a(u,\Omega)\le \int_{\Omega}f(Ca)\,d\mu,
\]
where $C$ depends on the doubling constant of the measure and the constants in the Poincar\'e inequality.
Furthermore, we give a counterexample which shows that the constant cannot be dismissed.
We observe that working in the general metric context produces significant challenges that are already visible in the Euclidean setting with a weighted Lebesgue measure.
In overcoming these challenges, a key technical tool is an equi-integrability result for the discrete convolution of a measure.
As a by-product of our analysis, we are able to show that a $\mathrm{BV}$ function is actually a Newton-Sobolev function in a set where the variation measure is absolutely continuous.
As an application of the integral representation, we consider a minimization problem related to functionals of linear growth.
First we define the concept of boundary values of $\mathrm{BV}$ functions, which is a delicate issue already in the Euclidean case.
Let $\Omega\Subset\Omega^*$ be bounded open subsets of $X$, and assume that $h\in\mathrm{BV}(\Omega^*)$.
We define $\mathrm{BV}_{h}(\Omega)$ as the
space of functions $u\in\mathrm{BV}(\Omega^*)$ such that $u=h$ $\mu$-almost everywhere in $\Omega^*\setminus\Omega$.
A function $u\in \mathrm{BV}_{h}(\Omega)$ is a minimizer of the functional of linear growth
with boundary values $h$, if
\[
\mathcal F(u,\Omega^*)= \inf\mathcal F(v,\Omega^*),
\]
where the infimum is taken over all $v\in\mathrm{BV}_h(\Omega)$. It was shown in \cite{HKL} that this problem always has a solution.
By using the integral representation, we can express the boundary values as a penalty term. More precisely, under suitable conditions on the space and $\Omega$, we establish equivalence between the above minimization problem and minimizing the functional
\[
\mathcal F(u,\Omega)+f_{\infty}\int_{\partial \Omega}|T_\Omega u-T_{X\setminus\Omega}h|\theta_\Omega \,d\mathcal H
\]
over all $u\in\mathrm{BV}(\Omega)$. Here $T_\Omega u$ and $T_{X\setminus\Omega}u$ are boundary traces and $\theta_\Omega$
is a strictly positive density function.
This is the main result of the paper, and it extends the Euclidean results in \cite[p. 582]{GiaMS98II} to metric measure spaces.
A careful analysis of $\mathrm{BV}$ extension domains and boundary traces is needed in the argument.
\section{Preliminaries}\label{sec:prelis}
In this paper, $(X,d,\mu)$ is a complete metric measure space
with a Borel regular outer measure $\mu$.
The measure $\mu$ is assumed to be doubling, meaning that there exists a constant $c_d>0$ such that
\[
0<\mu(B(x,2r))\leq c_d\mu(B(x,r))<\infty
\]
for every ball $B(x,r)$ with center $x\in X$ and radius $r > 0$. For brevity, we will sometimes write $\lambda B$ for $B(x,\lambda r)$. On a metric space, a ball $B$ does not necessarily have a unique center point and radius, but we assume every ball to come with a prescribed center and radius.
The doubling condition implies that
\begin{equation}\label{eq:doubling dimension}
\frac{\mu(B(y,r))}{\mu(B(x,R))}\ge C\left(\frac{r}{R}\right)^Q
\end{equation}
for every $r\leq R$ and $y\in B(x,R)$, and some $Q>1$ and $C\ge1$ that only depend on $c_d$.
We recall that a complete metric space endowed with a doubling measure is proper,
that is, closed and bounded sets are compact. Since $X$ is proper, for any open set $\Omega\subset X$
we define $\textrm{Lip}_{\mathrm{loc}}(\Omega)$ as the space of
functions that are Lipschitz continuous in every $\Omega'\Subset\Omega$ (and other local spaces of functions are defined similarly).
Here $\Omega'\Subset\Omega$ means that $\Omega'$ is open and that $\overline{\Omega'}$ is a
compact subset of $\Omega$.
For any set $A\subset X$, the restricted spherical Hausdorff content
of codimension $1$ is defined as
\[
\mathcal{H}_{R}(A)=\inf\left\{ \sum_{i=1}^{\infty}\frac{\mu(B(x_{i},r_{i}))}{r_{i}}:\, A\subset\bigcup_{i=1}^{\infty}B(x_{i},r_{i}),\, r_{i}\le R\right\},
\]
where $0<R<\infty$.
The Hausdorff measure of codimension $1$ of a set
$A\subset X$ is
\[
\mathcal{H}(A)=\lim_{R\rightarrow0}\mathcal{H}_{R}(A).
\]
The measure theoretic boundary $\partial^{*}E$ is defined as the set of points $x\in X$
in which both $E$ and its complement have positive density, i.e.
\[
\limsup_{r\rightarrow0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}>0\quad\;\textrm{and}\quad\;\limsup_{r\rightarrow0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}>0.
\]
A curve $\gamma$ is a rectifiable continuous mapping from a compact interval
to $X$. The length of a curve $\gamma$
is denoted by $\ell_{\gamma}$. We will assume every curve to be parametrized
by arc-length, which can always be done (see e.g. \cite[Theorem 3.2]{Hj}).
A nonnegative Borel function $g$ on $X$ is an upper gradient
of an extended real-valued function $u$
on $X$ if for all curves $\gamma$ in $X$, we have
\begin{equation} \label{ug-cond}
|u(x)-u(y)|\le \int_\gamma g\,ds
\end{equation}
whenever both $u(x)$ and $u(y)$ are finite, and
$\int_\gamma g\, ds=\infty $ otherwise.
Here $x$ and $y$ are the end points of $\gamma$.
If $g$ is a nonnegative $\mu$-measurable function on $X$
and (\ref{ug-cond}) holds for $1$-almost every curve,
then $g$ is a $1$-weak upper gradient of~$u$.
A property holds for $1$-almost every curve
if it fails only for a curve family with zero $1$-modulus.
A family $\Gamma$ of curves is of zero $1$-modulus if there is a
nonnegative Borel function $\rho\in L^1(X)$ such that
for all curves $\gamma\in\Gamma$, the curve integral $\int_\gamma \rho\,ds$ is infinite.
We consider the following norm
\[
\Vert u\Vert_{N^{1,1}(X)}=\Vert u\Vert_{L^1(X)}+\inf_g\Vert g\Vert_{L^1(X)},
\]
where the infimum is taken over all upper gradients $g$ of $u$.
The Newtonian space is defined as
\[
N^{1,1}(X)=\{u:\,\|u\|_{N^{1,1}(X)}<\infty\}/{\sim},
\]
where the equivalence relation $\sim$ is given by $u\sim v$ if and only if
$\Vert u-v\Vert_{N^{1,1}(X)}=0$. In the definition of upper gradients and Newtonian spaces, the whole space $X$ can be replaced by any $\mu$-measurable (typically open) set $\Omega\subset X$. It is known that for any $u\in N_{\mathrm{loc}}^{1,1}(\Omega)$, there exists a minimal $1$-weak
upper gradient, which we always denote $g_{u}$, satisfying $g_{u}\le g$
$\mu$-almost everywhere in $\Omega$, for any $1$-weak upper gradient $g\in L_{\mathrm{loc}}^{1}(\Omega)$
of $u$ \cite[Theorem 2.25]{BB}.
For more on Newtonian spaces, we refer to \cite{S} and \cite{BB}.
Next we recall the definition and basic properties of functions
of bounded variation on metric spaces, see \cite{A2}, \cite{AMP} and \cite{M}.
For $u\in L^1_{\text{loc}}(X)$, we define the total variation of $u$ as
\begin{align*}
&\|Du\|(X)\\
&\quad =\inf\left\{\liminf_{i\to\infty}\int_Xg_{u_i}\,d\mu:\, u_i\in \Lip_{\mathrm{loc}}(X),\, u_i\to u\textrm{ in } L^1_{\text{loc}}(X)\right\},
\end{align*}
where $g_{u_i}$ is the minimal $1$-weak upper gradient of $u_i$.
We say that a function $u\in L^1(X)$ is of bounded variation,
and write $u\in\mathrm{BV}(X)$, if $\|Du\|(X)<\infty$.
Moreover, a $\mu$-measurable set $E\subset X$ is said to be of finite perimeter if $\|D\chi_E\|(X)<\infty$.
By replacing $X$ with an open set $\Omega\subset X$ in the definition of the total variation, we can define $\|Du\|(\Omega)$.
For an arbitrary set $A\subset X$, we define
\[
\|Du\|(A)=\inf\{\|Du\|(\Omega):\, A\subset\Omega,\,\Omega\subset X
\text{ is open}\}.
\]
If $u\in\mathrm{BV}(\Omega)$, $\|Du\|(\cdot)$ is a finite Radon measure on $\Omega$ by \cite[Theorem 3.4]{M}.
The perimeter of $E$ in $\Omega$ is denoted by
\[
P(E,\Omega)=\|D\chi_E\|(\Omega).
\]
We have the following coarea formula given by Miranda in \cite[Proposition 4.2]{M}: if $\Omega\subset X$ is an open set and $u\in L_{\mathrm{loc}}^{1}(\Omega)$, then
\begin{equation}\label{eq:coarea}
\|Du\|(\Omega)=\int_{-\infty}^{\infty}P(\{u>t\},\Omega)\,dt.
\end{equation}
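As a simple illustration, take $u=\chi_{E}$ for a set $E\subset X$ of finite perimeter: then $\{u>t\}$ equals $E$ for $t\in[0,1)$, while for the remaining values of $t$ it equals either the whole space or the empty set, whose perimeter in $\Omega$ vanishes; hence \eqref{eq:coarea} reduces to the identity $\|D\chi_E\|(\Omega)=P(E,\Omega)$.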
For an open set $\Omega\subset X$ and a set of locally finite perimeter $E\subset X$, we know that
\begin{equation}\label{eq:def of theta}
\Vert D\chi_{E}\Vert(\Omega)=\int_{\partial^{*}E\cap \Omega}\theta_E\,d\mathcal H,
\end{equation}
where $\theta_E:X\to [\alpha,c_d]$, with $\alpha=\alpha(c_d,c_P)>0$, see \cite[Theorem 5.3]{A2} and \cite[Theorem 4.6]{AMP}. The constant $c_P$ is related to the Poincar\'e inequality, see below.
The jump set of a function $u\in\mathrm{BV}_{\mathrm{loc}}(X)$ is defined as
\[
S_{u}=\{x\in X:\,u^{\wedge}(x)<u^{\vee}(x)\},
\]
where $u^{\wedge}$ and $u^{\vee}$ are the lower and upper approximate limits of $u$ defined as
\[
u^{\wedge}(x)
=\sup\left\{t\in\overline{\mathbb R}:\,\lim_{r\to0}\frac{\mu(\{u<t\}\cap B(x,r))}{\mu(B(x,r))}=0\right\}
\]
and
\[
u^{\vee}(x)
=\inf\left\{t\in\overline{\mathbb R}:\,\lim_{r\to0}\frac{\mu(\{u>t\}\cap B(x,r))}{\mu(B(x,r))}=0\right\}.
\]
Outside the jump set, i.e. in $X\setminus S_u$, $\mathcal H$-almost every point is a Lebesgue point of $u$ \cite[Theorem 3.5]{KKST}, and we denote the Lebesgue limit at $x$ by $\widetilde{u}(x)$.
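For instance, for $u=\chi_E$ one sees directly from the definitions that $u^{\wedge}(x)=1$ precisely when $E$ has density $1$ at $x$ and $u^{\wedge}(x)=0$ otherwise, while $u^{\vee}(x)=0$ precisely when $E$ has density $0$ at $x$ and $u^{\vee}(x)=1$ otherwise; consequently $S_{\chi_E}=\partial^{*}E$.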
We say that $X$ supports a $(1,1)$-Poincar\'e inequality
if there exist constants $c_P>0$ and $\lambda \ge1$ such that for all
balls $B(x,r)$, all locally integrable functions $u$,
and all $1$-weak upper gradients $g$ of $u$, we have
\[
\vint{B(x,r)}|u-u_{B(x,r)}|\, d\mu
\le c_P r\,\vint{B(x,\lambda r)}g\,d\mu,
\]
where
\[
u_{B(x,r)}=\vint{B(x,r)}u\,d\mu =\frac 1{\mu(B(x,r))}\int_{B(x,r)}u\,d\mu.
\]
If the space supports a $(1,1)$-Poincar\'e inequality, by an approximation argument we get for
every $u\in L^1_{\mathrm{loc}}(X)$
\[
\vint{B(x,r)} |u-u_{B(x,r)}|\, d\mu
\le c_P r\frac{\|Du\|(B(x,\lambda r))}{\mu(B(x,\lambda r))},
\]
where the constant $c_P$ and the dilation factor $\lambda$ are the same as in the $(1, 1)$-Poincar\'e inequality. When $u=\chi_{E}$ for $E\subset X$, we get the relative isoperimetric inequality
\begin{equation}\label{eq:isop ineq}
\min\{\mu(B(x,r)\cap E), \mu(B(x,r)\setminus E)\}
\le 2c_P r\|D\chi_{E}\|(B(x,\lambda r)).
\end{equation}
\emph{Throughout the work we assume, without further notice, that the measure $\mu$ is doubling and that
the space supports a $(1, 1)$-Poincar\'e inequality.}
\section{Functional and its measure property}\label{sec:functional}
In this section we define the functional that is considered in this paper, and show that it defines a Radon measure. Let $f$ be a convex nondecreasing function that is defined on $[0,\infty)$ and satisfies the linear growth condition
\begin{equation}\label{eq:linear growth}
mt\le f(t)\le M(1+t)
\end{equation}
for all $t\ge 0$, with some constants $0<m\le M<\infty$. This implies that $f$ is Lipschitz continuous with constant $L>0$. Furthermore, we define
\[
f_{\infty}=\sup_{t>0} \frac{f(t)-f(0)}{t}=\lim_{t \to \infty} \frac{f(t)-f(0)}{t}=\lim_{t\to\infty}\frac{f(t)}{t},
\]
where the second equality follows from the convexity of $f$.
From the definition of $f_{\infty}$, we get the simple estimate
\begin{equation}\label{eq:estimate for f}
f(t)\le f(0)+tf_{\infty}
\end{equation}
for all $t\ge 0$. This will be useful for us later.
Now we give the definition of the functional. For an open set $\Omega$ and $u\in N^{1,1}(\Omega)$, we could define it as
\[
u\longmapsto\int_{\Omega}f(g_u)\,d\mu,
\]
where $g_{u}$ is the minimal 1-weak upper gradient of $u$. For $u\in \mathrm{BV}(\Omega)$, we need to use a relaxation procedure as given in the following definition.
\begin{definition}
Let $\Omega\subset X$ be an open set. For $u\in L^1_{\mathrm{loc}}(\Omega)$, we define
\begin{align*}
&\mathcal F(u,\Omega)\\
&\quad =\inf\left\{\liminf_{i\to\infty}\int_{\Omega}f(g_{u_i})\,d\mu:\,u_i\in \mathrm{Lip}_{\mathrm{loc}}(\Omega),\,u_i\to u\text{ in }L^1_{\mathrm{loc}}(\Omega)\right\},
\end{align*}
where $g_{u_i}$ is the minimal 1-weak upper gradient of $u_i$.
\end{definition}
Note that we could equally well require that $g_{u_i}$ is \emph{any} 1-weak upper gradient of $u_i$.
We define $\mathcal F(u,A)$ for an arbitrary set $A \subset X$ by
\begin{equation}\label{eq:def of F for general sets}
\mathcal F(u,A)=\inf\{\mathcal F(u,\Omega): \,\Omega \textrm{ is open,}\,A\subset \Omega\}.
\end{equation}
In this section we show that if $u\in L_{\mathrm{loc}}^1(\Omega)$ with $\mathcal F(u,\Omega)<\infty$, then $\mathcal F(u,\cdot)$ is a Borel regular outer measure on $\Omega$, extending \cite[Theorem 3.4]{M}. The functional clearly satisfies
\begin{equation}\label{eq:basic estimate for functional}
m\Vert Du \Vert(A) \le \mathcal F(u,A) \le M(\mu(A)+\Vert Du\Vert(A))
\end{equation}
for any $A\subset X$. This estimate follows directly from the definition of the functional, the definition of the variation measure, and \eqref{eq:linear growth}.
It is also easy to see that
\[
\mathcal F(u,B)\le \mathcal F(u,A)
\]
for any sets $B\subset A\subset X$.
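Note also that in the model case $f(t)=t$ we have $\mathcal F(u,\cdot)=\Vert Du\Vert(\cdot)$ directly from the definitions, so the results of this section in particular recover the measure property of the variation measure from \cite[Theorem 3.4]{M}.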
In order to show the measure property, we first prove a few lemmas. The first is the following technical gluing lemma that is similar to \cite[Lemma 5.44]{AmbFP00}.
\begin{lemma}\label{joining lemma}
Let $U'$, $U$, $V'$, $V$ be open sets in $X$ such that $U'\Subset U$ and $V'\subset V$. Then there exists an open set $H\subset (U\setminus U')\cap V'$, with $H\Subset U$, such that for any $\varepsilon>0$ and any pair of functions $u\in \mathrm{Lip}_{\mathrm{loc}}(U)$ and $v\in \mathrm{Lip}_{\mathrm{loc}}(V)$, there is a function $\phi\in\Lip_{c}(U)$ with $0\le\phi\le 1$ and $\phi=1$ in a neighborhood of $U^{'}$, such that the function $w=\phi u+(1-\phi)v\in\mathrm{Lip}_{\mathrm{loc}}(U'\cup V')$ satisfies
\[
\int_{U'\cup V'} f(g_w)\,d\mu \le \int_U f(g_u)\,d\mu + \int_V f(g_v)\,d\mu + C\int_H |u-v|\,d\mu + \varepsilon.
\]
Here $C=C(U,U',M)$.
\end{lemma}
\begin{proof}
Let $\eta = \dist(U',X\setminus U) >0$.
Define
\[
H=\left\{x\in U\cap V':\, \frac{\eta}{3} < \dist(x,U')<\frac{2\eta}{3}\right\}.
\]
Now fix $u\in \mathrm{Lip}_{\mathrm{loc}}(U),\,v\in \mathrm{Lip}_{\mathrm{loc}}(V)$ and $\varepsilon>0$. Choose $k\in {\mathbb N}$ such that
\begin{equation}\label{eq:gluing estimate Lipschitz}
M\int_H(1+g_u+g_v)\,d\mu < \varepsilon k
\end{equation}
if the above integral is finite --- otherwise the desired estimate is trivially true.
For $i=1,\ldots,k$, define the sets
\[
H_i=\left\{x\in U\cap V':\, \frac{(k+i-1)\eta}{3k} < \dist(x,U') < \frac{(k+i)\eta}{3k}\right\},
\]
so that $H\supset\cup_{i=1}^k H_i$, and define the Lipschitz functions
\[
\phi_i(x)=\begin{cases}
0, & \dist(x, U') > \frac{k+i}{3k} \eta, \\
\frac1\eta((k+i)\eta - 3k \dist(x,U')),\!\! & \frac{k+i-1}{3k}\eta \le \dist(x, U') \le \frac{k+i}{3k} \eta, \\
1, & \dist(x, U') < \frac{k+i-1}{3k} \eta.
\end{cases}
\]
Now $g_{\phi_i}=0$ $\mu$-almost everywhere in $U\cap V'\setminus H_i$ \cite[Corollary 2.21]{BB}. Let $w_i=\phi_iu+(1-\phi_i)v$ on $U'\cup V'$. We have the estimate
\[
g_{w_i} \le \phi_i g_u + (1-\phi_i)g_v + g_{\phi_i} |u-v|,
\]
see \cite[Lemma 2.18]{BB}. By also using the estimate $f(t) \le M(1+t)$, we get
\begin{align*}
\int_{U'\cup V'} f(g_{w_i})\,d\mu &\le \int_U f(g_u)\,d\mu + \int_V f(g_v)\,d\mu + \int_{H_i} f(g_{w_i})\,d\mu \\
&\le \int_U f(g_u)\,d\mu + \int_V f(g_v)\,d\mu\\
&\quad\ +M\int_{H_i} (1+g_u+g_v)\,d\mu + \frac{3Mk}{\eta}\int_{H_i}|u-v|\,d\mu.
\end{align*}
Now, since $H\supset\cup_{i=1}^k H_i$, we have
\begin{align*}
\frac{1}{k}\sum_{i=1}^k& \int_{U'\cup V'} f(g_{w_i})\,d\mu\\
&\le \int_U f(g_u)\,d\mu + \int_V f(g_v)\,d\mu + \frac{M}{k} \int_H (1+g_u+g_v)\,d\mu \\
&\qquad+ \frac{3M}{\eta} \int_H |u-v|\,d\mu\\
&\le \int_U f(g_u)\,d\mu + \int_V f(g_v)\,d\mu+C \int_H|u-v|\,d\mu + \varepsilon.
\end{align*}
In the last inequality we used \eqref{eq:gluing estimate Lipschitz}. Thus we can find an index $i$ such that the function $w=w_i$ satisfies the desired estimate.
\end{proof}
In the following lemmas, we assume that $u\in L_{\mathrm{loc}}^1(A\cup B)$.
\begin{lemma}\label{inner regularity lemma}
Let $A\subset X$ be open with $\mathcal F(u,A)<\infty$. Then
\[
\mathcal F(u,A)= \sup_{B\Subset A} \mathcal F(u,B).
\]
\end{lemma}
\begin{proof}
Take open sets $B_1\Subset B_2 \Subset B_3 \Subset A$ and sequences $u_i\in \mathrm{Lip}_{\mathrm{loc}}(B_3)$, $v_i\in \mathrm{Lip}_{\mathrm{loc}}(A\setminus \overline{B_1})$ such that $u_i\to u$ in $L^1_\mathrm{loc}(B_3)$, $v_i\to u$ in $L^1_\mathrm{loc}(A\setminus \overline{B_1})$,
\[
\mathcal F(u,B_3)= \lim_{i\to \infty} \int_{B_3} f(g_{u_i}) \,d\mu,
\]
and
\[
\mathcal F(u,A\setminus \overline{B_1}) = \lim_{i\to \infty} \int_{A\setminus \overline{B_1}} f(g_{v_i}) \,d\mu.
\]
By using Lemma \ref{joining lemma} with $U=B_3$, $U'=B_2$, $V=V'=A\setminus \overline{B_1}$ and $\varepsilon=1/i$, we find a set $H\subset B_3\setminus B_2$, $H\Subset B_3$, and a sequence $w_i \in \mathrm{Lip}_{\mathrm{loc}} (A)$ such that $w_i\to u$ in $L^1_{\mathrm{loc}}(A)$, and
\[
\int_A f(g_{w_i})\,d\mu \le \int_{B_3}f(g_{u_i}) \,d\mu + \int_{A\setminus \overline{B_1}} f(g_{v_i}) \,d\mu + C \int_H|u_i-v_i| \,d\mu + \frac{1}{i}
\]
for every $i\in{\mathbb N}$.
In the above inequality, the last integral converges to zero as $i\to \infty$, since $H\Subset B_3$ and $H\Subset A\setminus \overline{B_1}$. Thus
\[
\mathcal F(u,A)\le \liminf_{i\to\infty}\int_A f(g_{w_i})\,d\mu \le \mathcal F(u,B_3)+\mathcal F(u,A\setminus \overline{B_1}).
\]
Exhausting $A$ with sets $B_1$ concludes the proof, since then $\mathcal F(u,A\setminus \overline{B_1})\to 0$ by \eqref{eq:basic estimate for functional}.
\end{proof}
\begin{lemma}\label{subadditivity lemma}
Let $A,B\subset X$ be open. Then
\[
\mathcal F(u,A\cup B) \le \mathcal F(u,A)+\mathcal F(u,B).
\]
\end{lemma}
\begin{proof}
First we note that
every $C\Subset A\cup B$ can be presented as $C=A'\cup B'$, where $A'\Subset A$ and $B'\Subset B$.
Therefore, according to Lemma \ref{inner regularity lemma}, it suffices to show that
\[
\mathcal F(u,A'\cup B') \le \mathcal F(u,A)+\mathcal F(u,B)
\]
for every $A'\Subset A$ and $B'\Subset B$. If $\mathcal F(u,A)=\infty$ or $\mathcal F(u,B)=\infty$, the claim holds.
Assume therefore that $\mathcal F(u,A)<\infty$ and $\mathcal F(u,B)<\infty$.
Take sequences $u_i \in \mathrm{Lip}_{\mathrm{loc}}(A)$ and $v_i\in \mathrm{Lip}_{\mathrm{loc}}(B)$ such that $u_i\to u$ in $L^1_\mathrm{loc}(A)$, $v_i\to u$ in $L^1_\mathrm{loc}(B)$,
\[
\mathcal F(u,A)=\lim_{i\to \infty} \int_A f(g_{u_i})\,d\mu,
\]
and
\[
\mathcal F(u,B)=\lim_{i\to \infty} \int_B f(g_{v_i})\,d\mu.
\]
By using Lemma \ref{joining lemma} with $U'=A'$, $U=A$, $V'=B'$, $V=B$ and $\varepsilon=1/i$, we find a set $H\Subset A$, $H\subset B'\Subset B$, and a sequence $w_i\in \mathrm{Lip}_{\mathrm{loc}}(A'\cup B')$ such that $w_i\to u$ in $L^1_\mathrm{loc}(A'\cup B')$, and
\[
\int_{A'\cup B'} f(g_{w_i})\,d\mu \le \int_A f(g_{u_i})\,d\mu + \int_B f(g_{v_i})\,d\mu+C\int_H|u_i-v_i|\,d\mu + \frac{1}{i}
\]
for every $i\in{\mathbb N}$. By the properties of $H$, the last integral in the above inequality converges to zero as $i\to \infty$, and then
\[
\mathcal F(u,A'\cup B') \le \mathcal F(u,A)+\mathcal F(u,B).
\]
\end{proof}
\begin{lemma}\label{lem:additivity lemma}
Let $A,B\subset X$ be open and let $A\cap B=\emptyset$. Then
\[
\mathcal F(u,A\cup B) \ge \mathcal F(u,A)+\mathcal F(u,B).
\]
\end{lemma}
\begin{proof}
If $\mathcal F(u,A\cup B)=\infty$, the claim holds. Hence we may assume that $\mathcal F(u,A\cup B)<\infty$. Take a sequence $u_i\in \mathrm{Lip}_{\mathrm{loc}}(A\cup B)$ such that $u_i\to u$ in $L^1_\mathrm{loc} (A\cup B)$ and
\[
\mathcal F(u,A\cup B) = \lim_{i\to \infty} \int_{A\cup B} f(g_{u_i})\,d\mu.
\]
Then, since $A$ and $B$ are disjoint,
\begin{align*}
\mathcal F(u,A\cup B) &= \lim_{i\to \infty} \int_{A\cup B} f(g_{u_i})\,d\mu \\
&\ge \liminf_{i\to \infty} \int_{A} f(g_{u_i})\, d\mu + \liminf_{i\to \infty} \int_{B} f(g_{u_i})\,d\mu\\
&\ge \mathcal F(u,A)+\mathcal F(u,B).
\end{align*}
\end{proof}
Now we are ready to prove the measure property of the functional.
\begin{theorem}\label{thm:measure_prop}
Let $\Omega\subset X$ be an open set, and let $u\in L^1_{\mathrm{loc}}(\Omega)$ with $\mathcal F(u,\Omega)<\infty$. Then $\mathcal F(u,\cdot)$ is a Borel regular outer measure on $\Omega$.
\end{theorem}
\begin{proof}
First we show that $\mathcal F(u,\cdot)$ is an outer measure on $\Omega$. Obviously $\mathcal F(u,\emptyset)=0$. As mentioned earlier, clearly $\mathcal F(u,A)\le \mathcal F(u,B)$ for any $A\subset B\subset \Omega$.
Take open sets $A_i\subset \Omega$, $i=1,2,\ldots$. Let $\varepsilon >0$. By Lemma \ref{inner regularity lemma} there exists a set $B\Subset \cup_{i=1}^\infty A_i$ such that
\[
\mathcal F\left(u,\bigcup_{i=1}^\infty A_i\right) < \mathcal F(u,B) + \varepsilon.
\]
Since $\overline{B}\subset \cup_{i=1}^\infty A_i$ is compact, there exists $n\in {\mathbb N}$ such that
$B\subset \overline{B} \subset \cup_{i=1}^n A_i$.
Then by Lemma \ref{subadditivity lemma},
\[
\mathcal F(u,B) \le \mathcal F \left(u, \bigcup_{i=1}^n A_i \right) \le \sum_{i=1}^n \mathcal F(u,A_i),
\]
and thus letting $n\to \infty$ and $\varepsilon \to 0$ gives us
\begin{equation}\label{eq: countable subadditivity}
\mathcal F\bigg(u,\bigcup_{i=1}^\infty A_i\bigg) \le \sum_{i=1}^\infty \mathcal F(u,A_i).
\end{equation}
For general sets $A_i$, we can prove \eqref{eq: countable subadditivity} by approximation with open sets.
The next step is to prove that $\mathcal F(u,\cdot)$ is a Borel outer measure. Let $A,B\subset \Omega$ satisfy $\dist(A,B)>0$. Fix $\varepsilon>0$ and choose an open set $U\supset A\cup B$ such that
\[
\mathcal F(u,A\cup B) > \mathcal F(u,U)-\varepsilon.
\]
Define the sets
\begin{align*}
&V_A= \left\{x\in \Omega :\, \dist(x,A) < \frac{\dist(A,B)}{3}\right \} \cap U,\\
&V_B= \left\{x\in \Omega :\, \dist(x,B) < \frac{\dist(A,B)}{3}\right \} \cap U.
\end{align*}
Then $V_A, V_B$ are open and $A\subset V_A$, $B\subset V_B$. Moreover $V_A\cap V_B=\emptyset$. Thus by Lemma \ref{lem:additivity lemma},
\begin{align*}
\mathcal F(u,A\cup B) &\ge \mathcal F(u,V_A\cup V_B)-\varepsilon \\
&\ge\mathcal F(u,V_A)+\mathcal F(u,V_B) -\varepsilon\\
&\ge \mathcal F(u,A)+\mathcal F(u,B)-\varepsilon.
\end{align*}
Now letting $\varepsilon\to 0$ shows that $\mathcal F(u,\cdot)$ is a Borel outer measure by Carath{\' e}odory's criterion.
The measure $\mathcal F(u,\cdot)$ is Borel regular by construction, since for every $A\subset \Omega$ we may choose open sets $V_i$ such that $A\subset V_i \subset \Omega$ and
\[
\mathcal F(u,V_i)<\mathcal F(u,A)+\frac{1}{i},
\]
and by defining $V=\cap_{i=1}^\infty V_i$, we get $\mathcal F(u,V)=\mathcal F(u,A)$,
where $V\supset A$ is a Borel set.
\end{proof}
As a simple application of the measure property of the functional, we show the following approximation result.
\begin{proposition}\label{prop:weak convergence}
Let $\Omega\subset X$ be an open set, and let $u\in L^1_{\mathrm{loc}}(\Omega)$ with $\mathcal F(u,\Omega)<\infty$. Then for any sequence of functions $u_i\in \mathrm{Lip}_{\mathrm{loc}}(\Omega)$ for which $u_i\to u$ in $L_{\mathrm{loc}}^1(\Omega)$ and
\[
\int_{\Omega}f(g_{u_i})\,d\mu\to \mathcal F(u,\Omega),
\]
we also have $f(g_{u_i})\,d\mu \overset{*}{\rightharpoonup} d\mathcal F(u,\cdot)$ in $\Omega$.
\end{proposition}
\begin{proof}
For any open set $U\subset \Omega$, we have by the definition of the functional that
\begin{equation}\label{eq:weak convergence open set}
\mathcal F(u,U)\le \liminf_{i\to\infty}\int_U f(g_{u_i})\,d\mu.
\end{equation}
On the other hand, for any relatively closed set $F\subset\Omega$ we have
\begin{align*}
\mathcal F(u,\Omega) &= \limsup_{i\to\infty}\int_{\Omega}f(g_{u_i})\,d\mu\\
&\ge \limsup_{i\to\infty}\int_{F}f(g_{u_i})\,d\mu+\liminf_{i\to\infty}\int_{\Omega\setminus F}f(g_{u_i})\,d\mu\\
&\ge \limsup_{i\to\infty}\int_{F}f(g_{u_i})\,d\mu+\mathcal F(u,\Omega\setminus F).
\end{align*}
The last inequality follows from the definition of the functional, since $\Omega\setminus F$ is open. By the measure property of the functional, we can subtract $\mathcal F(u,\Omega\setminus F)$ from both sides to get
\[
\limsup_{i\to\infty}\int_{F}f(g_{u_i})\,d\mu\le \mathcal F(u,F).
\]
According to a standard characterization of the weak* convergence of Radon measures, the above inequality and \eqref{eq:weak convergence open set} together give the result \cite[p. 54]{EvGa}.
\end{proof}
\section{Integral representation}
In this section we study an integral representation for the functional $\mathcal F(u,\cdot)$.
First we show the estimate from below. Note that due to \eqref{eq:basic estimate for functional}, $\mathcal F(u,\Omega)<\infty$ always implies $\Vert Du\Vert(\Omega)<\infty$.
\begin{theorem}\label{thm:estimate from below}
Let $\Omega$ be an open set, and let $u\in L^1_{\mathrm{loc}}(\Omega)$ with $\mathcal F(u,\Omega)<\infty$. Let $d\Vert Du\Vert=a\,d\mu+d\,\Vert Du\Vert^s$ be the decomposition of the variation measure into the absolutely continuous and singular parts, where $a\in L^1(\Omega)$ is a Borel function and $\Vert Du\Vert^s$ is the singular part. Then we have
\[
\mathcal F(u,\Omega)\ge\int_{\Omega}f(a)\,d\mu+f_{\infty}\Vert Du\Vert^s(\Omega).
\]
\end{theorem}
\begin{proof}
Pick a sequence $u_i\in\mathrm{Lip}_{\mathrm{loc}}(\Omega)$ such that $u_i\to u$ in $L^1_{\mathrm{loc}}(\Omega)$ and
\begin{equation}\label{eq:choice of sequence}
\int_{\Omega}f(g_{u_i})\,d\mu\to \mathcal F(u,\Omega)\quad \textrm{as}\ \ i\to\infty.
\end{equation}
Using the linear growth condition for $f$, presented in \eqref{eq:linear growth}, we estimate
\[
\limsup_{i\to\infty}\int_{\Omega} g_{u_i}\,d\mu\le \frac{1}{m}\limsup_{i\to \infty}\int_{\Omega} f(g_{u_i})\,d\mu< \infty.
\]
Picking a suitable subsequence, which we still denote $g_{u_i}$, we have $g_{u_i}\,d\mu\overset{*}{\rightharpoonup}d\nu$ in $\Omega$, where $\nu$ is a Radon measure with finite mass in $\Omega$. Furthermore, by the definition of the variation measure, we necessarily have $\nu\ge \Vert Du\Vert$, which can be seen as follows. For any open set $U\subset \Omega$ and for any $\varepsilon>0$, we can pick an open set $U'\Subset U$ such that $\Vert Du\Vert(U)<\Vert Du\Vert(U')+\varepsilon$; see e.g. Lemma \ref{inner regularity lemma}. We obtain
\begin{align*}
\Vert Du\Vert(U)&<\Vert Du\Vert(U')+\varepsilon\le \liminf_{i\to \infty} \int_{U'}g_{u_i}\,d\mu+\varepsilon\\
&\le \limsup_{i\to \infty}\int_{\overline{U'}}g_{u_i}\,d\mu+\varepsilon
\le \nu(\overline{U'})+\varepsilon
\le \nu(U)+\varepsilon.
\end{align*}
On the first line we used the definition of the variation measure, and on the second line we used a property of the weak* convergence of Radon measures, see e.g. \cite[Example 1.63]{AmbFP00}. By approximation we get $\nu(A)\ge \Vert Du\Vert(A)$ for any $A\subset \Omega$.
The following lower semicontinuity argument is from \cite[p. 64--66]{AmbFP00}. First we note that as a nonnegative nondecreasing convex function, $f$ can be presented as
\[
f(t)=\sup_{j\in {\mathbb N}}(d_jt+e_j),\quad t\ge 0,
\]
for some sequences $d_j,e_j\in{\mathbb R}$, with $d_j\ge 0$, $j=1,2,\ldots$, and furthermore $\sup_jd_j=f_{\infty}$ \cite[Proposition 2.31, Lemma 2.33]{AmbFP00}.
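For instance, for the piecewise linear integrand $f(t)=\max\{t,\,2t-1\}$, which reappears in the example at the end of this section, one may take
\[
d_1=1,\ e_1=0\quad\text{and}\quad d_2=2,\ e_2=-1,
\]
so that $f(t)=\sup_{j\in\{1,2\}}(d_jt+e_j)$ and indeed $\sup_jd_j=2=f_{\infty}$.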
Given any pairwise disjoint open subsets of $\Omega$, denoted $A_1,\ldots,A_k$, $k\in{\mathbb N}$, and functions $\varphi_j\in C_c(A_j)$ with $0\le \varphi_j\le 1$, we have
\[
\int_{A_j}(d_jg_{u_i}+e_j)\varphi_j\,d\mu \le \int_{A_j}f(g_{u_i})\,d\mu
\]
for every $j=1,\ldots,k$ and $i\in{\mathbb N}$.
Summing over $j$ and letting $i\to \infty$, we get by the weak* convergence $g_{u_i}\,d\mu\overset{*}{\rightharpoonup}d\nu$
\[
\sum_{j=1}^k\left(\int_{A_j}d_j\varphi_j \,d\nu +\int_{A_j}e_j\varphi_j\,d\mu\right) \le \liminf_{i\to\infty}\int_{\Omega} f(g_{u_i})\,d\mu.
\]
Since we had $\nu\ge \Vert Du\Vert$, this immediately implies
\[
\sum_{j=1}^k\left(\int_{A_j}d_j\varphi_j \,d\Vert Du\Vert +\int_{A_j}e_j\varphi_j\,d\mu\right) \le \liminf_{i\to\infty}\int_{\Omega} f(g_{u_i})\,d\mu.
\]
We recall that $d\Vert Du\Vert=a\,d\mu+d\Vert Du\Vert^s$. It is known that the singular part $\Vert Du \Vert^s$ is concentrated on a Borel set $D \subset \Omega$ that satisfies $\mu (D)=0$ and $\Vert Du\Vert^s (\Omega \setminus D)=0$, see e.g. \cite[p. 42]{EvGa}. Define the Radon measure $\sigma=\mu+\Vert Du\Vert^s$, and the Borel functions
\[
\phi_j=\begin{cases}
d_ja+e_j, \quad & \text{on }\Omega\setminus D,\\
d_j, \quad & \text{on } D
\end{cases}
\]
for $j=1,\ldots,k$, and
\[
\phi=\begin{cases}
f(a), \quad & \text{on }\Omega\setminus D,\\
f_{\infty}, \quad & \text{on } D.
\end{cases}
\]
As mentioned above, we now have $\sup_j\phi_j=\phi$, and we can write the previous inequality as
\[
\sum_{j=1}^k\int_{A_j}\phi_j\varphi_j\,d\sigma \le \liminf_{i\to\infty}\int_{\Omega} f(g_{u_i})\,d\mu.
\]
Since the functions $\varphi_j\in C_c(A_j)$, $0\le \varphi_j\le 1$, were arbitrary, we get
\[
\sum_{j=1}^k\int_{A_j}\phi_j\,d\sigma\le \liminf_{i\to\infty}\int_{\Omega} f(g_{u_i})\,d\mu.
\]
Since this holds for \emph{any} pairwise disjoint open subsets $A_1,\ldots,A_k\subset \Omega$, by \cite[Lemma 2.35]{AmbFP00} we get
\[
\int_{\Omega} \phi \,d\sigma\le \liminf_{i\to\infty}\int_{\Omega} f(g_{u_i})\,d\mu.
\]
However, by the definitions of $\phi$ and $\sigma$, this is the same as
\[
\int_{\Omega}f(a)\,d\mu+f_{\infty}\Vert Du\Vert^s(\Omega)\le \liminf_{i\to\infty}\int_{\Omega} f(g_{u_i})\,d\mu.
\]
Combining this with \eqref{eq:choice of sequence}, we get the desired estimate from below.
\end{proof}
It is worth noting that in the above argument, we only needed the weak* convergence of the sequence $g_{u_i}\,d\mu$ to a Radon measure that majorizes $\Vert Du\Vert$. Then we could use the fact that the functional for measures
\[
\nu \longmapsto \int_{\Omega}f(\check{a})\,d\mu+f_{\infty}\nu^s(\Omega),\quad\ d\nu=\check{a}\,d\mu+d\nu^s,
\]
is lower semicontinuous with respect to weak* convergence of Radon measures. This \emph{lower} semicontinuity is guaranteed by the convexity of $f$, but to have \emph{upper} semicontinuity, $f$ would also need to be concave, and thus linear.
Thus there is an important asymmetry in the setting, and for the estimate from above, we will need to use rather different methods where we prove weak or strong $L^1$-convergence for the sequence of upper gradients, instead of just weak* convergence of measures.
To achieve this type of stronger convergence, we need to specifically ensure that the sequence of upper gradients is \emph{equi-integrable}. The price that is paid is that a constant $C$ appears in the final estimate related to the absolutely continuous parts. An example that we provide later shows that this constant cannot be discarded.
We recall that for a $\mu$-measurable set $F\subset X$, the equi-integrability of a sequence of functions $g_i\in L^1(F)$, $i\in{\mathbb N}$, is defined by two conditions. First, for any $\varepsilon>0$, there must be a $\mu$-measurable set $A\subset F$ with $\mu(A)<\infty$ such that
\[
\int_{F\setminus A}g_i\,d\mu<\varepsilon \quad\textrm{for all }i\in{\mathbb N}.
\]
Second, for any $\varepsilon>0$ there must be $\delta>0$ such that if $A\subset F$ is $\mu$-measurable with $\mu(A)<\delta$, then
\[
\int_{A}g_i\,d\mu<\varepsilon \quad\textrm{for all }i\in{\mathbb N}.
\]
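As a simple illustration (standard, and not needed in the sequel): on $X={\mathbb R}$ with the Lebesgue measure and $F=[0,1]$, the functions $g_i=i\chi_{(0,1/i)}$ satisfy $\Vert g_i\Vert_{L^1(F)}=1$ for every $i\in{\mathbb N}$, yet the second condition fails for the sets $A_i=(0,1/i)$, since
\[
\mathcal L^1(A_i)=\frac{1}{i}\to 0\qquad\text{while}\qquad \int_{A_i}g_i\,d\mathcal L^1=1\quad\text{for all }i\in{\mathbb N}.
\]
Thus equi-integrability is strictly stronger than boundedness in $L^1(F)$.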
We will need the following equi-integrability result that partially generalizes \cite[Lemma 6]{FHK}. For the construction of Whitney coverings that are needed in the result, see e.g. \cite[Theorem 3.1]{BBS07}.
\begin{lemma}\label{lem:generalized equiintegrability}
Let $\Omega\subset X$ be open, let $F\subset\Omega$ be $\mu$-measurable, and let $\nu$ be a Radon measure with finite mass in $\Omega$. Write the decomposition of $\nu$ into the absolutely continuous and singular parts with respect to $\mu$ as $d\nu=a\,d\mu+d\nu^s$, and assume that $\nu^s(F)=0$. Take a sequence of open sets $F_i$ such that $F\subset F_i\subset \Omega$ and $\nu^s(F_i)<1/i$, $i\in{\mathbb N}$. For a given $\tau\ge 1$ and every $i\in{\mathbb N}$, take a Whitney covering $\{B_j^i=B(x_j^i,r_j^i)\}_{j=1}^{\infty}$ of $F_i$ such that $r_j^i\le 1/i$ for every $j\in{\mathbb N}$, $\tau B_j^i\subset F_i$ for every $j\in{\mathbb N}$, every ball $\tau B_k^i$ meets at most $c_o=c_o(c_d,\tau)$ balls $\tau B_j^i$, and if $\tau B_j^i$ meets $\tau B_k^i$, then $r_j^i\le 2r_k^i$. Define the functions
\[
g_i=\sum_{j=1}^{\infty}\chi_{B_j^i}\frac{\nu(\tau B_j^i)}{\mu(B_j^i)},\quad\ i\in{\mathbb N}.
\]
Then the sequence $g_i$ is equi-integrable in $F$. Moreover, a subsequence of $g_i$ converges weakly in $L^1(F)$ to a function $\check{a}$ that satisfies $\check{a}\le c_oa$ $\mu$-almost everywhere in $F$.
\end{lemma}
\begin{remark}
If the measure $\nu$ is absolutely continuous in the whole of $\Omega$, then we can choose $F=F_i=\Omega$ for all $i\in{\mathbb N}$.
\end{remark}
\begin{proof}
To check the first condition of equi-integrability, let $\varepsilon>0$ and take a ball $B=B(x_0,R)$ with $x_0\in X$ and $R>0$ so large that $\nu(\Omega\setminus B(x_0,R))<\varepsilon/c_o$. Then, by the bounded overlap property of the Whitney balls, we have
\[
\int_{F\setminus B(x_0,R+2\tau)}g_i\,d\mu\le c_o\nu(F_i\setminus B(x_0,R))<\varepsilon
\]
for all $i\in{\mathbb N}$.
To check the second condition, assume by contradiction that there is a sequence of $\mu$-measurable sets $A_i\subset F$ with $\mu(A_i)\to 0$, and $\int_{A_i}g_i\,d\mu>\eta>0$ for all $i\in{\mathbb N}$. Fix $\varepsilon>0$. We know that there is $\delta>0$ such that if $A\subset \Omega$ and $\mu(A)<\delta$, then $\int_Aa\,d\mu<\varepsilon$. Note that by the bounded overlap property of the Whitney balls, we have for every $i\in{\mathbb N}$
\begin{equation}\label{eq:equiintegrability first estimate}
\begin{split}
\int_{A_i}g_i\,d\mu&= \sum_{j=1}^{\infty}\frac{\mu(A_i\cap B_j^i)}{\mu(B_j^i)}\nu(\tau B_j^i)\\
& \le c_o\nu^s(F_i)+\sum_{j=1}^{\infty}\frac{\mu(A_i\cap B_j^i)}{\mu(B_j^i)}\int_{\tau B_j^i}a\,d\mu.
\end{split}
\end{equation}
Fix $k\in{\mathbb N}$. We can divide the above sum into two parts: let $I_1$ consist of those indices $j\in{\mathbb N}$ for which $\mu(A_i\cap B_j^i)/\mu(B_j^i)>1/k$, and let $I_2$ consist of the remaining indices. We estimate
\[
\mu\left(\bigcup_{j\in I_1}\tau B_j^i\right)\le C\sum_{j\in I_1}\mu(B_j^i)\le Ck\sum_{j\in I_1}\mu(A_i\cap B_j^i)\le Ck\mu(A_i)<\delta,
\]
when $i$ is large enough. Now we can further estimate \eqref{eq:equiintegrability first estimate}:
\[
\int_{A_i}g_i\,d\mu\le c_o\nu^s(F_i)+\frac{c_o}{k}\int_{F_i}a\,d\mu+c_o\varepsilon
\]
for large enough $i\in{\mathbb N}$. By letting first $i\to \infty$, then $k\to \infty$, and finally $\varepsilon\to 0$, we get a contradiction with $\int_{A_i}g_i\,d\mu>\eta>0$, proving the equi-integrability.
Finally, let us prove the weak convergence in $L^1(F)$. Possibly by taking a subsequence which we still denote $g_i$, we have $g_i\to \check{a}$ weakly in $L^1(F)$ for some $\check{a}\in L^1(F)$, by the Dunford-Pettis theorem (see e.g. \cite[Theorem 1.38]{AmbFP00}). By this weak convergence and the bounded overlap property of the Whitney balls, we can estimate for any $x\in F$ and $0<\widetilde{r}<r$
\begin{align*}
\int_{B(x,\widetilde{r})\cap F}\check{a}\,d\mu&=\limsup_{i\to\infty}\int_{B(x,\widetilde{r})\cap F}g_i\,d\mu\\
&=\limsup_{i\to\infty}\sum_{j=1}^{\infty}\frac{\mu(B_j^i\cap B(x,\widetilde{r})\cap F)}{\mu(B_j^i)}\nu(\tau B_j^i)\\
&\le \limsup_{i\to\infty}\sum_{j\in{\mathbb N}:\,B_j^i\cap B(x,\widetilde{r})\cap F\ne \emptyset}\nu(\tau B_j^i)\\
&\le \limsup_{i\to\infty}c_o\nu(B(x,r)).
\end{align*}
By letting $\widetilde{r}\nearrow r$, we get
\[
\int_{B(x,r)\cap F}\check{a}\,d\mu\le c_o\nu(B(x,r)).
\]
By the Radon-Nikodym theorem, $\mu$-almost every $x\in F$ satisfies
\[
\lim_{r\to 0}\,\vint{B(x,r)\cap F}\check{a}\,d\mu=\check{a}(x)\quad\textrm{and}\quad\lim_{r\to 0}\frac{\nu^s(B(x,r))}{\mu(B(x,r))}=0.
\]
By using these estimates as well as the previous one, we get for $\mu$-almost every $x\in F$
\[
\begin{split}
\check{a}(x)&=\lim_{r\to 0}\,\vint{B(x,r)\cap F}\check{a}\,d\mu \\
&\le c_o\limsup_{r\to 0}\,\vint{B(x,r)}a\,d\mu+c_o\limsup_{r\to 0}\frac{\nu^s(B(x,r))}{\mu(B(x,r))},
\end{split}
\]
where, by the Radon-Nikodym theorem, the first term on the right-hand side equals $c_oa(x)$ for $\mu$-almost every $x\in F$, and the second term vanishes. Thus we have $\check{a}\le c_oa$ $\mu$-almost everywhere in $F$.
\end{proof}
Now we are ready to prove the estimate from above.
\begin{theorem}\label{thm:estimate from above}
Let $\Omega$ be an open set, and let $u\in L^1_{\mathrm{loc}}(\Omega)$ with $\mathcal F(u,\Omega)<\infty$. Let $d\Vert Du\Vert=a\,d\mu+d\,\Vert Du\Vert^s$ be the decomposition of the variation measure, where $a\in L^1(\Omega)$ and $\Vert Du\Vert^s$ is the singular part. Then we have
\[
\mathcal F(u,\Omega)\le \int_{\Omega}f(Ca)\,d\mu+f_{\infty}\Vert Du\Vert^s(\Omega),
\]
with $C=C(c_d,c_P,\lambda)$.
\end{theorem}
\begin{proof}
Since the functional $\mathcal F(u,\cdot)$ is a Radon measure by Theorem \ref{thm:measure_prop}, we can decompose it into the absolutely continuous and singular parts as $\mathcal F(u,\cdot)=\mathcal F^a(u,\cdot)+\mathcal F^s(u,\cdot)$. Again, the singular parts $\Vert Du \Vert^s$ and $\mathcal F^s(u,\cdot)$ are concentrated on a Borel set $D \subset \Omega$ that satisfies $\mu (D)=0$ and
\[
\Vert Du\Vert^s (\Omega \setminus D)=0=\mathcal F^s(u,\Omega \setminus D),
\]
see e.g. \cite[p. 42]{EvGa}.
First we prove the estimate for the singular part. Let $\varepsilon >0$. Choose an open set $G$ with $D \subset G \subset \Omega$, such that $\mu (G)<\varepsilon$ and $\Vert Du \Vert (G) < \Vert Du \Vert (D)+\varepsilon$. Take a sequence $u_i \in \mathrm{Lip}_{\mathrm{loc}}(G)$ such that $u_i \to u$ in $L^1_{\mathrm{loc}}(G)$ and
\[
\int_G g_{u_i}\,d\mu \to \Vert Du\Vert (G) \quad\textrm{as}\ \ i\to\infty.
\]
Thus for some $i\in{\mathbb N}$ large enough, we have
\[
\int_G g_{u_i}\,d\mu < \Vert Du\Vert (G)+\varepsilon
\]
and
\[
\mathcal F(u,G)< \int_G f(g_{u_i}) \, d\mu +\varepsilon.
\]
The last inequality necessarily holds for large enough $i$ by the definition of the functional $\mathcal F(u,\cdot)$. Now, using the two inequalities above and the estimate for $f$ given in \eqref{eq:estimate for f}, we can estimate
\begin{align*}
\mathcal F(u,D) & \le \mathcal F(u,G)
\le \int_G f(g_{u_i}) \, d\mu +\varepsilon \\
& \le \int_G f(0)\, d\mu+f_{\infty}\int_G g_{u_i}\, d\mu +\varepsilon \\
& \le f(0)\mu(G)+f_{\infty}\Vert Du\Vert (G) +f_{\infty}\varepsilon+\varepsilon \\
& \le f(0)\varepsilon +f_{\infty}( \Vert Du\Vert (D)+\varepsilon) +f_{\infty}\varepsilon+\varepsilon.
\end{align*}
In the last inequality we used the properties of the set $G$ given earlier. Letting $\varepsilon \to 0$, we get the estimate from above for the singular part, i.e.
\begin{equation}\label{eq:estimate from above singular}
\mathcal F^s(u,\Omega)=\mathcal F(u,D)\le f_{\infty}\Vert Du\Vert(D)= f_{\infty}\Vert Du\Vert^s (\Omega).
\end{equation}
Next let us consider the absolutely continuous part.
Let $D$ be defined as above, and let $F=\Omega \setminus D$. Let $\varepsilon >0$. Take an open set $G$ such that $F\subset G\subset \Omega$, and $\Vert Du\Vert (G)<\Vert Du\Vert (F)+\varepsilon$.
For every $i\in{\mathbb N}$, take a Whitney covering $\{B_j^i=B(x_j^i,r_j^i)\}_{j=1}^{\infty}$ of $G$ such that $r_j^i\le 1/i$ for every $j\in{\mathbb N}$, $5\lambda B_j^i\subset G$ for every $j\in{\mathbb N}$, every ball $5\lambda B_k^i$ meets at most $C=C(c_d,\lambda)$ balls $5\lambda B_j^i$, and if $5\lambda B_j^i$ meets $5\lambda B_k^i$, then $r_j^i\le 2r_k^i$. Then take a partition of unity $\{\phi_j^i\}_{j=1}^{\infty}$ subordinate to this cover, such that $0\le \phi_j^i\le 1$, each $\phi_j^i$ is a $C(c_d)i$-Lipschitz function, and $\supp(\phi_j^i)\subset 2B_j^i$ for every $j\in{\mathbb N}$ (see e.g. \cite[Theorem 3.4]{BBS07}). Define discrete convolutions with respect to the Whitney coverings by
\[
u_i=\sum_{j=1}^{\infty}u_{B_j^i}\phi_j^i,\quad\ i\in{\mathbb N}.
\]
We know that $u_i\to u$ in $L^1(G)$ as $i\to\infty$, and that each $u_i$ has an upper gradient
\[
g_i=C\sum_{j=1}^{\infty}\chi_{B_j^i}\frac{\Vert Du\Vert(5\lambda B_j^i)}{\mu(B_j^i)}
\]
with $C=C(c_d,c_P)$, see e.g. the proof of \cite[Proposition 4.1]{KKST}. We can of course write the decomposition $g_i=g_i^a+g_i^s$, where
\[
g_i^a=C\sum_{j=1}^{\infty}\chi_{B_j^i}\frac{\int_{5\lambda B_j^i}a\,d\mu}{\mu(B_j^i)}
\]
and
\[
g_i^s=C\sum_{j=1}^{\infty}\chi_{B_j^i}\frac{\Vert Du\Vert^s(5\lambda B_j^i)}{\mu(B_j^i)}.
\]
By the bounded overlap property of the coverings, we can easily estimate
\begin{equation}\label{eq:mass of singular parts}
\int_Gg_i^s\,d\mu\le \widetilde{C}\Vert Du\Vert^s(G)<\widetilde{C}\varepsilon
\end{equation}
for every $i\in{\mathbb N}$, with $\widetilde{C}=\widetilde{C}(c_d,c_P,\lambda)$. Furthermore, by Lemma \ref{lem:generalized equiintegrability} we know that the sequence $g_i^a$ is equi-integrable and that a subsequence, which we still denote $g_i^a$, converges weakly in $L^1(G)$ to a function $\check{a}\le Ca$, with $C=C(c_d,\lambda)$.
By Mazur's lemma we have for certain convex combinations, denoted by a hat,
\[
\widehat{g_i^a}=\sum_{j=i}^{N_i}d_{i,j}g_j^a \to \check{a}\quad\textrm{in}\ L^1(G) \ \textrm{as}\ i\to\infty,
\]
where $d_{i,j}\ge 0$ and $\sum_{j=i}^{N_i}d_{i,j}=1$ for every $i\in {\mathbb N}$ \cite[Theorem 3.12]{Rud}. We note that $\widehat{u_i}\in\mathrm{Lip}_{\mathrm{loc}}(G)$ for every $i\in{\mathbb N}$ (the hat always means that we take the same convex combinations), $\widehat{u_i}\to u$ in $L^1_{\mathrm{loc}}(G)$, and $g_{\widehat{u_i}}\le \widehat{g_i}$ $\mu$-almost everywhere for every $i\in {\mathbb N}$ (recall that $g_u$ always means the minimal $1$-weak upper gradient of $u$).
Using the definition of $\mathcal F(u,\cdot)$, the fact that $f$ is $L$-Lipschitz, and \eqref{eq:mass of singular parts}, we get
\begin{align*}
\mathcal F(u,F)&\le \mathcal F(u,G)
\le \liminf_{i\to \infty} \int_G f(g_{\widehat{u_i}})\,d\mu\\
&\le \liminf_{i\to \infty}\int_Gf(\widehat{g_i})\,d\mu
\le \liminf_{i\to \infty}\left( \int_Gf(\widehat{g_i^a})\,d\mu+\int_GL\widehat{g_i^s}\,d\mu\right)\\
&\le \liminf_{i\to \infty}\left( \int_Gf(\widehat{g_i^a})\,d\mu+L\widetilde{C}\varepsilon\right)
= \int_Gf(\check{a})\,d\mu+L\widetilde{C}\varepsilon\\
&\le \int_Gf(Ca)\,d\mu+L\widetilde{C}\varepsilon
\le \int_{\Omega}f(Ca)\,d\mu+L\widetilde{C}\varepsilon.
\end{align*}
By letting $\varepsilon \to 0$ we get the estimate from above for the absolutely continuous part, i.e.
\[
\mathcal F^a(u,\Omega)= \mathcal F(u,F)\le \int_{\Omega}f(Ca)\,d\mu.
\]
By combining this with \eqref{eq:estimate from above singular}, we get the desired estimate from above.
\end{proof}
\begin{remark}\label{rem:integral representation}
By using Theorems \ref{thm:estimate from below} and \ref{thm:estimate from above}, as well as the definition of the functional for general sets given in \eqref{eq:def of F for general sets}, we can conclude that for any $\mu$-measurable set $A\subset\Omega\subset X$ with $\mathcal F(u,\Omega)<\infty$, we have
\[
\mathcal F^s(u,A)=f_{\infty}\Vert Du\Vert^s(A)
\]
and
\[
\int_{A}f(a)\,d\mu\le \mathcal F^a(u,A)\le \int_{A}f(Ca)\,d\mu,
\]
where $\mathcal F^a(u,\cdot)$ and $\mathcal F^s(u,\cdot)$ are again the absolutely continuous and singular parts of the measure given by the functional.
\end{remark}
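As a consistency check, if $f$ happens to be linear, $f(t)=f_{\infty}t$, then directly from the definitions of the functional and of the total variation we get
\[
\mathcal F(u,A)=f_{\infty}\Vert Du\Vert(A)
\]
for every open $A\subset\Omega$, so the lower bound above is attained; the example at the end of this section shows that for a genuinely nonlinear $f$, the constant $C$ in the upper bound cannot be removed.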
Since locally Lipschitz functions are dense in the Newtonian space $N^{1,1}(\Omega)$ with $\Omega$ open \cite[Theorem 5.47]{BB}, from the definition of total variation we know that if $u\in N^{1,1}(\Omega)$, then $u\in\mathrm{BV}(\Omega)$ with $\Vert Du\Vert$ absolutely continuous, and more precisely
\[
\Vert Du\Vert(\Omega)\le \int_{\Omega}g_u\,d\mu.
\]
As a by-product of the latter part of the proof of the previous theorem, we obtain the following converse, which also answers a question posed in \cite{KKST}. A later example will show that the constant $C$ is necessary here as well.
\begin{theorem}\label{thm:min upper gradient and variation}
Let $\Omega\subset X$ be an open set, let $u\in\mathrm{BV}(\Omega)$, and let $d\Vert Du\Vert=a\,d\mu+d\Vert Du\Vert^s$ be the decomposition of the variation measure, where $a\in L^1(\Omega)$ and $\Vert Du\Vert^s$ is the singular part. Let $F\subset \Omega$ be a $\mu$-measurable set for which $\Vert Du\Vert^s(F)=0$. Then, by modifying $u$ on a set of $\mu$-measure zero if necessary, we have $u|_F\in N^{1,1}(F)$ and $g_u\le Ca$ $\mu$-almost everywhere in $F$, with $C=C(c_d,c_P,\lambda)$.
\end{theorem}
\begin{proof}
We pick a sequence of open sets $F_i$ such that $F\subset F_i\subset \Omega$ and $\Vert Du\Vert^s(F_i)<1/i$, $i=1,2,\ldots$. Then, as described in Lemma \ref{lem:generalized equiintegrability}, we pick Whitney coverings $\{B_j^i\}_{j=1}^{\infty}$ of the sets $F_i$, with the constant $\tau=5\lambda$.
Furthermore, as we did in the latter part of the proof of Theorem \ref{thm:estimate from above} with the open set $G$, we define for every $i\in{\mathbb N}$ a discrete convolution $u_i$ of the function $u$ with respect to the Whitney covering $\{B_j^i\}_{j=1}^{\infty}$. Every $u_i$ has an upper gradient
\[
g_i=C\sum_{j=1}^{\infty}\chi_{B_j^i}\frac{\Vert Du\Vert(5\lambda B_j^i)}{\mu(B_j^i)}
\]
in $F_i$, with $C=C(c_d,c_P)$, and naturally $g_i$ is then also an upper gradient of $u_i$ in $F$. We have $u_i\to u$ in $L^1(F)$ (see e.g. the proof of \cite[Proposition 4.1]{KKST}) and, according to Lemma \ref{lem:generalized equiintegrability} and up to a subsequence, $g_i\to \check{a}$ weakly in $L^1(F)$, where $\check{a}\le Ca$ $\mu$-almost everywhere in $F$. We now know by \cite[Lemma 7.8]{Hj} that by modifying $u$ on a set of $\mu$-measure zero, if necessary, we have that $\check{a}$ is a $1$-weak upper gradient of $u$ in $F$. Thus we have the result.
\end{proof}
\begin{remark}
As in Lemma \ref{lem:generalized equiintegrability}, if $\Vert Du\Vert$ is absolutely continuous on the whole of $\Omega$, we can choose simply $F=\Omega$, and then we also have the inequality
\[
\int_{\Omega}g_u\,d\mu\le C\Vert Du\Vert(\Omega)
\]
with $C=C(c_d,c_P,\lambda)$. Note also that the proof of \cite[Lemma 7.8]{Hj}, which we used above, is also based on Mazur's lemma, so the techniques used above are very similar to those used in the proof of Theorem \ref{thm:estimate from above}.
\end{remark}
Finally we give the counterexample which shows that in general, we can have
\[
\mathcal F^a(u,\Omega)> \int_{\Omega}f(a)\,d\mu
\quad\text{and}\quad\Vert Du\Vert(\Omega)< \int_{\Omega}g_u\,d\mu.
\]
The latter inequality answers a question raised in \cite{M} and later in \cite{AMP}.
\begin{example}
Take the space $X=[0,1]$, equipped with the Euclidean distance and a measure $\mu$, which we will next define. First we construct a fat Cantor set $A$ as follows. Take $A_0=[0,1]$, whose measure we denote $\alpha_0=\mathcal L^1(A_0)=1$, where $\mathcal L^1$ is the 1-dimensional Lebesgue measure. Then in each step $i\in{\mathbb N}$ we remove from $A_{i-1}$ the set $B_i$, which consists of $2^{i-1}$ open intervals of length $2^{-2i}$, centered at the middle points of the intervals that make up $A_{i-1}$. We denote $\alpha_i=\mathcal L^1(A_i)$, and define $A=\cap_{i=1}^{\infty}A_i$. Then we have
\[
\alpha=\mathcal L^1(A) =\lim_{i\to\infty}\alpha_i=1/2.
\]
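A short verification of this value: at step $i$ the removed set $B_i$ has total length
\[
\mathcal L^1(B_i)=2^{i-1}\cdot 2^{-2i}=2^{-(i+1)},\qquad\text{so}\qquad \alpha=1-\sum_{i=1}^{\infty}2^{-(i+1)}=\frac{1}{2}.
\]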
Now, equip the space $X$ with the weighted Lebesgue measure $d\mu=w\,d\mathcal L^1$, where $w=2$ in $A$ and $w=1$ in $X\setminus A$. Define
\[
g=\frac{1}{\alpha}\chi_{A}=2\chi_{A} \quad\ \textrm{and}\quad\ g_i=\frac{1}{\alpha_{i-1}-\alpha_i}\chi_{B_i},\ \ i\in{\mathbb N}.
\]
The unweighted integral of $g$ and each $g_i$ over $X$ is $1$.
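Explicitly, since $\mathcal L^1(A)=\alpha$ and $\mathcal L^1(B_i)=\alpha_{i-1}-\alpha_i$,
\[
\int_Xg\,d\mathcal L^1=\frac{\mathcal L^1(A)}{\alpha}=1\qquad\text{and}\qquad \int_Xg_i\,d\mathcal L^1=\frac{\mathcal L^1(B_i)}{\alpha_{i-1}-\alpha_i}=1,\quad i\in{\mathbb N}.
\]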
Next define the function
\[
u(x)=\int_0^x g\, d\mathcal L^1.
\]
Now $u$ is in $N^{1,1}(X)$ and even in $\Lip(X)$, since $g$ is bounded. The minimal 1-weak upper gradient of $u$ is $g$ --- this can be seen e.g. by the representation formulas for minimal upper gradients, see \cite[Theorem 2.50]{BB}. Approximate $u$ with the functions
\[
u_i(x)=\int_0^x g_i\, d\mathcal L^1,\quad i\in{\mathbb N}.
\]
The functions $u_i$ are Lipschitz, and they converge to $u$ in $L^1(X)$ and even uniformly. This can be seen as follows. Given $i\in{\mathbb N}$, the set $A_i$ consists of $2^i$ intervals of length $\alpha_i/2^i$. If $I$ is one of these intervals, we have
\[
2^{-i}=\int_Ig\,d\mathcal L^1 =\int_Ig_{i+1}\,d\mathcal L^1,
\]
and also
\[
\int_{X\setminus A_i}g\,d\mathcal L^1 =0=\int_{X\setminus A_i}g_{i+1}\,d\mathcal L^1 .
\]
Hence $u_{i+1}=u$ at the end points of the intervals that make up $A_i$, and elsewhere $|u_{i+1}-u|$ is at most $2^{-i}$.
Clearly the minimal 1-weak upper gradient of $u_i$ is $g_i$. However, we have
\[
\int_0^1 g\,d\mu = 2 > 1 = \lim_{i\to\infty} \int_0^1 g_i\,d\mu \geq \Vert Du\Vert([0,1]).
\]
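Here the two integrals are computed directly from the definition of the weight: $w=2$ on $A$, where $g=2$, and $w=1$ on each $B_i$, where $g_i$ is supported, so
\[
\int_0^1g\,d\mu=4\,\mathcal L^1(A)=2\qquad\text{and}\qquad \int_0^1g_i\,d\mu=\int_{B_i}\frac{d\mathcal L^1}{\alpha_{i-1}-\alpha_i}=1,
\]
while the last inequality in the display follows from the definition of the total variation.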
Thus the total variation is strictly smaller than the integral of the minimal 1-weak upper gradient, demonstrating the necessity of the constant $C$ in Theorem \ref{thm:min upper gradient and variation}. On the other hand, any approximating sequence $u_i\to u$ in $L^1(X)$ converges, up to a subsequence, also pointwise $\mu$- and thus $\mathcal L^1$-almost everywhere, and then we necessarily have for some such sequence
\begin{equation}\label{eq:gradients of approximating sequence}
\Vert Du\Vert([0,1])=\lim_{i\to\infty}\int_0^1g_{u_i}\,d\mu\ge \limsup_{i\to\infty}\int_0^1g_{u_i}\,d\mathcal L^1\ge 1.
\end{equation}
Hence we have $\Vert Du\Vert([0,1])=1$.
Let us show that more precisely, $d\Vert Du\Vert=a\,d\mu$ with $a=\chi_{A}$. The fact that $u$ is Lipschitz implies that $\Vert Du\Vert$ is absolutely continuous with respect to $\mu$. Since $u_i$ converges to $u$ uniformly, for any interval $(d,e)$ we must have
\[
\lim_{i\to\infty} \int_{(d,e)} g_i \,d\mathcal L^1 = \int_{(d,e)} g \,d\mathcal L^1 ,
\]
and since for the weight we had $w=1$ where $g_i> 0$, and $w=2$ where $g> 0$, we now get
\[
\lim_{i\to\infty} \int_{(d,e)} g_i \,d\mu = \frac{1}{2} \int_{(d,e)} g \,d\mu.
\]
By the definition of the variation measure, we have at any point $x\in X$ for $r>0$ small enough
\[
\Vert Du\Vert((x-r,x+r)) \le \liminf_{i\to\infty}\int_{(x-r,x+r)} g_i \,d\mu = \frac{1}{2} \int_{(x-r,x+r)} g \,d\mu.
\]
Now, if $x\in A$, we can estimate the Radon-Nikodym derivative
\[
\limsup_{r\to 0}\frac{\Vert Du\Vert(B(x,r))}{\mu(B(x,r))}\le 1,
\]
and when $x\in X\setminus A$, we clearly have that the derivative is $0$. On the other hand, if the derivative were strictly smaller than $1$ in a subset of $A$ of positive $\mu$-measure, we would get $\Vert Du\Vert(X)<1$, which is a contradiction with the fact that $\Vert Du\Vert(X)=1$.
Thus $d\Vert Du\Vert=a\,d\mu$ with $a=\chi_{A}$.\footnote{We can further show that $g_i\,d\mu \overset{*}{\rightharpoonup} a\,d\mu$ in $X$, but we do not have $g_i\to a$ weakly in $L^1(X)$, demonstrating the subtle difference between the two types of weak convergence.}
To show that we can have $\mathcal F^a(u,X)> \int_{X}f(a)\,d\mu$ --- note that $\mathcal F^a(u,X)=\mathcal F(u,X)$ --- assume that $f$ is given by
\[
f(t)=\begin{cases}
t, & t\in[0,1], \\
2t-1, & t>1.
\end{cases}
\]
(We could equally well consider other nonlinear $f$ that satisfy the earlier assumptions.)
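Observe that this $f$ can be written as $f(t)=\max\{t,\,2t-1\}$ for $t\ge 0$, so it is convex, nondecreasing and $2$-Lipschitz with $f(0)=0$, and
\[
t\le f(t)\le 2t\quad\text{for all }t\ge 0,\qquad f_{\infty}=2.
\]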
Since $a=\chi_{A}$, we have
\[
\int_{X}f(a)\,d\mu=\int_{X}a\,d\mu= 2\int_X\chi_A\,d\mathcal L^1=1.
\]
On the other hand, for some sequence of Lipschitz functions $v_i\to u$ in $L^1(X)$, we have
\begin{equation}\label{eq:example}
\begin{split}
\mathcal F(u,X) &=\lim_{i\to\infty}\int_{X}f(g_{v_i})\,d\mu\\
&=\lim_{i\to\infty} \left(2\int_Af(g_{v_i})\,d\mathcal L^1+\int_{X\setminus A}f(g_{v_i})\,d\mathcal L^1 \right).
\end{split}
\end{equation}
By considering a subsequence, if necessary, we may assume that $v_i\to u$ pointwise $\mu$- and thus $\mathcal L^1$-almost everywhere.
By Proposition \ref{prop:weak convergence}, we have for any closed set $F\subset X\setminus A$
\[
\limsup_{i\to\infty}\int_{F} f(g_{v_i})\,d\mu\le\mathcal F(u,F)\le\mathcal F(u,X\setminus A)\le\int_{X\setminus A}f(g_u)\,d\mu=0,
\]
which implies that
\[
\lim_{i\to\infty}\int_{F} f(g_{v_i})\,d\mathcal L^1=0=\lim_{i\to\infty}\int_{F} g_{v_i}\,d\mathcal L^1.
\]
Applying these two equalities together with the inequality $f(t)\geq 2t-1$, we obtain
\begin{align*}
\limsup_{i\to\infty} \int_{X\setminus A} f(g_{v_i}) \,d\mathcal L^1 &= \limsup_{i\to\infty} \int_{X\setminus (A\cup F)} f(g_{v_i}) \,d\mathcal L^1\\
&\ge \limsup_{i\to\infty} \int_{X\setminus (A\cup F)} (2g_{v_i}-1) \,d\mathcal L^1\\
&\ge \limsup_{i\to\infty} \int_{X\setminus (A\cup F)} 2g_{v_i} \,d\mathcal L^1 - \mathcal L^1(X\setminus (A\cup F))\\
&=\limsup_{i\to\infty} \int_{X\setminus A} 2 g_{v_i} \,d\mathcal L^1 - \mathcal L^1(X\setminus (A\cup F)).
\end{align*}
The last term on the last line can be made arbitrarily small, since by inner regularity the closed set $F\subset X\setminus A$ can be chosen with $\mathcal L^1((X\setminus A)\setminus F)$ as small as we wish.
Inserting this into \eqref{eq:example}, we get
\begin{align*}
\mathcal F(u,X)&=\limsup_{i\to\infty} \left(2\int_A f(g_{v_i})\,d\mathcal L^1+\int_{X\setminus A}f(g_{v_i})\,d\mathcal L^1 \right)\\
&\ge 2\liminf_{i\to\infty}\int_Af(g_{v_i})\,d\mathcal L^1+ 2\limsup_{i\to\infty}\int_{X\setminus A}g_{v_i}\,d\mathcal L^1 \\
&\ge 2\liminf_{i\to \infty}\int_0^1g_{v_i}\,d\mathcal L^1\ge 2.
\end{align*}
The last inequality follows from the pointwise convergence of $v_i$ to $u$ $\mathcal L^1$-almost everywhere.
Roughly speaking, we note that the total variation $\Vert Du\Vert(X)$ is found to be unexpectedly small because the growth of the approximating functions $u_i$ is concentrated outside the Cantor set $A$, where it is ``cheaper'' due to the smaller value of the weight function. However, when we calculate $\mathcal F(u,X)$, the same does not work, because now the nonlinear function $f$ places ``extra weight'' on upper gradients that take values larger than $1$.
\end{example}
\section{Minimization problem}
Let us consider a minimization problem related to the functional of linear growth. First we specify what we mean by boundary values of $\mathrm{BV}$ functions.
\begin{definition}
Let $\Omega$ and $\Omega^*$ be bounded open subsets of $X$ such that $\Omega\Subset\Omega^*$, and assume that $h\in\mathrm{BV}(\Omega^*)$.
We define $\mathrm{BV}_{h}(\Omega)$ as the
space of functions $u\in\mathrm{BV}(\Omega^*)$ such that $u=h$ $\mu$-almost everywhere in $\Omega^*\setminus\Omega$.
\end{definition}
Now we give the definition of our minimization problem.
\begin{definition}\label{def:minimization problem}
A function $u\in \mathrm{BV}_{h}(\Omega)$ is a minimizer of the functional of linear growth
with the boundary values $h\in\mathrm{BV}(\Omega^*)$, if
\[
\mathcal F(u,\Omega^*)= \inf\mathcal F(v,\Omega^*),
\]
where the infimum is taken over all $v\in\mathrm{BV}_h(\Omega)$.
\end{definition}
Note that if $u\in L^1_{\mathrm{loc}}(\Omega^*)$ and $u=h$ in $\Omega^*\setminus \Omega$, then $u\in L^1(\Omega^*)$. Furthermore, if $\mathcal F(u,\Omega^*)<\infty$, then $\Vert Du\Vert(\Omega^*)<\infty$ by \eqref{eq:basic estimate for functional}. Thus it makes sense to restrict $u$ to the class $\mathrm{BV}(\Omega^*)$ in the above definition.
Observe that the minimizers do not depend on $\Omega^*$, but the value of the functional does.
Note also that the minimization problem always has a solution and that the solution is not necessarily continuous, see \cite{HKL}.
\begin{remark}
We point out that any minimizer is also a local minimizer in the following sense. A minimizer $u\in \mathrm{BV}_{h}(\Omega)$ of $\mathcal F(\cdot,\Omega^*)$ with the boundary values $h\in\mathrm{BV}(\Omega^*)$ is
a minimizer of $\mathcal F(\cdot,\Omega'')$ with the boundary values $u\in\mathrm{BV}_{u}(\Omega')$ for every $\Omega'\Subset\Omega''\subset\Omega^*$, with $\Omega'\subset\Omega$. This can be seen as follows. Every
$v\in\mathrm{BV}_{u}(\Omega')$ can be extended to $\Omega^*$ by defining $v=u$ in $\Omega^*\setminus\Omega''$. The minimality of $u$ and the measure property of the functional (Theorem \ref{thm:measure_prop}) then imply that
\[
\mathcal F(u,\Omega^*\setminus\Omega'') + \mathcal F(u,\Omega'')\leq\mathcal F(v,\Omega^*\setminus\Omega'') + \mathcal F(v,\Omega'').
\]
Since $u=v$ $\mu$-almost everywhere in $\Omega^*\setminus\Omega'$, the first terms on both sides of the inequality cancel out, and we have
\[
\mathcal F(u,\Omega'')\leq \mathcal F(v,\Omega'').
\]
\end{remark}
Now we wish to express the boundary values of the minimization problem as a penalty term involving an integral over the boundary.
To this end, we need to discuss boundary traces and extensions of $\mathrm{BV}$ functions.
\begin{definition}
An open set $\Omega$ is a strong $\mathrm{BV}$ extension domain if for every $u\in \mathrm{BV}(\Omega)$ there is an extension $Eu\in \mathrm{BV}(X)$ such that $Eu|_{\Omega}=u$ and $\Vert D(Eu)\Vert(\partial\Omega)=0$, and there is a constant $1\le c_{\Omega}<\infty$, independent of $u$, such that $\Vert Eu\Vert_{\mathrm{BV}(X)}\le c_{\Omega}\Vert u\Vert_{\mathrm{BV}(\Omega)}$.
\end{definition}
Note that our definition differs from the conventional definition of a $\mathrm{BV}$ extension domain, since we also require that $\Vert D(Eu)\Vert(\partial\Omega)=0$. This can be understood as an additional regularity condition for the domain.
\begin{definition}
We say that a $\mu$-measurable set $\Omega$ satisfies the \emph{weak measure density condition} if for $\mathcal H$-almost every $x\in\partial \Omega$, we have
\[
\liminf_{r\to 0}\frac{\mu(B(x,r)\cap\Omega)}{\mu(B(x,r))}>0.
\]
\end{definition}
These are the two conditions we will impose in order to have satisfactory results on the boundary traces of $\mathrm{BV}$ functions.
Based on results found in \cite{BS}, we prove in the upcoming note \cite{L} that every bounded \emph{uniform} domain is a strong $\mathrm{BV}$ extension domain and satisfies the weak measure density condition. An open set $\Omega$ is $A$-uniform, with constant $A\ge 1$, if for every $x,y\in\Omega$ there is a curve $\gamma$ in $\Omega$ connecting $x$ and $y$ such that $\ell_{\gamma}\le Ad(x,y)$, and for all $t\in [0,\ell_{\gamma}]$, we have
\[
\dist(\gamma(t),X\setminus\Omega)\ge A^{-1}\min\{t,\ell_{\gamma}-t\}.
\]
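For orientation, a standard non-example: the slit disc $\Omega=B(0,1)\setminus([0,\tfrac12]\times\{0\})\subset{\mathbb R}^2$ is not $A$-uniform for any $A\ge 1$, since the points $x_{\varepsilon}=(\tfrac14,\varepsilon)$ and $y_{\varepsilon}=(\tfrac14,-\varepsilon)$ can only be joined in $\Omega$ by curves that travel around an endpoint of the slit, so that
\[
\ell_{\gamma}\ge \tfrac12\qquad\text{while}\qquad d(x_{\varepsilon},y_{\varepsilon})=2\varepsilon\to 0\quad\text{as }\varepsilon\to 0.
\]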
Now we give the definition of boundary traces.
\begin{definition}
For a $\mu$-measurable set $\Omega$ and a $\mu$-measurable function $u$ on $\Omega$, a real-valued function $T_{\Omega}u$ defined on $\partial\Omega$ is a boundary trace of $u$ if for $\mathcal H$-almost every $x\in\partial\Omega$, we have
\[
\lim_{r\to 0}\,\vint{\Omega\cap B(x,r)}|u-T_{\Omega}u(x)|\,d\mu=0.
\]
\end{definition}
Often we will also call $T_{\Omega}u(x)$ a boundary trace if the above condition is satisfied at the point $x$.
If the trace exists at a point $x\in\partial\Omega$, we clearly have
\[
T_{\Omega}u(x)=\lim_{r\to 0}\,\vint{B(x,r)\cap\Omega}u\,d\mu=\aplim\limits_{y\in\Omega,\, y\to x}u(y),
\]
where $\aplim$ denotes the approximate limit.
Furthermore, we can show that the trace is always a Borel function.
Let us recall the following decomposition result for the variation measure of a $\mathrm{BV}$ function from \cite[Theorem 5.3]{AMP}. For any open set $\Omega\subset X$, any $u\in\mathrm{BV}(\Omega)$, and any Borel set $A\subset \Omega$ that is $\sigma$-finite with respect to $\mathcal H$, we have
\begin{equation}\label{eq:decomposition}
\Vert Du\Vert(\Omega)=\Vert Du\Vert(\Omega\setminus A)+\int_A\int_{u^{\wedge}(x)}^{u^{\vee}(x)}\theta_{\{u>t\}}(x)\,dt\,d\mathcal H(x).
\end{equation}
The function $\theta$ and the lower and upper approximate limits $u^{\wedge}$ and $u^{\vee}$ were defined in Section \ref{sec:prelis}. In particular, by \cite[Theorem 5.3]{AMP} the jump set $S_u$ is known to be $\sigma$-finite with respect to $\mathcal H$.
The following is our main result on boundary traces.
\begin{theorem}\label{thm:boundary traces}
Assume that $\Omega$ is a strong $\mathrm{BV}$ extension domain that satisfies the weak measure density condition, and let $u\in\mathrm{BV}(\Omega)$. Then the boundary trace $T_{\Omega}u$ exists, that is, $T_{\Omega}u(x)$ is defined for $\mathcal H$-almost every $x\in\partial\Omega$.
\end{theorem}
\begin{proof}
Extend $u$ to a function $Eu\in\mathrm{BV}(X)$. By the fact that
\[
\Vert D(Eu)\Vert(\partial \Omega)=0
\]
and the decomposition \eqref{eq:decomposition}, we have $\mathcal H(S_{Eu}\cap \partial\Omega)=0$ --- recall that the function $\theta$ is bounded away from zero. Here
\[
S_{Eu}=\{x\in X:\,(Eu)^{\wedge}(x)<(Eu)^{\vee}(x)\},
\]
as usual. On the other hand, by \cite[Theorem 3.5]{KKST} we know that $\mathcal H$-almost every point $x\in\partial^*\Omega\setminus S_{Eu}$ is a Lebesgue point of $Eu$. In these points we define $T_{\Omega}u(x)$ simply as the Lebesgue limit $\widetilde{Eu}(x)$. For $\mathcal H$-almost every $x\in\partial\Omega$ the weak measure density condition is also satisfied, so that
\[
\liminf_{r\to 0}\frac{\mu(B(x,r)\cap\Omega)}{\mu(B(x,r))}=c>0.
\]
Thus for $\mathcal H$-almost every $x\in\partial\Omega$ we can estimate
\[
\begin{split}
\limsup_{r\to 0}\,&\vint{B(x,r)\cap\Omega}|u-T_{\Omega}u(x)|\,d\mu\\
&\le \limsup_{r\to 0}\frac{1}{c\mu(B(x,r))}\int_{B(x,r)}|Eu-\widetilde{Eu}(x)|\,d\mu=0.
\end{split}
\]
\end{proof}
Due to the Lebesgue point theorem \cite[Theorem 3.5]{KKST}, we have in fact
\[
\limsup_{r\to 0}\,\vint{B(x,r)\cap\Omega}|u-T_{\Omega}u(x)|^{Q/(Q-1)}\,d\mu=0
\]
for $\mathcal H$-almost every $x\in\partial\Omega$, where $Q>1$ was given in \eqref{eq:doubling dimension}. However, we will not need this stronger result.
Let us list some general properties of boundary traces.
\begin{proposition}\label{prop:prop of trace}
Assume that $\Omega$ is a $\mu$-measurable set and that $u$ and $v$ are $\mu$-measurable functions on $\Omega$. The boundary trace operator enjoys the following properties for any $x\in\partial\Omega$ for which both $T_{\Omega}u(x)$ and $T_{\Omega}v(x)$ exist:
\begin{enumerate}[(i)]
\item $T_{\Omega}(\alpha u + \beta v)(x)= \alpha\, T_{\Omega}u(x) +\beta\, T_{\Omega}v(x)$ for any $\alpha,\beta\in{\mathbb R}$.\\
\item If $u\ge v$ $\mu$-almost everywhere~in $\Omega$, then $T_{\Omega} u(x)\ge T_{\Omega} v(x)$. In particular, if
$u=v$ $\mu$-almost everywhere~in $\Omega$, then $T_{\Omega} u(x)= T_{\Omega} v(x)$.\\
\item $T_{\Omega}(\max\{u,v\})(x)=\max\{T_{\Omega}u(x),T_{\Omega}v(x)\}$ and
$T_{\Omega}(\min\{u,v\})(x)=\min\{T_{\Omega}u(x),T_{\Omega}v(x)\}$.\\
\item Let $h>0$ and define the truncation $u_h=\min\{h,\max\{u,-h\}\}$. Then $T_{\Omega} u_h (x)=(T_{\Omega}u(x))_h$.\\
\item If $\Omega$ is a $\mu$-measurable set such that both $\Omega$ and its complement satisfy the weak measure density condition, and $w$ is a $\mu$-measurable function on $X$, then for $\mathcal H$-almost every $x\in\partial\Omega$ at which both traces $T_{\Omega}w(x)$ and $T_{X\setminus \Omega}w(x)$ exist, we have
\[
\{T_{\Omega} w(x), T_{X\setminus\Omega}w(x)\}=\{w^{\wedge}(x), w^{\vee}(x)\}.
\]
\end{enumerate}
\end{proposition}
\begin{proof}
Assertions $(i)$ and $(ii)$ are clear. Since the minimum and maximum can be written as sums by using absolute values, namely $\max\{u,v\}=\tfrac{1}{2}(u+v+|u-v|)$ and $\min\{u,v\}=\tfrac{1}{2}(u+v-|u-v|)$, property $(iii)$ follows from $(i)$ and the easily verified fact that $T_{\Omega}|u|(x)=|T_{\Omega}u(x)|$. Assertion $(iv)$ follows from $(iii)$. In proving assertion $(v)$, due to the symmetry of the situation we can assume that $T_{\Omega}w(x)\ge T_{X\setminus\Omega}w(x)$. By using the definition of traces and Chebyshev's inequality, we deduce that for every $\varepsilon>0$,
\[
\lim_{r\to 0}\frac{\mu(\{|w-T_{\Omega}w(x)|>\varepsilon\}\cap B(x,r)\cap\Omega)}{\mu(B(x,r)\cap\Omega)}=0
\]
and
\[
\lim_{r\to 0}\frac{\mu(\{|w-T_{X\setminus\Omega}w(x)|>\varepsilon\}\cap B(x,r)\setminus\Omega)}{\mu(B(x,r)\setminus\Omega)}=0.
\]
To determine the lower and upper approximate limits, we use these results to compute
\begin{align*}
&\limsup_{r\to 0}\frac{\mu(\{w>t\}\cap B(x,r))}{\mu(B(x,r))}\\
& =\limsup_{r\to 0}\left[\frac{\mu(\{w>t\}\cap B(x,r)\cap\Omega)}{\mu(B(x,r))}+\frac{\mu(\{w>t\}\cap B(x,r)\setminus\Omega)}{\mu(B(x,r))} \right]\\
&\begin{cases}
=0+0, &\textrm{if }t>T_{\Omega}w(x), \\
=\limsup_{r\to 0}\frac{\mu(B(x,r)\cap\Omega)}{\mu(B(x,r))}+0, &\textrm{if }T_{X\setminus\Omega}w(x)<t<T_{\Omega}w(x), \\
= \limsup_{r\to 0}\left[\frac{\mu(B(x,r)\cap\Omega)}{\mu(B(x,r))}+\frac{\mu(B(x,r)\setminus\Omega)}{\mu(B(x,r))}\right], &\textrm{if }t<T_{X\setminus\Omega}w(x),
\end{cases}\\
&\begin{cases}
=0, &\textrm{if }t>T_{\Omega}w(x), \\
\in (0,1), &\textrm{if }T_{X\setminus\Omega}w(x)<t<T_{\Omega}w(x), \\
=1, &\textrm{if }t<T_{X\setminus\Omega}w(x).
\end{cases}
\end{align*}
To obtain the result ``$\in (0,1)$'' above, we used the weak measure density conditions.
We conclude that $w^{\vee}(x)=T_{\Omega}w(x)$, and since ``$\limsup$'' can be replaced by ``$\liminf$'' in the above calculation, we also get $w^{\wedge}(x)=T_{X\setminus\Omega}w(x)$.
\end{proof}
A minor point to be noted is that any function that is in the class $\mathrm{BV}(X)$, such as an extension $Eu$ for $u\in \mathrm{BV}(\Omega)$, is also in the class $\mathrm{BV}(\Omega)$, and thus $T_{\Omega}Eu=T_{\Omega}u$.
Eventually we will also need to make an additional assumption on the space, as described in the following definition which is from \cite[Definition 6.1]{AMP}. The function $\theta_E$ was introduced earlier in (\ref{eq:def of theta}).
\begin{definition}
We say that $X$ is a \emph{local space} if, given any two sets of locally finite perimeter $E_1\subset E_2\subset X$, we have $\theta_{E_1}(x)=\theta_{E_2}(x)$ for $\mathcal H$-almost every $x\in \partial^*E_1\cap\partial^*E_2$.
\end{definition}
For some examples of local spaces, see \cite{AMP} and the upcoming note \cite{L}.
The assumption $E_1\subset E_2$ can, in fact, be removed as follows. Note that for a set of locally finite perimeter $E$, we have $\Vert D\chi_E\Vert=\Vert D\chi_{X\setminus E}\Vert$, i.e. the two measures are equal \cite[Proposition 4.7]{M}. From this it follows that $\theta_E(x)=\theta_{X\setminus E}(x)$ for $\mathcal H$-almost every $x\in\partial^*E$. Now, if $E_1$ and $E_2$ are arbitrary sets of locally finite perimeter, we know that $E_1\cap E_2$ and $E_1\setminus E_2$ are also sets of locally finite perimeter \cite[Proposition 4.7]{M}. For every $x\in\partial^*E_1\cap\partial^*E_2$ we have either $x\in\partial^*(E_1\cap E_2)$ or $x\in\partial^*(E_1\setminus E_2)$. Thus by the locality condition, we have for $\mathcal H$-almost every $x\in\partial^*E_1\cap\partial^*E_2$ either
\[
\theta_{E_1}(x)=\theta_{E_1\cap E_2}(x)=\theta_{E_2}(x)
\]
or
\[
\theta_{E_1}(x)=\theta_{E_1\setminus E_2}(x)=\theta_{X\setminus E_2}(x)=\theta_{E_2}(x).
\]
Thus we have $\theta_{E_1}=\theta_{E_2}$ for $\mathcal H$-almost every $x\in\partial^*E_1\cap\partial^*E_2$.
In a local space the decomposition \eqref{eq:decomposition} takes a simpler form, as proved in the following lemma.
\begin{lemma}\label{lem:consequence of locality}
If $X$ is a local space, $\Omega$ is a set of locally finite perimeter, $u\in\mathrm{BV}(X)$, and $A\subset \partial^*\Omega$ is a Borel set, then we have
\[
\int_A\int_{u^{\wedge}(x)}^{u^{\vee}(x)}\theta_{\{u>t\}}(x)\,dt\,d\mathcal H(x)=\int_A(u^{\vee}(x)-u^{\wedge}(x))\theta_{\Omega}\,d\mathcal H(x).
\]
\end{lemma}
Note that since $\Omega$ is a set of locally finite perimeter, $A\subset\partial^*\Omega$ is $\sigma$-finite with respect to $\mathcal H$.
\begin{proof}
We have
\begin{align*}
&\int_A\int_{u^{\wedge}(x)}^{u^{\vee}(x)}\theta_{\{u>t\}}(x)\,dt\,d\mathcal H(x)\\
&=\int_A\int_{-\infty}^{\infty}\chi_{\{(u^{\wedge}(x),u^{\vee}(x))\}}(t)\theta_{\{u>t\}}(x)\,dt\,d\mathcal H(x)\\
& =\int_{-\infty}^{\infty}\int_A\chi_{\{(-\infty,t)\}}(u^{\wedge}(x))\chi_{\{(t,\infty)\}}(u^{\vee}(x))\theta_{\{u>t\}}(x)\,d\mathcal H(x)\,dt\\
& =\int_{-\infty}^{\infty}\int_{A\cap\partial^*\{u>t\}}\chi_{\{(-\infty,t)\}}(u^{\wedge}(x))\chi_{\{(t,\infty)\}}(u^{\vee}(x))\theta_{\{u>t\}}(x)\,d\mathcal H(x)\,dt.
\end{align*}
On the third line we used Fubini's theorem. On the fourth line we used the fact that if $u^{\wedge}(x)<t<u^{\vee}(x)$, then $x\in\partial^*\{u>t\}$.
This follows from the definitions of the lower and upper approximate limits.
By the locality condition we see that the right-hand side above equals to
\begin{align*}
&\int_{-\infty}^{\infty}\int_{A\cap\partial^*\{u>t\}}\chi_{\{(-\infty,t)\}}(u^{\wedge}(x))\chi_{\{(t,\infty)\}}(u^{\vee}(x))\theta_{\Omega}(x)\,d\mathcal H(x)\,dt\\
& = \int_{-\infty}^{\infty}\int_{A}\chi_{\{(-\infty,t)\}}(u^{\wedge}(x))\chi_{\{(t,\infty)\}}(u^{\vee}(x))\theta_{\Omega}(x)\,d\mathcal H(x)\,dt\\
&=\int_A\int_{-\infty}^{\infty}\chi_{\{(u^{\wedge}(x),u^{\vee}(x))\}}(t)\,dt\,\theta_{\Omega}(x)\,d\mathcal H(x)\\
&=\int_A(u^{\vee}(x)-u^{\wedge}(x))\theta_{\Omega}(x)\,d\mathcal H(x).
\end{align*}
\end{proof}
Now we prove two propositions concerning boundary traces that are based on \cite[Theorem 3.84]{AmbFP00} and \cite[Theorem 3.86]{AmbFP00}.
\begin{proposition}\label{prop:gluing}
Let $\Omega$ and $\Omega^*$ be open sets such that $\Omega$ and $\Omega^*\setminus \Omega$ satisfy the weak measure density condition, $\overline{\Omega}\subset \Omega^*$, and $\Omega$ is of finite perimeter.
Let $u,v\in \mathrm{BV}(\Omega^*)$, and let $w=u\chi_{\Omega}+v\chi_{\Omega^*\setminus \Omega}$. Then $w\in \mathrm{BV}(\Omega^*)$ if and only if
\begin{equation}\label{eq:trace integrability}
\int_{\partial \Omega}|T_{\Omega}u-T_{\Omega^*\setminus \overline{\Omega}}\,v|\,d\mathcal H<\infty.
\end{equation}
In the above characterization, we implicitly assume that the integral is well-defined --- in particular, this is the case if $\Omega$ and $\Omega^*\setminus\overline{\Omega}$ are also strong $\mathrm{BV}$ extension domains, due to Theorem \ref{thm:boundary traces}.
Furthermore, if $X$ is a local space, we then have
\[
\Vert Dw\Vert(\Omega^*)= \Vert Du\Vert(\Omega)+\Vert Dv\Vert(\Omega^*\setminus\overline{\Omega})+\int_{\partial \Omega}|T_{\Omega}u-T_{\Omega^*\setminus \overline{\Omega}}\,v|\theta_{\Omega}\,d\mathcal H.
\]
\end{proposition}
\begin{proof}
First note that by the weak measure density conditions, we have $\mathcal H(\partial\Omega\setminus\partial^*\Omega)=0$, and thus $\mathcal H(\partial\Omega)<\infty$. This further implies that $\mu(\partial\Omega)=0$ \cite[Lemma 6.1]{KKST12}, and by this and the weak measure density conditions again,
\[
\mathcal H(\partial\Omega\setminus\partial\overline{\Omega})=0
\quad\text{and}\quad T_{\Omega^*\setminus \overline{\Omega}}=T_{\Omega^*\setminus \Omega}.
\]
To prove one direction, let us assume \eqref{eq:trace integrability}. In particular, we assume that $T_{\Omega}u(x)$ and $T_{\Omega^*\setminus \overline{\Omega}}\,v(x)$ exist for $\mathcal H$-almost every $x\in\partial\Omega$. For $h>0$, define the truncated functions
\[
u_h=\min\{h,\max\{u,-h\}\}
\qquad\text{and}\qquad v_h=\min\{h,\max\{v,-h\}\}.
\]
Clearly $u_h,v_h,\chi_{\Omega},\chi_{\Omega^*\setminus \Omega}\in\mathrm{BV}(\Omega^*)\cap L^{\infty}(\Omega^*)$. Then
\[
w_h=u_h\chi_{\Omega}+v_h\chi_{\Omega^*\setminus \Omega}\in\mathrm{BV}(\Omega^*)\cap L^{\infty}(\Omega^*),
\]
see e.g. \cite[Proposition 4.2]{KKST}.
Based on the decomposition of the variation measure given in \eqref{eq:decomposition},
\begin{equation}\label{eq:gluing estimate}
\begin{split}
&\Vert Dw_h\Vert (\Omega^*)\\
&=\Vert Du_h\Vert(\Omega)+\Vert Dv_h\Vert(\Omega^*\setminus\overline{\Omega})+\int_{\partial \Omega}\int_{w_h^{\wedge}(x)}^{w_h^{\vee}(x)}\theta_{\{w_h>t\}}(x)\,dt\,d\mathcal H(x)\\
&\le \Vert Du\Vert(\Omega)+\Vert Dv\Vert(\Omega^*\setminus\overline{\Omega})+\int_{\partial \Omega}c_d|w_h^{\vee}(x)-w_h^{\wedge}(x)|\,d\mathcal H(x).
\end{split}
\end{equation}
By Proposition \ref{prop:prop of trace} $(iv)$, the boundary traces $T_{\Omega}$ of $u$, $u_h$, $w_h$, and $T_{\Omega^*\setminus\overline{\Omega}}$ of $v$, $v_h$, $w_h$, exist $\mathcal H$-almost everywhere on the boundary $\partial \Omega$. For $w_h$ this fact follows from the definition of boundary traces, by which we have that $T_{\Omega}w_h=T_{\Omega}u_h$, and similarly $T_{\Omega^*\setminus\overline{\Omega}}\,w_h=T_{\Omega^*\setminus\overline{\Omega}}\,v_h$. Proposition \ref{prop:prop of trace} $(v)$ now gives
\begin{equation}\label{eq:gluing traces}
\left\{w_h^{\wedge}(x), w_h^{\vee}(x)\right\} =\{T_{\Omega}w_h(x), T_{\Omega^*\setminus\overline{\Omega}}\,w_h(x)\}=\{T_{\Omega}u_h(x), T_{\Omega^*\setminus\overline{\Omega}}\,v_h(x)\}
\end{equation}
for $\mathcal H$-almost every $x\in \partial \Omega$.
Using Proposition \ref{prop:prop of trace} $(iv)$ again, for $\mathcal H$-almost every~$x\in\partial\Omega$ we have
\begin{equation}\label{eq:truncated traces}
\begin{split}
&T_{\Omega}u_h(x)=\min\{h,\max\{T_{\Omega}u(x),-h\}\},\\
&T_{\Omega^*\setminus\overline{\Omega}}\,v_h(x)=\min\{h,\max\{T_{\Omega^*\setminus\overline{\Omega}}\,v(x),-h\}\}.
\end{split}
\end{equation}
By the lower semicontinuity of the total variation as well as \eqref{eq:gluing estimate}, \eqref{eq:gluing traces} and \eqref{eq:truncated traces}, we now get
\begin{align*}
\Vert &Dw\Vert(\Omega^*)\le \liminf_{h\to \infty}\Vert Dw_h\Vert(\Omega^*)\\
&\le \Vert Du\Vert(\Omega)+\Vert Dv\Vert(\Omega^*\setminus\overline{\Omega})+\liminf_{h\to \infty}c_d\int_{\partial \Omega}|T_{\Omega}u_h-T_{\Omega^*\setminus\overline{\Omega}}\,v_h| \,d\mathcal H\\
&= \Vert Du\Vert(\Omega)+\Vert Dv\Vert(\Omega^*\setminus\overline{\Omega})+c_d\int_{\partial \Omega}|T_{\Omega}u-T_{\Omega^*\setminus\overline{\Omega}}\,v|\,d\mathcal H
<\infty.
\end{align*}
Thus $w\in\mathrm{BV}(\Omega^*)$.
To prove the converse, assume that $w\in \mathrm{BV}(\Omega^*)$. Here we can simply again write the decomposition of the variation measure
\[
\infty >\Vert Dw\Vert(\Omega^*)\ge \Vert Du\Vert(\Omega)+\Vert Dv\Vert(\Omega^*\setminus\overline{\Omega})+\alpha\int_{\partial \Omega}|w^{\vee}-w^{\wedge}|\,d\mathcal H,
\]
where $\alpha=\alpha(c_d,c_P)>0$, and just as earlier, note that
\begin{equation}\label{eq:approximate limits and traces}
|w^{\vee}(x)-w^{\wedge}(x)|=|T_{\Omega}w(x)-T_{\Omega^*\setminus \overline{\Omega}}\,w(x)|=|T_{\Omega}u(x)-T_{\Omega^*\setminus \overline{\Omega}}\,v(x)|
\end{equation}
for $\mathcal H$-almost every $x\in \partial \Omega$. This combined with the previous estimate gives the desired result. If $X$ is a local space, we combine the decomposition of the variation measure \eqref{eq:decomposition}, Lemma \ref{lem:consequence of locality}, and \eqref{eq:approximate limits and traces} to obtain the last claim.
\end{proof}
Next we show that if a set $A$ (which could be e.g. the boundary $\partial \Omega$) is in a suitable sense of codimension one, traces of $\mathrm{BV}$ functions are indeed integrable on $A$.
Let us first recall the following fact from the theory of sets of finite perimeter. Given any set of finite perimeter $E\subset X$, for $\mathcal H$-almost every $x\in \partial^*E$ we have
\begin{equation}\label{eq:density of E}
\gamma \le \liminf_{r\to 0} \frac{\mu(E\cap B(x,r))}{\mu(B(x,r))} \le \limsup_{r\to 0} \frac{\mu(E\cap B(x,r))}{\mu(B(x,r))}\le 1-\gamma,
\end{equation}
where $\gamma \in (0,1/2]$ only depends on the doubling constant and the constants in the Poincar\'e inequality \cite[Theorem 5.4]{A2}.
\begin{proposition}\label{prop:codimension one boundary}
Let $\Omega^*\subset X$ be open, let $u\in \mathrm{BV}(\Omega^*)$, and let $A\subset \Omega^*$ be a bounded Borel set that satisfies $\dist(A,X\setminus \Omega^*)>0$ and
\begin{equation}\label{eq:codimension one condition}
\mathcal H(A\cap B(x,r))\le c_A\frac{\mu(B(x,r))}{r}
\end{equation}
for every $x\in A$ and $r\in (0,R]$, where $R\in(0,\dist(A,X\setminus \Omega^*))$ and $c_A>0$ are constants. Then
\begin{equation}\label{eq:summability of traces}
\int_{A}(|u^{\wedge}|+|u^{\vee}|)\,d\mathcal{H}
\le C\Vert u\Vert_{\mathrm{BV}(\Omega^*)},
\end{equation}
where $C=C(c_d,c_P,\lambda,A,R,c_A)$.
\end{proposition}
\end{proposition}
\begin{proof}
We may assume that $u\ge 0$. Let
\[
c=\inf_{x\in A}\mu(B(x,R));
\]
by the doubling property of $\mu$ we have $c=c(A,R,c_d)>0$. First consider a set $E\subset X$ that is of finite perimeter in $\Omega^*$ and satisfies $\mu(E)<\delta$, where $\delta>0$ is a constant that will be determined later. Define
\[
E^{\gamma}=\left\{x\in \Omega^*:\,\liminf_{r\to 0}\frac{\mu(E\cap B(x,r))}{\mu(B(x,r))}\ge \gamma\right\},
\]
where $\gamma=\gamma(c_d,c_P,\lambda)>0$ is the constant from \eqref{eq:density of E}. Pick any $x\in E^{\gamma}\cap A$. We note that
\[
\frac{\mu(E\cap B(x,R))}{\mu(B(x,R))}\le \frac{\mu(E)}{\mu(B(x,R))}< \frac{\delta}{c}.
\]
By choosing $\delta>0$ small enough, we have
\[
\frac{\mu(E\cap B(x,R/(5\lambda)))}{\mu(B(x,R/(5\lambda)))}\le \frac{\gamma}{2}.
\]
Thus we have $\delta=\delta(c_d,\lambda,c,\gamma)$, and consequently $\delta=\delta(c_d,c_P,\lambda,A,R)$. By the definition of $E^{\gamma}$, we can find a number $r\in (0,R/5]$ that satisfies
\[
\frac{\gamma}{2c_d}<\frac{\mu(E\cap B(x,r/\lambda))}{\mu(B(x,r/\lambda))}\le \frac{\gamma}{2}.
\]
This can be done by repeatedly halving the radius $R/5$ until the right-hand side of the above inequality does not hold, and picking the last radius for which it did hold. From the relative isoperimetric inequality \eqref{eq:isop ineq} we conclude that
\begin{equation}\label{eq:estimate for balls}
\frac{\mu(B(x,r/\lambda))}{r/\lambda}\le \frac{2c_d}{\gamma}\frac{\mu(E\cap B(x,r/\lambda))}{r/\lambda}\le \frac{C}{\gamma} P(E,B(x,r)).
\end{equation}
Using the radii chosen this way, we get a covering $\{B(x,r(x))\}_{x\in A\cap E^{\gamma}}$ of the set $A\cap E^{\gamma}$. By the 5-covering lemma, we can select a countable family of disjoint balls $\{B(x_i,r_i)\}_{i=1}^{\infty}$ such that the balls $B(x_i,5r_i)$ cover $A\cap E^{\gamma}$. By using \eqref{eq:codimension one condition} and \eqref{eq:estimate for balls}, we get
\begin{equation}\label{eq:estimate for E gamma}
\begin{split}
\mathcal H(E^{\gamma}\cap A) &\le \sum_{i=1}^{\infty}\mathcal H(E^{\gamma}\cap A\cap B(x_i,5r_i))\\
&\le c_A\sum_{i=1}^{\infty}\frac{\mu(B(x_i,5r_i))}{5r_i}
\le C\sum_{i=1}^{\infty}\frac{\mu(B(x_i,r_i/\lambda))}{r_i/\lambda}\\
&\le C\sum_{i=1}^{\infty}P(E,B(x_i,r_i))
\le CP(E,\Omega^*),
\end{split}
\end{equation}
where $C=C(c_d,c_P,\lambda,c_A)$.
Then we consider the function $u$. Assume that $x\in A\cap S_u$ and $u^{\wedge}(x)+u^{\vee}(x)>t$, with $t>0$. By the definitions of the lower and upper approximate limits, we know that $x\in \partial^{*}\{u>s\}$ for all $s\in (u^{\wedge}(x),u^{\vee}(x))$. By the coarea formula \eqref{eq:coarea}, the sets $\{u>s\}$ are of finite perimeter in $\Omega^*$ for every $s\in T$, where $T$ is a countable dense subset of ${\mathbb R}$. Thus, outside an $\mathcal H$-negligible set, \eqref{eq:density of E} holds for every $x\in\partial^{*}\{u>s\}$ and $s\in T$. Assuming that $x$ lies outside this $\mathcal H$-negligible set, we can find $s\in ((u^{\wedge}(x)+u^{\vee}(x))/2,u^{\vee}(x))\cap T$ and estimate
\[
\liminf_{r\to 0}\frac{\mu(\{u>t/2\}\cap B(x,r))}{\mu(B(x,r))}\ge \liminf_{r\to 0}\frac{\mu(\{u>s\}\cap B(x,r))}{\mu(B(x,r))}\ge \gamma,
\]
which means that $x\in \{u>t/2\}^{\gamma}$. By Chebyshev's inequality we get
\[
\mu(\{u>t/2\})\le \frac{\Vert u\Vert_{L^1(\Omega^*)}}{t/2}<\delta
\]
if $t>t_0$, where $t_0=C(c_d,c_P,\lambda,A,R)\Vert u\Vert_{L^1(\Omega^*)}$ due to the dependencies of $\delta$ given earlier.
By the coarea formula again, $\{u>t/2\}$ is of finite perimeter in $\Omega^*$ for a.e. $t\in{\mathbb R}$, and Cavalieri's principle and \eqref{eq:estimate for E gamma} then imply that
\begin{align*}
\int_{A\cap S_u}&(u^{\wedge}+u^{\vee})\,d\mathcal H =\int_{0}^{\infty}\mathcal H(\{x\in A\cap S_u:u^{\wedge}(x)+u^{\vee}(x)>t\})\,dt\\
&\le\int_{0}^{\infty}\mathcal H(\{u>t/2\}^{\gamma}\cap A)\,dt\\
&\le t_0\mathcal H(A)+\int_{t_0}^{\infty}C(c_d,c_P,\lambda,c_A)P(\{u>t/2\},\Omega^*)\,dt\\
&\le C(c_d,c_P,\lambda,A,R)\Vert u\Vert_{L^1(\Omega^*)}\mathcal H(A)+C(c_d,c_P,\lambda,c_A)\Vert Du\Vert(\Omega^*).
\end{align*}
This gives the estimate for $A\cap S_u$. For $A\setminus S_u$, we simply note that if $x\in A\setminus S_u$ and $u^{\wedge}(x)=u^{\vee}(x)>t$, then the approximate limit of $u$ at $x$ is larger than $t$, which easily gives $x\in\{u>t\}^{\gamma}$, and then we can use Cavalieri's principle as above.
\end{proof}
Finally we get the desired representation for the minimization problem.
\begin{theorem}
Assume that $X$ is a local space, and let $\Omega\Subset \Omega^*$ be bounded open sets such that $\Omega$ and $\Omega^*\setminus\Omega$ satisfy the weak measure density condition, $\Omega$ is a strong $\mathrm{BV}$ extension domain, and $\partial \Omega$ satisfies the assumptions of Proposition \ref{prop:codimension one boundary}. Assume also that $h\in\mathrm{BV}(\Omega^*)$ and that the trace $T_{X\setminus\overline{\Omega}}h(x)$ exists for $\mathcal H$-almost every $x\in\partial\Omega$, which in particular is true if $\Omega^*\setminus\overline{\Omega}$ is also a strong $\mathrm{BV}$ extension domain. Then the minimization problem given in Definition \ref{def:minimization problem}, with boundary values $h$, can be reformulated as the minimization of the functional
\begin{equation}\label{eq:reformulation}
\mathcal F(u,\Omega)+f_{\infty}\int_{\partial \Omega}|T_{\Omega}u-T_{X\setminus \overline{\Omega}}h|\theta_{\Omega}\,d\mathcal H
\end{equation}
over all $u\in \mathrm{BV}(\Omega)$.
\end{theorem}
Note that this formulation contains no reference to $\Omega^*$.
\begin{proof}
First note that due to the conditions of Proposition \ref{prop:codimension one boundary}, we have $\mathcal H(\partial \Omega)<\infty$, and thus $\mu(\partial\Omega)=0$ and $\Omega$ is a set of finite perimeter, see e.g. \cite[Lemma 6.1, Proposition 6.3]{KKST12}.
By the weak measure density conditions,
\[
\mathcal H(\partial\Omega\setminus\partial\overline{\Omega})=0
\quad\text{and}\quad T_{\Omega^*\setminus \overline{\Omega}}=T_{\Omega^*\setminus \Omega}.
\]
Now, for any $u\in \mathrm{BV}_h(\Omega)$, we have $u\in\mathrm{BV}(\Omega^*)$ by definition, and $\mathcal F(u,\Omega^*)<\infty$ by \eqref{eq:basic estimate for functional}. Then
\begin{equation}\label{eq:reformulation calculation}
\begin{split}
\mathcal F&(u,\Omega^*)\\
&= \mathcal F(u,\Omega)+\mathcal F^s(u,\partial \Omega)+\mathcal F(h,\Omega^*\setminus \overline{\Omega})\\
&= \mathcal F(u,\Omega)+f_{\infty}\Vert Du\Vert^s(\partial \Omega)+\mathcal F(h,\Omega^*\setminus \overline{\Omega})\\
&= \mathcal F(u,\Omega)+f_{\infty}\int_{\partial \Omega}|u^{\vee}-u^{\wedge}|\theta_{\Omega}\,d\mathcal H+\mathcal F(h,\Omega^*\setminus \overline{\Omega})\\
&= \mathcal F(u,\Omega)+f_{\infty}\int_{\partial \Omega}|T_{\Omega}u-T_{X\setminus \overline{\Omega}}h|\theta_{\Omega}\,d\mathcal H+\mathcal F(h,\Omega^*\setminus \overline{\Omega}),
\end{split}
\end{equation}
where the first equality follows from the measure property of $\mathcal F(u,\cdot)$ as well as the fact that $\mu(\partial \Omega)=0$, the second equality follows from the integral representation of the functional (see Remark \ref{rem:integral representation}), the third equality follows from the decomposition \eqref{eq:decomposition} and Lemma \ref{lem:consequence of locality}, and the fourth equality follows from Proposition \ref{prop:prop of trace} $(v)$. Now, the term $\mathcal F(h,\Omega^*\setminus \overline{\Omega})$ does not depend on $u$, so in fact we need to minimize \eqref{eq:reformulation}.
Conversely, assume that $u\in \mathrm{BV}(\Omega)$. Then we can extend $u$ to $Eu\in\mathrm{BV}(\Omega^*)$. By Proposition \ref{prop:prop of trace} $(v)$ we have
\[
\{T_{\Omega}h(x),T_{X\setminus\overline{\Omega}}\,h(x)\}=\{h^{\wedge}(x),h^{\vee}(x)\}
\]
for $\mathcal H$-almost every $x\in\partial\Omega$. By the proof of Theorem \ref{thm:boundary traces} we have that $T_{\Omega}Eu(x)$ is the Lebesgue limit of $Eu$ for $\mathcal H$-almost every $x\in\partial\Omega$. By Proposition \ref{prop:codimension one boundary}, we now get
\[
\int_{\partial \Omega}|T_{\Omega}Eu-T_{X\setminus \overline{\Omega}}h|\,d\mathcal H\le C(\Vert Eu\Vert_{\mathrm{BV}(\Omega^*)}+\Vert h\Vert_{\mathrm{BV}(\Omega^*)})<\infty.
\]
By Proposition \ref{prop:gluing} we deduce that $w=(Eu)\chi_{\Omega}+h\chi_{\Omega^*\setminus\Omega}\in\mathrm{BV}(\Omega^*)$, and in fact we have $w=u\chi_{\Omega}+h\chi_{\Omega^*\setminus\Omega}\in\mathrm{BV}_{h}(\Omega)$. This completes the proof.
\end{proof}
\begin{remark}
Note that in the latter part of the above proof we showed that, under the assumptions on the space and on $\Omega$, the spaces $\mathrm{BV}(\Omega)$ and $\mathrm{BV}_h(\Omega)\subset \mathrm{BV}(\Omega^*)$ can be identified.
\end{remark}
\section*{\Large{\underline{Algorithms for the Ising Model}}}
\keywords{Ising model, exact sampling, random cluster model}
\begin{abstract}
The Ising model is often referred to as the most studied model
of statistical physics. It describes the behavior of ferromagnetic
material at different temperatures.
It is an interesting model also for mathematicians,
because although the Boltzmann distribution is continuous in the
temperature parameter, the behavior of the usual single-spin
dynamics used to sample from this measure changes drastically.
Namely, there is a critical temperature, with rapid mixing above
and slow mixing below this value.
Here, we
give a survey of the known results on mixing time of
Glauber dynamics for the Ising model on the square lattice and
present a technique that makes exact sampling of the Ising
model at all temperatures possible
in polynomial time. At high temperatures this is well-known
and
although this seems to be known also in the low temperature case
since Kramers and Wannier's paper \cite{KW} from the 1940s,
we have not found any reference that
describes exact sampling for the Ising model at low temperatures.
\end{abstract}
\maketitle
\thispagestyle{empty}
\section{Introduction}
In this article we summarize the known results about the mixing time
of the heat bath dynamics for the Ising model and combine them
with some graph theoretic results to an algorithm to sample
exactly from the Ising model in polynomial time.
By time (or running time) we always mean the number of steps of
the underlying Markov chain.
The algorithm that will be analyzed (Algorithm 2, given
in Section \ref{sec-efficient}) is at high temperatures simply
the Coupling from the past algorithm (see Propp and Wilson \cite{PW}).
At low temperatures we have to produce a sample at the dual graph,
but this can be traced back to sampling on the initial graph with
constant boundary condition.
The main theorem of this article is stated as follows.
\begin{cit}[Theorem~\ref{th-main}]
Let $G_L$ be the square lattice with $N=L^2$ vertices. Then,
Algorithm 2 outputs an exactly distributed Ising
configuration with respect to $\pi_\beta^{G_L}$ in expected time
smaller than
\begin{itemize}
\item \quad $c_\beta\, N \,(\log N)^2$
\qquad for $\beta\neq\beta_c=\log(1+\sqrt{2})$ and some $c_\beta>0$
\vspace{2mm}
\item \quad $16\, N^C \log N$
\qquad
for $\beta=\beta_c$, where $C$ is given in \eqref{eq-critical-C}.
\end{itemize}
\end{cit}
As a consequence we get that one can estimate the expectation of
arbitrary functions with respect to the Boltzmann distribution
in polynomial time. Namely, if we use the simple Monte Carlo method
to approximate the expectation of a function $f$ on the Ising model,
we need $\epsilon^{-2}\Vert f\Vert^2_{2}$ exact samples from
$\pi_\beta$ (i.e. Algorithm~2) to reach a mean square error of at
most $\epsilon$.
Therefore, if we denote the bounds from Theorem~\ref{th-main} by
$T_\beta$, we need on average
$T_\beta\,\epsilon^{-2}\Vert f-\mathbb{E}_{\pi_\beta}f\Vert^2_{2}$
steps of the Markov chain that will be defined in Section
\ref{sec-ising}.\\
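As a small worked example of this bound (the helper function below is our own illustration, not from the original text): for $\epsilon=0.01$ and $\Vert f\Vert_2=1$ one needs $10^4$ exact samples.
\begin{verbatim}
import math

def mc_samples_needed(f_norm2, eps):
    # the eps^{-2} * ||f||_2^2 sample-size bound quoted above
    return math.ceil((f_norm2 / eps) ** 2)

print(mc_samples_needed(1.0, 0.01))   # -> 10000
\end{verbatim}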
The first polynomial-time algorithm (FPRAS) was shown by Jerrum and
Sinclair \cite{JS}. There they present an algorithm to approximate the
partition function $Z_\beta$ and, as a consequence, approximate
expectations of functions that are given in terms of the partition
function in polynomial time at all temperatures $\beta$.
\vspace{3mm}
\section{The Ising model}\label{sec-ising}
In this section we introduce the two-dimensional Ising model. \\
Let $G=(V,E)$ be a graph with finite vertex set $V\subset\mathbb{Z}^2$ and
edge set $E=\left\{\{u,v\}\in\binom{V}{2}:\, \abs{u-v}=1 \right\}$,
where $\binom{V}{2}$ is the set of all subsets of $V$ with 2
elements. From now on, $N:=\abs{V}$.
We are interested in the square lattice, i.e. $V=\{1,\dots,L\}^2$
for some $L=\sqrt{N}\in\mathbb{N}$, because it is the most widely used
case. We denote the induced graph by $G_L$.\\
The \emph{Ising model} on $G_L$ is now defined as the set of possible
configurations $\O_{\rm IS}=\{-1,1\}^V$, where $\sigma\in\O_{\rm IS}$
is an assignment of -1 or 1 to each vertex in $V$,
together with the probability measure
\vspace{1mm}
\[
\pi_\beta(\sigma) \;:=\; \pi^{G_L}_\beta(\sigma) \;=\; \frac1{Z_\beta}\,
\exp\left\{\beta\,\sum_{u,v:\, u\leftrightarrow v}
\Large{\mathds{1}}\bigl(\sigma(u)=\sigma(v)\bigr)\right\},
\]
where $u\leftrightarrow v$ means $u$ and $v$ are neighbors in $G_L$,
$Z_\beta$ is the
normalization constant and $\beta\ge0$ is called the inverse
temperature. This measure is called the Boltzmann (or Gibbs)
distribution with free boundary condition.\\
Additionally we need the notion of \emph{boundary conditions}, but we
restrict ourselves here to the ``all plus'' and ``all minus'' cases.\\
Let $V^c=\mathbb{Z}^2\setminus V$.
Then we denote the lattice $G_L$ together with the probability
measure
\[
\pi_\beta^{\pm}(\sigma) \;:=\; \pi^{G_L,\pm}_\beta(\sigma)
\;=\; \frac{1}{\widetilde Z_\beta}\; \pi^{G_L}_\beta(\sigma)\cdot
\exp\left\{\beta\,
\sum_{\substack{v\in V, \,u\in V^c:\\ u\leftrightarrow v} }
\Large{\mathds{1}}\Bigl(\sigma(v)=\pm1\Bigr)\right\}
\]
by the Ising model with plus/minus boundary condition, respectively.
One can imagine that this corresponds to the Ising model on $G_L$ with
a strip of fixed spins around it, so every vertex in $G_L$ has the same
number of neighbors.\\
In 1944 Onsager \cite{Onsager} proved that there is a phase transition
at $\beta=\beta_c:=\ln(1+\sqrt{2})$ in the case where $V=\mathbb{Z}^2$ and
we will see that this value is also important for finite
lattices. Namely, the dynamics that will be defined below is
rapidly mixing if and only if $\beta\le\beta_c$.\\
We will use the so called \emph{heat bath dynamics}.
These dynamics define an irreducible, aperiodic and reversible
Markov chain $X^\beta=(X_i^\beta)_{i\in\mathbb{N}}$ with
stationary distribution $\pi_\beta$ by the transition matrix
\vspace{1mm}
\[
P(\sigma,\sigma^{v,\xi}) \;=\; \frac{1}{N}\;
\left(1+\frac{\pi_\beta(\sigma)}{\pi_\beta(\sigma^{v,\xi})}\right)^{-1},
\qquad \sigma\in\O_{\rm IS},\; v\in V,
\vspace{1mm}
\]
where $\sigma^{v,\xi}$ with $\xi\in\{-1,1\}$ is defined by
$\sigma^{v,\xi}(v)=\xi$ and $\sigma^{v,\xi}(u)=\sigma(u)$, $u\neq v$.
The interpretation of this algorithm is very simple. In each step
choose a random $v\in V$ and assign a new value to $v$ according
to $\pi_\beta$ conditioned on all the neighbors of $v$.\\
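To make this concrete, a minimal Python sketch of a single heat bath update might look as follows (this illustration is ours, not part of the original presentation; the graph is encoded as an adjacency dictionary, and the randomness $(v,u)$ can be passed in explicitly so that coupled chains can reuse the same numbers):
\begin{verbatim}
import math, random

def heat_bath_step(sigma, nbrs, beta, v=None, u=None):
    # sigma: dict vertex -> spin in {-1, +1};  nbrs: dict vertex -> neighbor list
    # v, u: optional pre-drawn randomness (uniform vertex, uniform number in [0,1])
    if v is None:
        v = random.choice(list(sigma))
    if u is None:
        u = random.random()
    k_plus = sum(1 for w in nbrs[v] if sigma[w] == +1)
    k_minus = len(nbrs[v]) - k_plus
    # conditional Boltzmann probability of spin +1 at v given its neighbors
    p_plus = 1.0 / (1.0 + math.exp(beta * (k_minus - k_plus)))
    sigma[v] = +1 if u < p_plus else -1
    return sigma
\end{verbatim}
Updating two chains with the same pair $(v,u)$ realizes the monotone coupling discussed in Section \ref{sec-sampling}: if $\sigma\le\tau$ coordinatewise, then $k_+$ and hence $p_+$ is at most as large for $\sigma$ as for $\tau$, and the order is preserved.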
Note that the results of this article hold in general for all
Glauber dynamics as defined in \cite{Glauber} that admit a monotone
coupling (see Section \ref{sec-sampling}). For a general introduction
to Markov chains see e.g. \cite{LPW}, or \cite{M} in the context of
spin systems.
In the sequel we want to estimate how fast such a Markov chain
converges to its stationary distribution. Therefore we first introduce
the \emph{total variation distance} to measure the distance between
two probability measures $\nu$ and $\pi$, which is defined by
\[
\tvd{\nu-\pi} \;=\; \frac12\,\sum_{\sigma\in\O_{\rm IS}}\,
\abs{\nu(\sigma)-\pi(\sigma)}.
\]
Now we can define the \emph{mixing time} of the Markov chain with
transition matrix $P$ and stationary distribution $\pi_\beta$ by
\vspace{1mm}
\[
\tau_\beta \;=\; \min\left\{n: \max_{\sigma\in\O_{\rm IS}}
\tvd{P^n(\sigma,\cdot)-\pi_\beta(\cdot)}\,\le\,\frac1{2\rm e}\right\}.
\vspace{1mm}
\]
This is the number of steps the Markov chain needs to get close to
its stationary distribution. In fact, one can bound the spectral gap
of the transition matrix $P$ in either direction in terms of the
mixing time, see e.g. \cite[Th. 12.3 \& 12.4]{LPW}, so one can bound
the error of a MCMC algorithm to integrate functions over $\O_{\rm IS}$,
as one can read in \cite{Rud}.
Furthermore, if the Markov chain is rapidly mixing
(i.e. the mixing time is at most polylogarithmic in the
size of the state space $\O_{\rm IS}$) we get that the problem
of integration (with an unnormalized density) on the Ising model
is \emph{tractable}, see also \cite{NW2}. Unfortunately, there is no
Markov chain that is proven to be rapidly mixing at all temperatures.\\
However, in this article we are interested in sampling exactly from
the stationary distribution, but first we present the known
mixing time results for the Glauber dynamics for the Ising model.
For proofs or further details we refer to the particular articles
or the survey of Martinelli \cite{M}.
Of course, we can only give a small selection of references,
because there are many papers leading to the results given below.
\vspace{2mm}
\begin{theorem}{\cite{MO1}}\label{th-mix-high}
Let $\beta<\beta_c$. Then there exists a constant $c_\beta>0$ such
that the mixing time of the Glauber dynamics
for the Ising model with arbitrary boundary condition on
$G_L$ satisfies
\[
\tau_\beta \;\le\; c_\beta\;N\log N.
\]
\end{theorem}
\begin{theorem}{\cite{CGMS}}\label{th-mix-low}
Let $\beta>\beta_c$. Then there exists a constant $c_\beta>0$ such
that the mixing time of the Glauber dynamics for the Ising model
on $G_L$ satisfies
\[
\tau_\beta \;\ge\; e^{c_\beta N}.
\]
\end{theorem}
\vspace{2mm}
The results above can be obtained by the observation that
some spatial mixing property of the measure $\pi_\beta$ is
equivalent to the mixing in time of the Glauber dynamics.
For details on this interesting fact, see \cite{DSVW}.\\
The constant $c_\beta$ of Theorem \ref{th-mix-high} is
widely believed to be of order
$\frac1{\beta_c-\beta}$. To determine the mixing time in the
case $\beta=\beta_c$ was a challenging problem for a long time.
It was solved by Lubetzky and Sly in their recent paper \cite{LS}.
\begin{theorem}{\cite{LS}}\label{th-mix-critical}
There exists a constant $C>0$ such that the mixing time of the
Glauber dynamics for the Ising model
on $G_L$ at the critical temperature satisfies
\[
\tau_\beta \;\le\; 4\,N^C.
\]
\end{theorem}
\begin{remark}
We give here only a brief description of the constant $C$, which
can be given explicitly. For more details see \cite[p.19]{LS}.\\
However, numerical experiments on the ``true'' exponent suggest that
$C\approx3.08$ (see e.g. \cite{WHS}, \cite{NB} and note the explanation
below). \\
The constant $C$ in Theorem \ref{th-mix-critical} is given by
\begin{equation}\label{eq-critical-C}
C\;=\;2+\log_{3/2}\left(\frac{2}{1-p^+}\right).
\end{equation}
Here, $p^+$ is the limiting vertical crossing probability in
the random cluster model on a fully-wired rectangle, where
the width of the lattice is 3 times its height.
The $C$, as given here, differs from the one given in \cite{LS}
by eliminating a factor of 2 in front of the $\log$ term and by
the additional 2. The reason is that we state their result in
terms of $N$ and not in terms of the side-length $L$ of the lattice
(therefore without the factor 2) and that we are interested in the
discrete time single-spin algorithms. Therefore we get an additional
factor $N$ in their spectral gap result (\cite[Th. 1]{LS}) and a
factor $N$ from the relation (see e.g. \cite{LPW})
\[
\tau_\beta\;\le\;\log\left(\frac{e}{\min_{\sigma}\pi_{\beta_c}(\sigma)}\right)
\,\text{\rm\bf gap}(X^\beta)^{-1}
\;\le\; 4\,N\,\text{\rm\bf gap}(X^\beta)^{-1},
\]
because $\min_{\sigma}\pi_{\beta_c}(\sigma)\ge \exp(-3N)$.
\end{remark}
The results of this section show that the Glauber dynamics is
rapidly mixing
for $\beta\le\beta_c$, but very slowly mixing for larger $\beta$.
In Section \ref{sec-rc} we will see how to avoid this problem.
\section{Exact sampling}\label{sec-sampling}
In this section we briefly describe the so called
\emph{Coupling from the past algorithm} (CFTP) to sample exactly
from the stationary distribution of a Markov chain.\\
This algorithm works under weak assumptions on the Markov
chain for every finite state space and every distribution, but to
guarantee that the algorithm is efficient we need some monotonicity
property of the model and that the chain is rapidly mixing.
For a detailed description of CFTP and the proof of
correctness see \cite{PW}.\\
We restrict ourselves to the heat bath dynamics for the Ising model.
First note that the heat bath dynamics, as defined above, admits
a monotone coupling, that is, given two realizations of the heat bath
chain $X=(X_t)_{t\in\mathbb{N}}$ and $Y=(Y_t)_{t\in\mathbb{N}}$, there exists a
coupling $(X,Y)$ (i.e. using the same random numbers) such that
\[
X_t \;\le\; Y_t \;\;\Longrightarrow\;\; X_{t+1} \;\le\; Y_{t+1}
\qquad \text{ for all } t\in\mathbb{N},
\]
where $\le$ means smaller or equal at each vertex.\\
Additionally we know that $-\bf{1}\le\sigma\le\bf{1}$ for all
$\sigma\in\O_{\rm IS}$, where ${-\bf{1}}=(-1)^V$ and ${\bf{1}}=(1)^V$.
Therefore if we set $X_0=-{\bf1}$ and $Y_0={\bf1}$ we know that
$X_0\le\sigma\le Y_0$ for all $\sigma$ and so
$X_t\le Z_t\le Y_t$ for the realization $Z=(Z_t)_{t\in\mathbb{N}}$
with $Z_0=\sigma$. Since this holds for all $\sigma$, one can choose
$Z_0\sim\pi_\beta$ and we get that whenever $X_t$ and $Y_t$ coalesce,
they also coalesce with $Z_t$ which has the right distribution.\\
Having presented the idea of the algorithm, we now state it
in detail. Note that the algorithm is called Coupling from the past,
because we run the chains from the past to the present.
The algorithm $\text{CFTP}(G,\beta)$ to sample from the distribution
$\pi^G_\beta$ works as described in Algorithm 1.
\begin{algorithm}
\caption{\quad Coupling from the past}
\begin{algorithmic}[1]
\Statex\Call{\bf Input:}{} The graph $G=(V,E)$ and the value of $\beta$
\Statex\Call{\bf Output:}{} An Ising configuration $\sigma\sim\pi_\beta$
\vspace{2mm}
\Procedure{CFTP}{$G,\beta$}
\vspace{2mm}
\State Set $t = 0$
\State Set $X_0=-{\bf1}$ and $Y_0={\bf1}$
\vspace{2mm}
\While{$X_0 \neq Y_0$}
\State $t = t+1$
\vspace{1mm}
\State Generate random numbers $U_{-2^t+1},\dots,U_{-2^{t-1}}$ that are
\Statex \qquad\qquad sufficient to run the Markov chain.
\Statex \qquad\qquad\quad (e.g. $U_i\sim \text{Uniform }\{V\times[0,1]\}$)
\vspace{1mm}
\State Set $X_{-2^t+1}=-{\bf1}$ and $Y_{-2^t+1}={\bf1}$ and run the chains
until
\Statex \qquad\qquad time 0 by using only the random numbers
$U_{-2^t+1},\dots,U_{-1}$
\EndWhile
\vspace{1mm}
\State \textbf{return} $\sigma = X_0$
\vspace{1mm}
\EndProcedure
\end{algorithmic}
\end{algorithm}
We denote the algorithm by $\text{CFTP}^\pm(G,\beta)$ if we sample
with respect to $\pi^\pm_\beta$, i.e. with plus/minus boundary
condition.
See \cite{H} for examples that show that it is necessary to go
from the past into the future and that we have to reuse the random
numbers.\\
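For illustration, Algorithm 1 can be implemented in a few lines of Python on top of the heat bath update (our own sketch; the starting time is doubled in each round, essentially the $-2^t+1$ schedule of Algorithm 1, and the stored pairs \texttt{U[t]} are exactly the reused random numbers):
\begin{verbatim}
import math, random

def cftp(vertices, nbrs, beta, seed=None):
    # vertices: list of vertices;  nbrs: dict vertex -> neighbor list
    rng = random.Random(seed)
    U = {}                               # reused randomness U_t for t = -1, -2, ...
    T = 1
    while True:
        for t in range(-T, 0):           # draw only the newly needed U_t
            if t not in U:
                U[t] = (rng.choice(vertices), rng.random())
        lo = {v: -1 for v in vertices}   # chain started from all minus
        hi = {v: +1 for v in vertices}   # chain started from all plus
        for t in range(-T, 0):           # run both chains from time -T to time 0
            v, u = U[t]
            for sigma in (lo, hi):
                k_plus = sum(1 for w in nbrs[v] if sigma[w] == +1)
                p_plus = 1.0 / (1.0 + math.exp(beta * (len(nbrs[v]) - 2 * k_plus)))
                sigma[v] = +1 if u < p_plus else -1
        if lo == hi:
            return lo                    # coalesced: an exact sample from pi_beta
        T *= 2                           # restart twice as far in the past
\end{verbatim}
For CFTP$^\pm$ one would in addition count the fixed boundary spins among the neighbors of the boundary vertices.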
Now we state the connection between the expected running time of
the CFTP algorithm and the mixing time of the Markov chain.
\vspace*{5mm}
\begin{proposition}{\cite{PW}}\label{prop-cftp}
Let $T_\beta$ be the expected running time of CFTP$(G,\beta)$ from
Algorithm 1 with $G=(V,E)$ and $\abs{V}=N$. Then
\[
T_\beta \;\le\; 4\, \tau_\beta \,\log N,
\]
where $\tau_\beta$ is the mixing time of the underlying Markov chain.
\end{proposition}
We see that exact sampling from the Boltzmann distribution is efficient
whenever the Markov chain is rapidly mixing.
By the results of Section \ref{sec-ising} we know that this is the case
for $\beta\le\beta_c$. In the case $\beta>\beta_c$ we need a different
technique to generate exact samples. For this we essentially need
the so-called random cluster model, as we will see in the next section.
\section{The random cluster model}\label{sec-rc}
The \emph{random cluster model} (also known as the FK-model) was
introduced by Fortuin and Kasteleyn in \cite{FK} to study lattice
spin systems with a graph structure. It is defined on a graph
$G=(V,E)$ by its state space
$\O_{\rm RC}=\{\omega: \omega\subseteq E\}$ and the RC measure
\[
\mu_p(\omega) \;=\;
\frac1Z\,p^{\abs{\omega}}\,(1-p)^{\abs{E}-\abs{\omega}}\,2^{C(\omega)},
\]
where $p\in(0,1)$, $Z$ is the normalization constant and
$C(\omega)$ is the number of connected components in the graph
$(V,\omega)$.
For a detailed introduction and related topics see the book
\cite{G1}.\\
There is a tight connection between the Ising model and the
random cluster model. Namely, if we set $p=1-e^{-\beta}$,
we can translate an Ising configuration $\sigma\sim\pi_\beta$ to
a random cluster state $\omega\sim\mu_p$ and vice versa.
To get an Ising configuration $\sigma\in\O_{\rm IS}$ from
$\omega\in\O_{\rm RC}$ assign
independent and uniformly random spins to each connected component
of $\omega$. For the reverse direction, include each edge $e=\{e_1,e_2\}\in E$
with $\sigma(e_1)=\sigma(e_2)$ in $\omega$ independently with probability $p$.
For details see \cite{ES}.\\
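Both directions of this translation are short enough to state in code; the following sketch is our own, with the connected components of $(V,\omega)$ tracked by a union-find structure:
\begin{verbatim}
import random

def rc_to_ising(vertices, omega, rng=random):
    # assign one uniform +/-1 spin to each connected component of (V, omega)
    parent = {v: v for v in vertices}
    def find(v):                         # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in omega:
        parent[find(a)] = find(b)
    spins = {}
    for v in vertices:
        root = find(v)
        if root not in spins:
            spins[root] = rng.choice((-1, +1))
    return {v: spins[find(v)] for v in vertices}

def ising_to_rc(edges, sigma, p, rng=random):
    # keep each agreeing edge independently with probability p = 1 - e^{-beta}
    return {e for e in edges if sigma[e[0]] == sigma[e[1]] and rng.random() < p}
\end{verbatim}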
Therefore sampling an Ising configuration according to $\pi_\beta$ is
equivalent to sampling a RC state from $\mu_p$ whenever both models are
defined on the same graph $G$ and $p=1-e^{-\beta}$.\\
Another important concept in connection with the RC model is the
duality of graphs (see e.g. \cite{G2}).
Let $G=(V,E)$ be a finite, planar graph, i.e.
without intersecting edges if we draw it in the plane (like our $G_L$).
The \emph{dual graph} $G^*=(V^*,E^*)$ of $G$ is constructed as follows.
Put a vertex in each face (including the infinite outer one) of the
graph and connect two vertices by an edge if and only if the corresponding
faces of $G$ share a boundary edge. It is clear that the number of
vertices can differ in the dual graph, but we have the same number of
edges.\\
Additionally we define a \emph{dual configuration}
$\omega^*\subseteq E^*$ in $G^*$ to a RC state $\omega\subseteq E$ in
$G$ by
\[
e\in\omega \;\Longleftrightarrow\; e^*\notin\omega^*,
\]
where $e^*$ is the edge in $E^*$ that ``crosses'' $e$. (By the
construction, this edge is unique.)
See Figure \ref{fig-dual}
for the graph $G_L$ with $L=3$ and its dual graph $G_L^*$
together with 2 corresponding RC states.
\begin{figure}[ht]
\scalebox{1}{\input{dual-graph}\hspace*{-2cm}\input{dual-conf}}
\caption[Dual graph and dual RC state]{Left: The graph $G_3$ (solid)
and its dual (dashed). Right: A RC state on $G_3$ (solid) and its
dual configuration (dashed)}
\label{fig-dual}
\end{figure}
Now we can state the following theorem about the relation of the
distribution of a RC state and its dual, see \cite{G2}.
\begin{proposition}{\cite[p.~164]{G2}}\label{prop-RC-dual}
Let $G=(V,E)$ be a finite, planar graph and $\mu_p$ be the random
cluster measure on $G$. Furthermore let $G^*=(V^*,E^*)$ be the dual
graph of $G$ and $\mu^*_{p^*}$ be the random cluster measure on $G^*$.\\
Then
\[
\omega\sim\mu_p \;\;\Longleftrightarrow\;\; \omega^*\sim\mu^*_{p^*},
\]
where
\begin{equation}
p^* \;=\; 1 \,-\, \frac{p}{2-p}.
\label{eq-dual-p}
\end{equation}
\end{proposition}
Obviously, $(p^*)^*=p$.
By Proposition \ref{prop-RC-dual} one can see that sampling
from $\mu_p$ and sampling from $\mu^*_{p^*}$ is equivalent.
It is straightforward to get the following Proposition.
\begin{proposition}\label{prop-Ising-dual}
Sampling from the Boltzmann distribution $\pi^G_\beta$ is
equivalent to sampling from the Boltzmann distribution
$\pi^{G^*}_{\beta^*}$, where
\begin{equation}\label{eq-dual-beta}
\beta^* \;=\; \log\left(\coth\,\frac{\beta}{2}\right).
\end{equation}
Additionally,
\[
\beta \,>\, \beta_c \;\;\Longleftrightarrow\;\; \beta^* \,<\, \beta_c.
\]
\end{proposition}
\begin{proof}
The equivalence was shown by the above procedure, i.e. if we want to
sample from $\pi_\beta^G$, we can sample from $\pi^{G^*}_{\beta^*}$,
generate a RC state with respect to $\mu^*_{p^*}$, go to the dual
lattice with measure $\mu_p$ and finally generate a state
according to $\pi^G_\beta$.
Since $p^{(*)}=1-e^{-\beta^{(*)}}$, the formula for $\beta^*$ comes
from
\[
\beta^* \;=\; -\log(1-p^*)
\;\overset{\eqref{eq-dual-p}}{=}\; \log\left(\frac{2-p}{p}\right)
\;=\; \log\left(\coth\,\frac{\beta}{2}\right).
\]
This proves the statement.
\end{proof}
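The relations \eqref{eq-dual-p} and \eqref{eq-dual-beta} are also easy to verify numerically; the following lines (a sanity check of ours, not part of the proof) confirm in addition that $\beta_c=\log(1+\sqrt{2})$ is the self-dual point:
\begin{verbatim}
import math

def dual_p(p):                        # p* from eq. (eq-dual-p)
    return 1.0 - p / (2.0 - p)

def dual_beta(beta):                  # beta* = log(coth(beta/2)), eq. (eq-dual-beta)
    return math.log(1.0 / math.tanh(beta / 2.0))

beta, beta_c = 1.3, math.log(1.0 + math.sqrt(2.0))
p = 1.0 - math.exp(-beta)
assert abs(dual_beta(beta) + math.log(1.0 - dual_p(p))) < 1e-12  # two routes agree
assert abs(dual_beta(beta_c) - beta_c) < 1e-12                   # beta_c is self-dual
assert dual_beta(beta) < beta_c < beta        # beta > beta_c  =>  beta* < beta_c
\end{verbatim}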
\section{Efficient sampling for the Ising model}\label{sec-efficient}
In this section we show an efficient algorithm to sample exactly
from the Boltzmann distribution.
But, before we prove that it is efficient, we state our
sampling algorithm.\\
Therefore we first have to explain what the
graph $G_L^*$ looks like. It is easy to check
(see Figure \ref{fig-dual}) that
$G_L^*=(V_L^*,E_L^*)$ is also a square lattice with $(L-1)^2$
vertices and an additional auxiliary vertex $v^*$, which is connected
to every vertex on the boundary of $G_{L-1}$. We denote the operation of
adding a vertex to a graph and connecting it to all boundary vertices by
$\cup_b$. So $G_L^*=G_{L-1}\cup_b v^*$.
\vspace*{1mm}
\begin{algorithm}[H]
\caption{\quad Sampling from the Ising model on the square lattice}
\begin{algorithmic}[1]
\Statex\Call{\bf Input:}{} An integer $L$ and the value of $\beta$
\Statex\Call{\bf Output:}{} An Ising configuration $\sigma\sim\pi^{G_L}_\beta$
\vspace{2mm}
\If{$\beta\le\beta_c$}
\vspace{1mm}
\State $\sigma=\text{CFTP}(G_L,\beta)$
\vspace{1mm}
\Else
\vspace{1mm}
\State $\widetilde\sigma=\text{CFTP}^+(G_{L-1},\beta^*)$, where
$\beta^*$ is given in \eqref{eq-dual-beta}
\State Define an Ising configuration $\sigma^*$ on
$G_L^*=G_{L-1}\cup_b v^*$ by
\Statex\qquad\quad $\sigma^*(v)=\widetilde\sigma(v)$ on
$V(G_{L-1})$ and $\sigma^*(v^*)=1$.
\State Generate a RC state $\omega^*$ from $\sigma^*$
\State Take the dual RC state $\omega=(\omega^*)^*$
\State Generate an Ising configuration $\sigma$ from $\omega$
\vspace{1mm}
\EndIf
\vspace{1mm}
\State \textbf{return} $\sigma$
\end{algorithmic}
\end{algorithm}
\begin{theorem}\label{th-main}
Let $G_L$ be the square lattice with $N=L^2$ vertices. Then, the
algorithm from above outputs an exactly distributed Ising
configuration with respect to $\pi_\beta^{G_L}$ in expected time
smaller than
\begin{itemize}
\item \quad $c_\beta\, N \,(\log N)^2$
\qquad for $\beta\neq\beta_c=\log(1+\sqrt{2})$ and some $c_\beta>0$
\vspace{2mm}
\item \quad $16\, N^C \log N$
\qquad
for $\beta=\beta_c$, where $C$ is given in \eqref{eq-critical-C}.
\end{itemize}
\end{theorem}
\vspace{3mm}
\begin{proof}
The running time of the algorithm follows directly from Theorems
\ref{th-mix-high} and \ref{th-mix-critical} and Prop.~\ref{prop-cftp}.
Therefore we only have to prove that the output $\sigma$ of the
algorithm has the right distribution. In the case of
$\beta\le\beta_c$ this is obvious. For $\beta>\beta_c$ we know from
Proposition \ref{prop-Ising-dual} that $\sigma\sim\pi_\beta^{G_L}$,
if the dual configuration $\sigma^*$ on $G_L^*$
(line 5 of Algorithm 2) is distributed according to
$\pi_{\beta^*}:=\pi^{G_L^*}_{\beta^*}$.
But by the construction of lines 4 and 5 of
Algorithm 2, this is true. For this, note that
$\pi_\beta(\eta)=\pi_\beta(-\eta)$ for all $\eta\in\Omega_{\rm IS}$.
We get that for each vertex $v\in V$ (especially for $v^*$)
\[\begin{split}
\pi_\beta(\eta) \;&=\; \pi_\beta\bigl(\eta \cap \{\sigma\!:\sigma(v)=1\}\bigr)
\;+\; \pi_\beta\bigl(\eta \cap \{\sigma\!:\sigma(v)=-1\}\bigr) \\
&=\; \pi_\beta\bigl(\{\sigma\!:\sigma(v)=1\}\bigr)\;
\pi_\beta\bigl(\eta \;\rule[-1.5mm]{0.2mm}{5mm}\;
\{\sigma\!:\sigma(v)=1\}\bigr) \\
& \qquad\;+\; \pi_\beta\bigl(\{\sigma\!:\sigma(v)=-1\}\bigr)\;
\pi_\beta\bigl(\eta \;\rule[-1.5mm]{0.2mm}{5mm}\;
\{\sigma\!:\sigma(v)=-1\}\bigr) \\
&=\; \frac12\,\Bigl[\pi_\beta\bigl(\eta \;\rule[-1.5mm]{0.2mm}{5mm}\;
\{\sigma\!:\sigma(v)=1\}\bigr) \;+\;
\pi_\beta\bigl(\eta \;\rule[-1.5mm]{0.2mm}{5mm}\;
\{\sigma\!:\sigma(v)=-1\}\bigr)\Bigr]\\
&=\; \frac12\,\pi_\beta\bigl(\{\eta,-\eta\} \;
\rule[-1.5mm]{0.2mm}{5mm}\; \{\sigma\!:\sigma(v)=1\}\bigr).
\end{split}\]
The last equality comes from the fact that
\[
\pi_\beta\bigl(\eta \;\rule[-1.5mm]{0.2mm}{5mm}\;
\{\sigma\!:\sigma(v)=-1\}\bigr)
\;=\; \pi_\beta\bigl(-\eta \;\rule[-1.5mm]{0.2mm}{5mm}\;
\{\sigma\!:\sigma(v)=1\}\bigr).
\]
Therefore we can sample from $\pi_\beta$ on $G_L^*$ by sampling $\eta$
from the conditional measure
$\pi_\beta\bigl(\cdot \;\rule[-1.5mm]{0.2mm}{5mm}\;
\{\sigma\!:\sigma(v^*)=1\}\bigr)$ and then choose with probability $\frac12$
either $\eta$ or $-\eta$.
If we now use that $G_L^*=G_{L-1}\cup_b v^*$ one can see that
sampling on $G_L^*$ with respect to
$\pi_\beta\bigl(\cdot \;\rule[-1.5mm]{0.2mm}{5mm}\;
\{\sigma\!:\sigma(v^*)=1\}\bigr)$ is the same as sampling $\widetilde\sigma$ from
$\pi^{G_{L-1},+}_\beta$ and setting
\[
\sigma(v) \;=\; \begin{cases}
\widetilde\sigma(v), & v\in V(G_{L-1}) \\
1, & v=v^*.
\end{cases}\]
Note that we omit the step of choosing $\sigma$ or
$-\sigma$ with probability $\frac12$, because the RC state
that will be generated would be the same.\\
This completes the proof.\\
\end{proof}
\begin{remark}
Note that the same technique works also for the $q$-state Potts model.
This model consists of the state space $\O_{\rm P}=\{1,\dots,q\}^V$
and the same measure $\pi_\beta$.
In this case we consider the random cluster measure
\[
\mu_{p,q}(\omega) \;=\;
\frac1Z\,p^{\abs{\omega}}\,(1-p)^{\abs{E}-\abs{\omega}}\,q^{C(\omega)}
\]
and the connection of the models is again given by $p=1-e^{-\beta}$.\\
A recent result of Beffara and Duminil-Copin \cite{BDC} shows that
the self-dual point of the RC model corresponds to the critical
temperature of the Potts model $\beta_c(q)=\ln(1+\sqrt{q})$ in the
same way as in the case $q=2$ (i.e. the Ising case).
Therefore, a sampling algorithm for the
Potts model above (and at) the critical temperature is enough to
sample at all temperatures.
\end{remark}
\linespread{1}
\bibliographystyle{amsalpha}
\section{Introduction}
Most biologically relevant molecules cannot be superimposed on their mirror image, i.e. they are chiral.\cite{Gellman2010} This ubiquitous feature has important consequences for biological activity of chiral molecules. Since interactions between these molecules are chirally specific, different enantiomers of drug molecules have vastly different bioactivities. To exploit this feature it is necessary to selectively synthesize one enantiomer or to separate it from a racemic mixture.\cite{Sholl2009} This, in turn requires again chirally selective interactions or chirally selective catalysts.
The standard approach to produce chiral molecules is by homogeneous catalysis,\cite{Blaser2005} which requires additional purification steps after synthesis.\cite{Sholl2009} To avoid this additional complication it would be desirable to perform asymmetric synthesis, i.e. enantiopure synthesis directly on a surface. This necessitates a chiral surface which can be achieved by modifying it with chiral molecules.\cite{Sholl2009,Lorenzo2000,Fasel2006,Kuhnle2002,Mallat2007} Alternatively, one can hope to exploit the intrinsic chirality of high Miller index metal surfaces.\cite{Ahmadi1999,Sholl1998,Clegg2011} Since these surfaces are readily amenable to experimental surface science techniques\cite{Eralp2011,Bombis2010,Zhao2004,Greber2006} as well as Density Functional Theory (DFT) calculations\cite{Han2012,Bhatia2005,Bhatia2008,Sljivancanin2002,Greber2006} due to their well defined structure one can hope to gain a fundamental understanding of the principles underlying chiral selective properties through their study. Adsorption energetics can be studied experimentally via Temperature Programmed Desorption.\cite{Horvath2004,Huang2011,Huang2008,Cheong2011}
An interesting feature here is that chirality can also arise from achiral systems due to reduction of dimensionality. Racemic mixtures of molecules can form homochiral domains on surfaces.\cite{Bombis2010,Fasel2006,Vidal2005} Such an effect might also be implicated in the emergence of homochirality observed in biological molecules.\cite{Sowerby1998} This also highlights the qualitative differences resulting from molecule-molecule interactions in solution and collective behavior observed in molecular monolayers, an effect that might also be exploited for chirally selective catalysts.
Here we study one of the basic building blocks of green chemistry, lactic acid, on intrinsically chiral Pt surfaces.\cite{Gallezot2012,Poliakoff2002,Holm2010} Lactic acid is already produced on an industrial scale for use in food and beverages, or pharmaceuticals and can be made from renewable sources.\cite{Gallezot2012,Ragauskas2006} One especially important growth market is in its polymerized form, polylactic acid (PLA).\cite{MadhavanNampoothiri2010,Rasal2010,Katiyar2010} Here the thermochemical properties of the polymer depend also on the chirality of the monomers it was made from, which points to the importance of enantioselective control in this system.\cite{MadhavanNampoothiri2010,Platel2008} PLA of limited molecular weight can be obtained by condensation from lactic acid. To economically get to the high molecular weight polymer needed in practice this low molecular weight polymer can be broken down into lactide, the condensed dimer. This product can be purified and subsequently used as a precursor to obtain high quality high molecular weight polymer.\cite{MadhavanNampoothiri2010,Platel2008}
Specifically we focus here on the adsorption of the two enantiomers of lactic acid on Pt(321) and Pt(643) surfaces. We also study the adsorption of the molecule on the Pt(111) surface that can be considered as a model system for the terraces of the two chiral surfaces. We find that lactic acid binds to Pt surfaces predominantly through two adsorption sites: the oxygens of the hydroxyl and the carboxylic groups. On the Pt(321) surface these two binding sites of the molecule can each be bound to kink atoms, which also turns out to be the most stable adsorption configuration. On the Pt(643) surface one of them needs to bind to either a ridge atom or to be on the terrace. The most stable adsorption geometry in this case depends on the chirality of the surface and the molecule. In reality, a chiral Pt surface might undergo thermal roughening under conservation of the global chirality.\cite{Power2002,Giesen2004,Zhao2004,Baber2008} The surfaces studied here can be considered as a model system for this surface as the three surfaces offer one, two or no kink sites to bind the molecule to.
We find a large increase in binding energy when comparing adsorption on the Pt(111) and Pt(643) surfaces and a smaller increase for Pt(643) relative to the Pt(321) surface. Analysis of the contributions of the carboxyl and hydroxyl groups to the overall binding energy on the chiral surfaces shows that the carboxylic group contributes most to the binding energy. Therefore, the additional binding energy of the hydroxyl group on the second kink site on Pt(321) is partially compensated by strain on the molecule and the carboxylic bond to the kink site. Comparing the binding energies of different molecular chiralities we find a small chiral selectivity of 23 meV and 17 meV for the Pt(321) and Pt(643) surfaces, respectively. This is comparable to other results of chiral molecules on intrinsically chiral metal surfaces.\cite{Bhatia2008,Bhatia2005,Han2012,Horvath2004,Huang2011,Huang2008} However, because L-lactic acid is more stable on Pt(321)$^S$ and less stable on Pt(643)$^S$, the overall chiral selectivity of a roughened surface is predicted to be very small. To facilitate comparison to Field Ion Microscopy (FIM) imagery we also calculate the work function changes induced upon adsorption of the molecule. This is especially interesting for the Pt(643) surface since the most stable configurations of the two enantiomers are similar in energy but have very different conformations.
The remainder of the paper is organized as follows. In Sec. 2, we describe the parameters used in the computations. Section 3 and 4 present the results obtained for adsorption on the Pt(111) and chiral surfaces, respectively. Section 5 deals with the electronic structure of the different adsorption configurations.
\section{Computational details}
\begin{figure}[Htb]
\includegraphics[width=6cm]{CONTCAR_final.eps}
\caption{Lactic acid adsorbed on the Pt(111) surface calculated with the oPBE-vdW functional.}
\label{fig:geom111}
\end{figure}
\begin{table*}[Htb]
\caption{Binding energies and work functions of the most stable configurations of lactic acid (L- and D-enantiomer) on the Pt surfaces studied. The first binding energy column is calculated with respect to the molecule relaxed in the surface supercell (with the same K-mesh as in the surface calculation). The second binding energy column gives the energy with respect to an isolated molecule in a large supercell. Referencing the binding energy to the isolated molecule reduces the calculated chiral selectivity, showing that coverage effects influence this quantity. The Hirshfeld charge of the adsorbed lactic acid molecule is given in the last column.}
\begin{tabular}{c|c|c|c|c}
\hline
\multicolumn{1}{c|}{lactic acid on} & \multicolumn{2}{|c|}{binding energy oPBE-vdW with respect to} & \multicolumn{1}{c}{work function} & \multicolumn{1}{|c}{Hirshfeld charge} \\
& molecule in surface unit cell (eV) & isolated molecules (eV) & (eV) & of molecule (e)\\
\hline
Pt(111) & -0.803 & -0.838 & 4.67 & 0.14 \\
L on Pt(321)$^S$ & -1.297 & -1.288 & 4.59 & 0.25 \\
D on Pt(321)$^S$ & -1.274 & -1.270 & 4.61 & 0.26 \\
L on Pt(643)$^S$ & -1.232 & -1.283 & 4.92 & 0.17 \\
D on Pt(643)$^S$ & -1.249 & -1.289 & 4.57 & 0.17 \\
\hline
\end{tabular}
\label{tab:energ}
\end{table*}
\begin{table*}[Htb]
\caption{Binding energy component analysis for the chiral surface configurations. Binding energies of hydrogen saturated COH and COOH groups in the frozen adsorption geometries and deformation energies of the substrate and molecule are calculated. It is evident that the deformation energy is larger for the more strongly interacting Pt(321)$^S$ surface when compared to Pt(643)$^S$. The COOH group is bound more strongly than the COH group throughout, an effect that is much more pronounced on the Pt(643)$^S$ surfaces where the COH group is either far away from the surface (L-lactic acid) or bound to the (111) facet (D-lactic acid). For Pt(321)$^S$ the COH group is bound to a kink atom and is much more strongly interacting than on Pt(643). The sum of the binding energy components considered is smaller in magnitude than the actual binding energy (cf. Table \ref{tab:energ}), which might stem from the binding energy of the neglected carbon and hydrogen atoms.}
\begin{tabular}{c|c|c|c|c|c}
\hline
\multicolumn{1}{c}{lactic acid on} & \multicolumn{2}{|c}{deformation energy (eV) } & \multicolumn{2}{|c}{binding energy (eV)} & \multicolumn{1}{|c}{sum of components (eV)} \\
& surface & molecule & COH group & COOH group & \\
\hline
L on Pt(321)$^S$ & 0.119 & 0.202 & -0.632 & -0.911 & -1.232 \\
D on Pt(321)$^S$ & 0.084 & 0.150 & -0.717 & -0.733 & -1.219 \\
L on Pt(643)$^S$ & 0.073 & 0.094 & -0.074 & -1.154 & -1.112 \\
D on Pt(643)$^S$ & 0.077 & 0.086 & -0.219 & -1.098 & -1.194 \\
\hline
\end{tabular}
\label{tab:decomp}
\end{table*}
We obtained our results with the DFT code VASP 5.12\cite{Hafner2008,Kresse1996a,Kresse1996b} with the oPBE-vdW functional\cite{Klimes2010,Klimes2011} throughout. The inclusion of the van der Waals forces is crucial for the adsorption of a weakly bound molecule since they dominate the binding energy, which was found by comparing to calculations using the PBE functional\cite{Perdew1996}. We opted for this special version of the vdW-DF functional since we wanted to keep the PBE class of functionals while also aiming at an optimal accuracy with the vdW nonlocal correlation. The Projector Augmented Wave method\cite{Bloechl1994,Kresse1999} was employed with valence wave functions expanded up to an energy cutoff of 400 eV. The lattice constant of Pt was determined using a K-mesh of 17x17x17 in conjunction with the tetrahedron method with Bloechl corrections. Fitting of a series of fixed volume calculations to the Murnaghan equation of state gave a lattice constant of 3.999\AA\ (3.978\AA\ for the PBE functional). All structural relaxations are carried out until all forces are smaller than 10 meV/\AA. For all slab calculations dipole corrections to the potential are applied throughout.\cite{Neugebauer1992} To increase accuracy all energies given are calculated with evaluation of the projector functions in reciprocal space.
The Pt(111) surface is constructed as a 6-layer slab with the two topmost layers relaxed. The Brillouin zone was sampled with a 17x17x1 K-mesh and Gaussian broadening of the energy levels of 0.1 eV to facilitate convergence. Pt(321) and Pt(643) surfaces were constructed with a thickness corresponding to 6 layers of Pt(111) and the upper halves of the slabs were relaxed using 7x7x1 and 5x5x1 K-meshes, respectively. The general relaxation pattern of the chiral surfaces is one of inward relaxing step edges and upward moving Pt atoms directly under the step edges. It is similar for both PBE and oPBE-vdW functionals.
The molecules were adsorbed on a 3x3 supercell of Pt(111) and a 2x2 supercell of Pt(321), while for Pt(643) a single surface unit cell was used. Due to the increased unit cell size the K-mesh was reduced to 3x3x1 for (3x3)-Pt(111) and (2x2)-Pt(321), respectively. All molecular degrees of freedom were allowed to relax as was the upper part of the metal slab. In the case of the chiral surfaces the molecules were adsorbed on the relaxed side of the surfaces which constitutes a Pt(321)$^S$ and a Pt(643)$^S$ surface.\cite{Ahmadi1999,Sholl2001} Adsorption energies $E_{adsorption}$ are given with reference to the isolated surface $E_{surface}$ relaxed upon removing the molecule from the unit cell using identical computational parameters and the energy of the molecule $E_{mol}$
\begin{equation}
E_{adsorption} = E_{mol\ on\ surface} - E_{surface} - E_{mol}.
\end{equation}
Two different values for $E_{mol}$ are used: (i) the energy of the molecule relaxed in the surface unit cell and (ii) the energy of the isolated molecule in a larger unit cell. The binding energy with respect to (i) thus removes molecule-molecule interactions from the adsorption energy while the calculation with respect to (ii) does not.
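As a trivial helper illustrating the two conventions (the code and all names in it are ours):
\begin{verbatim}
def adsorption_energy(e_mol_on_surface, e_surface, e_mol):
    # E_ads = E(mol on surface) - E(surface) - E(mol); negative values mean binding.
    # For e_mol pass either (i) the molecule relaxed in the surface supercell,
    # which removes molecule-molecule interactions, or (ii) the isolated molecule
    # in a large cell, which keeps them; both conventions are reported in the tables.
    return e_mol_on_surface - e_surface - e_mol
\end{verbatim}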
\section{Lactic acid on Pt(111)}
To gain insight into the adsorption behavior of lactic acid on Pt surfaces we first adsorbed the molecule on a Pt(111) surface. This surface can also be considered as a model for the (111) terraces exhibited by the Pt(321) and Pt(643) surfaces. The relaxation of L-lactic acid on Pt(111) yielded an adsorption energy of 0.838 eV. Compared to the adsorption energy of 0.161 eV predicted by a separate calculation using the PBE functional, this points to the importance of van der Waals forces in this system. The distances between the hydroxyl and carboxylic oxygen atoms and the nearest Pt atoms are 2.50\AA\ and 3.16\AA, respectively. These distances are predicted to be very similar (2.55\AA\ and 3.23\AA) by the PBE functional. However, the carboxylic oxygen atoms are in a plane parallel to the surface in the case of the oPBE-vdW functional (see Fig. \ref{fig:geom111}), while for the PBE functional one oxygen is further away from the surface. Nevertheless this shows that despite the much larger binding energy the geometry is similar for the two functionals.
\section{L-lactic acid on Pt(321) and Pt(643)}
\begin{figure*}[Htb]
\includegraphics[width=12cm]{chiral_surfaces_lbl.eps}
\caption{Most stable adsorption configurations of L-lactic acid (\textbf{a},\textbf{c}) and D-Lactic acid (\textbf{b},\textbf{d}) on the chiral Pt(321)$^S$ (\textbf{a},\textbf{b}) and Pt(643)$^S$ (\textbf{c},\textbf{d}) surfaces calculated with the oPBE-vdW functional.}
\label{fig:geom}
\end{figure*}
\begin{figure*}[Htbp!]
\includegraphics[width=14cm]{DOS.eps}
\caption{All orbitals are projected onto atomic spheres of all atoms in the unit cell giving the Projected Density of states (PDOS). Summing over all PDOS values for all atoms belonging either to the lactic acid molecule or the substrate then gives the PDOS of the molecule and substrate, respectively. The figure shows the PDOS of the adsorption configurations of lactic acid on (\textbf{a}) Pt(111), (\textbf{b}) Pt(321)$^S$ and (\textbf{c}) Pt(643)$^S$. For the Pt(111) surface the PDOS of an isolated molecule configuration is given for comparison, while for (\textbf{b}) and (\textbf{c}) the PDOS for the two enantiomers of lactic acid are shown. It is evident that the HOMO orbital is broadened and shifted to higher binding energies for adsorption on Pt(643) and especially on Pt(321) with respect to adsorption on Pt(111). The gap between HOMO and HOMO-1 is also widened by a similar amount for all molecule-surface configurations when compared to the molecule in vacuum in (\textbf{a}). However, the HOMO-1 peak broadening is stronger for the Pt(643) surface than for the Pt(321) surface. Also evident is the close resemblance of the PDOS of different enantiomers on a given surface. The impact of chirality matching on the electronic structure of the molecule is thus limited, even for the case of different adsorption configurations on Pt(643)$^S$.}
\label{fig:PDOS}
\end{figure*}
On the chiral surfaces Pt(321)$^S$ and Pt(643)$^S$ we studied the adsorption of both enantiomers of lactic acid to gain insight into differences between their adsorption behavior. Initial positional sampling showed that the lactic acid molecule adsorbs preferentially with its oxygen binding sites on the kink sites of the surfaces. Calculated binding energies are much higher for binding to kink atoms, which is also consistent with findings from Temperature Programmed Desorption experiments for (R)-Methylcyclohexanone on chiral Cu surfaces.\cite{Huang2008} Therefore, we tested for each surface-enantiomer combination a set of configurations with either the hydroxyl or carboxyl group above the kink site. Rotating the molecule around this binding site in steps of 60 degrees then yielded the starting configurations from which the structural relaxations were carried out. Thus, for each chirality of the molecule and each chiral surface 12 configurations are considered. On the Pt(321) surfaces, however, one adsorption configuration has both molecular binding sites at kink sites. Since this constitutes at the same time the 0 degree configuration of the hydroxyl and carboxyl on kink series the overall number of configurations studied on Pt(321) for each chirality is reduced to 11. Thus, we calculated 24 configurations for Pt(643) and 22 for Pt(321) for an overall 46 structural relaxations.
The binding energies calculated are significantly higher on the chiral surfaces then on the flat Pt(111) surface (cf. Table \ref{tab:energ}). The Pt(321)$^S$ surface allows for simultaneous binding to two kink sites for both hydroxyl and carboxyl oxygen atoms (see fig. \ref{fig:geom}) which turns out to be the most favorable adsorption site for both chiralities.
In the case of Pt(643)$^S$ the most stable adsorption configurations are with the carboxyl oxygen atom bound to the kink sites. For L-lactic acid the most stable configuration has the molecule standing upright above the kink site with the carboxyl-hydroxyl carbon bond almost parallel to the surface normal so that it exhibits no hydroxyl oxygen bond to the surface. For D-lactic acid the most stable configuration is lying almost flat on the surface. Still, the hydroxyl oxygen is at a distance of 3.7 \AA\ to the nearest Pt atom, so this interaction also seems to be weak. Thus, for lactic acid on the Pt(643)$^S$ surface the hydroxyl-oxygen-surface interaction does not seem to play a role in the most stable configurations, while the carboxyl group is bound to the kink site.
The overall chiral selectivity of the surfaces studied is very small. The energy differences between the configurations of the two different chiralities are only 23 and 17 meV for Pt(321)$^S$ and Pt(643)$^S$, respectively, when referencing the energy to the molecules in the surface unit cell. Referencing instead to the energy of the isolated molecule gives reduced chiral selectivities which points to a coverage dependency of this quantity. This reduction is more pronounced in the case of Pt(643)$^S$. On Pt(321)$^S$ L-lactic acid is more stable, while on Pt(643)$^S$ the D enantiomer is the more stable one. The bond lengths of the different oxygens to the Pt atoms range from 2.29 \AA\ to 2.37 \AA\ in the case of Pt(321)$^S$. On the Pt(643)$^S$ surface similar bond lengths on the kink site are observed - 2.16 \AA\ and 2.18 \AA\ for the carboxylic oxygen. However, the bond lengths to the terrace Pt atoms are significantly longer in this case.
To understand the role of the carboxyl and hydroxyl groups in the adsorption process we carried out an energy decomposition analysis along the lines of Ref. \onlinecite{Sljivancanin2002} (see Table \ref{tab:decomp}). For each adsorption configuration the molecule is removed apart from the functional group whose binding energy contribution is to be evaluated. All atoms are held fixed in these calculations apart from the hydrogen atom introduced to saturate the bond of the functional group to the rest of the removed molecule. Also the deformation energy of the substrate and molecule are calculated by evaluating their energy at their frozen adsorption geometry upon removal of the other part, i.e. the molecule or the substrate, respectively. Since the binding energies of the molecular fragments are calculated in their frozen geometry, the sum of the binding energy of all components and the relaxation energies of the molecule and the substrate should give the adsorption energy of the whole molecule. The difference obtained in practice can be taken as a measure of the quality of the approximation made in decomposing the energies in this way.
First of all, the relaxation energies obtained are larger for the Pt(321)$^S$ configurations than for the Pt(643)$^S$ ones. This is in line with the more strongly interacting molecules being bound to two kink sites (cf. Table \ref{tab:decomp}). Secondly, the adsorption energy of the carboxyl group is larger than the one of the hydroxyl group for all configurations. On Pt(321)$^S$ the kink-bound hydroxyl binding energy is also sizable while it is much smaller on Pt(643)$^S$. However, the carboxyl group is generally the dominant binding site. In the case of L-lactic acid on Pt(643)$^S$ the energy contribution of the hydroxyl group is especially small which can be attributed to the large distance from the surface. For D-lactic acid on Pt(643)$^S$ it is interacting with the facet which yields an adsorption energy contribution larger than for the L-lactic acid configuration but much smaller than on the kink-bound hydroxyl oxygens on Pt(321)$^S$. Interestingly, the carboxyl group adsorption energy is largest for the Pt(643)$^S$ configurations as this bond can be optimized due to the much smaller specificity of the hydroxyl bond when compared to the Pt(321)$^S$ configurations. The large difference between the binding energy of the carboxyl group on the kink and the hydroxyl group on the facet explains how it is possible to detach this group from the surface for the most stable configuration of L-lactic acid on Pt(643)$^S$.
\begin{figure}[Htb]
\includegraphics[width=9cm]{charge_diff.eps}
\caption{Plane-averaged charge density redistributions upon adsorption of the lactic acid molecule enantiomers on the different Pt surfaces. The substrate is at low z-values and a vertical brown line marks the height of the uppermost surface atom in the case of Pt(111) and the highest and lowest z-values of the first layer atoms in the case of the stepped Pt(321) and Pt(643) surfaces. The highest and lowest atoms of the molecules are also marked by black dashed lines for L-lactic acid and red lines for D-lactic acid. The observed charge redistribution pattern is very similar for all configurations, which attests to the domination of the push-back effect in these configurations.}
\end{figure}
\section{Electronic structure}
To understand the bonding patterns of the lactic acid molecule on the different Pt surfaces we analyzed the electronic structure of the molecule on the different surfaces. To this end, we studied the Projected Density of States (PDOS) for the different surface-enantiomer combinations.\cite{Hoffmann1988} We also studied the charge density redistribution pattern on the surface and calculated the work function for all configurations.
Fig. \ref{fig:PDOS} shows that due to the adsorption process there are some distinct changes in the PDOS of the lactic acid molecule. In general the peaks corresponding to the highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO) are broadened with the smallest broadening occurring for the molecule adsorbed on the Pt(111) surface and the largest occurring for the Pt(321) adsorbed configurations. The HOMO is also shifted to higher binding energies with the largest shift occurring on the Pt(321) surface and the smallest one on the Pt(643) surface. Also, the gap between HOMO and HOMO-1 is widened with the HOMO-1 being shifted to higher binding energies for all adsorbed molecules. These changes in the PDOS indicate that the bonding process involves a rehybridization of the HOMO and HOMO-1 states with electronic states of the surface. Also, a look at the spatial distribution of the frontier orbitals of the molecule shows electronic density on the binding sites, i.e. the hydroxyl and carboxyl groups. This is consistent with local interaction at two binding sites on the Pt(321) surface, one binding site on Pt(643) and generally smaller interaction on Pt(111) leading to progressively smaller broadening of the frontier orbital peaks.
One particularly important experimental observable is the work function of the different surface-adsorbate configurations.\cite{Ishii1999} Upon adsorption of the molecule the distribution of the electron density of the surface is altered by the presence of the molecule. These changes are made up of a push-back effect of electron density towards the surface by the Pauli repulsion exerted on the surface electron density by the atoms of the molecule and the local charge redistribution due to the formation of chemical bonds.\cite{Bagus2002,Michaelides2003} To get an overview of the charge redistribution pattern, we calculate the charge density change $n_{diff}(r)$ upon molecular adsorption as
\begin{equation}
n_{diff}(r)=n_{ads. mol.}(r)-n_{mol}(r)-n_{subst}(r).
\end{equation}
with $n_{ads. mol.}(r)$, $n_{mol}(r)$ and $n_{subst}(r)$ denoting the electron densities of the complete molecule-surface system, the molecule and the substrate, respectively. The charge densities of the molecule and the substrate are calculated at the frozen geometry of the molecule-surface system with either the surface or the molecule removed. The difference $n_{diff}(r)$ is then averaged in planes perpendicular to the surface normal to get the net contribution of the charge density rearrangement to the surface dipole which generates the work function changes.
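In practice this plane average is computed directly on the real-space grid; a minimal numpy sketch (our own, assuming the three densities are given on identical grids, e.g. parsed from VASP CHGCAR files, with the surface normal along the last axis) reads:
\begin{verbatim}
import numpy as np

def plane_averaged_n_diff(n_full, n_mol, n_subst):
    # n_full, n_mol, n_subst: 3D arrays of the electron density on one common
    # real-space grid, with molecule and substrate frozen in the adsorption
    # geometry; the last axis is taken as the surface normal z.
    n_diff = n_full - n_mol - n_subst
    return n_diff.mean(axis=(0, 1))    # in-plane average -> profile along z
\end{verbatim}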
It is found that the charge redistribution pattern for all adsorption configurations exhibits a common feature. There is significant charge accumulation just above the surface and significant charge depletion on the molecule. This can be attributed to the push-back effect. The electron density above the pristine surface is pushed back towards the surface because of the Pauli repulsion of the electrons due to the presence of the molecule. The fact that the electron redistribution is very similar for all adsorption configurations attests to the minor importance of chemical bonding in the charge redistribution patterns. While the Pt(321) configurations give quasi-identical plane-averaged charge redistribution patterns, there are differences for the Pt(643) configurations. For L-lactic acid another dipole layer of opposite sign is modulated on top of the general charge redistribution pattern. This increases the work function for this particular adsorption configuration by 0.3 eV. We attribute its appearance to a reduced push-back effect as a result of the more upright adsorption configuration.
\section{Conclusion}
We studied the adsorption of the chiral molecule lactic acid on the Pt(111), Pt(321) and Pt(643) surfaces. We found that the molecule adsorbs most strongly on the surface exhibiting the highest density of kink sites, which is the Pt(321) surface, closely followed by the Pt(643) surface. On the close-packed Pt(111) surface the adsorption energy is significantly lower. The lactic acid molecule shows a tendency to bind with its carboxyl group to the kink sites of the chiral surfaces. On Pt(321), the hydroxyl group is also adsorbed on a neighboring kink site, while the adsorption geometry in the case of Pt(643) depends on chirality. For the Pt(643)$^S$ surface and D-lactic acid the molecule lies on the (111) facet of the surface, while for L-lactic acid on Pt(643)$^S$ the molecule stands upright on the kink site.
The calculated chiral selectivity is small, about 20 meV for the Pt(321) and Pt(643) surfaces when the energy is referenced to the molecule in the surface unit cell (without substrate), and even smaller when it is referenced to an isolated molecule. However, the calculated chiral selectivity has opposite sign for the Pt(321) and Pt(643) surfaces, i.e. L-lactic acid is more stable on Pt(321)$^S$ and less stable on Pt(643)$^S$. Experimental observation of an overall chiral selectivity on a real chiral Pt surface vicinal to the (111) surface is thus predicted to be challenging, though possible.\cite{Huang2011,Huang2008} Analysis of the contributions of the carboxyl and hydroxyl groups to the total binding energy shows that the carboxyl group is the dominant binding site, giving the largest binding energy contributions.
The adsorption process leads to a rehybridization of the frontier orbitals with electronic states of the surface. This effect is more pronounced for the most strongly bound configurations on Pt(321), less so for Pt(643) and least pronounced for Pt(111). The charge redistribution of the surface due to the adsorption of the lactic acid molecule shows the hallmark of the push-back effect that pushes electron density closer to the surface due to the Pauli repulsion of the molecular electrons. This leads to a considerable lowering of the work function to values around 4.6 eV for all molecule-surface combinations with the lactic acid on Pt(111) surface showing a work function of 4.7 eV. An outlier in terms of work function is the L-lactic acid on Pt(643)$^S$ combination. Here the upright molecular configuration leads to a smaller push-back effect that in turn yields a higher work function of about 4.9 eV.
Overall, our results show that lactic acid adsorbs on stepped Pt surfaces predominantly through bonding of its carboxyl group to a kink site, with the hydroxyl group constituting a secondary binding site. A small chiral selectivity on chiral Pt surfaces is predicted, whose sign depends on the exact surface studied. Adsorption geometries can depend on molecular chirality, leading to large changes in the work function. This last effect in particular should be verifiable by field ion microscopy or scanning tunneling microscopy.
\section{Acknowledgements}
This work has been supported by the Francqui Foundation, and Programme d'Actions de Recherche Concertée de la Communauté Française, Belgium. We would like to thank Pierre Gaspard and Thierry Visart de Bocarmé for useful discussions. We also acknowledge the Computing Center of ULB/VUB for computer time on the HYDRA cluster.
|
1,116,691,500,662 | arxiv | \section{Introduction}\label{sec:intro}
\emph{Graph neural networks} (GNNs) \cite{gori2005new,scarselli2008graph, micheli2009neural} have seen sharply growing popularity over the last few years \cite{duvenaud2015convolutional,hamilton2017inductive,xu2018powerful}.
GNNs provide a general framework to model complex structural data containing elements (nodes) with relationships (edges) between them.
A variety of real-world domains such as social networks, %
computer programs, chemical and biological systems can be naturally represented as graphs. Thus, many graph-structured domains are commonly modeled using GNNs.
A GNN layer can be viewed as a message-passing step \cite{gilmer2017neural}, where each node updates its state by aggregating messages flowing from its direct neighbors. GNN variants \cite{li2015gated,velic2018graph,kipf2016semi} mostly differ in how each node aggregates the representations of its neighbors with its own representation.
However, most problems also require interaction between nodes that are not directly connected, which GNNs achieve by stacking multiple layers.
Different learning problems require different ranges of interaction between nodes in the graph to be solved.
We call this required range of interaction between nodes %
the \emph{problem radius}.
In practice, GNNs were observed \emph{not} to benefit from more than a few layers.
The accepted explanation for this phenomenon is \emph{over-smoothing}: node representations become indistinguishable when the number of layers increases \cite{wu2020comprehensive}.
Nonetheless, over-smoothing was mostly demonstrated in \emph{short-range} tasks \cite{li2018deeper,klicpera2018predict,madgap_aaai20,oono2020Graph,Zhao2020PairNorm,rong2020dropedge,chen2020simple} -- tasks that have small \emph{problem radii}, where a node's correct prediction mostly depends on its local neighborhood. Such tasks include paper subject classification \cite{sen2008collective} and product category classification \cite{shchur2018pitfalls}.
Since the learning problems depend mostly on short-range information in these datasets,
it makes sense why more layers than the problem radius might be extraneous.
In contrast, in tasks that also depend on \emph{long-range} information (and thus have larger \emph{problem radii}),
we hypothesize that the explanation for limited performance is \emph{over-squashing}.
We further discuss the differences between over-squashing and over-smoothing in \Cref{sec:related}.
\input{figures/seq-vs-graph-fig.tex}
To allow a node to receive information from other nodes at a radius of $K$, the GNN needs to have at least $K$ layers, or otherwise, it will suffer from \emph{under-reaching} -- these distant nodes will simply not be aware of each other.
Clearly, to avoid under-reaching, problems that depend on long-range interaction
require as many GNN layers as the
range of the interaction.
However, as the number of layers increases, the number of nodes in each node's receptive field grows \emph{exponentially}.
This causes \emph{over-squashing}:
information from the exponentially-growing receptive field is compressed into fixed-length node vectors.
Consequently, the graph fails to propagate messages flowing from distant nodes, and learns only short-range signals from the training data. %
In fact, the GNN bottleneck is analogous to the bottleneck of sequential RNN models. Traditional seq2seq models \cite{sutskever2014sequence, cho2014properties, cho2014learning} suffered from a bottleneck at every decoder state -- the model had to encapsulate the entire input sequence into a fixed-size vector.
In RNNs, the receptive field of a node grows \emph{linearly} with the number of recursive applications.
However in GNNs, the bottleneck is asymptotically more harmful,
because %
the receptive field of a node grows \emph{exponentially}.
This difference is illustrated in \Cref{fig:bottleneck-seq-vs-graph}.
This work does \emph{not} aim to propose a new GNN variant. Rather, our main contribution is
introducing the \emph{over-squashing} phenomenon -- a novel explanation for the major and well-known issue of training GNNs for long-range problems, and showing its harmful practical implications. %
We use a controlled problem to demonstrate how over-squashing prevents GNNs from fitting long-range patterns in the data, and to provide theoretical lower bounds for the required hidden size given the problem radius (\Cref{sec:analysis}). We show, analytically and empirically, that GCN \cite{kipf2016semi} and GIN \cite{xu2018powerful} are susceptible to over-squashing \emph{more} than other types of GNNs such as GAT \cite{velic2018graph} and GGNN \cite{li2015gated}. %
We further show that prior work that extensively tuned GNNs to real-world datasets suffer from over-squashing: breaking the bottleneck using a simple fully adjacent layer reduces the error rate by 42\% in the QM9 dataset, by 12\% in ENZYMES, by 4.8\% in NCI1, and improves accuracy in {\sc{VarMisuse}},
without any additional tuning. %
\section{Preliminaries}\label{sec:gnns}
A directed graph $\mathcal{G}=\left(\mathcal{V},\mathcal{E}\right)$ contains nodes $\mathcal{V}$ and edges $\mathcal{E}$, where $\left(u,v\right)\in\mathcal{E}$ denotes an edge from a node $u$ to a node $v$.
For brevity, in the following definitions we treat all edges as having the same \emph{type}; in general, every edge can have a type and features \cite{schlichtkrull2018modeling}. %
\para{Graph neural networks}
Graph neural networks operate by propagating neural messages between neighboring nodes. At every propagation step (a graph layer): the network computes each node's sent message; every node aggregates its received messages; and each node updates its representation by combining the aggregated incoming messages with its own previous representation.
Formally, each node is associated with an initial representation $\mathbf{h}_v^{\left(0\right)} \in \mathcal{R}^{d_0}$. This representation is usually derived from the node's label or its given features. Then, a GNN layer updates each node's representation given its neighbors, yielding $\mathbf{h}_v^{\left(1\right)} \in \mathcal{R}^{d}$. In general, the $k$-th layer of a GNN is a parametric function $f_k$ that is applied to each node by considering its neighbors:
\begin{equation}
\mathbf{h}_v^{\left(k\right)}=f_k\left(
\mathbf{h}_v^{\left(k-1\right)},
\{\mathbf{h}_u^{\left(k-1\right)}\mid u\in\mathcal{N}_v\}
; \theta_{k}\right)
\label{eq:layer}
\end{equation}
where $\mathcal{N}_v$
is the set of nodes that have edges to $v$: $\mathcal{N}_v=\{u \in \mathcal{V} \mid \left( u,v \right) \in \mathcal{E}\}$.
The total number of layers $K$ is usually determined empirically as a hyperparameter.
The design of the function $f$ is what mostly distinguishes one type of GNN from the other. For example, graph convolutional networks (GCN) %
define $f$ as:
\begin{equation}
\mathbf{h}_v^{\left(k\right)}=
\sigma\left(
\sum\nolimits_{u\in \mathcal{N}_v \cup \{v\}} \frac{1}{c_{u,v}}
W^{\left(k\right)}\mathbf{h}_{u}^{\left({k-1}\right)}
\right)
\label{eq:gcn}
\end{equation}
where $\sigma$ is a nonlinearity such as $ReLU$, and $c_{u,v}$ is a normalization factor often set to $\sqrt{|\mathcal{N}_v| \cdot |\mathcal{N}_u|}$ or $|\mathcal{N}_v|$ \cite{hamilton2017inductive}.
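As a concrete illustration of \Cref{eq:gcn}, a minimal dense implementation could look as follows. This sketch is ours, not the code of any of the cited papers; it uses a dense $n \times n$ adjacency matrix for clarity, whereas practical implementations use sparse message passing:
\begin{verbatim}
import torch

def gcn_layer(H, A, W):
    # H: [n, d] node features, A: [n, n] 0/1 adjacency, W: [d, d_out]
    A_hat = A + torch.eye(A.size(0))        # add self-loops (N_v union {v})
    deg = A_hat.sum(dim=1)                  # degree of every node
    D_inv_sqrt = torch.diag(deg.pow(-0.5))  # symmetric normalization
    return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
\end{verbatim}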
As another example, graph isomorphism networks (GIN) \cite{xu2018powerful} %
update a node's representation using the following definition:
\begin{equation}
\mathbf{h}_v^{\left(k\right)}=
MLP^{\left(k\right)}\left( \left(1+\epsilon^{\left(k\right)}\right) \mathbf{h}_v^{\left(k-1\right)}
+ \sum\nolimits_{u \in \mathcal{N}_v} \mathbf{h}_{u}^{\left({k-1}\right)} \right)
\label{eq:gin}
\end{equation}
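Under the same dense conventions as the GCN sketch above, the GIN update of \Cref{eq:gin} is a one-liner; \texttt{mlp} is a placeholder name of ours for any small feed-forward network (e.g. two linear layers with a ReLU in between):
\begin{verbatim}
import torch

def gin_layer(H, A, mlp, eps=0.0):
    # H: [n, d] node features; A: [n, n] adjacency without self-loops,
    # where A[v, u] = 1 iff u is a neighbor of v
    # sum over neighbors, then combine with the node's own representation
    return mlp((1.0 + eps) * H + A @ H)
\end{verbatim}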
Usually, the last ($K$-th) layer's output is used for prediction:
in node-prediction, $\mathbf{h}_v^{\left(K\right)}$ is used to predict a label for $v$;
in graph-prediction, a permutation-invariant ``readout'' function aggregates the nodes of the final layer
using summation, averaging, or a weighted sum \cite{li2015gated}.
\section{The GNN Bottleneck}\label{sec:bottleneck}
Given a graph $\mathcal{G}=\left(\mathcal{V},\mathcal{E}\right)$ and a given node $v$, we denote the problem's required range of interaction, the \emph{problem radius}, by $r$. $r$ is generally unknown in advance, and usually approximated empirically by tuning the number of layers $K$.
We denote the set of nodes in the receptive field of $v$ by $\mathcal{N}_{v}^{K}$, which is defined recursively as $\mathcal{N}_{v}^{1}:=\mathcal{N}_{v}$ and $\mathcal{N}_{v}^{K}:=\mathcal{N}_{v}^{K-1} \cup \{w \mid \left(w,u\right)\in \mathcal{E} \land u \in \mathcal{N}_{v}^{K-1} \}$.
When a prediction problem relies on long-range interaction between nodes, the GNN must have as many layers $K$ as the estimated range of these interactions, or otherwise, these distant nodes would not be able to interact. It is thus required that $K \geq r$.
However, the number of nodes in each node's receptive field grows \emph{exponentially} with the number of layers: $\norm{ \mathcal{N}_{v}^{K}} = \mathcal{O}\left( \exp\left(K\right)\right)$ \cite{chen2018stochastic}.
As a result, an exponentially-growing amount of information is squashed into a fixed-length vector (the vector resulting from the $\sum$ in \Cref{eq:gcn,eq:gin}), and crucial messages fail to reach their distant destinations. Instead, the model learns only short-ranged signals from the training data
and consequently it might generalize poorly at test time.
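The exponential growth of $\norm{\mathcal{N}_{v}^{K}}$ is easy to verify numerically. The following self-contained sketch (ours; it assumes a full binary tree with edges directed toward the root, as in the experiments below) counts the $K$-hop receptive field by breadth-first search:
\begin{verbatim}
from collections import deque

def receptive_field_size(adj, v, K):
    # adj maps every node to its in-neighbors N_v
    seen, frontier = {v}, deque([(v, 0)])
    while frontier:
        u, hops = frontier.popleft()
        if hops == K:
            continue
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                frontier.append((w, hops + 1))
    return len(seen) - 1  # exclude v itself

# full binary tree of depth 8, edges directed toward the root (node 0)
adj = {i: [2 * i + 1, 2 * i + 2] for i in range(2 ** 8 - 1)}
print([receptive_field_size(adj, 0, K) for K in range(1, 9)])
# prints 2, 6, 14, ..., 2^(K+1) - 2: exponential in K
\end{verbatim}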
\input{figures/graph-cloud-fig.tex}
\para{Example} Consider the {\scshape NeighborsMatch}{} problem of \Cref{fig:cloud}. Green nodes (\mygreennode{A}, \mygreennode{B}, \mygreennode{C}) have a varying number of blue neighbors (\mybluenode) and an alphabetical label. Each example in the dataset is a different graph that has a different mapping from numbers of neighbors to labels.
The rest of the graph (marked as \mychar{figures/bottleneck-cloud.pdf}) represents a general, unknown, graph structure.
The goal is to predict a label for the target node, which is marked with a question mark (\boldgreennode{\textbf{?}}), according to its number of blue neighbors. The correct answer is \textbf{C} in this case, because the target node has \emph{two} blue neighbors, like the node marked with C in the same graph.
Every example in the dataset has a different mapping from numbers of neighbors to labels, and thus message propagation and matching between the target node and all the green nodes must be performed \emph{for every graph in the dataset}.
Since the model must propagate information from \emph{all} green nodes before predicting the label, a bottleneck at the target node is inevitable. This bottleneck causes \emph{over-squashing}, which can prevent the model from fitting the training data perfectly.
We demonstrate the bottleneck empirically in this problem in \Cref{sec:evaluation}; in \Cref{sec:analysis}, we %
provide theoretical lower bounds for the GNN's hidden size.
Obviously, adding direct edges between the target node and the green nodes, or making the existing edges bidirectional, could ease information flow for this specific problem. However, in real-life domains (e.g., molecules), we do not know the optimal message propagation structure a priori, and must use the given relations (such as bonds between atoms) as the graph's edges.
Although this is a contrived problem, it resembles real-world problems that are often modeled as graphs.
For example, a computer program in a language such as Python may declare multiple variables (i.e., the green nodes in \Cref{fig:cloud}) along with their types and values (their numbers of blue neighbors in \Cref{fig:cloud}); later in the program, predicting which variable should be used in a specific location (predict the alphabetical label in \Cref{fig:cloud}) must use one of the variables that are available in scope based on the required type and the required value at that point. We experiment with this {\sc{VarMisuse}} problem in \Cref{subsec:programs}.
\para{Short- vs. long-range problems}
Much of prior GNN work has focused on problems that were local in nature, with small problem radii, where the underlying inductive bias was that a node's most relevant context is its local neighborhood, and long-range interaction was not necessarily needed.
With the growing popularity of GNNs, their adoption expanded to domains that required longer-range information propagation as well, without addressing the inherent bottleneck.
In this paper, we focus on problems that \emph{require} long-range information. That is, a correct prediction requires considering the local environment of a node \emph{and} interactions beyond the close neighborhood. For example, a chemical property of a molecule \cite{ramakrishnan2014quantum,gilmer2017neural} %
can depend on the combination of atoms that reside in the molecule's \emph{opposite sides}.
Problems of this kind require long-range interaction, and thus, a large number of GNN layers. Since the receptive field of each node grows exponentially with the number of layers, the more layers there are, the more harmful over-squashing becomes. %
In problems that are local in nature (small $r$) -- the bottleneck is less troublesome, because a GNN can perform well with only few layers (e.g., $K$$=$2 layers in \citet{kipf2016semi}), and the receptive field of a node can be exponentially smaller.
Domains such as citation networks \cite{sen2008collective}, social networks \cite{leskovec2012learning},
and product recommendations \cite{shchur2018pitfalls} usually raise short-range problems and are thus \emph{not} the focus of this paper.
So, how long is long-range? We discuss and analyze this question in \Cref{sec:analysis}.
\section{Breaking the Bottleneck}\label{sec:breaking}
To overcome the bottleneck, we wish to allow a node to interact directly with nodes that reside beyond its immediate neighbors. The information propagated from these distant nodes might have been lost or noised in the bottleneck. However, we do \emph{not} wish to connect every node to every other node across all message propagation steps, because this ignores the topology of the graph and increases the computational complexity from $O\left(|\mathcal{E}| \right)$ to $O\left(|\mathcal{V}|^2 \right)$ across all layers.
To break the bottleneck, we propose to add a single \emph{clique layer} on top of a GNN stack containing $K$ layers.
A clique layer is a GNN layer where every node is connected to any other node:
\begin{equation}
\mathbf{h}_v^{\left(clique\right)}=f_{clique}\left(
\mathbf{h}_v^{\left(K\right)},
\{\mathbf{h}_u^{\left(K\right)}\mid u\in \mathcal{V}\}
; \theta_{clique}\right)
\label{eq:clique}
\end{equation}
where $f_{clique}$ can either be the same function as in the rest of the layers $f_{1..K}$,
or a different interaction function between a node and a set of nodes. Note that \Cref{eq:clique} is very similar to \Cref{eq:layer}, except that in \Cref{eq:layer} the interaction is only with the neighbors of $v$ (i.e., $\mathcal{N}_v$),
and in \Cref{eq:clique} the interaction is with all the nodes in the graph ($\mathcal{V}$).
Clique layers allow interactions between topology-aware node representations of the $K$-'th layer, even if they are distant, while still allowing a GNN to leverage the graph topology using the original $K$ layers.
Additionally, clique layers free the GNN from having to squash all the information flowing into a node into a fixed-length vector, regardless of the graph size.
For example, if the node under prediction in \Cref{fig:cloud} could interact with the node marked with ``C'' directly, after both have counted the number of their blue neighbors, we could predict the label of the node in question without squashing the information from the entire graph into a fixed-length vector.
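In a dense implementation, a clique layer is simply a standard GNN layer applied with an all-ones adjacency matrix. This sketch (ours) reuses the \texttt{gcn\_layer} helper from the earlier sketch; whether to keep the implicit self-edges is a design choice:
\begin{verbatim}
import torch

def clique_layer(H, layer_fn):
    # H: [n, d] outputs of the K-th GNN layer
    n = H.size(0)
    A_full = torch.ones(n, n)   # every node is adjacent to every node
    return layer_fn(H, A_full)  # e.g. lambda H, A: gcn_layer(H, A, W)
\end{verbatim}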
\section{Evaluation}\label{sec:evaluation}
First, we wish to empirically show that the GNN bottleneck exists, and find the smallest values of $r$ that raise over-squashing.
We generated a synthetic benchmark that is theoretically solvable; %
however, in practice, all GNNs fail to reach 100\% training accuracy because of the bottleneck (\Cref{subsec:synthetic}). %
Second, we examine whether the bottleneck exists in prior work, which addressed real-world problems (\Cref{subsec:chemistry,subsec:bio,subsec:programs}).
\subsection{Synthetic Benchmark: {\scshape NeighborsMatch}{}}\label{subsec:synthetic}
The {\scshape NeighborsMatch}{} problem (\Cref{fig:cloud}) is a contrived problem that we designed to
provide an intuition to the extent of the effect of over-squashing, while allowing us to control the problem radius $r$, and thus \emph{control the intensity} of over-squashing.
We focus on the \emph{training} accuracy of a model, to show that over-squashing prevents models from fitting long-range signals in the training set.
\para{{\scshape Tree-}{\scshape NeighborsMatch}{}}
From the perspective of a single node $v$, the rest of the graph may look like a tree of height $K$, rooted at $v$ \cite{xu2018representation,garg2020generalization}.
To simulate this exponentially-growing receptive field, we
created an instance of the general {\scshape NeighborsMatch}{} problem that we described in \Cref{sec:bottleneck} and portrayed in \Cref{fig:cloud}.
We instantiated the subgraph in the middle of the graph (marked as \mychar{figures/bottleneck-cloud.pdf} in \Cref{fig:cloud}) as a binary tree of depth $depth$ where the green nodes are its leaves, and the target node is the tree's root.
All edges are directed toward the root, such that information is propagated from all nodes toward the target node.
The goal, as in \Cref{sec:bottleneck}, is to predict a label for the target node, where the correct answer is the label of the green node that has the same number of blue neighbors as the target node. An illustration is shown in \Cref{fig:tree-as-cloud} in the appendix.
This allows us to control the problem radius, i.e., $r=depth$.
In this section we observe the bottleneck empirically; in \Cref{sec:analysis} we provide %
a lower bound for the GNN's hidden size given $r$. %
\para{Model}
We implemented a network with $r$$+$$1$ graph layers to allow an additional nonlinearity after the information from the leaves reaches the target node. %
Our PyTorch Geometric \cite{fey2019pytorchgeometric} implementation is available at \url{https://github.com/tech-srl/bottleneck/}.
Our training configuration and hyperparameter ranges are detailed in \Cref{subsec:trees-config}.
\input{tree-per-depth.tex}
\para{Results}
\Cref{fig:tree-by-depth} shows the following surprising results: some GNNs fail to fit the dataset starting from $r$$=$$4$. For example, the training accuracy of GCN \cite{kipf2016semi} at $r$$=$$4$ is 70\%. At $r$$=$$5$, all GNNs fail to perfectly fit the data.
Starting from $r$$=$$4$, the models suffered from \emph{over-squashing} that resulted in \emph{underfitting}: the bottleneck prevented the models from distinguishing between different training examples, even after they were observed tens of thousands of times.
These results clearly show the existence of over-squashing, starting from $r$$=$$4$.
\para{Why did some GNNs perform better than others?}
GCN and GIN managed to perfectly fit $r$$=$$3$ at most,
while GGNN and GAT also reached 100\% accuracy at $r$$=$$4$. This difference can be explained by their neighbor aggregation computation: consider the target node that receives messages in the $r$'th step. GCN and GIN aggregate all neighbors \emph{before} combining them with the target node's representation; they thus must compress the information flowing from \emph{all} leaves into a single vector, and \emph{only afterward} interact with the target node's own representation (\Cref{eq:gcn,eq:gin}). In contrast, GAT uses attention to weight incoming messages given the target's representation: at the last layer only, the target node can ignore the irrelevant incoming edge, and absorb only the relevant incoming edge, which contains information flowing from \emph{half} of the leaves.
That is, a single vector compresses \emph{only half} of the information.
Since the number of leaves grows exponentially with $r$, it is expected that GNNs that need to compress \emph{only half} of the information (GGNN and GAT) will succeed at an $r$ that is larger by 1.
Following \citet{levy2018long}, we hypothesize that the GRU cell in GGNNs filters incoming edges as GAT does, but performs this filtering as element-wise attention.
\para{If all GNNs have reached low \emph{training} accuracy, how do GNN-based models usually \emph{do fit} the training data in public datasets of long-range problems?} We hypothesize that they overfit short-range signals and artifacts from the training set, rather than learning the long-range information that was squashed in the bottleneck, and thus generalize poorly at test time.
\subsection{Quantum Chemistry: QM9}\label{subsec:chemistry}
We wish to measure over-squashing in existing models.
But how can we measure over-squashing directly? There is no obvious direct metric; instead, we measure whether breaking the bottleneck improves the results of long-range problems.
\para{Adding a fully-adjacent layer (FA)}
In \Cref{subsec:chemistry,subsec:bio,subsec:programs}, we took extensively tuned models from previous work, and modified adjacency in the last layer:
given a GNN with $K$ layers, we modified the $K$-th layer to be a \emph{fully-adjacent layer} (FA).
A \emph{fully-adjacent layer} is a GNN layer in which every pair of nodes is connected by an edge.
In terms of \Cref{eq:layer,eq:gcn,eq:gin},
converting an existing layer to be fully-adjacent means that $\mathcal{N}_v\coloneqq\mathcal{V}$ for every node $v \in \mathcal{V}$, in that layer only.
This does not change the type of layer nor add weights, but only changes adjacency of a data sample in a single layer. %
Thus, the $K-1$ graph layers exploit the graph structure using their original sparse topology, and only the $K$-th layer is an FA layer that allows the topology-aware node-representations to interact directly and consider nodes beyond their original neighbors. %
Hopefully, this would ease information flow, prevent over-squashing, and reduce the effect of the previously-existed bottleneck.
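In PyTorch Geometric terms, converting the $K$-th layer into an FA layer amounts to feeding that layer a complete edge index instead of the graph's own. A minimal sketch (ours, not the authors' code; note that it includes self-loop pairs, which one may filter out):
\begin{verbatim}
import torch

def fully_adjacent_edge_index(num_nodes):
    # all ordered pairs (u, v): N_v = V for every node v
    idx = torch.arange(num_nodes)
    return torch.cartesian_prod(idx, idx).t()  # shape [2, n * n]

# layers 1..K-1 keep the original edge_index;
# only layer K is called with fully_adjacent_edge_index(n)
\end{verbatim}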
We re-trained the models using the authors' original code, without performing \emph{any} additional
tuning, to rule out hyperparameter tuning as the source of improvement.
Statistics of all datasets can be found in \Cref{sec:stats}.
We note that an FA layer is a \emph{simple} solution. Its purpose is merely to demonstrate that over-squashing in GNNs is so prevalent and untreated that \emph{even the simplest solution helps}.
Our main contribution is not the solution, but rather, highlighting and explaining the over-squashing \emph{problem}.
This simple solution opens the path for a variety of follow-up improvements and solutions for over-squashing.
\para{Data}
The QM9 dataset \cite{ramakrishnan2014quantum, gilmer2017neural, wu2018moleculenet} contains \textasciitilde130,000 graphs with \textasciitilde 18 nodes. Each graph is a molecule where nodes are atoms, and undirected, typed edges are different types of bonds between the atoms.
The goal is to regress each graph to 13 real-valued quantum chemical properties such as \emph{dipole moment} and \emph{isotropic polarizability}.
\para{Models}
We modified the implementation of \citet{brockschmidt2019graph} who performed an extensive hyperparameter tuning for multiple GNNs, by searching over 500 configurations; we took the same splits and their best-found configurations.
For most GNNs, Brockschmidt found that the best results are achieved using $K$$=$$8$ layers. This hints that this problem depends on long-range information and relies on both graph structure \emph{and} distant nodes.
We re-trained each modified model for each target property using the same code, configuration, and training scheme as \citet{brockschmidt2019graph}, training each model five times (using different random seeds) for each target property task. We compare the ``base'' models, reported by Brockschmidt, with our modified and re-trained ``$+$FA'' models. %
\input{qm_results.tex}
\para{Results}
Results for the top GNNs are shown in \Cref{tab:qm-results}. The main results are that breaking the bottleneck by modifying a single layer to be an FA layer \emph{significantly reduces the error rate}, by 42\% on average, across six GNN types.
These experiments clearly show evidence for a bottleneck in the original GNN models. Results for the other GNNs are shown in \Cref{sec:qm-additional} due to space limitation.
\para{Over-squashing or under-reaching?}
\citet{barcelo2020logical} discuss the inability of a GNN node to observe nodes that are farther away than the number of layers $K$.
We denote this limitation as \emph{under-reaching}: for every fixed number of layers $K$, local information cannot travel farther than distance $K$ along edges. So, was the improvement of the FA layer in \Cref{tab:qm-results} achieved thanks to the reduction in over-squashing, or did the FA layer only extend the nodes' reachability and prevent under-reaching?
To answer this,
we measured the graphs' \emph{diameter} in the QM9 dataset -- the maximum shortest path between any two nodes in a graph. We found that the average diameter is $6.35$$\pm$$0.91$, the maximum diameter is $10$, and the 90'th percentile is $8$, while most models were trained with $K$$=$$8$ layers. That is, at least 90\% of the examples in the dataset certainly did \emph{not} suffer from under-reaching, because the number of layers was greater than or equal to their diameter.
We trained another set of models with 10 layers, which did not show an improvement over the base models.
We conclude that the source of improvement was clearly \emph{not} the increased reachability, but instead, the reduction in over-squashing.
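This kind of measurement is straightforward to reproduce with standard graph tooling; a hedged sketch (ours), assuming each molecule is given as an undirected bond list:
\begin{verbatim}
import networkx as nx

def molecule_diameter(bonds):
    # bonds: list of (u, v) pairs of a single (connected) molecule
    G = nx.Graph(bonds)
    return nx.diameter(G)  # longest shortest path in the graph
\end{verbatim}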
\para{Can larger hidden sizes achieve a similar improvement?} We trained another set of models with \emph{doubled} dimensions. These models achieved only 5.5\% improvement over the base model (\Cref{subsec:qm-ablation}), while adding the FA layer achieved 42\% improvement using the original dimensions and without adding weights. Consistently, in \Cref{sec:analysis} we present an analysis that shows how dimensionality increase is \emph{ineffective} in preventing over-squashing.
\para{Is the entire FA layer needed?}
We experimented with using only a sampled fraction of edges in the FA layer. As \Cref{subsec:partial-fa} shows, the fraction of added edges in the last layer correlates with the decrease in error. For example, using only \emph{half} of the possible edges in the last layer (a ``semi-adjacent'' layer) still reduces the error rate by 31.5\% on average compared to ``base''.
\para{If all GNNs benefitted from direct interaction between all nodes, maybe the graph structure is not even needed?} We trained another set of models (\Cref{subsec:qm-ablation}) where
\emph{all $K$ layers} are FA layers,
thus ignoring the original graph topology; these models produced 1500\% \emph{higher} (worse) error. %
\input{bio-evaluation.tex}
\input{varmisuse-eval}
\section{How Long is Long-Range?}\label{sec:analysis}
\input{dim-depth-fig}
In this section, we analyze over-squashing combinatorially in the {\scshape Tree-}{\scshape NeighborsMatch}{} problem.
We provide a combinatorial lower bound for the minimal hidden size that a GNN requires to perfectly fit the data (learn to 100\% training accuracy) given its problem radius $r$.
We denote the arity of such a tree by $m$ ($=$$2$ in our experiments);
the counting base as $b$$=$$2$; %
the number of bits in a floating-point variable as $f$$=$$32$; %
and the hidden dimension of the GNN, i.e., the size of a node vector
$\mathbf{h}_v^{\left(k\right)}$, as $d$.
A full tree of arity $m$ and problem radius $r$$=$$depth$ has $m^{r}$ green label-nodes.
All $\left(m^{r}\right)!$ possible permutations of the labels \{A, B, C, ...\} are valid, disregarding the order of sibling nodes. Thus, the number of label assignments of green nodes is $\left(m^{r}\right)! / \left(m!\right)^{m^{r} - 1}$ (there are $m^{r} - 1$ parent nodes, where the order of each of their $m$ siblings can be permuted).
Right before interacting with the target node and predicting the label, a single vector of size $d$ must encapsulate the information flowing from all green nodes (\Cref{eq:gcn,eq:gin}).\footnote{The analysis holds for GCN and GIN. Architectures that use the representation of the recipient node to aggregate messages, like GAT, need to compress the information from only \emph{half} of the leaves in a single vector. This increases the final upper bounds on $r$ by up to 1, as demonstrated empirically in \Cref{subsec:synthetic}.}
Such a vector contains $d$ floating-point elements, each of them is stored as $f$ bits. Overall, the number of possible cases that this vector \emph{can} distinguish between is $b^{f\cdot d}$.
The number of possible cases that the vector can distinguish between must be greater than the number of different examples that this vector may encounter in the training data. This requirement is expressed in \Cref{eq:dim-depth}.
Considering binary trees ($m$$=$$2$), and floating-point values of $f$$=$$32$ binary ($b$$=$$2$) bits, we get \Cref{eq:dim-depth-concrete}:
\begin{minipage}{.45\linewidth}
\begin{equation}
b^{f\cdot d} > \frac{\left(m^{r}\right)!}{\left(m!\right)^{m^{r}-1}}
\label{eq:dim-depth}
\end{equation}
\end{minipage}
\begin{minipage}[t]{.45\linewidth}
\vspace{-13pt}
\begin{equation}
2^{32\cdot d} > \frac{\left(2^{r}\right)!}{2^{2^{r}-1}}
\label{eq:dim-depth-concrete}
\end{equation}
\end{minipage}
Since factorial grows faster than an exponent with a constant base, %
a small increase in %
$r$ requires a much larger increase in $d$.
Specifically,
for $d$$=$$32$ as in the experiments in \Cref{subsec:synthetic}, the maximal problem radius is as low as $r$$=$$7$.
That is, a model with $d$$=$$32$ \emph{cannot} obtain 100\% accuracy for $r$$>$$7$.
In practice, the problem is worse; i.e., the empirical minimal $d$ is higher than the combinatorial, because even if a solution to storing some information in a vector of a certain size exists, a gradient descent-based algorithm is not guaranteed to find it.
\Cref{fig:dim-depth} shows the combinatorial lower bound of $d$ given $r$.
We also repeated the experiments from \Cref{subsec:synthetic} and report the minimal \emph{empirical} $d$ for each value of $r$.
As shown in \Cref{fig:dim-depth}, the empirical and the theoretical minimal $d$ grow exponentially with $r$; for example, even $d$$=$$512$
can empirically fit $r$$=$$7$ at most.
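\Cref{eq:dim-depth-concrete} can be evaluated exactly with integer arithmetic. The following sketch (ours) computes the combinatorial minimum of $d$ for each $r$; the division is exact because the 2-adic valuation of $\left(2^{r}\right)!$ is $2^{r}-1$:
\begin{verbatim}
from math import factorial

def min_hidden_size(r, f=32):
    # number of distinct label assignments for depth r (m = 2)
    cases = factorial(2 ** r) // 2 ** (2 ** r - 1)
    bits = cases.bit_length()   # ~ log2(cases)
    return -(-bits // f)        # smallest d with 2^(f*d) > cases

print({r: min_hidden_size(r) for r in range(2, 9)})
# d = 32 suffices combinatorially up to r = 7 but not for r = 8
\end{verbatim}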
\section{Related Work}\label{sec:related}
\para{Under-reaching}
\citet{barcelo2020logical} found that the expressiveness of GNNs captures only a small fragment of first-order logic.
The main limitation arises from the inability of a node to be aware of nodes that are farther away than the number of layers $K$, while the existence of such nodes \emph{can} be easily described using logic. We denote this limitation as \emph{under-reaching}.
Nevertheless, even when information is
reachable within $K$ edges,
we show that this information might be over-squashed along the way. Thus, the \emph{over-squashing} limitation described in this paper is \emph{tighter} than \emph{under-reaching}.
\para{Over-smoothing}
As observed before,
node representations become indistinguishable and prediction performance severely degrades as the number of layers increases. The accepted explanation to this phenomenon is \emph{over-smoothing} \cite{li2018deeper,wu2020comprehensive,oono2020Graph}.
This might explain the empirical optimality of few layers in short-range tasks (e.g., only $K$$=$$2$ layers in \citet{kipf2016semi}).
Nonetheless, some problems depend on longer-range information propagation and thus \emph{require} more layers, to avoid \emph{under-reaching}.
We hypothesize that in long-range problems, the explanation for the degraded performance is \emph{over-squashing} rather than \emph{over-smoothing}.
For further discussion of over-smoothing vs. over-squashing, see \Cref{append:oversmoothing}.
\para{Avoiding over-squashing}
Some previous works avoid over-squashing by various profitable means: %
\citet{gilmer2017neural} add ``virtual edges'' to shorten long distances;
\citet{scarselli2008graph} add ``supersource nodes''; %
and \citet{allamanis2018learning} designed program analyses that serve as 16 ``shortcut'' edge types. %
However, none of these explicitly explained these solutions as remedies for over-squashing, nor identified the bottleneck and its negative cross-domain implications.
\section{Conclusion}
We propose a novel explanation to a well known limitation in training graph neural networks: a bottleneck that causes over-squashing.
Problems that depend on long-range interaction require as many GNN layers as the desired radius of each node's receptive field.
This causes an exponentially-growing amount of information to be squashed into a fixed-length vector. As a result, the GNN fails to propagate long-range information, learns only short-range signals from the training data instead, and performs poorly when the prediction task depends on long-range interaction.
We demonstrate
the existence of the bottleneck
in a controlled problem,
provide theoretical lower bounds for the hidden size given the problem radius,
and show that GCN and GIN %
are more susceptible to over-squashing than GAT and GGNN.
We further show that prior models of chemical, biological and programmatical benchmarks suffer from over-squashing by showing that they can be dramatically improved
using a simple FA layer.
We conclude that over-squashing in GNNs is so prevalent and untreated in some benchmarks that even the simplest solution helps.
Our observations open the path for a variety of follow-up improvements and even better solutions for over-squashing.
\section*{Acknowledgments}
We would like to thank Federico Errica and Marc Brockschmidt for their help in using their frameworks. %
We are also grateful to (alphabetically): Chen Zarfati, Elad Nachmias, Gail Weiss, Horace He, Jorge Perez, Lotem Fridman, Moritz Plenz, Pavol Bielik, Petar Veličković, Roy Sadaka, Shaked Brody, Yoav Goldberg, and the anonymous reviewers for their useful comments and suggestions.
\section{QM9 -- Additional Results}
\label{sec:qm-additional}
\subsection{Additional GNN Types}
Because of space limitations, in \Cref{subsec:chemistry} we presented results on the QM9 dataset only for R-GIN, R-GAT and GGNN.
In this section, we show that additional GNN architectures benefit from breaking the bottleneck using a fully-adjacent layer: GNN-MLP, R-GCN \cite{schlichtkrull2018modeling} and GNN-FiLM \cite{brockschmidt2019graph}.
All experiments were performed using the extensively-tuned implementation of \citet{brockschmidt2019graph} who experimented with over 500 hyperparameter configurations.
\Cref{tab:qm-results2} contains additional results for GGNN, R-GCN and R-GIN. As shown in \Cref{tab:qm-results2}, adding an FA layer significantly improves results across all GNN architectures, for all properties. %
\subsection{Alternative Solutions}
\label{subsec:qm-ablation}
\Cref{tab:qm-results-ablations} shows additional experiments, all performed using GCN. \emph{base$^{\dagger}$} is the original model of \citet{brockschmidt2019graph} as in \Cref{tab:qm-results2}. \emph{$+$FA} is the model that we re-trained with the last layer modified to an FA layer.
$2$$\times$$d$ is a model that was trained with a doubled hidden dimension size, $d=256$ instead of $d=128$ as in the base model. As shown, doubling the hidden dimension size leads to a small improvement of only a 5.5\% reduction in error. In comparison, the $+$FA model used the original dimension sizes and achieved a much larger improvement of 43.40\%.
\emph{All FA} is a model that was trained with \emph{all} GNN layers converted into FA layers, practically ignoring the graph topology. This led to much worse results of more than 1500\% higher error. This shows that the graph topology is important in this benchmark, and that a direct interaction between nodes (as in a single FA layer) must be performed in addition to considering the topology.
\emph{$2\times$FA} is a model where the last layer was modified into an FA layer, and an additional FA layer was stacked on top of it. This led to results that are very similar to $+$FA.
\emph{Penultimate FA} is a model where the FA layer is the penultimate layer (the $K-1$-th), followed by a standard GNN layer as the $K$-th layer. This led to results that are even slightly better than $+$FA.
\begin{table*}[t]
\centering
\footnotesize
\begin{tabu}{lrrrrrrrrr}
\toprule
Property & \multicolumn{1}{c}{base$^{\dagger}$}& \multicolumn{1}{c}{$+$FA} & \multicolumn{1}{c}{$2$$\times$$d$} & \multicolumn{1}{c}{All FA} & \multicolumn{1}{c}{$2\times$FA} & \multicolumn{1}{c}{Penultimate FA} \\
\midrule
\footnotesize{mu} & 3.21$\pm$0.06 & 2.92$\pm$0.07 & 2.99$\pm$0.08 & 11.52 & 2.89$\pm$0.08 & \textbf{2.80}$\pm$0.08\\
\footnotesize{alpha} & 4.22$\pm$0.45 & \textbf{2.14}$\pm$0.08 & 3.57$\pm$0.40 & 9.19 & 2.23$\pm$0.04 & \textbf{2.14}$\pm$0.10\\
\footnotesize{HOMO} & 1.45$\pm$0.01 & 1.37$\pm$0.02 & 1.36$\pm$1.87 & 9.95 & 1.39$\pm$0.02 & \textbf{1.34}$\pm$0.03 \\
\footnotesize{LUMO} & 1.62$\pm$0.04 & 1.41$\pm$0.01 & 1.43$\pm$0.04 & 19.13 & 1.42$\pm$0.04 & \textbf{1.37}$\pm$0.02 \\
\footnotesize{gap} & 2.42$\pm$0.14 & 2.03$\pm$0.03 & 2.33$\pm$0.23 & 24.62 & 2.06$\pm$0.05 & \textbf{2.00}$\pm$0.03 \\
\footnotesize{R2} & 16.38$\pm$0.49 & 13.55$\pm$0.50 & 18.4$\pm$0.76 & 168.09 & 13.97$\pm$0.56 & \textbf{12.92}$\pm$0.11 \\
\footnotesize{ZPVE} & 17.40$\pm$3.56 & 5.81$\pm$0.61 & 15.8$\pm$2.59 & 591.33 & 5.79$\pm$0.50 & \textbf{4.53}$\pm$0.62 \\
\footnotesize{U0} & 7.82$\pm$0.80 & \textbf{1.75}$\pm$0.18 & 7.60$\pm$2.07 & 188.59 & 1.90$\pm$0.1 & 1.98$\pm$0.25 \\
\footnotesize{U} & 8.24$\pm$1.25 & 1.88$\pm$0.22 & 7.65$\pm$1.51 & 189.72 & \textbf{1.71}$\pm$0.16 & 2.05$\pm$0.23 \\
\footnotesize{H} & 9.05$\pm$1.21 & 1.85$\pm$0.18 & 8.67$\pm$1.10 & 191.11 & 1.83$\pm$0.11 & \textbf{1.73}$\pm$0.14 \\
\footnotesize{G} & 7.00$\pm$1.51 & \textbf{1.76}$\pm$0.15 & 2.90$\pm$1.15 & 173.68 & 1.93$\pm$0.11 & 1.96$\pm$0.42 \\
\footnotesize{Cv} & 3.93$\pm$0.48 & 1.90$\pm$0.07 & 3.99$\pm$0.07 & 64.18 & 1.90$\pm$0.14 & \textbf{1.83}$\pm$0.11 \\
\footnotesize{Omega} & 1.02$\pm$0.05 & 0.75$\pm$0.04 & 1.03$\pm$0.54 & 23.89 & 0.69$\pm$0.06 & \textbf{0.67}$\pm$0.01 \\
\midrule
relative & \multicolumn{1}{c}{0.0\%} & \multicolumn{1}{c}{-43.40\%} & \multicolumn{1}{c}{-5.50\%} & \multicolumn{1}{c}{+1520\%} & \multicolumn{1}{c}{-43.30\%} & \textbf{-45.2}\%\\
\bottomrule
\end{tabu}
\caption{Average error rates and standard deviations on the QM9 targets with GCN using alternative solutions.}
\label{tab:qm-results-ablations}
\end{table*}
\begin{table*}[h!]
\centering
\footnotesize
\begin{tabu}{llrrrrr}
\toprule
& base$^{\dagger}$ & 0.25$\times$ FA & 0.5$\times$ FA & 0.75$\times$ FA & $+$FA (as in \Cref{tab:qm-results2}) \\
\midrule
Avg. error compared to base$^{\dagger}$ & -0\% & -8.4\% & -31.5\% & -37.1\% & -43.4\% \\
\bottomrule
\end{tabu}
\caption{Average error rates and standard deviations on the QM9 targets with GCN, where we use only a fraction of the edges in the FA layer.}
\label{tab:qm-results-fraction}
\end{table*}
\subsection{Partial-FA Layers}
\label{subsec:partial-fa}
We also examined whether instead of adding a ``full fully-adjacent layer'', we can randomly sample only a fraction of these edges.
We randomly sampled only $\{0.25, 0.5, 0.75\}$ of the edges in the full FA layer in every example, and trained the model for each target property 5 times.
\Cref{tab:qm-results-fraction} shows the results of these experiments using GCN. \emph{base$^{\dagger}$} is the original model of \citet{brockschmidt2019graph} as in \Cref{tab:qm-results2}. \emph{$+$FA} is the model that we re-trained with the last layer modified to an FA layer. \emph{$\{0.25, 0.5, 0.75\}\times$ FA} are the models where only a fraction of the edges in the FA layer was used.
As shown in \Cref{tab:qm-results-fraction}, the full FA layer achieves the largest reduction in error (-43.4\%), but even adding a fraction of the edges improves the results over the base model. For example, using only \emph{half} of the edges (\emph{0.5$\times$ FA}) reduces the error by 31.5\%. Overall, the percentage of used edges in the partial-FA layer is correlated with its reduction in error.
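Sampling such a partial-FA layer reduces to drawing a Bernoulli mask over the complete edge set; a minimal sketch (ours, in the same style as the earlier edge-index sketch):
\begin{verbatim}
import torch

def sampled_fa_edge_index(num_nodes, keep_prob):
    idx = torch.arange(num_nodes)
    full = torch.cartesian_prod(idx, idx).t()    # all n * n pairs
    mask = torch.rand(full.size(1)) < keep_prob  # keep each edge with
    return full[:, mask]                         # probability keep_prob
\end{verbatim}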
\section{Discussion: Over-Smoothing vs. Over-Squashing}
\label{append:oversmoothing}
Although \emph{over-smoothing} and \emph{over-squashing} are related, they are disparate phenomena that occur in different types of problems.
For example, imagine a triangular graph containing only three nodes, where every node has a scalar value, an edge to each of the other nodes, and needs to compute a function of its own value and the other nodes' values.
The problem radius $r$ in this case is $r$$=$$1$.
As we increase the number of layers, the representations of the nodes might become indistinguishable, and thus suffer from \emph{over-smoothing}. However, there will be \emph{no over-squashing} in this case, because there is no growing amount of information that is squashed into fixed-sized vectors while passing long-range messages.
Contrarily, in the {\scshape Tree-}{\scshape NeighborsMatch}{} problem, there is no reason for over-smoothing to occur, because no two nodes can converge to the same representation. A node in a ``higher'' level of the tree contains twice as much information as a node in a ``lower'' level. Thus, this is a case where \emph{over-squashing can occur without over-smoothing}.
\section{Biological Benchmarks -- Training Details}
\label{sec:bio-additional}
We used the implementation of \citet{errica2020fair} who performed a fair and thorough comparison between GNNs, by splitting each dataset to 10-folds; then, for each GNN type they select a configuration among a grid of 72 configurations according to the validation set; finally, the best configuration for each fold is trained three additional times, early stopped using the validation set, and evaluated on the test set.
The final reported result is the average of all 30 test runs (10-folds$\times$3). The final standard deviation is computed among the average results of each of the ten folds.
\subsection{Biological Benchmarks}\label{subsec:bio}
\para{Data}
The NCI1 dataset \cite{wale2008comparison} contains 4110 graphs with \textasciitilde30 nodes on average, and its task is to predict whether a biochemical compound contains anti-lung-cancer activity.
ENZYMES \cite{borgwardt2005protein} contains 600 graphs with \textasciitilde36 nodes on average, and its task is to classify an enzyme to one out of six classes. We used the same 10-folds and split as \citet{errica2020fair}.
\para{Models} We used the implementation of \citet{errica2020fair} who performed a fair and thorough comparison between GNNs. The final reported result is the average of 30 test runs (10 folds$\times$3 random seeds). Additional training details are provided in \Cref{sec:bio-additional}.
In ENZYMES, Errica et al. found that a baseline that does not use the graph topology \emph{at all} (``\emph{No Struct}'') performs better than all GNNs. In NCI1, GIN performed best.
We converted the last layer into an FA layer by modifying the implementation of Errica et al., and repeated the same training procedure.
We compare the ``base'' models from Errica et al. with our re-trained ``+FA'' models.
\input{bio-results.tex}
\para{Results}
Results are shown in \Cref{tab:bio-results}. The main results are as follows: (a) in NCI1, GIN+FA improves by 1.5\% over GIN-base, which was previously the best performing model; (b) in ENZYMES, where \citet{errica2020fair} found that none of the GNNs exploit the topology of the graph, we find that GIN+FA \emph{does} exploit the structure and improves by 8.1\% over GIN-base and by 2.5\% over \emph{No Struct}.
On average, models with FA layers relatively reduce the error rate by 12\% in ENZYMES and by 4.8\% in NCI1.
These experiments clearly show evidence for a bottleneck in the original GNN models.
\section*{Broader Impact}
This work discusses a general limitation of graph neural networks. This work thus has no direct ethical or societal consequences.
\section{Combinatorial Analysis}
\label{sec:stats-additional}
We analyze the combinatorial upper bound for the maximal depth that a GNN can perfectly fit (learn to 100\% training accuracy) given its hidden vector size $d$.
We denote the arity of such a tree by $m$ ($=$$2$ in our experiments);
the counting base as $b$$=$$2$; %
the number of bits in a floating-point variable as $f$$=$$32$; %
and the hidden dimension of the GNN, i.e., the size of a node vector
$\mathbf{h}_v^{\left(k\right)}$, as $d$.
A full tree of arity $m$ has $m^{depth}$ leaves.
As described in \Cref{subsec:synthetic}, given an arrangement of blue neighbors, all possible permutations of the labels \{A, B, C, ...\} are valid. Thus, the number of leaf label assignments is $\left(m^{depth}\right)!$.
Right before interacting with the target node and predicting the label, a single vector of size $d$ must encapsulate the information flowing from all leaves (\Cref{eq:gcn,eq:gin}).\footnote{The analysis holds for GCN and GIN. Architectures that use the representation of the recipient node to aggregate messages, like GAT, need to compress the information from only \emph{half} of the leaves in a single vector. This increases the final upper bounds of $depth$ by up to 1, as demonstrated empirically in \Cref{subsec:synthetic}.}
Such a vector contains $d$ floating-point elements, each of them is stored as $f$ bits. Overall, the number of possible cases that this vector can distinguish between is $b^{f\cdot d}$. The number of possible cases that the vector can distinguish between must be greater than the number of different examples this vector may encounter in the training data.
This requirement is expressed in \Cref{eq:dim-depth2};
considering binary trees ($m$$=$$2$), and floating-point values of $f$$=$$32$ binary ($b$$=$$2$) bits, we get \Cref{eq:dim-depth-concrete2}:
\newline
\noindent\begin{minipage}[t]{.5\linewidth}
\begin{equation}
\left(m^{depth}\right)! < b^{f\cdot d}
\label{eq:dim-depth2}
\end{equation}
\end{minipage}
\noindent\begin{minipage}[t]{.5\linewidth}
\begin{equation}
\left(2^{depth}\right)! < 2^{32\cdot d}
\label{eq:dim-depth-concrete2}
\end{equation}
\end{minipage}
Now, we can fix either $d$ or $depth$ and solve for the other. For example, in \Cref{fig:dim-depth} we fixed different values of $d$ and found the maximal $depth\in \mathbb{N}$ that satisfies \Cref{eq:dim-depth-concrete2}, to get the combinatorial max $depth$ for each value of $d$.
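This search is exact with arbitrary-precision integers; a sketch (ours) of the loop just described:
\begin{verbatim}
from math import factorial

def max_depth(d, f=32):
    # largest depth with (2^depth)! < 2^(f*d), cf. the inequality above
    depth = 1
    while factorial(2 ** (depth + 1)) < 2 ** (f * d):
        depth += 1
    return depth

print({d: max_depth(d) for d in (32, 64, 128, 256, 512)})
\end{verbatim}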
\section{Data Statistics}
\label{sec:stats}
\subsection{Synthetic Dataset: {\scshape Tree-}{\scshape NeighborsMatch}}
Statistics of the synthetic {\scshape Tree-}{\scshape NeighborsMatch}{} dataset are shown in \Cref{tab:stat-trees}.
\begin{table*}[h!]
\centering
\caption{The number of examples, in our experiments and combinatorially, for every value of $depth$. }
\begin{tabular}{lrr}
\toprule
$depth$ & \# Training examples sampled & \makecell{Total combinatorial: \\ $\left(2^{depth}!\right)\cdot 2^{depth}$} \\
\midrule
2 & 96 & 96 \\
3 & 8000 & $>3\cdot 10^5$ \\
4 & 16,000 & $>3\cdot 10^{14}$ \\
5 & 32,000 & $>10^{36}$\\
6 & 32,000 & $>10^{90}$\\
7 & 32,000 & $>10^{217}$\\
8 & 32,000 & $>10^{509}$\\
\bottomrule
\end{tabular}
\label{tab:stat-trees}
\end{table*}
\subsection{Quantum Chemistry: QM9}
Statistics of the quantum chemistry QM9 dataset, as used in \citet{brockschmidt2019graph} are shown in \Cref{tab:stat-qm}.
\begin{table*}[h!]
\centering
\caption{Statistics of the QM9 chemical dataset \cite{ramakrishnan2014quantum} as used by \citet{brockschmidt2019graph}.}
\begin{tabular}{lrrrr}
\toprule
& Training & Validation & Test \\
\midrule
\# examples & 110,462 & 10,000 & 10,000 \\
\# nodes - average & 18.03 & 18.06 & 18.09\\
\# nodes - standard deviation & 2.9 & 2.9 & 2.9\\
\# edges - average & 18.65 & 18.67 & 18.72 \\
\# edges - standard deviation & 3.1 & 3.1 & 3.1 \\
\bottomrule
\end{tabular}
\label{tab:stat-qm}
\end{table*}
\subsection{Biological Benchmarks}
Statistics of the biological datasets, as used in \citet{errica2020fair}, are shown in \Cref{tab:stat-bio}.
\begin{table*}[h!]
\centering
\caption{Statistics of the biological datasets, as used by \citet{errica2020fair}.}
\begin{tabular}{lrr}
\toprule
& NCI1 \cite{wale2008comparison} & ENZYMES \cite{borgwardt2005protein} \\
\midrule
\# examples & 4110 & 600 \\
\# classes & 2 & 6 \\
\# nodes - average & 29.87 & 32.63 \\
\# nodes - standard deviation & 13.6 & 15.3 \\
\# edges - average & 32.30 & 64.14\\
\# edges - standard deviation & 14.9 & 25.5 \\
\# node labels & 37 & 3 \\
\bottomrule
\end{tabular}
\label{tab:stat-bio}
\end{table*}
\subsection{{\sc{VarMisuse}}}
Statistics of the {\sc{VarMisuse}} dataset, as used in \citet{allamanis2018learning} and \citet{brockschmidt2019graph}, are shown in \Cref{tab:stat-varmisuse}.
\begin{table*}[h!]
\centering
\caption{Statistics of the {\sc{VarMisuse}} dataset \cite{allamanis2018learning} as used by \citet{brockschmidt2019graph}.}
\begin{tabular}{lrrrr}
\toprule
& Training & Validation & UnseenProject Test & SeenProject Test \\
\midrule
\# graphs & 254360 & 42654 & 117036 & 59974 \\
\# nodes - average & 2377 & 1742 & 1959 & 3986 \\
\# edges - average & 7298 & 7851 & 5882 & 12925 \\
\bottomrule
\end{tabular}
\label{tab:stat-varmisuse}
\end{table*}
\section{{\scshape Tree-}{\scshape NeighborsMatch}{} -- Training Details}
\label{subsec:trees-config}
\paragraph{Data}
We created a separate dataset for every tree depth (which is equal to $r$, the problem radius) and sampled up to 32,000 examples per dataset.
The label of each leaf (``A'', ``B'', ``C'' in \Cref{fig:cloud}) is represented as a one-hot vector. To tease apart the effect of the bottleneck from the ability of a GNN to count neighbors,
we concatenated each leaf node's initial representation
with a 1-hot vector representing the number of blue neighbors, instead of creating the blue nodes. The target node is initialized with a learned vector as its (missing) label, concatenated with a 1-hot vector representing its number of blue neighbors. Intermediate nodes are initialized with another learned vector.
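A minimal sketch (ours; the exact encoding in the released code may differ) of the described leaf initialization:
\begin{verbatim}
import torch
import torch.nn.functional as F

def leaf_features(label_idx, n_blue, num_classes, max_blue):
    # one-hot label concatenated with one-hot blue-neighbor count
    label = F.one_hot(torch.tensor(label_idx), num_classes)
    count = F.one_hot(torch.tensor(n_blue), max_blue + 1)
    return torch.cat([label, count]).float()
\end{verbatim}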
\paragraph{Model}
The network has an initial linear layer, followed by
$r+1$ GNN layers. Afterward, the final target node representation goes through a linear layer and a softmax to predict its label.
We experimented with GCN \cite{kipf2016semi}, GGNN \cite{li2015gated}, GIN \cite{xu2018powerful} and GAT \cite{velic2018graph} as the graph layers.
In \Cref{subsec:synthetic}, we used model dimensions of $d$$=$$32$. Larger values led to the exact same trend. %
We added residual connections, summing every node with its own representation in the previous layer to increase expressivity, and layer normalization which eased convergence.
We used the Adam optimizer with a learning rate of $10^{-3}$, decayed by $0.5$ after every 1000 epochs without an increase in training accuracy, and stopped training after 2000 epochs of no training accuracy improvement. This usually led to tens of thousands of training epochs, sometimes reaching 100,000 epochs.
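The described schedule maps directly onto standard PyTorch utilities; a hedged sketch (ours; \texttt{model} here is just a stand-in module):
\begin{verbatim}
import torch

model = torch.nn.Linear(32, 32)  # stand-in for the actual GNN
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
    opt, mode='max', factor=0.5, patience=1000)
# each epoch: backprop, opt.step(), then sched.step(train_acc);
# stop after 2000 epochs without any train-accuracy improvement
\end{verbatim}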
To rule out hyperparameter tuning as the source of degraded performance,
we experimented with changing activations (ReLu, tanh, MLP, none), using layer normalization and batch normalization, residual connections, various batch sizes, and whether or not the same GNN weights should be ``unrolled'' over time steps. The presented results were obtained using the configurations that achieved the best results.
\para{Over-squashing or just long-range?}
To rule out the possibility that the long-range itself is preventing the GNNs from fitting the data, we repeated the experiment
of \Cref{fig:tree-by-depth}
for depths 4 to 8, where the distance between the leaves and the target node remained the same, but the amount of over-squashing was as in $r$$=$$2$.
That is, the graph looks like a tree of $depth$$=$$2$, where the root is connected to a ``chain'' of length up to 6, and the target node is at the other side of the chain.
This setting maintains the long range as in the original problem, but reduces the amount of information that needs to be squashed. In other words,
this setting \emph{disentangles} the effect of the long range itself from the effect of the growing amount of information (i.e., from over-squashing).
In this setting, \emph{all GNN types managed to easily fit the data to close to 100\%} across all distances, showing that the problem is the amount of over-squashing, rather than the long-range itself.
\subsection{Programs: {\sc{VarMisuse}}}\label{subsec:programs}
\para{Data} {\sc{VarMisuse}} \cite{allamanis2018learning} is a node-prediction problem that depends on long-range information in computer programs.
We used the same splits as \citet{allamanis2018learning}. %
\para{Models}
We use the implementation of \citet{brockschmidt2019graph} who performed an extensive hyperparameter tuning by searching over 30 configurations for each GNN type. The best results were found using 6-10 layers, which hints that this problem requires long-range information. We modified the last layer to be an FA layer, and used the resulting representations for node classification.
We used the same best-found configurations as \citet{brockschmidt2019graph} and re-trained each model five times. %
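The FA modification itself only requires replacing the edge set of the last layer with a dense one; a possible sketch (ours, not the code of \citet{brockschmidt2019graph}):
\begin{verbatim}
import torch

def fully_adjacent_edges(num_nodes):
    # In a fully-adjacent (FA) layer every pair of nodes exchanges
    # messages directly, bypassing the graph bottleneck in the last step.
    idx = torch.arange(num_nodes)
    src = idx.repeat_interleave(num_nodes)
    dst = idx.repeat(num_nodes)
    keep = src != dst  # drop self-loops (our choice, not necessarily theirs)
    return torch.stack([src[keep], dst[keep]])

# Last message-passing step: h = conv_last(h, fully_adjacent_edges(len(h)))
\end{verbatim}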
\para{Results}
Results are shown in \Cref{tab:varmisuse-results}. The main result is that adding an FA layer to all GNNs improves their SeenProjTest accuracy, obtaining a new state-of-the-art of 88.4\%. %
In the \emph{Unseen}ProjTest set,
adding an FA layer improves the results of most of the GNNs, obtaining a new state-of-the-art of 83.8\%.
These improvements are significant, especially since they were achieved on extensively tuned models, without any further tuning by us.
|
1,116,691,500,663 | arxiv | \section{Introduction}
Over the last few decades, enormous efforts have been dedicated to answer a
fundamental question: to determine whether a mechanical system is integrable
and how one can find integrals of the motion, if they exist. In fact, there
is no systematic method for doing that, even for integrals of the simplest
functional form, polynomial in the velocities and for the simplest
configuration space, the 2D Euclidean plane. The only possible way is to
compare one's system with available tables of known integrable cases in
different areas of interest. A fairly complete review of methods and the
small list of known integrable potentials in the Euclidean plane with an
integral polynomial in the velocities up to 1986 can be found in
Hietarinta's article \cite{Hiet:1987}. The list of cases added after that
date is even smaller. Just few more cases of systems in the plane were
obtained in a few works (see e.g. \cite{Sen:1987a}, \cite{Sen:1987b} and \cite{Karl:2000,Karl:2002}).
The matter becomes much harder for integrable systems whose configuration
space is more general, e.g. Riemannian 2D manifolds. For a long time the
list of those cases consisted of the separable (Liouville) systems and the
few known cases of rigid body dynamics.
The method introduced by Yehia in \cite{Yehia:1986} has been most successful
in constructing new families of integrable two-dimensional mechanical
systems with second integrals polynomial in velocities with degree ranging
up to six: quadratic \cite{Yehia:1992,Yehia:2007b}, cubic \cite{Yehia:1986},
\cite{Yehia:2002}, quartic \cite{Yehia:2006a,Yehia:2006b}. Most known cases
with a quartic integral were recovered as special cases corresponding to
certain choices of the parameters from the so-called \emph{master} system
involving 21 arbitrary parameters. Another system with 16 free parameters
was obtained in \cite{Yehia:2012}. The results of \cite{Yehia:2006b} and
\cite{Yehia:2012} have not only restored the famous, Kowalevski's integrable
case of rigid body dynamics \cite{Kowal:1889} and the case due to Chaplygin
of motion of a body in a liquid \cite{Chapl:1903}, but also introduced
several new integrable cases that generalized those two cases by adding
certain terms to the potential in each case \cite{Yehia:2006a}--\cite{Yehia:2012} and \cite{YM13}.
Yehia's method consists of two steps. The first is constructing the basic
system integrable on its zero-energy level and the second is the
interpretation of the energy constant and the standard time variable. This
usually gives the freedom to introduce several additional parameters to the
structure of the system. More details on this can be found in \cite{Yehia:2006a,Yehia:2012,Yehia:2013}.
The present paper is devoted to construction of integrable systems which
admit an integral quartic in velocities. It is a continuation of \cite{Yehia:2006a} and \cite{Yehia:2012}. Systematic application of an extension
of the method of the last papers resulted in the construction of 14 systems
with a quartic invariant, of which 12 systems are new. The new systems
involve several parameters, ranging in number up to 11. Those
systems are here classified.
\subsection{Formulation of the problem}
Consider the natural conservative mechanical system described by the
Lagrangian
\begin{eqnarray}
L=\frac{1}{2}\sum_{i,j=1}^2a_{ij}\dot{q}_i\dot{q}_j-V, \label{L-0}
\end{eqnarray}
where $a_{ij},\;V$ are certain functions of the generalized coordinates $q_1,\;q_2$ only. Clearly, the system (\ref{L-0}) admits the energy integral
\begin{eqnarray}
H=\frac{1}{2}\sum_{i,j=1}^2a_{ij}\dot{q}_i\dot{q}_j+V=h, \label{H-0}
\end{eqnarray}
where $h$ denotes the arbitrary energy parameter. The most general form of a
quartic integral of (\ref{L-0}) is
\begin{eqnarray}
I=\sum_{i=0}^{4}C_{4,i}\dot{q}_1^i\dot{q}_2^{4-i} +\sum_{i=0}^{2}C_{2,i}\dot{q}_1^i\dot{q}_2^{2-i} +C_0 \label{I-0}
\end{eqnarray}
where $C_{ij},\;C_0$ are functions of $q_1,\;q_2$.
The problem is to determine the 13 unknown functions $\{\mbox{\sl g}_{ij}\},~V,~\{C_{4,i}\},~\{C_{2,i}\},~C_0$ such that $dI/dt=0$ by virtue of
the equations of motion derived from the Lagrangian (\ref{L-0}).
As was shown in \cite{Yehia:1986} and recently in \cite{Yehia:2012},
whenever a natural 2D mechanical system admits an integral of motion quartic
in velocities, this system can always be reduced in certain isometric
coordinates $\xi,\eta$ and a time parametrization $\tau$ to a fictitious
system described by the Lagrangian
\begin{eqnarray}
L=\frac{1}{2}\left(\xi^{\prime 2}+\eta^{\prime 2}\right)+U,\quad
U=\Lambda(h-V), \label{L-1}
\end{eqnarray}
restricted to its zero-energy level
\begin{eqnarray}
\xi^{\prime 2}+\eta^{\prime 2}+2U=0, \label{H-1}
\end{eqnarray}
where the prime denotes differentiation with respect to the parameter $\tau$, $h$ and $V$ are the energy constant and the potential function of the
original system, and $\Lambda(\xi,\eta)$ is a conformal factor which depends
on the metric of the configuration space. The quartic integral is
simultaneously written in the following simple form involving only three
unknown functions instead of nine in (\ref{I-0}):
\begin{eqnarray}
I=\xi^{\prime 4}+P\xi^{\prime 2}+Q\xi^\prime\eta^\prime+R=\mbox{const.}
\label{I}
\end{eqnarray}
All the functions involved are expressed in terms of an auxiliary function $F(\xi,\eta)$, which is a solution of the nonlinear equation
\begin{eqnarray}
\frac{\partial^2F}{\partial\xi\partial\eta} \left(\frac{\partial^4F}{\partial\xi^4}-\frac{\partial^4F}{\partial\eta^4}\right) +3\left(\frac{\partial^3F}{\partial\xi^3}\frac{\partial^3F}{\partial\xi^2\partial\eta} -\frac{\partial^3F}{\partial\eta^3}\frac{\partial^3F}{\partial\eta^2\partial\xi}\right) \notag \\
+2\left(\frac{\partial^2F}{\partial\xi^2}\frac{\partial^4F}{\partial\xi^3\partial\eta} -\frac{\partial^2F}{\partial\eta^2}\frac{\partial^4F}{\partial\eta^3\partial\xi}\right)=0, \label{PDE}
\end{eqnarray}
which is called the \textit{resolving equation}. In terms of $F$, three of the
unknown functions of the problem, namely $P,\;Q$ and $U$, are expressed as
\begin{eqnarray}
P=\frac{\partial^2 F}{\partial\xi^2},\quad Q=-\frac{\partial^2F}{\partial\xi\partial\eta},\quad U=-\frac{1}{4}\left(\frac{\partial^2F}{\partial\xi^2}+\frac{\partial^2F}{\partial\eta^2}\right), \label{F2U}
\end{eqnarray}
while the function $R$ is given, up to an additive constant, by the
quadrature
\begin{eqnarray}
R(\xi,\eta) =-\int Q\frac{\partial U}{\partial \xi} d\eta -\int\left[2P \frac{\partial U}{\partial \xi} +Q\frac{\partial U}{\partial \eta}+2U \frac{\partial Q}{\partial \eta}\right]_0 d\xi, \label{R}
\end{eqnarray}
where $[]_0$ means that the expression in the bracket is computed for $\eta$
taking an arbitrary constant value $\eta_0$ (say).
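The passage from $F$ to $P,\;Q,\;U$ is mechanical and can be checked symbolically. The following sketch (using the Python package \textsc{sympy}; the test function is a trivial separable solution chosen purely for illustration) verifies equation (\ref{PDE}) and evaluates (\ref{F2U}):
\begin{verbatim}
import sympy as sp

xi, eta = sp.symbols('xi eta')
F = xi**4 + eta**4   # any separable F solves (PDE); chosen for illustration

lhs = (sp.diff(F, xi, eta) * (sp.diff(F, xi, 4) - sp.diff(F, eta, 4))
       + 3 * (sp.diff(F, xi, 3) * sp.diff(F, xi, xi, eta)
              - sp.diff(F, eta, 3) * sp.diff(F, eta, eta, xi))
       + 2 * (sp.diff(F, xi, 2) * sp.diff(F, xi, xi, xi, eta)
              - sp.diff(F, eta, 2) * sp.diff(F, eta, eta, eta, xi)))
assert sp.simplify(lhs) == 0   # F solves the resolving equation

P = sp.diff(F, xi, 2)
Q = -sp.diff(F, xi, eta)
U = -sp.Rational(1, 4) * (sp.diff(F, xi, 2) + sp.diff(F, eta, 2))
\end{verbatim}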
The set of solutions of (\ref{PDE}) generates all systems of the type (\ref{L-1}) having an integral of the form (\ref{I}) on the zero level of their
energy integral. Applying all possible conformal mappings of the complex
$\zeta=\xi+i\eta$ plane followed by a general point transformation to the
generalized coordinates $q_1, q_2$ with a suitable change of the time
variable we obtain all systems of the general form on two-dimensional
Riemannian (or pseudo-Riemannian) manifolds, having a quartic integral on
the zero level of their energy integral, i.e. \textit{conditional systems}.
The original system can now be expressed in terms of the coordinates $\xi,\eta$ and the natural time $t$ by the Lagrangian
\begin{eqnarray}
L^*=\frac{1}{2}\Lambda\left(\dot{\xi}^2+\dot{\eta}^2\right)-V.
\end{eqnarray}
The quartic integral now takes the form
\begin{eqnarray}
I^*=\Lambda^4\dot{\xi}^4+\Lambda^2\left(P\dot{\xi}^2+Q\dot{\xi}\dot{\eta}\right)+R=\mbox{const.} \label{I*}
\end{eqnarray}
\subsection{The choice of $\Lambda$}
\label{subsec:uncond-sys} To construct systems that are integrable on all energy
levels, the functions $U$ obtained from (\ref{F2U}) must have a structure in
which the energy constant $h$ of the original system enters linearly as a
parameter. Any parameter that appears only as linear multiplier in a certain
term of the potential can be identified as the energy parameter $h$ and its
cofactor as the function $\Lambda$, and we can proceed through an inverse
time transformation to construct a set of general integrable systems valid
on arbitrary energy level, i.e. unconditional systems. The general situation
however assumes $U$ to have a set of linear multipliers $h_i$. Then, the
Lagrangian can be written as (see e.g. \cite{Yehia:2012, Yehia:2013})
\begin{eqnarray}
L=\frac{1}{2}\left(\xi^{\prime 2}+\eta^{\prime
2}\right)-\sum_{i=1}^nh_iU_i(\xi,\eta), \label{L_Ui}
\end{eqnarray}
which admits the quartic integral (\ref{I}) on the zero level of the energy
integral
\begin{eqnarray}
H=\frac{1}{2}\left(\xi^{\prime 2}+\eta^{\prime
2}\right)+\sum_{i=1}^nh_iU_i(\xi,\eta)=0. \label{H_Ui-0}
\end{eqnarray}
Introducing new arbitrary parameters $\alpha_i,\beta_i$ into (\ref{L_Ui}) by
the substitution $h_i=\alpha_i-\beta_ih$, we get
\begin{eqnarray}
L=\frac{1}{2}\left(\xi^{\prime 2}+\eta^{\prime 2}\right)
-\left(\sum_{i=1}^n\beta_iU_i\right) \left(\frac{\sum_{i=1}^n\alpha_iU_i}{\sum_{i=1}^n\beta_iU_i}-h\right). \label{L_Ui2}
\end{eqnarray}
Making the change of the independent variable $\tau$ to the original time $t$
according to the relation
\begin{eqnarray}
t=\int \sum_{i=1}^n\beta_i U_i\;d\tau,
\end{eqnarray}
we obtain the Lagrangian $L_1=L^*+h$, where
\begin{eqnarray}
L^*=\frac{1}{2}\left(\sum_{i=1}^n\beta_iU_i\right)\left(\dot{\xi}^2+\dot{\eta}^2\right) -\frac{\sum_{i=1}^n\alpha_iU_i}{\sum_{i=1}^n\beta_iU_i},
\label{L*}
\end{eqnarray}
while the energy integral (\ref{H_Ui-0}) is transformed to
\begin{eqnarray}
\frac{1}{2}\left(\sum_{i=1}^n\beta_iU_i\right)\left(\dot{\xi}^2+\dot{\eta}^2\right) +\frac{\sum_{i=1}^n\alpha_iU_i}{\sum_{i=1}^n\beta_iU_i}=h.
\label{H_Ui}
\end{eqnarray}
The second integral of $L_1$ is obtained from (\ref{I}).
Now, discarding the free additive parameter $h$ from $L_1$ reduces it to
$L^*$. Since the zero level of the energy integral of $L_1$ is the $h$-level for
$L^* $, as determined by (\ref{H_Ui}), the Lagrangian $L^*$ admits the
second integral (\ref{I*}) on its $h$-level of energy. Finally, one can use
the energy integral (\ref{H_Ui}) to eliminate $h$ from (\ref{I*}) and then
get a form of the second integral free of the energy parameter.
\section{New solutions of the resolving equation}
The discussion in \cite{Yehia:2006b} (see also \cite{Yehia:2012}) showed
that, in certain circumstances, the original isometric variables $\xi,\;\eta$
are not practically suitable for solving the equation, and that the
symmetric separation solution discussed in \cite{Yehia:2006b} can be more
conveniently expressed in the coordinates $p$ and $q$ defined by
\begin{eqnarray}
\xi=\int^{p}\frac{dz}{\sqrt[4]{a_4z^4+a_3z^3+a_2z^2+a_1z+a_0}}, \notag \\
\eta=\int^{q}\frac{dz}{\sqrt[4]{a_4z^4+b_3z^3+b_2z^2+b_1z+b_0}}, \label{x&h}
\end{eqnarray}
where $a_4,a_3,a_2,a_1,a_0,b_3,b_2,b_1,b_0$ are arbitrary constants. The
integrable system constructed in \cite{Yehia:2006b}, the master system,
represents the solution of (\ref{PDE}) when
\begin{eqnarray}
F(\xi,\eta)=\int\!\!\!\!\int\!f(\xi)~d\xi d\xi +\int\!\!\!\!\int\! g(\eta)~d\eta d\eta+\nu pq, \label{F4Master}
\end{eqnarray}
where
\begin{eqnarray}
f(\xi)=\frac{\frac{1}{4}b_3p^3+4Ap^2+4C_1p+4C_0}{\sqrt{a_4p^4+a_3p^3+a_2p^2+a_1p+a_0}}, \notag \\
g(\eta)=\frac{\frac{1}{4}a_3q^3+4Aq^2+4D_1q+4D_0}{\sqrt{a_4q^4+b_3q^3+b_2q^2+b_1q+b_0}}, \label{f&g}
\end{eqnarray}
where $\nu,A,C_1,C_0,D_1,D_0$ are arbitrary parameters. Another solution of
the resolving equation is obtained in \cite{Yehia:2012} assuming $F$ in the
form
\begin{eqnarray}
F(\xi,\eta)=\int\!\!\!\!\int\!f(\xi)~d\xi d\xi+\int\!\!\!\!\int\! g(\eta)~d\eta d\eta +\nu pq+\nu_1p^2q^2, \label{F4NewMaster}
\end{eqnarray}
where $\nu$ and $\nu_1$ are arbitrary constants. The above two choices led
to the construction of two systems integrable on all energy levels and
involving 21 and 16 parameters, respectively. Special cases
of the two systems admit interpretations in particle and rigid body dynamics
(see \cite{Yehia:2006b,Yehia:2012} for details).
The main object of the present work is to extend further the method of \cite{Yehia:2006b,Yehia:2012} to construct and classify integrable systems
corresponding to the generalized ansatz
\begin{eqnarray}
F(\xi,\eta)=\int\!\!\!\!\int\!f(\xi)~d\xi d\xi+\int\!\!\!\!\int\! g(\eta)~d\eta d\eta +\sum_{i=2}^4\sum_{j=2}^i\nu_{j,i-j}p^{j}q^{i-j},
\label{F-O(4)}
\end{eqnarray}
possibly with certain restrictions on the parameters involved in
f(\xi),g(\eta)$ as given in (\ref{f&g}).
Substituting (\ref{F-O(4)}) into equation (\ref{PDE}), and making use of
(\ref{x&h}), we get a polynomial expression of the sixth degree in $p,q$ that
must vanish. This yields a system of 27 polynomial equations in the 26
parameters of the problem: $\{A,C_{0},C_{1},D_{0},D_{1},a_{i}\,(i=0,\dots,4),b_{j}\,(j=0,\dots,3),\nu_{ij}\,(2\leq i\leq j\leq 4)\}$. The system of
polynomial equations is solved using the MAPLE computer algebra package and
we obtained 59 distinct solutions, i.e. 59 working combinations of the
parameters that may lead to the construction of integrable systems with a
quartic second integral. It turned out that for 17 solutions the
corresponding integrable systems are separable and then they admit integrals
quadratic in velocities and will not be considered further. Moreover, due to
the symmetric way in which groups of parameters are associated to the
variables $p,q$, there exist 16 symmetry relations between the remaining 42
solutions. This reduced the number of independent solutions to 26. Finally,
it turned out that 13 cases can be obtained by assuming special values of
parameters in the other 13. Thus, the final number of different systems is
thirteen; we now classify these systems and put them in a form as
simple as possible.
\section{The basic integrable systems}
In this section we tabulate the 13 basic integrable systems with a quartic
integral. Those are the conditional ones, valid on their zero-energy levels.
We first note that
\begin{enumerate}
\item The first 5 systems could be expressed explicitly in terms of the
Cartesian coordinates $\xi,\eta$ in a Euclidean plane. This possibility was
ignored, since the resulting expressions contained rational powers which
makes the potential and the complementary integral more complicated.
\item The system No. 3 (given by the Lagrangian (\ref{SYSI.03})) already
involves an additional arbitrary parameter $d$, which can be interpreted as
an energy parameter. This system is thus unconditionally integrable, but
still we can add more parameters to its structure in the next section.
\item The two systems numbered 11 and 13 were given earlier in \cite{Yehia:2012} and \cite{Yehia:2006b}, respectively. They are given here only
for completeness of the results.
\end{enumerate}
For each case in Table I we give the Lagrangian and the complementary
integral valid on its zero-energy level. The systems are classified in Table
I according to the number of arbitrary parameters entering into their
structure.
\newpage
\begin{landscape}
\noindent
\textbf{Table I.}
\textit{Basic (conditional) systems}
\vspace*{-.3cm}\\
\rule[-0.1cm]{21.5cm}{0.01cm}\\[-.4cm]
\rule[-0.1cm]{21.5cm}{0.03cm}\\
{\small
\begin{eqnarray}
1.\quad
&L=&\frac{1}{2}\left(p^4p^{\prime 2}+q^4q^{\prime 2}\right)
-(p^2+q^2)\left\{
a\left[\left(p^2-q^2\right)^4\left(5\left(\frac{p}{q}-\frac{q}{p}\right)^2
+\frac{\d}{p^2q^2}\right)
+\left(12p^4-16p^2q^2+12q^4-\d\right)^2\right]\right.\nn
& &\left.+b\left(\frac{p}{q}-\frac{q}{p}\right)^2
+\frac{c}{p^2q^2}\right\},\nn
&I=&\left\{\frac{1}{2}~p^4p^{\prime 2}+119ap^{10}+27a(5q^4-\d)p^6
-\left[a\left(195q^8-10\d q^4-\d^2\right)+b\right]p^2
+\frac{aq^8(5q^4+\d)+bq^4+c}{p^2}\right\}^2\nn
& &-4p^3q^3\left\{a\left[15p^8-2(39q^4-\d)p^4+15q^8+2q^4\d\right]+b\right\}p^\prime q^\prime
+2964a^2p^{20}-1800q^2a^2p^{18}
-4a^2(12495q^4-94\d)p^{16}\nn
& &+480a^2q^2(39q^4-\d)p^{14}
+4a\left[a\left(17130q^8+1240\d q^4-13\d^2\right)+122b\right]p^{12}
-16aq^2\left[a(3267q^8-126\d q^4+2\d^2)+15b\right]p^{10}\nn
& &+4a\left[17130aq^{12}-5484a\d q^8+(141a\d^2-282b)q^4-a\d^3-6b\d+78c\right]p^8
+32aq^2(15aq^8+2a\d q^4+b)(39q^4-\d)p^6\nn
& & -\left\{a(49980aq^{16}-4960a\d q^{12}-564(a\d^2-2b)q^8+24(a\d^3-10b\d+10c)q^4+4\d(b\d+4c))
-4b^2\right\}p^4\nn
& &-8q^2(15aq^8+2a\d q^4+b)^2p^2
+4q^4\left\{
a\left[741aq^{16}+94a\d q^{12}-(13a\d^2-122b)q^8-(a\d^3+6b\d-78c)q^4-b\d^2-4c\d\right]
+b^2\right\}\nn
\label{SYSI.01}
\end{eqnarray}
\begin{eqnarray}
2.\quad
&L=&\frac{1}{2}\left(pp^{\prime 2}+qq^{\prime 2}\right)
-\left(\frac{1}{q}+\frac{1}{p}\right)
\left\{(p+q)^4\left[a\left(5p^2+6pq+5q^2\right)+b\right]
+c(p+q)^2+d\right\},\nonumber\\
&I=&\left\{\frac{1}{2}~pp^{\prime 2}+31ap^5+5(27aq^2+b)p^3+(85aq^4+10bq^2+3c)p
+\frac{5aq^6+bq^4+cq^2+d}{p}\right\}^2\nn
&&-4pq\left[a\left(5p^2+3q^2\right)\left(3p^2+5q^2\right)+2\left(p^2+q^2\right)b+c\right]
p^\prime q^\prime
-1292a^2p^{10}-1800a^2qp^9-12a(1085aq^2+34b)p^8\nn
&&-480aq(17aq^2+b)p^7
-8\left[a(4355aq^4+380bq^2+33c)+4b^2\right]p^6
-16q\left[a(803aq^4+98bq^2+15c)+2b^2\right]p^5\nn
&&-8\left[4355a^2q^6+674abq^4+(159ac+20b^2)q^2+17ad+5bc\right]p^4
-32q\left[255a^2q^6+49abq^4+(17ac+2b^2)q^2+bc\right]p^3\nn
&&-4\left[3255a^2q^8+760abq^6+2(159ac+20b^2)q^4+4(15ad+7bc)q^2+4bd+3c^2\right]p^2\nn
&&-8q\left[225a^2q^8+60abq^6+2(15ac+2b^2)q^4+4bcq^2+c^2\right]p\nn
&&-4q^2\left[323a^2q^8+102abq^6+2(33ac+4b^2)q^4+2(17ad+5bc)q^2+4bd+3c^2\right].
\label{SYSI.02}
\end{eqnarray}
\newpage
\begin{eqnarray}
3.\quad
&L=&\frac{1}{2}\left(p^4p^{\prime 2}+q^4q^{\prime 2}\right)
-a\left[\frac{9p^6+2q^6}{2p^2}+30\d p^2q^3+\d^2\left(9p^6+64q^6\right)\right]
-\frac{b}{p^2}
-c\left(16\d q^3+3p^2\right)-d,\nn
&I=&\left[\frac{1}{2}~p^4p^{\prime 2}+9a\d^2p^6+3(10a\d q^3+c)p^2+\frac{aq^6+b}{p^2}\right]^2
-6ap^3q^2(3\d p^4+q^3)p^\prime q^\prime
-162a^2\d^2p^{10}\nn
&&-\frac{27a}{4}(128a\d^3q^3+16c\d^2+3a)p^8
-108a^2\d q^3p^6-9a(128a\d^2q^6+24c\d q^3+d)p^4
-18a^2q^6p^2
-12aq^3\left(8a\d q^6+cq^3+4b\d\right).\nn
\label{SYSI.03}
\end{eqnarray}
\begin{eqnarray}
4.\quad
&L=&\frac{1}{2}\left(pp^{\prime 2}+qq^{\prime 2}\right)
-a(p+q)^3\left[\frac{(p+q)^4}{pq}+4(p-q)^2\right]
-\frac{b(p+q)^5}{pq}
-c\left(\frac{p^2}{q}+\frac{q^2}{p}\right)
-d\left(\frac{1}{q}+\frac{1}{p}\right)
-e(p+q),\nonumber\\
&I=&\left\{\frac{1}{2}~pp^{\prime 2}
+11ap^5+(27aq^2+5b)p^3+(25aq^4+10bq^2+e)p+\frac{aq^6+bq^4+cq^2+d}{p}\right\}^2\nn
&&-4pq\left[3ap^4+2(5aq^2+b)p^2+3aq^4+2bq^2+c\right]p^\prime q^\prime
-76a^2p^{10}
-72a^2qp^9
-4a(231aq^2+26b)p^8
-96aq(5aq^2+b)p^7\nn
&&-4\left[a(518aq^4+200bq^2+15c+e)+8b^2\right]p^6
-16q\left[a(59aq^4+26bq^2+3c)+2b^2\right]p^5\nn
&&-4\left[518a^2q^6+316abq^4+(33ac+15ae+40b^2)q^2+10ad+7bc+be\right]p^4
-32q\left[15a^2q^6+13abq^4+(5ac+2b^2)q^2+bc\right]p^3\nn
&&-4\left[231a^2q^8+200abq^6+(33ac+15ae+40b^2)q^4+(12ad+10bc+6be)q^2+4bd+ce\right]p^2\nn
&&-8q\left[9a^2q^8+12abq^6+2(3ac+2b^2)q^4+4bcq^2+c^2\right]p\nn
&&-4q^2\left[19a^2q^8+26abq^6+(15ac+ae+8b^2)q^4+(10ad+7bc+be)q^2+4bd+ce\right].
\label{SYSI.04}
\end{eqnarray}
\begin{eqnarray}
5.\quad
&L=&\frac{1}{2}\left(p^4p^{\prime 2}+q^4q^{\prime 2}\right)
-\left(\d p^2+q^2\right)
\left\{a\left[4(\d p^2+q^2)^2+\left(\frac{p^3}{q}-\frac{\d q^3}{p}\right)^2\right]
+b\left(\frac{\d q^2}{p^2}+\frac{p^2}{q^2}\right)+c\right\}
-\frac{d}{p^2}-\frac{e}{q^2},\nn
&I=&\left\{\frac{1}{2}~p^4p^{\prime 2}
+a(4\d^3+1)p^6+(10a\d q^4+c\d)p^2+\frac{\d q^4(a\d q^4+b)+d}{p^2}\right\}^2
-4\d p^3q^3\left(2a\d q^4+2ap^4+b\right)p^\prime q^\prime\nn
&&-32a^2\d^3p^{12}
-32a^2\d^2q^2p^{10}
-4a\d\left[8aq^4(4\d^3+1)+8b\d^2+c\right]p^8
-32a\d^2q^2(2a\d q^4+b)p^6\nn
&&-4\d\left[8a^2\d q^8(\d^3+4)+2aq^4(4b\d^3+3c\d+4b)+b^2\d^2+4ae\d+bc\right]p^4\nn
&&-8\d^2q^2\left(2a\d q^4+b\right)^2p^2
-4\d q^4\left\{a\left[8a\d^2q^8+\d(c\d+8b)q^4+4d\right]+bc\d+b^2\right\}.
\label{SYSI.05}
\end{eqnarray}
\newpage
\begin{eqnarray}
6.\quad
&L=&\frac{1}{2}\left[\frac{p^{\prime 2}}{\sqrt{a_2p^2+a_0}}
+\frac{q^{\prime 2}}{\sqrt{a_2q^2+a_0}}\right]\nn
& &-\frac{\m a_2q^3p+q^2\left[5\m (3a_2p^2+2a_0)+\m_1a_2p\right]
+q\left[15\m a_2p^3+6\m_1a_2p^2+(2\m a_0+A)p+4\m_1a_0\right]
+\m a_2p^4+\m_1a_2p^3+Ap^2+C_1p+C_0}
{\sqrt{a_2p^2+a_0}}\nn
& &-\frac{\m a_2p^3q+p^2\left[5\m (3a_2q^2+2a_0)+\m_1a_2q\right]
+p\left[15\m a_2q^3+6\m_1a_2q^2+(2\m a_0+A)q+4\m_1a_0\right]
+\m a_2q^4+\m_1a_2q^3+Aq^2+C_1q+C_0}
{\sqrt{a_2q^2+a_0}},\nn
&I=&\frac{a_2^2}{a_2p^2+a_0}\left\{p^{\prime 2}/2+\m a_2q^3p+q^2\left[5\m (3a_2p^2+2a_0)+\m_1a_2p\right]
+q\left[15\m a_2p^3+6\m_1a_2p^2+(2\m a_0+A)p+4\m_1a_0\right]
+\m a_2p^4\right.\nn
& &\left.+\m_1a_2p^3+Ap^2+C_1p+C_0\right\}^2
-2a_2\left[3\m a_2q^2+2a_2q(5\m p+\m_1)+3\m a_2p^2+2\m_1a_2p-10\m a_0+A\right]p^\prime q^\prime\nn
& &-2\left[3\m a_2q^2+2a_2q(5\m p+\m_1)+3\m a_2p^2+2\m_1 a_2p-10\m a_0+A\right]^2
\sqrt{a_2p^2+a_0}\sqrt{a_2q^2+a_0}
-\m^2a_2^3q^6
-2\m a_2^3(24\m p+\m_1)q^5\nn
& &-a_2^2\left[\m(375\m a_2p^2+66\m_1a_2p+98\m a_0+2A)+\m_1^2a_2\right]q^4
-2a_2^2\left[344\m^2a_2p^3+158\m\m_1a_2p^2+2(5\m_1^2a_2+11\m A-20\m^2a_0)p\right.\nn
& &+\left.\m(C_1+38\m_1a_0)+\m_1A\right]q^3
-a_2\left\{375\m^2a_2^2p^4+316\m\m_1a_2^2p^3
+2a_2\left[\m(50A-210\m a_0)+27\m_1^2a_2\right]p^2
+2a_2\left[15\m(C_1-2\m_1a_0)+11\m_1A\right]p\right.\nn
& &-\left.4\m(5\m a_0^2+2a_0A)+2\m_1a_2(C_1+8\m_1^2a_0)+20\m a_2C_0+A^2\right\}q^2
-2a_2\left\{24\m^2a_2^2p^5+33\m\m_1a_2^2p^4
+2a_2\left[\m(11A-20\m a_0)+5\m_1^2a_2\right]p^3\right.\nn
& &+\left.a_2\left[15\m(C_1-2\m_1a_0)+11\m A\right]p^2
+2\left[2\m(3a_2C_0-5a_0A)+3\m_1a_2C_1+A^2\right]p+C_1(A-10\m a_0)+4\m_1a_2C_0\right\}q
-\m^2a_2^3p^6
-2\m\m_1a_2^3p^5\nn
& &-a_2^2\left[2\m(49\m a_0+A)+\m_1^2a_2\right]p^4
-2a_2^2\left[\m(38\m_1 a_0+C_1)+\m_1 A\right]p^3
-a_2\left\{4\m\left[5a_2C_0-a_0(5\m a_0+2A)\right]+2\m_1a_2(8\m_1a_0+C_1)+A^2\right\}p^2\nn
& &-2a_2\left[C_1(A-10\m a_0)+4\m_1a_2C_0\right]p
\label{SYSI.06}
\end{eqnarray}
\begin{eqnarray}
7.\quad
&L=&\frac{1}{2}\left[\frac{p^{\prime 2}}{\sqrt{a_2p^2+a_0}}
+\frac{q^{\prime 2}}{\sqrt{a_2q^2+b_0}}\right]
-\frac{\m a_2q^2p+q\left[2\m (3a_2p^2+2a_0)+Ap\right]+\m a_2p^3+Ap^2+C_1p+C_0}
{\sqrt{a_2p^2+a_0}}\nn
& &-\frac{\m a_2p^2q+p\left[2\m (3a_2q^2+2b_0)+Aq\right]+\m a_2q^3+Aq^2+C_1q+D_0}
{\sqrt{a_2q^2+b_0}},\nn
&I=&\frac{a_2^2\left\{p^{\prime 2}/2+\m a_2q^2p+q\left[2\m (3a_2p^2+2a_0)+Ap\right]
+\m a_2p^3+Ap^2+C_1p+C_0\right\}^2}
{a_2p^2+a_0}
-2a_2[2\m a_2(p+q)+A]p^\prime q^\prime\nn
& & -2[2\m a_2(p+q)+A]^2\sqrt{a_2p^2+a_0}\sqrt{a_2q^2+b_0}
-\m^2 a_2^3q^4
-2\m a_2^2(10\m a_2p+A)q^3
-a_2\left[2\m a_2\left(27\m a_2p^2+11Ap+8\m a_0+C_1\right)+A^2\right]q^2\nn
& & -2a_2\left[10\m^2 a_2^2p^3+11\m a_2Ap^2+2(A^2+3\m a_2C_1)p+AC_1+4\m C_0\right]q
-\m^2 a_2^3p^4-2\m a_2^2Ap^3
-a_2\left[2\m a_2\left(8\m b_0+C_1\right)+A^2\right]p^2\nn
& & -2a_2(C_1A+4\m a_2D_0)p.
\label{SYSI.07}
\end{eqnarray}
\newpage
\begin{eqnarray}
8.\quad
&L=&\frac{1}{2}\left[\frac{p^{\prime 2}}{\sqrt{a_2p^2+a_1p+a_0}}
+\frac{q^{\prime 2}}{\sqrt{b_1q+b_0}}\right]
-\frac{\m q(b_1^2q^2+3b_0b_1q+3b_0^2)\left(2a_2p+a_1\right)
+81\m b_1^3p^3+9b_1^2Ap^2+C_1p+C_0}{\sqrt{a_2p^2+a_1p+a_0}}\nn
& &-\left(b_1q+b_0\right)^{3/2}\left(27\m b_1p+A\right),\nonumber\\
&I=&\frac{\left[p^{\prime 2}/2+\m q(b_1^2q^2+3b_0b_1q+3b_0^2)\left(2a_2p+a_1\right)
+81\m b_1^3p^3+9b_1^2Ap^2+C_1p+C_0\right]^2}{a_2p^2+a_1p+a_0}
-12\m\left(b_1q+b_0\right)^2p^\prime q^\prime\nn
& &-72\m^2(b_1q+b_0)^{9/2}\sqrt{a_2p^2+a_1p+a_0}
-4\m\left\{q(b_1^2q^2+3b_0b_1q+3b_0^2)\left[\m
a_2q(b_1^2q^2+3b_0b_1q+3b_0^2)+C_1\right]\right.\nn
& &+\left.9b_1p(b_1q+b_0)^3(27\m b_1p+2A)\right\}.
\label{SYSI.08}
\end{eqnarray}
\begin{eqnarray}
9.\quad
&L=&\frac{1}{2}\left[\frac{p^{\prime 2}}{\sqrt{a_2p^2+a_1p+a_0}}+q^{\prime 2}\right]
-\frac{\m q(q+\m_1)(2a_2p+a_1) +4Ap^2+C_1p+C_0}{\sqrt{a_2p^2+a_1p+a_0}}
-8\m p-Aq(q+\m_1)-D_0,\nonumber\\
&I=&\frac{\left[p^{\prime 2}/2+\m q(q+\m_1)(2a_2p+a_1)+4Ap^2+C_1p+C_0\right]^2}{a_2p^2+a_1p+a_0}
-4\m\left(2q+\m_1\right)p^\prime q^\prime
-8\m^2(2q+\m_1)^2\sqrt{a_2p^2+a_1p+a_0}\nn
& & -4\m\left\{q(q+\m_1)\left[a_2\m q(q+\m_1)+8Ap+C_1\right]
+16\m p^2+(\m_1^2A+4D_0)p\right\}.
\label{SYSI.09}
\end{eqnarray}
\begin{eqnarray}
10.\quad
& L=&\frac{1}{2}\left[
\frac{p^{\prime 2}}{\sqrt{a_2p^2+a_0}}+\frac{q^{\prime 2}}{\sqrt{b_2q^2+b_0}}\right]
-\frac{\m q^2(3a_2p^2+2a_0)+\m_1a_2 pq+\m b_2p^4+Ap^2+C_0}{\sqrt{a_2p^2+a_0}}\nn
& &-\frac{\m p^2(3b_2q^2+2b_0)+\m_1b_2 qp+\m a_2q^4+Aq^2+D_0}{\sqrt{b_2q^2+b_0}},\nn
&I=&\frac{\left[p^{\prime 2}/2
+\m q^2(3a_2p^2+2a_0)+\m_1a_2 pq+\m b_2p^4+Ap^2+C_0\right]^2}
{a_2p^2+a_0}
-2(2\m qp+\m_1)p^\prime q^\prime
-2(2\m qp+\m_1)^2\sqrt{a_2p^2+a_0}\sqrt{b_2q^2+b_0}\nn
& &-4\m^2(3a_2p^2+a_0)q^4-8a_2\m\m_1pq^3
-\left[4\m(3b_2\m p^4+2Ap^2+C_0)+\m_1^2a_2\right]q^2
-4\m p(2\m b_2p^2+A)q-4\m^2b_0p^4\nn
& &-(\m_1^2b_2+4\m D_0)p^2.
\label{SYSI.10}
\end{eqnarray}
\newpage
\begin{eqnarray}
11.\quad
&L=&\frac{1}{2}\left[\frac{p^{\prime 2}}{2\sqrt{a_4p^4+a_2p^2+a_0}}
+\frac{q^{\prime 2}}{2\sqrt{a_4q^4+b_2q^2+b_0}}\right]
-\frac{\m q^2(4a_4p^4+3a_2p^2+2a_0)+\m_1 qp(2a_4p^2+a_2)+\m b_2p^4+Ap^2+C_0}
{\sqrt{a_4p^4+a_2p^2+a_0}}\nn
& &-\frac{\m p^2(4a_4q^4+3b_2q^2+2b_0)+\m_1 pq(2a_4q^2+b_2)+\m a_2q^4+Aq^2+D_0}
{\sqrt{a_4q^4+b_2q^2+b_0}},\nn
&I=&\frac{\left[p^{\prime 2}/2+\m q^2(4a_4p^4+3a_2p^2+2a_0)
+\m_1 qp(2a_4p^2+a_2)+\m b_2p^4+Ap^2+C_0\right]^2}
{a_4p^4+a_2p^2+a_0}
-2(2\m pq+\m_1)p^\prime q^\prime\nn
& &-2(2\m pq+\m_1)^2\sqrt{a_4q^4+b_2q^2+b_0}\sqrt{a_4p^4+a_2p^2+a_0}
-4\m^2(6a_4p^4+3a_2p^2+a_0)q^4
-8\m_1\m p(3a_4p^2+a_2)q^3\nn
& &-(12\m^2b_2p^4+2(3\m_1^2a_4+4\m A)p^2+\m_1^2a_2+4\m C_0)q^2
-4\m_1p(2\m b_2p^2+A)q
-4\m^2b_0p^4-(\m^2b_2+4\m D_0)p^2.
\label{SYSI.11}
\end{eqnarray}
\begin{eqnarray}
12.\quad
&L=&\frac{1}{2}\left[\frac{p^{\prime 2}}{\sqrt{a_3p^3+a_2p^2+a_1p+a_0}}
+\frac{q^{\prime 2}}{\sqrt{b_2q^2+b_0}}\right]
-\frac{4\m q^2(3a_3p^2+2a_2p+a_1)+64\m b_2p^3+4Ap^2+C_1p+C_0}
{\sqrt{a_3p^3+a_2p^2+a_1p+a_0}}\nn
& &-\frac{16\m p\left(3b_2q^2+2b_0\right)+\m a_3q^4+Aq^2+D_0}
{\sqrt{b_2q^2+b_0}},\nn
&I=&\frac{\left[p^{\prime 2}/2+4\m q^2(3a_3p^2+2a_2p+a_1)+64\m b_2p^3+4Ap^2+C_1p+C_0\right]^2}
{a_3p^3+a_2p^2+a_1p+a_0}
-32\m q p^\prime q^\prime
-512\m^2 q^2\sqrt{a_3p^3+a_2p^2+a_1p+a_0}\sqrt{b_2q^2+b_0}\nn
& &-16\m\left[4\m q^4(3a_3p+a_2)+q^2(192\m b_2p^2+8Ap+C_1)+64\m b_0p^2+4D_0p\right].
\label{SYSI.12}
\end{eqnarray}
\begin{eqnarray}
13.\quad
&L=&\frac{1}{2}\left[
\frac{\dot{p}^2}{\sqrt{a_4p^4+a_3p^3+a_2p^2+a_1p+a_0}}
+\frac{\dot{q}^2}{\sqrt{a_4q^4+b_3q^3+b_2q^2+b_1q+b_0}}\right]\nn
& &-\frac{\m q\left(4a_4p^3+3a_3p^2+2a_2p+a_1\right)
+\m b_3p^3+Ap^2+C_1p+C_0}{\sqrt{a_4p^4+a_3p^3+a_2p^2+a_1p+a_0}}
-\frac{\m p\left(4a_4q^3+3b_3q^2+2b_2q+b_1\right)
+\m a_3q^3+Aq^2+D_1q+D_0}{\sqrt{a_4q^4+b_3q^3+b_2q^2+b_1q+b_0}},\nn
&I=&\frac{\left[\dot{p}^2/2
+\m q(4a_4p^3+3a_3p^2+2a_2p+a_1)+\m b_3p^3+Ap^2+C_1p+C_0\right]^2}
{a_4p^4+a_3p^3+a_2p^2+a_1p+a_0}-4\m\dot{p}\dot{q}\nn
& &-8\m^2\sqrt{a_4p^4+a_3p^3+a_2p^2+a_1p+a_0}\sqrt{a_4q^4+b_3q^3+b_2q^2+b_1q+b_0}\nn
& &-4\m\left[\m q^2(3a_3p+6a_4p^2+a_2)+q(3\m b_3p^2+2Ap+C_1)+\m b_2p^2+D_1p\right].
\label{SYSI.13}
\end{eqnarray}
}\noindent
\rule[-0.1cm]{20.65cm}{0.01cm}\\[-.45cm]
\rule[-0.1cm]{20.65cm}{0.03cm}
\end{landscape}
\newpage
\section{Classification of the unconditional integrable systems}
In Table II below we list the most general deformations of the basic
integrable systems of the preceding section into their unconditional
counterparts, valid for arbitrary initial conditions. Those systems are
constructed in the way described in \S 2. For each system we give the number
of parameters in its structure, the final form of the Lagrangian $L^{\ast }$
(written simply as $L$) and the conformal factor $\Lambda $. The
complementary integral $I^{\ast }$ will not be written down. It can be
obtained for each case from the corresponding integral $I$ of the
corresponding basic system by performing three steps:
\begin{enumerate}
\item Substituting $p^{\prime }$ and $q^{\prime }$ by $\Lambda p^{\prime }$
and $\Lambda q^{\prime }$, respectively.
\item Changing the energy-like parameters in $I$ according to:
\begin{eqnarray*}
&&\mu =\nu -\alpha h,~\mu _{1}=\gamma -\beta h,~A=h_{2}-\alpha _{2}h, \\
&&C_{1}=h_{1}-\alpha _{1}h,~C_{0}=h_{0}-\alpha _{0}h,~D_{1}=k_{1}-\beta
_{1}h,~D_{0}=k_{0}-\beta _{0}h.
\end{eqnarray*}
\item The total energy parameter $h$, appearing in $I$ after the last
substitutions, is replaced by the energy integral corresponding to the
Lagrangian $L^{\ast }$.
\end{enumerate}
\textbf{Remark}: The potential of the system number 9 in Table I involves
several parameters in a linear way, but it is a bilinear function in the
parameters $\mu _{1},A$, and thus one can use, one at a time, either $\mu _{1}$ or
$A$ as an energy-like parameter. Thus, this system generates the two distinct
unconditional systems occupying numbers 9 and 10 in Table II.
\newpage
\begin{landscape}
\noindent
\textbf{Table II.}
\textit{Unconditional generalization}
\vspace*{-.3cm}\\
\rule[-0.1cm]{20.65cm}{0.01cm}\\[-.4cm]
\rule[-0.1cm]{20.65cm}{0.03cm}\\[.5cm]
{\small
1 - Number of parameters: 7
\begin{eqnarray}
&L=&\frac{1}{2}\L\left(p^4\dot{p}^2+q^4\dot{q}^2\right)
-\frac{1}{\L}(p^2+q^2)\left\{
h_1\left[\left(p^2-q^2\right)^4\left[5\left(\frac{p}{q}-\frac{q}{p}\right)^2
+\frac{\d}{p^2q^2}\right]
+\left(12p^4-16p^2q^2+12q^4-\d\right)^2\right]\right.\nn
& &+\left.h_2\left(\frac{p}{q}-\frac{q}{p}\right)^2
+\frac{h_3}{p^2q^2}\right\},\nonumber\\
\nonumber\\
&\L=&(p^2+q^2)\left\{
\a_1\left[\left(p^2-q^2\right)^4\left[5\left(\frac{p}{q}-\frac{q}{p}\right)^2
+\frac{\d}{p^2q^2}\right]
+\left(12p^4-16p^2q^2+12q^4-\d\right)^2\right]+\a_2\left(\frac{p}{q}-\frac{q}{p}\right)^2
+\frac{\a_3}{p^2q^2}\right\}.
\end{eqnarray}
2 - Number of parameters: 8
\begin{eqnarray}
&L=&\frac{1}{2}\L\left(p\dot{p}^2+q\dot{q}^2\right)
-\frac{1}{\L}\left\{\left(\frac{1}{q}+\frac{1}{p}\right)
\left\{(p+q)^4\left[h_1\left(5p^2+6pq+5q^2\right)+h_2\right]
+h_3(p+q)^2+h_4\right\}\right\},\nonumber\\
&\L=&\left(\frac{1}{q}+\frac{1}{p}\right)
\left\{(p+q)^4\left[\a_1\left(5p^2+6pq+5q^2\right)+\a_2\right]
+\a_3(p+q)^2+\a_4\right\}.
\end{eqnarray}
3 - Number of parameters: 9
\begin{eqnarray}
&L=&\frac{1}{2}\L\left(p^4\dot{p}^2+q^4\dot{q}^2\right)
-\frac{1}{\L}\left\{h_1\left[\frac{9p^6+2q^6}{2p^2}+30\d p^2q^3+\d^2\left(9p^6+64q^6\right)\right]
+\frac{h_2}{p^2}
+h_3\left(16\d q^3+3p^2\right)+h_4\right\},\nonumber\\
&\L=&\a_1\left[\frac{9p^6+2q^6}{2p^2}+30\d p^2q^3+\d^2\left(9p^6+64q^6\right)\right]
+\frac{\a_2}{p^2}
+\a_3\left(16\d q^3+3p^2\right)+\a_4.
\end{eqnarray}
4 - Number of parameters: 10
\begin{eqnarray}
&L=&\frac{1}{2}\L\left(p\dot{p}^2+q\dot{q}^2\right)
-\frac{1}{\L}\left\{h_1(p+q)^3\left[\frac{(p+q)^4}{pq}+4(p-q)^2\right]
+\frac{h_2(p+q)^5}{pq}
+h_3\left(\frac{p^2}{q}+\frac{q^2}{p}\right)
+h_4\left(\frac{1}{q}+\frac{1}{p}\right)
+h_5(p+q)\right\},\nonumber\\
&\L=&\a_1(p+q)^3\left[\frac{(p+q)^4}{pq}+4(p-q)^2\right]
+\frac{\a_2(p+q)^5}{pq}
+\a_3\left(\frac{p^2}{q}+\frac{q^2}{p}\right)
+\a_4\left(\frac{1}{q}+\frac{1}{p}\right)
+\a_5(p+q).
\end{eqnarray}
5 - Number of parameters: 11
\begin{eqnarray}
&L=&\frac{1}{2}\L\left(p^4\dot{p}^2+q^4\dot{q}^2\right)
-\frac{1}{\L}\left\{\left(\d p^2+q^2\right)
\left\{h_1\left[4(\d p^2+q^2)^2+\left(\frac{p^3}{q}-\frac{\d q^3}{p}\right)^2\right]
+h_2\left(\frac{\d q^2}{p^2}+\frac{p^2}{q^2}\right)+h_3\right\}
+\frac{h_4}{p^2}+\frac{h_5}{q^2}\right\},\nonumber\\
\nonumber\\
&\L=&\left(\d p^2+q^2\right)
\left\{\a_1\left[4(\d p^2+q^2)^2+\left(\frac{p^3}{q}-\frac{\d q^3}{p}\right)^2\right]
+\a_2\left(\frac{\d q^2}{p^2}+\frac{p^2}{q^2}\right)+\a_3\right\}
-\frac{\a_4}{p^2}-\frac{\a_5}{q^2}.
\end{eqnarray}
6 - Number of parameters: 12
\begin{eqnarray}
&L=&\frac{1}{2}\L\left[\frac{\dot{p}^2}{\sqrt{a_2p^2+a_0}}
+\frac{\dot{q}^2}{\sqrt{a_2q^2+a_0}}\right]\nn
& &-\frac{\n a_2q^3p+q^2\left[5\n (3a_2p^2+2a_0)+\g a_2p\right]
+q\left[15\n a_2p^3+6\g a_2p^2+(2\n a_0+h_2)p+4\g a_0\right]
+\n a_2p^4+\g a_2p^3+h_2p^2+h_1p+h_0}
{\sqrt{a_2p^2+a_0}}\nn
& &-\frac{\n a_2p^3q+p^2\left[5\n (3a_2q^2+2a_0)+\g a_2q\right]
+p\left[15\n a_2q^3+6\g a_2q^2+(2\n a_0+h_2)q+4\g a_0\right]
+\n a_2q^4+\g a_2q^3+h_2q^2+h_1q+h_0}
{\sqrt{a_2q^2+a_0}},\nn
&\L=& \frac{\a a_2q^3p+q^2\left[5\a (3a_2p^2+2a_0)+\b a_2p\right]
+q\left[15\a a_2p^3+6\b a_2p^2+(2\a a_0+\a_2)p+4\b a_0\right]
+\a a_2p^4+\b a_2p^3+\a_2p^2+\a_1p+\a_0}
{\sqrt{a_2p^2+a_0}}\nn
& &+\frac{\a a_2p^3q+p^2\left[5\a (3a_2q^2+2a_0)+\b a_2q\right]
+p\left[15\a a_2q^3+6\b a_2q^2+(2\a a_0+\a_2)q+4\b a_0\right]
+\a a_2q^4+\b a_2q^3+\a_2q^2+\a_1q+\a_0}
{\sqrt{a_2q^2+a_0}}.
\end{eqnarray}
7 - Number of parameters: 13
\begin{eqnarray}
&L=&\frac{1}{2}\L\left[\frac{\dot{p}^2}{\sqrt{a_2p^2+a_0}}
+\frac{\dot{q}^2}{\sqrt{a_2q^2+b_0}}\right]
-\frac{\n a_2q^2p+q\left[2\n (3a_2p^2+2a_0)+h_2p\right]+\n a_2p^3+h_2p^2+h_1p+h_0}
{\sqrt{a_2p^2+a_0}}\nn
& &-\frac{\n a_2p^2q+p\left[2\n (3a_2q^2+2b_0)+h_2q\right]+\n a_2q^3+h_2q^2+h_1q+k_0}
{\sqrt{a_2q^2+b_0}},\nn
&\L=& \frac{\a a_2q^2p+q\left[2\a (3a_2p^2+2a_0)+\a_2p\right]+\a a_2p^3+\a_2p^2+\a_1p+\a_0}
{\sqrt{a_2p^2+a_0}}\nn
& &+\frac{\a a_2p^2q+p\left[2\a (3a_2q^2+2b_0)+\a_2q\right]+\a a_2q^3+\a_2q^2+\a_1q+\b_0}
{\sqrt{a_2q^2+b_0}}.
\end{eqnarray}
8 - Number of parameters: 13
\begin{eqnarray}
&L=&\frac{1}{2}\L\left[\frac{\dot{p}^2}{\sqrt{a_2p^2+a_1p+a_0}}
+\frac{\dot{q}^2}{\sqrt{b_1q+b_0}}\right]\nn
& &-\frac{1}{\L}\left[\frac{\n q(b_1^2q^2+3b_0b_1q+3b_0^2)\left(2a_2p+a_1\right)
+81\n b_1^3p^3+9b_1^2h_2p^2+h_1p+h_0}{\sqrt{a_2p^2+a_1p+a_0}}
-\left(b_1q+b_0\right)^{3/2}\left(27\n b_1p+h_2\right)\right],\nn
&\L=&\frac{\a q(b_1^2q^2+3b_0b_1q+3b_0^2)\left(2a_2p+a_1\right)
+81\a b_1^3p^3+9b_1^2\a_2p^2+\a_1p+\a_0}{\sqrt{a_2p^2+a_1p+a_0}}
+\left(b_1q+b_0\right)^{3/2}\left(27\a b_1p+\a_2\right).
\end{eqnarray}
9 - Number of parameters: 13
\begin{eqnarray}
&L=&\frac{1}{2}\L\left[\frac{\dot{p}^2}{\sqrt{a_2p^2+a_1p+a_0}}+\dot{q}^2\right]
-\frac{1}{\L}\left[\frac{\m q(q+\g)(2a_2p+a_1)+4Ap^2+h_1p+h_0}{\sqrt{a_2p^2+a_1p+a_0}}
+8\m p+Aq(q+\g)+k_0\right],\nn
&\L=&\frac{\b\m q(2a_2p+a_1)+\a_1p+\a_0}{\sqrt{a_2p^2+a_1p+a_0}}+\b Aq+\b_0.
\label{SYSII.09}
\end{eqnarray}
10 - Number of parameters: 14
\begin{eqnarray}
&L=&\frac{1}{2}\L\left[\frac{\dot{p}^2}{\sqrt{a_2p^2+a_1p+a_0}}+\dot{q}^2\right]
-\frac{1}{\L}\left[\frac{\n q(q+\m_1)(2a_2p+a_1)
+4h_2p^2+h_1p+h_0}{\sqrt{a_2p^2+a_1p+a_0}}
+8\n p+h_2q(q+\m_1)+k_0\right],\nn
&\L=&\frac{\a q(q+\m_1)(2a_2p+a_1)+4 \a_2p^2+\a_1p+\a_0}{\sqrt{a_2p^2+a_1p+a_0}}
+8\a p+\a_2 q(q+\m_1)+\b_0.
\end{eqnarray}
11 - Number of parameters: 14
\begin{eqnarray}
&L=&\frac{1}{2}\L\left[\frac{\dot{p}^2}{\sqrt{a_2p^2+a_0}}
+\frac{\dot{q}^2}{\sqrt{b_2q^2+b_0}}\right]\nn
& &-\frac{1}{\L}\left[\frac{\n q^2(3a_2p^2+2a_0)+\g a_2 pq+\n b_2p^4+h_2p^2+h_0}
{\sqrt{a_2p^2+a_0}}
+\frac{\n p^2(3b_2q^2+2b_0)+\g b_2 qp+\n a_2q^4+h_2q^2+k_0}
{\sqrt{b_2q^2+b_0}}\right],\nn
&\L=&\frac{\a q^2(3a_2p^2+2a_0)+\b a_2 pq+\a b_2p^4+\a_2p^2+\a_0}{\sqrt{a_2p^2+a_0}}
+\frac{\a p^2(3b_2q^2+2b_0)+\b b_2 qp+\a a_2q^4+\a_2q^2+\b_0}{\sqrt{b_2q^2+b_0}}.
\label{SYSII.11}
\end{eqnarray}
12 - Number of parameters: 15
\begin{eqnarray}
&L=&\frac{1}{2}\L\left[\frac{\dot{p}^2}{2\sqrt{a_4p^4+a_2p^2+a_0}}
+\frac{\dot{q}^2}{2\sqrt{a_4q^4+b_2q^2+b_0}}\right]
-\frac{1}{\L}\left[
\frac{\n q^2(4a_4p^4+3a_2p^2+2a_0)+\g qp(2a_4p^2+a_2)+\n b_2p^4+h_2p^2+h_0}
{\sqrt{a_4p^4+a_2p^2+a_0}}\right.\nn
& &+\left.\frac{\n p^2(4a_4q^4+3b_2q^2+2b_0)+\g pq(2a_4q^2+b_2)+\n a_2q^4+h_2q^2+k_0}
{\sqrt{a_4q^4+b_2q^2+b_0}}\right],\nn
&\L=& \frac{\a q^2(4a_4p^4+3a_2p^2+2a_0)+\b qp(2a_4p^2+a_2)+\a b_2p^4+\a_2p^2+\a_0}
{\sqrt{a_4p^4+a_2p^2+a_0}}\nn
&& +\frac{\a p^2(4a_4q^4+3b_2q^2+2b_0)+\b pq(2a_4q^2+b_2)+\a a_2q^4+\a_2q^2+\b_0}
{\sqrt{a_4q^4+b_2q^2+b_0}}
\label{SYSII.12}
\end{eqnarray}
13 - Number of parameters: 16
\begin{eqnarray}
&L=&\frac{1}{2}\Lambda \left[ \frac{\dot{p}^{2}}{\sqrt{a_{3}p^{3}+a_{2}p^{2}+a_{1}p+a_{0}}}+\frac{\dot{q}^{2}}{\sqrt{b_{2}q^{2}+b_{0}}}\right] \notag \\
&&-\frac{1}{\Lambda }\left[ \frac{4\nu q^{2}(3a_{3}p^{2}+2a_{2}p+a_{1})+64\nu b_{2}p^{3}+4h_{2}p^{2}+h_{1}p+h_{0}}{\sqrt{a_{3}p^{3}+a_{2}p^{2}+a_{1}p+a_{0}}}\right. +\left. \frac{16\nu p\left( 3b_{2}q^{2}+2b_{0}\right) +\nu a_{3}q^{4}+h_{2}q^{2}+k_{0}}{\sqrt{b_{2}q^{2}+b_{0}}}\right] , \notag \\
&\L=&\frac{4\alpha q^{2}(3a_{3}p^{2}+2a_{2}p+a_{1})+64\alpha b_{2}p^{3}+4\alpha _{2}p^{2}+\alpha _{1}p+\alpha _{0}}{\sqrt{a_{3}p^{3}+a_{2}p^{2}+a_{1}p+a_{0}}}+\frac{16\alpha p\left( 3b_{2}q^{2}+2b_{0}\right) +\alpha a_{3}q^{4}+\alpha _{2}q^{2}+\beta _{0}}{\sqrt{b_{2}q^{2}+b_{0}}}.
\label{SYSII.13}
\end{eqnarray}
14 - Number of parameters: 21.
\begin{eqnarray}
&L=&\frac{1}{2}\Lambda \left[ \frac{\dot{p}^{2}}{\sqrt{a_{4}p^{4}+a_{3}p^{3}+a_{2}p^{2}+a_{1}p+a_{0}}}+\frac{\dot{q}^{2}}{\sqrt{a_{4}q^{4}+b_{3}q^{3}+b_{2}q^{2}+b_{1}q+b_{0}}}\right] \notag \\
&&-\frac{1}{\Lambda }\left[ \frac{\nu q\left( 4a_{4}p^{3}+3a_{3}p^{2}+2a_{2}p+a_{1}\right) +\nu b_{3}p^{3}+h_{2}p^{2}+h_{1}p+h_{0}}{\sqrt{a_{4}p^{4}+a_{3}p^{3}+a_{2}p^{2}+a_{1}p+a_{0}}}+\frac{\nu p\left( 4a_{4}q^{3}+3b_{3}q^{2}+2b_{2}q+b_{1}\right) +\nu a_{3}q^{3}+h_{2}q^{2}+k_{1}q+k_{0}}{\sqrt{a_{4}q^{4}+b_{3}q^{3}+b_{2}q^{2}+b_{1}q+b_{0}}}\right] , \notag \\
&\L=&\frac{\alpha q\left( 4a_{4}p^{3}+3a_{3}p^{2}+2a_{2}p+a_{1}\right) +\alpha b_{3}p^{3}+\alpha _{2}p^{2}+\alpha _{1}p+\alpha _{0}}{\sqrt{a_{4}p^{4}+a_{3}p^{3}+a_{2}p^{2}+a_{1}p+a_{0}}} +\frac{\alpha p\left( 4a_{4}q^{3}+3b_{3}q^{2}+2b_{2}q+b_{1}\right) +\alpha a_{3}q^{3}+\alpha _{2}q^{2}+\beta _{1}q+\beta _{0}}{\sqrt{a_{4}q^{4}+b_{3}q^{3}+b_{2}q^{2}+b_{1}q+b_{0}}}.
\label{SYSII.14}
\end{eqnarray}
}\noindent
\rule[-0.1cm]{20.65cm}{0.01cm}\\[-.45cm]
\rule[-0.1cm]{20.65cm}{0.03cm}
\end{landscape}
\bigskip The system number 14 in Table II is the \emph{master} system
enjoying the maximum number of 21 parameters. It was introduced in 2006 \cite{Yehia:2006b}. The system occurring in Table II as number 12 was obtained
recently in \cite{Yehia:2012}. The remaining 12 systems are new.
|
1,116,691,500,664 | arxiv | \section{Introduction}
The majority of ordinary matter, a.k.a. baryonic matter, is trapped inside the potential wells of the large-scale structure of the Universe. The main constituent of this invisible scaffolding is dark matter, and its fully collapsed overdensities, known as haloes, contain most of the mass in the Universe. These structures are not isolated, and the process of structure formation is known to be hierarchical \citep{1974ApJ...187..425P}. In simple terms, this means that smaller haloes become subhaloes after they are accreted onto larger structures. Unsurprisingly, baryonic matter also follows this process, resulting in today's clusters of galaxies. Due to their joint evolution, a tight relationship exists between the luminosity of a galaxy and the mass of the dark matter halo it inhabits.
These galaxy clusters are associated with the largest haloes in the Universe and they are still accreting matter from the surrounding environment, i.e. they are not fully virialized yet.
Galaxies can be divided into two populations: red and blue \citep{2001AJ....122.1861S}. Whereas red galaxies derive their color from their aging stellar population, blue galaxies display active star formation, and young stars dominate their light. The exact mechanism behind quenching, i.e., the transition from star-forming to ``red and dead'', is still not fully understood \citep[see, e.g.,][]{2010MNRAS.402.1536S, 2015MNRAS.452.2879T}, but it is known to be connected to both baryonic feedback \citep[see, e.g.,][]{2008MNRAS.391..481S, 2010MNRAS.402.1536S} and interactions inside the dense cluster environment \citep[see, e.g.,][]{1980ApJ...237..692L, 1996Natur.379..613M, 2008MNRAS.387...79V}. An important consequence of this environmental dependence is the formation of a red sequence, i.e., a close relationship between the color and magnitude of red galaxies in clusters. By calibrating this red sequence as a function of redshift, it is possible to identify clusters in photometric surveys, even in the absence of precise spectroscopic redshifts \citep{2000AJ....120.2148G}.
In recent years, splashback has been recognized as a feature located at the edge of galaxy clusters. The radius of this boundary, $r_\text{sp}$, is close to the apocenter of recently accreted material \citep[see, e.g.,][]{Adhikari_2014, Diemer_2017, Diemer_2017b} and it is associated with a sudden drop in matter density. This is because it naturally separates the single and multi-stream regions of galaxy clusters: orbiting material piles up inside this radius, while collapsing material located outside it is about to enter the cluster for the first time.
In simulations and observations, the distribution of red satellite galaxies and dark matter seem to trace this feature in the same fashion \citep{2021MNRAS.tmp.1404C, 2021MNRAS.504.4649O}, but a possible dependence on satellite properties is currently being explored \citep{2021arXiv210505914S, 2022arXiv220205277O}. In fact, in the context of galaxy evolution models, the mechanism behind this feature has been known under the name backsplash for almost two decades and has been previously explored both in observations and simulations \citep{2005MNRAS.356.1327G, 2011MNRAS.416.2882M}. Compared to these efforts, however, the recent interest in this feature is guided by theoretical and observational implications for the study of the large-scale structure of the Universe.
Since haloes are perturbations on top of a background of constant density, their size can be quantified in terms of overdensity masses. For example, $M_\text{200m}$ is defined as the mass contained within a sphere of radius $r_\text{200m}$ such that the average density within it is $200$ times the average matter density of the Universe $\rho_\text{m}(z)$,
\begin{equation}
\label{eq:200m}
M_\text{200m} = 200 \times \frac{4\pi}{3} \rho_\text{m}(z) r_\text{200m}^3.
\end{equation}
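For reference, this conversion is immediate with the \textsc{colossus} package adopted in this work (the mass value and redshift below are arbitrary examples):
\begin{verbatim}
from colossus.cosmology import cosmology
from colossus.halo import mass_so

cosmology.setCosmology('planck15')    # the fiducial cosmology of this work
M200m, z = 1e14, 0.4                  # Msun/h, an arbitrary example halo
r200m_phys = mass_so.M_to_R(M200m, z, '200m')   # physical kpc/h
r200m_com = r200m_phys * (1 + z) / 1e3          # comoving Mpc/h
\end{verbatim}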
From a theoretical perspective, the splashback radius defines a more accurate cluster mass and sidesteps the issue of pseudo evolution due to an evolving $\rho_\text{m}(z)$ as a function of redshift $z$ \citep{2013ApJ...766...25D, More_2015}. Thanks to this property, this definition implies a universal mass function that is valid for a variety of cosmologies \citep{2020ApJ...903...87D}. Moreover, the shape of the matter profile around this feature can also be used to learn about structure formation, the nature of dark matter \citep{2020JCAP...02..024B} and dark energy \citep{Contigiani_2019}.
Observationally, one of the most noteworthy applications of the splashback feature is the study of quenching through the measurement of the spatial distribution of galaxy populations with different colors \citep{2020arXiv200811663A}. While notable, this was not the earliest result from the literature, and many other measurements preceded it. Published works can be divided into three groups: those based on targeted weak lensing observations of X-ray selected clusters \citep{2017ApJ...836..231U, 2019MNRAS.485..408C}, those based on the lensing signal and satellite distributions around SZ-selected clusters \citep[see, e.g., ][]{Shin_2019}, and those based on samples constructed with the help of cluster-finding algorithms applied to photometric surveys \citep[see, e.g.,][]{2016ApJ...825...39M, 2018ApJ...864...83C}. However, we note that in the case of the last group, the results are difficult to interpret because the splashback signal correlates with the parameters of the cluster detection method \citep{Busch_2017}.
In this work, we implement an application of this feature based on \cite{2021MNRAS.tmp.1404C}. The location of the splashback radius is connected to halo mass, and its measurement from the distribution of cluster members can therefore lead to a mass estimate. Because this distribution can be measured without spectroscopy, this means that we can extract a dynamical mass purely from photometric data. To avoid the issues related to cluster-finding algorithms explained above, we studied the average distribution of faint galaxies around luminous red galaxies (LRGs) instead of the targets identified through overdensities of red galaxies. If we consider only passive evolution, the observed magnitude of the LRGs can be corrected to construct a sample with constant comoving density \citep{2016MNRAS.461.1431R,2019MNRAS.487.3715V}, and, by selecting the brightest among them, we expect to identify the central galaxies of groups and clusters.
We present our analysis in Section~\ref{sec:profiles} and produce two estimates of the masses of the haloes hosting the LRGs in Section~\ref{sec:fit}. The first is based on the splashback feature measured in the distribution of faint galaxies, while the second is based on the amplitude of weak lensing measurements. After comparing these results with an alternative method in Section~\ref{sec:discussion}, we discuss our measurements in the context of modified models of gravity. We conclude by pointing out that, while we limit ourselves to redshifts $z<0.55$ here, the sample constructed in this manner has implications for the higher redshift range probed by future stage-IV photometric surveys \citep{2006astro.ph..9591A} such as
\emph{Euclid} \citep{laureijs2011euclid} and the Legacy Survey of Space and Time \citep[LSST,][]{2009arXiv0912.0201L}.
Section~\ref{sec:future} discusses these complications in more detail and explores how this method can be used to complement the use of lensing to extract the masses of X-ray \citep{2019MNRAS.485..408C} or SZ selected clusters \citep{Shin_2019}.
Unless stated otherwise, we assume a cosmology based on the 2015 Planck data release \citep{Planck2015}. For cosmological calculations, we use the Python packages \textsc{astropy} \citep{Price-Whelan:2018hus} and \textsc{colossus} \citep{Diemer:2017bwl}. The symbols $R$ and $r_\text{sp}$ always refer to a comoving projected distance and a comoving splashback radius.
\section{Data}
\label{sec:data}
This section introduces both the Kilo-Degree Survey \citep[KiDS,][]{deJong2013} and its infrared companion, the VISTA Kilo-degree INfrared Galaxy survey \citep[VIKING,][]{Edge2013}. Their combined photometric catalog and the sample of LRGs extracted from it \citep{2020arXiv200813154V} are the essential building blocks of this paper.
\subsection{KiDS}
KiDS is a multi-band imaging survey in four filters ($ugri$) covering $1350$ deg$^2$. Its fourth data release \citep[DR4, ][]{Kuijken2019DR4} is the basis of this paper and has a footprint of 1006 deg$^2$ split between two regions, one equatorial and the other in the south Galactic cap ($770$ deg$^2$ in total after masking). The $5\sigma$ mean limiting magnitudes in the $ugri$ bands are, respectively, 24.23, 25.12, 25.02, and 23.68. The mean seeing for the $r$-band data, used both as a detection band and for the weak lensing measurements, is 0.7\arcsec. The companion survey VIKING covers the same footprint in five infrared bands, $ZYJHK_s$.
The raw data have been reduced with two separate pipelines, THELI \citep{2005AN....326..432E} for a lensing-optimized reduction of the $r$-band data, and AstroWISE \citep{2013ExA....35...45M}, used to create photometric catalogs of extinction corrected magnitudes. The source catalog for lensing was produced from the THELI images. Lensfit \citep{2013MNRAS.429.2858M, Conti:2016gav, 2019A&A...624A..92K} was used to extract the galaxy shapes.
\subsection{LRGs}
\label{sec:datalrg}
The LRG sample presented in \cite{2020arXiv200813154V} is based on KiDS DR4. In order to construct the catalogue, the red sequence up to redshift $z=0.8$ was obtained by combining spectroscopic data with the $griZ$ photometric information provided by the two surveys mentioned above. Furthermore, the near-infrared $K_s$ band from VIKING was used to perform a clean separation of stellar objects to lower the stellar contamination of the sample.
The color-magnitude relation that characterizes red galaxies was used to calibrate redshifts to a precision higher than generic photometric-redshift (photo-zs) methods, resulting in redshift errors for each galaxy below $0.02$. For more details on how the total LRG sample is defined and its broad properties, we direct the interested reader to \cite{2020arXiv200813154V}, or \cite{2019MNRAS.487.3715V}, a similar work based on a previous KiDS data release.
\cite{MCF2021} further analyzed this same catalog and calculated absolute magnitudes for all LRGs using \textsc{LePHARE} \citep{2011ascl.soft08009A} and \textsc{EZGAL} \citep{2012PASP..124..606M}. The first code corrects for the redshift of the rest-frame spectrum in the different passbands (k-correction), while the second corrects for the passive evolution of the stellar population (e-correction). For this work, we used these (k+e)-corrected luminosities as a tracer of total mass since the two are known to be highly correlated \citep[see, e.g.,][]{2006MNRAS.368..715M, 2015A&A...579A..26V}. Based on this, we then defined two samples with different absolute r-band magnitude cuts, $M_r<-22.8$ and $M_r<-23$, that we refer to as \emph{all} and \emph{high-mass} samples. These are the $10$ and $5$ percentiles of the absolute magnitude distribution of the \emph{luminous} sample studied in \cite{MCF2021}, and the two samples contain $5524$ and $2850$ objects, respectively.
Because the (k+e)-correction presented above is designed to correct for observational biases and galaxy evolution, the expected redshift distribution of the LRGs should correspond to a constant comoving density. However, when studying our samples (see Figure~\ref{fig:redshift}), it is clear that this assumption holds only until $z=0.55$. This suggests that the empirical corrections applied to the observed magnitudes are not optimal. It is important to stress that this discrepancy was not recognized before because our particular selection amplifies it: because we consider here the tail of a much larger sample ($N\sim10^5$) with a steep magnitude distribution, a small error in the lower limit induced a large mismatch at the high-luminosity end. To overcome this limitation, we discard all LRGs above $z=0.55$. After fitting the distributions in Figure~\ref{fig:redshift}, we obtained comoving densities $n = 7.5\times 10^{-6}$ Mpc$^{-3}$ and ${n= 4.0\times 10^{-6}~\text{Mpc}^{-3}}$ for the full and the high-mass samples.
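These densities can be checked with a rough volume estimate (a sketch; we assume the $770$ deg$^2$ effective area quoted above and neglect the fit to the redshift distribution):
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck15

fsky = ((770 * u.deg**2).to(u.sr) / (4 * np.pi * u.sr)).value
volume = fsky * (Planck15.comoving_volume(0.55) - Planck15.comoving_volume(0.2))
n = 5524 / volume.to(u.Mpc**3)   # same order as the quoted 7.5e-6 Mpc^-3
\end{verbatim}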
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{1.pdf}
\caption{The redshift distributions of the LRG samples studied in this paper. As visible in the figure, the distributions are consistent with the assumption of a constant comoving density up to redshift $z=0.55$, the maximum considered in our main analysis. For higher redshifts, we find that the empirical selection criteria explicitly designed to select for a constant comoving density do not hold. We use the high-redshift tail of our LRG sample (All, $z>0.75$) to investigate the behaviour of our measurements in this regime.}
\label{fig:redshift}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{color.pdf}
\caption{Separating red and blue galaxies. We calculated the distribution of KiDS galaxies in the ($g$-$r$)-($r$-$i$) color plane for objects around random points in the sky and around LRGs in the \emph{high-mass} sample between redshifts $0.3$ and $0.35$ ($R<1$ Mpc). This histogram represents the difference between the two distributions as a fraction of the entire KiDS population. The black and white squares mark the pixels with the lowest and highest value. An overdensity of red objects and an underdensity of blue objects is apparent, and the line separating the two locations is used to split the full KiDS sample into two populations.}
\label{fig:color_split}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=1\columnwidth]{2a.pdf}
\includegraphics[width=1\columnwidth]{2b.pdf}
\caption{The signals studied in this paper. We measure the number density of KiDS red galaxies (left panel) and the lensing signal (right panel) around the LRGs in our sample (\emph{all}) and its high-luminosity subsample (\emph{high-mass}). Both measurements are based on the KiDS photometric catalog. The steep drop around ${1~\text{Mpc}}$ visible in the left panel is the splashback feature, and it is connected to the total mass of the LRG haloes. Similarly, the amplitude of the lensing signal on the right is also a measure of the same mass. In addition to the data and the $1\sigma$ error bars, we also display the $68$ percent contours of two profile fits performed to extract the mass measurements. The fit on the right is performed either by varying only the amplitude of the signal (thinner contours) or by varying its amplitude and concentration (wider contours). See text for more details. Section~\ref{sec:data} presents the data and the two samples, Section~\ref{sec:profiles} discusses how the profiles are measured, and Section~\ref{sec:fit} discusses the fitting procedure.}
\label{fig:measurement}
\end{figure*}
\section{Profiles}
\label{sec:profiles}
In this section, we discuss how we used our data sets to produce two stacked signals measured around the LRGs: the galaxy profile, capturing the distribution of fainter red galaxies, and the weak lensing profile, a measure of the projected mass distribution extracted from the distorted shapes of background galaxies. We present these two profiles and the $68$ percent contours of two separate parametric fits in Figure~\ref{fig:measurement}. The details of the fitting procedure are explained in Section~\ref{sec:fit}.
\subsection{Galaxy profile}
\label{sec:galaxyanalysis}
We expect bright LRGs to be surrounded by fainter satellites, i.e., we expect them to be the central galaxies of galaxy groups or clusters. To obtain the projected number density profile of the surrounding KiDS galaxies, we split the LRG samples into $7$ redshift bins of size $\delta_z = 0.05$ in the range $z\in[0.2, 0.55]$. We then defined a corresponding KiDS galaxy catalog for each redshift bin, obtained the background-subtracted distribution of these galaxies around the LRGs, and finally stacked these distributions using the weights $w_i$ defined below.
We did not select the KiDS galaxies by redshift due to their large uncertainty. Instead, for each redshift bin, we used the entire KiDS catalogs and only applied two redshift-dependent selections: one in magnitude and one in color space. The reason behind the first selection is simple: compared to a flat signal-to-noise ratio (SNR) threshold, a redshift-dependent magnitude limit does not mix populations with different intrinsic magnitudes as a function of redshift \citep[as suggested by][]{2016ApJ...825...39M}. On the other hand, the color cut has a more physical explanation. Red satellites are the most abundant population in galaxy clusters and, due to their repeated orbits inside the host cluster, they are known to better trace dynamical features such as splashback \citep[see, e.g.,][]{2017ApJ...841...18B}. Combining these two criteria also has the effect of selecting a similar population even in the absence of k-corrected magnitudes.
For the highest redshift considered here, $z_\text{max}$, we limited ourselves to observed magnitudes $m_r<23$, equivalent to a $10$ SNR cut. We then extrapolated this limit to other redshift bins by imposing
\begin{equation}
\label{eq:magcut}
m_r < 23 - 5\log \left(
\frac{d_L(z_\text{max})}{d_L(z_i)} \right),
\end{equation}
where $z_i$ is the upper edge of the redshift bin considered, and $d_L(z)$ is the luminosity distance as a function of redshift. Afterward, we divided the galaxy catalogs into two color populations by following the method of \cite{2020arXiv200811663A}. Compared to random points in the sky, the color distribution of KiDS galaxies around LRGs contains two features: an overdensity of ``red'' objects and a deficit of ``blue'' objects. Based on the red-sequence calibration of \cite{2020arXiv200813154V} and the location of the $4000$ \AA~break, we identified the ${(g-r)-(r-i)}$ plane as the optimal color space to separate these two populations at redshifts $z\leq 0.55$. We also noted that the ${(i-Z)-(r-i)}$ plane would be better suited for higher redshifts. From the distribution in the color-color plane, the two classes can then be separated by the line perpendicular to the segment connecting these two loci and passing through their midpoint. Figure~\ref{fig:color_split} provides an example of this procedure. We point out that a more sophisticated selection could be used since the structure in color space suggests the existence of a compact red cloud. For the purposes of this work, however, we do not find this to be necessary.
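As an illustration of equation (\ref{eq:magcut}) (the function name is ours):
\begin{verbatim}
import numpy as np
from astropy.cosmology import Planck15

def m_r_limit(z_i, z_max=0.55, m_max=23.0):
    # The m_r < 23 limit defined at z_max is brightened for
    # lower-redshift bins through the luminosity-distance ratio.
    ratio = Planck15.luminosity_distance(z_max) / Planck15.luminosity_distance(z_i)
    return m_max - 5 * np.log10(ratio.value)

m_r_limit(0.35)   # cut for the 0.30 < z < 0.35 bin (upper edge z_i = 0.35)
\end{verbatim}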
We used \textsc{treecorr} \citep{2004MNRAS.352..338J, 2015ascl.soft08007J} to extract the correlation functions from the red galaxy catalogs defined above
\begin{equation}
\xi_i = \frac{DD_i}{DR_i} - 1,
\end{equation}
where $DD$ and $DR$ are the numbers of LRG-galaxy pairs calculated using the KiDS catalogs or the random catalogs, respectively. These randoms are composed of points uniformly distributed in the KiDS footprint. The error covariance matrices of these measurements were obtained by dividing the survey area into $50$ equal-area jackknife regions. Because the signal is statistics-limited, the off-diagonal terms of this matrix are found to be negligible. To further support this statement, we point out that due to the low number density of the sample (see Figure~\ref{fig:redshift}), the clusters do not overlap in real space.
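Schematically, this estimator can be computed with \textsc{treecorr} as in the following sketch (ours, with toy stand-in coordinates and binning; attribute names follow recent \textsc{treecorr} versions, and angular separations would still need to be converted to projected radii $R$):
\begin{verbatim}
import numpy as np
import treecorr

rng = np.random.default_rng(1)               # toy stand-in coordinates [deg]
ra_l, dec_l = rng.uniform(0, 10, 500), rng.uniform(-5, 5, 500)
ra_g, dec_g = rng.uniform(0, 10, 20000), rng.uniform(-5, 5, 20000)
ra_r, dec_r = rng.uniform(0, 10, 100000), rng.uniform(-5, 5, 100000)

lens = treecorr.Catalog(ra=ra_l, dec=dec_l, ra_units='deg', dec_units='deg')
gals = treecorr.Catalog(ra=ra_g, dec=dec_g, ra_units='deg', dec_units='deg')
rand = treecorr.Catalog(ra=ra_r, dec=dec_r, ra_units='deg', dec_units='deg')

cfg = dict(min_sep=1.0, max_sep=120.0, nbins=15, sep_units='arcmin')
dd, dr = treecorr.NNCorrelation(**cfg), treecorr.NNCorrelation(**cfg)
dd.process(lens, gals)                       # LRG-galaxy pair counts DD
dr.process(lens, rand)                       # LRG-random pair counts DR

# normalized counts: xi = (DD/DD_tot) / (DR/DR_tot) - 1
xi = (dd.npairs / dd.tot) / (dr.npairs / dr.tot) - 1.0
\end{verbatim}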
Formally, the correlation function written above is related to the surface overdensity of galaxies:
\begin{equation}
\Sigma_{i}(R) = \xi_i(R) \Sigma_{0, i},
\end{equation}
where $\Sigma_{0, i}$ is the average surface density of KiDS galaxies in the $i$-th redshift bin. However, since we are interested in the shape of the profile and not its amplitude, we did not take this parameter into account when stacking the correlation functions $\xi_i$.
The signal considered in this paper is a weighted sum of the individual correlation functions. Formally:
\begin{equation}
\frac{\Sigma_\text{g}(R)}{\Sigma_0} = \frac{\sum_i w_i(R) ~\xi_i(R)}{\sum_i w_i(R)},
\end{equation}
where $\Sigma_0$ is a constant needed to transform the dimensionless correlation function into the projected mass density. Because we decided to fit the combination $\Sigma_\text{g}(R)/\Sigma_0$ directly, the value of this constant is unimportant. To optimize the stacked signal, we used as weights $w_i$ the inverse variance of our measurement. This corresponds to an SNR weighted average, where the SNR is, in our case, dominated by the statistical error of the DD counts.
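The stack itself reduces to a few lines (a sketch; \texttt{xi} and \texttt{var} hold the per-bin correlation functions and their jackknife variances):
\begin{verbatim}
import numpy as np

def stack_profiles(xi, var):
    """xi, var: arrays of shape (n_z_bins, n_R_bins)."""
    w = 1.0 / var                            # inverse-variance weights w_i(R)
    return np.sum(w * xi, axis=0) / np.sum(w, axis=0)
\end{verbatim}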
The left side of Figure~\ref{fig:measurement} presents our measurement of the galaxy profile around the LRGs. As expected, the high-mass subsample has a higher amplitude compared to the entire sample.
\subsection{Weak lensing profile}
\label{sec:lensinganalysis}
The shapes of background sources are deformed, i.e., lensed, by the presence of matter along the line of sight. In the weak lensing regime, this results in the observed ellipticity $\bm{\epsilon}$ of a galaxy being a combination of its intrinsic ellipticity and a lensing shear. If we assume that the intrinsic shapes of galaxies are randomly oriented, the coherent shear in a region of the sky can therefore be computed as the mean of the ellipticity distribution.
Consider a circularly symmetric matter distribution acting as a lens. In this case, the shear only contains a tangential component, i.e., the shapes of background galaxies are deformed only in the directions parallel and perpendicular to the line in the sky connecting the source to the center of the lens. Because of this, we can define the lensing signal in an annulus of radius $R$ as the average value of the tangential components of the ellipticities $\epsilon^{(t)}$. The next few paragraphs provide the details of the exact procedure we followed to measure this lensing signal around the LRGs in our samples. For this second measurement, we used the weak lensing KiDS source catalog extending up to redshift $z=1.2$ \citep[see also,][]{2015MNRAS.452.3529V, Dvornik_2017}.
Based on the lensfit weights $w_s$ associated with each source, we defined \emph{lensing} weights for every lens-source combination,
\begin{equation}
\label{eq:lensingeff}
w_\text{l,s} = w_\text{s} \left(\Sigma_{\text{crit, l}}^{-1}\right)^{2},
\end{equation}
where the two indices $\text{l}$ and $\text{s}$ are used to indicate multiple lens-source pairs. The second factor in the product above represents a lensing efficiency contribution and, in our formalism, this quantity does not depend on the source. It is calculated instead as an average over the entire source redshift distribution $n(z_\text{s})$:
\begin{equation}
\label{eq:lensingeff2}
\Sigma_\text{crit, l}^{-1} = \frac{4\pi G}{c^2} \frac{d_\text{A}(z_\text{l})}{(1+z_\text{l})^2} \int_{z_\text{l}+\delta}^{\infty} dz_\text{s} \; \frac{d_\text{A}(z_\text{l}, z_\text{s})}{d_\text{A}(0, z_\text{s})} n(z_\text{s}),
\end{equation}
where $d_\text{A}(z_1, z_2)$ is the angular diameter distance between the redshifts $z_1$ and $z_2$ in the chosen cosmology. Sources that belong to the correlated structure surrounding the lens might scatter behind it due to the uncertainty of the photometric redshifts. The gap between the lens plane and the source plane in the expression above ($\delta=0.2$) ensures that our signal is not diluted by this effect \citep[see appendix A4 of][]{Dvornik_2017}.
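A minimal numerical version of Equation~\eqref{eq:lensingeff2} is sketched below (our illustration; the fiducial cosmology and the toy $n(z_\text{s})$ are assumptions):
\begin{verbatim}
import numpy as np
from astropy import units as u
from astropy.constants import G, c
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def sigma_crit_inv(z_l, z_s, n_s, delta=0.2):
    mask = z_s > z_l + delta                 # gap against correlated sources
    zs, ns = z_s[mask], n_s[mask]
    ratio = (cosmo.angular_diameter_distance_z1z2(z_l, zs)
             / cosmo.angular_diameter_distance(zs)).value
    integral = np.trapz(ratio * ns, zs) / np.trapz(n_s, z_s)
    prefac = (4 * np.pi * G / c**2
              * cosmo.angular_diameter_distance(z_l) / (1 + z_l)**2)
    return (prefac * integral).to(u.Mpc**2 / u.Msun)

z_grid = np.linspace(0.0, 1.2, 121)
n_grid = np.exp(-0.5 * ((z_grid - 0.7) / 0.3)**2)   # toy n(z_s)
print(sigma_crit_inv(0.44, z_grid, n_grid))
\end{verbatim}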
Once all of these ingredients are computed, an estimate of the measured lensing signal is given by:
\begin{equation}
\label{eq:deltaS}
\Delta \Sigma (R) =
\frac{
\sum_\text{l,s} \epsilon^\text{(t)}_{\text{l,s}} w_\text{l,s} \Sigma_{\text{crit, l}}
}{
\sum_\text{l,s} w_\text{l,s}
}
\frac{1}{1+m},
\end{equation}
where the sums are calculated over every source-lens pair, and $m$ is a residual multiplicative bias of order $0.014$ calibrated using image simulations \citep{Conti:2016gav, 2019A&A...624A..92K}. This signal is connected to the mass surface density $\Sigma_\text{m}(R)$ and its average value within that radius, $\overline{\Sigma}_\text{m}(<R)$.
\begin{equation}
\label{eq:esd}
\Delta \Sigma (R) = \overline{\Sigma}_\text{m}(<R) - \Sigma_\text{m} (R).
\end{equation}
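For a single radial bin, the estimator of Equation~\eqref{eq:deltaS} is then simply (a sketch; the per-pair arrays are assumed to be built as in Equations~\eqref{eq:lensingeff} and \eqref{eq:lensingeff2}):
\begin{verbatim}
import numpy as np

def delta_sigma_bin(e_t, w_ls, sigma_crit, m=0.014):
    """e_t, w_ls, sigma_crit: per lens-source pair arrays for one annulus."""
    return np.sum(e_t * w_ls * sigma_crit) / np.sum(w_ls) / (1.0 + m)
\end{verbatim}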
The covariance matrix of this average lensing signal was extracted through bootstrapping, i.e., by resampling $10^5$ times the $1006$ $1\times1$ deg$^2$ KiDS tiles used in the analysis. This signal, like the galaxy profile before, is also statistics-limited. Therefore we have not included the negligible off-diagonal terms of the covariance matrix in our analysis.
Finally, we note that we have thoroughly tested the consistency of our lensing measurement. We computed the expression in Equation~\eqref{eq:deltaS} using the cross-component $\epsilon^{(\times)}$ instead of the tangential $\epsilon^\text{(t)}$ and verified that its value was consistent with zero. Similarly, we also confirmed that the measurement was not affected by additive bias by measuring the lensing signal evaluated around random points.
\section{Three ways to measure cluster masses}
\label{sec:fit}
This section presents three independent measures of the total mass contained in the LRG haloes. We refer to these estimates as splashback (or dynamical) mass, lensing mass and abundance mass. The first two are extracted by fitting parametric profiles to the two signals presented in the previous section (Figure~\ref{fig:measurement}), and the third is based on a simple abundance matching argument. Fitting the galaxy profile allows us to constrain the splashback feature and provides a dynamical mass, while fitting the amplitude of the lensing signal provides a lensing mass.
\begin{table}
\begin{center}
\begin{tabular}{c|c}
\hline
Parameter & Prior \\ \hline
$\alpha$ & $\mathcal{N}(0.2, 2)$ \\
$g$ & $\mathcal{N}(4, 0.2)$ \\
$\beta$ & $\mathcal{N}(6, 0.2)$ \\
$r_\text{t}/(1~\text{Mpc})$ & $\mathcal{N}(1, 4)$ \\
$s_\text{e}$ & $[0.1, 2]$ \\ \hline
\end{tabular}
\end{center}
\caption{The priors used in the fitting procedure of Section~\ref{sec:fit}. When fitting the data in the left panel of Figure~\ref{fig:measurement}, we employ the model in Equation~\eqref{eq:DK14} with the priors presented above. For some parameters, we impose flat priors in a range, e.g. $[a, b]$, while for others we impose a Gaussian prior $\mathcal{N}(m, \sigma)$ with mean $m$ and standard deviation $\sigma$. We do not restrict the prior range of the two degenerate parameters $\bar{\rho}$ and $r_0$.}
\label{tab:priors}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=1\columnwidth]{3a.pdf}
\includegraphics[width=1\columnwidth]{3b.pdf}
\caption{Comparison of the mass measurements performed in this paper. Using three different techniques, we measured the mass of the haloes hosting our LRG sample (\emph{all}) and a high-luminosity subsample (\emph{high-mass}). The remarkable consistency between the three methods for both samples is a testament to the robustness of our LRG selection and the prospect of measuring halo masses from the splashback feature. Table~\ref{tab:table_masses} reports the same results in textual form. See Section~\ref{sec:discussion} for more details about this comparison.}
\label{fig:mass}
\end{figure*}
\subsection{Splashback mass}
\label{sec:fitmass}
Thanks to the splashback feature, it is possible to estimate the total halo mass by fitting the galaxy distribution with a flexible enough model. The essential feature that such a three-dimensional profile, $\rho(r)$, must capture is a sudden drop in density around $r_\text{200m}$. The most important derived quantity is the point of steepest slope, also known as the splashback radius $r_\text{sp}$. Equivalently, this location can be defined as the radius where the function $d \log \rho/d \log r$ reaches its minimum.
In general, the average projected correlation function can be written in terms of the average three-dimensional mass density profile as:
\begin{equation}
\label{eq:S}
\frac{\Sigma_\text{g} (R)}{\Sigma_0} = \frac{2}{\Sigma_0}\int_0^{\infty} d\Delta \, \rho\left(\sqrt{\Delta^2 + R^2}\right).
\end{equation}
In practice, we evaluated this integral in the range [$0$, $40$] Mpc and confirmed that our results are not sensitive to the exact value of the upper integration limit.
The specific density profile that we have used is based on \cite{2014ApJ...789....1D}, and it has the following form:
\begin{align}
\label{eq:DK14}
\rho(r) &=
\rho_{\text{Ein}}(r) f_{\text{trans}} (r) + \rho_{\text{out}} (r), \\
\rho_{\text{Ein}}(r) &= \rho_{\text{s}} \exp \left( -\frac{2}{\alpha}\left[\left( \frac{r}{r_{\text{s}}} \right)^{\alpha} - 1 \right] \right), \\
f_{\text{trans}} (r) &= \left[ 1+ \left(\frac{r}{r_\text{t}}\right)^{\beta}\right]^{-g/\beta}, \\
\rho_{\text{out}}(r) &= \bar{\rho} \left( \frac{r}{r_0} \right)^{-s_\text{e}}.
\end{align}
These expressions define a profile with two components: an inner halo and an infalling region.
The term $\rho_\text{Ein}(r) f_\text{trans}(r)$ represents the collapsed halo through a truncated Einasto profile with shape parameter $\alpha$ and amplitude $\rho_s$ \citep{Einasto1965}.
The parameters $g, \beta$ in the transition function determine the maximum steepness of the sharp drop between the two regions, and $r_\text{t}$ determines its approximate location. Finally, the term $\rho_\text{out}(r)$ describes a power-law mass distribution with slope $s_\text{e}$ and amplitude $\bar{\rho}$, parametrizing the outer region dominated by infalling material. For more information about the role of each parameter and its interpretation, we refer the reader to \cite{2014ApJ...789....1D}, and previous measurements presented in the introduction \citep[see, e.g.,][for more details about the role of the truncation radius $r_\text{t}$]{2019MNRAS.485..408C}.
This profile is commonly used to parameterize mass distributions, but in this section we use it to fit a galaxy number density profile. When performing this second type of fit, the amplitudes $\rho_\mathrm{s}$ and $\bar{\rho}$ are dimensionless and, together with the flexible shape of the profile, completely capture the connection between the galaxy and matter density fields. Similarly to $\Sigma_0$, the value of these constants is not the focus of this paper.
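To make the model concrete, the following sketch implements Equations~\eqref{eq:DK14} and \eqref{eq:S} and locates $r_\text{sp}$ as the minimum of the logarithmic slope (our illustration; the parameter values are placeholders, not fitted values):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def rho(r, rho_s, r_s, alpha, rho_bar, r0, s_e, r_t, g, beta):
    ein = rho_s * np.exp(-2.0 / alpha * ((r / r_s)**alpha - 1.0))
    f_trans = (1.0 + (r / r_t)**beta)**(-g / beta)
    return ein * f_trans + rho_bar * (r / r0)**(-s_e)

def sigma(R, p):
    """Line-of-sight projection (Eq. S), truncated at 40 Mpc as in the text."""
    return 2.0 * quad(lambda d: rho(np.sqrt(d**2 + R**2), *p), 0.0, 40.0)[0]

def log_slope(r, p, eps=1e-4):
    return (np.log(rho(r * (1 + eps), *p))
            - np.log(rho(r * (1 - eps), *p))) / (2 * eps)

def r_sp(p):
    """Radius minimizing dlog(rho)/dlog(r) within a bracketing range."""
    res = minimize_scalar(lambda lnr: log_slope(np.exp(lnr), p),
                          bounds=(np.log(0.3), np.log(5.0)), method='bounded')
    return np.exp(res.x)

p = (1.0, 0.3, 0.2, 0.05, 1.0, 1.2, 1.4, 4.0, 6.0)  # placeholder parameters
print(r_sp(p))
\end{verbatim}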
To extract the location of the splashback radius for our two LRG samples, we fitted this model profile to the correlation function data using the ensemble sampler \textsc{emcee} \citep{Foreman-Mackey2013}. The priors imposed on the various parameters are presented in Table~\ref{tab:priors}, and we highlight in particular that the range for $\alpha$ is a generous scatter around the expectation from numerical simulations \citep{Gao2008}. The best-fitting profiles extracted from this procedure are shown in Figure~\ref{fig:measurement}.
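Schematically, the fit looks as follows (a hedged sketch reusing \texttt{sigma()} from the previous block; the data vector and the fixed scales $r_\text{s}$, $r_0$ are toy placeholders, and the priors follow Table~\ref{tab:priors}):
\begin{verbatim}
import numpy as np
import emcee

R = np.logspace(-0.5, 0.8, 12)               # projected radii [Mpc]
p_true = (1.0, 0.3, 0.2, 0.05, 1.0, 1.2, 1.4, 4.0, 6.0)
y = np.array([sigma(Ri, p_true) for Ri in R])
yerr = 0.1 * y                               # toy data with 10% errors

def model(Rv, th):
    alpha, g, beta, r_t, s_e, rho_s, rho_bar = th
    p = (rho_s, 0.3, alpha, rho_bar, 1.0, s_e, r_t, g, beta)
    return np.array([sigma(Ri, p) for Ri in Rv])

def log_posterior(th, Rv, y, yerr):
    alpha, g, beta, r_t, s_e, rho_s, rho_bar = th
    if not (0.1 < s_e < 2.0) or rho_s <= 0 or rho_bar <= 0:
        return -np.inf                       # flat prior on s_e, positivity
    lp = -0.5 * (((alpha - 0.2) / 2.0)**2 + ((g - 4.0) / 0.2)**2
                 + ((beta - 6.0) / 0.2)**2 + ((r_t - 1.0) / 4.0)**2)
    return lp - 0.5 * np.sum(((y - model(Rv, th)) / yerr)**2)

ndim, nwalkers = 7, 16
p0 = (np.array([0.2, 4.0, 6.0, 1.4, 1.2, 1.0, 0.05])
      + 1e-3 * np.random.randn(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=(R, y, yerr))
sampler.run_mcmc(p0, 500)                    # short chain for illustration
\end{verbatim}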
In clusters, the location of the central galaxy might not correspond to the barycenter of the satellite distribution. While this discrepancy is usually accounted for in the modeling of the projected distribution in Equation~\eqref{eq:S}, we chose not to consider this effect in our primary analysis. This is justified by the fact that the miscentering term affects the profile within $R\sim0.1$ Mpc, while we are interested in the measurement around $R\sim 1$ Mpc \citep{2021arXiv210505914S}, and the data do not require a more flexible model to provide a good fit.
Finally, to transform the $r_\text{sp}$ measurements into a value for $M_\text{200m}$, we used the relations from \citet{2020ApJS..251...17D}, evaluated at our median redshift of $\bar{z}=0.44$. In this transformation, we employed the suggested theoretical definition of splashback, based on the $75$th percentile of the dark matter apocenter distribution. In the same paper, this definition of splashback based on particle dynamics has been found to accurately match the definition based on the minimum of $d\log \rho / d\log r$ used in this work. For more details about the relationship between these two definitions, we refer the reader to section 3.1 of \cite{2021MNRAS.tmp.1404C}.
Because the splashback radius depends on accretion rate, we used the median value of this quantity as a function of mass as a proxy for the effective accretion rate of our stacked sample. We note in particular that the additional scatter introduced by the accretion rate and redshift distributions is expected to be subdominant given the large number of LRGs we have considered.
\subsection{Lensing mass}
To extract masses from the lensing signal, we performed a fit using an NFW profile \citep{Navarro1996, Navarro1997}:
\begin{equation}
\label{eq:NFW}
\rho(r) =
\frac{1}{4 \pi F(c_\text{200m})}
\frac{M_\text{200m}}{r(r+ r_\text{200m}/c_\text{200m})^2},
\end{equation}
where $M_\text{200m}$ and $r_\text{200m}$ are related by Equation \eqref{eq:200m}, $c_\text{200m}$ is the halo concentration, and the function appearing in the first term is defined as:
\begin{equation}
\label{eq:f}
F(c) =\ln(1+c)-c/(1+c).
\end{equation}
From this three-dimensional profile, the lensing signal can be derived by replacing $\Sigma_\text{g}/\Sigma_0$ with $\Sigma_\text{m}$ in the projection integral of Equation~\eqref{eq:S} and then applying Equation~\eqref{eq:esd}.
We point out that we did not use the complex model of Equation~\eqref{eq:DK14} for the lensing measurement. This is because the differences between the Einasto profile used there and the NFW profile presented above are not expected to induce systematic biases at the precision of our measurements \citep[see, e.g.,][]{2016JCAP...01..042S}. Although extra complexity might not be warranted, particular care should still be taken when measuring profiles at large scales, where the difference between the more flexible profile and a traditional NFW profile is more pronounced. Consequently, we reduce any bias in our measurement by fitting only projected distances $R<1.5$ Mpc, where the upper limit is decided based on the $r_\text{sp}$ inferred by our galaxy distribution measurement.
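The corresponding model ingredients can be sketched as follows (ours; distances in Mpc, masses in $\text{M}_\odot$, and the small-radius cutoff in the mean surface density is a numerical convenience):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def F(c):
    return np.log(1.0 + c) - c / (1.0 + c)

def rho_nfw(r, M200m, r200m, c):
    return M200m / (4.0 * np.pi * F(c) * r * (r + r200m / c)**2)

def sigma_m(R, M200m, r200m, c):
    return 2.0 * quad(lambda d: rho_nfw(np.sqrt(d**2 + R**2),
                                        M200m, r200m, c), 0.0, 40.0)[0]

def delta_sigma(R, M200m, r200m, c):
    """Eq. (esd): mean surface density within R minus the local value."""
    mean_in = 2.0 / R**2 * quad(lambda Rp: Rp * sigma_m(Rp, M200m, r200m, c),
                                1e-3, R)[0]
    return mean_in - sigma_m(R, M200m, r200m, c)

print(delta_sigma(1.0, 5e13, 0.9, 5.0))      # placeholder halo
\end{verbatim}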
The mass and concentration of a halo sample are related, and several mass-concentration relations calibrated against numerical simulations are available in the literature.
For the measurement presented in this section, we used the mass-concentration relation of \cite{2013ApJ...766...32B}. However, because this relation is calibrated with numerical simulations based on a different cosmology, we also fit the lensing signal while keeping the concentration as a free parameter. This consistency check is particularly important because halo profiles are not perfectly self-similar \citep{2015ApJ...799..108D} and moving between different cosmologies or halo mass definitions might require additional calibration.
We perform the fit to the profiles in the right panel of Figure~\ref{fig:measurement} using the median redshift of our samples, $\bar{z}=0.44$. We find that statistical errors dominate the uncertainties, and we do not measure any systematic effect due to the assumed mass-concentration relation.
\subsection{Abundance mass}
In addition to the two mass measurements extracted from the galaxy and lensing profiles, we also calculated masses using an abundance matching argument.
The comoving density of haloes of a given mass is a function of cosmology \citep{1974ApJ...187..425P}. Since we expect a tight relationship between the mass of a halo and the luminosity of the associated galaxy, any lower limit in the first can be converted into a lower limit in the second. Therefore, our measurement of the comoving density in Figure~\ref{fig:redshift} can be converted into a mass measurement. We note, in particular, that this step assumes that \citet{2020arXiv200813154V} built a complete sample of LRGs with no contamination and that the luminosity estimates obtained in \citet{MCF2021} are accurate, at least in ranking.
We used the mass function of \cite{2008ApJ...688..709T} at the median redshift $\bar{z}=0.44$ to convert our fixed comoving densities into lower limits on the halo mass $M_\text{200m}$. To complete the process, we then extracted the mean mass of the sample using the same
mass function.
The relation between halo mass and galaxy luminosity is not perfect, however, since the galaxy luminosity function is shaped by active galactic nuclei activity and baryonic feedback. These processes induce an increased scatter in the stellar mass to halo mass relation \citep{2014MNRAS.445..175G}, which we have not accounted for. This effect, combined with the uncertainties in the LRG selection and luminosity fitting, is the main source of error for our abundance-matching mass. Since we have not performed these steps in this work, however, we decided not to produce an uncertainty for this measurement and report it here without an error bar.
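The conversion itself is straightforward once a mass function is tabulated, as in this sketch (the analytic form below is a toy stand-in for the Tinker et al. mass function, and the target density is a placeholder):
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

M = np.logspace(13, 15.5, 400)               # halo mass grid [Msun]
lnM = np.log(M)
dn_dlnM = 1e-3 * (M / 1e13)**-0.9 * np.exp(-M / 8e14)  # toy stand-in [Mpc^-3]

cum = cumulative_trapezoid(dn_dlnM, lnM, initial=0.0)
n_gt = cum[-1] - cum                         # cumulative density n(>M)

n_target = 1e-5                              # sample density, placeholder
M_min = np.interp(n_target, n_gt[::-1], M[::-1])   # invert n(>M) for M_min

sel = M >= M_min                             # mean mass above the limit
mean_M = (np.trapz(M[sel] * dn_dlnM[sel], lnM[sel])
          / np.trapz(dn_dlnM[sel], lnM[sel]))
print(M_min, mean_M)
\end{verbatim}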
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{fig2.pdf}
\includegraphics[width=1\columnwidth]{fig.pdf}
\includegraphics[width=1\columnwidth]{fig0.pdf}
\caption{Galaxy distribution measurements scale better with redshift compared to lensing measurements. The three panels show the posterior distribution of $r_\text{sp}$ obtained with the two techniques discussed in this paper for three different redshift ranges. The colored error bars indicate the $68$ percentile interval of each distribution. From top to bottom the ratio between the intervals for the two techniques increases significantly: $[0.15, 0.25, 0.4]$, demonstrating a different redshift dependence that benefits the galaxy distribution measurement. Note that this figure uses $r_\text{sp}$ as a comparison variable instead of the mass used in Figure~\ref{fig:mass}. This choice is due to the smaller error bars for this parameter. See the final paragraph of
Section~\ref{sec:fitmass} for more details about how the one-to-one transformation between these two variables was obtained.}
\label{fig:hz}
\end{figure}
\section{Discussion}
\label{sec:discussion}
In this section, we compare and validate the measurements presented in the previous one. As an example of the power granted by multiple cluster mass measurements from the same survey, we also present an interpretation of these measurements in the context of modified theories of gravity.
In Figure~\ref{fig:mass} and Table~\ref{tab:table_masses}, we present the results of our two main mass measurements combined with the abundance-matching estimate introduced in the previous subsection. All measurements are in agreement, providing evidence that there is no significant correlation between the selection criteria of our LRG sample and the measurements performed here. The inferred average splashback masses of our LRG samples have an uncertainty of around $50$ percent.
The first striking feature is the varying degree of precision among the different measurements. The lensing result is the most precise, even when the concentration parameter is allowed to vary. In particular, the fact that the inferred profiles do not exhaust the freedom allowed by the error bars in the right-hand panel of Figure~\ref{fig:measurement}
implies that our NFW model prior is responsible for the strength of our measurement and that a more flexible model will result in larger mass uncertainties. On the other hand, with splashback, we can produce a dynamical mass measurement without any knowledge of the shape of the average profile and, more importantly, without having to capture the exact nature of the measured scatter.
There is also a second, more important, difference between the two measurements that we want to highlight here. The SNR of the splashback mass is dominated by high-redshift LRGs since $\text{SNR}\sim \sqrt{N_\text{LRG}}$. While the ability to capture intrinsically fainter objects at low redshift might affect this scaling, we point out that the redshift-dependent magnitude cut introduced in Equation~\eqref{eq:magcut} explicitly prevents this. In contrast, the lensing weights in Equation~\eqref{eq:lensingeff} imply that the more numerous high-redshift objects do not dominate the lensing signal. This is due to a combination of the lower number of background sources available, the lower lensfit weights associated with fainter sources, and the geometrical term in Equation~\eqref{eq:lensingeff2}.
This point is explored quantitatively in Figure~\ref{fig:hz}, where we compare the two techniques for different redshift bins. The top panel is a projection of the left-hand panel of Figure~\ref{fig:mass} in terms of $r_\text{sp}$, while the other two are new results. These new measurements at higher redshift are obtained using the same methods presented in Section~\ref{sec:profiles}. To be precise: for the galaxy distribution, we impose a $10$ SNR cut for the KiDS galaxies and a subsequent color selection in the ${(i-Z)-(r-i)}$ plane; while for the lensing signal, we use the same source selection presented before. As visible in the figure, both measurements degrade for higher redshifts, but the two scale differently. If we consider the size of the $68$ percentile intervals for the two measurements, at $z=[0.2, 0.5]$ we obtain a ratio between the two of $1:7$, while at $z=[0.65, 0.7]$ we obtain a ratio of $1:2.5$, significantly better. As discussed in Section~\ref{sec:future}, this different scaling has important implications for future photometric missions.
As a final note on our main results, we point out that the difference between the masses of the two samples (\emph{all} and \emph{high-mass}) is $2\sigma$ for the lensing measurement, but it is not even marginally significant for the splashback values (due to the large error bars). As already shown in \cite{2019MNRAS.485..408C}, splashback measurements are heavily weighted towards the most massive objects. To produce a non-mass-weighted measure of the splashback feature, it is necessary to rescale the individual profiles with a proxy of the halo mass. However, because the study of $r_\text{sp}$ as a function of mass is not the main focus of this work, we leave this line of study open for future research.
\begin{table}
\hspace{-0.7cm}
\begin{tabular}{l|c|c|c|c}
\hline
Technique & \multicolumn{2}{c}{$M_\text{200m}$ ($10^{14}$ M$_\odot$)} & \multicolumn{2}{c}{$r_\text{sp}$ (Mpc)} \\
& All & High-mass & All & High-mass \\
\hline
Splashback & $0.57^{+0.36}_{-0.21}$ & $0.9^{+0.85}_{-0.38}$ & $1.48\pm 0.2$ & $1.68\pm 0.28$ \\
Lensing (fixed c) & $0.46\pm 0.03$ & $0.62\pm 0.05$ & $1.40\pm 0.01$ & $1.52\pm 0.02$ \\
\hline
Lensing (free c) & $0.44\pm 0.05$ & $0.54\pm 0.07$ & $1.39\pm 0.03$ & $1.6\pm 0.04$ \\
Abundance & $0.48$ & $0.74$ & $1.42$ & $1.6$\\
\hline
\end{tabular}
\caption{The mass measurements performed in this paper. This table summarizes the discussion of Section~\ref{sec:discussion} and the measurements presented in Figure~\ref{fig:mass} for our LRG samples (\emph{all} and \emph{high-mass}). The quoted splashback radii are in comoving coordinates. The abundance-matching measurements are provided without error bars as we have not modeled the selection function of our LRGs.
Most measurements and conversions between $M_\text{200m}$ and $r_\text{sp}$ are computed using a model at the median redshift $\bar{z}=0.44$, identical for both samples (see the end of Section~\ref{sec:fitmass} for details).
}
\label{tab:table_masses}
\end{table}
\subsection{Gravitational constants}
\label{sec:gravity}
In this subsection, we discuss how the combination of the lensing masses and splashback radii measured above can be used to constrain models of gravity. The principle behind this constraint is the fact that, while General Relativity (GR) predicts that the trajectories of light and massive particles are affected by the same metric perturbation, extended models generally predict a discrepancy between the two.
In extended models, the equations for the linearized-metric potentials
\citep[$\Phi$ and $\Psi$, see][]{1980PhRvD..22.1882B}
can be connected to the background-subtracted matter density $\rho(\bm{x})$ through the following equations \citep{2008JCAP...04..013A, 2008PhRvD..78b4015B, 2010PhRvD..81j4023P},
\begin{align}
\nabla^2 (\Phi + \Psi) = 8 \pi G \Sigma(x) \rho(x),
\label{eq:Poisson}
\\
\nabla^2 \Phi = 4 \pi G \mu(x) \rho(x).
\label{eq:Poisson2}
\end{align}
In the expressions above, the functions $\mu$ and $\Sigma$, also known as $G_\text{matter}/G$ and $G_\text{light}/G$, can in principle be functions of space and time (collectively indicated by $x$). We stress that the symbol $\Sigma$, previously used to refer to projected three-dimensional distributions ($\Sigma_\text{g}, \Sigma_\text{m}$), has a different use in this context. These equations are expressed in terms of $\Phi$ and $\Phi + \Psi$ because the trajectories of particles are affected by the first, while the deflection of light is governed by the second. In the presence of only non-relativistic matter, Einstein's equations in GR reduce to $\Phi=\Psi$ and we have $\Sigma = \mu = 1$.
The same type of deviation from GR can also be captured in the post-Newtonian parametrization by a multiplicative factor $\gamma$ between the two potentials: $\Psi = \gamma\Phi$. If $\mu, \Sigma$, and $\gamma$ are all constants, the three are trivially related:
\begin{equation}
\frac{\mu}{\Sigma} = \frac{1+\gamma}{2}.
\end{equation}
Under this same assumption, the ratio between the masses measured through lensing and the mass measured through the dynamics of test particles (e.g., faint galaxies or stars) can be used to constrain these parameters, and the literature contains multiple results concerning these extended models. Solar System experiments have constrained $\gamma$ to be consistent with its GR value ($\gamma=1$) up to $5$ significant digits \citep{2003Natur.425..374B}, but the current measurements at larger scales are substantially less precise. For kpc-sized objects (galaxy-scale), stellar kinematics have been combined with strong lensing measurements to obtain $10$ percent constraints \citep{Bolton:2006yz, 2018Sci...360.1342C}, while large-scale measurements ($\sim 10-100$ Mpc) can be obtained by combining cosmic shear and redshift space distortion measurements to achieve a similar precision \citep[see, e.g.,][]{2013MNRAS.429.2249S, 2018MNRAS.474.4894J}. As for the scales considered in this paper, a precision of about $30$ percent can be obtained by combining lensing masses with either the kinematics of galaxies inside fully collapsed cluster haloes \citep{2016JCAP...04..023P} or the distribution of hot X-ray emitting gas \citep{2015MNRAS.452.1171W}. However, in this case, the effects of the required assumptions (e.g., spherical symmetry and hydrostatic equilibrium for the gas) are harder to capture. In all cases, no deviation from GR has been measured.
As an example of the power of the measurements presented in Section~\ref{sec:fit}, we present here their implication for beyond-GR effects. On one hand, our lensing signal is a measurement of the amplitude $M_\text{200m, L}$ of the lensing matter density $\rho_L = \rho \Sigma$. On the other hand, the splashback radius $r_\text{sp}$ depends on the amplitude of $ \rho_L \times \mu/\Sigma$ and it is related to the splashback mass $M_\text{200m, sp}$. Therefore, we focus on the ratio of these two amplitudes measured in the high-mass sample:
\begin{align}
\frac{\mu}{\Sigma} = \frac{M_\text{200m, L}}{M_\text{200m, sp}} = 0.8 \pm 0.4 && \Leftrightarrow && \gamma = 0.6\pm 0.8.
\end{align}
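As a quick consistency check of these numbers, first-order error propagation on the values of Table~\ref{tab:table_masses} (with symmetrized splashback errors) gives values close to, though not identical with, the quoted ones, which are derived from the full posteriors:
\begin{verbatim}
import numpy as np

M_L, sM_L = 0.62, 0.05        # lensing mass, high-mass sample [1e14 Msun]
M_sp, sM_sp = 0.9, 0.6        # splashback mass, symmetrized errors
ratio = M_L / M_sp                            # mu / Sigma
s_ratio = ratio * np.hypot(sM_L / M_L, sM_sp / M_sp)
gamma, s_gamma = 2 * ratio - 1, 2 * s_ratio   # gamma = 2 mu/Sigma - 1
print(f"mu/Sigma = {ratio:.1f} +/- {s_ratio:.1f}")   # ~0.7 +/- 0.5
print(f"gamma    = {gamma:.1f} +/- {s_gamma:.1f}")   # ~0.4 +/- 0.9
\end{verbatim}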
In high-density regions such as the Solar System, the expectation $\gamma = 1$ must be recovered with high precision. Hence, alternative theories of gravity commonly predict scale- and density-dependent effects, which cannot be captured through constant values of $\mu$ and $\Sigma$. Because $r_\text{sp}$ marks a sharp density transition around massive objects, it is more suited to test these complicated dependencies. To provide an example of the constraints possible under this second, more complex, interpretation, we followed \cite{Contigiani_2019} to compute the effect of an additional scale-dependent force (also known as a fifth force) on the location of the splashback radius $r_\text{sp}$. In particular, the model we employed is an extension of self-similar spherical collapse models and neglects any non-isotropic effects, e.g. those introduced by miscentering and halo ellipticity.
In the context of symmetron gravity \citep{2011PhRvD..84j3521H}, the change in $r_\text{sp}$ introduced by the fifth force is obtained by integrating the trajectories of test particles in the presence or absence of this force. In total, the theory considered has three parameters: 1) $\lambda_0/R(t_0)$, the dimensionless vacuum Compton wavelength of the field that we fix to be $0.05$ times the size of the collapsed object; 2) $z_\text{SSB}$, the redshift corresponding to the moment at which the fifth force is turned on in cosmic history, that we fix at $z_\text{SSB}=1.25$; and 3) $f$, a dimensionless force-strength parameter that is zero in GR. The fixed values were chosen based on physical considerations related to the connection of these gravity models to dark energy, while maximizing the impact on splashback. See \cite{Contigiani_2019} for more details.
To match the expectation of the model to observations, we first converted the $M_\text{200m}$ lensing measurement into an expected splashback radius $r_\text{sp, L}$ by reversing the procedure explained at the end of Section~\ref{sec:fitmass} and then compared the measured $r_\text{sp}$ to this value. From the high-mass data, we obtained the following $1\sigma$ constraints:
\begin{align}
\label{eq:resultf}
\frac{ r_\text{sp, L} - r_\text{sp}}{r_\text{sp, L}} = 0.07 \pm 0.20 && \implies && f < 1.8.
\end{align}
The symmetron theories associated with $z_\text{SSB}\sim 1$ and cluster-sized objects correspond to a coupling mass scale $M_s$ of the order of $10^{-6}$ Planck masses, a region of the parameter space which is still allowed by the solar-system constraints \citep{2011PhRvD..84j3521H} and which has not been explored by other tests of symmetron gravity \citep[see, e.g.,][]{2018PhRvD..98f4019O, 2018LRR....21....1B}. In particular, the upper limit on $f$ produced here directly translates into a constraint on the symmetron field potential of \cite{Contigiani_2019}.\footnote{However, we stress here that this constraint does not have implications for dark energy, as the model considered is not able to drive cosmic acceleration in the absence of a cosmological constant.} In terms of the explicit parameters of the potential, reported here with an additional subscript $s$ for clarity ($M_s, \lambda_s, \mu_s$), we can define the degeneracy line delimiting the boundary of the constraint using the following relations:
\begin{align}
f \propto \mu_s \lambda_s^{-1} M_s^{-4}, && (1+z_\text{SSB})^3 \propto M_s^2\mu_s^2.
\end{align}
Therefore, our result shows that we can test the existence of scalar fields with quite weak couplings and directly project these measurements into a broader theory parameter space.
\subsection{Future prospects}
\label{sec:future}
Our results show that the precision of the recovered splashback mass is not comparable to the low uncertainty of the lensing measurements. Because of this, every constraint based on comparing the two is currently limited by the uncertainty of the first. While this paper's focus is not to provide accurate forecasts, we attempt to quantify how we expect these results to improve in the future with larger and deeper samples. In particular, we focus our attention on wide stage-IV surveys such as \emph{Euclid} \citep{laureijs2011euclid} and the Legacy Survey of Space and Time \citep[LSST,][]{2009arXiv0912.0201L}.
First, we investigate how our results can be rescaled. In the process of inferring $M_\text{200m}$ from $r_\text{sp}$, we find that the relative precision of the former is always a multiple ($3-4$) of the latter. This statement, which we have verified over a wide range of redshifts ($z \in [0, 1.5]$) and masses ($M_\text{200m} \in [10^{13}, 10^{15}]~\text{M}_\odot$), is a simple consequence of the low slope of the $M_\text{200m}-r_\text{sp}$ relation. Second, we estimate the size of a cluster sample we can obtain and how that translates into an improved error bar for $r_\text{sp}$. LSST is expected to reach $2.5$ magnitudes deeper than KiDS and to cover an area of the sky $18$ times larger \citep{2009arXiv0912.0201L}. Part of this region is covered by the Galactic plane and will need to be excluded in practice, but the resulting LRG sample will reach up to $z\sim1.2$ and cover a comoving volume about a factor $100$ larger than what is considered in this work. Because the selected LRGs are designed to have a constant comoving density, we can use this estimate to scale the error bars of our galaxy profile measurement. A sample $N=100$ times the size would result in a relative precision in $r_\text{sp}$ of about $2.5$ percent, which translates into a relative precision in $M_\text{200m}$ below $10$ percent. This result is obtained by simply re-scaling the error bars of the galaxy profiles by a factor $\sqrt{N} = 10$, but we stress that the effects do not scale linearly for $r_\text{sp}$ due to the slightly skewed posterior of this parameter. While this uncertainty is still larger than what is allowed by lensing measurements, we point out that this method can easily be applied to high-redshift clusters, for which lensing measurements are difficult due to the fewer background sources available (see Figure~\ref{fig:redshift}).
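The arithmetic behind this forecast fits in a few lines (the input factors are the ones quoted in the text; the naive rescaling underestimates the quoted $r_\text{sp}$ precision because the skewed posterior does not scale linearly):
\begin{verbatim}
import numpy as np

rsp_now = 0.2 / 1.48                  # ~14% precision on r_sp (Table, 'all')
rsp_naive = rsp_now / np.sqrt(100)    # naive 1/sqrt(N) rescaling: ~1.4%
m_future = 3.5 * 0.025                # M200m ~ (3-4) x the quoted 2.5% on r_sp
print(f"naive r_sp: {100*rsp_naive:.1f}%,  M200m: ~{100*m_future:.1f}%")
\end{verbatim}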
We note that this simple forecast sidesteps a few issues. Here we consider three of them and discuss their implications and possible solutions. 1) At high redshift, color identification requires additional bands, as the $4000$ \AA~break moves out of the LSST $grizy$ filters. Additional photometry will be required to account for this. 2) Even if we assume that an LRG sample can be constructed, the population of orbiting satellites at high redshift might not necessarily be easy to identify as the red sequence is only beginning to form. Ideally, there is always a color-magnitude galaxy selection that provides a profile compatible with the dark matter profile, but, at this moment, further investigation is required. 3) Finally, with more depth, we also expect fainter satellites to contribute to the galaxy profile signal, but the details of this population for large cluster samples at high redshift are not known. A simple extrapolation of the observed satellite magnitude distribution implies that the number of satellites forming the galaxy distribution signal might be enhanced by an additional factor $10$, reducing the errors in mass to a few percent. This, however, is complicated by the fact that different galaxy populations might present profiles inconsistent with the dark matter features \citep{2022arXiv220205277O}.
In addition to the forecast for the galaxy profiles discussed above, we also expect a measurement of $r_\text{sp}$ with a few percentage point uncertainty directly from the lensing profile \citep{2020MNRAS.499.3534X}. This precision will only be available for relatively low redshifts ($z\sim0.45$), enabling a precise comparison of the dark matter and galaxy profiles. This cross-check can also be used to understand the effects of galaxy evolution in shaping the galaxy phase-space structure \citep{2021arXiv210505914S} and help disentangle the effects of dynamical friction, feedback, and modified models of dark matter \citep{2016JCAP...07..022A, 2020JCAP...02..024B}.
\section{Conclusions}
\label{sec:conclusion}
Accretion connects the mildly non-linear environment of massive haloes to the intrinsic properties of their multi-stream regions. In the last few years, precise measurements of the outer edge of massive dark matter haloes have become feasible thanks to the introduction of large galaxy samples and a new research field has been opened.
In this paper, we have used the splashback feature to measure the average dynamical mass of haloes hosting bright KiDS LRGs. To support our result, we have validated this mass measurement using weak lensing masses and a simple abundance-matching argument (see Figure~\ref{fig:mass} and Table~\ref{tab:table_masses}).
The main achievement that we want to stress here is that these self-consistent measurements are exclusively based on photometric data. In particular, the bright LRG samples used here can be easily matched to simulations, offer a straightforward interpretation, and, in general, are found to be robust against systematic effects in the redshift calibration \citep{2021arXiv210106010B}. This is in contrast to other dynamical mass results presented in the literature: such measurements are based on expensive spectroscopic data \citep[see, e.g., ][]{2016ApJ...819...63R} and are found to produce masses higher than lensing estimates \citep{2020MNRAS.497.4684H}, an effect which might be due to systematic selection biases afflicting these more accurate measurements \citep{2015MNRAS.449.1897O}.
Because the relation between $r_\text{sp}$ and halo mass depends on cosmology, this measurement naturally provides a constraint on structure formation.
In this work, we have shown how the combination of splashback and lensing masses has the ability to constrain deviations from GR and the presence of fifth forces (see Section~\ref{sec:gravity}).
Although the precision of the splashback measurement is relatively low with current data, trends with redshift, mass, and galaxy properties are expected to be informative in the future \citep{2020MNRAS.499.3534X, 2021arXiv210505914S}. Next-generation data will enable new studies of the physics behind galaxy formation \citep{2020arXiv200811663A}, as well as the large-scale environment of massive haloes \citep{2021MNRAS.tmp.1404C}.
As mentioned in Section~\ref{sec:future}, stage IV surveys will substantially advance these new research goals. In particular, we have shown that splashback masses scale purely with survey volume, unlike lensing. This implies that this technique is uniquely positioned to provide accurate high-redshift masses.
\section*{Acknowledgements}
OC is supported by a de Sitter Fellowship of the Netherlands Organization for Scientific Research (NWO) and by the Natural Sciences and Engineering Research Council of Canada (NSERC). HH, MCF, and MV acknowledge support from the Vici grant No. 639.043.512 financed by the Netherlands Organisation for Scientific Research (NWO). AD is supported by a European Research Council Consolidator Grant No. 770935. ZY acknowledges support from the Max Planck Society and the Alexander von Humboldt Foundation in the framework of the Max Planck-Humboldt Research Award endowed by the Federal Ministry of Education and Research (Germany). CS acknowledges support from the Agencia Nacional de Investigaci\'on y Desarrollo (ANID) through FONDECYT grant no.\ 11191125. All authors contributed to the development and writing of this paper. The authorship list is given in two alphabetical groups: two lead authors (OC, HH) and a list of authors who made a significant contribution to either the data products or the scientific analysis.
\section*{Data Availability}
The Kilo-Degree Survey data is available at the following link \url{https://kids.strw.leidenuniv.nl/}. The intermediate data products used for this article will be shared at reasonable request to the corresponding authors.
\bibliographystyle{mnras}
\section{Introduction}
Flux compactifications of string theory on non-K{\"a}hler manifolds
\cite{Rohm:1985jv}-\cite{Serone:2003sv} have attracted much interest in recent years.
In heterotic string theory, it has been known for a long time that the moduli are partially stabilized by the
NS-NS three form flux $H_{MNP}$\cite{Rohm:1985jv}\cite{Strominger:1986uh}\cite{Dine:1985rz}.
In general, turning on fluxes produces a potential \cite{Gukov:1999ya}, and one finds new vacua with some stabilized moduli at the potential minima \cite{Kachru:2003aw}-\cite{Gauntlett:2003cy}.
More recently,
understanding of heterotic moduli stabilization was improved by taking into account the intrinsic torsion
of the geometry
\cite{Lopes Cardoso:2002hd}-\cite{Serone:2003sv}.
Historically, heterotic string theory was extensively studied in the 1980s as a candidate unified theory
including gravity. After D-branes were found, however, the research focus
shifted from heterotic to type II theories. In particular, while the brane-world scenario \cite{Randall:1999ee}\cite{Randall:1999vf}
was readily realized in type II theories using D-branes, no such constructions are known
in heterotic string theory. Despite this, an attempt was made in \cite{Kimura:2009tb}
to realize warped compactification of heterotic string theory by using NS5-branes,
where the
authors considered a domain-wall type smeared intersecting NS5-brane solution in $E_8~{\times}~E_8$ heterotic string theory
and explicitly solved the gaugino Dirac equation to find
one net chiral fermionic zeromode. This result was in agreement with the naive
counting argument of Nambu-Goldstone modes on this background \cite{Kimura:2009tb}.
In this paper,
we perform the gaugino-zeromode analysis on a similar smeared intersecting
background, in which, unlike the one considered in \cite{Kimura:2009tb},
the field configurations depend not only on one of the overall transverse coordinates
(the domain-wall type) but on the {\em full two-dimensional} space of coordinates
overall transverse to the branes (the vortex type)
\footnote{They constitute the intersecting $p$-brane
solution in the original form obtained in \cite{Argurio:1997gt,Ohta:1997gw}. Note that the relatively transverse dimensions are
still smeared.
}.
Although
the Dirac operator becomes nontrivial and much more
complicated than the one considered in \cite{Kimura:2009tb},
we will solve the zeromode equation under some boundary conditions, and
compute the complete spectrum of zeromodes.
In particular, we will find that, among infinite towers of Fourier modes,
there exist only three localized normalizable zeromodes, one of which has opposite chirality
to the other two. This agrees with the result obtained in \cite{Kimura:2009tb},
supporting the claim that there exists one net chiral zeromode localized
on the intersection of the heterotic five-brane system.
We begin with the neutral smeared intersecting five-brane solution \cite{Argurio:1997gt} \cite{Ohta:1997gw}:
\begin{eqnarray}
ds^2&=& \sum_{i,j=0,7,8,9}\eta_{ij}dx^idx^j+h(x^1,x^2)^2\sum_{\mu,\nu=1,2}\delta_{\mu\nu}dx^{\mu}dx^{\nu}+
h(x^1,x^2)\sum_{\mu,\nu=3,4,5,6}\delta_{\mu\nu}dx^{\mu}dx^{\nu},\nonumber\\
h(x^1,x^2)^2&=&e^{2\phi}, \label{Smeared Solution}
\end{eqnarray}
where
\begin{eqnarray}
h(x^1,x^2)=h_0+\xi\log{r}~,~~r=\sqrt{(x^1)^2+(x^2)^2}, \label{potential h}
\end{eqnarray}
$h_0$ and $\xi$ are real constants.
The profiles of the harmonic function $h(x^1,x^2)$ are shown in Figure \ref{fig:one}
for $\xi<0$, and in Figure \ref{fig:two} for $\xi>0$.
\begin{figure}[t]
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.8]{h_potential1.eps}
\end{center}
\caption{$\xi<0$}
\label{fig:one}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.8]{h_potential2.eps}
\end{center}
\caption{$\xi>0$}
\label{fig:two}
\end{minipage}
\end{figure}
Since $h(x^1,x^2)$ is equal to the string coupling, we only consider the region where
$h(x^1,x^2)$ is positive, and impose the boundary condition that
all the fields become $0$ where $h=0$.
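As a simple illustration of this region (our numerical sketch with arbitrary $h_0$ and $\xi$), $h$ vanishes at $r_0=e^{-h_0/\xi}$; for $\xi<0$ the physical region is $r<r_0$, while for $\xi>0$ it is $r>r_0$:
\begin{verbatim}
import numpy as np

h0, xi = 1.0, -0.3                    # arbitrary illustrative values
r0 = np.exp(-h0 / xi)                 # h(r0) = 0
r = np.linspace(0.1, 2 * r0, 200)
h = h0 + xi * np.log(r)
print(f"h vanishes at r0 = {r0:.1f}; h > 0 fraction: {np.mean(h > 0):.2f}")
\end{verbatim}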
The 3-form flux $H_{\mu\nu\rho}$ is
\begin{eqnarray}
H_{\mu\nu\rho}=
\begin{cases}
{1\over2}{\partial h(x^1,x^2)\over \partial x^1} & \mbox{if $(\mu,\nu,\rho)=(2,3,4),(2,5,6)$ and even permutations},\\
-{1\over2}{\partial h(x^1,x^2)\over \partial x^1} & \mbox{if $(\mu,\nu,\rho)=(2,4,3),(2,6,5)$ and even permutations},\\
-{1\over2}{\partial h(x^1,x^2)\over \partial x^2} & \mbox{if $(\mu,\nu,\rho)=(1,3,4),(1,5,6)$ and even permutations},\\
{1\over2}{\partial h(x^1,x^2)\over \partial x^2} & \mbox{if $(\mu,\nu,\rho)=(1,4,3),(1,6,5)$ and even permutations},\\
0 & \mbox{otherwise}.
\end{cases} \label{H flux}
\end{eqnarray}
(\ref{Smeared Solution}) and (\ref{H flux}) are
a solution to the equations of motion of the type II NS-NS sector Lagrangian.
The metric (\ref{Smeared Solution})
represents two NS5-branes extended in the dimensions shown in
Table \ref{5brane}.
We emphasize here that our solution is
different from the one adopted in \cite{Kimura:2009tb} in that the harmonic function
(\ref{potential h}) depends on both the $x^1$ and $x^2$ coordinates.
\newcommand{\bhline}[1]{\noalign{\hrule height #1}}
\newcommand{\bvline}[1]{\vrule width #1}
\begin{table}[b]
\begin{center}
\begin{tabular}{@{\bvline{1pt}}c|c|c|c|c|c|c|c|c|c|c@{\bvline{1pt}}}
\bhline{1pt}
~~ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9~ \\ \hline
~5-brane1 & ${\times}$ & ~ & ~ & ~ & ~ & ${\times}$ & ${\times}$ & ${\times}$ & ${\times}$ & ${\times}$~ \\ \hline
~5-brane2 & ${\times}$ & ~ & ~ & ${\times}$ & ${\times}$ & ~ & ~ & ${\times}$ & ${\times}$ & ${\times}$~ \\
\bhline{1pt}
\end{tabular}
\caption{Dimensions in which the 5-branes extend}\label{5brane}
\end{center}
\end{table}
In (\ref{potential h}), the parameter $\xi$ is related to the NS5-brane tension.
The brane action is calculated by using
the equations of motion as follows (in the Einstein frame):
\begin{eqnarray}
S_{\mbox{\tiny 5-Brane1}}=
-{T\over2\kappa^2}\int d^6x\sqrt{-\det{G_{mn}}}e^{-\phi/2}\delta(x^1,x^2), \nonumber
\end{eqnarray}
where $G_{mn}$ is the six-dimensional metric $(m,n=0,5,6,7,8,9)$,
and $T$ is the brane tension
\begin{eqnarray}
T=-2\pi\xi. \nonumber
\end{eqnarray}
Therefore, if $\xi < 0$, the 5-brane tension is positive.
On the other hand, if $\xi >0$, the tension becomes negative, implying that
this is an orientifold-like object.
In \cite{Imazato:2011mj}, it was argued that such an object in heterotic string theory
might be understood as a mirror to the Atiyah-Hitchin manifold.
In this paper, we only study positive tension branes.
Next we convert this type II NS5-brane background into an $E_8~{\times}~E_8$ heterotic
background by the standard embedding \cite{Callan:1991dj}.
We generalize the spin connection $\omega_{\mu}$ by adding the 3-form flux $H_{\mu\nu\rho}$
\begin{eqnarray}
\Omega_{\pm\mu}^{~\alpha\beta}=\omega_{\mu}^{~\alpha\beta}\pm H_{\mu}^{~\alpha\beta},
\end{eqnarray}
and we identify $\Omega_{+\mu}$ with the gauge connection $A_{\mu}$.
We present the gauge connections using Gell-Mann matrices $\lambda_i$ $(i=1,{\cdots},8)$ and $2{\times}2$ matrices
$
\mbox{\boldmath $1$}\equiv
\left(
\begin{array}{cc}
1 & 0\\
0 & 1
\end{array}
\right)
$,
$
\mbox{\boldmath $s$}\equiv i\sigma_2=
\left(
\begin{array}{cc}
0 & 1 \\
-1 & 0
\end{array}
\right)
$:
\begin{eqnarray}
A_1^{~\alpha\beta}&=&
-{1\over h(x^1,x^2)}{\partial h(x^1,x^2)\over \partial x^2}
\{
-{3\over4}(3\lambda_3+\sqrt{3}\lambda_8)\}~{\otimes}~\mbox{\boldmath $s$}, \label{gauge A}\\
A_2^{~\alpha\beta}&=&
{1\over h(x^1,x^2)}{\partial h(x^1,x^2)\over \partial x^1}
\{
-{3\over4}(3\lambda_3+\sqrt{3}\lambda_8)\}~{\otimes}~\mbox{\boldmath $s$}, \nonumber\\
A_3^{~\alpha\beta}&=&
{1\over2 h(x^1,x^2)^{3/2}}{\partial h(x^1,x^2)\over \partial x^1}
(-i\lambda_2)~{\otimes}~\mbox{\boldmath $1$ } -
{1\over2 h(x^1,x^2)^{3/2}}{\partial h(x^1,x^2)\over \partial x^2}
(-\lambda_1)~{\otimes}~\mbox{\boldmath $s$}, \nonumber\\
A_4^{~\alpha\beta}&=&
{1\over2 h(x^1,x^2)^{3/2}}{\partial h(x^1,x^2)\over \partial x^1}
(-\lambda_1)~{\otimes}~\mbox{\boldmath $s$ } +
{1\over2 h(x^1,x^2)^{3/2}}{\partial h(x^1,x^2)\over \partial x^2}
(-i\lambda_2)~{\otimes}~\mbox{\boldmath $1$}, \nonumber\\
A_5^{~\alpha\beta}&=&
{1\over2 h(x^1,x^2)^{3/2}}{\partial h(x^1,x^2)\over \partial x^1}
(-i\lambda_5)~{\otimes}~\mbox{\boldmath $1$ } +
{1\over2 h(x^1,x^2)^{3/2}}{\partial h(x^1,x^2)\over \partial x^2}
(+\lambda_4)~{\otimes}~\mbox{\boldmath $s$}, \nonumber\\
A_6^{~\alpha\beta}&=&
-{1\over2 h(x^1,x^2)^{3/2}}{\partial h(x^1,x^2)\over \partial x^1}
(+\lambda_4)~{\otimes}~\mbox{\boldmath $s$ } -
{1\over2 h(x^1,x^2)^{3/2}}{\partial h(x^1,x^2)\over \partial x^2}
(-i\lambda_5)~{\otimes}~\mbox{\boldmath $1$}. \nonumber
\end{eqnarray}
$\Omega_{+\mu}$, which is generically an $SO(6)$ spin connection, can be written
in this form and set equal to the gauge connection $A_{\mu}$, thanks to
the $SU(3)$ structure of this background.
The eigenvalues of $\mbox{\boldmath $s$}$, $\pm i$, distinguish which $SU(3)$ representation
the gaugino is in, $\mbox{\boldmath $3$}$ or $\bar{\mbox{\boldmath $3$}}$.
We adopt $\mbox{\boldmath $s$}=+i$ from now on.
These backgrounds (\ref{H flux}) and (\ref{gauge A}) preserve 1/4 of supersymmetries since
the generalized spin connections $\Omega_{\pm\mu}$ are in $SU(3)$.
In order to satisfy the Bianchi identity $dH=0$, we embed the gauge group $SU(3)$
to $E_8~{\times}~E_8$ and get the unbroken gauge symmetry $E_6(\times E_8)$.
The adjoint representation of $E_8$ is decomposed by embedding $SU(3)$ as follows:
\begin{eqnarray}
\mbox{\boldmath $248$}=(\mbox{\boldmath $78$},\mbox{\boldmath $1$})~{\oplus}~
(\mbox{\boldmath $27$},\mbox{\boldmath $3$})~{\oplus}~
(\bar{\mbox{\boldmath $27$}},\bar{\mbox{\boldmath $3$}})~{\oplus}~
(\mbox{\boldmath $1$},\mbox{\boldmath $8$}).
\end{eqnarray}
Since the gauge field has a vev in $SU(3)$,
the fields which transform as $(\mbox{\boldmath $27$},\mbox{\boldmath $3$})~{\oplus}~(\bar{\mbox{\boldmath $27$}},\bar{\mbox{\boldmath $3$}})$ (as well as $(\mbox{\boldmath $1$},\mbox{\boldmath $8$})$) become Nambu-Goldstone bosons.
We know that a $D=4$, ${\cal N}=1$ chiral supermultiplet has only two bosonic degrees of freedom.
The Nambu-Goldstone bosons, which belong to $\mbox{\boldmath $27$}$ and $\bar{\mbox{\boldmath $27$}}$, must be combined with their superpartners
into ${\cal N}=1$ chiral supermultiplets.
This means that the moduli are a triplet (of the broken $SU(3)$) of chiral supermultiplets that transform as $\mbox{\boldmath $27$}$ (or $\bar{\mbox{\boldmath $27$}}$) of $E_6$.
Therefore, from this bosonic moduli counting argument
one might conclude
that there would be three chiral zeromodes.
In fact, however, only one net generation of fermions is left on the intersecting NS5-branes,
since the chiralities of the localized solutions are different, as we show below.
The results agree with \cite{Kimura:2009tb}.
To determine the number of generations localized on the branes,
we need to compute the Dirac index.
The ten-dimensional heterotic gaugino equation of motion is
\begin{eqnarray}
\Gamma^M D_M(\omega-{1\over3}H,A)\chi-\Gamma^M\chi\partial_M\phi
+{1\over8}\Gamma^M\gamma^{AB}(F_{AB}+\hat{F}_{AB})(\psi_M+{2\over3}\Gamma_M\lambda)=0,
\end{eqnarray}
where
\begin{eqnarray}
D_M(\omega-{1\over3}H,A)\chi
\equiv\Bigl( \partial_M + {1\over4}(\omega_M^{~AB}-{1\over3}H_M^{~AB})\gamma_{AB}+\mbox{ad}A_M \Bigr)\chi
\end{eqnarray}
and $\mbox{ad}A_M{\cdot}\chi\equiv[A_M,\chi]$.
Setting $\psi_M=0$, $\lambda=0$ and defining $\tilde{\chi}\equiv e^{-\phi}\chi$, the equation of motion becomes
\begin{eqnarray}
\Gamma^MD_M(\omega-{1\over3}H,A)\tilde{\chi}=0. \label{10D Dirac}
\end{eqnarray}
The $SO(9,1)$ gamma matrices $\Gamma^M$ are
\begin{eqnarray}
\Gamma^a=\gamma^a_{\mbox{\tiny 4D}}{\otimes}\mbox{\boldmath $1$}_8,~~(a=0,7,8,9) ~~~
\Gamma^{\alpha}=\gamma^{\sharp}_{\mbox{\tiny 4D}}{\otimes}\gamma^{\alpha}~~(\alpha=1,{\dotsc},6) \nonumber
\end{eqnarray}
where $\gamma^a_{\mbox{\tiny 4D}}$, $\gamma^{\sharp}_{\mbox{\tiny 4D}}$ are the ordinary $SO(3,1)$ gamma
matrices and chiral operator, respectively.
$\gamma^{\alpha}$ are the $SO(6)$ gamma matrices, which we fix as
\begin{eqnarray}
\gamma^1=\sigma_1{\otimes}\mbox{\boldmath$1$}{\otimes}\mbox{\boldmath$1$},~~\gamma^2=\sigma_2{\otimes}\mbox{\boldmath$1$}{\otimes}\mbox{\boldmath$1$},~~\gamma^3=\sigma_3{\otimes}\sigma_1{\otimes}\mbox{\boldmath$1$}, \nonumber\\
\gamma^4=\sigma_3{\otimes}\sigma_2{\otimes}\mbox{\boldmath$1$},~~\gamma^5=\sigma_3{\otimes}\sigma_3{\otimes}\sigma_1,~~\gamma^6=\sigma_3{\otimes}\sigma_3{\otimes}\sigma_2 .\nonumber
\end{eqnarray}
The $SO(6)$ chiral operator is defined by $\gamma^{\sharp}=-i\gamma^1\gamma^2\gamma^3\gamma^4\gamma^5\gamma^6$.
More explicitly, it is represented in the matrix form:
\begin{eqnarray}
\gamma^{\sharp}
=
\left(
\begin{array}{cccccccc}
-\mbox{\boldmath $1$}_3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & \mbox{\boldmath $1$}_3 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \mbox{\boldmath $1$}_3 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 &-\mbox{\boldmath $1$}_3 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \mbox{\boldmath $1$}_3 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 &-\mbox{\boldmath $1$}_3 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 &-\mbox{\boldmath $1$}_3 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \mbox{\boldmath $1$}_3
\end{array}
\right) \label{chiral op}
\end{eqnarray}
where $\mbox{\boldmath $1$}_3=
\left(
\begin{array}{ccc}
1&0&0\\
0&1&0\\
0&0&1
\end{array}
\right)
$.
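These conventions are easy to verify numerically; the following sketch (ours; the trivial $\mbox{\boldmath $1$}_3$ factor from the gauge index is omitted) checks the Clifford algebra and reproduces the sign pattern of $\gamma^{\sharp}$ in (\ref{chiral op}):
\begin{verbatim}
import numpy as np
from functools import reduce

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
one = np.eye(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

gam = [kron3(s1, one, one), kron3(s2, one, one), kron3(s3, s1, one),
       kron3(s3, s2, one), kron3(s3, s3, s1), kron3(s3, s3, s2)]

for a in range(6):                    # {gamma^a, gamma^b} = 2 delta^{ab}
    for b in range(6):
        anti = gam[a] @ gam[b] + gam[b] @ gam[a]
        assert np.allclose(anti, 2 * (a == b) * np.eye(8))

chiral = -1j * reduce(np.matmul, gam)
print(np.real(np.diag(chiral)))       # -> [-1, 1, 1, -1, 1, -1, -1, 1]
\end{verbatim}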
We can decompose an $SO(9,1)$ Majorana-Weyl spinor into $SO(3,1)$ and $SO(6)$ spinors as
$\mbox{\boldmath $16$} =(\mbox{\boldmath $2$}_+,\mbox{\boldmath $4$}_+)~{\oplus}~(\mbox{\boldmath $2$}_-,\mbox{\boldmath $4$}_-)$,
where the subscripts $\pm$ are the $SO(3,1)$ and $SO(6)$ chiralities.
Since the $SO(9,1)$ spinor is Majorana,
the $(\mbox{\boldmath $2$}_+,\mbox{\boldmath $4$}_+)$ and $(\mbox{\boldmath $2$}_-,\mbox{\boldmath $4$}_-)$ components are associated with charge conjugation.
Equation (\ref{10D Dirac}) can be divided into $i=0,7,8,9$ and $\mu =1,{\cdots},6$ directions \cite{Kimura:2006af}
\begin{eqnarray}
\Gamma^i\partial_i \tilde{\chi}+
\Gamma^{\mu}D_{\mu}(\omega-{1\over3}H,A)\tilde{\chi}=0.
\end{eqnarray}
If $\tilde{\chi}=\tilde{\chi}_{\mbox{\tiny 4D}}~{\otimes}~\tilde{\chi}_{\mbox{\tiny 6D}}$,
the second term looks like the mass term of the four-dimensional Dirac equation.
Since our background is a singular noncompact geometry, no index theorem is available. Therefore, in order to count the number of localized fermionic zeromodes
on the intersection of the five-branes,
we will solve the Dirac equation directly
\begin{eqnarray}
\gamma^{\mu}D_{\mu}(\omega-{1\over3}H,A)\tilde{\chi}_{\mbox{\tiny 6D}}=0 \label{6D Dirac}
\end{eqnarray}
and find localized solutions which may have either positive or negative chirality.
The gauge connection $A_{\mu}$ takes values in the $SU(3)$ subalgebra,
and therefore $\tilde{\chi}_{\mbox{\tiny 6D}}$ transforms as a triplet of $SU(3)$.
The two chiralities of $\tilde{\chi}_{\mbox{\tiny 6D}}$ are related to each other by charge conjugation:
if we switch the $SU(3)$ representation between $\mbox{\boldmath $3$}$ and $\bar{\mbox{\boldmath $3$}}$,
the chirality of $\tilde{\chi}_{\mbox{\tiny 6D}}$ flips as well.
Since $\tilde{\chi}_{\mbox{\tiny 6D}}$ depends only on $x^1$ and $x^2$, (\ref{6D Dirac}) becomes
\begin{eqnarray}
\left(
\begin{array}{cc}
~ & \partial_{\bar{z}}~\mbox{\boldmath $1$}_{12} \\
\partial_{z}~\mbox{\boldmath $1$}_{12} & ~
\end{array}
\right)
\tilde{\chi}_{\mbox{\tiny 6D}}+
{\cal{M}}
\tilde{\chi}_{\mbox{\tiny 6D}}=0, \label{new 6D Dirac}
\end{eqnarray}
where
\begin{eqnarray}
\partial_{z}&=&{\partial \over \partial z}\equiv
{\partial \over \partial x^1} + i {\partial \over \partial x^2},~~
\partial_{\bar{z}}={\partial \over \partial \bar{z}}\equiv
{\partial \over \partial x^1} - i {\partial \over \partial x^2}, \nonumber\\
{\cal M} &=&
\gamma^{\mu}\{{1\over4}(\omega_{\mu}^{~AB}-{1\over3}H_{\mu}^{~AB})\Gamma_{AB}+\mbox{ad}A_{\mu}\}, \nonumber
\end{eqnarray}
and ${\mbox{\boldmath $1$}}_{12}$ is the $12{\times}12$ unit matrix.
$\tilde{\chi}_{\mbox{\tiny 6D}}$ is a fermion that has 24 components, and
we solve the Dirac equation (\ref{new 6D Dirac}) for each component.
In order to find solutions, we expand these components by Fourier modes as
\begin{eqnarray}
\tilde{\chi}_{\mbox{\tiny 6D}}^{\pm N}=\sum_{m=-\infty}^{\infty}e^{im\theta}\tilde{\chi}_{\mbox{\tiny 6D}~m}^{\pm N}(r),
\end{eqnarray}
where the signs $\pm$ denote the chiralities, and $N$ $(=1,{\cdots},12)$ labels the
components of the gaugino $\tilde{\chi}_{\mbox{\tiny 6D}}$.
Thus we obtain 24 Dirac equations from (\ref{new 6D Dirac}), one for each component,
some of which are complicated
differential equations.
However, we can find a linear transformation matrix $T$:
\begin{eqnarray}
T=
\left(
\begin{array}{cc}
t_1 & t_2 \\
t_2 & t_1
\end{array}
\right), \nonumber
\end{eqnarray}
\begin{eqnarray}
t_1=
\left(
\begin{array}{cccccccccccc}
0 & 0 & 0 & 0 & 0 &-1/2 & 0 &-1/2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1/2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1/2 & 0 &-1/2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1/2 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1/4 & 0 & 1/4 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1/2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1/2 & 0 & 0 & 0 & 0 & 0
\end{array}
\right), \nonumber
\end{eqnarray}
\begin{eqnarray}
t_2=
\left(
\begin{array}{cccccccccccc}
1/2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &-1/2 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &-1/2 \\
1/2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1/2 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1/2
\end{array}
\right).\nonumber
\end{eqnarray}
and define a new gaugino field
$\tilde{\chi}^{\prime}_{\mbox{\tiny 6D}}=T\tilde{\chi}_{\mbox{\tiny 6D}}$.
Then the equations in terms of the new components of $\tilde{\chi}^{\prime}_{\mbox{\tiny 6D}}$ become first order differential equations of the radial coordinate $r$,
so that one can easily solve the equations.
The generic form of these 24 equations can be written as follows:
\begin{eqnarray}
{d \over d r}\tilde{\chi}_{\mbox{\tiny 6D}~m}^{\pm N}
-{m+n(N)\over r}\tilde{\chi}_{\mbox{\tiny 6D}~m}^{\pm N}
+{\alpha(N)\over h(r)^2}{d h(r) \over d r}\tilde{\chi}_{\mbox{\tiny 6D}~m}^{\pm N}=0, \label{equation-Y}
\end{eqnarray}
where $n(N)$ is an integer which depends on each $N$.
The real numbers $\alpha(N)$ are found to be
\begin{eqnarray}
+~:~~\alpha&=&
\{~2~,~{3\over2}~,~{3\over2}~,~{3\over2}~,~1~,~1~,~{7\over2}~,~-1~,~1~,~1~,~{7\over2}~,~{3\over2}~\} \nonumber\\
-~:~~\alpha&=&
\{~1~,~{3\over2}~,~{3\over2}~,~{3\over2}~,~2~,~2~,~-{1\over2}~,~4~,~2~,~2~,~-{1\over2}~,~{3\over2}~\}
\label{sets}
\end{eqnarray}
for each chirality.
These sets of numbers are exactly the same as the ones
encountered in the domain-wall type case \cite{Kimura:2009tb}.
Note, however, that, unlike \cite{Kimura:2009tb}, our equations (\ref{equation-Y})
depend on the Fourier frequency $m$.
The solutions of (\ref{equation-Y}), with a constant of integration $C$, are
\begin{eqnarray}
\tilde{\chi}_{\mbox{\tiny 6D}~m}^{\pm N}=Cr^{m+n(N)}e^{\alpha\over h(r)}.
\end{eqnarray}
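For completeness, we spell out the intermediate step (ours): equation (\ref{equation-Y}) is separable, and since ${d \over dr}\left({1 \over h(r)}\right)=-{1 \over h(r)^2}{d h(r) \over d r}$, direct integration gives
\begin{eqnarray}
\ln \tilde{\chi}_{\mbox{\tiny 6D}~m}^{\pm N}
=\left(m+n(N)\right)\ln r+{\alpha(N) \over h(r)}+{\rm const}, \nonumber
\end{eqnarray}
which exponentiates to the solution above.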
We consider the cases of $\alpha>0$ and $\alpha<0$ separately.
({\it i}) $\alpha(N)>0$\\
In this case the boundary condition is satisfied if and only if $C=0$,
and therefore there are no localized modes.
({\it ii}) $\alpha(N)<0$\\
In this case, any Fourier mode satisfies the boundary condition. However,
we also require that the mode be normalizable: $\int d^2x|\tilde{\chi}|^2<\infty$.
Such modes are the ones with $m+n(N)=0$; only a single Fourier mode corresponds
to a normalizable mode and is localized for each negative $\alpha(N)$.
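In polar coordinates the normalizability condition for a single Fourier mode reads (we spell out this intermediate step; convergence then hinges on the behavior of $h(r)$ at $r\to0$ and $r\to\infty$)
\begin{eqnarray}
\int d^2x\,|\tilde{\chi}|^2
=2\pi |C|^2\int_0^{\infty}dr~r^{\,2(m+n(N))+1}\,e^{2\alpha(N)/h(r)}<\infty . \nonumber
\end{eqnarray}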
Therefore, the sets (\ref{sets}) show that there is only one normalizable localized
mode with positive chirality, while there are two with negative chirality. This is
exactly the same result as \cite{Kimura:2009tb}, confirming the claim that
there exists one net chiral zeromode localized on this heterotic five-brane system.
\section*{Acknowledgments}
We thank Shun'ya Mizoguchi and Tetsuji Kimura for discussions and comments.
\section{Introduction}
\par Modern deep learning methods are descended from such long-studied fields as statistical learning, optimization, and signal processing, all of which were built on mathematically rigorous foundations. In statistical learning, principled kernel methods have vastly improved the performance of SVMs and PCA \citep{suykens1999least, scholkopf1997kernel}, and boosting theory has enabled weak learners to generate strong classifiers \citep{schapire1990strength}. Optimizers in deep learning are borrowed from the field of convex optimization, where momentum optimizers \citep{nesterov1983method} and conjugate gradient methods provably solve ill-conditioned problems with high efficiency \citep{hestenes1952methods}.
Deep learning harnesses foundational tools from these mature parent fields.
Despite its rigorous roots, deep learning has driven a wedge between theory and practice.
Recent theoretical work has certainly made impressive strides towards understanding optimization and generalization in neural networks. But doing so has required researchers to make strong assumptions and study restricted model classes.
In this paper, we seek to understand whether deep learning theories accurately capture the behaviors and network properties that make realistic deep networks work.
Following a line of previous work, such as \cite{swirszcz_local_2016}, \cite{zhang_understanding_2016}, \cite{balduzzi_shattered_2017} and \cite{santurkar_how_2018}, we put the assumptions and conclusions of deep learning theory to the test using experiments with both toy networks and realistic ones. We focus on the following important theoretical issues:
\begin{itemize}
\item Local minima: Numerous theoretical works argue that all local minima of neural loss functions are globally optimal or that all local minima are nearly optimal. In practice, we find highly suboptimal local minima in realistic neural loss functions, and we discuss reasons why suboptimal local minima exist in the loss surfaces of deep neural networks in general.
\item Weight decay and parameter norms: Research inspired by Tikhonov regularization suggests that low-norm minima generalize better, and for many, this is an intuitive justification for simple regularizers like weight decay. Yet for neural networks, it is not at all clear which form of $\ell_2$-regularization is optimal. We show this by constructing a simple alternative: biasing solutions toward a non-zero norm still works and can even measurably improve performance for modern architectures.
\item Neural tangent kernels and the wide-network limit: We investigate theoretical results concerning neural tangent kernels of realistic architectures. While stochastic sampling of the tangent kernels suggests that theoretical results on tangent kernels of multi-layer networks may apply to some multi-layer networks and basic convolutional architectures, the predictions from theory do not hold for practical networks, and the trend even reverses for ResNet architectures. We show that the combination of skip connections and batch normalization is critical for this trend in ResNets.
\item Rank: Generalization theory has provided guarantees for the performance of low-rank networks. However, we find that regularization which encourages high-rank weight matrices often outperforms that which promotes low-rank matrices. This indicates that low-rank structure is not a significant force behind generalization in practical networks. We further investigate the adversarial robustness of low-rank networks, which are thought to be more resilient to attack, and we find empirically that their robustness is often lower than the baseline or even a purposefully constructed high-rank network.
\end{itemize}
\section{Local minima in loss landscapes: Do suboptimal minima exist?}
It is generally accepted that ``in practice, poor local minima are rarely a problem with large networks.'' \citep{lecun_deep_2015}. However, exact theoretical guarantees for this statement are elusive. Various theoretical studies of local minima have investigated spin-glass models \citep{choromanska_loss_2014}, deep linear models \citep{laurent2018deep,kawaguchi_deep_2016}, parallel subnetworks \citep{haeffele_global_2017}, and dense fully connected models \citep{nguyen_loss_2018} and have shown that either all local minima are global or all have a small optimality gap. The apparent scarcity of poor local minima has led practitioners to develop the intuition that bad local minima (``bad'' meaning high loss value and suboptimal training performance) are practically non-existent.
To further muddy the waters,
some theoretical works prove the {\em existence} of local minima. Such results exist for simple fully connected architectures \citep{swirszcz_local_2016}, single-layer networks \citep{liang_understanding_2018, yun_small_2018}, and two-layer ReLU networks \citep{safran_spurious_2017}.
For example, \citet{yun2018small} show that local minima exist in single-layer networks with univariate output and unique datapoints. The crucial idea here is that all neurons are activated for all datapoints at the suboptimal local minima.
Unfortunately, these existing analyses of neural loss landscapes require strong assumptions (e.g. random training data, linear activation functions, fully connected layers, or extremely wide network widths) --- so strong, in fact, that it is reasonable to question whether these results have any bearing on practical neural networks or describe the underlying cause of good optimization performance in real-world settings.
In this section, we investigate the existence of suboptimal local minima from a theoretical perspective and an empirical one. If suboptimal local minima exist, they are certainly hard to find by standard methods (otherwise training would not work). Thus, we present simple theoretical results that inform us on how to construct non-trivial suboptimal local minima, concretely generalizing previous constructions, such as those by \citep{yun2018small}. Using experimental methods inspired by theory, we easily find suboptimal local minima in the loss landscapes of a range of classifiers.
Trivial local minima are easy to find in ReLU networks -- consider the case where bias values are sufficiently low so that the ReLUs are ``dead'' (i.e. inputs to ReLUs are strictly negative). Such a point is trivially a local minimum. Below, we make a more subtle observation that multilayer perceptrons (MLPs) must have non-trivial local minima, provided there exists a linear classifier that performs worse than the neural network (an assumption that holds for virtually any standard benchmark problem). Specifically, we show that MLP loss functions contain local minima where they behave identically to a linear classifier on the same data.
We now define a family of low-rank linear functions which represent an MLP. Let ``rank-$s$ affine function'' denote an operator of the form $G(\mathbf{x}) = A\mathbf{x} +\mathbf{b}$ with $\text{rank}(A) = s$.
\begin{definition}
Consider a family of functions, $\{F_\phi: \mathbb{R}^m \to \mathbb{R}^n \}_{\phi\in\mathbb{R}^P}$ parameterized by $\phi.$ We say this family has \emph{rank-$s$ affine expression} if for all rank-$s$ affine functions $G:\mathbb{R}^m \to \mathbb{R}^n$ and finite subsets $\Omega\subset \mathbb{R}^m$, there exists $\phi$ with $F_\phi(\mathbf{x})=G(\mathbf{x}), \, \forall \mathbf{x}\in\Omega$. If $s = \min(n,m)$ we say that this family has \emph{full affine expression}.
\end{definition}
We investigate a family of L-layer MLPs with ReLU activation functions, $\{F_\phi: \mathbb{R}^m \to \mathbb{R}^n \}_{\phi\in\Phi}$, and parameter vectors $\phi$, i.e., $\phi = (A_1, \mathbf{b}_1, A_2, \mathbf{b}_2, \hdots, A_L, \mathbf{b}_L)$, $F_{\phi}(\mathbf{x})=H_L(f(H_{L-1}...f(H_1(\mathbf{x}))))$, where $f$ denotes the ReLU activation function and $H_i(\mathbf{z}) = A_i\mathbf{z} + \mathbf{b}_i$. Let $A_i \in \mathbb{R}^{n_{i} \times n_{i-1}}$, $\mathbf{b}_i\in \mathbb{R}^{n_i}$ with $n_0 = m$ and $n_L = n$.
\begin{lemma}\label{ConstructiveLemma}
Consider a family of L-layer multilayer perceptrons with ReLU activations $\{F_\phi: \mathbb{R}^m \to \mathbb{R}^n \}_{\phi \in \Phi}$, and let $s = \min_i n_i$ be the minimum layer width. Such a family has rank-$s$ affine expression.
\end{lemma}
\begin{proof}
The idea of the proof is to use the singular value decomposition of any rank-$s$ affine function to construct the MLP layers and pick a bias large enough for all activations to remain positive. See Appendix \ref{proof:lem1}.
\end{proof}
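To make this construction concrete, the following is a minimal NumPy sketch (ours, not the appendix construction verbatim) of the two-layer case: factor the affine map through the hidden layer via the SVD and choose the bias large enough that every hidden pre-activation stays positive on the training inputs.
\begin{verbatim}
import numpy as np

def linear_to_relu_mlp(W, b, X):
    # Embed x -> W x + b into a two-layer ReLU MLP that is exactly
    # affine on the training inputs X (one row per example).
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A1 = Vt
    c = 1.0 + np.abs(X @ A1.T).max()   # ensures A1 x + b1 > 0 on X
    b1 = np.full(A1.shape[0], c)
    A2 = U * S                         # equals U @ diag(S)
    b2 = b - A2 @ b1                   # cancels the bias shift
    return A1, b1, A2, b2

def forward(x, A1, b1, A2, b2):
    h = np.maximum(A1 @ x + b1, 0.0)   # ReLU acts as the identity
    return A2 @ h + b2                 # equals W @ x + b for x in X
\end{verbatim}
Since all hidden units stay active on the training set, the network reproduces the affine classifier exactly there, which is the mechanism behind the inherited minima.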
The ability of MLPs to represent linear networks allows us to derive a theorem which implies that arbitrarily deep MLPs have local minima at which the performance of the underlying model on the training data is equal to that of a (potentially low-rank) linear model. In other words, neural networks inherit the local minima of elementary linear models.
\begin{theorem}\label{thm:local_minima_linear}
Consider a training set, $\{(\mathbf{x}_i,y_i)\}_{i=1}^N$, a family $\{F_{\phi}\}_{\phi}$ of MLPs with $s = \min_i n_i$ being the smallest width. Consider a parameterized affine function $G_{A,\mathbf{b}}$ solving
\begin{equation}
\min_{A,\mathbf{b}} \mathcal{L}(G_{A,\mathbf{b}}; \{(\mathbf{x}_i,y_i)\}_{i=1}^N), \qquad \text{subject to } \text{rank}(A)\leq s,
\end{equation}
for a continuous loss function $\mathcal{L}$. Then, for each local minimum, $(A', \mathbf{b}')$, of the above training problem, there exists a local minimum, $\phi'$, of the MLP loss $\mathcal{L}(F_{\phi}; \{(\mathbf{x}_i,y_i)\}_{i=1}^N)$ with the property that $F_{\phi'}(\mathbf{x}_i)=G_{A', \mathbf{b}'}(\mathbf{x}_i)$ for $i=1, 2, ..., N$.
\end{theorem}
\begin{proof}
See appendix \ref{proof:thm1}.
\end{proof}
The proof of the above theorem constructs a network in which all activations of all training examples are positive, generalizing previous constructions of this type such as \citet{yun2018small} to more realistic architectures and settings. Another paper has employed a similar construction concurrently to our own work \citep{he2020nonlinearities}. We do expect that the general problem in expressivity occurs every time the support of the activations coincides for all training examples, as the latter reduces the deep network to an affine linear function (on the training set), which relates to the discussion in \citet{balduzzi_shattered_2017}. We test this hypothesis below by initializing deep networks with biases of high variance.
\begin{remark}[CNN and more expressive local minima]
Note that the above constructions of Lemma \ref{ConstructiveLemma} and Theorem \ref{thm:local_minima_linear} are not limited to MLPs and could be extended to convolutional neural networks with suitably restricted linear mappings $G_\phi$ by using the convolution filters to represent identities and using the bias to avoid any negative activations on the training examples. Moreover, shallower MLPs can similarly be embedded into deeper MLPs recursively by replicating the behavior of each linear layer of the shallow MLP with several layers of the deep MLP. Linear classifiers, or even shallow MLPs, often have higher training loss than more expressive networks. Thus, we can use the idea of Theorem 1 to find various suboptimal local minima in the loss landscapes of neural networks. We confirm this with subsequent experiments.
\end{remark}
We find that initializing a network at a point that approximately conforms to Theorem 1 is enough to get trapped in a bad local minimum. We verify this by training a linear classifier on CIFAR-10 with weight decay (which has a test accuracy of $40.53\%$, loss of $1.57$, and gradient norm of $0.00375$ w.r.t. the logistic regression objective). We then initialize a multilayer network as described in Lemma \ref{ConstructiveLemma} to approximate this linear classifier and recompute these statistics on the full network (see Table \ref{tab:localopt}).
When training with this initialization, the gradient norm drops further, moving parameters even closer to the linear minimizer. The final training result still yields positive activations for the entire training dataset.
Moreover, any isolated local minimum of a linear network results in many local minima of an MLP $F_{\phi'}$, as the weights $\phi'$ constructed in the proof of Theorem \ref{thm:local_minima_linear} can undergo transformations such as scaling, permutation, or even rotation without changing $F_{\phi'}$ as a function during inference, i.e. $F_{\phi'}(\mathbf{x}) = F_{\phi}(\mathbf{x})$ for all $\mathbf{x}$ for an infinite set of parameters $\phi$, as soon as $F$ has at least one hidden layer.
While our first experiment initializes a deep MLP at a local minimum it inherited from a linear one to empirically illustrate our findings of Theorem \ref{thm:local_minima_linear}, Table \ref{tab:localopt} also illustrates that similarly bad local minima are obtained when choosing large biases (third row) and choosing biases with large variance (fourth row) as conjectured above. To significantly reduce the bias, however, and still obtain a sub-par optimum, we need to rerun the experiment with SGD without momentum, as shown in the last row, reflecting common intuition that momentum is helpful to move away from bad local optima.
\begin{table*}
\caption{Local minima for MLPs generated via various initializations. We show loss, euclidean norm of the gradient vector, and minimum eigenvalue of the Hessian before and after training. We use $500$ iterations of the power method on a shifted Hessian matrix computed on the full dataset to find the minimum eigenvalue\label{tab:localopt}. The experiment in the last row is trained with no momentum (NM).}
\centering
\begin{tabular}{crrrcrrr}\toprule
& \multicolumn{3}{c}{At Initialization} && \multicolumn{3}{c}{After training} \\
\cmidrule{2-4} \cmidrule{6-8}
Init. Type & Loss & Grad. & Min. EV && Loss & Grad. & Min. EV\\ \midrule
Default & 4.5963 & 0.5752 & -1.5549 && 0.0061 & 0.0074 & 0.0007\\
Lemma \ref{ConstructiveLemma} & 1.5702 & 0.0992 & 0.03125 && 1.5699 & 0.0414 & 0.0156\\
Bias+20 & 31.204 & 343.99 & -1.7421 && 2.3301 & 0.0090 & 0.0005\\
Bias $\in \mathcal{U}(-50,50) $& 51.445 & 378.36 & -430.49 && 2.3153 & 0.0048 & 0.0000\\
Bias $\in \mathcal{U}(-10,10)$ NM & 12.209 & 42.454 & -47.733 && 0.2198 & 0.0564 & 0.0013\\
\bottomrule
\end{tabular}
\end{table*}
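For reference, the minimum-eigenvalue estimates reported in Table \ref{tab:localopt} can be computed with Hessian-vector products and a spectral shift. The following is a minimal sketch (ours; it assumes the dominant Hessian eigenvalue in magnitude is the largest positive one, as it is at the minima we examine):
\begin{verbatim}
import torch

def hvp(loss, params, v):
    # Hessian-vector product via double backprop; v is a flat vector.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_g = torch.cat([g.reshape(-1) for g in grads])
    hv = torch.autograd.grad((flat_g * v).sum(), params,
                             retain_graph=True)
    return torch.cat([h.reshape(-1) for h in hv]).detach()

def min_eigenvalue(loss, params, iters=500):
    # Step 1: power method on H for the dominant eigenvalue lam_max.
    v = torch.randn_like(torch.cat([p.reshape(-1) for p in params]))
    v /= v.norm()
    for _ in range(iters):
        v = hvp(loss, params, v); v /= v.norm()
    lam_max = torch.dot(v, hvp(loss, params, v)).item()
    # Step 2: power method on (lam_max * I - H); its dominant
    # eigenvalue is lam_max - lam_min, so lam_min follows by shift.
    u = torch.randn_like(v); u /= u.norm()
    for _ in range(iters):
        u = lam_max * u - hvp(loss, params, u); u /= u.norm()
    gap = torch.dot(u, lam_max * u - hvp(loss, params, u)).item()
    return lam_max - gap
\end{verbatim}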
\begin{remark}[Sharpness of sub-optimal local optima]
An interesting additional property of minima found using the previously discussed initializations is that they are ``sharp''.
Proponents of the sharp-flat hypothesis for generalization have found that minimizers with poor generalization live in sharp attracting basins with low volume and thus low probability in parameter space \citep{keskar_large-batch_2016,huang2019understanding}, although care has to be taken to correctly measure sharpness \citep{dinh_sharp_2017}. Accordingly, we find that the maximum eigenvalue of the Hessian at each suboptimal local minimum is significantly higher than those at near-global minima. For example, the maximum eigenvalue of the initialization by Lemma 1 in Table \ref{tab:localopt} is estimated as $113,598.85$ after training, whereas that of the default initialization is only around $24.01$.
While our analysis has focused on sub-par local optima in training instead of global minima with sub-par generalization, both the scarcity of local optima during normal training and the favorable generalization properties of neural networks seem to correlate with their sharpness.
\end{remark}
\par In light of our finding that neural networks trained with unconventional initialization reach suboptimal local minima, we conclude that poor local minima can readily be found with a poor choice of hyperparameters. Suboptimal minima are less scarce than previously believed, and neural networks avoid these because good initializations and stochastic optimizers have been fine-tuned over time. Fortunately, promising theoretical directions may explain good optimization performance while remaining compatible with empirical observations. The approach followed by \citet{du_gradient_2019} analyzes the loss trajectory of SGD, showing that it avoids bad minima. While this work assumes (unrealistically) large network widths, this theoretical direction is compatible with empirical studies, such as \citet{goodfellow_qualitatively_2014}, showing that the training trajectory of realistic deep networks does not encounter significant local minima.
\section{Weight decay: Are small $\ell_2$-norm solutions better?}
\par Classical learning theory advocates regularization for linear models, such as SVM and linear regression. For SVM, $\ell_2$ regularization endows linear classifiers with a wide-margin property \citep{cortes1995support}, and recent work on neural networks has shown that minimum norm neural network interpolators benefit from over-parametrization \citep{hastie2019surprises}. Following the long history of explicit parameter norm regularization for linear models, weight decay is used for training nearly all high performance neural networks \citep{he_deep_2015, chollet_xception_2016, huang2017densely, sandler2018mobilenetv2}.
In combination with weight decay, all of these cutting-edge architectures also employ batch normalization after convolutional layers \citep{ioffe2015batch}.
With that in mind, \citet{van_laarhoven_l2_2017} shows that the regularizing effect of weight decay is counteracted by batch normalization, which removes the effect of shrinking weight matrices. \citet{zhang_three_2018} argue that the synergistic interaction between weight decay and batch norm arises because weight decay plays a large role in regulating the effective learning rate of networks, since scaling down the weights of convolutional layers amplifies the effect of each optimization step, effectively increasing the learning rate. Thus, weight decay increases the effective learning rate as the regularizer drags the parameters closer and closer towards the origin. The authors also suggest that data augmentation and carefully chosen learning rate schedules are more powerful than explicit regularizers like weight decay.
\par Other work echoes this sentiment and claims that weight decay and dropout have little effect on performance, especially when using data augmentation \citep{hernandez-garcia_deep_2018}. \citet{hoffer_norm_2018} further study the relationship between weight decay and batch normalization, and they develop normalization with respect to other norms. \citet{shah2018minimum} instead suggest that minimum norm solutions may not generalize well in the over-parametrized setting.
We find that the difference between performance of standard network architectures with and without weight decay is often statistically significant, even with a high level of data augmentation, for example, horizontal flips and random crops on CIFAR-10 (see Tables \ref{norm-bias} and \ref{norm-bias-normalized}). But is weight decay the most effective form of $\ell_2$ regularization? Furthermore, is the positive effect of weight decay because the regularizer promotes small norm solutions? We generalize weight decay by biasing the $\ell_2$ norm of the weight vector towards other values using the following regularizer, which we call \emph{norm-bias}:
\begin{equation}
R_\mu(\phi) = \left| \left(\sum_{i=1}^P \phi_i^2\right) - \mu^2 \right|.
\end{equation}
\par $R_{0}$ is equivalent to weight decay, but we find that we can further improve performance by biasing the weights towards higher norms (see Tables \ref{norm-bias} and \ref{norm-bias-normalized}). In our experiments on CIFAR-10 and CIFAR-100, networks are trained using weight decay coefficients from their respective original papers. ResNet-18 and DenseNet are trained with $\mu^2=2500$ and norm-bias coefficient $0.005$, and MobileNetV2 is trained with $\mu^2 = 5000$ and norm-bias coefficient $0.001$. $\mu$ is chosen heuristically by first training a model with weight decay, recording the norm of the resulting parameter vector, and setting $\mu$ to be slightly higher than that norm in order to avoid norm-bias leading to a lower parameter norm than weight decay. While we find that weight decay improves results over a non-regularized baseline for all three models, we also find that models trained with large norm bias (i.e., large $\mu$) outperform models trained with weight decay.
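In implementation terms, $R_\mu$ is a one-line addition to the training objective; a minimal sketch (ours; the coefficient and $\mu$ match the ResNet-18 setting described above) is:
\begin{verbatim}
import torch

def norm_bias(params, mu):
    # R_mu(phi) = | ||phi||^2 - mu^2 |; weight decay is the mu = 0 case.
    sq_norm = sum((p ** 2).sum() for p in params)
    return (sq_norm - mu ** 2).abs()

# Inside a training step (ResNet-18 setting: mu^2 = 2500, coeff 0.005):
# loss = task_loss + 0.005 * norm_bias(model.parameters(), mu=50.0)
\end{verbatim}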
\par These results lend weight to the argument that explicit parameter norm regularization is in fact useful for training networks, even deep CNNs with batch normalization and data augmentation. However, the fact that norm-biased networks can outperform networks trained with weight decay suggests that any benefits of weight decay are unlikely to originate from the superiority of small-norm solutions.
To further investigate the effect of weight decay and parameter norm on generalization, we also consider models without batch norm. In this case, weight decay directly penalizes the norm of the linear operators inside a network, since there are no batch norm coefficients to compensate for the effect of shrinking weights. Our goal is to determine whether small-norm solutions are superior in this setting where the norm of the parameter vector is more meaningful.
In our first experiment without batch norm, we experience improved performance training an MLP with \emph{norm-bias} (see Table \ref{norm-bias-normalized}). In a state-of-the-art setting, we consider ResNet-20 with Fixup initialization, a ResNet variant that removes batch norm and instead uses a sophisticated initialization that solves the exploding gradient problem \citep{zhang_fixup_2019}. We observe that weight decay substantially improves training over SGD with no explicit regularization --- in fact, ResNets with this initialization scheme train quite poorly without explicit regularization and data normalization. Still, we find that \emph{norm-bias} with $\mu^2 = 1000$ and norm-bias coefficient $0.0005$ achieves better results than weight decay (see Table \ref{norm-bias-normalized}). This once again refutes the theory that small-norm parameters generalize better and brings into doubt any relationship between classical Tikhonov regularization and weight decay in neural networks. See Appendix \ref{low-norm-appendix} for a discussion concerning the final parameter norms of Fixup networks as well as additional experiments on CIFAR-100, a harder image classification dataset.
\begin{table}[h!]
\begin{centering}
\caption{ResNet-18, DenseNet-40, and MobileNetV2 models trained on non-normalized CIFAR-10 data with various regularizers. Numerical entries are given by $\overline{m} (\pm s)$, where $\overline{m}$ is the average accuracy over $10$ runs, and $s$ represents standard error.}
\begin{tabular}{r|c|c|c}
Model & No weight decay ($\%$) & Weight decay ($\%$) & Norm-bias ($\%$) \\ \hline
ResNet & $93.46$ ($\pm 0.05$) & $94.06$ ($\pm 0.07$) & $\mathbf{94.86}$ ($\pm 0.05$) \\
DenseNet & $89.26$ ($\pm 0.08$) & $92.27$ ($\pm 0.06$) & $\mathbf{92.49}$ ($\pm 0.06$) \\
MobileNetV2 & $92.88$ ($\pm 0.06$) & $92.88$ ($\pm 0.09$)& $\mathbf{93.50}$ ($\pm 0.09$)\\
\end{tabular}
\label{norm-bias}
\end{centering}
\end{table}
\begin{table}[h!]
\begin{centering}
\caption{ResNet-18, DenseNet-40, MobileNetV2, ResNet-20 with Fixup initialization, and a 4-layer multi-layer perceptron (MLP) trained on normalized CIFAR-10 data with various regularizers. Numerical entries are given by $\overline{m} (\pm s)$, where $\overline{m}$ is the average accuracy over $10$ runs, and $s$ represents standard error.}
\begin{tabular}{r|c|c|c}
Model & No weight decay ($\%$) & Weight decay ($\%$) & Norm-bias ($\%$) \\ \hline
ResNet & $93.40$ ($\pm 0.04$) & $94.76$ ($\pm 0.03$) & $\mathbf{94.99}$ ($\pm 0.05$) \\
DenseNet & $90.78$ ($\pm 0.08$) & $92.26$ ($\pm 0.06$)& $\mathbf{92.46}$ ($\pm 0.04$)\\
MobileNetV2 & $92.84$ ($\pm 0.05$) & $\mathbf{93.64}$ ($\pm 0.05$)& $\mathbf{93.64}$ ($\pm 0.03$)\\
ResNet Fixup & $10.00$ ($\pm 0.00$) & $91.42$ ($\pm 0.04$)& $\mathbf{91.55}$ ($\pm 0.07$)\\
MLP & $58.88$ ($\pm 0.10$) & $58.95$ ($\pm 0.07$)& $\mathbf{59.13}$ ($\pm 0.09$)\\
\end{tabular}
\label{norm-bias-normalized}
\end{centering}
\end{table}
\section{Kernel theory and the infinite-width limit}
A recent surge of works discusses the properties of neural networks in the infinite-width limit, in particular connections between infinite-width deep neural networks and Gaussian processes; see \citet{lee_deep_2017}. The wide network limit and Gaussian process interpretations have inspired work on the neural tangent kernel \citep{jacot_neural_2018}, while \citet{lee_wide_2019} and \citet{bietti_kernel_2018} have used wide network assumptions to analyze the training dynamics of deep networks. The connection of deep neural networks to kernel-based learning theory seems promising, but how closely do current architectures match the predictions made for simple networks in the large-width limit?
We focus on the Neural Tangent Kernel (NTK), developed in \citet{jacot_neural_2018}. Theory dictates that, in the wide-network limit, the neural tangent kernel remains nearly constant as a network trains. Furthermore, neural network training dynamics can be described as gradient descent on a convex functional, provided the NTK remains nearly constant during training \citep{lee_wide_2019}. In this section, we experimentally test the validity of these theoretical assumptions.
Fixing a network architecture, we use $\mathcal{F}$ to denote the function space parametrized by $\phi \in \mathbb{R}^P$. For the mapping $F: \mathbb{R}^P \to \mathcal{F}$, the NTK is defined by
\begin{equation}
\Phi(\phi) = \sum_{p=1}^P \partial_{\phi_{p}} F(\phi) \otimes \partial_{\phi_{p}} F(\phi),
\end{equation}
where the derivatives $\partial_{\phi_{p}} F(\phi)$ are evaluated at a particular choice of $\phi$ describing a neural network.
The NTK can be thought of as a similarity measure between images; given any two images as input, the NTK returns an $n\times n$ matrix, where $n$ is the dimensionality of the feature embedding of the neural network. We sample entries from the NTK by drawing a set of $N$ images $\lbrace x_i\rbrace$ from a dataset, and computing the entries in the NTK corresponding to all pairs of images in our image set. We do this for a random neural network $f:\mathbb{R}^{m} \to \mathbb{R}^n$ and compute the tensor $\Phi(\phi) \in \mathbb{R}^{N \times N \times n \times n}$ of all pairwise realizations, restricted to the given data:
\begin{equation}
\Phi(\phi)_{ijkl} = \sum_{p=1}^P \partial_{\phi_{p}} f(\mathbf{x}_i, \phi)_k \cdot \partial_{\phi_{p}} f(\mathbf{x}_j, \phi)_l
\label{ntk_sample}
\end{equation}
By evaluating \Eqref{ntk_sample} using automatic differentiation, we compute slices from the NTK before and after training for a large range of architectures and network widths. We consider image classification on CIFAR-10 and compare a two-layer MLP, a four-layer MLP, a simple 5-layer ConvNet, and a ResNet. We draw 25 random images from CIFAR-10 to sample the NTK before and after training. We measure the change in the NTK by computing the correlation coefficient of the (vectorized) NTK before and after training. We do this for many network widths, and see what happens in the wide network limit. For MLPs we increase the width of the hidden layers; for the ConvNet (6-Layer, Convolutions, ReLU, MaxPooling) we increase the number of convolutional filters; for the ResNet we consider the WideResNet \citep{zagoruyko_wide_2016} architecture, where we increase its width parameter. We initialize all models with uniform He initialization as discussed in \citet{he_delving_2015}, departing from specific Gaussian initializations in theoretical works to analyze the effects for modern architectures and methodologies.
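One way to evaluate \Eqref{ntk_sample} with automatic differentiation is sketched below (ours; names and shapes are illustrative): stack per-sample Jacobians of the network outputs with respect to the flattened parameters and contract them over the parameter index.
\begin{verbatim}
import torch

def sample_ntk(model, images):
    # images: tensor of shape (N, C, H, W); model(x) has n outputs.
    params = [p for p in model.parameters() if p.requires_grad]
    jacobians = []
    for x in images:
        out = model(x.unsqueeze(0)).squeeze(0)       # shape (n,)
        rows = []
        for k in range(out.numel()):                 # one row per output
            grads = torch.autograd.grad(out[k], params,
                                        retain_graph=True)
            rows.append(torch.cat([g.reshape(-1) for g in grads]))
        jacobians.append(torch.stack(rows))          # (n, P)
    J = torch.stack(jacobians)                       # (N, n, P)
    # Phi_{ijkl} = sum_p J[i,k,p] J[j,l,p], cf. the equation above
    return torch.einsum('ikp,jlp->ijkl', J, J)
\end{verbatim}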
\begin{figure}[h]
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{images/ntk_sampling_linear_scaling.pdf}
\caption{}
\label{fig:ntk_a}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{images/ntk_sampling_corr.pdf}
\caption{}
\label{fig:ntk_b}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{images/ntk_sampling_ndiff.pdf}
\caption{}
\label{fig:ntk_c}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{images/ntk_stats_change_avg.pdf}
\caption{}
\label{fig:ntk_d}
\end{subfigure}
\caption{(a) The relative norm of the neural tangent kernel as a function of the number of parameters is shown for several networks. This figure highlights the difference between the behavior of ResNets and other architectures. Figure \ref{fig:ntk_c} visualizes the same data in a logarithmic scale. (b) The correlation of the neural tangent kernel before and after training. We expect this coefficient to converge toward 1 in the infinite-width limit for multi-layer networks as in \citet{jacot_neural_2018}. We do not observe this trend for ResNets as is clear from the curve corresponding to the WideResNet. (d) The average norm of parameter change decreases for simple architectures but stays nearly constant for the WideResNet.
\label{fig:ntk_sampling}}
\end{figure}
The results are visualized in Figure \ref{fig:ntk_sampling}, where we plot parameters of the NTK for these different architectures, showing how the number of parameters impacts the relative change in the NTK ($||\Phi_1 - \Phi_0|| / ||\Phi_0||$, where $\Phi_0$/$\Phi_1$ denotes the sub-sampled NTK before/after training) and correlation coefficient ($\operatorname{Cov}(\Phi_1, \Phi_0) /\sigma(\Phi_1)/\sigma(\Phi_0)$). \citet{jacot_neural_2018} predicts that the NTK should change very little during training in the infinite-width limit.
At first glance, it might seem that these expectations are hardly met for our (non-infinite) experiments. Figure \ref{fig:ntk_a} and Figure \ref{fig:ntk_c} show that the relative change in the NTK during training (and also the magnitude of the NTK) is rapidly increasing with width and remains large in magnitude for a whole range of widths of convolutional architectures. The MLP architectures do show a trend toward small changes in the NTK, yet convergence to zero is slower in the 4-Layer case than in the 2-Layer case.
However, a closer look shows that almost all of the relative change in the NTK seen in Figure \ref{fig:ntk_c} is explained by a simple linear re-scaling of the NTK. It should be noted that the scaling of the NTK is strongly affected by the magnitude of parameters at initialization. Within the NTK theory of \citet{lee_deep_2017}, a linear rescaling of the NTK during training corresponds simply to a change in learning rate, and so it makes more sense to measure similarity using a scale-invariant metric.
Measuring similarity between sub-sampled NTKs using the scale-invariant correlation coefficient, as in Figure \ref{fig:ntk_b}, is more promising. Surprisingly, we find that, as predicted in \citet{jacot_neural_2018}, the NTK changes very little (beyond a linear rescaling) for the wide ConvNet architectures. For the dense networks, the predicted trend toward small changes in the NTK also holds for most of the evaluated widths, although there is a dropoff at the end which may be an artifact of the difficulty of training these wide networks on CIFAR-10. For the Wide Residual Neural Networks, however, the general trend toward higher correlation in the wide network limit is completely reversed. The correlation coefficient decreases as network width increases, suggesting that the neural tangent kernel at initialization and after training becomes qualitatively more different as network width increases. The reversal of the correlation trend seems to be a property which emerges from the interaction of batch normalization and skip connections. Removing either of these features from the architecture leads to networks which have an almost constant correlation coefficient for a wide range of network widths, see Figure \ref{fig:nskip} in the appendix, calling for the consideration of both properties in new formulations of the NTK.
In conclusion, we see that although the NTK trends towards stability as the width of simple architectures increases, the opposite holds for the highly performant Wide ResNet architecture. Even further, neither the removal of batch normalization nor the removal of skip connections fully recovers the positive NTK trend. While we have hope that kernel-based theories of neural networks may yield guarantees for realistic (albeit wide) models in the future, current results do not sufficiently describe state-of-the-art architectures. Moreover, the already good behavior of models with unstable NTKs is an indicator that good optimization and generalization behaviors do not fundamentally hinge on the stability of the NTK.
\section{Rank: Do networks with low-rank layers generalize better?}
State-of-the-art neural networks are highly over-parameterized, and their large number of parameters is a problem both for learning theory and for practical use.
In the theoretical setting, rank has been used to tighten bounds on the generalization gap of neural networks. Generalization bounds from \citet{vc-bound} are improved under conditions of low rank and high sparsity \citep{PACbayes} of parameter matrices, and the compressibility of low-rank matrices (and other low-dimensional structure) can be directly exploited to provide even stronger bounds \citep{arora2018stronger}.
Further studies show a tendency of stochastic gradient methods to find low-rank solutions \citep{layer_alignment}.
The tendency of SGD to find low-rank operators, in conjunction with results showing generalization bounds for low-rank operators, might suggest that the low-rank nature of these operators is important for generalization.
\par \citet{langenberg2019} claim that low-rank networks, in addition to generalizing well to test data, are more robust to adversarial attacks.
Theoretical and empirical results from the aforementioned paper lead the authors to make two major claims. First, the authors claim that networks which undergo adversarial training have low-rank and sparse matrices. Second, they claim that networks with low-rank and sparse parameter matrices are more robust to adversarial attacks. We find in our experiments that neither claim holds up in practical settings, including ResNet-18 models trained on CIFAR-10.
We test the generalization and robustness properties of neural networks with low-rank and high-rank operators by promoting low-rank or high-rank parameter matrices in late epochs. We employ the regularizer introduced in \citet{sedghi2018singular} to create the protocols RankMin, to find low-rank parameters, and RankMax, to find high-rank parameters. RankMin involves fine-tuning a pre-trained model by replacing linear operators with their low-rank approximations, retraining, and repeating this process. Similarly, RankMax involves fine-tuning a pre-trained model by clipping singular values from the SVD of parameter matrices in order to find high-rank approximations. We are able to manipulate the rank of matrices without strongly affecting the performance of the network. We use both natural training and 7-step projected gradient descent (PGD) adversarial training routines \citep{madry2017towards}. The goal of the experiment is to observe how the rank of weight matrices impacts generalization and robustness. We start by attacking naturally trained models with the standard PGD adversarial attack with $\epsilon = 8/255$. Then, we move to the adversarial training setting and test the effect of manipulating rank on generalization and on robustness.
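To make the fine-tuning step concrete, the following is a minimal sketch (ours, not the authors' released code) of single RankMin/RankMax projections on a dense weight matrix; for convolutional layers the singular values would instead be handled with the method of \citet{sedghi2018singular}.
\begin{verbatim}
import torch

@torch.no_grad()
def rank_min_project(W, k):
    # RankMin step: replace W by its best rank-k approximation.
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    S[k:] = 0.0
    return U @ torch.diag(S) @ Vh

@torch.no_grad()
def rank_max_project(W, cap):
    # RankMax step (one plausible reading of the singular-value
    # clipping described above): cap the largest singular values,
    # flattening the spectrum and raising the effective rank r(W).
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    return U @ torch.diag(S.clamp(max=cap)) @ Vh
\end{verbatim}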
In order to compare our results with \citet{langenberg2019}, we borrow the notion of effective rank, denoted by $r(W)$ for some matrix $W$.
This continuous relaxation of rank is defined as $r(W) = \frac{\|W\|_*}{\|W\|_F}$,
where $\|\cdot\|_*$ and $\|\cdot\|_F$ are the nuclear norm and the Frobenius norm, respectively. Note that the singular values of convolution operators can be found quickly with a method from \citet{sedghi2018singular}, and that method is used here.
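For a plain weight matrix, the effective rank can be computed directly (a short sketch under that assumption):
\begin{verbatim}
import torch

def effective_rank(W):
    # r(W) = ||W||_* / ||W||_F: nuclear norm over Frobenius norm.
    s = torch.linalg.svdvals(W)
    return (s.sum() / torch.sqrt((s ** 2).sum())).item()
\end{verbatim}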
\begin{table}
\centering
\begin{tabular}{P{20mm}|P{20mm}|P{20mm}|P{20mm}|P{20mm}}
\multicolumn{5}{c}{}\\
Model & Training method & Clean Test \newline Accuracy (\%) & Robust (\%) \newline $\epsilon = 8/255$ & Robust (\%) \newline$\epsilon = 1/255$\\
\hline
\hline
ResNet-18 & Natural & 94.66 & 0.00 & 31.98\\
& RankMax & 93.66 & 0.00 & 22.01 \\
& RankMin & 94.44 & 0.00 & 31.53 \\
\cline{2-5}
& Adversarial & 79.37 & 35.38 & 74.27\\
& RankMaxAdv & 80.00 & 35.55 & 74.92\\
& RankMinAdv & 78.34 & 33.68 & 73.19 \\
\hline
ResNet-18 & Natural & 92.95 & 0.01 & 31.34\\
w/o skips & RankMax & 91.71 & 0.00 & 18.81 \\
& RankMin & 92.42 & 0.00 & 30.37 \\
\cline{2-5}
& Adversarial& 79.57 & 35.95 & 74.88\\
& RankMaxAdv & 79.43 & 36.45 & 74.87\\
& RankMinAdv & 78.52 & 33.97 & 73.64 \\
\multicolumn{5}{c}{}\\
\end{tabular}
\caption{Results presented here are from experiments with CIFAR-10 data and two of the architectures we studied. Robust accuracy is measured with 20-step PGD attacks with the $\epsilon$ values specified at the top of the column.}
\label{tab:rank}
\end{table}
\begin{figure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{images/rank_plot_natural.pdf}
\caption{Effective rank of naturally trained models.}
\label{fig:1}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{images/rank_plot_adv.pdf}
\caption{Effective rank of adversarially trained models.}
\label{fig:2}
\end{subfigure}
\caption{This plot shows the effective rank of each filter for the ResNet-18 models. The filters are indexed on the $x$-axis, so moving to the right is like moving through the layers of the network. Our routines designed to manipulate the rank have exactly the desired effect as shown here.}
\label{rank_plot}
\end{figure}
In our experiments we investigate two architectures, ResNet-18 and ResNet-18 without skip connections. We train on CIFAR-10 and CIFAR-100, both naturally and adversarially. Table \ref{tab:rank} shows that RankMin and RankMax achieve similar generalization on CIFAR-10. More importantly, when adversarially training, a setting in which robustness is undeniably the goal, we see that RankMax outperforms both RankMin \emph{and} standard adversarial training in robust accuracy. Figure \ref{rank_plot} confirms that these two training routines do, in fact, control effective rank. Experiments with CIFAR-100 yield similar results and are presented in Appendix \ref{appendix:rankMinMax}. It is clear that increasing rank using an analogue of rank minimizing algorithms does not harm performance. Moreover, we observe that adversarial robustness does not imply low-rank operators, nor do low-rank operators imply robustness. The findings in \citet{layer_alignment} are corroborated here, as the black dots in Figure \ref{rank_plot} show that initializations are higher in rank than the trained models. Our investigation into what useful intuition can be gained from theoretical work on the rank of CNNs, and from the claims about adversarial robustness, reveals that rank plays little to no role in the performance of CNNs in the practical setting of image classification.
\section{Conclusion}
This work highlights the gap between deep learning theory and observations in the real-world setting. We underscore the need to carefully examine the assumptions of theory and to move past the study of toy models, such as deep linear networks or single-layer MLPs, whose traits do not describe those of the practical realm. First, we show that realistic neural networks on realistic learning problems contain suboptimal local minima. Second, we show that low-norm parameters may not be optimal for neural networks, and in fact, biasing parameters to a non-zero norm during training improves performance on several popular datasets and a wide range of networks. Third, we show that the wide-network trends in the neural tangent kernel do not hold for ResNets and that the interaction between skip connections and batch normalization plays a large role. Finally, we show that low-rank linear operators and robustness are not correlated, especially for adversarially trained models.
\section*{Acknowledgments}
This work was supported by the AFOSR MURI Program, the National Science Foundation DMS directorate, and also the DARPA YFA and L2M programs. Additional funding was provided by the Sloan Foundation.
\section{Introduction}
In recent years, the scenario of entanglement entropy has attracted attention, in particular in the works of \cite{Takayanagi:2012kg,Bhattacharya:2019zkb,Bhattacharya:2017gzt,Bhattacharya:2012mi,Calabrese:2004eu,Holzhey:1994we,Ryu:2006ef,Ryu:2006bv,Susskind:1994sm,Chaturvedi:2016kbk,Tonni:2010pv,Mansoori:2015sit,Caputa:2013lfa,Blanco:2013joa,Susskind:2021esx,Kay:2016kbi,Park:2015afa,He:2014lfa}, where $S_{A}$ is the entropy for an observer who has access only to the subsystem $A$ and cannot receive any signal from $B$. The subsystem $B$ is analogous to the interior of the black hole horizon for an observer who is sitting on $A$, that is, outside the horizon. Although this analogy is not exact, the one-loop quantum correction to the BH entropy in the presence of matter fields is known to be equal to the entanglement entropy \cite{Susskind:1994sm}. This interesting relation provides an important hint for finding the holographic dual of the entanglement entropy. In the context of the AdS$_{3}$/CFT$_{2}$ correspondence presented by \cite{Takayanagi:2012kg,Calabrese:2004eu,Ryu:2006ef}, it is possible to calculate the entropy $S_{A}$ of a CFT$_{2}$ holographically --- see Fig.~\ref{3.0}. This entropy is calculated as follows:
\begin{eqnarray}
S_{A}=Min_{\Sigma_{A}}\left[\frac{Area(\Sigma_{A})}{4G_{N}}\right]\label{ES}
\end{eqnarray}
Note that $\Sigma_{A}$ represents a codimension-two bulk surface satisfying $\partial\Sigma_{A}=\partial A$, since $\Sigma_{A}$ is the bulk counterpart of $A$. In addition, the minimization in (\ref{ES}) is taken over all such surfaces $\Sigma_{A}$, and the minimum is attained on a minimal surface. Equation (\ref{ES}) applies to any static configuration; since the minimal surface is well defined in the static case, it is possible to work equivalently in Euclidean AdS space.
\begin{figure}[!ht]
\centerline{\includegraphics[scale=0.9]{f3.pdf}}
\caption{The calculation of holographic entanglement entropy.}\label{3.0}
\label{planohwkhz}
\end{figure}
An interesting feature of this formula is its striking similarity to the Bekenstein-Hawking (BH) entropy of black holes \cite{Takayanagi:2012kg,Calabrese:2004eu,Ryu:2006ef,Ryu:2006bv,Susskind:1994sm}. Motivated by the applications of \cite{Chaturvedi:2016kbk}, we propose a scenario of entanglement entropy in Horndeski gravity in which we consider analytically the entanglement
entropy of the subsystem $A$ in the ($2+1$)-dimensional boundary field theory. In our case we consider the duality AdS$_{4}$/CFT$_{3}$, where on the gravity side we have a planar black hole solution in Horndeski gravity, from which we extract the length and the area integral for the subsystem $A$.
The main motivation to investigate the role played by Horndeski gravity in the entanglement entropy scenario comes from the recent investigations of the AdS/CFT correspondence in this gravity \cite{Santos:2020egn,Brito:2019ose,Santos:2021orr,Jiang:2017imk,Baggioli:2017ojd,Liu:2018hzo,Li:2018kqp,Li:2018rgn,Feng:2015oea,Caceres:2017lbr,Hajian:2020dcq,MohammadiMozaffar:2016vpf}. Besides, beyond the classes of boundary field theories that were studied by Ryu and Takayanagi \cite{Ryu:2006ef,Ryu:2006bv}, other classes were proposed using this conjecture, such as AdS black holes with dual charge in the bulk \cite{Tonni:2010pv,Mansoori:2015sit,Caputa:2013lfa}. In our case, we compute the entanglement entropy in AdS$_{4}$/CFT$_{3}$ within Horndeski gravity for spherical and planar black holes. For this boundary field theory at finite temperature, we address the issues related to entanglement thermodynamics within Horndeski gravity in our setup; for more discussion see \cite{Blanco:2013joa,Park:2015afa,He:2014lfa,Chaturvedi:2016kbk}. To describe the excited states, we compute the stress-energy tensor of the boundary field theory in Horndeski gravity following the prescription of \cite{Santos:2021orr,Balasubramanian:1999re}.
This work is organized as follows. In Sec.~\ref{v0} we address the issue of finding black hole solutions in Horndeski gravity. In Sec.~\ref{v1} we present the entanglement entropy. In Sec.~\ref{v2} we compute the entanglement thermodynamics in Horndeski gravity. Finally, we present our conclusions.
\section{Black hole solutions in Horndeski gravity}\label{v0}
In this section we address the issue of finding black hole solutions in Horndeski gravity \cite{Santos:2020egn,Brito:2019ose,Santos:2021orr,Horndeski:1974wa,Santos:2020xox,Cisterna:2014nua,Bravo-Gaete:2014haa,Anabalon:2013oea}. Black holes in Horndeski's theory have been previously studied in \cite{Santos:2020egn,Brito:2019ose,Santos:2021orr,Santos:2020xox,Cisterna:2014nua,Anabalon:2013oea}.
The Horndeski Lagrangian is given by
\begin{eqnarray}
&&\mathcal{L}_{H}=\mathcal{L}_{2}+\mathcal{L}_{3}+\mathcal{L}_{4}+\mathcal{L}_{5},\\
&&\mathcal{L}_{2}=G_{2}(X,\phi),\\
&&\mathcal{L}_{3}=-G_{3}(X,\phi)\Box\phi,\\
&&\mathcal{L}_{4}=G_{4}(X,\phi)R+\partial_{X}G_{4}(X,\phi)\delta^{\mu\nu}_{\alpha\beta}\nabla^{\alpha}_{\mu}\phi\nabla^{\beta}_{\nu}\phi,\\
&&\mathcal{L}_{5}=G_{5}(X,\phi)G_{\mu\nu}\nabla^{\mu}\nabla^{\nu}\phi\nonumber\\
&&-\frac{1}{6}\partial_{X}G_{5}(X,\phi)\delta^{\mu\nu\rho}_{\alpha\beta\gamma}\nabla^{\alpha}_{\mu}\phi\nabla^{\beta}_{\nu}\phi\nabla^{\gamma}_{\rho}\phi,
\end{eqnarray}
where $X\equiv -\frac{1}{2}\nabla_{\mu}\phi\nabla^{\mu}\phi$. Furthermore, an interesting special truncation of this theory was presented in \cite{Charmousis:2011bf,Charmousis:2011ea,Starobinsky:2016kua,Bruneton:2012zk}, where the idea is to constrain the coefficients $G_{k}(X,\phi)$. Through this truncation, and considering the non-minimal kinetic coupling, we have
\begin{eqnarray}
&&I[g_{\mu\nu},\phi]=\int{\sqrt{-g}d^{4}x\mathcal{L}}.\label{EH}\\
&&\mathcal{L}=\kappa(R-2\Lambda)-\frac{1}{2}(\alpha g_{\mu\nu}-\gamma G_{\mu\nu})\nabla^{\mu}\phi\nabla^{\nu}\phi\nonumber
\end{eqnarray}
Here, in the action (\ref{EH}), $\kappa=(16\pi G)^{-1}$. This action has a non-minimal scalar-tensor coupling, and we can define a new field $\phi^{'}\equiv\psi$, which has a dimension of $(mass)^{2}$. The coupling is controlled by the parameters $\alpha$ and $\gamma$, where $\alpha$ is dimensionless and $\gamma$ has a dimension of $(mass)^{-2}$. The equations of motion are:
\begin{equation}
G_{\mu\nu}+\Lambda g_{\mu\nu}=\frac{1}{2\kappa}T_{\mu\nu},\label{EH1}
\end{equation}
where $T_{\mu\nu}=\alpha T^{(1)}_{\mu\nu}+\gamma T^{(2)}_{\mu\nu}$. The energy-momentum tensors $T^{(1)}_{\mu\nu}$ and $T^{(2)}_{\mu\nu}$ take the following form
\begin{eqnarray}
&&T^{(1)}_{\mu\nu}=\nabla_{\mu}\phi\nabla_{\nu}\phi-\frac{1}{2}g_{\mu\nu}\nabla_{\lambda}\phi\nabla^{\lambda}\phi,\nonumber\\
&&T^{(2)}_{\mu\nu}=\frac{1}{2}\nabla_{\mu}\phi\nabla_{\nu}\phi R-2\nabla_{\lambda}\phi\nabla_{(\mu}\phi R^{\lambda}_{\nu)}-\nabla^{\lambda}\phi\nabla^{\rho}\phi R_{\mu\lambda\nu\rho}\nonumber\\
&&-g_{\mu\nu}\left[-\frac{1}{2}(\nabla^{\lambda}\nabla^{\rho}\phi)(\nabla_{\lambda}\nabla_{\rho}\phi)+\frac{1}{2}(\Box\phi)^{2}-(\nabla_{\lambda}\phi\nabla_{\rho}\phi)R^{\lambda\rho}\right]\nonumber\\
&&-(\nabla_{\mu}\nabla^{\lambda}\phi)(\nabla_{\nu}\nabla_{\lambda}\phi)+(\nabla_{\mu}\nabla_{\nu}\phi)\Box\phi+\frac{1}{2}G_{\mu\nu}(\nabla\phi)^{2}.\label{g}
\end{eqnarray}
And the scalar field equation is given by
\begin{equation}
\nabla_{\mu}[(\alpha g^{\mu\nu}-\gamma G^{\mu\nu})\nabla_{\nu}\phi]=0.\label{EH2}
\end{equation}
In our case for Einstein-Horndeski gravity, we consider the following {\sl Ansatz} for a four-dimensional black hole of the form
\begin{equation}
ds^{2}=\mathcal{R}^{2}\left(-r^{2}f(r)dt^{2}+r^{2}(dx^{2}+dy^{2})+\frac{dr^{2}}{r^{2}f(r)}\right).\label{me}
\end{equation}
Now, following the results of \cite{Santos:2020egn,Brito:2019ose,Santos:2021orr,Santos:2020xox,Cisterna:2014nua,Anabalon:2013oea}, we can find the black hole solution by imposing that the radial component of the conserved current vanishes identically, which does not restrict the radial dependence of the scalar field \cite{Hui:2012qt,Bravo-Gaete:2013dca,Babichev:2013cya}:
\begin{equation}
\alpha g_{rr}-\gamma G_{rr}=0\label{0a}.
\end{equation}
Taking $\phi'(r)\equiv \psi(r)$, we can easily note that this condition annihilates $\psi^2(r)$ regardless of its behavior at the horizon. Now, using equation (\ref{0a}), the metric function $f(r)$ can be found as follows:
\begin{eqnarray}
f(r)&=&\frac{\alpha \mathcal{R}^{2}}{3\gamma}-\left(\frac{r_{h}}{r}\right)^{3},\label{scalar.1}\\
\psi^{2}(r)&=&-\frac{2\mathcal{R}^{2}\kappa(\alpha+\gamma\Lambda)}{\alpha\gamma r^{2}f(r)},\label{scalar.2}
\end{eqnarray}
The equations of motion (\ref{EH1}) are satisfied by these expressions. The solution (\ref{scalar.1}) corresponds to a black hole in an asymptotically AdS$_4$ spacetime \cite{Anabalon:2013oea}. Moreover, with $\Lambda=-3/\mathcal{R}^{2}$ and following the analyses of \cite{Santos:2020egn,Brito:2019ose,Santos:2021orr} in our prescription, we can write the black hole solution as
\begin{eqnarray}
f(r)&=&-\frac{\alpha}{\gamma\Lambda}-\left(\frac{r_{h}}{r}\right)^{3},\label{il}\\
\psi^{2}(r)&=&\frac{6\kappa(\alpha+\gamma\Lambda)}{\alpha\gamma\Lambda r^{2}f(r)},\label{3w}
\end{eqnarray}
We note that the parameters are defined in the range $-\infty<\alpha/\gamma\Lambda\leq-1$, with $\alpha,\gamma<0$, or $-1\leq\alpha/\gamma\Lambda<0$, with $\alpha,\gamma>0$. Furthermore, the surface located at $r=r_{h}$ is infinitely redshifted with respect to an asymptotic observer. On the other hand, we can see from equation (\ref{3w}) that $(\alpha+\gamma\Lambda)>0$ indicates ghost freedom. Thus, stability requires that $(\alpha+\gamma\Lambda)$ be non-negative, which leads to an interval of the form $-\infty<\gamma\leq\alpha/(-\Lambda)$. We now apply the following rescalings to the metric (\ref{me}) with the solution (\ref{il}):
\begin{eqnarray}
&&f(r)\to-\frac{\alpha}{\gamma\Lambda}{f}(r),r_h^3\to-\frac{\alpha}{\gamma\Lambda}\;r_h^{3},\nonumber\\
&&\mathcal{R}\to\left(-\frac{{\alpha}}{\gamma\Lambda}\right)^{1/2}\mathcal{R},t\to-\frac{\gamma\Lambda}{\alpha}t,\nonumber\\
&&x\to\left(-\frac{{\gamma\Lambda}}{\alpha}\right)^{1/2}x,y\to\left(-\frac{{\gamma\Lambda}}{\alpha}\right)^{1/2}y,
\end{eqnarray}
in order to put the black hole solution in the standard form
\begin{eqnarray}
{f}(r)&=&1-\left(\frac{r_{h}}{r}\right)^{3},\label{il2}\\
\psi^{2}(r)&=&-\frac{6\kappa(\alpha+\gamma\Lambda)}{\alpha^2 r^{2}{f}(r)}.\label{3w2}
\end{eqnarray}
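One can verify directly (this check is ours) that the rescalings above map (\ref{il}) into (\ref{il2}): substituting $f\to-\frac{\alpha}{\gamma\Lambda}f$ and $r_h^3\to-\frac{\alpha}{\gamma\Lambda}r_h^{3}$ into (\ref{il}) gives
\begin{eqnarray}
-\frac{\alpha}{\gamma\Lambda}f(r)=-\frac{\alpha}{\gamma\Lambda}+\frac{\alpha}{\gamma\Lambda}\left(\frac{r_{h}}{r}\right)^{3}
~~\Longrightarrow~~
f(r)=1-\left(\frac{r_{h}}{r}\right)^{3},\nonumber
\end{eqnarray}
and the same substitution in (\ref{3w}) produces the $\alpha^{2}$ in the denominator of (\ref{3w2}).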
In the limit $r\to\infty$ we have $\psi^{2}\to0$ in the action (\ref{EH}), showing that this is a genuine vacuum solution. Equations (\ref{il2})-(\ref{3w2}) show that the black hole geometry is regular everywhere (except at the central singularity); the scalar field derivative $\psi(r)$ diverges at the horizon \cite{Anabalon:2013oea,Feng:2015oea,Babichev:2013cya}, but the scalar field itself does not blow up at the horizon, since it approaches a constant there:
\begin{eqnarray}
\phi^{2}(r)\sim\left((2\Lambda \mathcal{R}^2(\alpha+\gamma\Lambda)/\alpha^{2}r^{2}_{h}f^{'}(r_{h}))(r-r_{h})\right)+const.
\end{eqnarray}
Thus, we are in agreement with the no-hair theorem; such discussions were presented in \cite{Babichev:2013cya}. An interesting feature is that the scalar field given by (\ref{3w2}) is real outside the horizon, where $f(r>r_{h})>0$, provided the parameters lie in the interval $-1<\alpha/\gamma\Lambda<0$ with $\alpha, \gamma>0$. Besides this analysis, at infinity the scalar field itself diverges as $\phi(r)\sim \ln{r}$, but not its derivative $\psi$, which is the quantity that appears in the action (\ref{EH}) and remains finite at asymptotic infinity \cite{Babichev:2013cya}.
In fact, the appearance of a black hole with a flat horizon $\mathbb{R}^{2}$ leads to IR physics that corresponds to placing the scale-invariant theory at finite temperature. The temperature of the black hole is given by
\begin{eqnarray}
T(r_{h})=\frac{f^{'}(r=r_{h})}{4\pi}=\frac{3}{4\pi r_{h}}\label{10}
\end{eqnarray}
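As a quick check (ours), the temperature (\ref{10}) follows directly from the standard-form metric function (\ref{il2}); a short SymPy sketch:
\begin{verbatim}
# Sketch: T = f'(r_h)/(4*pi) for f(r) = 1 - (r_h/r)^3,
# cf. eqs. (il2) and (10).
import sympy as sp

r, rh = sp.symbols('r r_h', positive=True)
f = 1 - (rh / r)**3
T = sp.diff(f, r).subs(r, rh) / (4 * sp.pi)
print(sp.simplify(T - 3 / (4 * sp.pi * rh)))   # expected output: 0
\end{verbatim}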
\section{Holographic Entanglement Entropy in Horndeski Gravity}\label{v1}
In this section, we present the computation of the entanglement entropy of a subsystem $A$ in Horndeski gravity, following the procedures of \cite{Takayanagi:2012kg,Calabrese:2004eu,Ryu:2006ef,Ryu:2006bv,Susskind:1994sm}. Considering the metric (\ref{me}), the three-dimensional CFT lives in the space spanned by $t$ and $x$. Thus, we can choose the subsystem $A$ (which contains the two charges) to be the interval of length $l$, $x\in[-l/2,l/2]$, $y\in[-L/2,L/2]$, in the infinitely long total space $-\infty<x<\infty$; see Fig.~\ref{SCHEM}.
\begin{figure}[!ht]
\centerline{\includegraphics[scale=0.5]{f4.pdf}}
\caption{The figure shows the schematic of an extremal surface, where $l$ is the length of the subsystem $A$, which is anchored on the subsystem living on the boundary.}\label{SCHEM}
\label{planohwkhz}
\end{figure}
Now we provide the entanglement entropy for Horndeski gravity, following step by step \cite{Caceres:2017lbr,Hajian:2020dcq} and motivated by the recent studies of \cite{Santos:2021orr}. For the action (\ref{EH}), the action with boundary counterterms is given by:
\begin{eqnarray}
&&I_{E}=I_{bulk}=-\int{\sqrt{g}d^{4}x\mathcal{L}}-2\kappa\int{d^{3}x\sqrt{\bar{\gamma}}\mathcal{L}_{b}}+2\kappa\int{d^{3}x\sqrt{\bar{\gamma}}\mathcal{L}_{ct}},\label{T1}\\
&&\mathcal{L}=\kappa(R-2\Lambda)+\frac{\gamma}{2}G_{\mu\nu}\nabla^{\mu}\phi\nabla^{\nu}\phi\label{T2}\\
&&\mathcal{L}_{b}=K^{({\bar{\gamma}})}-\Sigma^{(\bar{\gamma})}+\frac{\gamma}{4}\left(\nabla_{\mu}\phi\nabla_{\nu}\phi\, n^{\mu}n^{\nu}-(\nabla\phi)^{2}\right)K^{(\bar{\gamma})}+\frac{\gamma}{4}\nabla^{\mu}\phi\nabla^{\nu}\phi K^{(\bar{\gamma})}_{\mu\nu}\label{T3}\\
&&{\cal L}_{ct}=c_{0}+c_{1}R+c_{2}R^{ij}R_{ij}+c_{3}R^{2}+b_{1}(\partial_{i}\phi\partial^{i}\phi)^{2}\label{T4}
\end{eqnarray}
$\mathcal{L}_{b}$ corresponds to the Gibbons-Hawking $\gamma$-dependent terms associated with Horndeski gravity, where $n^{\mu}$ is an outward-pointing unit normal vector to the boundary, $K^{(\bar{\gamma})}=\bar{\gamma}^{\mu\nu}K^{({\bar{\gamma}})}_{\mu\nu}$ is the trace of the extrinsic curvature, and $\bar{\gamma}_{\mu\nu}$ is the induced metric on the boundary $r\to\infty$. The Lagrangian ${\cal L}_{ct}$ collects the boundary counterterms; they do not affect the bulk dynamics and will be neglected \cite{Balasubramanian:1999re}. Thus, the induced metric associated with (\ref{me}) is written as
%
\begin{eqnarray}
ds^{2}_{ind}=\bar{\gamma}_{\mu\nu}dx^{\mu}dx^{\nu}=\mathcal{R}^{2}\left(r^{2}f(r)d\tau^{2}+r^{2}(dx^{2}+dy^{2})+\frac{dr^{2}}{r^{2}f(r)}\right),\label{T5}
\end{eqnarray}
With the above, the Ryu-Takayanagi formula \cite{Ryu:2006ef,Ryu:2006bv} is given by:
\begin{eqnarray}
&&S_{A}=\frac{\mathcal{A}}{4G_{N}}\label{ES01},\\
&&\mathcal{A}=\int{ds_{ind}\chi},\\
&&\chi=1-2\gamma(\bar{\gamma}^{\lambda\sigma}\nabla_{\lambda}\phi\nabla_{\sigma}\phi).
\end{eqnarray}
Here this entropy is equivalent to the cases of \cite{Feng:2015oea,Caceres:2017lbr}. For AdS$_{4}$/CFT$_{3}$, the minimal surface is given by the geodesic line in AdS$_{4}$. That is, following the prescription of \cite{Chaturvedi:2016kbk}, the area of the surface anchored on the boundary of the subsystem $A$ can be expressed as:
\begin{eqnarray}
\mathcal{A}=2\chi\mathcal{R}Lr_{c}\int^{1}_{0}{\frac{du}{u^{2}\sqrt{(1-u^{4})f(u)}}};\quad f(u)=1-\left(\frac{r_{h}u}{r_{c}}\right)^{3},\label{ES1}
\end{eqnarray}
where $u=r_{c}/r$. In our setup for holographic entanglement entropy in Horndeski gravity, $r_{c}$ is a constant of integration, which represents the turning point of the extremal surface in the higher-dimensional AdS$_4$ bulk spacetime; see Fig.~\ref{SCHEM}. We can see through equation (\ref{ES1}) that the area integral is divergent at the point $u=1$ and must be regularized by introducing an infrared cutoff ($r_{b}$). From the point of view of the holographic dictionary, the UV cutoff of the boundary field theory ($\epsilon$) is related to the bulk IR cutoff; this relation is inverse through the AdS length scale $\mathcal{R}$ and can be established as $r_{b}=\mathcal{R}/\epsilon$. The finite part of the entanglement entropy can then be used to study the high- and low-temperature behavior of the boundary field theory dual to the black hole:
\begin{eqnarray}
&&S^{finite}_{A}=S_{A}-S^{divergent}_{A}=\frac{\mathcal{A}^{finite}}{4G_{N}}\chi.\label{ES02}\\
&&\chi=1+\frac{12\kappa\gamma(\alpha+\gamma\Lambda)}{\alpha^{2}\mathcal{R}^{2}}.
\end{eqnarray}
We can obtain the quantity $r_{c}$ by inverting the equation of motion
\begin{eqnarray}
\frac{l}{2}=\frac{1}{r_{c}}\int^{1}_{0}{\frac{u^{2}du}{\sqrt{(1-u^{4})f(u)}}}\label{ES2}
\end{eqnarray}
Since the horizon radius $r_{h}$ is very small, the black hole remains deep inside the bulk, far away from the extremal surface, namely $r_{h}\ll r_{c}$. In this limit we can perform a Taylor expansion: the quantity $1/\sqrt{f(u)}$ is expanded around $r_{h}/r_{c}=0$ as:
\begin{eqnarray}
\frac{1}{\sqrt{f(u)}}=1+\frac{1}{2}\left(\frac{r_{h}u}{r_{c}}\right)^{3}+\mathcal{O}\left[\left(\frac{r_{h}u}{r_{c}}\right)^{6}\right]
\end{eqnarray}
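This expansion can be reproduced symbolically; the following sketch (ours, with $x$ standing for $r_{h}/r_{c}$) also displays the next, sixth-order term:
\begin{verbatim}
# Sketch: expand 1/sqrt(1 - (x*u)^3) around x = r_h/r_c = 0.
import sympy as sp

u, x = sp.symbols('u x', positive=True)
print(sp.series(1 / sp.sqrt(1 - (x * u)**3), x, 0, 7))
# 1 + u**3*x**3/2 + 3*u**6*x**6/8 + O(x**7)
\end{verbatim}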
Now, replacing this expansion in equation (\ref{ES2}), we can write
\begin{eqnarray}
&&\frac{lr_{c}}{2}=\int^{1}_{0}{\frac{u^{2}du}{\sqrt{(1-u^{4})}}}+\frac{1}{2}\left(\frac{r_{h}}{r_{c}}\right)^{3}\int^{1}_{0}{\frac{u^{5}du}{\sqrt{(1-u^{4})}}}\label{ES3}\\
&&\frac{lr_{c}}{2}=\pi\left(\frac{r_{h}}{r_{c}}\right)^{3}+\frac{2\sqrt{\pi}\Gamma(3/4)}{\Gamma(1/4)}\label{ES4}
\end{eqnarray}
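As a numerical cross-check (ours) of the two integrals appearing in (\ref{ES3}), both reduce to Euler Beta functions; a short SciPy sketch:
\begin{verbatim}
# Sketch: int_0^1 u^2/sqrt(1-u^4) du = B(3/4,1/2)/4          and
#         int_0^1 u^5/sqrt(1-u^4) du = B(3/2,1/2)/4 = pi/8.
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

I1, _ = quad(lambda u: u**2 / np.sqrt(1.0 - u**4), 0.0, 1.0)
I2, _ = quad(lambda u: u**5 / np.sqrt(1.0 - u**4), 0.0, 1.0)
print(I1, beta(0.75, 0.5) / 4)   # ~0.59907 in both cases
print(I2, np.pi / 8)             # ~0.39270 in both cases
\end{verbatim}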
Solving equation (\ref{ES4}) for $r_{c}$ in terms of $r_{h}$ and $l$, we have
\begin{eqnarray}
r_{c}=\frac{2\pi\Gamma(3/4)}{l\Gamma(1/4)}+\frac{l^{2}\Gamma^{3}(3/4)}{16\sqrt{\pi}\Gamma^{3}(1/4)}\label{ES5}
\end{eqnarray}
For the extremal area, following the same steps we can write
\begin{eqnarray}
\mathcal{A}=2\chi\mathcal{R}Lr_{c}\left(\int^{1}_{0}{\frac{u^{2}du}{\sqrt{(1-u^{4})}}}+\frac{1}{2}\left(\frac{r_{h}}{r_{c}}\right)^{3}\int^{1}_{0}{\frac{udu}{\sqrt{(1-u^{4})}}}\right)\label{ES6}
\end{eqnarray}
Through equation (\ref{ES6}), the first integral is the same as in pure AdS, and it is divergent. For this reason we need to regularize it by introducing a UV cutoff $1/r_{b}$ and adding a counterterm ($-2\mathcal{R}Lr_{b}$); after these considerations, the finite part of the extremal area is
\begin{eqnarray}
&&\mathcal{A}^{finite}=2\chi\mathcal{R}Lr_{c}\left(\int^{1}_{0}{\frac{u^{2}du}{\sqrt{(1-u^{4})}}}+\frac{1}{2}\left(\frac{r_{h}}{r_{c}}\right)^{3}\int^{1}_{0}{\frac{udu}{\sqrt{(1-u^{4})}}}\right)-2\mathcal{R}Lr_{b}\label{ES7}\\
&&\mathcal{A}^{finite}=\chi\mathcal{R}Lr_{c}\left(\sqrt{\pi}\frac{\Gamma(-1/4)}{\Gamma(1/4)}+\pi\left(\frac{r_{h}}{r_{c}}\right)^{3}\right)\label{ES8}
\end{eqnarray}
Combining equation (\ref{ES8}) with (\ref{ES5}), the entanglement entropy is given by:
\begin{eqnarray}
S^{finite}_{A}=\frac{\chi\mathcal{R}L}{4lG_{N}}\left(-\frac{4\pi\Gamma^{2}(3/4)}{\Gamma^{2}(1/4)}+\frac{l^{3}r^{3}_{h}\Gamma^{2}(1/4)}{8\Gamma^{2}(3/4)}\right)\label{ES90}
\end{eqnarray}
For the case of extremal black holes we consider $r^{3}_{h}=\mathcal{M}_{ext}/4$, where $\mathcal{M}_{ext}$ is the mass of the extremal black hole; we have
\begin{eqnarray}
&&S^{finite}_{A}=[S^{AdS}_{A}+k\mathcal{M}_{ext}l^{2}L]\chi,\quad k=\frac{\mathcal{R}L}{32G_{N}}\frac{\Gamma^{2}(1/4)}{\Gamma^{2}(3/4)}\label{ES10}\\
&&S^{AdS}_{A}=-\frac{4\pi\mathcal{R}L}{4lG_{N}}\frac{\Gamma^{2}(3/4)}{\Gamma^{2}(1/4)}\label{ES11}
\end{eqnarray}
The results shown in equations (\ref{ES10}) and (\ref{ES11}) are similar to the case of \cite{Chaturvedi:2016kbk}, but in our case we have the presence of the Horndeski parameters, where $S^{AdS}_{A}$ is the entanglement entropy of the subsystem $A$ when the bulk theory is pure AdS, as described in \cite{Fischler:2012ca}. On the other hand, if $\gamma$ is large, the Horndeski contribution through the $\chi$-term in equation (\ref{ES10}) implies that $S^{finite}_{A}$ becomes large, so that we have more information about the subsystem $A$. Furthermore, at the critical point $\alpha=3\gamma/\mathcal{R}^{2}$ (for more discussion see \cite{Li:2018kqp}), the entropy in equation (\ref{ES10}) reduces to the usual case of \cite{Chaturvedi:2016kbk}, where the sub-leading correction term in equation (\ref{ES10}) becomes important in defining the first-law-like relation.
\subsection{Planar black hole}
We now address the issue of finding planar black hole solutions in Horndeski gravity. For this, we consider the following {\sl Ansatz} for Einstein-Horndeski gravity:
\begin{equation}
ds^{2}=-f(r)dt^{2}+r^{2}(dx^{2}+dy^{2})+\frac{dr^{2}}{f(r)}.\label{P}
\end{equation}
One can show that the equations (\ref{EH1}) and (\ref{EH2}) are satisfied by the following solution
\begin{eqnarray}
f(r)&=&\frac{\alpha r^{2}}{3\gamma}-\frac{r_{h}}{r},\label{P1}\\
\psi^{2}(r)&=&-\frac{2\kappa(\alpha+\gamma\Lambda)}{\alpha\gamma r^{2}}\frac{1}{f(r)}.\label{P2}
\end{eqnarray}
Following the previous procedures, we have
\begin{eqnarray}
&&\frac{l}{2}=\int^{1}_{0}{\frac{du}{u\sqrt{(1-u^{4})f(u)}}}; \quad f(u)=\frac{\alpha r^{2}_{c}}{3\gamma u^{2}}-\frac{r_{h}u}{r_{c}},\label{P3}\\
&&\mathcal{A}=2\chi Lr_{c}\int^{1}_{0}{\frac{du}{u^{2}\sqrt{(1-u^{4})f(u)}}}\label{P4}
\end{eqnarray}
Again through the expansion of $1/\sqrt{f(u)}$, we have
\begin{eqnarray}
\frac{1}{\sqrt{f(u)}}=\sqrt{\frac{3\gamma}{\alpha}}\frac{u}{r_{c}}+\sqrt{\left(\frac{3\gamma}{\alpha}\right)^{3}}\frac{u^{4}r_{h}}{r^{4}_{c}}\label{P5}
\end{eqnarray}
Now, we can write equations (\ref{P3}) and (\ref{P4}) as:
\begin{eqnarray}
&&\frac{l}{2}=\frac{1}{r_{c}}\sqrt{\frac{3\gamma}{\alpha}}\frac{\sqrt{\pi}\Gamma(5/4)}{\Gamma(3/4)}+\frac{r_{h}}{4r^{4}_{c}}\sqrt{\left(\frac{3\gamma}{\alpha}\right)^{3}},\label{P6}\\
&&\mathcal{A}^{finite}\approx \chi L\sqrt{\frac{3\gamma}{\alpha}}\ln\left(\frac{l}{r_{b}}\right)-\frac{3\sqrt{3\pi}l\chi L}{4}\sqrt{\frac{\gamma}{\alpha}}\frac{\Gamma(5/4)}{\Gamma(1/4)}\label{P7a}
\end{eqnarray}
Thus, the entanglement entropy for the planar black hole is
\begin{eqnarray}
S^{finite}_{A}=\frac{\mathcal{A}^{finite}}{4G_{N}}=\frac{\chi L}{4G_{N}}\sqrt{\frac{3\gamma}{\alpha}}\ln\left(\frac{l}{r_{b}}\right)-\frac{3\sqrt{\pi}l\chi L}{16G_{N}}\sqrt{\frac{3\gamma}{\alpha}}\frac{\Gamma(5/4)}{\Gamma(1/4)}\label{P7}
\end{eqnarray}
When we compare equation (\ref{P7}) with equation (\ref{ES10}), we see that the entanglement entropy of the subsystem $A$ has a logarithmic term \cite{Calabrese:2004eu,Holzhey:1994we} and a sub-leading correction with Horndeski parameters. Besides, if $\gamma$ is small, then $S^{finite}_{A}\to 0$ for the planar black hole and all information about the subsystem $A$ is destroyed, but not at the critical point $\alpha=3\gamma/\mathcal{R}^{2}$ \cite{Li:2018kqp}, where we have an area-law-like behavior corrected by a logarithmic factor. Such behavior is found in fermionic systems in the presence of a finite Fermi surface \cite{Gioev:2006zz,Wolf:2006zzb}.
\section{Entanglement thermodynamics in Horndeski gravity}\label{v2}
In this section, we present the "first law of entanglement thermodynamics". For this, we need of the stress-energy tensor of boundary field theory in Horndeski gravity. Through the renormalization procedure \cite{Balasubramanian:1999re} the form of stress-energy tensor $T_{\alpha\beta}$ can be write as:
\begin{eqnarray}
&&T_{\alpha\beta}=-\frac{r^{3}}{16\pi\mathcal{R}^{3} G_{N}}\left[K^{({\bar{\gamma}})}_{\alpha\beta}-\bar{\gamma}_{\alpha\beta}(K^{({\bar{\gamma}})}-\Sigma^{(\bar{\gamma})})+\frac{\gamma}{4}H_{\alpha\beta}-\kappa T^{R}_{\alpha\beta}-\kappa T^{ct}_{\alpha\beta}\right],\label{T6}\\
&&H_{\alpha\beta}=(\nabla_{\alpha}\phi\nabla_{\beta}\phi n^{\alpha}n^{\beta}-(\nabla\phi)^{2})(K^{({\bar{\gamma}})}_{\alpha\beta}-\bar{\gamma}_{\alpha\beta}K^{({\bar{\gamma}})})-(\nabla_{\alpha}\phi\nabla_{\beta}\phi)\bar{\gamma}_{\alpha\beta}K^{({\bar{\gamma}})}.\label{T7}
\end{eqnarray}
Here $T^{R}_{\alpha\beta}$ and $T^{ct}_{\alpha\beta}$ are possible contributions of the extrinsic curvature and the counterterm, respectively. However, fixing the energy-momentum tensor on the boundary with $T^{R}_{\alpha\beta}=T^{ct}_{\alpha\beta}=0$, we have
\begin{eqnarray}
T_{\alpha\beta}=-\frac{r^{3}}{16\pi\mathcal{R}^{3} G_{N}}\left[K^{({\bar{\gamma}})}_{\alpha\beta}-\bar{\gamma}_{\alpha\beta}(K^{({\bar{\gamma}})}-\Sigma^{(\bar{\gamma})})+\frac{\gamma}{4}H_{\alpha\beta}\right].\label{T8}
\end{eqnarray}
However, in order to obtain the ``first law of entanglement thermodynamics'' in Horndeski gravity, we need to compute the following quantities:
\begin{eqnarray}
&&\Delta S_{A}=\frac{\Delta E_{A}}{T_{en}},\label{T9}\\
&&\Delta E_{A}=\int_{A}{dxdyT^{Temp\neq 0}_{tt}}-\int_{A}{dxdyT^{Temp=0}_{tt}}.\label{T10}
\end{eqnarray}
One interesting physical observable is the mass, written as:
\begin{eqnarray}
\mathcal{M}=\int_{A}{dxdyT^{Temp\neq 0}_{tt}}.\label{Tmass}
\end{eqnarray}
With
\begin{eqnarray}
&&T_{tt}=-\chi\frac{r^{3}}{16\mathcal{R}^{3}\pi G_{N}}\left[K^{({\bar{\gamma}})}_{tt}-\bar{\gamma}_{tt}(K^{({\bar{\gamma}})}-\Sigma^{(\bar{\gamma})})\right],\label{T80}\\
&&T_{tt}=\chi\frac{r^{3}}{8\pi\mathcal{R}^{4}G_{N}}.\label{T81}
\end{eqnarray}
The result presented in equation (\ref{T81}) is the same as that of \cite{Balasubramanian:1999re}, where equation (\ref{T81}) corresponds to an excited state in the CFT$_{3}$ \cite{Bhattacharya:2012mi}. In this way, we can express the first law of entanglement thermodynamics as:
\begin{eqnarray}
&&\Delta E_{A}=\frac{l\chi L}{8\pi G_{N}}(\mathcal{M}-\mathcal{M}_{ext}),\label{T14}\\
&&T_{en}=\frac{\pi}{16l}\frac{\Gamma^{2}(3/4)}{\Gamma^{2}(1/4)}.\label{T15}
\end{eqnarray}
Here $T_{en}$ is the entanglement temperature \cite{Bhattacharya:2012mi,Chaturvedi:2016kbk}. This result is in perfect agreement with \cite{Blanco:2013joa}. However, the result shown in \cite{Bhattacharya:2012mi} differs from ours, because we consider a canonical ensemble where the ground state of the boundary field theory is dual to the extremal AdS$_{4}$ black hole in the bulk.
\subsection{Planar black hole}
Now, for the planar black hole, following the same steps as in the spherically symmetric case, we compute the increased amount of energy for this black hole. Thus, through equation (\ref{T8}), we can show that:
\begin{eqnarray}
T_{tt}=\frac{\chi}{16\pi\mathcal{R}G_{N}}\sqrt{\frac{3\gamma}{\alpha}}\left(\frac{4}{r}-\frac{1}{\mathcal{R}}\right).\label{T11}
\end{eqnarray}
Equation (\ref{T11}) shows that we need not make any assumptions about the infrared region $r\to\infty$ for this excited state in the CFT$_{3}$. Thus, objects such as black branes or stars can be found in the infrared region. The first has a horizon and corresponds to a thermal state, while the second has no horizon and is dual to a zero-temperature state \cite{Bhattacharya:2012mi}. Furthermore, the increased amount of energy in the subsystem $A$ can be written as
\begin{eqnarray}
&&\Delta E_{A}=\frac{\chi}{16\pi\mathcal{R}G_{N}}\sqrt{\frac{\alpha}{3\gamma}}\int_{A}{dydr\frac{4}{r}}-\frac{\chi}{16\pi\mathcal{R}^{2}G_{N}}\sqrt{\frac{\alpha}{3\gamma}}\int_{A}{dxdy},\\
&&\Delta E_{A}=\frac{\chi L}{4\pi\mathcal{R}G_{N}}\sqrt{\frac{3\gamma}{\alpha}}\ln\left(\frac{l}{r_{b}}\right)-\frac{l\chi L}{16\pi\mathcal{R}^{2}G_{N}}\sqrt{\frac{3\gamma}{\alpha}},\label{T12}\\
&&T_{en}=\frac{1}{3\sqrt{\pi}l}\frac{\Gamma(1/4)}{\Gamma(5/4)}.\label{T13}
\end{eqnarray}
where the result in equation (\ref{T12}) for the planar black hole involves the area of the boundary in the region $r\to\infty$. In summary, this result is very similar to equation (\ref{T14}). On the other hand, when we compare equation (\ref{T12}) with equation (\ref{P7}), we see that they are very similar; this convergence of results can be achieved through the renormalization procedure, which removes the logarithmic correction.
\section{Conclusion}\label{v3}
We have shown in four dimensions that the study of the entanglement entropy in Horndeski gravity, for planar and spherical topologies and for a strip-like region denoted by the subsystem $A$, which is the boundary conformal field theory dual to bulk black holes in an AdS$_{4}$/CFT$_{3}$ scenario, provides interesting results. For these two topologies, we find two interesting aspects of the holographic entanglement entropy. First, the entanglement entropy of the AdS black hole reduces to that of pure AdS if the kinetic coupling $\gamma$ is very small, that is, $\gamma\to 0$. This result shows behavior similar to studies of the AdS/CFT correspondence within Horndeski gravity that explore the thermodynamics of black holes, as for example \cite{Santos:2021orr,Feng:2015oea}. For $\gamma=0$ we recover the usual entropy of AdS space.
The second behavior of the entanglement entropy is very interesting, because the values of the Horndeski parameters impose limitations on the storage of information for planar black holes: if the entanglement entropy becomes null, that is, $S^{finite}_{A}\to 0$ when $\gamma\to 0$, the stored information is completely destroyed. At the critical point \cite{Li:2018kqp} where $\alpha=3\gamma/\mathcal{R}^{2}$, however, the entanglement entropy described by equation (\ref{P7}) preserves the stored information of the subsystem $A$, because it is not constrained by the parameters at this point.
Finally, for the ``first law of entanglement thermodynamics'' in Horndeski gravity, we have shown that the spherically symmetric case admits extremal black hole solutions; for this first topology, a large extremal black hole implies a large horizon radius. For the second topology, the increase of energy $\Delta E_{A}$ is constrained by the Horndeski parameters. This fact agrees with the Ryu-Takayanagi formula, as shown in (\ref{P7}). This first law of entanglement thermodynamics in Horndeski gravity in fact agrees with the results of \cite{Santos:2021orr,Feng:2015oea,Caceres:2017lbr,Hajian:2020dcq}.
\section*{Acknowledgment}
Fabiano F. Santos would like to thank CNPq and CAPES for partial financial support, and Mohammad Hassan Vahidinia for valuable comments and discussions.
\newpage
\makeatletter
\def\ps@pprintTitle{%
\let\@oddhead\@empty
\let\@evenhead\@empty
\def\@oddfoot{\centerline{\thepage}}%
\let\@evenfoot\@oddfoot}
\makeatother
\newcommand{\mathbb{C}}{\mathbb{C}}
\newcommand{\mathbb{F}}{\mathbb{F}}
\newcommand{\mathbb{H}}{\mathbb{H}}
\newcommand{\mathbb{K}}{\mathbb{K}}
\newcommand{\mathbb{Q}}{\mathbb{Q}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbb{T}}{\mathbb{T}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\mathbf{A}}{\mathbf{A}}
\newcommand{\mathbf{B}}{\mathbf{B}}
\newcommand{\mathbf{c}}{\mathbf{c}}
\newcommand{\mathbf{C}}{\mathbf{C}}
\newcommand{\mathbf{D}}{\mathbf{D}}
\newcommand{\mathbf{e}}{\mathbf{e}}
\newcommand{\mathbf{E}}{\mathbf{E}}
\newcommand{\mathbf{f}}{\mathbf{f}}
\newcommand{\mathbf{F}}{\mathbf{F}}
\newcommand{\mathbf{g}}{\mathbf{g}}
\newcommand{\mathbf{G}}{\mathbf{G}}
\newcommand{\mathbf{H}}{\mathbf{H}}
\newcommand{\mathbf{I}}{\mathbf{I}}
\newcommand{\mathbf{J}}{\mathbf{J}}
\newcommand{\mathbf{P}}{\mathbf{P}}
\newcommand{\mathbf{q}}{\mathbf{q}}
\newcommand{\mathbf{S}}{\mathbf{S}}
\newcommand{\mathbf{T}}{\mathbf{T}}
\newcommand{\mathbf{U}}{\mathbf{U}}
\newcommand{\mathbf{V}}{\mathbf{V}}
\newcommand{\mathbf{x}}{\mathbf{x}}
\newcommand{\mathbf{X}}{\mathbf{X}}
\newcommand{\mathbf{y}}{\mathbf{y}}
\newcommand{\mathbf{Z}}{\mathbf{Z}}
\newcommand{\mathbf{z}}{\mathbf{z}}
\newcommand{\boldsymbol{1}}{\boldsymbol{1}}
\newcommand{\boldsymbol{0}}{\boldsymbol{0}}
\newcommand{\boldsymbol{\delta}}{\boldsymbol{\delta}}
\newcommand{\boldsymbol{\Delta}}{\boldsymbol{\Delta}}
\newcommand{\boldsymbol{\gamma}}{\boldsymbol{\gamma}}
\newcommand{\boldsymbol{\Gamma}}{\boldsymbol{\Gamma}}
\newcommand{\boldsymbol{\chi}}{\boldsymbol{\chi}}
\newcommand{\boldsymbol{\varphi}}{\boldsymbol{\varphi}}
\newcommand{\boldsymbol{\Phi}}{\boldsymbol{\Phi}}
\newcommand{\boldsymbol{\psi}}{\boldsymbol{\psi}}
\newcommand{\boldsymbol{\Psi}}{\boldsymbol{\Psi}}
\newcommand{\boldsymbol{\Theta}}{\boldsymbol{\Theta}}
\newcommand{\boldsymbol{\Xi}}{\boldsymbol{\Xi}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{D}}{\mathcal{D}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{I}}{\mathcal{I}}
\newcommand{\mathcal{J}}{\mathcal{J}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{M}}{\mathcal{M}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathcal{R}}{\mathcal{R}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\mathcal{U}}{\mathcal{U}}
\newcommand{\mathcal{V}}{\mathcal{V}}
\newcommand{\mathcal{W}}{\mathcal{W}}
\newcommand{\mathcal{X}}{\mathcal{X}}
\newcommand{\mathcal{Y}}{\mathcal{Y}}
\newcommand{\mathrm{B}}{\mathrm{B}}
\newcommand{\mathrm{c}}{\mathrm{c}}
\newcommand{\mathrm{C}}{\mathrm{C}}
\newcommand{\mathrm{d}}{\mathrm{d}}
\newcommand{\mathrm{i}}{\mathrm{i}}
\newcommand{\mathrm{N}}{\mathrm{N}}
\newcommand{\mathrm{O}}{\mathrm{O}}
\newcommand{\mathrm{Q}}{\mathrm{Q}}
\newcommand{\mathrm{T}}{\mathrm{T}}
\newcommand{\operatorname{Tr}}{\operatorname{Tr}}
\newcommand{\operatorname{tr}}{\operatorname{tr}}
\newcommand{{\operatorname{Arf}}}{{\operatorname{Arf}}}
\newcommand{{\operatorname{opt}}}{{\operatorname{opt}}}
\newcommand{{\operatorname{sgn}}}{{\operatorname{sgn}}}
\newcommand{\operatorname{dist}}{\operatorname{dist}}
\newcommand{\operatorname{rank}}{\operatorname{rank}}
\newcommand{{\operatorname{BIBD}}}{{\operatorname{BIBD}}}
\newcommand{\operatorname{span}}{\operatorname{span}}
\newcommand{{\operatorname{spark}}}{{\operatorname{spark}}}
\newcommand{{\operatorname{SRG}}}{{\operatorname{SRG}}}
\newcommand{{\operatorname{ETF}}}{{\operatorname{ETF}}}
\newcommand{{\operatorname{ECTFF}}}{{\operatorname{ECTFF}}}
\newcommand{{\operatorname{EITFF}}}{{\operatorname{EITFF}}}
\newcommand{\conv}[2]{{#1}\ast{#2}}
\newcommand{\gen}[1]{\langle{#1}\rangle}
\newcommand{\mathrm{Fro}}{\mathrm{Fro}}
\newcommand{\mathrm{op}}{\mathrm{op}}
\newcommand{\abs}[1]{|{#1}|}
\newcommand{\bigabs}[1]{\bigl|{#1}\bigr|}
\newcommand{\Bigabs}[1]{\Bigl|{#1}\Bigr|}
\newcommand{\biggabs}[1]{\biggl|{#1}\biggr|}
\newcommand{\Biggabs}[1]{\Biggl|{#1}\Biggr|}
\newcommand{\paren}[1]{({#1})}
\newcommand{\bigparen}[1]{\bigl({#1}\bigr)}
\newcommand{\Bigparen}[1]{\Bigl({#1}\Bigr)}
\newcommand{\biggparen}[1]{\biggl({#1}\biggr)}
\newcommand{\Biggparen}[1]{\Biggl({#1}\Biggr)}
\newcommand{\bracket}[1]{[{#1}]}
\newcommand{\bigbracket}[1]{\bigl[{#1}\bigr]}
\newcommand{\Bigbracket}[1]{\Bigl[{#1}\Bigr]}
\newcommand{\biggbracket}[1]{\biggl[{#1}\biggr]}
\newcommand{\Biggbracket}[1]{\Biggl[{#1}\Biggr]}
\newcommand{\set}[1]{\{{#1}\}}
\newcommand{\bigset}[1]{\bigl\{{#1}\bigr\}}
\newcommand{\Bigset}[1]{\Bigl\{{#1}\Bigr\}}
\newcommand{\biggset}[1]{\biggl\{{#1}\biggr\}}
\newcommand{\Biggset}[1]{\Biggl\{{#1}\Biggr\}}
\newcommand{\norm}[1]{\|{#1}\|}
\newcommand{\bignorm}[1]{\bigl\|{#1}\bigr\|}
\newcommand{\Bignorm}[1]{\Bigl\|{#1}\Bigr\|}
\newcommand{\biggnorm}[1]{\biggl\|{#1}\biggr\|}
\newcommand{\Biggnorm}[1]{\Biggl\|{#1}\Biggr\|}
\newcommand{\ip}[2]{\langle{#1},{#2}\rangle}
\newcommand{\bigip}[2]{\bigl\langle{#1},{#2}\bigr\rangle}
\newcommand{\Bigip}[2]{\Bigl\langle{#1},{#2}\Bigr\rangle}
\newcommand{\biggip}[2]{\biggl\langle{#1},{#2}\biggr\rangle}
\newcommand{\Biggip}[2]{\Biggl\langle{#1},{#2}\Biggr\rangle}
\setlength{\arraycolsep}{2pt}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{conjecture}[theorem]{Conjecture}
\begin{document}
\begin{frontmatter}
\title{Grassmannian codes from paired difference sets}
\author[AFIT]{Matthew Fickus}
\ead{[email protected]}
\author[IA]{Joseph W.\ Iverson}
\author[SDSU]{John Jasper}
\author[CSU]{Emily J.\ King}
\address[AFIT]{Department of Mathematics and Statistics, Air Force Institute of Technology, Wright-Patterson AFB, OH 45433}
\address[IA]{Department of Mathematics, Iowa State University, Ames, IA 50011}
\address[SDSU]{Department of Mathematics and Statistics, South Dakota State University, Brookings, SD 57007}
\address[CSU]{Department of Mathematics, Colorado State University, Fort Collins, CO 80523}
\begin{abstract}
An equiangular tight frame (ETF) is a sequence of vectors in a Hilbert space that achieves equality in the Welch bound and so has minimal coherence.
More generally,
an equichordal tight fusion frame (ECTFF) is a sequence of equi-dimensional subspaces of a Hilbert space that achieves equality in Conway, Hardin and Sloane's simplex bound.
Every ECTFF is a type of optimal Grassmannian code,
that is, an optimal packing of equi-dimensional subspaces of a Hilbert space.
We construct ECTFFs by exploiting new relationships between known ETFs.
Harmonic ETFs equate to difference sets for finite abelian groups.
We say that a difference set for such a group is ``paired" with a difference set for its Pontryagin dual when the corresponding subsequence of its harmonic ETF happens to be an ETF for its span.
We show that every such pair yields an ECTFF.
We moreover construct an infinite family of paired difference sets using quadratic forms over the field of two elements.
Together this yields two infinite families of real ECTFFs.
\end{abstract}
\begin{keyword}
equiangular tight frame \sep difference set \sep quadratic form \sep symplectic form \MSC[2020] 42C15
\end{keyword}
\end{frontmatter}
\section{Introduction}
The \textit{chordal distance} between two $R$-dimensional subspaces $\mathcal{U}_1$ and $\mathcal{U}_2$ of a $D$-dimensional real or complex Hilbert space $\mathbb{H}$ is
$\operatorname{dist}(\mathcal{U}_1,\mathcal{U}_2)
:=2^{-\frac12}\norm{\mathbf{P}_1-\mathbf{P}_2}_\mathrm{Fro}
=[R-\operatorname{Tr}(\mathbf{P}_1\mathbf{P}_2)]^{\frac12}$
where $\mathbf{P}_1$ and $\mathbf{P}_2$ are their respective rank-$R$ orthogonal projection operators.
Conway, Hardin and Sloane~\cite{ConwayHS96} showed that the minimum pairwise chordal distance between the members of any sequence $\set{\mathcal{U}_n}_{n=1}^N$ of $R$-dimensional subspaces of $\mathbb{H}$ satisfies the \textit{simplex bound}:
\begin{equation}
\label{eq.simplex bound}
\smash{\min_{n_1\neq n_2}\operatorname{dist}(\mathcal{U}_{n_1},\mathcal{U}_{n_2})
\leq\bigbracket{\tfrac{R(D-R)}{D}\tfrac{N}{N-1}}^{\frac12}.}
\end{equation}
In modern parlance~\cite{KutyniokPCL09},
they further showed that such a sequence $\set{\mathcal{U}_n}_{n=1}^N$ achieves equality in~\eqref{eq.simplex bound} if and only if it is an \textit{equichordal tight fusion frame} (ECTFF) for $\mathbb{H}$,
namely when $\operatorname{dist}(\mathcal{U}_{n_1},\mathcal{U}_{n_2})$ is constant over all $n_1\neq n_2$ (equichordality) and $\sum_{n=1}^N\mathbf{P}_n=A\mathbf{I}$ for some $A>0$ (tightness).
When such an ECTFF for $\mathbb{H}$ exists it is thus an optimal \textit{Grassmannian code}, that is,
an optimal packing (with respect to the chordal distance) of $N$ points on the \textit{Grassmannian} (space) that consists of all $R$-dimensional subspaces of the $D$-dimensional space $\mathbb{H}$.
When $R=1$ the simplex bound~\eqref{eq.simplex bound} reduces to the \textit{Welch bound}~\cite{Welch74,StrohmerH03} on the \textit{coherence} of $N$ nonzero vectors $\set{\boldsymbol{\varphi}_n}_{n=1}^{N}$ in $\mathbb{H}$:
\begin{equation}
\label{eq.Welch bound}
\max_{n_1\neq n_2}
\tfrac{\abs{\ip{\boldsymbol{\varphi}_{n_1}}{\boldsymbol{\varphi}_{n_2}}}}{\norm{\boldsymbol{\varphi}_{n_1}}\norm{\boldsymbol{\varphi}_{n_2}}}
\geq\bigbracket{\tfrac{N-D}{D(N-1)}}^{\frac12}.
\end{equation}
In this case, an ECTFF for $\mathbb{H}$ equates to an \textit{equiangular tight frame} (ETF) for $\mathbb{H}$, namely to a sequence~$\set{\boldsymbol{\varphi}_n}_{n=1}^{N}$ of nonzero equal-norm vectors in $\mathbb{H}$ that achieves equality in~\eqref{eq.Welch bound}.
More generally an ECTFF for $\mathbb{H}$ will have minimal \textit{block coherence} $\max_{n_1\neq n_2}\norm{\mathbf{P}_{n_1}\mathbf{P}_{n_2}}_\mathrm{op}$ if its subspaces are \textit{equi-isoclinic}~\cite{LemmensS73b},
that is, satisfy $\mathbf{P}_{n_1}\mathbf{P}_{n_2}\mathbf{P}_{n_1}=\sigma^2\mathbf{P}_{n_1}$ for some $\sigma\geq0$ and all $n_1\neq n_2$~\cite{DhillonHST08}.
Such an ECTFF is called an \textit{equi-isoclinic tight fusion frame} (EITFF) for $\mathbb{H}$.
ETFs, ECTFFs and EITFFs arise in various applications,
including compressed sensing~\cite{EldarKB10,BajwaCM12,BandeiraFMW13,CalderbankTX15},
quantum information theory~\cite{Zauner99,RenesBSC04},
wireless communication~\cite{StrohmerH03,Bodmann07},
and algebraic coding theory~\cite{JasperMF14}.
Much of the related literature is devoted to the \textit{existence problem}:
for what $D$, $N$ and $R$ does there exist an ${\operatorname{ECTFF}}(D,N,R)$, that is, an ECTFF for a $D$-dimensional Hilbert space that consists of $N$ subspaces of dimension $R$?
Moreover, in such cases, when can these subspaces be chosen to be equi-isoclinic and/or real?
Most positive existence results involve explicit construction from some type of combinatorial design.
See~\cite{FickusM16} for a survey of known ${\operatorname{ETF}}(D,N)$ (i.e., ${\operatorname{ECTFF}}(D,N,1)$).
Several constructions of ${\operatorname{ECTFF}}(D,N,R)$ with $R>1$ are known.
Some of these actually yield ${\operatorname{EITFF}}(D,N,R)$:
one can tensor an ETF with an orthonormal basis (ONB)~\cite{LemmensS73b,CalderbankTX15,King21},
or convert a complex and/or quaternionic ETF into an EITFF over a subfield~\cite{Hoggar77,EtTaoui20,Waldron20},
or exploit a complex \textit{conference} matrix~\cite{EtTaoui18,BlokhuisBE18}.
Other methods yield ECTFFs that are not necessarily equi-isoclinic,
including constructions from
\textit{quadratic residues}~\cite{CalderbankHRSS99,ZhangG18}
and their generalizations~\cite{KocakN17},
\textit{balanced incomplete block designs} (BIBDs)~\cite{Zauner99,ZhangG18},
\textit{$2$-transitive groups}~\cite{Creignou08},
\textit{semiregular divisible difference sets}~\cite{King16} and more generally \textit{difference families}~\cite{FickusMW21},
\textit{Latin squares}~\cite{ZhangG18},
and chains of alternating \textit{Naimark} and \textit{spatial complements}~\cite{CasazzaFMWZ11,FickusMW21}.
Other examples have been found numerically, and some of these have been perfected~\cite{ConwayHS96,DhillonHST08,CohnKM16,FuchsHS17}.
See~\cite{BachocBC04} for connections between ECTFFs and $t$-designs for Grassmannians, and~\cite{BachocE13} for various generalizations of ECTFFs.
Our work here is inspired by some ideas from the recent literature.
It turns out that some ETFs contain others:
if $\set{\boldsymbol{\varphi}_n}_{n=1}^N$ is any ETF for $\mathbb{H}$ then any subsequence of it is equiangular and might, on rare occasion, be a tight frame for the subspace of $\mathbb{H}$ that it spans.
See~\cite{FickusMJ16,ApplebyBDF17,FickusJKM18} for instances of this phenomenon.
Moreover, such \textit{sub-ETFs} can yield ECTFFs.
For example,
when an ETF partitions into regular simplices their respective spans form an ECTFF~\cite{FickusJKM18}.
This applies to \textit{Steiner} ETFs~\cite{GoethalsS70,FickusMT12},
certain \textit{polyphase} ETFs~\cite{FickusJMPW19} as well as to several infinite families of \textit{harmonic ETFs}~\cite{FickusJKM18,FickusS20},
namely ETFs that arise by restricting the characters of a finite abelian group to a \textit{difference set}~\cite{Konig99,StrohmerH03,XiaZG05,DingF07}.
In this paper we construct ECTFFs by exploiting new relationships between known ETFs.
In the next section we review some known concepts and results that we will need later on.
In Section~3, we define when a difference set for a finite abelian group is \textit{paired} with a difference set for its Pontryagin dual (Definition~\ref{def.paired difference sets}).
We show that a harmonic ETF that arises from such a pair contains many overlapping, unitarily equivalent copies of a smaller ETF, and moreover that the spans of these copies form an ECTFF (Theorem~\ref{thm.ECTFF from PDS}).
In Section~4 we exploit quadratic forms over $\mathbb{F}_2$ to construct an infinite family of paired difference sets (Theorem~\ref{thm.infinite family}).
For every integer $M\geq 2$ this yields an ${\operatorname{ETF}}(2^{M-1}(2^M\pm 1),2^{2M})$ that contains many copies of an ${\operatorname{ETF}}(\frac13(2^{2M}-1),2^{M-1}(2^M\mp 1))$.
These ETFs are not new: they equate to known families of \textit{strongly regular graphs}~\cite{Brouwer07,Brouwer17} via the correspondences of~\cite{HolmesP04,Waldron09,BargGOY15,FickusJMPW18}.
Moreover, their parameters match those of certain known real Steiner and Tremain~\cite{FickusJMP18} ETFs (or their Naimark complements).
That said, the resulting real ${\operatorname{ECTFF}}(2^{M-1}(2^M\pm 1),2^{2M},\frac13(2^{2M}-1))$ seem to be new except in the $(D,N,R)=(6,16,5)$ case.
They are not equi-isoclinic.
We conclude in Section~5 with some open problems concerning the existence of paired difference sets.
\section{Preliminaries}
Let $\mathcal{N}$ be a finite set of cardinality $N>1$,
and let $\mathbb{H}$ be a Hilbert space over $\mathbb{F}$ (either $\mathbb{R}$ or $\mathbb{C}$) of dimension $D\geq 1$.
A sequence $\set{\mathcal{U}_n}_{n\in\mathcal{N}}$ of $R$-dimensional subspaces of $\mathbb{H}$ is a \textit{tight fusion frame} (TFF) for $\mathbb{H}$ if their projections $\set{\mathbf{P}_n}_{n\in\mathcal{N}}$ satisfy $\sum_{n\in\mathcal{N}}\mathbf{P}_n=A\mathbf{I}$ for some $A>0$.
This requires $A=\frac{NR}{D}\geq 1$ since $NR=\sum_{n\in\mathcal{N}}\operatorname{Tr}(\mathbf{P}_n)=\operatorname{Tr}(\sum_{n\in\mathcal{N}}\mathbf{P}_n)=\operatorname{Tr}(A\mathbf{I})=AD$ and $NR=\sum_{n\in\mathcal{N}}\operatorname{rank}(\mathbf{P}_n)\geq\operatorname{rank}(A\mathbf{I})=D$.
Thus, for any $R$-dimensional subspaces $\set{\mathcal{U}_n}_{n\in\mathcal{N}}$ of $\mathbb{H}$,
\begin{equation}
\label{eq.ECTFF derivation 1}
0\leq\tfrac1{N(N-1)}\operatorname{Tr}\Bigbracket{\Bigparen{\sum_{n\in\mathcal{N}}\mathbf{P}_n-\tfrac{NR}{D}\mathbf{I}}^2}
=\tfrac1{N(N-1)}\sum_{n_1\in\mathcal{N}}\sum_{n_2\neq n_1}
\operatorname{Tr}(\mathbf{P}_{n_1}\mathbf{P}_{n_2})-\tfrac{R(NR-D)}{D(N-1)},
\end{equation}
where equality holds if and only if $\set{\mathcal{U}_n}_{n\in\mathcal{N}}$ is a TFF for $\mathbb{H}$.
Each term $\operatorname{Tr}(\mathbf{P}_{n_1}\mathbf{P}_{n_2})$ is real since
\begin{equation*}
[\operatorname{dist}(\mathcal{U}_{n_1},\mathcal{U}_{n_2})]^2
=\tfrac12\norm{\mathbf{P}_{n_1}-\mathbf{P}_{n_2}}_\mathrm{Fro}^2
=\tfrac12\operatorname{Tr}[(\mathbf{P}_{n_1}-\mathbf{P}_{n_2})^2]
=R-\operatorname{Tr}(\mathbf{P}_{n_1}\mathbf{P}_{n_2}).
\end{equation*}
As such, we can rearrange and continue~\eqref{eq.ECTFF derivation 1} as
\begin{equation}
\label{eq.generalized Welch}
\tfrac{R(NR-D)}{D(N-1)}
\leq\tfrac1{N(N-1)}\sum_{n_1\in\mathcal{N}}\sum_{n_2\neq n_1}
\operatorname{Tr}(\mathbf{P}_{n_1}\mathbf{P}_{n_2})
\leq \max_{n_1\neq n_2}\operatorname{Tr}(\mathbf{P}_{n_1}\mathbf{P}_{n_2})
=R-\min_{n_1\neq n_2}[\operatorname{dist}(\mathcal{U}_{n_1},\mathcal{U}_{n_2})]^2.
\end{equation}
Equality holds throughout~\eqref{eq.generalized Welch} if and only if $\set{\mathcal{U}_n}_{n\in\mathcal{N}}$ is an ECTFF for $\mathbb{H}$,
namely a TFF that is also \textit{equichordal} in the sense that
$\operatorname{dist}(\mathcal{U}_{n_1},\mathcal{U}_{n_2})$ is constant over all $n_1\neq n_2$.
Rearranging~\eqref{eq.generalized Welch} gives the simplex bound~\eqref{eq.simplex bound}, which is called this since $\set{\mathcal{U}_n}_{n\in\mathcal{N}}$ is an ECTFF for $\mathbb{H}$ if and only if $\set{\mathbf{P}_n-\frac{R}{D}\mathbf{I}}_{n\in\mathcal{N}}$ is a regular simplex for its span in the real Hilbert space of traceless self-adjoint operators on $\mathbb{H}$, equipped with the Frobenius inner product~\cite{ConwayHS96}.
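To make the bound concrete, here is a small numerical illustration (entirely ours, not drawn from~\cite{ConwayHS96}) that checks~\eqref{eq.simplex bound} against randomly drawn subspaces:
\begin{verbatim}
# Sketch: verify the simplex bound for random R-dim subspaces of R^D.
import numpy as np

rng = np.random.default_rng(0)
D, N, R = 6, 4, 2
projs = []
for _ in range(N):
    Q, _ = np.linalg.qr(rng.standard_normal((D, R)))  # ONB of a subspace
    projs.append(Q @ Q.T)                             # rank-R projection
dists = [np.sqrt(R - np.trace(projs[i] @ projs[j]))
         for i in range(N) for j in range(i + 1, N)]
bound = np.sqrt((R * (D - R) / D) * (N / (N - 1)))
print(min(dists), "<=", bound)   # random packings obey the bound
\end{verbatim}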
In particular,
an ECTFF can only exist if $N\leq\mathrm{d}_\mathbb{F}(D)+1$ where $\mathrm{d}_\mathbb{F}(D)$ is the dimension of this space, namely
\begin{equation*}
\mathrm{d}_\mathbb{F}(D)=\left\{\begin{array}{cl}
\frac12(D-1)(D+2),&\ \mathbb{F}=\mathbb{R},\\
D^2-1,&\ \mathbb{F}=\mathbb{C}.
\end{array}\right.
\end{equation*}
This necessary condition on the existence of an ECTFF is often called \textit{Gerzon's bound}; see~\cite{CalderbankHRSS99,KocakN17,ZhangG18} for some examples of ECTFFs that achieve equality in it.
When it is violated, $\set{\mathbf{P}_n-\frac{R}{D}\mathbf{I}}_{n\in\mathcal{N}}$ cannot be mutually obtuse,
meaning there exists $n_1\neq n_2$ such that
\begin{equation*}
0\leq\ip{\mathbf{P}_{n_1}-\tfrac{R}{D}\mathbf{I}}{\mathbf{P}_{n_2}-\tfrac{R}{D}\mathbf{I}}_\mathrm{Fro}
=\operatorname{Tr}(\mathbf{P}_{n_1}\mathbf{P}_{n_2})-\tfrac{R^2}{D}
=\tfrac{R(D-R)}{D}-[\operatorname{dist}(\mathcal{U}_{n_1},\mathcal{U}_{n_2})]^2,
\end{equation*}
implying the \textit{orthoplex bound} of~\cite{ConwayHS96}, namely that
$\min_{n_1\neq n_2}\operatorname{dist}(\mathcal{U}_{n_1},\mathcal{U}_{n_2})
\leq\bigbracket{\tfrac{R(D-R)}{D}}^{\frac12}$;
see~\cite{KocakN17} for some recent constructions of sequences of subspaces that achieve equality in it.
Since both equichordality and tightness are preserved by both unitary transformations on $\mathbb{H}$ and bijections on $\mathcal{N}$,
the existence of an ECTFF depends only on the parameters $(D,N,R)$ and $\mathbb{F}$.
We refer to any ECTFF for a possibly-complex $D$-dimensional Hilbert space $\mathbb{H}$ that consists of $N$ subspaces of it, each of dimension $R$, as an ``${\operatorname{ECTFF}}(D,N,R)$,"
and say it is \textit{real} when $\mathbb{H}$ can be chosen to be $\mathbb{R}^D$.
The \textit{spatial complement}~\cite{CasazzaFMWZ11} of an ${\operatorname{ECTFF}}(D,N,R)$ $\set{\mathcal{U}_n}_{n\in\mathcal{N}}$ for $\mathbb{H}$ with $R<D$ is the sequence $\set{\mathcal{U}_n^\perp}_{n\in\mathcal{N}}$ of its members' orthogonal complements.
It is an ${\operatorname{ECTFF}}(D,N,D-R)$ for $\mathbb{H}$ since $\sum_{n\in\mathcal{N}}(\mathbf{I}-\mathbf{P}_n)=(N-\tfrac{NR}D)\mathbf{I}$ and
\begin{equation*}
\operatorname{dist}(\mathcal{U}_{n_1}^\perp,\mathcal{U}_{n_2}^\perp)
=\tfrac1{\sqrt{2}}\norm{(\mathbf{I}-\mathbf{P}_{n_1})-(\mathbf{I}-\mathbf{P}_{n_2})}_\mathrm{Fro}
=\tfrac1{\sqrt{2}}\norm{\mathbf{P}_{n_1}-\mathbf{P}_{n_2}}_\mathrm{Fro}
=\operatorname{dist}(\mathcal{U}_{n_1},\mathcal{U}_{n_2}),
\ \forall\,n_1\neq n_2.
\end{equation*}
It can also be shown that if $\set{\mathcal{U}_n}_{n\in\mathcal{N}}$ achieves equality in the orthoplex bound then $\set{\mathcal{U}_n^\perp}_{n\in\mathcal{N}}$ does as well~\cite{King21}.
\subsection{Grassmannian codes and finite frame theory}
Equip $\mathbb{F}^\mathcal{N}:=\set{\mathbf{x}:\mathcal{N}\rightarrow\mathbb{F}}$ with the inner product $\ip{\mathbf{x}_1}{\mathbf{x}_2}:=\sum_{n\in\mathcal{N}}\overline{\mathbf{x}_1(n)}\mathbf{x}_2(n)$.
(Under this notation, for any positive integer $N$, ``$\mathbb{F}^N$" is shorthand for $\mathbb{F}^{[N]}$ where $[N]:=\set{n\in\mathbb{Z}: 1\leq n\leq N}$.
Throughout, our complex inner products are conjugate-linear in their first arguments.)
The \textit{synthesis operator} of a sequence $\set{\boldsymbol{\varphi}_n}_{n\in\mathcal{N}}$ of vectors in $\mathbb{H}$ is $\boldsymbol{\Phi}:\mathbb{F}^\mathcal{N}\rightarrow\mathbb{H}$, $\boldsymbol{\Phi}\mathbf{x}:=\sum_{n\in\mathcal{N}}\mathbf{x}(n)\boldsymbol{\varphi}_n$.
Its adjoint is the corresponding \textit{analysis operator} $\boldsymbol{\Phi}^*:\mathbb{H}\rightarrow\mathbb{F}^\mathcal{N}$, $\boldsymbol{\Phi}^*\mathbf{y}=\sum_{n\in\mathcal{N}}\ip{\boldsymbol{\varphi}_n}{\mathbf{y}}\boldsymbol{\delta}_n$,
where $\set{\boldsymbol{\delta}_n}_{n\in\mathcal{N}}$ is the standard basis for $\mathbb{F}^\mathcal{N}$.
In particular, we can regard a single vector $\boldsymbol{\varphi}\in\mathbb{H}$
as the synthesis operator $\boldsymbol{\varphi}:\mathbb{F}\rightarrow\mathbb{H}$, $\boldsymbol{\varphi}(x):=x\boldsymbol{\varphi}$ whose adjoint $\boldsymbol{\varphi}^*:\mathbb{H}\rightarrow\mathbb{F}$, $\boldsymbol{\varphi}^*\mathbf{y}=\ip{\boldsymbol{\varphi}}{\mathbf{y}}$ is a linear functional.
Composing $\boldsymbol{\Phi}$ and $\boldsymbol{\Phi}^*$ gives the \textit{frame operator} $\boldsymbol{\Phi}\bfPhi^*:\mathbb{H}\rightarrow\mathbb{H}$, $\boldsymbol{\Phi}\bfPhi^*=\sum_{n\in\mathcal{N}}\boldsymbol{\varphi}_n^{}\boldsymbol{\varphi}_n^*$
and the $\mathcal{N}\times\mathcal{N}$ \textit{Gram matrix} $\boldsymbol{\Phi}^*\boldsymbol{\Phi}:\mathbb{F}^\mathcal{N}\rightarrow\mathbb{F}^\mathcal{N}$ whose $(n,n')$th entry is $(\boldsymbol{\Phi}^*\boldsymbol{\Phi})(n,n')=\ip{\boldsymbol{\varphi}_n}{\boldsymbol{\varphi}_{n'}}$.
In the special case where $\mathbb{H}=\mathbb{F}^\mathcal{D}=\set{\mathbf{y}:\mathcal{D}\rightarrow\mathbb{F}}$ for some finite set $\mathcal{D}$ of cardinality $D>0$,
$\boldsymbol{\Phi}$ is the $\mathcal{D}\times\mathcal{N}$ matrix whose $n$th column is $\boldsymbol{\varphi}_n$,
$\boldsymbol{\Phi}^*$ is its $\mathcal{N}\times\mathcal{D}$ conjugate transpose,
and $\boldsymbol{\Phi}\bfPhi^*$ and $\boldsymbol{\Phi}^*\boldsymbol{\Phi}$ are their $\mathcal{D}\times\mathcal{D}$ and $\mathcal{N}\times\mathcal{N}$ products, respectively.
In general, any $\mathbb{F}$-valued positive semidefinite $\mathcal{N}\times\mathcal{N}$ matrix $\mathbf{G}$ factors as $\mathbf{G}=\boldsymbol{\Phi}^*\boldsymbol{\Phi}$ for some sequence $\set{\boldsymbol{\varphi}_n}_{n\in\mathcal{N}}$ of vectors in a Hilbert space $\mathbb{H}$ over $\mathbb{F}$ of dimension $D=\operatorname{rank}(\mathbf{G})$.
This space is only unique up to a unitary transformation.
A sequence $\set{\boldsymbol{\varphi}_n}_{n\in\mathcal{N}}$ of vectors in $\mathbb{H}$ is an ($A$-)\textit{tight frame} for $\mathbb{H}$ if $\boldsymbol{\Phi}\bfPhi^*=A\mathbf{I}$ for some $A>0$.
In this case, any $\mathbf{y}\in\mathbb{H}$ can be written as
$\mathbf{y}=\frac1A\boldsymbol{\Phi}\bfPhi^*\mathbf{y}=\frac1A\sum_{n\in\mathcal{N}}\ip{\boldsymbol{\varphi}_n}{\mathbf{y}}\boldsymbol{\varphi}_n$ and so $\mathbb{H}$ is necessarily $\operatorname{span}\set{\boldsymbol{\varphi}_n}_{n\in\mathcal{N}}=\boldsymbol{\Phi}(\mathbb{F}^\mathcal{N})$.
More generally, $\set{\boldsymbol{\varphi}_n}_{n\in\mathcal{N}}$ is an $A$-tight frame for its span when $\boldsymbol{\Phi}\bfPhi^*\mathbf{y}=A\mathbf{y}$ for all $\mathbf{y}\in\boldsymbol{\Phi}(\mathbb{F}^\mathcal{N})$, namely when $\boldsymbol{\Phi}\bfPhi^*\boldsymbol{\Phi}=A\boldsymbol{\Phi}$.
This occurs if and only if $(\boldsymbol{\Phi}^*\boldsymbol{\Phi})^2=A\boldsymbol{\Phi}^*\boldsymbol{\Phi}$
(since having the latter implies that the image of $\boldsymbol{\Phi}(\boldsymbol{\Phi}^*\boldsymbol{\Phi}-A\mathbf{I})$ is contained in both $\boldsymbol{\Phi}(\mathbb{F}^\mathcal{N})$ and $\ker(\boldsymbol{\Phi}^*)=[\boldsymbol{\Phi}(\mathbb{F}^\mathcal{N})]^\perp$).
As such,
a nonzero self-adjoint $\mathcal{N}\times\mathcal{N}$ matrix $\mathbf{G}$ is the Gram matrix $\boldsymbol{\Phi}^*\boldsymbol{\Phi}$ of an $A$-tight frame $\set{\boldsymbol{\varphi}_n}_{n\in\mathcal{N}}$ for its span if and only if $\frac1A\mathbf{G}$ is a projection.
In this case, letting $D$ be $\dim(\operatorname{span}\set{\boldsymbol{\varphi}_n}_{n\in\mathcal{N}})=\operatorname{rank}(\boldsymbol{\Phi})=\frac1A\operatorname{Tr}(\mathbf{G})$ we have that $\mathbf{I}-\frac1A\boldsymbol{\Phi}^*\boldsymbol{\Phi}$ is a projection of rank $N-D$.
If $D<N$, there thus exists an $A$-tight frame $\set{\boldsymbol{\psi}_n}_{n\in\mathcal{N}}$ for a space of dimension $N-D$ that is uniquely defined (up to unitary transformations) by having
\begin{equation}
\label{eq.Naimark}
\boldsymbol{\Psi}^*\boldsymbol{\Psi}=A\mathbf{I}-\boldsymbol{\Phi}^*\boldsymbol{\Phi},
\quad\text{i.e.,}\quad
\ip{\boldsymbol{\psi}_{n_1}}{\boldsymbol{\psi}_{n_2}}
=\left\{\begin{array}{rl}
A-\norm{\boldsymbol{\varphi}_n}^2,&\ n_1=n_2,\\
-\ip{\boldsymbol{\varphi}_{n_1}}{\boldsymbol{\varphi}_{n_2}},&\ n_1\neq n_2.
\end{array}\right.
\end{equation}
Such tight frames $\set{\boldsymbol{\varphi}_n}_{n\in\mathcal{N}}$ and $\set{\boldsymbol{\psi}_n}_{n\in\mathcal{N}}$ are called \textit{Naimark complements} of each other.
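The following sketch (ours) illustrates~\eqref{eq.Naimark} numerically: starting from a $1$-tight frame for $\mathbb{R}^D$, the complementary Gram matrix $A\mathbf{I}-\boldsymbol{\Phi}^*\boldsymbol{\Phi}$ is positive semidefinite of rank $N-D$, and so is itself the Gram matrix of a Naimark complement:
\begin{verbatim}
# Sketch: Naimark complement from the Gram matrix of a tight frame.
import numpy as np

rng = np.random.default_rng(1)
D, N = 3, 5
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))  # random orthogonal Q
Phi = Q[:D, :]                  # D x N synthesis, Phi Phi^T = I (A = 1)
G = Phi.T @ Phi                 # Gram matrix: a rank-D projection
G_comp = np.eye(N) - G          # complementary Gram matrix
evals = np.linalg.eigvalsh(G_comp)
print(evals.min() > -1e-12, np.linalg.matrix_rank(G_comp) == N - D)
# True True: G_comp is PSD of rank N - D
\end{verbatim}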
Now again let $\set{\mathcal{U}_n}_{n\in\mathcal{N}}$ be any sequence of $R$-dimensional subspaces of a $D$-dimensional Hilbert space $\mathbb{H}$.
For each $n\in\mathcal{N}$ let $\boldsymbol{\Phi}_n:\mathbb{F}^\mathcal{R}\rightarrow\mathbb{H}$ be the synthesis operator of an ONB $\set{\boldsymbol{\varphi}_{n,r}}_{r\in\mathcal{R}}$ for $\mathcal{U}_n$,
and so $\mathbf{P}_n=\boldsymbol{\Phi}_n^{}\boldsymbol{\Phi}_n^*$ where $\boldsymbol{\Phi}_n^*\boldsymbol{\Phi}_n^{}=\mathbf{I}$.
Here, $\set{\boldsymbol{\varphi}_{n,r}}_{r\in\mathcal{R}}$ is only unique up to $\mathcal{R}\times\mathcal{R}$ unitaries.
That is, it can be any member of the fiber of the \textit{Stiefel manifold} that projects onto the point $\mathcal{U}_n$ in the Grassmannian.
The frame operator of the concatenation $\set{\boldsymbol{\varphi}_{n,r}}_{(n,r)\in\mathcal{N}\times\mathcal{R}}$ of these bases is
$\sum_{n\in\mathcal{N}}\sum_{r\in\mathcal{R}}\boldsymbol{\varphi}_{n,r}^{}\boldsymbol{\varphi}_{n,r}^*
=\sum_{n\in\mathcal{N}}\boldsymbol{\Phi}_n^{}\boldsymbol{\Phi}_n^*
=\sum_{n\in\mathcal{N}}\mathbf{P}_n$.
In particular,
$\set{\boldsymbol{\varphi}_{n,r}}_{(n,r)\in\mathcal{N}\times\mathcal{R}}$ is a tight frame for $\mathbb{H}$ if and only if $\set{\mathcal{U}_n}_{n\in\mathcal{N}}$ is a TFF for $\mathbb{H}$.
Meanwhile, the Gram matrix of $\set{\boldsymbol{\varphi}_{n,r}}_{(n,r)\in\mathcal{N}\times\mathcal{R}}$ has
$\ip{\boldsymbol{\varphi}_{n_1,r_1}}{\boldsymbol{\varphi}_{n_2,r_2}}
=\ip{\boldsymbol{\Phi}_{n_1}\boldsymbol{\delta}_{r_1}}{\boldsymbol{\Phi}_{n_2}\boldsymbol{\delta}_{r_2}}
=(\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}^{})(r_1,r_2)$ as its $((n_1,r_1),(n_2,r_2))$th entry,
and so is naturally regarded as an $\mathcal{N}\times\mathcal{N}$ array whose $(n_1,n_2)$th block is the $\mathcal{R}\times\mathcal{R}$ \textit{cross-Gram} matrix $\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}^{}$.
Since $\norm{\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}}_\mathrm{op}\leq\norm{\boldsymbol{\Phi}_{n_1}}_\mathrm{op}\norm{\boldsymbol{\Phi}_{n_2}}_\mathrm{op}=1$,
the singular values of this matrix can be written as $\set{\cos(\theta_{n_1,n_2,r})}_{r=1}^R$ for some nondecreasing sequence $\set{\theta_{n_1,n_2,r}}_{r=1}^R$ of \textit{principal angles} in $[0,\frac\pi 2]$.
From this perspective,
$\set{\mathcal{U}_n}_{n\in\mathcal{N}}$ is equichordal if and only if
$\operatorname{Tr}(\mathbf{P}_{n_1}\mathbf{P}_{n_2})
=\operatorname{Tr}(\boldsymbol{\Phi}_{n_1}^{}\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}^{}\boldsymbol{\Phi}_{n_2}^*)
=\operatorname{Tr}(\boldsymbol{\Phi}_{n_2}^*\boldsymbol{\Phi}_{n_1}^{}\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}^{})
=\norm{\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}^{}}_\mathrm{Fro}^2
=\sum_{r=1}^R\cos^2(\theta_{n_1,n_2,r})$
is constant over all $n_1\neq n_2$.
This perspective also gives a way to continue~\eqref{eq.ECTFF derivation 1} in a way that differs from~\eqref{eq.generalized Welch}:
\begin{equation*}
\tfrac{NR-D}{D(N-1)}
\leq\tfrac1{NR(N-1)}\sum_{n_1\in\mathcal{N}}\sum_{n_2\neq n_1}
\norm{\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}^{}}_\mathrm{Fro}^2\\
\leq\tfrac1{N(N-1)}\sum_{n_1\in\mathcal{N}}\sum_{n_2\neq n_1}
\norm{\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}^{}}_2^2
\leq \max_{n_1\neq n_2}\norm{\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}^{}}_2^2.
\end{equation*}
Here, equality holds throughout if and only if $\set{\boldsymbol{\varphi}_{n,r}}_{(n,r)\in\mathcal{N}\times\mathcal{R}}$ is a tight frame for $\mathbb{H}$ and $\theta_{n_1,n_2,r}$ is constant over all $n_1\neq n_2$ and $r$.
This occurs if and only if $\set{\mathcal{U}_n}_{n\in\mathcal{N}}$ is an EITFF for $\mathbb{H}$:
since $\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_1}^{}=\mathbf{I}$,
$\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}^{}$ is a unitary scaled by some $\sigma\geq0$ if and only if
$\mathbf{P}_{n_1}\mathbf{P}_{n_2}\mathbf{P}_{n_1}
=\boldsymbol{\Phi}_{n_1}^{}\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}^{}\boldsymbol{\Phi}_{n_2}^*\boldsymbol{\Phi}_{n_1}^{}\boldsymbol{\Phi}_{n_1}^*$
equals
$\boldsymbol{\Phi}_{n_1}^{}(\sigma^2\mathbf{I})\boldsymbol{\Phi}_{n_1}^*
=\sigma^2\mathbf{P}_{n_1}$.
In particular, every EITFF is an optimal packing of members of the Grassmannian with respect to the \textit{spectral distance}, defined as
\smash{$\operatorname{dist}_{\mathrm{s}}(\mathcal{U}_{n_1},\mathcal{U}_{n_2})
:=(1-\norm{\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}^{}}_2^2)^{\frac12}$}~\cite{DhillonHST08}.
In the special case where $\set{\mathcal{U}_n}_{n\in\mathcal{N}}$ is a sequence of subspaces of $\mathbb{H}$ of dimension $R=1$ we have $\boldsymbol{\Phi}_n=\norm{\boldsymbol{\varphi}_n}^{-1}\boldsymbol{\varphi}_n$ where $\boldsymbol{\varphi}_n$ is an arbitrary nonzero vector in $\mathcal{U}_n$.
Here, each cross-Gram matrix $\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}^{}$ is a $1\times 1$ matrix with entry $(\norm{\boldsymbol{\varphi}_{n_1}}\norm{\boldsymbol{\varphi}_{n_2}})^{-1}\ip{\boldsymbol{\varphi}_{n_1}}{\boldsymbol{\varphi}_{n_2}}$.
In this case,
both the above inequality and \eqref{eq.generalized Welch} reduce to the Welch bound~\eqref{eq.Welch bound}.
Assuming without loss of generality that $\set{\boldsymbol{\varphi}_n}_{n\in\mathcal{N}}$ is equal-norm,
it achieves equality in this bound if and only if it is a tight frame for $\mathbb{H}$ that is also \textit{equiangular} in the sense that $\abs{\ip{\boldsymbol{\varphi}_{n_1}}{\boldsymbol{\varphi}_{n_2}}}$ is constant over all $n_1\neq n_2$.
If $\set{\boldsymbol{\psi}_{n,r}}_{(n,r)\in\mathcal{N}\times\mathcal{R}}$ is the Naimark complement~\eqref{eq.Naimark} of a concatenation $\set{\boldsymbol{\varphi}_{n,r}}_{(n,r)\in\mathcal{N}\times\mathcal{R}}$ of ONBs of the subspaces $\set{\mathcal{U}_n}_{n\in\mathcal{N}}$ of an ${\operatorname{ECTFF}}(D,N,R)$ with $D<NR$ then
$\set{\mathcal{V}_n}_{n\in\mathcal{N}}$, $\mathcal{V}_n:=\operatorname{span}\set{\boldsymbol{\psi}_{n,r}}_{r\in\mathcal{R}}$
is an ${\operatorname{ECTFF}}(NR-D,N,R)$ (and is an EITFF if and only if $\set{\mathcal{U}_n}_{n\in\mathcal{N}}$ is as well).
Taking alternating Naimark and spatial complements~\cite{CasazzaFMWZ11} of an ${\operatorname{ECTFF}}(D,N,R)$ often leads to an infinite chain of mutually distinct ECTFFs.
We caution that the spatial complement of an ${\operatorname{EITFF}}(D,N,R)$ is itself an EITFF if and only if $D=2R$.
In general, letting $\set{\boldsymbol{\Phi}_n}_{n\in\mathcal{N}}$ and $\set{\boldsymbol{\Theta}_n}_{n\in\mathcal{N}}$ be synthesis operators for ONBs for an ECTFF $\set{\mathcal{U}_n}_{n\in\mathcal{N}}$ and its spatial complement $\set{\mathcal{U}_n^\perp}_{n\in\mathcal{N}}$, respectively, we have $\boldsymbol{\Phi}_n^*\boldsymbol{\Phi}_n^{}=\mathbf{I}$, $\boldsymbol{\Theta}_n^*\boldsymbol{\Theta}_n^{}=\mathbf{I}$ and
$\boldsymbol{\Phi}_n^{}\boldsymbol{\Phi}_n^*+\boldsymbol{\Theta}_n^{}\boldsymbol{\Theta}_n^*=\mathbf{I}$ for all $n$, and so
\begin{align*}
\mathbf{I}-(\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}^{})(\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}^{})^*
&=\mathbf{I}-\boldsymbol{\Phi}_{n_1}^*(\mathbf{I}-\boldsymbol{\Theta}_{n_2}^{}\boldsymbol{\Theta}_{n_2}^*)\boldsymbol{\Phi}_{n_1}^{}
=(\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Theta}_{n_2}^{})(\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Theta}_{n_2}^{})^*,\\
\mathbf{I}-(\boldsymbol{\Theta}_{n_1}^*\boldsymbol{\Theta}_{n_2}^{})^*(\boldsymbol{\Theta}_{n_1}^*\boldsymbol{\Theta}_{n_2}^{})
&=\mathbf{I}-\boldsymbol{\Theta}_{n_2}^*(\mathbf{I}-\boldsymbol{\Phi}_{n_1}^{}\boldsymbol{\Phi}_{n_1}^*)\boldsymbol{\Theta}_{n_2}^{}
=(\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Theta}_{n_2}^{})^*(\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Theta}_{n_2}^{}).
\end{align*}
This implies that the sequences of singular values of $\boldsymbol{\Phi}_{n_1}^*\boldsymbol{\Phi}_{n_2}^{}$ and $\boldsymbol{\Theta}_{n_1}^*\boldsymbol{\Theta}_{n_2}^{}$ are $1$-padded versions of each other, in general.
In particular, any ${\operatorname{ECTFF}}(D,N,R)$ with $\frac D2<R<D$ is not an EITFF since some but not all of the principal angles between any two of its subspaces are $0$.
\subsection{Harmonic equiangular tight frames}
A \textit{character} of a finite abelian group $\mathcal{G}$ is a homomorphism $\gamma:\mathcal{G}\rightarrow\mathbb{T}:=\set{z\in\mathbb{C}: \abs{z}=1}$.
The set $\hat{\mathcal{G}}$ of all characters of $\mathcal{G}$ is called the \textit{Pontryagin dual} of $\mathcal{G}$, and is itself a group under pointwise multiplication.
In this finite setting, it is well known that $\hat{\mathcal{G}}$ is isomorphic to $\mathcal{G}$ and that its members form an equal-norm orthogonal basis for $\mathbb{C}^\mathcal{G}$.
The synthesis operator \smash{$\boldsymbol{\Gamma}:\mathbb{C}^{\hat{\mathcal{G}}}\rightarrow\mathbb{C}^\mathcal{G}$} of the sequence \smash{$\set{\gamma}_{\gamma\in\hat{\mathcal{G}}}$} of all characters of $\mathcal{G}$ (each serving as its own index) is thus a square \smash{$\mathcal{G}\times\hat{\mathcal{G}}$} matrix that satisfies $\boldsymbol{\Gamma}^*=N\boldsymbol{\Gamma}^{-1}$ where $N:=\#(\mathcal{G})$.
It is the \textit{character table} of $\mathcal{G}$,
having $(g,\gamma)$th entry $\boldsymbol{\Gamma}(g,\gamma)=\gamma(g)$.
Its adjoint (conjugate-transpose) $\boldsymbol{\Gamma}^*$ is the analysis operator of the characters, namely the \textit{discrete Fourier transform} (DFT) over $\mathcal{G}$.
We identify $\mathcal{G}$ with the Pontryagin dual of $\hat{\mathcal{G}}$ via the isomorphism $g\mapsto(\gamma\mapsto\gamma(g))$.
That is, we define $g(\gamma):=\gamma(g)$,
meaning the $\hat{\mathcal{G}}\times\mathcal{G}$ character table of $\hat{\mathcal{G}}$ is simply the (nonconjugate) transpose of $\boldsymbol{\Gamma}$.
A \textit{harmonic} frame over $\mathcal{G}$ is one obtained by restricting the characters of $\mathcal{G}$ to some nonempty subset $\mathcal{D}$ of $\mathcal{G}$,
namely \smash{$\set{\boldsymbol{\varphi}_\gamma}_{\gamma\in\hat{\mathcal{G}}}\subseteq\mathbb{C}^\mathcal{D}$}, $\boldsymbol{\varphi}_\gamma(d):=\gamma(d)$.
It is a tight frame for $\mathbb{C}^\mathcal{D}$ since its synthesis operator $\boldsymbol{\Phi}$ satisfies
$(\boldsymbol{\Phi}\bfPhi^*)(d_1,d_2)=(\boldsymbol{\Gamma}\bfGamma^*)(d_1,d_2)=N\mathbf{I}(d_1,d_2)$ for all $d_1,d_2\in\mathcal{D}$.
It is also equal norm since $\norm{\boldsymbol{\varphi}_\gamma}^2=\sum_{d\in\mathcal{D}}\abs{\gamma(d)}^2=D:=\#(\mathcal{D})$ for all $\gamma\in\hat{\mathcal{G}}$.
Its Gram matrix is $\hat{\mathcal{G}}$-circulant,
having entries arising from the DFT of the characteristic function $\boldsymbol{\chi}_\mathcal{D}$ of $\mathcal{D}$:
\begin{equation*}
(\boldsymbol{\Phi}^*\boldsymbol{\Phi})(\gamma_1,\gamma_2)
=\ip{\boldsymbol{\varphi}_{\gamma_1}}{\boldsymbol{\varphi}_{\gamma_2}}
=\sum_{g\in\mathcal{D}}\overline{\gamma_1(g)}\gamma_2(g)
=\sum_{g\in\mathcal{G}}\overline{(\gamma_1^{}\gamma_2^{-1})(g)}\boldsymbol{\chi}_\mathcal{D}(g)
=(\boldsymbol{\Gamma}^*\boldsymbol{\chi}_\mathcal{D})(\gamma_1^{}\gamma_2^{-1}),
\end{equation*}
for any $\gamma_1,\gamma_2\in\hat{\mathcal{G}}$.
To compute just the magnitudes of these entries,
we exploit the way in which the DFT interacts with the \textit{convolution} $\mathbf{x}_1*\mathbf{x}_2\in\mathbb{C}^\mathcal{G}$, \smash{$(\mathbf{x}_1*\mathbf{x}_2)(g):=\sum_{g'\in\mathcal{G}}\mathbf{x}_1(g')\mathbf{x}_2(g-g')$} and \textit{involution} $\tilde{\mathbf{x}}_1\in\mathbb{C}^\mathcal{G}$,
\smash{$\tilde{\mathbf{x}}_1(g):=\overline{\mathbf{x}_1(-g)}$} of any given $\mathbf{x}_1,\mathbf{x}_2\in\mathbb{C}^\mathcal{G}$.
(In this general setting, we typically use additive notation on $\mathcal{G}$ and multiplicative notation on $\hat{\mathcal{G}}$.)
Specifically, for any $\gamma\in\hat{\mathcal{G}}$ we have
$[\boldsymbol{\Gamma}^*(\mathbf{x}_1*\mathbf{x}_2)](\gamma)=(\boldsymbol{\Gamma}^*\mathbf{x}_1)(\gamma)(\boldsymbol{\Gamma}^*\mathbf{x}_2)(\gamma)$
and $(\boldsymbol{\Gamma}^*\tilde{\mathbf{x}}_1)(\gamma)=\overline{(\boldsymbol{\Gamma}^*\mathbf{x}_1)(\gamma)}$.
Thus, for any $\gamma_1,\gamma_2\in\hat{\mathcal{G}}$,
\begin{equation}
\label{eq.difference set derivation 1}
\abs{(\boldsymbol{\Phi}^*\boldsymbol{\Phi})(\gamma_1,\gamma_2)}^2
=\abs{\ip{\boldsymbol{\varphi}_{\gamma_1}}{\boldsymbol{\varphi}_{\gamma_2}}}^2
=\abs{(\boldsymbol{\Gamma}^*\boldsymbol{\chi}_\mathcal{D})(\gamma_1^{}\gamma_2^{-1})}^2
=[\boldsymbol{\Gamma}^*(\boldsymbol{\chi}_\mathcal{D}*\tilde{\boldsymbol{\chi}}_\mathcal{D})](\gamma_1^{}\gamma_2^{-1}),
\end{equation}
where $\boldsymbol{\chi}_\mathcal{D}*\tilde{\boldsymbol{\chi}}_\mathcal{D}$ is the \textit{autocorrelation} function of $\boldsymbol{\chi}_\mathcal{D}$.
For any $g\in\mathcal{G}$,
the mapping $g_1\mapsto(g_1,g_1-g)$ is a bijection from $\mathcal{D}\cap(g+\mathcal{D})$ onto $\set{(g_1,g_2)\in\mathcal{D}\times\mathcal{D}: g=g_1-g_2}$,
meaning
\begin{equation*}
(\boldsymbol{\chi}_\mathcal{D}*\tilde{\boldsymbol{\chi}}_\mathcal{D})(g)
=\sum_{g'\in\mathcal{G}}\boldsymbol{\chi}_\mathcal{D}(g')\boldsymbol{\chi}_{g+\mathcal{D}}(g')
=\#[\mathcal{D}\cap(g+\mathcal{D})]
=\#\set{(g_1,g_2)\in\mathcal{D}\times\mathcal{D}: g=g_1-g_2}
\end{equation*}
is both the number of elements of $\mathcal{G}$ that $\mathcal{D}$ has in common with $g+\mathcal{D}$ and the number of distinct ways that $g$ can be written as a difference of members of $\mathcal{D}$.
Now consider the special case where $\mathcal{D}$ is a \textit{difference set} for $\mathcal{G}$,
namely when $\mathcal{G}\neq\set{0}$ and there exists $\Lambda$ such that $(\boldsymbol{\chi}_\mathcal{D}*\tilde{\boldsymbol{\chi}}_\mathcal{D})(g)=\Lambda$ for all $g\neq0$.
Since $(\boldsymbol{\chi}_\mathcal{D}*\tilde{\boldsymbol{\chi}}_\mathcal{D})(0)=D$,
this occurs if and only if $\boldsymbol{\chi}_\mathcal{D}*\tilde{\boldsymbol{\chi}}_\mathcal{D}=(D-\Lambda)\boldsymbol{\delta}_0+\Lambda\boldsymbol{\chi}_\mathcal{G}$.
Taking DFTs equivalently gives $\abs{(\boldsymbol{\Gamma}^*\boldsymbol{\chi}_\mathcal{D})(\gamma)}^2=(D-\Lambda)+\Lambda N\boldsymbol{\delta}_1(\gamma)$ for all $\gamma\in\hat{\mathcal{G}}$.
Here,
evaluating at $\gamma=1$ gives $D^2=(D-\Lambda)+\Lambda N$,
and so $\Lambda$ is necessarily \smash{$\frac{D(D-1)}{N-1}$},
a fact that also follows from a simple counting argument.
That is, $\mathcal{D}$ is a difference set of $\mathcal{G}$ if and only if
$\abs{(\boldsymbol{\Gamma}^*\boldsymbol{\chi}_\mathcal{D})(\gamma)}^2=D-\tfrac{D(D-1)}{N-1}=\tfrac{D(N-D)}{N-1}$ for all $\gamma\neq1$.
When combined with~\eqref{eq.difference set derivation 1},
this classical characterization~\cite{Turyn65} of difference sets yields the more recent observation~\cite{Konig99,StrohmerH03,XiaZG05,DingF07} that a nonempty subset $\mathcal{D}$ of $\mathcal{G}$ is a difference set for $\mathcal{G}$ if and only if the corresponding harmonic frame \smash{$\set{\boldsymbol{\varphi}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$} is an ETF for $\mathbb{C}^\mathcal{D}$,
since it equates to having
\begin{equation*}
\tfrac{\abs{\ip{\boldsymbol{\varphi}_{\gamma_1}}{\boldsymbol{\varphi}_{\gamma_2}}}}{\norm{\boldsymbol{\varphi}_{\gamma_1}}\norm{\boldsymbol{\varphi}_{\gamma_2}}}
=\tfrac1D\abs{(\boldsymbol{\Gamma}^*\boldsymbol{\chi}_\mathcal{D})(\gamma_1^{}\gamma_2^{-1})}
=\bigbracket{\tfrac{N-D}{D(N-1)}}^{\frac12},
\quad\forall\ \gamma_1\neq\gamma_2,
\end{equation*}
namely to achieving equality in the Welch bound~\eqref{eq.Welch bound}.
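For a concrete instance, the set of quadratic residues modulo $7$ is a classical difference set for $\mathbb{Z}_7$;
the following sketch (ours, under the same assumptions as before) confirms that the resulting harmonic frame is an ${\operatorname{ETF}}(3,7)$, that is, that it is tight and achieves the Welch bound:
\begin{verbatim}
import numpy as np

N = 7
D = np.array([1, 2, 4])            # quadratic residues mod 7
Gamma = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
Phi = Gamma[D, :]                  # 3 x 7 synthesis operator
d = len(D)

assert np.allclose(Phi @ Phi.conj().T, N * np.eye(d))    # tight
welch = np.sqrt((N - d) / (d * (N - 1)))                 # Welch bound
coh = [abs(Phi[:, a].conj() @ Phi[:, b]) / d
       for a in range(N) for b in range(N) if a != b]
assert np.allclose(coh, welch)     # equiangular at the bound: an ETF(3,7)
\end{verbatim}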
If $\mathcal{D}$ is any difference set for $\mathcal{G}$ then $\mathcal{D}^\mathrm{c}$ is as well;
provided $\mathcal{D}$ is not $\emptyset$ or $\mathcal{G}$, the fact that $\boldsymbol{\Gamma}$ has equal-norm orthogonal columns implies that the two resulting harmonic ETFs are Naimark complements of each other.
\section{Equichordal tight fusion frames from paired difference sets}
As discussed in the previous section,
an ${\operatorname{ECTFF}}(D,N,R)$ for a $D$-dimensional space $\mathbb{H}$ equates to an $NR$-vector tight frame for $\mathbb{H}$ that can be partitioned into $N$ orthonormal subsequences whose $R\times R$ cross-Gram matrices have a common Frobenius norm,
and moreover, is an EITFF for $\mathbb{H}$ precisely when these cross-Gram matrices are a common scalar multiple of some unitaries.
This Stiefel-based perspective of Grassmannian codes pervades the literature:
\cite{Zauner99,King16} construct ECTFFs from collections of orthonormal vectors whose cross-Gram matrices are either remarkably sparse or flat (and so have readily computed Frobenius norms),
whereas~\cite{LemmensS73b,CalderbankTX15,Hoggar77,EtTaoui20,Waldron20,EtTaoui18,BlokhuisBE18} construct EITFFs by converting each off-diagonal entry of a suitably nice $N\times N$ matrix into an $R\times R$ scaled unitary,
all while simultaneously ensuring tightness.
That said, echoing a common theme of frame theory,
it is sometimes easier to find nice tight frames for the subspaces of an ECTFF than it is to find nice ONBs for them.
For example, when an ETF partitions into regular simplices,
their spans naturally form an ECTFF~\cite{FickusJKM18}.
To be fair, some of these ECTFFs are more easily constructed by other methods:
those arising from Steiner ETFs (including McFarland-harmonic ETFs~\cite{JasperMF14}) also arise directly from their underlying BIBDs~\cite{Zauner99},
while those arising from Singer-complement-harmonic ETFs are actually ETF-tensor-ONB-type EITFFs~\cite{FickusS20}.
Nevertheless, some of these ECTFFs have not been explained by competing methods,
including some arising from polyphase ${\operatorname{ETF}}(q+1,q^3+1)$ and twin-prime-power-complement-harmonic ETFs~\cite{FickusJKM18}.
One downside to such an approach is that it can become more difficult to characterize when a resulting ECTFF is actually an EITFF~\cite{FickusS20}.
In this paper we carry this idea further,
constructing ECTFFs from many overlapping sub-ETFs of a single ETF.
More precisely, we use a harmonic ETF that contains a sub-ETF whose members are themselves indexed by the elements of a difference set:
\begin{definition}
\label{def.paired difference sets}
We say a difference set $\mathcal{D}$ for a finite abelian group $\mathcal{G}$ is \textit{paired} with a difference set $\mathcal{E}$ for its Pontryagin dual $\hat{\mathcal{G}}$ if
$\set{\boldsymbol{\varphi}_\varepsilon}_{\varepsilon\in\mathcal{E}}\subseteq\mathbb{C}^\mathcal{D}$,
$\boldsymbol{\varphi}_\varepsilon(d):=\varepsilon(d)$ is a tight frame for its span.
\end{definition}
This concept was briefly discussed in~\cite{FickusMJ16}.
That paper also mentions two numerically obtained examples of such pairs.
The first of these consisted of certain subsets $\mathcal{D}$ and $\mathcal{E}$ of $\mathbb{Z}_2^4$ and its dual, of cardinalities $6$ and $10$, respectively.
The other consisted of two subsets of $\mathbb{Z}_4^2$ and its dual of these same cardinalities.
While the latter remains a mystery,
we were able to find an explicit version of the former that, as explained in Section~4,
generalizes to an infinite family of paired difference sets:
\begin{example}
\label{ex.PDS}
Let $\mathcal{G}=\mathbb{Z}_2^4$ be the elementary abelian group of order $16$.
As detailed and generalized later on,
the function $\mathrm{Q}:\mathbb{Z}_2^4\rightarrow\mathbb{Z}_2$, $\mathrm{Q}(\mathbf{x})=\mathrm{Q}(x_1,x_2,x_3,x_4)=x_1x_2+x_3x_4+x_3^2+x_4^2$ is a \textit{quadratic} form that gives rise to the \textit{symplectic} (nondegenerate alternating bilinear) form $\mathrm{B}:\mathbb{Z}_2^4\times\mathbb{Z}_2^4\rightarrow\mathbb{Z}_2$,
$\mathrm{B}(\mathbf{x},\mathbf{y})=\mathrm{Q}(\mathbf{x}+\mathbf{y})+\mathrm{Q}(\mathbf{x})+\mathrm{Q}(\mathbf{y})=x_1y_2+x_2y_1+x_3y_4+x_4y_3$.
A point $\mathbf{x}\in\mathbb{Z}_2^4$ is \textit{singular} if $\mathrm{Q}(\mathbf{x})=0$, and is otherwise \textit{nonsingular}.
Let $\mathcal{D}$ and $\mathcal{E}=\mathcal{D}^\mathrm{c}$ be the $6$- and $10$-element sets of all singular and nonsingular points of $\mathrm{Q}$, respectively:
\begin{align}
\begin{split}
\label{eq.PDS(16,6,10)}
\mathcal{D}&=\set{0000,0100,1000,1101,1110,1111},\\
\mathcal{E}&=\set{0001,0010,0011,0101,0110,0111,1001,1010,1011,1100}.
\end{split}
\end{align}
These are complementary difference sets for $\mathbb{Z}_2^4$.
This can be verified by noting, for example, that every nonzero element of $\mathbb{Z}_2^4$ appears in the difference table of $\mathcal{D}$ exactly $\Lambda=\frac{6(6-1)}{16-1}=2$ times:
\begin{equation*}
\begin{array}{c|cccccc}
-&0000&0100&1000&1101&1110&1111\\\hline
0000&0000&0100&1000&1101&1110&1111\\
0100&0100&0000&1100&1001&1010&1011\\
1000&1000&1100&0000&0101&0110&0111\\
1101&1101&1001&0101&0000&0011&0010\\
1110&1110&1010&0110&0011&0000&0001\\
1111&1111&1011&0111&0010&0001&0000
\end{array}.
\end{equation*}
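This count is also easy to automate.
A minimal sketch (ours, assuming \texttt{numpy}; the helper name \texttt{diff\_counts} is illustrative) that verifies both $\mathcal{D}$ and $\mathcal{E}$ in this way is:
\begin{verbatim}
import numpy as np
from itertools import product

V = [np.array(v) for v in product([0, 1], repeat=4)]   # Z_2^4, lexicographic
D = [np.array([int(c) for c in s]) for s in
     ('0000', '0100', '1000', '1101', '1110', '1111')]
E = [v for v in V if not any((v == d).all() for d in D)]

def diff_counts(S):                # in Z_2^4, a - b = a + b
    counts = {}
    for a in S:
        for b in S:
            g = tuple((a + b) % 2)
            counts[g] = counts.get(g, 0) + 1
    return counts

for S, lam in ((D, 2), (E, 6)):    # Lambda = |S|(|S| - 1)/15
    c = diff_counts(S)
    assert all(c[g] == lam for g in c if any(g))
\end{verbatim}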
To explicitly construct the corresponding harmonic ETFs we identify $\mathbb{Z}_2^4$ with the Pontryagin dual via the isomorphism that maps $\mathbf{y}\in\mathbb{Z}_2^4$ to the character $\mathbf{x}\mapsto(-1)^{\mathrm{B}(\mathbf{x},\mathbf{y})}$.
Under this identification, the character table $\boldsymbol{\Gamma}$ becomes the following $\mathbb{Z}_2^4\times\mathbb{Z}_2^4$ matrix with entries $\boldsymbol{\Gamma}(\mathbf{x},\mathbf{y})=(-1)^{\mathrm{B}(\mathbf{x},\mathbf{y})}$,
and its $(\mathcal{D}\times\mathbb{Z}_2^4)$- and $(\mathcal{E}\times\mathbb{Z}_2^4)$-indexed submatrices $\boldsymbol{\Gamma}_0$ and $\boldsymbol{\Gamma}_1$ are the synthesis operators of a harmonic ${\operatorname{ETF}}(6,16)$ and its Naimark-complementary harmonic ${\operatorname{ETF}}(10,16)$, respectively:
\begin{equation}
\label{eq.16 x 16 Gamma}
\boldsymbol{\Gamma}=\left[\begin{smallmatrix}
+&+&+&+&+&+&+&+&+&+&+&+&+&+&+&+\\
+&+&-&-&+&+&-&-&+&+&-&-&+&+&-&-\\
+&-&+&-&+&-&+&-&+&-&+&-&+&-&+&-\\
+&-&-&+&+&-&-&+&+&-&-&+&+&-&-&+\\
+&+&+&+&+&+&+&+&-&-&-&-&-&-&-&-\\
+&+&-&-&+&+&-&-&-&-&+&+&-&-&+&+\\
+&-&+&-&+&-&+&-&-&+&-&+&-&+&-&+\\
+&-&-&+&+&-&-&+&-&+&+&-&-&+&+&-\\
+&+&+&+&-&-&-&-&+&+&+&+&-&-&-&-\\
+&+&-&-&-&-&+&+&+&+&-&-&-&-&+&+\\
+&-&+&-&-&+&-&+&+&-&+&-&-&+&-&+\\
+&-&-&+&-&+&+&-&+&-&-&+&-&+&+&-\\
+&+&+&+&-&-&-&-&-&-&-&-&+&+&+&+\\
+&+&-&-&-&-&+&+&-&-&+&+&+&+&-&-\\
+&-&+&-&-&+&-&+&-&+&-&+&+&-&+&-\\
+&-&-&+&-&+&+&-&-&+&+&-&+&-&-&+
\end{smallmatrix}\right],
\quad
\begin{array}{c}
\boldsymbol{\Gamma}_0=\left[\begin{smallmatrix}
+&+&+&+&+&+&+&+&+&+&+&+&+&+&+&+\\
+&+&+&+&+&+&+&+&-&-&-&-&-&-&-&-\\
+&+&+&+&-&-&-&-&+&+&+&+&-&-&-&-\\
+&+&-&-&-&-&+&+&-&-&+&+&+&+&-&-\\
+&-&+&-&-&+&-&+&-&+&-&+&+&-&+&-\\
+&-&-&+&-&+&+&-&-&+&+&-&+&-&-&+
\end{smallmatrix}\right],\bigskip\\
\boldsymbol{\Gamma}_1=\left[\begin{smallmatrix}
+&+&-&-&+&+&-&-&+&+&-&-&+&+&-&-\\
+&-&+&-&+&-&+&-&+&-&+&-&+&-&+&-\\
+&-&-&+&+&-&-&+&+&-&-&+&+&-&-&+\\
+&+&-&-&+&+&-&-&-&-&+&+&-&-&+&+\\
+&-&+&-&+&-&+&-&-&+&-&+&-&+&-&+\\
+&-&-&+&+&-&-&+&-&+&+&-&-&+&+&-\\
+&+&-&-&-&-&+&+&+&+&-&-&-&-&+&+\\
+&-&+&-&-&+&-&+&+&-&+&-&-&+&-&+\\
+&-&-&+&-&+&+&-&+&-&-&+&-&+&+&-\\
+&+&+&+&-&-&-&-&-&-&-&-&+&+&+&+
\end{smallmatrix}\right].
\end{array}
\end{equation}
(Here the elements of subsets of $\mathbb{Z}_2^4$ are ordered lexicographically, and ``$+$'' and ``$-$'' are shorthand for $1$ and $-1$, respectively.)
In particular, $\boldsymbol{\Gamma}_0^{}\boldsymbol{\Gamma}_0^*=16\mathbf{I}$ (tightness),
the diagonal entries of $\boldsymbol{\Gamma}_0^*\boldsymbol{\Gamma}_0^{}$ are $6$ while its off-diagonal entries have modulus $2$ (equiangularity),
and $\boldsymbol{\Gamma}_0^*\boldsymbol{\Gamma}_0^{}+\boldsymbol{\Gamma}_1^*\boldsymbol{\Gamma}_1^{}=16\mathbf{I}$
(Naimark complementarity).
Such real harmonic ETFs are well known~\cite{DingF07,JasperMF14},
and yield optimal packings of $16$ lines (one-dimensional subspaces) of $\mathbb{R}^6$ and $\mathbb{R}^{10}$.
What is new here is that under the aforementioned identification of $\mathbb{Z}_2^4$ with its Pontryagin dual,
the two difference sets $\mathcal{D}$ and $\mathcal{E}$ are paired in the sense of Definition~\ref{def.paired difference sets}.
That is, the columns of the $(\mathcal{D}\times\mathcal{E})$-indexed submatrix
\begin{equation}
\label{eq.ETF(5,10)}
\boldsymbol{\Gamma}_{01}=\left[\begin{smallmatrix}
+&+&+&+&+&+&+&+&+&+\\
+&+&+&+&+&+&-&-&-&-\\
+&+&+&-&-&-&+&+&+&-\\
+&-&-&-&+&+&-&+&+&+\\
-&+&-&+&-&+&+&-&+&+\\
-&-&+&+&+&-&+&+&-&+
\end{smallmatrix}\right]
\end{equation}
of $\boldsymbol{\Gamma}_0$ (and $\boldsymbol{\Gamma}$) form a tight frame for their span.
This is far from obvious, but can be explicitly verified by showing that
$\boldsymbol{\Gamma}_{01}^{}\boldsymbol{\Gamma}_{01}^{*}\boldsymbol{\Gamma}_{01}^{}
=12\boldsymbol{\Gamma}_{01}^{}$, or equivalently,
that $\frac1{12}\boldsymbol{\Gamma}_{01}^{*}\boldsymbol{\Gamma}_{01}^{}$ is a projection.
Here, the tight frame constant $A=12$ is significant:
since $\frac1{12}\boldsymbol{\Gamma}_{01}^{*}\boldsymbol{\Gamma}_{01}^{}$ is a $10\times 10$ projection matrix with diagonal entries $\frac 6{12}$, the columns of $\boldsymbol{\Gamma}_{01}$ form a tight frame for a subspace of $\mathbb{R}^\mathcal{D}\cong\mathbb{R}^6$ of dimension $\operatorname{rank}(\boldsymbol{\Gamma}_{01})
=\operatorname{rank}(\boldsymbol{\Gamma}_{01}^*\boldsymbol{\Gamma}_{01}^{})
=\operatorname{Tr}(\frac1{12}\boldsymbol{\Gamma}_{01}^{*}\boldsymbol{\Gamma}_{01}^{})
=5$.
As the $10$ columns of $\boldsymbol{\Gamma}_{01}$ are moreover equiangular (being $10$ of the $16$ equiangular columns of $\boldsymbol{\Gamma}_0$)
they thus form an ${\operatorname{ETF}}(5,10)$ for their span.
This itself is remarkable: there is an optimal packing of $10$ lines in $\mathbb{R}^5$ that extends to an optimal packing of $16$ lines in $\mathbb{R}^6$.
As we now explain, it moreover implies the existence of a real ${\operatorname{ECTFF}}(6,16,5)$ and a real ${\operatorname{ECTFF}}(10,16,5)$.
\end{example}
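All of the claims of Example~\ref{ex.PDS} can be reproduced with a few lines of code.
The following sketch (ours; it assumes \texttt{numpy}, and the identifiers are illustrative) rebuilds $\boldsymbol{\Gamma}$ from $\mathrm{B}$, recovers $\mathcal{D}$ and $\mathcal{E}$ as the singular and nonsingular points of $\mathrm{Q}$, and verifies the tightness, equiangularity and pairing properties discussed above:
\begin{verbatim}
import numpy as np
from itertools import product

V = [np.array(v) for v in product([0, 1], repeat=4)]   # lexicographic order
B = lambda x, y: (x[0]*y[1] + x[1]*y[0] + x[2]*y[3] + x[3]*y[2]) % 2
Q = lambda x: (x[0]*x[1] + x[2]*x[3] + x[2] + x[3]) % 2
Gamma = np.array([[(-1) ** B(x, y) for y in V] for x in V])

Didx = [i for i, v in enumerate(V) if Q(v) == 0]       # singular points
Eidx = [i for i, v in enumerate(V) if Q(v) == 1]       # nonsingular points
assert [''.join(map(str, V[i])) for i in Didx] \
    == ['0000', '0100', '1000', '1101', '1110', '1111']

G0 = Gamma[Didx, :]                # synthesis operator of the ETF(6,16)
assert np.allclose(G0 @ G0.T, 16 * np.eye(6))          # tightness
gram = G0.T @ G0
mask = ~np.eye(16, dtype=bool)
assert np.allclose(np.diag(gram), 6)
assert np.allclose(np.abs(gram[mask]), 2)              # equiangularity

G01 = Gamma[np.ix_(Didx, Eidx)]                        # 6 x 10 submatrix
assert np.allclose(G01 @ G01.T @ G01, 12 * G01)        # tight for its span
assert np.linalg.matrix_rank(G01) == 5                 # an ETF(5,10)
\end{verbatim}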
\begin{theorem}
\label{thm.ECTFF from PDS}
Let $\mathcal{D}$ and $\mathcal{E}$ be paired difference sets (Definition~\ref{def.paired difference sets}) for a finite abelian group $\mathcal{G}$ and its Pontryagin dual $\hat{\mathcal{G}}$, respectively.
For any $\gamma\in\hat{\mathcal{G}}$ let $\mathcal{U}_\gamma:=\operatorname{span}\set{\boldsymbol{\varphi}_{\gamma\varepsilon}}_{\varepsilon\in\mathcal{E}}$ where,
for any $\varepsilon\in\mathcal{E}$,
$\boldsymbol{\varphi}_{\gamma\varepsilon}\in\mathbb{C}^\mathcal{D}$ is defined by $\boldsymbol{\varphi}_{\gamma\varepsilon}(d):=\gamma(d)\varepsilon(d)$ for all $d\in\mathcal{D}$.
Also let
\begin{equation}
\label{eq.paired diff set rank}
R=\tfrac{DE(N-1)}{(D+E-1)N-DE}
\ \text{where}\ D:=\#(\mathcal{D}),\ E:=\#(\mathcal{E}),\ N:=\#(\mathcal{G})=\#(\hat{\mathcal{G}}).
\end{equation}
Then $\set{\mathcal{U}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$ is an ${\operatorname{ECTFF}}(D,N,R)$ for $\mathbb{C}^\mathcal{D}$ where, for each $\gamma\in\hat{\mathcal{G}}$,
$\set{\boldsymbol{\varphi}_{\gamma\varepsilon}}_{\varepsilon\in\mathcal{E}}$ is an ${\operatorname{ETF}}(R,E)$ for $\mathcal{U}_\gamma$ that is unitarily equivalent to $\set{\boldsymbol{\varphi}_{\varepsilon}}_{\varepsilon\in\mathcal{E}}$.
Moreover, the relation of being paired is symmetric: $\mathcal{E}$ and $\mathcal{D}$ are also paired, yielding an analogous ${\operatorname{ECTFF}}(E,N,R)$ for $\mathbb{C}^\mathcal{E}$.
\end{theorem}
\begin{proof}
Since $\mathcal{D}$ is a difference set for $\mathcal{G}$ its harmonic frame \smash{$\set{\boldsymbol{\varphi}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$}, $\boldsymbol{\varphi}_\gamma(d):=\gamma(d)$ is an ${\operatorname{ETF}}(D,N)$ for $\mathbb{C}^\mathcal{D}$.
Since $\mathcal{D}$ and $\mathcal{E}$ are paired,
the corresponding subsequence $\set{\boldsymbol{\varphi}_\varepsilon}_{\varepsilon\in\mathcal{E}}$ of this ${\operatorname{ETF}}(D,N)$ is, by definition, a tight frame for $\mathcal{U}_1=\operatorname{span}\set{\boldsymbol{\varphi}_\varepsilon}_{\varepsilon\in\mathcal{E}}$.
Moreover, since $\set{\boldsymbol{\varphi}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$ is equiangular,
this subsequence $\set{\boldsymbol{\varphi}_\varepsilon}_{\varepsilon\in\mathcal{E}}$ is also equiangular,
and the two sequences share the same coherence (Welch bound).
In particular, $\set{\boldsymbol{\varphi}_\varepsilon}_{\varepsilon\in\mathcal{E}}$ is an ${\operatorname{ETF}}(R,E)$ for $\mathcal{U}_1$ where $R=\dim(\mathcal{U}_1)$ satisfies
\begin{equation*}
(\tfrac{E}{R}-1)\tfrac1{E-1}
=\tfrac{E-R}{R(E-1)}
=\tfrac{N-D}{D(N-1)}.
\end{equation*}
Solving for $R$ gives~\eqref{eq.paired diff set rank}.
(In the degenerate case where $E=1$,
instead note that the single vector $\set{\boldsymbol{\varphi}_\varepsilon}_{\varepsilon\in\mathcal{E}}$ is an ETF for its span, which has dimension \smash{$\tfrac{DE(N-1)}{(D+E-1)N-DE}=\tfrac{D(N-1)}{DN-D}=1=R$}.)
Next, for any $\gamma\in\hat{\mathcal{G}}$,
$\set{\boldsymbol{\varphi}_{\gamma\varepsilon}}_{\varepsilon\in\mathcal{E}}$ and $\set{\boldsymbol{\varphi}_\varepsilon}_{\varepsilon\in\mathcal{E}}$ have the same Gram matrix, implying they are unitarily equivalent:
for any $\varepsilon_1,\varepsilon_2\in\mathcal{E}$,
\begin{equation*}
\ip{\boldsymbol{\varphi}_{\gamma\varepsilon_1}}{\boldsymbol{\varphi}_{\gamma\varepsilon_2}}
=\sum_{d\in\mathcal{D}}\overline{\gamma(d)\varepsilon_1(d)}\gamma(d)\varepsilon_2(d)
=\sum_{d\in\mathcal{D}}\overline{\varepsilon_1(d)}\varepsilon_2(d)
=\ip{\boldsymbol{\varphi}_{\varepsilon_1}}{\boldsymbol{\varphi}_{\varepsilon_2}}.
\end{equation*}
In particular,
for any $\gamma\in\hat{\mathcal{G}}$,
$\set{\boldsymbol{\varphi}_{\gamma\varepsilon}}_{\varepsilon\in\mathcal{E}}$ is an ${\operatorname{ETF}}(R,E)$ for its span $\mathcal{U}_\gamma$.
Next, to show that \smash{$\set{\mathcal{U}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$} is an ${\operatorname{ECTFF}}(D,N,R)$ for $\mathbb{C}^\mathcal{D}$,
let $\boldsymbol{\Phi}_\gamma$ be the synthesis operator of $\set{\boldsymbol{\varphi}_{\gamma\varepsilon}}_{\varepsilon\in\mathcal{E}}$.
Since $\set{\boldsymbol{\varphi}_{\gamma\varepsilon}}_{\varepsilon\in\mathcal{E}}$ is an $E$-vector tight frame for $\mathcal{U}_\gamma$ and $\norm{\boldsymbol{\varphi}_{\gamma\varepsilon}}^2=D$ for all $\varepsilon$,
the projection $\mathbf{P}_\gamma$ onto $\mathcal{U}_\gamma$ can be expressed as
\smash{$\mathbf{P}_\gamma
=\frac{R}{DE}\boldsymbol{\Phi}_\gamma^{}\boldsymbol{\Phi}_\gamma^*$}.
To see that \smash{$\set{\mathcal{U}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$} is a TFF for $\mathbb{C}^\mathcal{D}$ note that for any $\gamma'\in\hat{\mathcal{G}}$, there are exactly $E$ choices of $\gamma\in\hat{\mathcal{G}}$ such that $\gamma'\in\gamma\mathcal{E}$.
This allows us to write the fusion frame operator of \smash{$\set{\mathcal{U}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$} in terms of the synthesis operator $\boldsymbol{\Phi}$ of \smash{$\set{\boldsymbol{\varphi}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$}:
\begin{equation}
\label{eq.proof of ECTFF from PDS 1}
\sum_{\gamma\in\hat{\mathcal{G}}}\mathbf{P}_\gamma
=\sum_{\gamma\in\hat{\mathcal{G}}}\tfrac{R}{DE}\boldsymbol{\Phi}_\gamma^{}\boldsymbol{\Phi}_\gamma^*
=\tfrac{R}{DE}\sum_{\gamma\in\hat{\mathcal{G}}}\sum_{\varepsilon\in\mathcal{E}}
\boldsymbol{\varphi}_{\gamma\varepsilon}^{}\boldsymbol{\varphi}_{\gamma\varepsilon}^*
=\tfrac{R}{D}\sum_{\gamma'\in\hat{\mathcal{G}}}\boldsymbol{\varphi}_{\gamma'}^{}\boldsymbol{\varphi}_{\gamma'}^{*}
=\tfrac{R}{D}\boldsymbol{\Phi}\bfPhi^*
=\tfrac{NR}{D}\mathbf{I}.
\end{equation}
Next, to show that \smash{$\set{\mathcal{U}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$} is equichordal,
note that for any $\gamma_1,\gamma_2\in\hat{\mathcal{G}}$,
$\gamma_1\neq\gamma_2$,
\begin{equation*}
\operatorname{Tr}(\mathbf{P}_{\gamma_1}\mathbf{P}_{\gamma_2})
=\tfrac{R^2}{D^2E^2}\operatorname{Tr}(\boldsymbol{\Phi}_{\gamma_1}^{}\boldsymbol{\Phi}_{\gamma_1}^*\boldsymbol{\Phi}_{\gamma_2}^{}\boldsymbol{\Phi}_{\gamma_2}^*)
=\tfrac{R^2}{D^2E^2}\norm{\boldsymbol{\Phi}_{\gamma_1}^*\boldsymbol{\Phi}_{\gamma_2}^{}}_{\mathrm{Fro}}^2
=\tfrac{R^2}{D^2E^2}\sum_{\varepsilon_1\in\mathcal{E}}\sum_{\varepsilon_2\in\mathcal{E}}
\abs{\ip{\boldsymbol{\varphi}_{\gamma_1\varepsilon_1}}{\boldsymbol{\varphi}_{\gamma_2\varepsilon_2}}}^2.
\end{equation*}
Here, since \smash{$\set{\boldsymbol{\varphi}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$} is an ${\operatorname{ETF}}(D,N)$ with $\norm{\boldsymbol{\varphi}_\gamma}^2=D$ for all $\gamma$,
we have \smash{$\abs{\ip{\boldsymbol{\varphi}_{\gamma}}{\boldsymbol{\varphi}_{\gamma'}}}^2=\frac{D(N-D)}{N-1}$} for all $\gamma,\gamma'\in\hat{\mathcal{G}}$ with $\gamma\neq\gamma'$.
As such, the value of the above sum depends entirely on the number of pairs $(\varepsilon_1,\varepsilon_2)\in\mathcal{E}\times\mathcal{E}$ such that $\gamma_1\varepsilon_1=\gamma_2\varepsilon_2$,
that is, such that $\gamma_1^{}\gamma_2^{-1}=\varepsilon_1^{-1}\varepsilon_2^{}$.
Since $\gamma_1\neq\gamma_2$ and $\mathcal{E}$ is a difference set for $\hat{\mathcal{G}}$,
this number is exactly \smash{$\frac{E(E-1)}{N-1}$}.
That is, for any $\gamma_1\neq\gamma_2$,
\begin{equation*}
\operatorname{Tr}(\mathbf{P}_{\gamma_1}\mathbf{P}_{\gamma_2})
=\tfrac{R^2}{D^2E^2}\sum_{\varepsilon_1\in\mathcal{E}}\sum_{\varepsilon_2\in\mathcal{E}}
\abs{\ip{\boldsymbol{\varphi}_{\gamma_1\varepsilon_1}}{\boldsymbol{\varphi}_{\gamma_2\varepsilon_2}}}^2
=\tfrac{R^2}{D^2E^2}\bigset{\tfrac{E(E-1)}{N-1}D^2+[E^2-\tfrac{E(E-1)}{N-1}]\tfrac{D(N-D)}{N-1}}.
\end{equation*}
Since this value is constant over all $\gamma_1\neq\gamma_2$,
\smash{$\set{\mathcal{U}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$} is an ECTFF.
(Alternatively, we may forgo~\eqref{eq.proof of ECTFF from PDS 1} provided we instead use~\eqref{eq.paired diff set rank} to show that the above value for $\operatorname{Tr}(\mathbf{P}_{\gamma_1}\mathbf{P}_{\gamma_2})$ simplifies to \smash{$\frac{R(NR-D)}{D(N-1)}$},
meaning $\set{\mathcal{U}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$ achieves equality throughout~\eqref{eq.generalized Welch} and so is necessarily tight.)
For the final conclusions,
recall that in general,
we identify $\mathcal{G}$ with the Pontryagin dual of $\hat{\mathcal{G}}$ via the isomorphism $g\mapsto(\gamma\mapsto\gamma(g))$, that is, we define $g(\gamma):=\gamma(g)$.
Since $\mathcal{E}$ is a difference set for $\hat{\mathcal{G}}$,
its harmonic frame $\set{\boldsymbol{\psi}_g}_{g\in\mathcal{G}}$, $\boldsymbol{\psi}_g(\varepsilon):=g(\varepsilon)=\varepsilon(g)$
is an ${\operatorname{ETF}}(E,N)$ for $\mathbb{C}^\mathcal{E}$.
To show that $\mathcal{E}$ and $\mathcal{D}$ are paired,
we show the corresponding subsequence $\set{\boldsymbol{\psi}_d}_{d\in\mathcal{D}}$ of this harmonic ETF is a tight frame for its span.
To do this, note the synthesis operators $\boldsymbol{\Phi}_{\mathcal{E}}$ and $\boldsymbol{\Psi}_{\mathcal{D}}$ of $\set{\boldsymbol{\varphi}_\varepsilon}_{\varepsilon\in\mathcal{E}}$ and $\set{\boldsymbol{\psi}_d}_{d\in\mathcal{D}}$, respectively,
are transposes of each other: for any $d\in\mathcal{D}$, $\varepsilon\in\mathcal{E}$,
\begin{equation*}
\boldsymbol{\Psi}_{\mathcal{D}}(\varepsilon,d)
=\boldsymbol{\psi}_d(\varepsilon)
=d(\varepsilon)=\varepsilon(d)
=\boldsymbol{\varphi}_\varepsilon(d)
=\boldsymbol{\Phi}_{\mathcal{E}}(d,\varepsilon).
\end{equation*}
At the same time, since $\mathcal{D}$ and $\mathcal{E}$ are paired, $\set{\boldsymbol{\varphi}_\varepsilon}_{\varepsilon\in\mathcal{E}}$ is a tight frame for its span and so there exists $A>0$ such that
$\boldsymbol{\Phi}_{\mathcal{E}}^{}\boldsymbol{\Phi}_{\mathcal{E}}^{*}\boldsymbol{\Phi}_{\mathcal{E}}^{}=A\boldsymbol{\Phi}_{\mathcal{E}}^{}$.
Taking transposes of this equation thus gives
\begin{equation*}
A\boldsymbol{\Psi}_{\mathcal{D}}
=A\boldsymbol{\Phi}_\mathcal{E}^\mathrm{T}
=(\boldsymbol{\Phi}_{\mathcal{E}}^{}\boldsymbol{\Phi}_{\mathcal{E}}^{*}\boldsymbol{\Phi}_{\mathcal{E}}^{})^\mathrm{T}
=\boldsymbol{\Phi}_{\mathcal{E}}^{\mathrm{T}}(\boldsymbol{\Phi}_{\mathcal{E}}^*)^{\mathrm{T}}\boldsymbol{\Phi}_{\mathcal{E}}^{\mathrm{T}}
=\boldsymbol{\Phi}_{\mathcal{E}}^{\mathrm{T}}(\boldsymbol{\Phi}_{\mathcal{E}}^\mathrm{T})^{*}\boldsymbol{\Phi}_{\mathcal{E}}^{\mathrm{T}}
=\boldsymbol{\Psi}_{\mathcal{D}}^{}\boldsymbol{\Psi}_{\mathcal{D}}^{*}\boldsymbol{\Psi}_{\mathcal{D}}^{},
\end{equation*}
and so $\set{\boldsymbol{\psi}_d}_{d\in\mathcal{D}}$ is indeed a tight frame for its span.
Since the expression given for $R$ in~\eqref{eq.paired diff set rank} is symmetric with respect to $D$ and $E$, this span has dimension $R$.
As such, applying the first part of this theorem to it yields an ${\operatorname{ECTFF}}(E,N,R)$ for $\mathbb{C}^\mathcal{E}$, as claimed.
\end{proof}
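In the setting of Example~\ref{ex.PDS}, the conclusions of Theorem~\ref{thm.ECTFF from PDS} can also be confirmed numerically.
The sketch below (ours, assuming \texttt{numpy}) forms the sixteen projections $\mathbf{P}_\gamma=\frac{R}{DE}\boldsymbol{\Phi}_\gamma^{}\boldsymbol{\Phi}_\gamma^*$ and checks both tightness and equichordality:
\begin{verbatim}
import numpy as np
from itertools import product

V = [np.array(v) for v in product([0, 1], repeat=4)]
B = lambda x, y: (x[0]*y[1] + x[1]*y[0] + x[2]*y[3] + x[3]*y[2]) % 2
Q = lambda x: (x[0]*x[1] + x[2]*x[3] + x[2] + x[3]) % 2
Gamma = np.array([[(-1) ** B(x, y) for y in V] for x in V])
Didx = [i for i, v in enumerate(V) if Q(v) == 0]

D, E, N, R = 6, 10, 16, 5
projs = []
for g in V:                        # U_gamma is spanned by the columns in g + E
    cols = [j for j, v in enumerate(V) if Q((v + g) % 2) == 1]
    Phi_g = Gamma[np.ix_(Didx, cols)]
    projs.append((R / (D * E)) * Phi_g @ Phi_g.T)      # projection P_gamma

assert np.allclose(sum(projs), (N * R / D) * np.eye(D))          # tight
cross = [np.trace(projs[a] @ projs[b])
         for a in range(N) for b in range(N) if a != b]
assert np.allclose(cross, R * (N * R - D) / (D * (N - 1)))       # equichordal
\end{verbatim}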
\begin{example}
\label{ex.ECTFF}
When applied to the paired difference sets $\mathcal{D}$ and $\mathcal{E}$ of \eqref{eq.PDS(16,6,10)} of Example~\ref{ex.PDS},
Theorem~\ref{thm.ECTFF from PDS} implies that the columns of $\boldsymbol{\Gamma}_{01}$ of~\eqref{eq.ETF(5,10)} form an ${\operatorname{ETF}}(5,10)$ for their span,
that the ${\operatorname{ETF}}(6,16)$ formed by the columns of $\boldsymbol{\Gamma}_0$ of~\eqref{eq.16 x 16 Gamma} contains sixteen unitarily equivalent copies of this ${\operatorname{ETF}}(5,10)$ (each indexed by a shift $\mathbf{x}+\mathcal{E}$ of $\mathcal{E}$),
and that the spans of these copies form an ${\operatorname{ECTFF}}(6,16,5)$ for $\mathbb{C}^\mathcal{D}$.
In fact, since the character table $\boldsymbol{\Gamma}$ of~\eqref{eq.16 x 16 Gamma} of $\mathbb{Z}_2^4$ is real-valued,
this ECTFF is real.
The existence of a real ${\operatorname{ECTFF}}(6,16,5)$ is not new:
one arises, for example, by taking the spatial complement of a real ${\operatorname{ECTFF}}(6,16,1)$ that equates to a real ${\operatorname{ETF}}(6,16)$.
Theorem~\ref{thm.ECTFF from PDS} further gives that $\mathcal{E}$ and $\mathcal{D}$ are paired,
and this yields a seemingly new ECTFF.
As seen in its proof, this stems from the fact that the character tables of a finite abelian group and its Pontryagin dual are transposes of each other,
and the fact that the columns of a matrix form a tight frame for their span if and only if their rows do as well.
In general, this means that the ${\operatorname{ECTFF}}(E,N,R)$ produced by Theorem~\ref{thm.ECTFF from PDS} arises as the row spaces of the various $[(g+\mathcal{D})\times\mathcal{E}]$-indexed submatrices of $\boldsymbol{\Gamma}$.
For this particular example,
we further have that $\mathcal{E}=\mathcal{D}^\mathrm{c}$ where $\mathbb{Z}_2^4$ has been identified with its Pontryagin dual in a way that makes $\boldsymbol{\Gamma}$ of~\eqref{eq.16 x 16 Gamma} symmetric.
As such, Theorem~\ref{thm.ECTFF from PDS} moreover gives here that the columns of the $(\mathcal{E}\times\mathcal{D})$-indexed submatrix of $\boldsymbol{\Gamma}$ (the transpose of~\eqref{eq.ETF(5,10)}) form an ${\operatorname{ETF}}(5,6)$ for their span,
that the ${\operatorname{ETF}}(10,16)$ formed by the columns of $\boldsymbol{\Gamma}_1$ of~\eqref{eq.16 x 16 Gamma} (the Naimark complement of that formed by the columns of $\boldsymbol{\Gamma}_0$) contains sixteen unitarily equivalent copies of this ${\operatorname{ETF}}(5,6)$ (each indexed by a shift $\mathbf{x}+\mathcal{D}$ of $\mathcal{D}$),
and that the spans of these copies form a real ${\operatorname{ECTFF}}(10,16,5)$ for $\mathbb{R}^\mathcal{E}$.
We know of no other construction of an ECTFF with these parameters (real or complex).
\end{example}
In general,
when paired difference sets $\mathcal{D}$ and $\mathcal{E}$ for $\mathcal{G}$ and $\hat{\mathcal{G}}$ exist, they are not unique.
For example,
Theorem~\ref{thm.ECTFF from PDS} gives that for any $\gamma\in\hat{\mathcal{G}}$, $\set{\boldsymbol{\varphi}_{\gamma\varepsilon}}_{\varepsilon\in\mathcal{E}}$ is an ETF (and so a tight frame) for its span, implying $\mathcal{D}$ and $\gamma\mathcal{E}$ are also paired.
Since Theorem~\ref{thm.ECTFF from PDS} also gives that ``pairing'' is a symmetric relation,
we further have that $g+\mathcal{D}$ and $\gamma\mathcal{E}$ are paired regardless of one's choice of $g\in\mathcal{G}$, $\gamma\in\hat{\mathcal{G}}$.
One can also apply an automorphism $\sigma$ of $\mathcal{G}$ to $\mathcal{D}$ provided one simultaneously applies the induced automorphism $\gamma\mapsto\gamma\circ\sigma^{-1}$ to $\hat{\mathcal{G}}$.
This is because the $[\sigma(\mathcal{D})\times(\mathcal{E}\circ\sigma^{-1})]$-indexed submatrix of $\boldsymbol{\Gamma}$ can be obtained by pre- and post-multiplying its $(\mathcal{D}\times\mathcal{E})$-indexed submatrix by permutation matrices,
implying its columns form a tight frame for their span.
The next result gives the most fundamental characterization of paired difference sets that we have found so far.
It is not purely combinatorial,
but rather involves sums of the entries of certain submatrices of the character table.
One can use it, for example, to obtain alternate proofs of the above facts.
\begin{theorem}
Let $\mathcal{D}$ and $\mathcal{E}$ be nonempty difference sets for a finite abelian group $\mathcal{G}$ and its Pontryagin dual $\hat{\mathcal{G}}$, respectively.
Then $\mathcal{D}$ and $\mathcal{E}$ are paired (Definition~\ref{def.paired difference sets}) if and only if
\begin{equation}
\label{eq.partial character table sum}
\sum\nolimits_{(d',\varepsilon')\in(\mathcal{D}-d)\times(\varepsilon^{-1}\mathcal{E})}\varepsilon'(d')
\end{equation}
has constant value over all $d\in\mathcal{D}$, $\varepsilon\in\mathcal{E}$.
\end{theorem}
\begin{proof}
Let $\boldsymbol{\Phi}$ be the $(\mathcal{D}\times\mathcal{E})$-indexed submatrix of the character table of $\mathcal{G}$, which is defined by having $\boldsymbol{\Phi}(d,\varepsilon)=\varepsilon(d)$ for all $d\in\mathcal{D}$, $\varepsilon\in\mathcal{E}$.
By definition, $\mathcal{D}$ and $\mathcal{E}$ are paired if and only if $\boldsymbol{\Phi}\bfPhi^*\boldsymbol{\Phi}=A\boldsymbol{\Phi}$ for some $A>0$.
Since $\boldsymbol{\Phi}\neq\boldsymbol{0}$,
this occurs if and only if $\boldsymbol{\Phi}\bfPhi^*\boldsymbol{\Phi}=A\boldsymbol{\Phi}$ for some $A\in\mathbb{C}$:
in the latter case, we have $(\boldsymbol{\Phi}^*\boldsymbol{\Phi})^2=A(\boldsymbol{\Phi}^*\boldsymbol{\Phi})$,
meaning $A$ is the nonzero eigenvalue of the nonzero positive semidefinite matrix $\boldsymbol{\Phi}^*\boldsymbol{\Phi}$.
This equates to having $A\in\mathbb{C}$ such that
\begin{equation*}
A\varepsilon(d)
=A\boldsymbol{\Phi}(d,\varepsilon)
=\sum_{d'\in\mathcal{D}}\sum_{\varepsilon'\in\mathcal{E}}\boldsymbol{\Phi}(d,\varepsilon')\boldsymbol{\Phi}^*(\varepsilon',d')\boldsymbol{\Phi}(d',\varepsilon)
=\sum_{d'\in\mathcal{D}}\sum_{\varepsilon'\in\mathcal{E}}\varepsilon'(d)\overline{\varepsilon'(d')}\varepsilon(d')
\end{equation*}
for all $d\in\mathcal{D}$, $\varepsilon\in\mathcal{E}$.
Simplifying,
this occurs if and only if for some $A\in\mathbb{C}$,
\begin{equation*}
\overline{A}
=\sum_{d'\in\mathcal{D}}\sum_{\varepsilon'\in\mathcal{E}}\overline{\varepsilon'(d)}\varepsilon'(d')\overline{\varepsilon(d')}\varepsilon(d)
=\sum_{d'\in\mathcal{D}}\sum_{\varepsilon'\in\mathcal{E}}(\varepsilon'\varepsilon^{-1})(d'-d)
\end{equation*}
for all $d\in\mathcal{D}$, $\varepsilon\in\mathcal{E}$.
Making a change of variables gives that this equates to the value of~\eqref{eq.partial character table sum} being constant over all $d\in\mathcal{D}$, $\varepsilon\in\mathcal{E}$.
\end{proof}
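In the setting of Example~\ref{ex.PDS}, for instance, the constancy of~\eqref{eq.partial character table sum} can be confirmed directly:
in additive notation the sum becomes $\sum_{d'\in\mathcal{D}}\sum_{\varepsilon'\in\mathcal{E}}(-1)^{\mathrm{B}(d'+d,\,\varepsilon'+\varepsilon)}$, and the following sketch (ours, assuming \texttt{numpy}) checks that its value is $A=12$ for every $d\in\mathcal{D}$ and $\varepsilon\in\mathcal{E}$:
\begin{verbatim}
import numpy as np
from itertools import product

V = [np.array(v) for v in product([0, 1], repeat=4)]
B = lambda x, y: (x[0]*y[1] + x[1]*y[0] + x[2]*y[3] + x[3]*y[2]) % 2
Q = lambda x: (x[0]*x[1] + x[2]*x[3] + x[2] + x[3]) % 2
D = [v for v in V if Q(v) == 0]
E = [v for v in V if Q(v) == 1]

def partial_sum(d, e):             # additive version of the double sum
    return sum((-1) ** B((dp + d) % 2, (ep + e) % 2)
               for dp in D for ep in E)

assert {partial_sum(d, e) for d in D for e in E} == {12}
\end{verbatim}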
Some paired difference sets $\mathcal{D}$ and $\mathcal{E}$ are \textit{trivial} in the sense that either $\mathcal{D}$ or $\mathcal{E}$ is either a singleton set or is its entire group.
In such cases, Theorem~\ref{thm.ECTFF from PDS} still applies, but the resulting ETFs and ECTFFs are not new.
To explain,
any singleton set $\mathcal{D}$ is a difference set for $\mathcal{G}$ and it pairs with any nonempty difference set $\mathcal{E}$ for $\hat{\mathcal{G}}$ since $\set{\boldsymbol{\varphi}_\varepsilon}_{\varepsilon\in\mathcal{E}}$ equates to a sequence of $E$ unimodular scalars in the one-dimensional space $\mathbb{C}^\mathcal{D}\cong\mathbb{C}$.
Also, \smash{$\mathcal{E}=\hat{\mathcal{G}}$} is a difference set for $\hat{\mathcal{G}}$ that pairs with any nonempty difference set $\mathcal{D}$ for $\mathcal{G}$ since \smash{$\set{\boldsymbol{\varphi}_\varepsilon}_{\varepsilon\in\mathcal{E}}$} is the harmonic ETF arising from $\mathcal{D}$.
In either of these two cases the resulting ${\operatorname{ECTFF}}(D,N,D)$ \smash{$\set{\mathcal{U}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$} consists of $N$ copies of the entire space $\mathbb{C}^\mathcal{D}$.
Meanwhile, any singleton set $\mathcal{E}$ is a difference set for \smash{$\hat{\mathcal{G}}$}
and it pairs with any nonempty difference set $\mathcal{D}$ for $\mathcal{G}$ since the single vector $\set{\boldsymbol{\varphi}_{\varepsilon}}_{\varepsilon\in\mathcal{E}}$ is a tight frame for its one-dimensional span.
In this case, the ${\operatorname{ECTFF}}(D,N,1)$ \smash{$\set{\mathcal{U}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$} consists of the $N$ one-dimensional subspaces of $\mathbb{C}^\mathcal{D}$ that are individually spanned by the members of the underlying harmonic ${\operatorname{ETF}}(D,N)$ \smash{$\set{\boldsymbol{\varphi}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$} for $\mathbb{C}^\mathcal{D}$.
The remaining ``trivial" case is the most interesting in that it yields a nontrivial result:
$\mathcal{D}=\mathcal{G}$ is a difference set for $\mathcal{G}$ that pairs with any nonempty difference set $\mathcal{E}$ for $\hat{\mathcal{G}}$ since in this case $\set{\boldsymbol{\varphi}_\varepsilon}_{\varepsilon\in\mathcal{E}}$ is a sequence of equal-norm orthogonal vectors (and so is a tight frame for its span).
In this case, for any $\gamma\in\hat{\mathcal{G}}$,
the corresponding subspace $\mathcal{U}_\gamma=\operatorname{span}\set{\boldsymbol{\varphi}_{\gamma\varepsilon}}_{\varepsilon\in\mathcal{E}}$ of $\mathbb{C}^\mathcal{D}=\mathbb{C}^\mathcal{G}$ is the span of the characters of $\mathcal{G}$ that happen to lie in $\gamma\mathcal{E}:=\set{\gamma\varepsilon: \varepsilon\in\mathcal{E}}$.
Taking the DFT of $\mathcal{U}_\gamma$ thus yields the subspace of \smash{$\mathbb{C}^{\hat{\mathcal{G}}}$} that is spanned by the members of the standard basis that are indexed by $\gamma\mathcal{E}$,
namely
$\boldsymbol{\Gamma}^*\mathcal{U}_\gamma
=\operatorname{span}\set{\boldsymbol{\Gamma}^*\boldsymbol{\varphi}_{\gamma\varepsilon}}_{\varepsilon\in\mathcal{E}}
=\operatorname{span}\set{\boldsymbol{\delta}_{\gamma\varepsilon}}_{\varepsilon\in\mathcal{E}}
=\operatorname{span}\set{\boldsymbol{\delta}_{\gamma'}}_{\gamma'\in\gamma\mathcal{E}}$.
Since the DFT is a scalar multiple of a unitary operator,
$\set{\mathcal{U}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$ is an ECTFF for $\mathbb{C}^{\mathcal{D}}=\mathbb{C}^\mathcal{G}$ if and only if $\set{\boldsymbol{\Gamma}^*\mathcal{U}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$ is an ECTFF for \smash{$\mathbb{C}^{\hat{\mathcal{G}}}$}.
While Theorem~\ref{thm.ECTFF from PDS} gives the former,
the latter is known:
Zauner~\cite{Zauner99} showed that if $(\mathcal{V},\mathcal{B})$ is any BIBD,
then the subspaces $\set{\mathcal{X}_v}_{v\in\mathcal{V}}$ of $\mathbb{R}^\mathcal{B}$,
$\mathcal{X}_v:=\operatorname{span}\set{\boldsymbol{\delta}_b: v\in b\in\mathcal{B}}$ form an ECTFF for $\mathbb{R}^\mathcal{B}$.
Here, since $\mathcal{E}$ is a difference set of $\hat{\mathcal{G}}$,
$\mathcal{E}^{-1}:=\set{\varepsilon^{-1}: \varepsilon\in\mathcal{E}}$ is as well,
implying $\mathcal{B}=\set{\gamma\mathcal{E}^{-1}}_{\gamma\in\hat{\mathcal{G}}}$ is the block set for a BIBD on \smash{$\mathcal{V}=\hat{\mathcal{G}}$}.
In this case, the $\gamma$th subspace of Zauner's construction is
\smash{$\mathcal{X}_\gamma
=\operatorname{span}\set{\boldsymbol{\delta}_{\gamma'}: \gamma\in\gamma'\mathcal{E}^{-1}}
=\operatorname{span}\set{\boldsymbol{\delta}_{\gamma'}}_{\gamma'\in\gamma\mathcal{E}}
=\boldsymbol{\Gamma}^*\mathcal{U}_\gamma$}.
When $\mathcal{D}$ and $\mathcal{E}$ are nontrivial paired difference sets the ${\operatorname{ECTFF}}(D,N,R)$ $\set{\mathcal{U}_\gamma}_{\gamma\in\hat{\mathcal{G}}}$ constructed in Theorem~\ref{thm.ECTFF from PDS} consists of $N$ proper and distinct subspaces of \smash{$\mathbb{C}^\mathcal{D}$}.
Indeed,
having $1<D$ and $E<N$ implies
\smash{$R
=\tfrac{DE(N-1)}{(D+E-1)N-DE}
<D$} and so
\smash{$[\operatorname{dist}(\mathcal{U}_{\gamma_1},\mathcal{U}_{\gamma_2})]^2
=\tfrac{R(D-R)}{D}\tfrac{N}{N-1}>0$} for any $\gamma_1\neq\gamma_2$.
Moreover, in this nontrivial case, these subspaces are not equi-isoclinic since they have nontrivial intersection:
since $\mathcal{E}$ is a difference set for $\hat{\mathcal{G}}$,
\smash{$\#(\gamma_1\mathcal{E}\cap\gamma_2\mathcal{E})
=\#[\mathcal{E}\cap(\gamma_1^{-1}\gamma_2^{})\mathcal{E}]
=\frac{E(E-1)}{N-1}>0$} for any $\gamma_1\neq\gamma_2$,
implying
$\mathcal{U}_{\gamma_1}\cap\mathcal{U}_{\gamma_2}
=\operatorname{span}\set{\boldsymbol{\varphi}_\gamma}_{\gamma\in\gamma_1\mathcal{E}}
\cap\operatorname{span}\set{\boldsymbol{\varphi}_\gamma}_{\gamma\in\gamma_2\mathcal{E}}
\supseteq\operatorname{span}\set{\boldsymbol{\varphi}_\gamma}_{\gamma\in\gamma_1\mathcal{E}\cap\gamma_2\mathcal{E}}$ has positive dimension.
As such, some but not all of the principal angles between $\mathcal{U}_{\gamma_1}$ and $\mathcal{U}_{\gamma_2}$ are zero.
We further note that if $\mathcal{D}$ and $\mathcal{E}$ are nontrivial paired difference sets then so are $\mathcal{E}$ and $\mathcal{D}$, implying the $N$ subspaces of the resulting ${\operatorname{ECTFF}}(E,N,R)$ are also proper, distinct, and not equi-isoclinic.
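In the setting of Example~\ref{ex.ECTFF} this is completely explicit:
a dimension count gives $\dim(\mathcal{U}_{\gamma_1}\cap\mathcal{U}_{\gamma_2})\geq 5+5-6=4$, and since $\operatorname{Tr}(\mathbf{P}_{\gamma_1}\mathbf{P}_{\gamma_2})=\frac{R(NR-D)}{D(N-1)}=\frac{37}{9}$, exactly four of the five principal angles are zero while the fifth has cosine $\frac13$.
The following sketch (ours, assuming \texttt{numpy}) confirms this:
\begin{verbatim}
import numpy as np
from itertools import product

V = [np.array(v) for v in product([0, 1], repeat=4)]
B = lambda x, y: (x[0]*y[1] + x[1]*y[0] + x[2]*y[3] + x[3]*y[2]) % 2
Q = lambda x: (x[0]*x[1] + x[2]*x[3] + x[2] + x[3]) % 2
Gamma = np.array([[(-1.) ** B(x, y) for y in V] for x in V])
Didx = [i for i, v in enumerate(V) if Q(v) == 0]

def onb(g):                        # orthonormal basis for U_gamma (rank 5)
    cols = [j for j, v in enumerate(V) if Q((v + g) % 2) == 1]
    u = np.linalg.svd(Gamma[np.ix_(Didx, cols)])[0]
    return u[:, :5]

# cosines of the principal angles between two distinct subspaces:
sv = np.linalg.svd(onb(V[0]).T @ onb(V[1]), compute_uv=False)
assert np.allclose(sv, [1, 1, 1, 1, 1/3])
\end{verbatim}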
We conclude this section with a necessary condition on the existence of paired difference sets, and a characterization of when the complements of paired difference sets are themselves paired:
\begin{theorem}
\label{thm.necessary}
Let $\mathcal{D}$ and $\mathcal{E}$ be paired difference sets (Definition~\ref{def.paired difference sets}) for a finite abelian group $\mathcal{G}$ and its Pontryagin dual $\hat{\mathcal{G}}$, respectively, where $\mathcal{D}\neq\mathcal{G}$ and $\mathcal{E}\neq\hat{\mathcal{G}}$.
Then
\begin{equation}
\label{eq.necessary}
D+E\leq N
\ \text{where}\ D:=\#(\mathcal{D}),\ E:=\#(\mathcal{E}),\ N:=\#(\mathcal{G})=\#(\hat{\mathcal{G}}).
\end{equation}
Moreover, $\mathcal{D}^\mathrm{c}$ and $\mathcal{E}^\mathrm{c}$ are paired difference sets if and only if $D+E=N$.
\end{theorem}
\begin{proof}
Here,
for any nonempty subsets $\mathcal{X}$ and $\mathcal{Y}$ of $\mathcal{G}$ and $\hat{\mathcal{G}}$, respectively,
we denote the corresponding submatrix of the $\mathcal{G}\times\hat{\mathcal{G}}$ character table $\boldsymbol{\Gamma}$ of $\mathcal{G}$ as $\boldsymbol{\Gamma}_{\mathcal{X}\times\mathcal{Y}}$.
Since $\mathcal{D}$ and $\mathcal{E}$ are paired difference sets,
$\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}}$ is the synthesis operator of an ${\operatorname{ETF}}(R,E)$ for its span where $R$ is given by~\eqref{eq.paired diff set rank}.
In particular, the spectrum of $\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}}^{}\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}}^{*}$ consists of a tight frame constant $A>0$ and $0$ with multiplicities $R$ and $D-R$, respectively,
while the spectrum of $\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}}^{*}\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}}^{}$ consists of $A$ and $0$ with multiplicities $R$ and $E-R$, respectively.
Here, since every entry of $\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}}$ has unit modulus,
$\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}}^{*}\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}}$ is an $E\times E$ matrix whose every diagonal entry has value $D$,
implying
$DE=\operatorname{Tr}(\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}}^{*}\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}}^{})=RA$, and so $A=\frac{DE}{R}$.
We now claim that $A\neq N$.
Indeed, otherwise~\eqref{eq.paired diff set rank} gives
\smash{$N
=A
=\tfrac{DE}{R}
=\tfrac{(D+E-1)N-DE}{N-1}$},
which implies the following contradiction of the assumption that $\mathcal{D}\neq\mathcal{G}$ and \smash{$\mathcal{E}\neq\hat{\mathcal{G}}$}:
\begin{equation*}
0=N(N-1)-[(D+E-1)N-DE]=N^2-(D+E)N+DE=(N-D)(N-E)>0.
\end{equation*}
Next note that since $\boldsymbol{\Gamma}$ is a (possibly complex) Hadamard matrix,
\begin{equation*}
N\mathbf{I}_{\mathcal{D}\times\mathcal{D}}
=\boldsymbol{\Gamma}_{\mathcal{D}\times\hat{\mathcal{G}}}^{}\boldsymbol{\Gamma}_{\mathcal{D}\times\hat{\mathcal{G}}}^{*}
=\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}}^{}\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}}^{*}
+\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{}\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{*},
\end{equation*}
and so the spectrum of $\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{}\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{*}
=N\mathbf{I}_{\mathcal{D}\times\mathcal{D}}-\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}}^{}\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}}^{*}$ consists of $N-A\neq0$ and $N$ with multiplicities $R$ and $D-R$, respectively.
In particular, $\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{}\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{*}$ is invertible, implying
\begin{equation*}
D
=\operatorname{rank}(\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{}\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{*})
=\operatorname{rank}(\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{})
\leq\#(\mathcal{E}^\mathrm{c})
=N-E,
\end{equation*}
namely~\eqref{eq.necessary}.
For the final conclusion, note that since $D\leq N-E$,
the spectrum of
$\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{*}\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{}$ is obtained by padding that of $\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{}\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{*}$ with $N-D-E$ zeros,
and so consists of $N-A\neq0$, $N$ and $0$ with multiplicities $R$, $D-R$ and $N-D-E$, respectively.
Since
\begin{equation*}
N\mathbf{I}_{\mathcal{E}^\mathrm{c}\times\mathcal{E}^\mathrm{c}}
=\boldsymbol{\Gamma}_{\mathcal{G}\times\mathcal{E}^\mathrm{c}}^{*}\boldsymbol{\Gamma}_{\mathcal{G}\times\mathcal{E}^\mathrm{c}}^{}
=\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{*}\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{}
+\boldsymbol{\Gamma}_{\mathcal{D}^\mathrm{c}\times\mathcal{E}^\mathrm{c}}^{*}\boldsymbol{\Gamma}_{\mathcal{D}^\mathrm{c}\times\mathcal{E}^\mathrm{c}}^{},
\end{equation*}
this in turn implies that the spectrum of
$\boldsymbol{\Gamma}_{\mathcal{D}^\mathrm{c}\times\mathcal{E}^\mathrm{c}}^{*}\boldsymbol{\Gamma}_{\mathcal{D}^\mathrm{c}\times\mathcal{E}^\mathrm{c}}^{}
=N\mathbf{I}_{\mathcal{E}^\mathrm{c}\times\mathcal{E}^\mathrm{c}}
-\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{*}\boldsymbol{\Gamma}_{\mathcal{D}\times\mathcal{E}^\mathrm{c}}^{}$ consists of $A$, $0$ and $N$ with multiplicities $R$, $D-R$ and $N-D-E$, respectively.
In particular, since $A\neq N$,
we have that $D+E=N$ if and only if
$\boldsymbol{\Gamma}_{\mathcal{D}^\mathrm{c}\times\mathcal{E}^\mathrm{c}}^{*}\boldsymbol{\Gamma}_{\mathcal{D}^\mathrm{c}\times\mathcal{E}^\mathrm{c}}^{}$ is a scalar multiple of a projection,
namely if and only if $\mathcal{D}^\mathrm{c}$ and $\mathcal{E}^\mathrm{c}$ are paired difference sets.
\end{proof}
When $\mathcal{D}$ and $\mathcal{E}$ are paired difference sets that achieve equality in~\eqref{eq.necessary}, the cardinalities of $\mathcal{D}^\mathrm{c}$ and $\mathcal{E}^\mathrm{c}$ equal those of $\mathcal{E}$ and $\mathcal{D}$, respectively.
As such, even in this case, the parameters of the ECTFF that arises via Theorem~\ref{thm.ECTFF from PDS} from $\mathcal{D}^\mathrm{c}$ and $\mathcal{E}^\mathrm{c}$ equal those of the ECTFF that arises via Theorem~\ref{thm.ECTFF from PDS} from $\mathcal{E}$ and $\mathcal{D}$ directly.
We further note that the inequality in~\eqref{eq.necessary} is sometimes strict, including for example trivial paired difference sets with $D=1<N$ and $E<N-1$.
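In Example~\ref{ex.PDS} we have $D+E=6+10=16=N$, and so Theorem~\ref{thm.necessary} gives that $\mathcal{D}^\mathrm{c}$ and $\mathcal{E}^\mathrm{c}$ are also paired;
the following sketch (ours, assuming \texttt{numpy}) confirms this directly:
\begin{verbatim}
import numpy as np
from itertools import product

V = [np.array(v) for v in product([0, 1], repeat=4)]
B = lambda x, y: (x[0]*y[1] + x[1]*y[0] + x[2]*y[3] + x[3]*y[2]) % 2
Q = lambda x: (x[0]*x[1] + x[2]*x[3] + x[2] + x[3]) % 2
Gamma = np.array([[(-1.) ** B(x, y) for y in V] for x in V])
Dc = [i for i, v in enumerate(V) if Q(v) == 1]   # D^c is E here
Ec = [i for i, v in enumerate(V) if Q(v) == 0]   # E^c is D here

G = Gamma[np.ix_(Dc, Ec)]          # the (D^c x E^c)-indexed submatrix
assert np.allclose(G @ G.T @ G, 12 * G)          # D^c and E^c are paired
\end{verbatim}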
\section{Paired difference sets from quadratic forms}
In this section we construct an infinite family of nontrivial paired difference sets.
Applying Theorem~\ref{thm.ECTFF from PDS} to them produces two infinite families of ECTFFs.
As foreshadowed by Example~\ref{ex.PDS},
the construction involves quadratic forms on finite vector spaces over the binary field $\mathbb{F}_2$.
Quadratic forms are a classical subject,
with those over fields of characteristic two being notably different than their cousins over other fields~\cite{Grove02}.
From the perspective of finite frame theory,
quadratic forms over $\mathbb{F}_2$ are already notable,
having been used to construct the maximum number of \textit{mutually unbiased bases} in real spaces whose dimension is a power of $4$~\cite{CameronS73,CalderbankCKS97}.
We review only the small part of the classical literature~\cite{Taylor92,Grove02} that we use to prove our results.
Let $\mathcal{V}$ be a finite-dimensional vector space over $\mathbb{F}_2$,
which equates to a finite elementary abelian $2$-group.
A \textit{bilinear form} on $\mathcal{V}$ is any function $\mathrm{B}:\mathcal{V}\times\mathcal{V}\rightarrow\mathbb{F}_2$ that is linear in either argument while the other is held fixed.
Such a form is \textit{nondegenerate} if having $\mathrm{B}(v_1,v_2)=0$ for all $v_1\in\mathcal{V}$ implies that $v_2=0$.
Any choice of nondegenerate bilinear form $\mathrm{B}$ on $\mathcal{V}$ induces an isomorphism from (the additive group of) $\mathcal{V}$ to its Pontryagin dual, namely the mapping $v_2\mapsto(v_1\mapsto(-1)^{\mathrm{B}(v_1,v_2)})$.
(This mapping is a well-defined homomorphism since $\mathrm{B}$ is bilinear,
and is injective since $\mathrm{B}$ is nondegenerate.)
Under this identification, the character table $\boldsymbol{\Gamma}$ of $\mathcal{V}$ becomes the $(\mathcal{V}\times\mathcal{V})$-indexed real Hadamard matrix with entries $\boldsymbol{\Gamma}(v_1,v_2)=(-1)^{\mathrm{B}(v_1,v_2)}$.
A bilinear form on $\mathcal{V}$ is \textit{alternating} if $\mathrm{B}(v,v)=0$ for all $v\in\mathcal{V}$,
and is \textit{symmetric} if $\mathrm{B}(v_1,v_2)=\mathrm{B}(v_2,v_1)$ for all $v_1,v_2\in\mathcal{V}$.
Here, every alternating form is symmetric since
$0=\mathrm{B}(v_1+v_2,v_1+v_2)=0+\mathrm{B}(v_1,v_2)+\mathrm{B}(v_2,v_1)+0$, and so $\mathrm{B}(v_1,v_2)=\mathrm{B}(v_2,v_1)$ over $\mathbb{F}_2$.
A \textit{symplectic form} on $\mathcal{V}$ is a nondegenerate alternating bilinear form on $\mathcal{V}$.
For such forms,
$\boldsymbol{\Gamma}(v_1,v_2)=(-1)^{\mathrm{B}(v_1,v_2)}$ defines a real symmetric Hadamard matrix whose diagonal entries have value $1$.
In particular, letting $V=\#(\mathcal{V})$ we have $\boldsymbol{\Gamma}^2=\boldsymbol{\Gamma}^*\boldsymbol{\Gamma}=V\mathbf{I}$ and $\operatorname{Tr}(\boldsymbol{\Gamma})=V$,
implying that $\boldsymbol{\Gamma}$ has eigenvalues $\sqrt{V}$ and $-\sqrt{V}$ with multiplicities $\frac12(V+\sqrt{V})$ and $\frac12(V-\sqrt{V})$, respectively.
In particular, a symplectic form on $\mathcal{V}$ can only exist if $\sqrt{V}$ is an integer,
namely only if the dimension of $\mathcal{V}$ over $\mathbb{F}_2$ is even.
In such cases, $\sqrt{V}\mathbf{I}+\boldsymbol{\Gamma}$ is the Gram matrix of a real ${\operatorname{ETF}}(\frac12(V+\sqrt{V}),V)$.
Such \textit{symplectic ETFs} are well known,
with various special cases and generalizations of them and their Naimark complements appearing in the literature in numerous guises,
including Theorem~5.4 of~\cite{BodmannE10},
Theorem~4.1 of~\cite{CoutinkhoGSZ16} (when applied to the \textit{Thas--Somma} construction of \textit{distance-regular antipodal covers of complete graphs} (DRACKNs)),
Theorem~5.1 of~\cite{FickusJMPW19},
Theorem~6.4 of~\cite{IversonJM16},
and Theorem~4.11 of~\cite{BodmannK20};
see Example 6.10 of~\cite{IversonM20} for more discussion.
\begin{example}
\label{ex.symplectic form}
For any positive integer $M$ and $\mathbf{x}=(x_1,\dotsc,x_{2M}),\,\mathbf{y}=(y_1,\dotsc,y_{2M})\in\mathbb{F}_2^{2M}$ let
\begin{equation}
\label{eq.canonical symplectic}
\mathrm{B}(\mathbf{x},\mathbf{y})
:=\sum_{m=1}^M (x_{2m-1}y_{2m}+x_{2m}y_{2m-1})
=(x_1y_2+x_2y_1)+\dotsb+(x_{2M-1}y_{2M}+x_{2M}y_{2M-1}).
\end{equation}
That is, $\mathrm{B}(\mathbf{x},\mathbf{y})=\mathbf{x}^\mathrm{T}\mathbf{B}\mathbf{y}$ where $\mathbf{B}$ is the $2M\times 2M$ block diagonal matrix over $\mathbb{F}_2$ whose $M$ diagonal blocks are all $[\begin{smallmatrix}0&1\\1&0\end{smallmatrix}]$.
This is clearly a bilinear form on $\mathbb{F}_2^{2M}$.
It is moreover alternating
(since every summand of~\eqref{eq.canonical symplectic} is zero when $\mathbf{x}=\mathbf{y}$)
and nondegenerate (since if $\mathrm{B}(\mathbf{x},\mathbf{y})=0$ for all members $\mathbf{x}$ of the standard basis then $\mathbf{B}\mathbf{y}=\boldsymbol{0}$, and so $\mathbf{y}=\boldsymbol{0}$).
It is thus a symplectic form on $\mathbb{F}_2^{2M}$.
Provided we order the members of $\mathbb{F}_2^{2M}$ lexicographically,
the corresponding character table $\boldsymbol{\Gamma}$ can be formed by tensoring together $M$ copies of
\begin{equation*}
\left[\begin{smallmatrix}
+&+&+&+\\
+&+&-&-\\
+&-&+&-\\
+&-&-&+
\end{smallmatrix}\right].
\end{equation*}
Taking $M=2$ for instance yields the symplectic form $\mathrm{B}$ of Example~\ref{ex.PDS} and the resulting character table $\boldsymbol{\Gamma}$ of~\eqref{eq.16 x 16 Gamma}.
Though not necessary for our work below,
it is known that up to isomorphism,
this is the only symplectic form on a vector space $\mathcal{V}$ over $\mathbb{F}_2$ of dimension $2M$~\cite{Grove02}.
The binary matrices that preserve the form given in~\eqref{eq.canonical symplectic} form the classical \textit{symplectic group} $\mathrm{Sp}(2M,2)$.
\end{example}
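The following sketch (ours, assuming \texttt{numpy}) constructs this character table for $M=3$ by iterated Kronecker products and confirms that it is a symmetric Hadamard matrix with unit diagonal, that $\boldsymbol{\Gamma}^2=V\mathbf{I}$, and that its eigenvalue $\sqrt{V}$ has multiplicity $\frac12(V+\sqrt{V})$:
\begin{verbatim}
import numpy as np

K = np.array([[ 1,  1,  1,  1],
              [ 1,  1, -1, -1],
              [ 1, -1,  1, -1],
              [ 1, -1, -1,  1]])
M = 3
Gamma = K
for _ in range(M - 1):
    Gamma = np.kron(Gamma, K)      # lexicographic order on F_2^(2M)

V = 4 ** M
assert (Gamma == Gamma.T).all() and (np.diag(Gamma) == 1).all()
assert np.allclose(Gamma @ Gamma, V * np.eye(V))
evals = np.linalg.eigvalsh(Gamma.astype(float))
assert np.isclose(evals, np.sqrt(V)).sum() == (V + int(np.sqrt(V))) // 2
\end{verbatim}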
A \textit{quadratic form} on $\mathcal{V}$ is any function $\mathrm{Q}:\mathcal{V}\rightarrow\mathbb{F}_2$ such that
\begin{equation}
\label{eq.polarization}
\mathrm{B}(v_1,v_2):=\mathrm{Q}(v_1+v_2)+\mathrm{Q}(v_1)+\mathrm{Q}(v_2)
\end{equation}
defines a bilinear form $\mathrm{B}$ on $\mathcal{V}$.
Any such bilinear form $\mathrm{B}$ is necessarily alternating since
$\mathrm{B}(v,v)=\mathrm{Q}(v+v)+\mathrm{Q}(v)+\mathrm{Q}(v)=\mathrm{Q}(0)$ for any $v\in\mathcal{V}$ where,
as a special case of this, $\mathrm{Q}(0)=\mathrm{B}(0,0)=0$.
A point $v\in\mathcal{V}$ is \textit{singular} with respect to a given quadratic form $\mathrm{Q}$ if $\mathrm{Q}(v)=0$ and is otherwise \textit{nonsingular}.
The \textit{quadric} of $\mathrm{Q}$ is the set $\mathcal{D}=\set{v\in\mathcal{V}: \mathrm{Q}(v)=0}$ of its singular vectors, which necessarily includes $v=0$.
Remarkably, the quadratic form $\mathrm{Q}$ that gives rise to a particular bilinear form $\mathrm{B}$ is not unique in general.
For example,
for any $v_0\in\mathcal{V}$, the function $\tilde{\mathrm{Q}}:\mathcal{V}\rightarrow\mathbb{F}_2$, $\tilde{\mathrm{Q}}(v):=\mathrm{Q}(v+v_0)+\mathrm{Q}(v_0)=\mathrm{B}(v,v_0)+\mathrm{Q}(v)$ also satisfies~\eqref{eq.polarization},
since for any $v_1,v_2\in\mathcal{V}$,
\begin{align*}
\tilde{\mathrm{Q}}(v_1+v_2)+\tilde{\mathrm{Q}}(v_1)+\tilde{\mathrm{Q}}(v_2)
&=\mathrm{B}(v_1+v_2,v_0)+\mathrm{Q}(v_1+v_2)+\mathrm{B}(v_1,v_0)+\mathrm{Q}(v_1)+\mathrm{B}(v_2,v_0)+\mathrm{Q}(v_2)\\
&=\mathrm{Q}(v_1+v_2)+\mathrm{Q}(v_1)+\mathrm{Q}(v_2).
\end{align*}
The quadric $\tilde{\mathcal{D}}$ of $\tilde{\mathrm{Q}}$ is the corresponding shift of either the quadric $\mathcal{D}$ of $\mathrm{Q}$ or its complement,
depending on whether or not $v_0$ is singular:
\begin{equation*}
\tilde{\mathcal{D}}
=\set{v\in\mathcal{V}: \tilde{\mathrm{Q}}(v)=0}
=\set{v\in\mathcal{V}: \mathrm{Q}(v+v_0)=\mathrm{Q}(v_0)}
=\left\{\begin{array}{ll}
v_0+\mathcal{D}, &\ v_0\in\mathcal{D},\\
v_0+\mathcal{D}^\mathrm{c},&\ v_0\in\mathcal{D}^\mathrm{c}.
\end{array}\right.
\end{equation*}
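These facts are easily checked by machine.
For the form $\mathrm{Q}$ of Example~\ref{ex.PDS}, the sketch below (ours, assuming \texttt{numpy}) verifies, for every choice of $v_0$, that $\tilde{\mathrm{Q}}$ polarizes to the same form $\mathrm{B}$ and that its quadric is the predicted shift:
\begin{verbatim}
import numpy as np
from itertools import product

V = [np.array(v) for v in product([0, 1], repeat=4)]
B = lambda x, y: (x[0]*y[1] + x[1]*y[0] + x[2]*y[3] + x[3]*y[2]) % 2
Q = lambda x: (x[0]*x[1] + x[2]*x[3] + x[2] + x[3]) % 2

for v0 in V:
    Qt = lambda v: (Q((v + v0) % 2) + Q(v0)) % 2
    # Qt polarizes to the same symplectic form B:
    assert all((Qt((x + y) % 2) + Qt(x) + Qt(y)) % 2 == B(x, y)
               for x in V for y in V)
    # and its quadric is the predicted shift of D or of D^c:
    quad = {tuple(v) for v in V if Qt(v) == 0}
    base = {tuple(v) for v in V if Q(v) == Q(v0)}
    assert quad == {tuple((v0 + np.array(b)) % 2) for b in base}
\end{verbatim}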
We focus on quadratic forms $\mathrm{Q}:\mathcal{V}\rightarrow\mathbb{F}_2$ that are \textit{nondefective}, that is, for which the bilinear form~\eqref{eq.polarization} is nondegenerate, and so symplectic.
Here, $V=\#(\mathcal{V})=2^{2M}$ for some positive integer $M$.
Such forms are classical, and we now summarize and explain the parts of their folklore that we shall make use of:
\begin{lemma}
\label{lem.chirp facts}
Let $\mathcal{V}$ be a vector space over $\mathbb{F}_2$ of cardinality $2^{2M}$.
Let $\mathrm{Q}:\mathcal{V}\rightarrow\mathbb{F}_2$ be a nondefective quadratic form with associated symplectic form~\eqref{eq.polarization} and sign
\begin{equation}
\label{eq.sign of quadratic form}
{\operatorname{sgn}}(\mathrm{Q}):=\frac1{2^M}\sum_{v\in\mathcal{V}}(-1)^{\mathrm{Q}(v)}\in\set{1,-1}.
\end{equation}
(If the opposite sign is desired, replace $\mathrm{Q}$ with $\tilde{\mathrm{Q}}(v):=\mathrm{Q}(v+v_0)+1$ where $\mathrm{Q}(v_0)=1$.)
Then the $(\mathcal{V}\times\mathcal{V})$-indexed matrices $\boldsymbol{\Gamma}$, $\mathbf{C}$ and $\boldsymbol{\Delta}$ defined by
\begin{equation*}
\boldsymbol{\Gamma}(v_1,v_2)=(-1)^{\mathrm{B}(v_1,v_2)},
\quad
\mathbf{C}(v_1,v_2)=(-1)^{\mathrm{Q}(v_1+v_2)},
\quad
\boldsymbol{\Delta}(v_1,v_2)
=\left\{\begin{array}{cl}
(-1)^{\mathrm{Q}(v_1)},&\ v_1=v_2,\\
0,&\ v_1\neq v_2,
\end{array}\right.
\end{equation*}
are real-symmetric, and satisfy
\begin{equation}
\label{eq.chirp Fourier}
\boldsymbol{\Gamma}^2=2^{2M}\mathbf{I}=\mathbf{C}^2,
\quad
\boldsymbol{\Delta}^2=\mathbf{I},
\quad
\boldsymbol{\Gamma}=\boldsymbol{\Delta}\mathbf{C}\boldsymbol{\Delta},
\quad
(\boldsymbol{\Gamma}\boldsymbol{\Delta})^3=2^{3M}{\operatorname{sgn}}(\mathrm{Q})\mathbf{I}.
\end{equation}
Moreover, $\mathcal{D}=\set{v\in\mathcal{V}: \mathrm{Q}(v)=0}$ is a difference set for $\mathcal{V}$ of cardinality $2^{M-1}[2^M+{\operatorname{sgn}}(\mathrm{Q})]$ whose harmonic ETF
$\set{\boldsymbol{\varphi}_v}_{v\in\mathcal{V}}$,
\smash{$\boldsymbol{\varphi}_v(d):=(-1)^{\mathrm{B}(d,v)}$} has Gram matrix $2^{M-1}[2^M\mathbf{I}+{\operatorname{sgn}}(\mathrm{Q})\mathbf{C}]$.
\end{lemma}
\begin{proof}
Consider the \textit{(quadratic) chirp} function $\mathbf{c}\in\mathbb{R}^{\mathcal{V}}$, $\mathbf{c}(v):=(-1)^{\mathrm{Q}(v)}$.
This is an eigenvector of the DFT matrix $\boldsymbol{\Gamma}^*=\boldsymbol{\Gamma}$: for any $v_1\in\mathcal{V}$,
using~\eqref{eq.polarization} and making the substitution $v=v_1+v_2$ gives
\begin{equation}
\label{eq.chirp is eigenvector}
(\boldsymbol{\Gamma}\mathbf{c})(v_1)
=\sum_{v_2\in\mathcal{V}}(-1)^{\mathrm{B}(v_1,v_2)}(-1)^{\mathrm{Q}(v_2)}
=\sum_{v_2\in\mathcal{V}}(-1)^{\mathrm{Q}(v_1+v_2)+\mathrm{Q}(v_1)}
=\biggparen{\,\sum_{v\in\mathcal{V}}(-1)^{\mathrm{Q}(v)}}\mathbf{c}(v_1).
\end{equation}
Hence the ``Gauss sum" $\sum_{v\in\mathcal{V}}(-1)^{\mathrm{Q}(v)}$ is an eigenvalue of $\boldsymbol{\Gamma}$, and so is either $\sqrt{V}=2^M$ or $-\sqrt{V}=-2^M$.
In particular, the \textit{sign}~\eqref{eq.sign of quadratic form} of $\mathrm{Q}$ is indeed either $1$ or $-1$.
We caution that the sign of a quadratic form $\mathrm{Q}$ that gives rise to a particular symplectic form $\mathrm{B}$ is determined by $\mathrm{Q}$ but not by $\mathrm{B}$: for example, ${\operatorname{sgn}}(\tilde{\mathrm{Q}})=(-1)^{\mathrm{Q}(v_0)}{\operatorname{sgn}}(\mathrm{Q})$ where $\tilde{\mathrm{Q}}(v):=\mathrm{Q}(v+v_0)+\mathrm{Q}(v_0)$.
A nondefective quadratic form $\mathrm{Q}$ is called \textit{hyperbolic} when ${\operatorname{sgn}}(\mathrm{Q})=1$ and called \textit{elliptic} when ${\operatorname{sgn}}(\mathrm{Q})=-1$.
Under this notation, \eqref{eq.chirp is eigenvector} becomes $\boldsymbol{\Gamma}\mathbf{c}=2^M\,{\operatorname{sgn}}(\mathrm{Q})\mathbf{c}$.
Since $\mathbf{c}=2\boldsymbol{\chi}_\mathcal{D}-\boldsymbol{1}$ where $\mathcal{D}$ is the quadric of $\mathrm{Q}$,
this can be restated as
$2\boldsymbol{\Gamma}\boldsymbol{\chi}_\mathcal{D}-2^{2M}\boldsymbol{\delta}_0
=\boldsymbol{\Gamma}(2\boldsymbol{\chi}_\mathcal{D}-\boldsymbol{1})
=2^M\,{\operatorname{sgn}}(\mathrm{Q})(2\boldsymbol{\chi}_\mathcal{D}-\boldsymbol{1})$,
that is,
\begin{equation}
\label{eq.DFT derivation}
\boldsymbol{\Gamma}\boldsymbol{\chi}_\mathcal{D}
=2^{M-1}[2^M\boldsymbol{\delta}_0+{\operatorname{sgn}}(\mathrm{Q})\mathbf{c}]
=2^{M-1}[2^M\boldsymbol{\delta}_0+{\operatorname{sgn}}(\mathrm{Q})(2\boldsymbol{\chi}_\mathcal{D}-\boldsymbol{1})].
\end{equation}
When evaluated at $v=0$, this gives
$\#(\mathcal{D})=(\boldsymbol{\Gamma}\boldsymbol{\chi}_\mathcal{D})(0)=2^{M-1}[2^M+{\operatorname{sgn}}(\mathrm{Q})]$.
When instead evaluated at any $v\neq 0$, \eqref{eq.DFT derivation} gives
$\abs{(\boldsymbol{\Gamma}\boldsymbol{\chi}_\mathcal{D})(v)}=2^{M-1}$.
As such, $\mathcal{D}$ is a difference set for $\mathcal{V}$.
(From this perspective, \eqref{eq.DFT derivation} itself is remarkable: for many difference sets $\mathcal{D}$,
no simple expression for the phase of the DFT of $\boldsymbol{\chi}_\mathcal{D}$ is known.)
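(For the elliptic form of Example~\ref{ex.PDS}, both the chirp-eigenvector relation and~\eqref{eq.DFT derivation} can be confirmed numerically; the following minimal sketch, ours and assuming \texttt{numpy}, does so.)
\begin{verbatim}
import numpy as np
from itertools import product

V = [np.array(v) for v in product([0, 1], repeat=4)]
B = lambda x, y: (x[0]*y[1] + x[1]*y[0] + x[2]*y[3] + x[3]*y[2]) % 2
Q = lambda x: (x[0]*x[1] + x[2]*x[3] + x[2] + x[3]) % 2
Gamma = np.array([[(-1.) ** B(x, y) for y in V] for x in V])

c = np.array([(-1.) ** Q(v) for v in V])         # the chirp
M = 2
sgn = c.sum() / 2 ** M                           # sgn(Q) = -1 (elliptic)
chi = (1 + c) / 2                                # indicator of the quadric
delta0 = np.eye(16)[0]

assert np.allclose(Gamma @ c, 2 ** M * sgn * c)  # chirp eigenvector
assert np.allclose(Gamma @ chi,
                   2 ** (M - 1) * (2 ** M * delta0 + sgn * c))
\end{verbatim}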
In particular, \eqref{eq.DFT derivation} implies that the corresponding harmonic ETF $\set{\boldsymbol{\varphi}_v}_{v\in\mathcal{V}}$ for $\mathbb{R}^\mathcal{D}$,
\smash{$\boldsymbol{\varphi}_v(d):=(-1)^{\mathrm{B}(d,v)}$}, satisfies
\begin{equation*}
\ip{\boldsymbol{\varphi}_{v_1}}{\boldsymbol{\varphi}_{v_2}}
=\sum_{d\in\mathcal{D}}(-1)^{\mathrm{B}(d,v_1+v_2)}
=(\boldsymbol{\Gamma}\boldsymbol{\chi}_\mathcal{D})(v_1+v_2)
=2^{M-1}\left\{\begin{array}{cl}
2^M+{\operatorname{sgn}}(\mathrm{Q}),&\ v_1=v_2,\\
{\operatorname{sgn}}(\mathrm{Q})\mathbf{c}(v_1+v_2),&\ v_1\neq v_2.
\end{array}\right.
\end{equation*}
That is, $\set{\boldsymbol{\varphi}_v}_{v\in\mathcal{V}}$ has Gram matrix $\boldsymbol{\Phi}^*\boldsymbol{\Phi}=2^{M-1}[2^M\mathbf{I}+{\operatorname{sgn}}(\mathrm{Q})\mathbf{C}]$,
where $\mathbf{C}$ is the ($\mathcal{V}$-circulant) filter defined by $\mathbf{C}\mathbf{x}:=\mathbf{c}*\mathbf{x}$,
namely where $\mathbf{C}(v_1,v_2)=\mathbf{c}(v_1+v_2)=(-1)^{\mathrm{Q}(v_1+v_2)}$.
Like all filters over $\mathcal{V}$, this matrix $\mathbf{C}$ is diagonalized by the DFT $\boldsymbol{\Gamma}$:
for any $\mathbf{x}\in\mathbb{R}^\mathcal{V}$, $v\in\mathcal{V}$,
\begin{equation*}
(\boldsymbol{\Gamma}\mathbf{C}\mathbf{x})(v)
=[\boldsymbol{\Gamma}(\mathbf{c}*\mathbf{x})](v)
=(\boldsymbol{\Gamma}\mathbf{c})(v)(\boldsymbol{\Gamma}\mathbf{x})(v)
=2^M\,{\operatorname{sgn}}(\mathrm{Q})\mathbf{c}(v)(\boldsymbol{\Gamma}\mathbf{x})(v)
=2^M\,{\operatorname{sgn}}(\mathrm{Q})(\boldsymbol{\Delta}\boldsymbol{\Gamma}\mathbf{x})(v),
\end{equation*}
where $\boldsymbol{\Delta}:\mathbb{R}^\mathcal{V}\rightarrow\mathbb{R}^\mathcal{V}$,
$(\boldsymbol{\Delta}\mathbf{x})(v):=\mathbf{c}(v)\mathbf{x}(v)$ is the \textit{chirp modulation} operator,
namely the diagonal $(\mathcal{V}\times\mathcal{V})$-indexed orthogonal matrix $\boldsymbol{\Delta}$ whose $v$th diagonal entry is $\boldsymbol{\Delta}(v,v)=\mathbf{c}(v)=(-1)^{\mathrm{Q}(v)}$.
That is, $\mathbf{C}=2^{-M}{\operatorname{sgn}}(\mathrm{Q})\boldsymbol{\Gamma}\boldsymbol{\Delta}\boldsymbol{\Gamma}$.
It is remarkable however that $\boldsymbol{\Gamma}$ and $\mathbf{C}$ are also related by conjugation by $\boldsymbol{\Delta}$:
for any $v_1,v_2\in\mathcal{V}$, \eqref{eq.polarization} gives
\begin{equation*}
(\boldsymbol{\Delta}\mathbf{C}\boldsymbol{\Delta})(v_1,v_2)
=(-1)^{\mathrm{Q}(v_1)}(-1)^{\mathrm{Q}(v_1+v_2)}(-1)^{\mathrm{Q}(v_2)}
=(-1)^{\mathrm{B}(v_1,v_2)}
=\boldsymbol{\Gamma}(v_1,v_2),
\end{equation*}
and so $\boldsymbol{\Gamma}=\boldsymbol{\Delta}\mathbf{C}\boldsymbol{\Delta}$.
In particular, $\mathbf{C}=\boldsymbol{\Delta}\boldsymbol{\Gamma}\boldsymbol{\Delta}$ (like $\boldsymbol{\Gamma}$) is a real-symmetric Hadamard matrix whose diagonal entries have value $1$.
(We caution that $\boldsymbol{\Gamma}$ and $\mathbf{C}$ are distinct:
$\mathbf{C}$ is $\mathcal{V}$-circulant whereas $\boldsymbol{\Gamma}$ is not, with the latter having an all-ones first column.)
Moreover, combining the above facts gives
\smash{$\boldsymbol{\Delta}\boldsymbol{\Gamma}\boldsymbol{\Delta}=\mathbf{C}=2^{-M}{\operatorname{sgn}}(\mathrm{Q})\boldsymbol{\Gamma}\boldsymbol{\Delta}\boldsymbol{\Gamma}$},
namely that the \textit{Fourier-chirp} transform $\boldsymbol{\Gamma}\boldsymbol{\Delta}$ satisfies $(\boldsymbol{\Gamma}\boldsymbol{\Delta})^3=2^{3M}{\operatorname{sgn}}(\mathrm{Q})\mathbf{I}$.
Analogous transforms arise in the study of \textit{SIC-POVMs}; see~Section~3.4 of~\cite{Zauner99}, and~\cite{Fickus09}.
Interestingly, this implies that the shifts of $\mathbf{c}$ form an equal-norm orthogonal basis of eigenvectors of the DFT,
that is, $\boldsymbol{\Gamma}$ and $\mathbf{C}$ orthogonally diagonalize each other:
\begin{equation*}
\boldsymbol{\Gamma}
=2^{-3M}{\operatorname{sgn}}(\mathrm{Q})\boldsymbol{\Gamma}(\boldsymbol{\Gamma}\boldsymbol{\Delta})^3
=2^{-M}{\operatorname{sgn}}(\mathrm{Q})(\boldsymbol{\Delta}\boldsymbol{\Gamma}\boldsymbol{\Delta})\boldsymbol{\Delta}(\boldsymbol{\Delta}\boldsymbol{\Gamma}\boldsymbol{\Delta})
=2^{-M}{\operatorname{sgn}}(\mathrm{Q})\mathbf{C}\boldsymbol{\Delta}\mathbf{C}.\qedhere
\end{equation*}
\end{proof}
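The identities established in this proof are straightforward to verify numerically for small $M$.
The following Python sketch (an illustration on our part, with the hyperbolic form $\mathrm{Q}(\mathbf{x})=x_1x_2+x_3x_4$ of~\eqref{eq.canonical hyperbolic} below hard-coded for $M=2$) checks the chirp eigenvector relation~\eqref{eq.chirp is eigenvector}, the conjugation identity $\boldsymbol{\Gamma}=\boldsymbol{\Delta}\mathbf{C}\boldsymbol{\Delta}$, and the Fourier-chirp identity $(\boldsymbol{\Gamma}\boldsymbol{\Delta})^3=2^{3M}{\operatorname{sgn}}(\mathrm{Q})\mathbf{I}$:
\begin{verbatim}
import itertools
import numpy as np

M = 2
V = list(itertools.product([0, 1], repeat=2 * M))              # vectors of F_2^{2M}
Q = lambda x: sum(x[2 * m] * x[2 * m + 1] for m in range(M)) % 2
add = lambda x, y: tuple((a + b) % 2 for a, b in zip(x, y))
B = lambda x, y: (Q(add(x, y)) + Q(x) + Q(y)) % 2              # polarization

Gamma = np.array([[(-1) ** B(x, y) for y in V] for x in V])    # DFT of F_2^{2M}
c = np.array([(-1) ** Q(x) for x in V])                        # the chirp
sgn = int(np.sign(c.sum()))                                    # sgn(Q); here +1
Delta = np.diag(c)                                             # chirp modulation
C = np.array([[(-1) ** Q(add(x, y)) for y in V] for x in V])   # circulant filter

assert np.array_equal(Gamma @ c, 2 ** M * sgn * c)             # chirp eigenvector
assert np.array_equal(Gamma, Delta @ C @ Delta)                # Gamma = Delta C Delta
F3 = np.linalg.matrix_power(Gamma @ Delta, 3)
assert np.array_equal(F3, 2 ** (3 * M) * sgn * np.eye(len(V), dtype=int))
\end{verbatim}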
\begin{example}
\label{ex.quadratic forms}
Continuing Example~\ref{ex.symplectic form}, for any positive integer $M$,
the bilinear form $\mathrm{B}$ on $\mathcal{V}=\mathbb{F}_2^M$ given in~\eqref{eq.canonical symplectic} arises via~\eqref{eq.polarization}, for example, from the quadratic form $\mathrm{Q}:\mathbb{F}_2^{2M}\rightarrow\mathbb{F}_2$,
\begin{equation}
\label{eq.canonical hyperbolic}
\mathrm{Q}(\mathbf{x})
=\mathrm{Q}(x_1,\dotsc,x_{2M})
:=\sum_{m=1}^M x_{2m-1}x_{2m}
=x_1x_2+\dotsc+x_{2M-1}x_{2M}.
\end{equation}
Since $\mathrm{B}$ is nondegenerate (symplectic),
$\mathrm{Q}$ is nondefective.
When $M=1$, $\mathrm{Q}(x_1,x_2)=x_1x_2$ has three singular vectors and one nonsingular one: its quadric is $\mathcal{D}=\set{00,01,10}$, and so $\mathcal{D}^\mathrm{c}=\set{11}$.
For $M>1$, a vector in $\mathbb{F}_2^{2M}$ is singular if and only if it is obtained by either appending $00$, $10$ or $01$ to a singular vector in $\mathbb{F}_2^{\smash{2M-2}}$ or appending $11$ to a nonsingular one.
By induction, this implies that~\eqref{eq.canonical hyperbolic} has exactly
$\#(\mathcal{D})=2^{M-1}(2^M+1)$ singular vectors, and so is hyperbolic.
For an elliptic quadratic form $\tilde{\mathrm{Q}}$ that yields the same symplectic form $\mathrm{B}$, we can for example let $\tilde{\mathrm{Q}}(\mathbf{x}):=\mathrm{Q}(\mathbf{x}+\mathbf{x}_0)+1$ where $\mathrm{Q}(\mathbf{x}_0)=1$.
Taking $\mathbf{x}_0$ to be $00\dotsb0011$ for instance yields
\begin{equation}
\label{eq.canonical elliptic}
\tilde{\mathrm{Q}}(\mathbf{x})
:=\sum_{m=1}^M x_{2m-1}x_{2m}+x_{2M-1}^2+x_{2M}^2
=x_1x_2+\dotsc+x_{2M-1}x_{2M}+x_{2M-1}^2+x_{2M}^2.
\end{equation}
When $M=2$ this becomes the elliptic quadratic form $x_1x_2+x_3x_4+x_3^2+x_4^2$ used in Example~\ref{ex.PDS} whose $6$-element quadric $\mathcal{D}$ is given in~\eqref{eq.PDS(16,6,10)}.
(Adding $0011$ to these vectors gives the nonsingular vectors of the hyperbolic quadratic form $x_1x_2+x_3x_4$.)
The corresponding chirp $\mathbf{c}$ has values
$+---+---+----+++$.
Conjugating $\boldsymbol{\Gamma}$ of~\eqref{eq.16 x 16 Gamma} by the diagonal matrix $\boldsymbol{\Delta}$ gives $\mathbf{C}=\boldsymbol{\Delta}\boldsymbol{\Gamma}\boldsymbol{\Delta}$ (the unique $\mathbb{F}_2^{2M}$-circulant matrix that has $\mathbf{c}$ as its first column).
This matrix $\mathbf{C}$ naturally arises in the Gram matrix $\boldsymbol{\Gamma}_0^*\boldsymbol{\Gamma}_0^{}=2(4\mathbf{I}-\mathbf{C})=2\boldsymbol{\Delta}(4\mathbf{I}-\boldsymbol{\Gamma})\boldsymbol{\Delta}$ of the corresponding harmonic ${\operatorname{ETF}}(6,16)$ whose synthesis operator $\boldsymbol{\Gamma}_0$ is given in~\eqref{eq.16 x 16 Gamma}.
Here, $\boldsymbol{\Gamma}\boldsymbol{\Delta}$ is a real Hadamard matrix with the remarkable property that $(\boldsymbol{\Gamma}\boldsymbol{\Delta})^3=-64\mathbf{I}$.
In the next result we use this Fourier-chirp relation to prove that $\mathcal{D}$ and $\mathcal{D}^\mathrm{c}$ are paired difference sets.
Though not necessary for our work below,
it is known that up to isomorphism,
\eqref{eq.canonical hyperbolic} and \eqref{eq.canonical elliptic} are the only hyperbolic and elliptic nondefective quadratic forms on a vector space $\mathcal{V}$ over $\mathbb{F}_2$ of dimension $2M$~\cite{Grove02}.
The binary matrices that preserve the form given in~\eqref{eq.canonical hyperbolic} or~\eqref{eq.canonical elliptic} form the classical \textit{orthogonal groups} $\mathrm{O}^+(2M,2)$ and $\mathrm{O}^{-}(2M,2)$, respectively.
\end{example}
\begin{theorem}
\label{thm.PDS from quadratic}
Let $\mathrm{Q}$ be a nondefective quadratic form on a vector space $\mathcal{V}$ over $\mathbb{F}_2$,
and let $\mathrm{B}$ be the associated symplectic form~\eqref{eq.polarization}.
Then the set $\mathcal{D}=\set{v\in\mathcal{V}: \mathrm{Q}(v)=0}$ of all singular vectors of $\mathrm{Q}$ is a difference set for $\mathcal{V}$ that is paired with $\mathcal{D}^\mathrm{c}$ in the sense of Definition~\ref{def.paired difference sets}, provided we identify (the additive group of) $\mathcal{V}$ with its Pontryagin dual via the isomorphism $v_2\mapsto(v_1\mapsto(-1)^{\mathrm{B}(v_1,v_2)})$.
\end{theorem}
\begin{proof}
Recall the notation and facts of Lemma~\ref{lem.chirp facts}.
We already know that $\mathcal{D}$ is a difference set for $\mathcal{V}$
(and so $\mathcal{D}^\mathrm{c}$ is as well),
and that the synthesis operator $\boldsymbol{\Gamma}_0$ of the resulting harmonic ETF
(the $(\mathcal{D}\times\mathcal{V})$-indexed submatrix of $\boldsymbol{\Gamma}$) satisfies $\boldsymbol{\Gamma}_0^*\boldsymbol{\Gamma}_0^{}=2^{M-1}[2^M\mathbf{I}+{\operatorname{sgn}}(\mathrm{Q})\mathbf{C}]$.
To show that $\mathcal{D}$ and $\mathcal{D}^\mathrm{c}$ are paired in the sense of Definition~\ref{def.paired difference sets},
we want to show that the $\mathcal{D}^\mathrm{c}$-indexed columns of $\boldsymbol{\Gamma}_0$ form a tight frame for their span, or equivalently, that the $(\mathcal{D}^\mathrm{c}\times\mathcal{D}^\mathrm{c})$-indexed submatrix of $\boldsymbol{\Gamma}_0^*\boldsymbol{\Gamma}_0^{}=2^{M-1}[2^M\mathbf{I}+{\operatorname{sgn}}(\mathrm{Q})\mathbf{C}]$ is a scalar multiple of a projection.
Here since $\frac12(\mathbf{I}-\boldsymbol{\Delta})$ is the diagonal $\set{0,1}$-valued matrix whose diagonal entries indicate $\mathcal{D}^\mathrm{c}$,
this equates to showing that
\begin{equation*}
\mathbf{G}
:=\tfrac12(\mathbf{I}-\boldsymbol{\Delta})[2^M\mathbf{I}+{\operatorname{sgn}}(\mathrm{Q})\mathbf{C}]\tfrac12(\mathbf{I}-\boldsymbol{\Delta})
\end{equation*}
satisfies $\mathbf{G}^2=A\mathbf{G}$ for some $A>0$.
To simplify this expression for $\mathbf{G}$,
note that since $\boldsymbol{\Delta}\mathbf{C}\boldsymbol{\Delta}=\boldsymbol{\Gamma}$ where $\boldsymbol{\Delta}^2=\mathbf{I}$
(and so $[\frac12(\mathbf{I}-\boldsymbol{\Delta})]^2=\frac12(\mathbf{I}-\boldsymbol{\Delta})$ and $(\mathbf{I}-\boldsymbol{\Delta})\boldsymbol{\Delta}=-(\mathbf{I}-\boldsymbol{\Delta})$),
\begin{equation}
\label{eq.pf of infinite family 1}
\mathbf{G}
=\tfrac12(\mathbf{I}-\boldsymbol{\Delta})[2^M\mathbf{I}+{\operatorname{sgn}}(\mathrm{Q})\boldsymbol{\Delta}\boldsymbol{\Gamma}\boldsymbol{\Delta}]\tfrac12(\mathbf{I}-\boldsymbol{\Delta})
=2^M\tfrac{1}{2}(\mathbf{I}-\boldsymbol{\Delta})+{\operatorname{sgn}}(\mathrm{Q})\tfrac12(\mathbf{I}-\boldsymbol{\Delta})\boldsymbol{\Gamma}\tfrac12(\mathbf{I}-\boldsymbol{\Delta}).
\end{equation}
Squaring this equation and again making use of the fact that
$[\frac12(\mathbf{I}-\boldsymbol{\Delta})]^2=\frac12(\mathbf{I}-\boldsymbol{\Delta})$ gives
\begin{equation}
\label{eq.pf of infinite family 2}
\mathbf{G}^2
=2^{2M}\tfrac12(\mathbf{I}-\boldsymbol{\Delta})+2^{M+1}\,{\operatorname{sgn}}(\mathrm{Q})\tfrac12(\mathbf{I}-\boldsymbol{\Delta})\boldsymbol{\Gamma}\tfrac12(\mathbf{I}-\boldsymbol{\Delta})
+\tfrac12(\mathbf{I}-\boldsymbol{\Delta})\boldsymbol{\Gamma}\tfrac12(\mathbf{I}-\boldsymbol{\Delta})\boldsymbol{\Gamma}\tfrac12(\mathbf{I}-\boldsymbol{\Delta}).
\end{equation}
To proceed, recall from~\eqref{eq.chirp Fourier} that
$(\boldsymbol{\Gamma}\boldsymbol{\Delta}\boldsymbol{\Gamma})(\boldsymbol{\Delta}\boldsymbol{\Gamma}\boldsymbol{\Delta})
=(\boldsymbol{\Gamma}\boldsymbol{\Delta})^3=2^{3M}{\operatorname{sgn}}(\mathrm{Q})\mathbf{I}$
where $\boldsymbol{\Gamma}^2=2^{2M}\mathbf{I}$ and $\boldsymbol{\Delta}^2=\mathbf{I}$.
Thus, $\boldsymbol{\Gamma}\boldsymbol{\Delta}\boldsymbol{\Gamma}=2^M\,{\operatorname{sgn}}(\mathrm{Q})\boldsymbol{\Delta}\boldsymbol{\Gamma}\boldsymbol{\Delta}$ and so
\begin{equation*}
\boldsymbol{\Gamma}\tfrac12(\mathbf{I}-\boldsymbol{\Delta})\boldsymbol{\Gamma}
=2^{2M-1}\mathbf{I}-\tfrac12\boldsymbol{\Gamma}\boldsymbol{\Delta}\boldsymbol{\Gamma}
=2^{2M-1}\mathbf{I}-2^{M-1}\,{\operatorname{sgn}}(\mathrm{Q})\boldsymbol{\Delta}\boldsymbol{\Gamma}\boldsymbol{\Delta}.
\end{equation*}
Conjugating this equation by $\tfrac12(\mathbf{I}-\boldsymbol{\Delta})$ gives
\begin{equation*}
\tfrac12(\mathbf{I}-\boldsymbol{\Delta})\boldsymbol{\Gamma}\tfrac12(\mathbf{I}-\boldsymbol{\Delta})\boldsymbol{\Gamma}\tfrac12(\mathbf{I}-\boldsymbol{\Delta})
=2^{2M-1}\tfrac12(\mathbf{I}-\boldsymbol{\Delta})-2^{M-1}\,{\operatorname{sgn}}(\mathrm{Q})\tfrac12(\mathbf{I}-\boldsymbol{\Delta})\boldsymbol{\Gamma}\tfrac12(\mathbf{I}-\boldsymbol{\Delta}).
\end{equation*}
Substituting this into~\eqref{eq.pf of infinite family 2} and then recalling~\eqref{eq.pf of infinite family 1} gives that $\mathbf{G}^2=A\mathbf{G}$ for some $A>0$:
\begin{align*}
\mathbf{G}^2
&=2^{2M}\tfrac12(\mathbf{I}-\boldsymbol{\Delta})
+2^{M+1}\,{\operatorname{sgn}}(\mathrm{Q})\tfrac12(\mathbf{I}-\boldsymbol{\Delta})\boldsymbol{\Gamma}\tfrac12(\mathbf{I}-\boldsymbol{\Delta})\\
&\qquad+2^{2M-1}\tfrac12(\mathbf{I}-\boldsymbol{\Delta})-2^{M-1}\,{\operatorname{sgn}}(\mathrm{Q})\tfrac12(\mathbf{I}-\boldsymbol{\Delta})\boldsymbol{\Gamma}\tfrac12(\mathbf{I}-\boldsymbol{\Delta})\\
&=3(2^{M-1})[2^M\tfrac12(\mathbf{I}-\boldsymbol{\Delta})
+{\operatorname{sgn}}(\mathrm{Q})\tfrac12(\mathbf{I}-\boldsymbol{\Delta})\boldsymbol{\Gamma}\tfrac12(\mathbf{I}-\boldsymbol{\Delta})]
=3(2^{M-1})\mathbf{G}.\qedhere
\end{align*}
\end{proof}
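Though not needed in the sequel, this final identity is also easy to confirm numerically: the following Python sketch (again an illustration on our part, hard-coding the hyperbolic form for $M=2$, where $3(2^{M-1})=6$) checks that $\mathbf{G}^2=3(2^{M-1})\mathbf{G}$:
\begin{verbatim}
import itertools
import numpy as np

M = 2
V = list(itertools.product([0, 1], repeat=2 * M))
Q = lambda x: sum(x[2 * m] * x[2 * m + 1] for m in range(M)) % 2
add = lambda x, y: tuple((a + b) % 2 for a, b in zip(x, y))
c = np.array([(-1) ** Q(x) for x in V])                      # chirp; sgn(Q) = +1 here
C = np.array([[(-1) ** Q(add(x, y)) for y in V] for x in V])
I = np.eye(len(V), dtype=int)
P = (I - np.diag(c)) // 2                                    # diagonal indicator of D^c
G = P @ (2 ** M * I + int(np.sign(c.sum())) * C) @ P
assert np.array_equal(G @ G, 3 * 2 ** (M - 1) * G)
\end{verbatim}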
Applying Theorems~\ref{thm.ECTFF from PDS} and~\ref{thm.PDS from quadratic} to the canonical quadratic forms of Example~\ref{ex.quadratic forms} yields the following result:
\begin{theorem}
\label{thm.infinite family}
For any positive integer $M$,
let $\mathrm{Q}$ be either the hyperbolic~\eqref{eq.canonical hyperbolic} or elliptic~\eqref{eq.canonical elliptic} quadratic form over $\mathbb{F}_2^{2M}$ with associated symplectic form $\mathrm{B}$ of~\eqref{eq.canonical symplectic}
(which have ${\operatorname{sgn}}(\mathrm{Q})=1$ and ${\operatorname{sgn}}(\mathrm{Q})=-1$, respectively)
and let
$\mathcal{D}=\set{\mathbf{x}\in\mathbb{F}_2^{2M}: \mathrm{Q}(\mathbf{x})=0}$.
Then $\mathcal{D}$ and $\mathcal{D}^\mathrm{c}$ are paired difference sets for $\mathbb{F}_2^{2M}$
(Theorem~\ref{thm.PDS from quadratic}),
and applying Theorem~\ref{thm.ECTFF from PDS} to them gives that:
\begin{enumerate}
\renewcommand{\labelenumi}{(\alph{enumi})}
\item
$\set{\boldsymbol{\varphi}_\mathbf{y}}_{\mathbf{y}\in\mathbb{F}_2^{2M}}\subseteq\mathbb{R}^\mathcal{D}$, $\boldsymbol{\varphi}_{\mathbf{y}}(\mathbf{x})=(-1)^{\mathrm{B}(\mathbf{x},\mathbf{y})}$ is an ${\operatorname{ETF}}(2^{M-1}[2^M+{\operatorname{sgn}}(\mathrm{Q})],2^{2M})$ for $\mathbb{R}^\mathcal{D}$;\smallskip
\item
for any $\mathbf{y}\in\mathbb{F}_2^{2M}$, $\set{\boldsymbol{\varphi}_{\mathbf{y}+\mathbf{z}}}_{\mathbf{z}\in\mathcal{D}^\mathrm{c}}$ is an ${\operatorname{ETF}}(\frac13(2^{2M}-1),2^{M-1}[2^M-{\operatorname{sgn}}(\mathrm{Q})])$ for its span $\mathcal{U}_\mathbf{y}$;\smallskip
\item
$\set{\mathcal{U}_\mathbf{y}}_{\mathbf{y}\in\mathbb{F}_2^{2M}}$ is an
${\operatorname{ECTFF}}(2^{M-1}[2^M+{\operatorname{sgn}}(\mathrm{Q})],2^{2M},\frac13(2^{2M}-1))$ for $\mathbb{R}^\mathcal{D}$;\smallskip
\item
$\set{\boldsymbol{\psi}_\mathbf{y}}_{\mathbf{y}\in\mathbb{F}_2^{2M}}\subseteq\mathbb{R}^{\mathcal{D}^\mathrm{c}}$, $\boldsymbol{\psi}_{\mathbf{y}}(\mathbf{x})=(-1)^{\mathrm{B}(\mathbf{x},\mathbf{y})}$ is an ${\operatorname{ETF}}(2^{M-1}[2^M-{\operatorname{sgn}}(\mathrm{Q})],2^{2M})$ for $\mathbb{R}^{\mathcal{D}^\mathrm{c}}$;\smallskip
\item
for any $\mathbf{y}\in\mathbb{F}_2^{2M}$, $\set{\boldsymbol{\psi}_{\mathbf{y}+\mathbf{z}}}_{\mathbf{z}\in\mathcal{D}}$ is an ${\operatorname{ETF}}(\frac13(2^{2M}-1),2^{M-1}[2^M+{\operatorname{sgn}}(\mathrm{Q})])$ for its span $\mathcal{V}_\mathbf{y}$;\smallskip
\item
$\set{\mathcal{V}_\mathbf{y}}_{\mathbf{y}\in\mathbb{F}_2^{2M}}$ is an
${\operatorname{ECTFF}}(2^{M-1}[2^M-{\operatorname{sgn}}(\mathrm{Q})],2^{2M},\frac13(2^{2M}-1))$ for $\mathbb{R}^{\mathcal{D}^\mathrm{c}}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Most of these results are immediate consequences of Theorems~\ref{thm.ECTFF from PDS} and~\ref{thm.PDS from quadratic}.
To find the dimension $R$ of the spans of the sub-ETFs in (b) and (e) we use~\eqref{eq.paired diff set rank} where
$N=\#(\mathcal{V})=2^{2M}$,
$D=\#(\mathcal{D})=2^{M-1}[2^M+{\operatorname{sgn}}(\mathrm{Q})]$
and
$E=\#(\mathcal{D}^\mathrm{c})=2^{2M}-2^{M-1}[2^M+{\operatorname{sgn}}(\mathrm{Q})]=2^{M-1}[2^M-{\operatorname{sgn}}(\mathrm{Q})]$.
Here, since $DE=2^{2M-2}(N-1)$ and $D+E=N$,
\begin{equation*}
R
=\tfrac{DE(N-1)}{(D+E-1)N-DE}
=\tfrac{2^{2M-2}(N-1)^2}{(N-1)N-2^{2M-2}(N-1)}
=\tfrac{2^{2M-2}(N-1)}{N-2^{2M-2}}
=\tfrac{2^{2M-2}(2^{2M}-1)}{2^{2M}-2^{2M-2}}
=\tfrac13(2^{2M}-1).
\end{equation*}
Another peculiarity of these examples is that since $\boldsymbol{\Gamma}$ is symmetric and $\mathcal{D}$ and $\mathcal{D}^\mathrm{c}$ are complementary,
it is valid to construct the ECTFFs (c) and (f) that arise from Theorem~\ref{thm.ECTFF from PDS} in this setting from sub-ETFs (b) and (e) of Naimark complementary ETFs (a) and (d), respectively.
\end{proof}
As discussed in Example~\ref{ex.ECTFF}, when $M=2$ this yields both an ${\operatorname{ETF}}(6,16)$ that contains $16$ (distinct, but overlapping and unitarily equivalent) sub-${\operatorname{ETF}}(5,10)$ whose spans form an ${\operatorname{ECTFF}}(6,16,5)$,
as well as an ${\operatorname{ETF}}(10,16)$ that contains $16$ sub-${\operatorname{ETF}}(5,6)$ whose spans form an ${\operatorname{ECTFF}}(10,16,5)$.
When instead $M=3$, it yields an ${\operatorname{ETF}}(28,64)$ that contains $64$ sub-${\operatorname{ETF}}(21,36)$ whose spans form an ${\operatorname{ECTFF}}(28,64,21)$ as well as an ${\operatorname{ETF}}(36,64)$ that contains $64$ sub-${\operatorname{ETF}}(21,28)$ whose spans form an ${\operatorname{ECTFF}}(36,64,21)$.
These alone account for a remarkable proportion of real ETFs with small parameters~\cite{FickusM16}.
(When $M=1$, Theorem~\ref{thm.infinite family} becomes trivial,
yielding an ${\operatorname{ETF}}(1,4)$ that contains $4$ sub-${\operatorname{ETF}}(1,3)$ whose spans form an ${\operatorname{ECTFF}}(1,4,1)$ and an ${\operatorname{ETF}}(3,4)$ that contains $4$ sub-${\operatorname{ETF}}(1,1)$ whose spans form an ${\operatorname{ECTFF}}(3,4,1)$.)
With the exception of this ${\operatorname{ECTFF}}(6,16,5)$ (which as already noted is the spatial complement of an ${\operatorname{ETF}}(6,16)$), the ECTFFs produced by Theorem~\ref{thm.infinite family} when $M\geq 2$ seem to be new:
we could not find any other way to construct (real or complex) ECTFFs with these parameters from any of the methods mentioned in the introduction.
These ECTFFs cannot be EITFFs: as noted in the previous section, this is actually true of any ECTFFs that arise from nontrivial paired difference sets since the resulting subspaces intersect nontrivially;
here, this also follows from the fact that $2R>D$.
A more interesting question is whether the spatial complements of these ECTFFs are EITFFs.
When $M=2$, the spatial complement of the ${\operatorname{ECTFF}}(6,16,5)$ certainly is an EITFF, while that of the ${\operatorname{ECTFF}}(10,16,5)$ certainly is not (since having $D=2R$ implies that its principal angles do not change under spatial complements).
Our preliminary numerical experimentation indicates that when $M\geq 3$ the spatial complements of the ECTFFs of Theorem~\ref{thm.infinite family} are not EITFFs in general.
When $M\geq 3$ an ECTFF of (c) or (f) has $D<2R$ and so any pair of its subspaces have at least
$2R-D
=\frac23(2^M\pm1)(2^{M-2}\mp 1)$ principal angles of $0$;
only when every pair of its subspaces has exactly $2R-D$ principal angles of $0$ and $D-R$ principal angles of some other constant value will its spatial complement be an EITFF.
\begin{remark}
Every ETF constructed in Theorem~\ref{thm.infinite family} equates to a known type of strongly regular graph (SRG)~\cite{Brouwer07,Brouwer17}.
A graph on a $V$-element vertex set $\mathcal{V}$ is \textit{strongly regular} with parameters $(V,K,\Lambda,U)$ if its adjacency matrix $\mathbf{A}$ satisfies
$\mathbf{A}^2=(\Lambda-U)\mathbf{A}+(K-U)\mathbf{I}+U\mathbf{J}$.
These parameters are dependent:
since $\mathbf{A}\boldsymbol{1}=K\boldsymbol{1}$, applying $\mathbf{A}^2$ to $\boldsymbol{1}$ gives $K^2=(\Lambda-U)K+(K-U)+UV$, namely that $U(V-K-1)=K(K-\Lambda-1)$.
In general, there are two notions of equivalence between certain real ETFs and certain SRGs.
By negating some vectors if necessary,
every real $N$-vector ETF is \textit{projectively equivalent} to one $\set{\boldsymbol{\varphi}_n}_{n\in\mathcal{N}}$ for which there exists $n_0\in\mathcal{N}$ such that $\ip{\boldsymbol{\varphi}_{n_0}}{\boldsymbol{\varphi}_{n}}>0$ for all $n$.
Such an ETF equates to an SRG on the vertex set $\mathcal{N}\backslash\set{n_0}$ with $K=2U$~\cite{HolmesP04,Waldron09}.
Here, two vertices $n_1,n_2\in\mathcal{N}\backslash\set{n_0}$ are adjacent when $\ip{\boldsymbol{\varphi}_{n_1}}{\boldsymbol{\varphi}_{n_2}}>0$~\cite{FickusJMPW18} and
\begin{equation}
\label{eq.traditional ETF SRG equivalence}
V=N-1,
\quad
K=\tfrac N2-1-(\tfrac N{2D}-1)\bigbracket{\tfrac{D(N-1)}{N-D}}^{\frac12},
\quad
U=\tfrac K2.
\end{equation}
(In~\cite{Waldron09},
adjacency instead equates to having $\ip{\boldsymbol{\varphi}_{n_1}}{\boldsymbol{\varphi}_{n_2}}<0$,
yielding the complementary graph.)
Sometimes a real $N$-vector ETF $\set{\boldsymbol{\varphi}_n}_{n\in\mathcal{N}}$ for some Hilbert space $\mathbb{H}$ instead has the all-ones vector $\boldsymbol{1}$ as an eigenvector of its Gram matrix.
This can occur in two distinct ways: either the ETF is \textit{centered}, having $\boldsymbol{1}\in\ker(\boldsymbol{\Phi})=\ker(\boldsymbol{\Phi}^*\boldsymbol{\Phi})$ and so $\sum_{n\in\mathcal{N}}\boldsymbol{\varphi}_n=\boldsymbol{\Phi}\boldsymbol{1}=\boldsymbol{0}$,
or is \textit{axial}, having $\boldsymbol{1}\in\boldsymbol{\Phi}^*\boldsymbol{\Phi}(\mathbb{R}^\mathcal{N})=\boldsymbol{\Phi}^*(\mathbb{H})$,
meaning all of its vectors make the same angle with their nonzero centroid.
An axial real ${\operatorname{ETF}}(D,N)$ $\set{\boldsymbol{\varphi}_n}_{n\in\mathcal{N}}$ equates~\cite{FickusJMPW18} to an SRG on the vertex set $\mathcal{N}$ with $V=4K-2\Lambda-2U$ and $V-2K-1<0$.
Here, two vertices $n_1,n_2\in\mathcal{N}$ are adjacent if and only if $\ip{\boldsymbol{\varphi}_{n_1}}{\boldsymbol{\varphi}_{n_2}}>0$ and
\begin{equation}
\label{eq.axial ETF SRG equivalence}
V=N,
\quad
K=\tfrac{N-1}{2}+\tfrac12(\tfrac{N}{D}-1)\bigbracket{\tfrac{D(N-1)}{N-D}}^{\frac12},
\quad
U=\tfrac K2\tfrac{V-2K-2}{V-2K-1}.
\end{equation}
(An analogous characterization of centered real ETFs is also known but is superfluous since an ETF is axial if and only if its Naimark complement is centered~\cite{FickusJMPW18}.)
While every real ETF arises from the equivalence of~\eqref{eq.traditional ETF SRG equivalence}, it is an open problem if the same holds for~\eqref{eq.axial ETF SRG equivalence}: we do not know if every real ETF with $V$ vectors is projectively equivalent to one whose signature matrix matches the Seidel adjacency matrix of a $(V,K,\Lambda,U)$-SRG with $V=4K-2\Lambda-2U$~\cite{FickusJMPW18}.
With respect to the ETFs of Theorem~\ref{thm.infinite family},
note that since the zero vector in $\mathbb{F}_2^{2M}$ is singular,
the synthesis operator $\boldsymbol{\Gamma}_0$ of the harmonic ETF of (a) includes the all-ones ($0$-indexed) row of $\boldsymbol{\Gamma}$.
See~\eqref{eq.16 x 16 Gamma} for when $M=2$ and ${\operatorname{sgn}}(\mathrm{Q})=-1$, for example.
As such, this ETF is axial.
Applying~\eqref{eq.axial ETF SRG equivalence} to it yields an SRG with $V=2^{2M}$, $K=\tfrac12[2^M-{\operatorname{sgn}}(\mathrm{Q})][2^M+{\operatorname{sgn}}(\mathrm{Q})+1]$ and
$U=2^{M-1}(2^{M-1}+1)$ in which,
since $\boldsymbol{\Gamma}_0^*\boldsymbol{\Gamma}_0^{}=2^{M-1}[2^M\mathbf{I}+{\operatorname{sgn}}(\mathrm{Q})\mathbf{C}]$,
adjacency depends on the value of $\mathbf{C}(\mathbf{y}_1,\mathbf{y}_2)=\mathbf{c}(\mathbf{y}_1+\mathbf{y}_2)=(-1)^{\mathrm{Q}(\mathbf{y}_1+\mathbf{y}_2)}$.
This is a known~\textit{affine polar graph} ``$VO^{\pm}_{2M}(2)$''~\cite{Brouwer07}; a numerical check of its strong regularity for $M=2$ is sketched after this remark.
Its Naimark complement (the ETF of (d)) equates to the (graph) complement of this SRG.
Since $\mathbf{C}=\boldsymbol{\Delta}\boldsymbol{\Gamma}\boldsymbol{\Delta}$ the ETF of (a) is moreover projectively equivalent to one with Gram matrix $2^{M-1}[2^M\mathbf{I}+{\operatorname{sgn}}(\mathrm{Q})\boldsymbol{\Gamma}]$.
Since the entries in the $0$th row and column of $\boldsymbol{\Gamma}$ are constant,
we can apply~\eqref{eq.axial ETF SRG equivalence} to this ETF to obtain a subordinate SRG on the $V=2^{2M}-1$ vertices of $\mathbb{F}_2^{2M}\backslash\set{0}$ in which adjacency depends on the value of $\boldsymbol{\Gamma}(\mathbf{y}_1,\mathbf{y}_2)=(-1)^{\mathrm{B}(\mathbf{y}_1,\mathbf{y}_2)}$.
This is a known \textit{symplectic graph} ``$Sp_{2M}(2)$''~\cite{Brouwer07}.
Every ETF of (b) is a sub-ETF of the axial ETF (a) and so is also axial:
if the analysis operator of a given sequence of vectors contains an all-ones vector, then the same is true for any of its subsequences.
The row space of~\eqref{eq.ETF(5,10)}, for example, clearly contains the all-ones vector.
Applying~\eqref{eq.axial ETF SRG equivalence} to it yields an SRG with $V=2^{M-1}[2^M-{\operatorname{sgn}}(\mathrm{Q})]$, $K=\frac12(2^{M-1}+1)[2^M-1-{\operatorname{sgn}}(\mathrm{Q})]$ and
$U=2^{M-3}[2^M+3-{\operatorname{sgn}}(\mathrm{Q})]$ on the nonsingular points of $\mathbb{F}_2^{2M}$ in which adjacency depends on the value of $\mathrm{Q}(\mathbf{y}_1+\mathbf{y}_2)=\mathrm{B}(\mathbf{y}_1,\mathbf{y}_2)$.
This is a known ``$NO^{\pm}_{2M}(2)$'' SRG~\cite{Brouwer07}.
We caution that a sub-ETF of a centered ETF is not necessarily centered.
In particular, the Gram matrix of an ETF of (e) is the $(\mathcal{D}\times\mathcal{D})$-indexed submatrix of
$2^{M-1}[2^M\mathbf{I}-{\operatorname{sgn}}(\mathrm{Q})\boldsymbol{\Gamma}]$ and so is neither axial nor centered, having a ($0$-indexed) row and column with entries of constant value.
Applying~\eqref{eq.traditional ETF SRG equivalence} to it yields an SRG on the $V=2^{M-1}[2^M+{\operatorname{sgn}}(\mathrm{Q})]-1$ nonzero singular points of $\mathbb{F}_2^{2M}$ in which adjacency depends on the value of $\mathrm{Q}(\mathbf{y}_1+\mathbf{y}_2)=\mathrm{B}(\mathbf{y}_1,\mathbf{y}_2)$.
This is a known ``$O_{2M}^{\pm}(2)$'' SRG.
In fact, a careful analysis reveals that an ETF of (e) is projectively equivalent to one of (b) with opposite sign (since the quadrics of~\eqref{eq.canonical hyperbolic} and~\eqref{eq.canonical elliptic} are shifts of each other),
meaning that ``$O_{2M}^{\pm}(2)$" is subordinate to $NO_{2M}^{\mp}(2)$.
Real ETFs with the same (or Naimark complementary) parameters as those of (a) and (c) arise from McFarland difference sets~\cite{DingF07} and Steiner ETFs~\cite{GoethalsS70,FickusMT12} from ${\operatorname{BIBD}}(2^M,2,1)$.
Real ETFs with the same (or Naimark complementary) parameters as those of (b) and (e) arise from Steiner and Tremain~\cite{FickusJMP18} ETFs from ${\operatorname{BIBD}}(2^M-1,3,1)$.
Whether or not such ETFs are truly equivalent (up to unitary transformations on their spans and signed permutations of their vectors) is a question we leave for future research.
\end{remark}
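For example, for $M=2$ and the hyperbolic form, the parameters given above reduce to $(V,K,U)=(16,9,6)$, with $\Lambda=4$ following from $U(V-K-1)=K(K-\Lambda-1)$. The following Python sketch (an illustration on our part) verifies the defining identity $\mathbf{A}^2=(\Lambda-U)\mathbf{A}+(K-U)\mathbf{I}+U\mathbf{J}$ for this graph:
\begin{verbatim}
import itertools
import numpy as np

M = 2
Vset = list(itertools.product([0, 1], repeat=2 * M))
Q = lambda x: sum(x[2 * m] * x[2 * m + 1] for m in range(M)) % 2
add = lambda x, y: tuple((a + b) % 2 for a, b in zip(x, y))
# VO^+_4(2): vertices are the vectors, y1 ~ y2 iff y1 != y2 and Q(y1 + y2) = 0
A = np.array([[1 if (x != y and Q(add(x, y)) == 0) else 0 for y in Vset]
              for x in Vset])
n, K, Lam, U = len(Vset), 9, 4, 6
I, J = np.eye(n, dtype=int), np.ones((n, n), dtype=int)
assert np.array_equal(A @ A, (Lam - U) * A + (K - U) * I + U * J)
\end{verbatim}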
\section{Conclusions and future work}
We have seen that paired difference sets (Definition~\ref{def.paired difference sets}) yield ECTFFs (Theorem~\ref{thm.ECTFF from PDS}) and that an infinite family of nontrivial such pairs exists (Theorem~\ref{thm.infinite family}).
As noted in~\cite{FickusMJ16}, at least one other nontrivial pair exists.
Like that of Example~\ref{ex.PDS} (the $M=2$ case of Theorem~\ref{thm.infinite family}) it consists of a $6$- and $10$-element subset of a group of order $16$.
But unlike that example, the group in question is $\mathbb{Z}_4^2$, not $\mathbb{Z}_2^4$.
(Interestingly, paired difference sets in $\mathbb{Z}_2^2\times\mathbb{Z}_4$ and $\mathbb{Z}_2\times\mathbb{Z}_8$ do not seem to exist, despite the fact that they too contain difference sets of order $6$ and $10$~\cite{FickusMJ16}.)
Nontrivial paired difference sets seem rare, in general:
our numerical search found only $27$ integer triples $(D,E,N)$ that meet even the simplest conditions on the existence of nontrivial paired difference sets of cardinality $D$ and $E$ (ordered without loss of generality according to size) in some abelian group of order $N$ which is at most $1024$,
namely that $1<D\leq E<N\leq 1024$ and that $\frac{D(D-1)}{N-1}$, $\frac{E(E-1)}{N-1}$ and $R$ of~\eqref{eq.paired diff set rank} are integers.
These include only four triples such that $D+E\neq N$.
Remarkably, these four triples along with seven others are ruled out by cross-referencing against a table of known difference sets~\cite{Gordon19} that makes use of more sophisticated known necessary conditions.
This itself raises an interesting open problem:
do the cardinalities of any nontrivial paired difference sets always sum to the cardinality of the corresponding group?
Of the $16$ triples that remain, four correspond to those produced by Theorem~\ref{thm.infinite family} when $M=2,3,4,5$,
namely those with $(R,D,E,N)$ parameters $(5,6,10,16)$ (Example~\ref{ex.PDS}),
$(21,28,36,64)$, $(85,120,136,256)$ and $(341,496,528,1024)$, respectively.
The remaining $12$ cases are open.
For five of these, the existence of even a difference set of cardinality $D$ for a group of order $N$ is unresolved, namely when $(D,N)$ is $(190,400)$, $(325,676)$, $(378,784)$, $(385,925)$ and $(280,931)$.
This leaves just seven open cases that should probably bear the most scrutiny.
They have $(R,D,E,N)$ parameters
$(11,12,33,45)$, $(19,20,76,96)$, $(29,30,145,175)$, $(105,126,225,351)$, $(55,56,385,441)$, $(71,72,568,640)$ and $(89,90,801,891)$.
Interestingly, like those of Theorem~\ref{thm.infinite family}, these parameters all match those arising from certain McFarland difference sets with one exception: $(105,126,225,351)$ is instead consistent with a certain \textit{Spence} difference set~\cite{JungnickelPS07}.
Of these, $(19,20,76,96)$, $(105,126,225,351)$ and $(71,72,568,640)$ seem the most promising since ETFs with the same parameters as those guaranteed by Theorem~\ref{thm.ECTFF from PDS} are already known to exist~\cite{FickusM16}.
It would be more surprising if $12$- and $33$-element paired difference sets for either of the two abelian groups of order $45$ existed since this would give an ${\operatorname{ETF}}(11,33)$, which would be the smallest new ETF discovered in years.
Our numerical work indicates that such paired difference sets do not exist.
This itself raises another open problem: do all paired difference sets consist of $2^{M-1}(2^M-1)$- and $2^{M-1}(2^M+1)$-element subsets of an abelian group of order $2^{2M}$?
\section*{Acknowledgments}
The authors thank Prof.~Dustin~G.~Mixon and the two anonymous reviewers for their thoughtful comments.
We are especially grateful for the anonymous remark that led to Theorem~\ref{thm.necessary}.
This work was partially supported by NSF DMS 1830066, and began during the Summer of Frame Theory (SOFT) 2016.
The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the U.S.~Government.
\section{Introduction}
A Bose--Einstein condensate (BEC) is an exotic state of matter, which takes place in bosonic systems below a critical temperature,
when a macroscopic fraction of particles occupy the same fundamental quantum state \cite{pitaevskii2016bose}. Almost three decades ago,
Bose--Einstein condensation was observed for the first time by Anderson et al. in a dilute ultra-cold atomic gas \cite{BECExp95}.
Since then, BECs have been realized in a wide range of different systems, from solid-state quasiparticles \cite{kasprzak2006bose, demokritov2006bose} to light in optical micro-cavities \cite{klaers2010bose}.
Bose--Einstein condensation is intimately related to the notion of superfluidity, which is the capability of a system to flow without viscous dissipation \cite{pitaevskii2016bose}. Superfluidity was first detected almost one century ago in liquid helium
$^4\mathrm{He}$ \cite{Kapitza1938,Allen1938} below 2.17K, and it is a known feature also of atomic BECs and light in nonlinear optical systems \cite{CarusottoSuperfluidsLight}. Both superfluidity and Bose--Einstein condensation are a manifestation of quantum effects on a macroscopic scale, which is why these systems are usually called quantum fluids. Theoretically, a quantum fluid can be described by a macroscopic complex wave function. This represents the order parameter of the Bose--Einstein condensation phase transition and it is directly related to the density and the inviscid velocity of the superflow via a Madelung transformation \cite{noretal}.
As a consequence of superfluidity, an impurity immersed in a quantum fluid does not experience any drag and can move without resistance. However, if the speed of the impurity is too large, superfluidity is broken because of the emission of topological defects of the order parameter, known as quantum vortices \cite{donnelly1991quantized,FrischCritVel,BrachetCritVel,ActiveWiniecki}. Moreover, at finite temperature the thermal excitations in the system may interact with the impurities and drive their motion \cite{ClusteringUmberto}. The behavior of particles and impurities immersed in a superfluid has been a central subject of study for a long time \cite{donnelly1991quantized}. The interest has been recently renewed by the experimental implementation of solidified hydrogen particles to visualize quantum vortices in superfluid helium \cite{bewley2006superfluid, LaMantiaParticles}, the study of polarons in atomic gases \cite{Impurity_BEC,Tracers_BEC} and the use of impurities to investigate the properties of superfluids of light \cite{MichelSuperLight,CarusottoLight2014}. A particularly interesting kind of impurity arises in the immiscible regime of the multi-component BEC. It has been shown that when two condensates of different species strongly repel each other, one of the two components exists in a localized region and can be thought of as a finite-size impurity \cite{NonlinearBEC_book,RicaRoberts}. If many components are present simultaneously, different phases can be identified, depending on the ratios between the coupling constants \cite{RicaRoberts}. In particular, for positive scattering lengths between the impurity fields, the components separate from the main condensate and show a hard-sphere repulsion between each other. Experimentally, mixtures of different condensates have been realized with cold atomic gases \cite{Modugno2components,Myatt2components}, and the immiscibility properties have been studied \cite{Papp2components}.
In this work we aim at studying numerically the dynamics of an immiscible and finite-size impurity in a quantum fluid at finite temperature. There are several models which have been proposed to take into account finite temperature effects in a quantum fluid, although at the moment there is no uniform consensus on which is the best one \cite{ProukakisFiniteTemperature}. A successful example is the Zaremba-Nikuni-Griffin framework, in which a modified, dissipative Gross--Pitaevskii equation for the condensate wavefunction is coupled with a Boltzmann equation for the thermal cloud \cite{ZNG}. A simpler model is the Fourier truncated Gross--Pitaevskii (FTGP) equation, in which thermal fluctuations of the bosonic field are naturally taken into account without coupling to an external thermal bath \cite{DavisFiniteTEmpBEC}. The main idea behind the FTGP model is that imposing an ultraviolet cutoff $k_{\mathrm{max}}$, and truncating the system in Fourier space, allows for the regularization of the classical ultraviolet divergence, so that states at thermal equilibrium can be generated. The FTGP model has been successfully used to reproduce the condensation transition \cite{DavisFiniteTEmpBEC,DavideBKT,CondensationRica,KrstulovicBottleneck}, to study finite temperature effects on quantum vortex dynamics \cite{BerloffRing,KrstulovicSlowdown,GiorgioFiniteTempPRE} and to investigate the effective viscosity in the system \cite{ShuklaViscosity}.
In this article, we couple the FTGP equation with a minimal model for impurities, which are described as localized repulsive potentials with classical degrees of freedom \cite{ActiveWiniecki,ShuklaParticlesPRA2017}. It has been recently utilized systematically to investigate the interaction between particles and quantum vortices at very low temperature \cite{GiuriatoApproach,GiuriatoKelvinwaves,GiuriatoReconnections,GiuriatoTangle}. We stress that this minimal model is suitable for extensive numerical simulations and Monte-Carlo sampling. Indeed, its simplicity makes it computationally much cheaper than more complex approaches in which the impurities have many (infinite) degrees of freedom, like the Gross--Clark model \cite{BerloffBubble,VilloisBubble} or the multi-component BEC model \cite{RicaRoberts}.
Recently, a drag force acting on an impurity in the weak coupling regime has been detected using a damped GP equation at finite temperature \cite{SpanishDrag}, extending an analytical work in which the resistance of the GP fluid on a point particle was studied at zero temperature \cite{PitaevskiiTheory}. In the case of immiscible active impurities, it has been shown that a multitude of them coupled with the FTGP model can form clusters, depending on the temperature and the ratio between the fluid-mediated attraction and the impurity-impurity repulsion \cite{ClusteringUmberto}. Moreover, the presence of such clusters turned out to be responsible for an increase of the condensation temperature. However, the precise characterization of the dynamics of a single impurity immersed in a bath of FTGP thermal modes has not been addressed yet. This is indeed the purpose of the present work. In the next section, we present the FTGP model coupled with a single three-dimensional impurity, and provide details of the numerical techniques used to simulate such a system. In section \ref{sec:impurity_motion}, we present a statistical analysis of extensive numerical simulations of the system. In particular, we find that at large times the dynamics of an impurity in a finite temperature quantum fluid is akin to an Ornstein--Uhlenbeck process with a temperature-dependent friction coefficient, which we are able to explain. Finally, we exploit this information to show that for the sizes of the impurities considered, their motion is consistent with a scenario where the thermal excitations behave as a gas of waves rather than a continuum liquid.
\section{Finite temperature model}
\label{sec:model}
We use the Fourier truncated Gross-Pitaevskii model to describe a weakly interacting quantum fluid at finite temperature, with a repulsive impurity immersed in it \cite{ClusteringUmberto}. The Hamiltonian of the model is given by:
\begin{eqnarray}
H&=&\int\left( \frac{\hbar^2}{2m} |\nabla \psi |^2 +\frac{g}{2}|\mathcal{P}_{\rm G}[|\psi|^2]|^2\right)\,\mathrm{d} \mathbf{x} + \nonumber \\
&&\int V_\mathrm{I}(| \mathbf{x} - \mathbf{q} |)\mathcal{P}_{\rm G}[|\psi|^2]\,\mathrm{d} \mathbf{x} +\frac{\mathbf{p}^2}{2 M_{\rm I}} ,
\label{Eq:HGP}
\end{eqnarray}
where $\psi(\mathbf{x},t)$ is the bosonic field, $m$ is the mass of the constituting bosons and $g=4 \pi a_\mathrm{s} \hbar^2 /m $ is the self-interaction coupling constant, with $a_{\rm s}$ the bosons $s$-wave scattering length.
The bosonic field is coupled with an impurity of mass $M_{\rm I}$, described by its classical position $\mathbf{q}(t)$ and momentum $\mathbf{p}(t)=M_{\rm I}\mathbf{\dot{q}}(t)$. The impurity is modeled by a repulsive potential $V_\mathrm{I}(|\mathbf{x}-\mathbf{q}|)$, which defines a spherical
region centered at $\mathbf{q}(t)$ where the condensate is completely depleted.
Note that the functional shape of the potential $V_\mathrm{I}(|\mathbf{x}-\mathbf{q}|)$ is not important, provided that it is sufficiently repulsive to completely deplete the fluid. The relevant parameter is indeed the size of the depleted region, which in turn identifies the impurity radius $a_{\rm I}$. The Galerkin projector $\mathcal{P}_{\rm G}$ truncates the system imposing a UV cutoff in Fourier space: $\mathcal{P}_{\rm G} [\hat{\psi}_{\mathbf{k}}] = \theta(k_\mathrm{max}-|\mathbf{k}|)\hat{\psi}_{\mathbf{k}}$
with $\theta(\cdot)$ the Heaviside theta function, $\hat{\psi}_\mathbf{k}$ the Fourier transform of $\psi(\mathbf{x})$ and $\mathbf{k}$ the wave vector.
The time evolution equation of the wavefunction and the impurity are obtained straightforwardly by varying the Hamiltonian (\ref{Eq:HGP}):
\begin{equation}
i\hbar\frac{\partial \psi}{\partial t}=\mathcal{P}_{\rm G} \left[- \frac{\hbar^2}{2m} \nabla^2 \psi + g\,\mathcal{P}_{\rm G} [|\psi|^2]\psi+ V_\mathrm{I}(| \mathbf{x} -{\bf q}|)\psi\right], \label{Eq:GPE}
\end{equation}
\begin{equation}
M_{\rm I}\frac{\rm d \mathbf{\dot{q}}}{\rm d t}= - \int V_\mathrm{I}(| \mathbf{x} -{\bf q} |) \mathcal{P}_{\rm G}[\nabla|\psi|^2]\, \mathrm{d} \mathbf{x}. \label{Eq:Particles}
\end{equation}
Note that the projection of the density $|\psi|^2$ in Eq.\eqref{Eq:GPE} is a de-aliasing step that is necessary to conserve momentum \cite{GiorgioFiniteTempPRE} in the truncated equations. This procedure slightly differs with the Projected Gross--Pitaevskii model \cite{DavisFiniteTEmpBEC} as some high-momentum scattering processes are not considered in the FTGP framework.
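For concreteness, the following Python sketch (a minimal illustration on our part, not the integrator used for the results below; it assumes $\hbar=m=1$, the $2\pi$-periodic cubic grid used throughout this work, and a simple split-step update of the field) implements one time step of Eq.~\eqref{Eq:GPE} with the de-aliased density, together with the force integral of Eq.~\eqref{Eq:Particles}:
\begin{verbatim}
import numpy as np

def wavenumbers(n):
    # integer wavenumbers of the 2*pi-periodic grid, as three 3D arrays
    k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)
    return np.meshgrid(k, k, k, indexing="ij")

def ftgp_step(psi, V_imp, g, dt, k_max):
    # one split-step update of the truncated GP equation (hbar = m = 1)
    kx, ky, kz = wavenumbers(psi.shape[0])
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    mask = k2 <= k_max ** 2                              # Galerkin projector P_G
    half_kin = np.exp(-0.25j * k2 * dt)                  # exp(-i (k^2/2) dt/2)
    psi = np.fft.ifftn(half_kin * np.fft.fftn(psi) * mask)
    rho = np.fft.ifftn(np.fft.fftn(np.abs(psi) ** 2) * mask).real  # de-aliased
    psi = psi * np.exp(-1j * (g * rho + V_imp) * dt)     # nonlinear + impurity
    return np.fft.ifftn(half_kin * np.fft.fftn(psi) * mask)

def impurity_force(psi, V_imp, k_max):
    # minus the overlap of V_I with the projected density gradient
    n = psi.shape[0]
    kx, ky, kz = wavenumbers(n)
    mask = kx ** 2 + ky ** 2 + kz ** 2 <= k_max ** 2
    rho_hat = np.fft.fftn(np.abs(psi) ** 2) * mask
    dV = (2 * np.pi / n) ** 3                            # grid-cell volume
    return np.array([-np.sum(V_imp * np.fft.ifftn(1j * kq * rho_hat).real) * dV
                     for kq in (kx, ky, kz)])
\end{verbatim}
In an actual simulation, this force drives a standard update of $(\mathbf{q},\mathbf{p})$, and the potential $V_\mathrm{I}$ is re-centered at $\mathbf{q}(t)$ at every step.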
At zero temperature and without the impurity, Eq.(\ref{Eq:GPE}) can be linearized about the condensate ground state $\psi_0=|\psi_0|\exp{(-i\mu t/\hbar)}$, fixed by the chemical potential $\mu=g|\psi_0|^2$. The excitations of the condensate propagate with the Bogoliubov dispersion relation:
\begin{equation}
\omega_\mathrm{B}(k) = ck\sqrt{1+\frac{\xi^2k^2}{2}},
\label{Eq:bogo}
\end{equation}
where $k=|\mathbf{k}|$,
$c=\sqrt{g|\psi_0|^2/m}$ is the speed of sound and $\xi=\hbar/\sqrt{2gm|\psi_0|^2}$ defines the healing length at zero temperature. Note that the impurity completely depletes the condensate in the region where $V_{\mathrm{I}}>\mu$.
The Hamiltonian $H$ and the number of bosons $N=\int |\psi|^2\mathrm{d} \mathbf{x}$ are invariants of the FTGP model.
Thus, it possesses finite temperature absolute equilibrium solutions, distributed with the probability
\begin{equation}
\mathbb{P}[\psi,\mathbf{q},\mathbf{\dot{q}}]\propto e^{-\beta(H - \mu N)}.
\label{Eq:equilibrium}
\end{equation}
The concept of absolute equilibria of Fourier truncated equations was first introduced in the context of the Euler equation \cite{Lee1952,Kraichnan1967} and directly generalizes to FTGP \cite{GiorgioFiniteTempPRE}. Such equilibria are steady solutions of the associated Liouville equation. The Liouville equation describes the microcanonical evolution of the phase-space distribution function of an ensemble of states driven by Eqs. (\ref{Eq:GPE},\ref{Eq:Particles}).
Note that a state which solves Eqs. (\ref{Eq:GPE},\ref{Eq:Particles}) conserves the invariants $N$ and $H$, and the equilibrium distribution in Eq. \eqref{Eq:equilibrium} is nothing but the probability of picking one of these states at given inverse temperature $\beta$ and chemical potential $\mu$.
This is true whether the impurity is present in the system or not. The argument of the exponential in Eq. (\ref{Eq:equilibrium}) is a linear combination of the invariants $H$ and $N$, and $\beta$ is a Lagrange multiplier identified with the inverse temperature. Given a random initial condition with energy $H$ and number of bosons $N$, long-time integration of the equations (\ref{Eq:GPE},\ref{Eq:Particles}) will let the system evolve to an equilibrium state belonging to the distribution (\ref{Eq:equilibrium}). The temperature is not directly available as a control parameter, since such dynamics is microcanonical, but it is in one-to-one correspondence with the given conserved invariants \cite{DavisFiniteTEmpBEC}.
At finite temperature, many modes are excited and interact non-linearly. Such interactions lead to a spectral broadening of the dispersion relation, together with small corrections of the frequency. Overall, the dispersion relation can be well approximated by taking into account the depletion
of the condensate mode in the following manner \cite{ShuklaViscosity}:
\begin{equation}
\omega^T_\mathrm{B}(k) = ck\sqrt{n_0(T)+\frac{\xi^2k^2}{2}},
\label{Eq:bogoT}
\end{equation}
where $n_0(T)$ is the condensate fraction. We define it as
\begin{equation}
n_0(T) = \frac{\left\langle |\int\psi\,\mathrm{d}\mathbf{x}|^2 \right\rangle_T}{\left\langle |\int\psi\,\mathrm{d}\mathbf{x}|^2 \right\rangle_{T=0}},
\label{Eq:condensate}
\end{equation}
namely as the ratio between the occupation number of the zero mode at temperature $T$ and at temperature $T=0$. With such definition, the condensate fraction is normalized to be one at zero temperature. In this way, the depletion of the condensate due to the presence of the impurity is properly taken into account \cite{ClusteringUmberto}. The fraction of superfluid component $n_\mathrm{s}(T)=\rho_\mathrm{s}/\bar{\rho}$ and normal fluid component $n_\mathrm{n}(T)=\rho_\mathrm{n}/\bar{\rho}$,
where $\bar{\rho} = \frac{1}{L^3}\int m|\psi|^2\,\mathrm{d}\mathbf{x}$ is the average mass density, can be computed using a linear response approach \cite{ClarkDerrikSuper,FosterBKT,ClusteringUmberto}. They read, respectively:
\begin{equation}
n_\mathrm{n}(T)=\frac{\lim_{k\rightarrow 0}\chi_I(k)}{\lim_{k\rightarrow 0}\chi_C(k)},\qquad\quad
n_\mathrm{s}(T) = 1 - n_\mathrm{n}(T),
\label{Eq:chiratio}
\end{equation}
where $\chi_C(k)$ and $\chi_I(k)$ are respectively the compressible (longitudinal) and incompressible (transverse) coefficients
of the two-point momentum correlator:
\begin{equation}
\left\langle \hat{j}_i(\mathbf{k})\hat{j}_j(\mathbf{-k}) \right\rangle \propto\frac{k_ik_j}{k^2}\chi_C(k)+\left(\delta_{ij}-\frac{k_ik_j}{k^2}\right)\chi_I(k),
\label{Eq:momcorr}
\end{equation}
with $\hat{j}_i(\mathbf{k},t)$ the Fourier transform of the $i$-th component of the momentum density $j_i(\mathbf{x},t)=\frac{i\hbar}{2}\left[\psi\partial_i\psi^*-\psi^*\partial_i\psi\right]$.
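Numerically, the splitting in~\eqref{Eq:momcorr} amounts to a Helmholtz decomposition performed in Fourier space; the spectra $\chi_C(k)$ and $\chi_I(k)$ then follow by averaging $k^2|\hat{\mathbf{j}}_C|^2$ and $k^2|\hat{\mathbf{j}}_I|^2$ over spherical shells. A minimal Python sketch (an illustration on our part, with the grid conventions of the sketch above) reads:
\begin{verbatim}
import numpy as np

def helmholtz_split(jx, jy, jz):
    # split the momentum density into compressible (parallel to k) and
    # incompressible (transverse) parts; both are returned in Fourier space
    n = jx.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    k2[0, 0, 0] = 1.0                       # avoid 0/0; k = 0 has no direction
    jhat = [np.fft.fftn(j) for j in (jx, jy, jz)]
    proj = (kx * jhat[0] + ky * jhat[1] + kz * jhat[2]) / k2
    jC = [proj * q for q in (kx, ky, kz)]   # compressible (longitudinal) part
    jI = [h - c for h, c in zip(jhat, jC)]  # incompressible (transverse) part
    return jC, jI
\end{verbatim}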
\subsection*{Numerical methods and parameters}
In the numerics presented in this work, we integrate the system (\ref{Eq:GPE},\ref{Eq:Particles}) by using a pseudo-spectral method with $N_{\mathrm{res}}=128$ uniform grid points per direction of a cubic domain of size $L=2\pi$. We further set the UV cutoff $k_\mathrm{max}=N_{\mathrm{res}}/3$, so that, besides the Hamiltonian $H$ and the number of bosons $N$, the truncated system (\ref{Eq:GPE},\ref{Eq:Particles}) conserves the total momentum ${\bf P}=\int \frac{i\hbar}{2}\left( \psi {\bf \nabla}\psi^* - \psi^* {\bf \nabla}\psi\right)\mathrm{d} \mathbf{x}+\mathbf{p}$ as well (provided that initially $\mathcal{P}_{\rm G} [\psi]=\psi$ and $\mathcal{P}_{\rm G} [V_\mathrm{I}]=V_\mathrm{I}$) \cite{GiorgioFiniteTempPRE,GiuriatoReconnections}. In thermal states, the cutoff $k_\mathrm{max}$ plays an important role. The dimensionless parameter $\xi k_\mathrm{max}$ controls the amount of dispersion of the system and therefore the strength of the non-linear interactions of the BEC gas. The smaller its value, the stronger the interactions are. Note that, as scales of the order of the healing length have to be resolved numerically, it cannot be arbitrarily small. See for instance references \cite{GiorgioFiniteTempPRE,ShuklaViscosity} for further discussions. In this work we fix this parameter to $\xi k_\mathrm{max}=2\pi/3$. Note that in our results all the lengths are expressed in units of the healing length at zero temperature $\xi$ and the velocities in units of the speed of sound $c$ at zero temperature. In these units, the system size is $L=128\xi$.
The potential used to model the impurity is a smoothed hat-function $V_\mathrm{I}(r)=\frac{V_0}{2}(1-\tanh\left[\frac{r^2 -\eta_a^2}{4\Delta_a^2}\right])$. The impurity radius $a_{\rm I}$ is estimated at zero temperature by measuring the volume of the displaced fluid
$\frac{4}{3}\pi a_{\rm I}^3 = \int (|\psi_0|^2 - |\psi_\mathrm{p}|^2)\,\mathrm{d}\mathbf{x}$, where $\psi_\mathrm{p}$ is the steady state with one impurity. The impurity mass density is then $\rho_\mathrm{I} = M_{\rm I}/\left(\frac{4}{3}\pi a_{\rm I}^3\right)$. In all the simulations we fix $\mu=|\psi_0|=1$ and for the impurity potential $V_0=20\mu$ and $\Delta_a = 2.5\xi$. We consider an impurity of radius $a_{\rm I}=7.6\xi$ setting $\eta_a = 2\xi$ and an impurity of size $a_{\rm I}=12.7\xi$ setting $\eta_a = 10\xi$.
Note that, although the shape of the impurity potential is fixed, fluctuations of the impurity surface are allowed by the model.
Such fluctuations are shown in Fig.\ref{Fig:3Dtraj} (that will be commented in Section \ref{sec:impurity_motion}) as green contours of the fluid density at a low value around the spherical potential.
\begin{figure}
\includegraphics[width=.99\linewidth]{Fig1}
\caption{
(\textit{Color online}) Snapshots of the GP field
with an impurity of size $a_{\rm I}=7.6\xi$ at time $t=3056\xi/c$ \textbf{(a,b)}
and an impurity of size $a_{\rm I}=12.7\xi$ at time $t=7130\xi/c$ \textbf{(c,d)}
at temperatures $T=0.22\,T_\lambda$ \textbf{(a,c)}
and $T=0.52\,T_\lambda$ \textbf{(b,d)}.
The GP sound waves are rendered in blue, the dark sphere is the
impurity potential and the green surfaces are contours of the
GP density at $\rho/\bar{\rho}=0.15$.
The impurity trajectory is displayed as a solid line. }
\label{Fig:3Dtraj}
\end{figure}
We prepare separately the ground state with an impurity $\psi_\mathrm{p}$ (at zero temperature) and the FTGP states at finite temperature $\psi_T$, without the impurity. The first one is obtained by performing the imaginary time evolution of the equation (\ref{Eq:GPE}), while the second one is realized with the stochastic real Ginzburg--Landau (SRGL) protocol \cite{GiorgioFiniteTempPRE,ClusteringUmberto,ShuklaViscosity}, which allows one to control the temperature explicitly. The SRGL method is briefly recalled below. The initial condition for the FTGP simulations is then obtained as $\psi = \psi_\mathrm{p}\times\psi_T$. For our analysis, we considered $\sim 22$ different realizations for each of the $15$ studied temperatures and for each impurity. The initial velocity of the impurity is always set to zero and the temporal length of each realization is $\sim 9000\,\xi/c$. In all the statistical analysis presented in the following sections, we checked that including or excluding the data associated with the early times of the simulation does not change the results. The thermalization of the impurity will be studied explicitly in the next Section \ref{sec:impurity_motion}, but this fact already gives a first indication that the impurity reaches equilibrium with the thermal bath in the very early stages of the simulations.
We operationally define the condensation temperature $T_\lambda$ as the first point of the temperature scan at which the condensate fraction $n_0(T)$ goes to zero. The normal fluid fraction $n_\mathrm{n}(T)$ and consequently the superfluid fraction $n_\mathrm{s}(T)=1-n_\mathrm{n}(T)$ are evaluated numerically with the following protocol \cite{FosterBKT}.
At fixed temperature, we measure the angle-averaged incompressible and compressible spectra of the momentum correlator, respectively $\chi^{1d}_I(k)\propto\left\langle k^2|\mathbf{j}_I(\mathbf{k})|^2 \right\rangle$ and $\chi^{1d}_C(k)\propto\left\langle k^2|\mathbf{j}_C(\mathbf{k})|^2 \right\rangle$. We fit the logarithm of $\chi^\mathrm{1d}_I(k)/k^2$ and $\chi^\mathrm{1d}_C(k)/k^2$ with a cubic polynomial in the range $3\cdot 2\pi/L < k < 3k_\mathrm{max}/2$; we extrapolate the values of the fits at $k=0$ and finally divide them to get $n_\mathrm{n}(T)=\chi_I(k=0)/\chi_C(k=0)$. Such a method works well at low temperatures, while it is strongly affected by numerical noise at temperatures $T\gtrsim T_\lambda$ \cite{FosterBKT}. These last points are then simply assumed to be equal to zero.
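A minimal implementation of this extrapolation (an illustration on our part; the function name and arguments are ours, and the angle-averaged spectra are assumed to be precomputed on the wavenumbers \texttt{k}) reads:
\begin{verbatim}
import numpy as np

def normal_fraction(k, chiI_1d, chiC_1d, k_max, L=2 * np.pi):
    # cubic-in-k fits of log(chi/k^2), extrapolated to k = 0
    sel = (k > 3 * 2 * np.pi / L) & (k < 1.5 * k_max)
    fitI = np.polyfit(k[sel], np.log(chiI_1d[sel] / k[sel] ** 2), 3)
    fitC = np.polyfit(k[sel], np.log(chiC_1d[sel] / k[sel] ** 2), 3)
    chiI0 = np.exp(np.polyval(fitI, 0.0))   # extrapolated chi_I(k -> 0)
    chiC0 = np.exp(np.polyval(fitC, 0.0))   # extrapolated chi_C(k -> 0)
    return chiI0 / chiC0                    # n_n(T); n_s(T) = 1 - n_n(T)
\end{verbatim}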
Finally, note that in this work, if not explicitly specified, all averages are taken over realizations at a fixed temperature $T$. Moreover, because of isotropy, we treat each component of any vectorial quantity as a different realization of the same distribution.
\subsection*{Grand-canonical thermal states}
We recall here the SRGL protocol used to obtain equilibrium thermal states of the truncated GP equation.
We refer to Ref.\cite{GiorgioFiniteTempPRE} for further details about the method. The FTGP grand-canonical thermal states
obey the (steady) Gibbs distribution which coincides with Eq. (\ref{Eq:equilibrium}). A stochastic process that converges to a realization of this probability distribution is given by the following stochastic equation (in physical space):
\begin{eqnarray}
\hbar\frac{\partial \psi}{\partial t} &=& \mathcal{P}_{\rm G} \left[\frac{\hbar^2}{2m} \nabla^2 \psi + \mu\psi - g\,\mathcal{P}_{\rm G} [|\psi|^2]\psi - V_\mathrm{I}(| \mathbf{x} -{\bf q}|)\psi\right] \nonumber \\
&&+ \sqrt{\frac{2\hbar}{\beta L^3}} \mathcal{P}_{\rm G} [\zeta(\mathbf{x},t)],
\label{Eq:stoch}
\end{eqnarray}
where $\zeta(\mathbf{x},t)$ is a complex Gaussian white noise with zero mean and delta-correlated in space and time:
$\left\langle \zeta (\mathbf{x},t) \zeta^* (\mathbf{x}',t') \right\rangle = \delta(\mathbf{x} - \mathbf{x}') \delta(t-t')$. In principle, such a process is coupled with analogous equations for the impurity degrees of freedom \cite{ClusteringUmberto}. Here, we do not consider them, since we are interested in generating thermal states without impurities. As explained in the previous section, the impurity is added
afterwards to the thermal states in order to observe its dynamics according to the evolution equations (\ref{Eq:GPE},\ref{Eq:Particles}).
In the right hand side of Eq. \eqref{Eq:stoch} a deterministic term and a stochastic term compete against each other. The distribution which entails the balance between such fluctuations and dissipation is Eq. \eqref{Eq:equilibrium}, i.e. the steady solution of the Fokker--Planck equation associated to Eq. \eqref{Eq:stoch} \cite{GiorgioFiniteTempPRE}.
We define the temperature as $T=1/(k_\mathcal{N}\beta)$, where $k_\mathcal{N} = L^3/\mathcal{N}$ and $\mathcal{N} = \frac{4}{3}\pi k_\mathrm{max}^3$ is the number of Fourier modes in the system. With this choice, the temperature has units of energy density
and the intensive quantities remain constant in the thermodynamic limit, that is $k_\mathrm{max}\rightarrow \infty$
with $L$ constant. Finally, in order to control the steady value of the average density $\bar{\rho}$, the chemical potential is also dynamically evolved with the \emph{ad hoc} equation $\dot{\mu} = -\nu_\rho(\bar{\rho}-\bar{\rho}_\mathrm{t})$ during the stochastic relaxation. In this way, the system converges to the control density $ \bar{\rho}=\bar{\rho}_\mathrm{t}$ that we set equal to $m|\psi_0|^2=1$.
We finally mention that a similar approach, the stochastic GP model \cite{ProukakisFiniteTemperature}, can be used to generate and study thermal states. There, the stochastic relaxation (\ref{Eq:stoch}) is combined with the physical GP evolution (\ref{Eq:GPE}). However, unlike the FTGP model, the stochastic GP model is dissipative and has an adjustable parameter in which the interaction between the condensate and the thermal cloud is encoded.
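For reference, one explicit Euler--Maruyama step of Eq.~\eqref{Eq:stoch} can be sketched in Python as follows (an illustration on our part, with $\hbar=1$ and without the impurity term, consistently with the protocol described above; the optional control of $\mu$ is indicated as a comment):
\begin{verbatim}
import numpy as np

def srgl_step(psi, mu, g, beta, dt, k_max, L=2 * np.pi):
    # one Euler-Maruyama step of the stochastic real Ginzburg-Landau equation
    n = psi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    mask = k2 <= k_max ** 2
    P = lambda f: np.fft.ifftn(np.fft.fftn(f) * mask)    # Galerkin projector
    lap = np.fft.ifftn(-k2 * np.fft.fftn(psi) * mask)    # projected Laplacian
    rho = P(np.abs(psi) ** 2).real                       # de-aliased density
    drift = P(0.5 * lap + mu * psi - g * rho * psi)
    dV = (L / n) ** 3                                    # grid-cell volume
    eta = (np.random.randn(n, n, n) + 1j * np.random.randn(n, n, n)) / np.sqrt(2)
    # delta-correlated forcing: variance 1/(dV dt) per grid cell and time step
    noise = np.sqrt(2 * dt / (beta * L ** 3 * dV)) * P(eta)
    # optional density control: mu -= nu_rho * (rho.mean() - rho_target) * dt
    return psi + dt * drift + noise
\end{verbatim}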
\section{Impurity motion}
\label{sec:impurity_motion}
We perform a series of numerical simulations of the model (\ref{Eq:GPE},\ref{Eq:Particles}), varying the temperature and the size of the impurity. Typical impurity trajectories are displayed in Fig.\ref{Fig:3Dtraj} for two different temperatures, together with a volume rendering of the field and of the impurity. The motion of the impurity is clearly driven by a random force, due to the interaction with the thermal excitations of the condensate.
Before studying the stochastic dynamics of the impurity, we characterize some properties of the thermal states that will be used later. In Fig.\ref{Fig:Cond}.a we show the condensate fraction $n_0$, the superfluid component $n_\mathrm{s}$ and the normal fluid component $n_\mathrm{n}$ plotted against temperature.
\begin{figure}
\includegraphics[width=.99\linewidth]{Fig2}
\caption{
(\textit{Color online}) \textbf{(a)} Temperature evolution of the condensate fraction
(green solid line), superfluid fraction (dashed blue line)
and normal fraction (dotted red line) for simulations
without impurity.
The circles of corresponding colors refer to simulations
in the presence of an impurity of size $a_{\rm I}=12.7\xi$
and mass density $\rho_\mathrm{I}=\bar{\rho}$.
\textbf{(b)} Temperature evolution of the decorrelation time of the FTGP density gradients.
(\textit{inset}) Time evolution of the two-points correlators of the FTGP density gradients
(\ref{Eq:DensGradCorr})
for three different temperatures.}
\label{Fig:Cond}
\end{figure}
The lines refer to the simulations without the impurity, while the circles are obtained in the presence of the largest impurity considered ($a_{\rm I}=12.7\xi$). Almost no difference between the two cases is detected, since the volume occupied by the impurity is only $0.5\%$. Indeed, in Ref. \cite{ClusteringUmberto} it was shown that the condensate fraction starts to
increase at high temperatures if the impurity filling fraction is larger than $4\%$. We can therefore safely assume that the impurity has no impact on the statistical properties of the thermal fluctuations.
From the impurity Eq.~\eqref{Eq:Particles}, we observe that the quantum fluid interacts with the impurity via a convolution between the impurity potential and the density gradient. It is thus interesting to understand the typical correlation time of density fluctuations, in particular of their gradients.
In Fig.\ref{Fig:Cond}.b we compute the decorrelation time $\tau_{\mathrm{GP}}$ of the thermal excitations as a function of temperature.
Such a time is evaluated by performing an FTGP evolution of thermal states without the impurity and considering
the time correlator of one component of the density gradient:
\begin{equation}
C_{\partial\rho}(t) = \frac{ \left\langle \partial_i\rho(t_0)\partial_i\rho(t_0+t) \right\rangle
} { \left\langle (\partial_i\rho)^2 \right\rangle}.
\label{Eq:DensGradCorr}
\end{equation}
The averages in Eq. (\ref{Eq:DensGradCorr}) are performed over space and over different realizations. Three examples of the time evolution of this correlator, at three different temperatures, are shown in the inset of Fig.\ref{Fig:Cond}.b. They show a damped oscillating behavior and touch zero for the first time after a time $\sim 1\,\xi/c$. We estimate the decorrelation time $\tau_{\mathrm{GP}}$ as the time after which the correlator (\ref{Eq:DensGradCorr}) is always less than $1\%$. At timescales larger than $\tau_{\mathrm{GP}}$, we expect that the interactions between the impurity and the thermal excitations can be considered as random and rapid.
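For concreteness, the estimate of $\tau_{\mathrm{GP}}$ can be sketched in a few lines of Python; the snippet below uses synthetic data in place of the measured FTGP density-gradient signal, and all numerical values are illustrative.
\begin{verbatim}
import numpy as np

def decorrelation_time(signal, dt, threshold=0.01):
    # Normalized autocorrelation C(t) = <s(t0)s(t0+t)>/<s^2> of a
    # zero-mean signal; tau is the first lag after which |C(t)|
    # stays below `threshold` (the 1% criterion used in the text).
    sig = signal - signal.mean()
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    corr /= corr[0]
    above = np.where(np.abs(corr) >= threshold)[0]
    return (above[-1] + 1) * dt, corr

# Toy usage: a damped oscillating correlator, qualitatively as in Fig. 2b.
dt = 0.1  # time step in units of xi/c (illustrative)
t = np.arange(0.0, 200.0, dt)
rng = np.random.default_rng(0)
s = np.exp(-t / 10.0) * np.sin(2 * np.pi * t / 3.0)
s += 0.05 * rng.standard_normal(len(t))
tau_gp, _ = decorrelation_time(s, dt)
print(f"tau_GP ~ {tau_gp:.1f} xi/c")
\end{verbatim}
In the actual measurements the average is additionally taken over space and over realizations before the threshold criterion is applied.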
Before checking if this is the case, we verify explicitly whether the impurity reaches thermal equilibrium with the quantum fluid.
If the number of excitation-impurity interactions is large, the velocity of the impurity is expected to be normally distributed at equilibrium, in accordance with the central limit theorem. Indeed, we show this in Fig.\ref{Fig:PDF},
where the probability density function (PDF) for the single component of the impurity velocity is displayed.
\begin{figure}
\includegraphics[width=.99\linewidth]{Fig3}
\caption{
(\textit{Color online}) PDF of the single component velocity
of an impurity of size $a_{\rm I}=7.6\xi$ and mass density $\rho_\mathrm{I}=\bar{\rho}$, for different temperatures.
\textbf{(a)} Velocities normalized with the
speed of sound at zero temperature.
\textbf{(b)} Velocities normalized with the standard deviation.
Dotted black line is a Gaussian distribution with zero mean and unit variance.}
\label{Fig:PDF}
\end{figure}
Assuming ergodicity, the PDFs are computed by averaging over time as well as over realizations. Since we expect the impurity to be in thermal equilibrium with the surrounding GP fluid, the second order moment of its velocity should relax to a constant value, which is related to the temperature via the equipartition of energy:
\begin{equation}
\left\langle \dot{q}^2_i \right\rangle = \frac{k_\mathcal{N} T}{M_{\rm I}}.
\label{Eq:variance}
\end{equation}
The perfect agreement between Eq.~(\ref{Eq:variance}) and the numerical simulations is displayed in Fig.\ref{Fig:KBT}.
\begin{figure}
\includegraphics[width=.99\linewidth]{Fig4}
\caption{
(\textit{Color online}) Second order moment of the single component
velocity of impurities of size $a_{\rm I}=7.6\xi$ (red circles)
and $a_{\rm I}=12.7\xi$ (blue diamonds),
as a function of the temperature.
The mass density is $\rho_\mathrm{I}=\bar{\rho}$ for both.
(\textit{inset}) GP energy density versus temperature
(blue points). Orange dashed line is the equipartition line
$e_\mathrm{GP}=T$.}
\label{Fig:KBT}
\end{figure}
It confirms that the impurity is indeed in thermal equilibrium with the thermal bath.
Note that the linear scaling with temperature persists also at high temperatures, where the GP energies are not in equipartition anymore because of strong nonlinear interactions. This is not a contradiction, since the impurity is a classical object with
a simple quadratic kinetic energy. For comparison, the deviation from equipartition of the GP energy density {$e_\mathrm{GP} = (H-\mu N)/L^3+\mu^2/2g$} (without impurities) is reported in the inset of Fig.\ref{Fig:KBT}.
We consider now the evolution of the two-point impurity velocity correlator $C_v(t)$. If the collisions between the superfluid thermal excitations and the impurity are fast and random, we expect it to decay as
\begin{equation}
C_v(t) = \lim_{t_0\rightarrow\infty} \frac{ \left\langle \dot{q}_i(t_0)\dot{q}_i(t_0+t) \right\rangle - \left\langle \dot{q}_i \right\rangle^2 } { \left\langle \dot{q}_i^2 \right\rangle - \left\langle\dot{q}_i\right\rangle^2 } = e^{-\frac{t}{\tau_{\mathrm{I}}}},
\label{Eq:correlator}
\end{equation}
where $\tau_{\mathrm{I}}$ is the dynamical correlation time of the impurity velocity.
Specifically, the behavior (\ref{Eq:correlator}) should certainly hold at time-lags larger than the
decorrelation time of the GP excitations $\tau_{\mathrm{GP}}$, estimated in Fig.\ref{Fig:Cond}.b.
This scenario is confirmed by the measurements of $C_v(t)$, reported in Fig.\ref{Fig:Corr}
for the impurity of size $a_{\rm I}=7.6\xi$.
\begin{figure}
\includegraphics[width=.99\linewidth]{Fig5}
\caption{
(\textit{Color online}) Time evolution of the two-point velocity correlator for
the impurity of size $a_{\rm I}=7.6\xi$
and mass density $\rho_\mathrm{I}=\bar{\rho}$
in \textbf{(a)} Log-Lin scale and \textbf{(b)} Log-Log scale.
Different colors are associated to different temperatures
(same legend of Fig.\ref{Fig:PDF}).
Dotted lines are linear fits.
(\textit{inset}) Temperature evolution of the dynamical correlation time of the impurity.
}
\label{Fig:Corr}
\end{figure}
The exponential decay is evident for time-lags larger than $\sim 10\xi/c$ for all the temperatures.
According to the results mentioned so far, at sufficiently large timescales the interactions between the impurity and the thermal bath can be considered to be effectively fast, random and decorrelated.
Thus, it is natural to suppose that the impurity dynamics may be described by the
Ornstein-Uhlenbeck (OU) process \cite{VanKampen}:
\begin{equation}
M_{\rm I}\mathbf{\ddot{q}}= -\gamma\mathbf{\dot{q}} + \sqrt{\sigma^2}\mathbf{\zeta}_\mathrm{r}(t),
\label{Eq:OU}
\end{equation}
where $\mathbf{\zeta}_\mathrm{r}(t)$ is a (Gaussian) white noise in
time, i.e.
$\left\langle \mathbf{\zeta}_\mathrm{r}(t) \right\rangle = 0$
and
$\left\langle \mathbf{\zeta}_{\mathrm{r},i}(t_1)\mathbf{\zeta}_{\mathrm{r},j}(t_2) \right\rangle = \delta_{ij} \delta(t_1-t_2)$,
and $\sigma^2$ is related to the diffusion coefficient.
The term $-\gamma\mathbf{\dot{q}}$ is the drag force, with $\gamma$ a friction coefficient that in general may depend on temperature and on the impurity size. In particular, the friction should be directly related to the exponential decay timescale $\tau_{\mathrm{I}}$ of the correlator (\ref{Eq:correlator}) as $\gamma=M_{\rm I}/\tau_{\mathrm{I}}$. In Fig.\ref{Fig:Corr} we clearly see that the correlators decay faster at higher temperatures. The values of the correlation time $\tau_{\mathrm{I}}$ at different temperatures are obtained through linear fits of $\ln C_{v}(t)$, shown as dotted lines in Fig.\ref{Fig:Corr}.a. The decrease of $\tau_{\mathrm{I}}$ with temperature is then explicitly displayed in the inset of Fig.\ref{Fig:Corr}.b. Note that $\tau_{\mathrm{I}} \gg\tau_{\mathrm{GP}}$, consistently with the assumptions of the OU process.
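As an illustration of the fitting procedure, the following Python sketch extracts $\tau_{\mathrm{I}}$ from a synthetic exponential correlator; the parameter values are illustrative and do not correspond to the measured data.
\begin{verbatim}
import numpy as np

def fit_tau(t, C, t_min, t_max):
    # Linear least-squares fit of ln C(t) on [t_min, t_max]; for
    # C(t) = exp(-t/tau) the slope is -1/tau.
    mask = (t >= t_min) & (t <= t_max) & (C > 0)
    slope, _ = np.polyfit(t[mask], np.log(C[mask]), 1)
    return -1.0 / slope

rng = np.random.default_rng(1)
t = np.linspace(0.0, 300.0, 600)
C = np.exp(-t / 50.0) + 0.01 * rng.standard_normal(len(t))
tau_I = fit_tau(t, C, t_min=10.0, t_max=150.0)
M_I = 1.0            # impurity mass in code units (illustrative)
gamma = M_I / tau_I  # friction coefficient, as in the text
print(f"tau_I ~ {tau_I:.1f} xi/c, gamma ~ {gamma:.3g}")
\end{verbatim}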
The physical consequence of such behavior, according to the OU picture,
is that the friction $\gamma$
between the impurity and the fluid
is larger for larger temperatures.
We will dedicate the next section to the discussion
of the temperature dependence of $\gamma$.
We briefly comment on the short time-lags limit ($t\lesssim10\xi/c$),
where the measured correlator appears to decay fast and with the same slope
for all the temperatures.
This is particularly evident in the Log-Log plots in
Fig.\ref{Fig:Corr}.b.
In this regime,
the assumptions necessary for an OU regime to be
established are certainly not valid.
Indeed, we are looking at timescales
shorter than the decorrelation time of the thermal excitations
$\tau_{\mathrm{GP}}$, so that the collisions between
the excitations and the impurity cannot be considered random, rapid and decorrelated as in the forcing $\mathbf{\zeta}_\mathrm{r}(t)$ in (\ref{Eq:OU}).
It is worth noting that, for low temperatures, the velocity correlator partially recovers before the exponential decay.
This unusual feature may be a consequence of a lack of decorrelation due to the small fraction of thermal excitations at low temperatures, which prevents the emergence of a diffusive regime. Such a phenomenon requires
further investigation.
Another important prediction that can be obtained from the OU process is that the variance of the displacement $\delta_t q_i(t)=q_i(t+t_0) - q_i(t_0)$ obeys the law
\begin{equation}
\left\langle \left(\delta_t q_i\right)^2 \right\rangle = \frac{\sigma^2 M_{\rm I}}{\gamma^3}\left( \frac{\gamma }{M_{\rm I}}t - 1 + e^{-\frac{\gamma }{M_{\rm I}}t} \right).
\label{Eq:disp}
\end{equation}
Two regimes can be identified.
At short time-lags (but still large enough to consider the forcing $\zeta_{\mathrm{r}}(t)$ delta-correlated),
the displacement is ballistic
\begin{equation}
\left\langle \left(\delta_t q_i\right)^2 \right\rangle \underset{t \ll M_{\mathrm{I}}/\gamma}{\longrightarrow} \frac{\sigma^2}{2\gamma M_{\rm I}}t^2.
\label{Eq:ballistic}
\end{equation}
Conversely, after the dynamical relaxation a diffusive regime is established
\begin{equation}
\left\langle \left(\delta_t q_i\right)^2 \right\rangle \underset{t \gg M_{\mathrm{I}}/\gamma}{\longrightarrow} \frac{\sigma^2}{\gamma^2}t=2Dt,
\label{Eq:diffusive}
\end{equation}
where we have defined the diffusion constant $D=\sigma^2/2\gamma^2$.
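The crossover between the two regimes is easy to reproduce numerically. The following Python sketch integrates the OU model (\ref{Eq:OU}) with a simple Euler--Maruyama scheme and compares the measured squared displacement with Eq. (\ref{Eq:disp}); all parameter values are illustrative and are not taken from the FTGP simulations.
\begin{verbatim}
import numpy as np

# Euler-Maruyama integration of M_I dv = -gamma v dt + sigma dW, dq = v dt.
rng = np.random.default_rng(2)
M_I, gamma, sigma = 1.0, 0.1, 0.05            # illustrative values
dt, n_steps, n_real, burn = 0.01, 50_000, 100, 5_000

v = np.zeros(n_real)
q = np.zeros(n_real)
q_hist = np.empty((n_steps, n_real))
for i in range(n_steps + burn):               # burn-in: reach stationarity
    dW = np.sqrt(dt) * rng.standard_normal(n_real)
    v += (-gamma * v * dt + sigma * dW) / M_I
    q += v * dt
    if i >= burn:
        q_hist[i - burn] = q

for lag in (10, 100, 1_000, 10_000):          # ballistic to diffusive lags
    msd = np.mean((q_hist[lag:] - q_hist[:-lag]) ** 2)
    t = lag * dt
    th = (sigma**2 * M_I / gamma**3) * (gamma * t / M_I - 1.0
                                        + np.exp(-gamma * t / M_I))
    print(f"t = {t:7.1f}: MSD = {msd:.3g}, OU prediction = {th:.3g}")
\end{verbatim}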
Finally, recall that since in the OU process we also have $\left\langle \dot{q}_i^2 \right\rangle = \sigma^2/(2M_{\rm I}\gamma)=D\gamma/M_{\rm I}$, the diffusion coefficient in Eq.~\eqref{Eq:diffusive} can be related to the equipartition of energy at thermal equilibrium
(\ref{Eq:equilibrium})
through the Einstein relation
\begin{equation}
D=\frac{{k_{\mathcal{N}}T}}{\gamma}.
\label{Eq:EinsteinRel}
\end{equation}
The measurements of the average squared displacement for the impurity of size $a_{\rm I}=7.6\xi$ are shown in Fig.\ref{Fig:Disp}
for all the temperatures analyzed, and compared with the OU predictions.
\begin{figure}
\includegraphics[width=.99\linewidth]{Fig6}
\caption{
(\textit{Color online}) Time evolution of the averaged squared displacement for
the impurity of size $a_{\rm I}=7.6\xi$ for different temperatures.
Different colors are associated to different temperatures
(same legend of Fig.\ref{Fig:PDF}). Dashed green line is
the prediction (\ref{Eq:disp}), assuming the Einstein relation
(\ref{Eq:EinsteinRel}); dash-dotted black line
and dotted line are respectively the asymptotics
(\ref{Eq:ballistic}) and (\ref{Eq:diffusive}).
\textbf{(a)} Lin-Lin scale,
times normalized with $\xi/c$ and distances normalized
with $\xi$.
\textbf{(b)} Log-Log scale, times normalized with the correlation time
$\tau_{\mathrm{I}}$ and distances normalized with
the prefactor of (\ref{Eq:disp}).
(\textit{inset}) Measured diffusion coefficient as a function
of temperature compared with the Einstein relation (\ref{Eq:EinsteinRel}).}
\label{Fig:Disp}
\end{figure}
Once the squared displacement is normalized with
the prefactor of the prediction (\ref{Eq:disp}), and assuming
the Einstein relation (\ref{Eq:EinsteinRel}) to estimate the diffusion coefficient,
the separation between the ballistic regime and the
diffusive one is apparent (panel b of Fig.\ref{Fig:Disp}).
The transition happens at the measured values of the dynamical correlation time $t=\tau_{\mathrm{I}}$,
confirming the validity of the analysis of the
velocity correlator.
The diffusion coefficient $D$ is measured
as the slope of the squared displacement
in the diffusive regime and it is shown
in the inset of Fig.\ref{Fig:Disp}.a.
It is slightly larger
than the prediction given by the Einstein relation (\ref{Eq:EinsteinRel}).
Such a trend may be the signature of a memory effect
due to a stochastic forcing of the fluid on the impurity which is not perfectly
delta-correlated. For instance, it
could be traced back to the presence of coherent structures
in the fluid or to fluctuations of the impurity surface, due
to the actual interaction between the impurity and the thermal excitations.
\subsection*{Friction modeling}
In this section we show explicitly the behavior of the friction coefficient
observed in the numerical simulations and we give a phenomenological argument
to explain it. In Fig.\ref{Fig:Gamma}, the friction $\gamma$ is plotted as a function of
the temperature for the two impurity sizes analyzed
(red circles for the small one and blue diamonds for the large one).
Each value of $\gamma=M_{\rm I}/\tau_{\mathrm{I}}$ is estimated from
the measured decay time $\tau_{\mathrm{I}}$ of the impurity velocity correlator,
shown in the inset of Fig.\ref{Fig:Corr}.b.
\begin{figure}
\includegraphics[width=.99\linewidth]{Fig7}
\caption{
(\textit{Color online}) Friction coefficient $\gamma$ nondimensionalized
by $c M_{\rm I}/\xi$
as a function of the temperature,
for impurities of size $a_{\rm I}=7.6\xi$ (red circles)
and $a_{\rm I}=12.7\xi$ (blue diamonds),
with mass density $\rho_\mathrm{I}=\bar{\rho}$.
Dash-dotted lines are fits of the Epstein drag
(\ref{Eq:EpsteinDrag}) using the normal fluid density
$\rho_\mathrm{n}$.
Solid lines are fits of the Epstein drag using
the density of non-condensed modes $\bar{\rho}-\rho_0$.
(\textit{inset}) Average excitation velocity
$\left\langle v_\mathrm{g} \right\rangle$ (\ref{Eq:vave})
as a function of temperature.}
\label{Fig:Gamma}
\end{figure}
In general terms, the friction $\gamma$ depends on
the interaction between the impurity and the surrounding fluid.
For a classical fluid there are different regimes,
depending on the value of the Knudsen number
$\mathrm{Kn} = \lambda_\mathrm{mfp}/a_{\rm I}$,
where $\lambda_\mathrm{mfp}$ is the mean free path of the
fundamental constituents of the fluid.
If $\mathrm{Kn}\ll 1$, at the scale of the impurity,
the fluid can be effectively considered as a
continuous medium and the Navier--Stokes equations hold.
As a consequence, the drag force acting on the impurity
is the standard Stokes drag
$\mathbf{F}_\mathrm{d}=-6\pi a_{\rm I}\eta\mathbf{\dot{q}}$
\cite{Batchelor}, so that the friction is related to the viscosity $\eta$ as
\begin{equation}
\gamma = 6\pi a_{\rm I}\eta.
\label{Eq:StokesDrag}
\end{equation}
Instead, if $\mathrm{Kn}\gg 1$, the fluid behaves as a
dilute gas of free molecules.
In this case, the resistance of the impurity is well described
by the Epstein drag \cite{EpsteinDrag}:
\begin{equation}
\mathbf{F}_\mathrm{d}=-\gamma\mathbf{\dot{q}}, \quad \gamma=\frac{4\pi}{3}C_\mathrm{d}a_{\rm I}^2\rho_\mathrm{g}\left\langle v_\mathrm{g} \right\rangle
=C_\mathrm{d} \frac{M_{\rm I}\rho_\mathrm{g}\left\langle v_\mathrm{g} \right\rangle}
{a_{\rm I}\rho_{\rm I}},
\label{Eq:EpsteinDrag}
\end{equation}
where $\rho_\mathrm{g}$ is the mass density of the gas and
$\left\langle v_\mathrm{g} \right\rangle \gg|\mathbf{\dot{q}}|$
is the average velocity of the molecules.
The pre-factor $C_\mathrm{d}$ is a dimensionless constant that depends on the interaction between the impurity and the fluid molecules. In the case of elastic collisions of the fluid excitations (specular reflection),
a simple way of understanding the formula (\ref{Eq:EpsteinDrag}) is the following \cite{DragSimple}. If an object of mass $M_{\rm I}$ moves with velocity
$\mathbf{\dot{q}}$ in an isotropic gas of free molecules,
the momentum exchanged in the collision between a
surface element $\mathrm{d}A$ and a
molecule (assuming elastic collisions) is
$\Delta \mathbf{p} \sim -2m_\mathrm{g}|\mathbf{\dot{q}}|\cos{\theta}\mathbf{\hat{n}}$,
where $m_\mathrm{g}\ll M_{\rm I}$ is the molecule mass and $\theta$ is the angle between the object velocity and the outward normal
to the surface element $\mathbf{\hat{n}}$. Assuming that the typical speed of the molecules
$\left\langle v_\mathrm{g} \right\rangle$ is much larger than the object velocity,
the average number of collisions
in a time interval
$\Delta t$ is $\mathrm{d}n_\mathrm{coll} =n_\mathrm{g}\left\langle v_\mathrm{g} \right\rangle \Delta t \, \mathrm{d}A$,
which is the number density of molecules
$n_\mathrm{g}=\rho_\mathrm{g}/m_\mathrm{g}$
times the volume spanned by each molecule
$\left\langle v_\mathrm{g} \right\rangle \Delta t \, \mathrm{d}A$.
The infinitesimal force arising from the momentum exchange is therefore
$\mathrm{d}\mathbf{F}_\mathrm{d} = (\Delta \mathbf{p} /\Delta t)\,\mathrm{d} n_\mathrm{coll}$.
By symmetry, if the object is spherical, the force components orthogonal to its
direction of motion will cancel.
Accounting for this, the net drag force results from the integration of
$|\mathrm{d}\mathbf{F}_{\mathrm{d}}|\cos\theta\left(\mathbf{\dot{q}}/|\mathbf{\dot{q}}|\right)$
over half of the sphere surface. This leads precisely to Eq. \eqref{Eq:EpsteinDrag} with $C_\mathrm{d}=1$. Considering different reflection mechanisms leads to the same equation with a different value of the pre-factor $C_\mathrm{d}$. For instance, in the case of full accommodation of the excitations with the impurity surface one gets $C_\mathrm{d}=(1+\pi/8)\sim1.39$ \cite{EpsteinDrag}.
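For the reader's convenience, the integration can be written out explicitly. With $\mathrm{d}A=a_{\rm I}^2\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi$ and the component of the momentum exchange along the direction of motion equal to $2m_\mathrm{g}|\mathbf{\dot{q}}|\cos^2\theta$ per collision, one obtains
\begin{equation*}
|\mathbf{F}_\mathrm{d}| = 2\rho_\mathrm{g}\left\langle v_\mathrm{g} \right\rangle |\mathbf{\dot{q}}|\, a_{\rm I}^2 \int_0^{2\pi}\!\mathrm{d}\varphi \int_0^{\pi/2}\!\cos^2\theta\,\sin\theta\,\mathrm{d}\theta = \frac{4\pi}{3}\, a_{\rm I}^2\, \rho_\mathrm{g}\left\langle v_\mathrm{g} \right\rangle |\mathbf{\dot{q}}|,
\end{equation*}
which is indeed Eq. (\ref{Eq:EpsteinDrag}) with $C_\mathrm{d}=1$.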
The mean free path $\lambda_\mathrm{mfp} (T)$ in the
FTGP model has been recently estimated in Ref. \cite{ShuklaViscosity}
as the product of the group velocity of the excitations
and the nonlinear interaction time
(i.e. the reciprocal of the spectral broadening of the dispersion relation)
at a given temperature.
For $\xi k_\mathrm{max}=2\pi/3$, {the value used in this work,} the mean free path $\lambda_\mathrm{mfp}$
turns out to lie between $10\,\xi$ and $50\,\xi$
at temperatures $T<0.7\,T_\lambda$, thus
larger than the sizes of the
impurities studied here
(cf. Fig.~14 of Ref. \cite{ShuklaViscosity}).
As a consequence, we can treat the fluid as a gas of
free molecules and compare the measured friction with the
Epstein drag. In particular,
the role of ``gas molecules" in the GP fluid is
played by the thermal excitations.
Therefore, we can substitute the gas density
$\rho_\mathrm{g}$ in Eq. (\ref{Eq:EpsteinDrag})
with the density of the non-condensed modes
$\rho_\mathrm{g}=\bar{\rho}-\rho_0$, where $\rho_0=n_0\bar{\rho}$
or with the normal fluid density $\rho_\mathrm{g}=\rho_\mathrm{n}=n_\mathrm{n}\bar{\rho}$,
computed using the momentum density correlator \cite{FosterBKT}
(see Fig.\ref{Fig:Cond}).
The velocity of the excitations
$v_\mathrm{g} = \frac{\partial \omega_k}{\partial k}$
is averaged as:
\begin{equation}
\left\langle v_\mathrm{g} \right\rangle = \frac {\sum_{\mathbf{k}\in S_\mathbf{k}}n_\mathbf{k}\frac{\partial \omega_k}{\partial k}} {\sum_{\mathbf{k}\in S_\mathbf{k}}n_\mathbf{k}}= \frac{ \sum_{k=1}^{k_\mathrm{max}} n_k^{1d} \frac{\partial \omega_k}{\partial k}} {\sum_{k=1}^{k_\mathrm{max}} n_k^{1d}},
\label{Eq:vave}
\end{equation}
with $n_\mathbf{k}$ the occupation number of the mode
$\mathbf{k}\in S_\mathbf{k}=\{1\le|\mathbf{k}|\le k_\mathrm{max}\}$
and $n^{1d}_k=\sum_{|\mathbf{k}|=k}n_\mathbf{k}$ its sum over the spherical shell $|\mathbf{k}|=k$.
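As a sketch of how the average (\ref{Eq:vave}) can be evaluated in practice, the Python snippet below assumes the Bogoliubov dispersion relation $\omega_k = ck\sqrt{1+k^2\xi^2/2}$ for the GP excitations and a Rayleigh--Jeans-like placeholder spectrum; in the actual measurements $n_k^{1d}$ is obtained from the FTGP fields.
\begin{verbatim}
import numpy as np

c, xi, T = 1.0, 1.0, 1.0            # illustrative values
k_max = 85                          # illustrative cutoff
k = np.arange(1, k_max + 1, dtype=float)
omega = c * k * np.sqrt(1.0 + 0.5 * (xi * k) ** 2)  # Bogoliubov (assumed)
v_g = np.gradient(omega, k)                          # group velocity
# Placeholder angle-summed spectrum: equipartition weight T/omega_k per
# mode, times the ~k^2 modes contained in each spherical shell.
n1d = k**2 * (T / omega)
v_avg = np.sum(n1d * v_g) / np.sum(n1d)
print(f"<v_g> ~ {v_avg:.2f} c")
\end{verbatim}
Since the weight is dominated by high wave numbers, where $\partial\omega_k/\partial k > c$, the resulting average exceeds the speed of sound, in line with the inset of Fig.\ref{Fig:Gamma}.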
In Fig.\ref{Fig:Gamma}, the Epstein drag prediction (\ref{Eq:EpsteinDrag}) is compared with the numerical data.
Using either the normal fluid density (dash-dotted lines) or the density of non-condensed modes (solid lines), we get good agreement at low temperatures, with a fitted pre-factor $C_\mathrm{d}$ whose values are of the order of $0.1$. Note that in this way we are
implicitly assuming that the impurity-excitation interaction
is independent of temperature. The specific values of $C_\mathrm{d}$ are reported in the legend of Fig.\ref{Fig:Gamma}. They are consistent with a reasonable scenario in which thermal waves are much less efficient in transferring momentum to the impurity than the standard particle reflection mechanisms \cite{EpsteinDrag}.
We observe that $C_\mathrm{d}$ increases slightly with the impurity size (perhaps because of some variation of the impurity surface fluctuations) but is independent of temperature. Note that the precise determination of the radius dependence of $C_\mathrm{d}$ would require more extensive numerical simulations than those presented here.
In the inset of Fig.\ref{Fig:Gamma}, we show the temperature
dependence of the averaged excitation velocity (\ref{Eq:vave}), which turns out to be larger than the speed of phonons because it is dominated by high wave number excitations. Note that the measured friction starts to deviate from the prediction at high temperatures. One reason is that the mean free path of the GP excitations becomes of the same order as the impurity size, so that viscosity starts to play a role in the momentum exchange. A second cause may be that the impurity-excitation interactions are modified by the strong nonlinearity
of the GP waves, leading to a temperature dependence of the constant $C_\mathrm{d}$ in Eq. (\ref{Eq:EpsteinDrag}). Finally, note that a larger discrepancy with the measurements at high temperature is observed if the normal fluid
density is used. This is probably due to a lack of accuracy in the computation of $\rho_\mathrm{n}$ at high temperatures, but it also suggests that it may be more reasonable to identify the density of the excitations simply with that of the non-condensed modes.
\section{Discussion}
In this article we studied how the stochastic motion
of an active, finite-size and immiscible impurity
immersed in a GP quantum fluid changes when the temperature
is varied.
We demonstrated that the interaction with the thermal
excitations in the system always leads to
a fast thermalization of the impurity. At time-lags
larger than $10\xi/c$ the correlation function of
the impurity velocity shows an exponential decay, which
is steeper for higher temperatures. This and the impurity squared
displacement are reminiscent of an Ornstein--Uhlenbeck
process.
From the measurements of the velocity correlation we
extracted the temperature dependence of the friction
coefficient $\gamma(T)$. The clear result is that
the impurity does not experience the typical
Stokes drag present in a classical fluid. Indeed,
in the case of Stokes drag, the temperature dependence of the
friction (\ref{Eq:StokesDrag}) is through the viscosity $\eta$.
Since the viscosity has been shown to be slightly
decreasing with temperature in the FTGP model
\cite{ShuklaViscosity}, it cannot explain
the trend observed in Fig.\ref{Fig:Gamma}.
The reason is that the settings studied are associated
with large values of the Knudsen number, meaning that
at the scale of the impurity
the GP quantum fluid at finite temperature cannot be
considered as a continuous liquid.
On the contrary, describing phenomenologically
the system as a gas of dilute thermal
excitations reproduces the correct temperature
increment of the friction $\gamma(T)$.
Moreover, we observe a dependence of the friction on the impurity size
compatible with the quadratic scaling $\gamma\propto a_{\rm I}^2$ predicted by the
Epstein drag (\ref{Eq:EpsteinDrag}), despite some small deviations hidden in the prefactor $C_{\mathrm{d}}$.
In the case of Stokes drag, one would instead expect a linear scaling $\gamma\propto a_{\rm I}$, which is not in agreement with our data.
We stress that the picture outlined does not apply to the
particles typically used as probes in superfluid helium experiments
\cite{bewley2006superfluid,LaMantiaParticles}. Indeed, besides
liquid helium being a strongly interacting system, the typical
size of those particles is $4$ orders of magnitude larger than
the healing length. Thus, in that case
the Knudsen number is certainly
small enough to entail the standard Stokes drag.
However, a similar regime in terms of Knudsen number has been studied experimentally by using microspheres in liquid helium below $0.5\,\mathrm{K}$ \cite{Schoepe}. It has been observed that the drag is determined by the ballistic scattering of quasi-particles, and that the temperature dependence of the friction coefficient is given by that of the quasi-particle density.
Besides helium, we hope that our study may be relevant
for future BEC experiments, in which finite-size and immiscible
impurities can be produced in the strongly repulsive regime
of multi-component condensates
\cite{RicaRoberts},
or in the study of the impurity
dynamics in quantum fluids of light
\cite{CarusottoLight2014,MichelSuperLight}.
A possible follow-on of the present work is the
development of a self-consistent theory for the
interaction between the thermal excitations and the
impurity, which takes into account the dependence on
the wave numbers of the colliding waves. This could give
an analytical explanation to the small value of the prefactor $C_{\mathrm{d}}$ in Eq.
(\ref{Eq:EpsteinDrag}) compared to the classical Epstein drag
for elastic collisions.
Note that in a recent publication, the motion of a bright soliton moving in a thermal cloud of distinct atoms has been successfully modeled by using an OU dynamics \cite{OUsoliton}. In that case, the soliton is treated by using a wavefunction and the thermal (non-condensed) cloud as a reservoir. Although in our model the impurity is a rigid body with classical degrees of freedom, the result of \cite{OUsoliton} could inspire an analytical derivation of the OU dynamics for an impurity \eqref{Eq:OU}.
Moreover, the characterization of the motion of a
multitude of impurities in the FTGP system can be deepened,
expanding the findings of Ref. \cite{ClusteringUmberto}.
Finally, the fundamental problem of vortex nucleation
due to fast impurities has been thoroughly
investigated at zero temperature
\cite{ActiveWiniecki,BrachetCritVel,FrischCritVel}, but
few results are known in the finite temperature regime
\cite{WinieckiNucFT,BarenghiNucFT}. In particular,
the FTGP model coupled with impurities (\ref{Eq:HGP}) would
be a suitable framework to address the impurity-vortex
interaction at non-zero temperature.
\acknowledgments{
The authors are grateful to Dr. D. Proment for fruitful discussions.
The authors were supported by
Agence Nationale de la Recherche through the project GIANTE ANR-18-CE30-0020-01. GK is also supported by the EU Horizon 2020 Marie Curie project HALT and the Simons Foundation Collaboration grant Wave Turbulence (Award ID 651471).
Computations were carried out on the M\'esocentre SIGAMM hosted at the Observatoire de la C\^ote d'Azur
and the French HPC Cluster OCCIGEN through the GENCI allocation A0042A10385.
}
\section*{Introduction}
In many physical situations one is led to a family of finite dimensional
Hamiltonians defined over some parameter space (base) $B$ and would like to consider
and classify the level crossings that appear when one changes the parameters.
We came upon this general context when studying the $C^*$--geometry
of wire networks in general and the Double Gyroid network in particular \cite{kkwk,kkwk2}.
These networks are spatially periodic, and the base $B$ is a torus spanned
by the values of the quasimomenta. The dependence of energy eigenvalues on quasimomenta
determines the band structure; the simplest type of band crossing,
a conical intersection,
is often referred to as a Dirac point. Band crossings are interesting because they may
be responsible
for new physical phenomena (as well known, for instance, in the case of Dirac points in
graphene). Of particular interest are Dirac points in triply periodic materials, such as
the Gyroid network: they can be viewed as magnetic monopoles in the 3-dimensional parameter
space \cite{Berry} and as such are expected to be
stable under small deformations of the Hamiltonian.
In order to study various types of level crossings, we first widen the context to that
of a family of Hamiltonians over an arbitrary base,
and then apply the general results to our initial problem, in which the base is actually
the compact $n$--torus.
To be more specific, we
consider a differentiable map from an $n$--dimensional manifold
$B$ to the set of $k\times k$ Hermitian
matrices for a fixed $k$. The case of
interest will be $B=T^n$, the $n$-dimensional torus, and it can be thought of as the space of momenta.
Our results about the multiplicities in the spectrum are obtained using singularity
theory \cite{arnoldbook}.
They are twofold. First we give an analytic way of finding all Dirac points,
by considering the energy levels as the zero set of a smooth function $P$ on $B\times \mathbb R$.
Using
the Morse Lemma (Theorem \ref{morselemma}) we explain
that a Dirac point in this context and language is an isolated $A_1$ singularity with the
signature $(-\dots-+)$ (or $(+\dots+-)$ depending on the sign of $P$)
for the function $P$.
This effectively uses an ambient space to embed
the conical singularity.
Then by considering the energy levels as a singular fibration over the base space of
momenta, we classify the possible singularities in the fibers. The fibration in question is
the first projection $B\times \mathbb R\to B$.
For this we again use singularity theory, more precisely that of
the singularity $A_{k-1}$ and its miniversal unfolding. In particular,
we define a characteristic map of the base $B$ of the family to the base $\Lambda=\mathbb C^{k-1}$
of the miniversal unfolding.
From its image, which we call {\em the characteristic region}, one can read off many details,
such as at what points degeneracies occur, and what their nature is.
Degeneracies occur precisely over the points of intersection of the characteristic region
with the discriminant locus.
This point of view allows us to classify the possible degeneracies as those appearing in the
discriminant locus or swallowtail of the $A_{k-1}$ singularity.
These are known by a theorem of Grothendieck \cite{grothendieck} on
the singularities appearing in the fibers of the miniversal unfolding
associated to the singularities corresponding to Dynkin diagrams. Namely, these are precisely
those obtained by deleting vertices (and all incident edges) from the Dynkin diagram
of $A_{k-1}$, and hence are
of the form $(A_{n_1},\dots, A_{n_r})$ for suitable $n_i$. This corresponds to simultaneous
crossing of $n_1+1,\dots, n_r+1$ levels.
How these levels cross or equivalently how the singularity unfolds is encoded
in the characteristic map and can qualitatively be read off from the characteristic region.
One way to view this result is as a more precise and more general
version of the von Neumann--Wigner
theorem \cite{vNW}. Indeed for the full family of traceless $2\times 2$ Hamiltonians in their standard
parameterization over $\mathbb R^3$, we reproduce that the locus of degeneracy is of codimension $3$.
More precisely, there is only one point $0\in \mathbb R^3$ in the preimage of the characteristic map
restricted to the discriminant.
If the family is more complicated however, our methods tell us where degeneracies
can occur and how the levels cross. In this case the codimension $3$ is not universally true
any more.
Among the graphs we study, we exhibit families where the codimension of the degenerate locus
is $2$, $3$ or $1$. The precise dimension count comes from the
intersection of the discriminant with the characteristic region and the dimension
of the fibers of the characteristic map. For isolated singularities such as Dirac points
one needs that
the particular fiber of the characteristic map over the corresponding point
in the discriminant is 0--dimensional. This is true
in the von Neumann--Wigner case and this fact corresponds to the ``extra equations'' as we
explain.
Our main aim of application is
the commutative and
non--commutative $C^*$--geometry of wire networks \cite{kkwk,kkwk2} in their description
as graph Hamiltonians.
The input data for the general theory are a graph $\Gamma$, which is embedded in $\mathbb R^n$,
the crystal graph, together with a maximal symmetry group $L$ isomorphic to $\mathbb Z^n$ and a constant magnetic field 2--form given by a skew symmetric matrix
$\Theta$.
A fundamental role in the whole theory is played by the abstract quotient
graph $\bar \Gamma:=\Gamma/L$.
This graph, together with the induced data of the magnetic field and the
embedding, was used to define the Harper Hamiltonian and the relevant $C^*$
algebra $\mathscr B$. This algebra, which we called the Bellissard--Harper algebra,
is the {\em minimal} $C^*$--algebra generated by the magnetic translations
corresponding to $L$ and the Harper Hamiltonian $H$.
In \cite{kkwk2} we showed that $\mathscr B$ embeds into $M_k({\mathbb T}_{\Theta}^n)$, the $k\times k$ matrices
over the non--commutative torus ${\mathbb T}_{\Theta}^n$.
Here $\Theta$ contains the information of the $B$--field for the lattice $\Gamma$ and
$k$ is the number of vertices of $\bar \Gamma$ which is the number of sites in a primitive cell.
One intriguing aspect is that this
description is very useful even in the commutative case, that is in the
absence of a magnetic field. The $C^*$ approach yields a family of finite dimensional
Hamiltonians parameterized over a base torus $T^n=S^1\times \dots \times S^1$.
Namely, in the commutative case $\Theta=0$ and
$\mathbb T^n_0$ is the $C^*$ algebra of complex valued continuous
functions on the $n$--torus $T^n=S^1\times \dots \times S^1$. In standard
notation $\mathbb T^n_0=C(T^n)$.
Likewise, using the Gel'fand--Naimark theorem the Bellissard algebra $\mathscr B$ is also the
$C^*$ algebra of a certain compact Hausdorff space $X$; $\mathscr B=C(X)$, which as
we showed in \cite{kkwk}
is a branched cover over $T^n$. Physically the base $T^n$ parameterizes the momenta and
the space $X$ is given by the energies of $H$ as these momenta vary. In this sense
they give the energy bands of a one-electron system.
The central questions that arise are the following: At what points do we
have degenerate eigenvalues in the spectrum and which of these points
are Dirac points? And, can these be read off from the
graph $\bar \Gamma$ and its decorations, such as a spanning tree
or weights with values in some ${\mathbb T}_{\Theta}^n$?
Given a specific graph with $C^*$--algebra valued weights on the edges,
this is done by analyzing the function $P$ and the characteristic map from the base torus $T^n$
to the miniversal unfolding of the $A_{k-1}$ singularity.
We apply these considerations to the cases of the Double Gyroid wire network which was our
initial interest
and, to illustrate the concepts and the possible behaviors, we consider several other
examples along the way including the wire networks obtained from the double versions of the P and D
surfaces as well as the honeycomb lattice.
Our main result here is an analytic proof that the spectrum of the Gyroid has four singular fibers,
two of which are $A_2$ singularities and two of which are of the type $(A_1,A_1)$. We furthermore
show that the latter two are Dirac points.
This is the first analytic proof of this fact.
The relevant family of Hamiltonians also arises in a different context \cite{Avron}. There
the authors found numerically that the singular points lie on the diagonal
of $B=T^3$ (viewed as a cube with opposite faces identified) and obtained
the spectrum on the diagonal, see also \S\ref{gyroidsec} and \cite{sym}. We note that although this shows that
on the {\em one--dimensional sub--family} of Hamiltonians given by the diagonal, there are
two triple degeneracies and two two--fold double degeneracies, using
this information alone one
cannot conclude how the singular structure extends onto the full 3--dimensional torus. Our method gives this extension.
In particular, it allows us to show analytically that there are no degeneracies anywhere else in $T^3$.
Applying our program to the honeycomb lattice,
we rediscover the well--known Dirac points of
graphene \cite{Wallace}. Thus it can be hoped that the Dirac points
of the Gyroid will give rise
to important new material properties.
Another natural question in the case of
graph Hamiltonians,
which is addressed in a separate
paper \cite{sym}, is if there are symmetries
that can be derived from the graph setup which explain the degeneracies.
The short answer is that (a) the symmetry group must be extended beyond the
permutation symmetry of the graph to include phase transformations on the vertices
(``re-gaugings''), and
(b) in all the wire network cases corresponding to the 2 and 3 dimensional
self--dual graphs:
P, D, G and honeycomb these symmetries force all the singularities.
\section{Singularities in spectra of families of Hamiltonians}
\subsection{Singularities}
We briefly recall the pertinent definitions of singularity theory \cite{arnoldbook}.
In this theory one considers germs of smooth functions $f:\mathbb R^n \to \mathbb R$ with a critical point at ${\bf 0}$
and critical value $0$, up to the equivalence induced by germs of diffeomorphisms.
That is, the germ $(f,{\bf 0})$ is equivalent to a germ $(f',{\bf 0})$
if there exists a germ $(g,{\bf 0})$ of a diffeomorphism $g:\mathbb R^n\to \mathbb R^n$ with $g({\bf 0})={\bf 0}$ such that
$f=f'\circ g$.
A {\em singularity} is an equivalence class of such germs.
The germ $(f',0)$ is then also called the pull--back under $g$.
Analogous definitions hold in the case of complex singularities, that is for germs of
functions $f:\mathbb C^n\to \mathbb C$,
with diffeomorphisms replaced by biholomorphic maps.
A deformation of a germ $f:\mathbb C^n\to \mathbb C$ with base $\Lambda=\mathbb C^k$ is the germ at zero of a smooth map
$F:(\mathbb C^n\times \mathbb C^k,0)\to (\mathbb C,0)$ which satisfies $F(x,0)=f(x)$.
A deformation $F'$ is equivalent to $F$ if there is a smooth germ of a
diffeomorphism $g:\mathbb C^n\times\mathbb C^k\to \mathbb C^n\times \mathbb C^k$ at zero with $g(x,0)=x$ such that $F'=F\circ g$.
Given a deformation $F$ with base $\Lambda$, a smooth germ $\theta:(\mathbb C^r,0)\to (\Lambda,0)$ {\em induces} a deformation $F'$ via pull--back: $F'(x,\lambda'):=F(x,\theta(\lambda'))$.
A deformation $F$ of a germ $f$ is {\em versal} if every deformation $F'$ of $f$ is equivalent to a deformation induced from $F$. It is called {\em miniversal} if $\Lambda$ is of minimal dimension.
Again one can replace $\mathbb C$ with $\mathbb R$.
Also, one can pull--back not only germs, but actual pointed families by
the same procedure. Here the base spaces $\mathbb C^k$
are then simply replaced by smooth manifolds $B$. This yields the
same local theory.
\subsection{The spectrum as a zero locus}
\label{zerolocsec}
The basic starting point of our analysis in this section is that,
given a smooth family of Hamiltonians over a base $B$, the spectrum, viewed as a collection of
functions on $B$,
can alternatively be given as the zero locus of a single function $P$ on
$B\times \mathbb R$.
The map that associates to a point $b$ in the base $B$
the Hamiltonian $H(b)$ is a smooth
map $H:B\to Herm(k)$,
where $Herm(k)$ denotes the Hermitian $k\times k$ matrices. Since
the matrices depend differentiably on the parameters, the varying eigenvalues
give rise to a cover $\pi:X\to B$. Here a priori the cover is just one of
sets, but one can quickly show that this is a cover of topological spaces, e.g.\ using $C^*$ geometry, see \S\ref{coversection}.
The inverse image of a point $b\in B$ under $\pi$
is the set of eigenvalues of $H(b)$.
As these are the zeros of the characteristic polynomial
of $H(b)$, we obtain another description of $X$ as a subset in
an ambient differentiable manifold as follows.
Consider the trivial cover $B \times \mathbb C \to B$ and its real part
$B\times \mathbb R\to B$. On the space $B\times \mathbb C$, we
consider the function $P:B\times \mathbb C\to \mathbb C$ given by $P(b,z):=det(zId-H(b))$.
The zero locus of $P$ is exactly $X$.
We choose the normalization so that $P$ starts
with $+z^k$. Fiberwise, the zero locus
consists exactly of the eigenvalues of $H(b)$,
and since $H(b)$ is Hermitian, these eigenvalues are real;
hence $X$ is also the zero locus of $P(b,z)$ contained in $B\times \mathbb R$.
The description above gives $X$ as a singular manifold. Around points of $X$
at which $P$ does not have a critical value $0$,
$P^{-1}(0)$ is a smooth manifold.
The points at which $P$ is critical with critical value $0$ are singular.
Physically $P^{-1}(0)$ are just the energy levels.
A level crossing can occur only at critical points of $P$ with critical value $0$.
Namely, if we fix $(b,z)\in B\times \mathbb R$
and $0$ is a non--critical value in a small neighborhood $U$ of $(b,z)$
then $P|_U^{-1}(0)$ is a smooth manifold.
More precisely, if $\frac{\partial P}{\partial z}\neq 0$ the implicit function
theorem states that
for $(b_0,z_0)\in U$ with $P(b_0,z_0)=0$ there exists a
function $z=E(b)$ such that $P(b,E(b))=0$
and the graph $z=E(b)$ is the component of the
smooth manifold $P|_U^{-1}(0)$ containing $(b_0,z_0)$. In other words:
$E(b)$ is the dispersion relation.
Our main results describe the singular locus of the singular manifold $X$.
There are two approaches
we will take. First we can look at the singularities
of $X$ locally where we regard $X$ as embedded in $B\times \mathbb R$.
We use this to find Dirac points, see \S \ref{diracsection}.
Secondly, we can look at the cover $\pi:B\times \mathbb R\to B$
restricted to $X$, that is $\pi:X\to B$. The upshot is that locally around
a singular point $x$, $X$ is the deformation of the singularity
of the fiber over $\pi(x)$.
By Grothendieck, locally in a fiber
the only singularities that can appear are of type $A_n$ with $n\leq k-1$.
This point of view lets us classify these singularities and their deformations
by means of the miniversal unfolding of the $A_{k-1}$ singularity,
see \S\ref{swallowsection}.
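To illustrate the construction, the function $P$ can be computed symbolically. The following Python/SymPy sketch builds $P(b,z)=det(zId-H(b))$ for a simple two-parameter family of $2\times 2$ Hermitian matrices; the family is chosen purely for illustration and is not one of the wire-network Hamiltonians considered later.
\begin{verbatim}
import sympy as sp

b1, b2, z = sp.symbols('b1 b2 z', real=True)
w = sp.cos(b1) + sp.I * sp.sin(b2)      # illustrative off-diagonal entry
H = sp.Matrix([[0, w], [sp.conjugate(w), 0]])
P = sp.expand((z * sp.eye(2) - H).det())
print(sp.simplify(P))  # z**2 - cos(b1)**2 - sin(b2)**2
\end{verbatim}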
\subsection{Dirac points as Morse or $A_1$ singularities}
\label{diracsection}
From the point of view of material properties, one of the most interesting singularities that can occur are Dirac points.
In the general terminology, a Dirac point is a conical singularity in the spectrum when two levels cross such that the dispersion
relation is linear.
In our situation, this can be formalized in order to yield analytical tools to find and classify these points without solving the Eigenvalue equations. Instead of actually finding an expansion,
we will use a smooth ambient space to characterize Dirac points using singularity theory.
The standard cone is of the form $z^2=\sum_{i=1}^k t^2_i$. Here as above we fix the sign
of the coordinate $z$ to be positive.
We can rewrite this as $F({\bf t},z)=0$ where $F({\bf t},z)=z^2-\sum_{i=1}^k t_i^2$.
The characteristic features of the function are that $F$ has a critical point at
${\bf 0}=(0,\dots,0)$ with critical value $0$.
Moreover the Hessian, i.e.\ the matrix of its second derivatives, is a quadratic form with signature $(-\dots-+)$. In particular
its determinant $hess=det(Hess)\neq 0$. In general if $f$ has a critical point and $hess\neq 0$
the critical point is called a Morse critical point.
The most pertinent theorem about Morse critical points is the Morse Lemma.
\begin{thm}
\label{morselemma}
\cite{morsebook}
In a neighborhood of a nondegenerate critical point $p$ of a smooth function $F:M\to \mathbb R$ from an $n$--dimensional manifold $M$
there are coordinates
$x_i$ centered at $p$, such that in these coordinates
$$F=-x_1^2-x_2^2-\cdots -x_\lambda^2+x_{\lambda+1}^2+\cdots +x_n^2+f(p)$$
where, $\lambda$ is the index of the critical point.
\end{thm}
Now we see that a Dirac point as a germ is equivalent to the germ $(F,{\bf 0})$
above, whose index $\lambda$ is $n-1$, or $1$ if one switches the sign of $F$,
that is if one regards $-F=0$. Let us for the moment assume that the sign of $F$
is chosen such
that the signature is $(-\dots-+)$ in the order of coordinates above.
Such a germ is the pull--back under some diffeomorphism on the ambient space
and the cone itself is the zero set of the function $F$.
Notice that the characteristic properties of being a
Morse critical point are invariant
under the diffeomorphism.
The signature has a geometric meaning. It says that the cone
opens up on the $x_n$--axis.
The dispersion
relation in the new coordinates is just the pull--back and
hence also linear.
In our particular case, that of the fibration $B\times \mathbb R\to B$, the function $F=P$ and the role
of $x_{n}$ should basically be that
of the coordinate $z$ on the fiber of $B\times \mathbb R\to B$,
and $(x_1,\dots ,x_{n-1})$ should correspond
to the variables on the n--dimensional base, so that we indeed
get a physically sensible dispersion relation $z=E(b)$.
Since we are working with germs,
we do not distinguish between a local neighborhood in $B$ and the image
of its local charts in $\mathbb R^n$.
Now the cone has the right orientation
as long as the coefficient $\frac{\partial^2P}{\partial z^2}>0$, as this
states that the $z$--direction lies in the positive part of the cone.
The dispersion relation
can depend on the direction, that is, the cone might be deformed and tilted. The
tilting of the cone is given by the partial derivatives
$\frac{\partial^2P}{\partial z\partial b_i}$. The cone is
untilted if $\frac{\partial^2P}{\partial z\partial b_i}=0$ for the base coordinates $b_i$.
Rephrased, we need that $T(B\times \mathbb R)=TB\oplus T\mathbb R$ is a decomposition into
a negative definite and a positive definite subspace.
Recall that a stabilization of a singular germ $f({\bf z})$ is the function $f({\bf z})\pm w_1^2 \pm \dots \pm w_m^2$ of the variables $({\bf z},{\bf w})$. Thus the Morse singularities
are stably equivalent to the $A_1$ singularity given by the
germ $f(z)=z^2$.
\subsubsection{Singularity Characterization of Dirac Points}
Therefore, the Dirac points in the
spectrum are precisely the critical points
of $P$ with critical value $0$
which are stabilizations of an $A_1$ singularity
with signature $(-\dots-+)$ or $(+\dots+-)$ in the coordinates
$(b_1,\dots,b_n,z)$ such that $T(B\times \mathbb R)=TB\oplus T\mathbb R$ is a decomposition into
a negative definite and a positive definite subspace.
We can now take this as their {\em definition}.
Practically this means that we have to simultaneously
solve the equations $P=0,\nabla P=0$,
then check $hess\neq 0$, and moreover check that the signature is correct
by computing the principal minors and verifying that $\frac{\partial^2P}{\partial z^2}>0$.
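This recipe is easily automated. Continuing with the illustrative $2\times 2$ family from the sketch in \S\ref{zerolocsec}, the following SymPy snippet solves $P=0$, $\nabla P=0$ and inspects the Hessian at the solutions:
\begin{verbatim}
import sympy as sp

b1, b2, z = sp.symbols('b1 b2 z', real=True)
P = z**2 - sp.cos(b1)**2 - sp.sin(b2)**2
grad = [sp.diff(P, v) for v in (b1, b2, z)]
crit = sp.solve(grad, [b1, b2, z], dict=True)            # critical points
crit = [s for s in crit if sp.simplify(P.subs(s)) == 0]  # critical value 0
Hess = sp.hessian(P, (b1, b2, z))
for s in crit:
    print(s, 'Hessian eigenvalues:', list(Hess.subs(s).eigenvals()))
# A point with hess != 0, eigenvalue signs (-,-,+) and d^2P/dz^2 > 0 is a
# Dirac point; here (b1, b2, z) = (pi/2, 0, 0) is one such point.
\end{verbatim}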
\label{swallowsection}
\subsection{The spectrum as a pull--back from the miniversal unfolding of the $A_{k-1}$ singularity}
\subsubsection{The miniversal unfolding of $A_{k-1}$ and the swallowtail}
The $A_{k-1}$ singularity is the singularity defined by the function $f(z)=z^k$,
which has a critical point of order $k-1$ at $0$.
Its miniversal deformation \cite{arnoldbook} is
\begin{equation}
\label{Aneq}
F(a,z)=z^k+a_{k-2}z^{k-2}+\dots +a_0
\end{equation}
According to the general theory, the dimension of a miniversal deformation
coincides with the
dimension of the Milnor ring $\mathbb C[z]/(f')$ (also called the Milnor number)
and the terms which are added to $f$ are in 1-1 correspondence
with the vector space basis $(1,z,z^2,\dots,z^{k-2})$ of this ring.
The geometry of the situation is very similar to that of the cover $\pi$ introduced in
\S\ref{zerolocsec}. In particular, the function $F(a,z)$ is a function on
$\mathbb C^{k-1}\times \mathbb C$.
Let $Y:=\{(a,z):F(a,z)=0\}\subset \mathbb C^{k-1}\times \mathbb C$ and
consider the trivial bundle $\mathbb C^{k-1}\times \mathbb C\to \mathbb C^{k-1}$.
Again we get a branched
cover $\pi:Y\to \mathbb C^{k-1}$.
The inverse image under $\pi$ is the set of roots of the polynomial.
Generically there are $k$ of these
roots. However, over a subset $D\subset \mathbb C^{k-1}$ of the base space
the number of inverse images drops as
there are multiple roots. This set is known as the discriminant locus,
the swallowtail or the level bifurcation set and has been extensively
studied (see \cite{arnoldbook,GKZ}).
It is the zero set of the discriminant of the polynomial $F(z):=F(a,z)$, which is
considered as a polynomial with arbitrary coefficients.
The discriminant is a simple polynomial in the $a_i$
and its zero set has codimension $1$ \cite{GKZ}.
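For instance, for the $A_2$ unfolding the discriminant is the classical cusp; a one-line SymPy check (illustrative) gives:
\begin{verbatim}
import sympy as sp

z, a0, a1 = sp.symbols('z a0 a1')
F = z**3 + a1 * z + a0            # miniversal unfolding of A_2
print(sp.discriminant(F, z))      # -4*a1**3 - 27*a0**2
\end{verbatim}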
\begin{figure}
\includegraphics[width=.3\textwidth]{A2singularity.pdf}
\hspace{1cm}
\includegraphics[width=.5\textwidth]{A3singularitysm.pdf}
\caption {Zero locus in the $A_2$ and $A_3$ singularities}
\label{sing}
\end{figure}
Pictures for this locus in the $A_2$ and $A_3$ are shown in Figure \ref{sing}.
The surface $D$ in the $A_3$ singularity is known as the swallowtail.
In general $D$ goes by the names of discriminant, level-bifurcation set
or also again as the swallowtail.
\subsubsection{The spectrum as a pull--back}
\label{mainthmsec}
Consider the trivial bundle $B\times \mathbb C\to B$ as before
and let $P(b,z)$ be defined as before. We expand
\begin{equation}
\label{peq}
P(b,z)=z^k+a_{k-1}(b)z^{k-1}+a_{k-2}(b)z^{k-2}+\dots +a_0(b)
\end{equation}
where now the coefficients $a_i:B\to \mathbb R$ are real since the matrix $H(b)$
is Hermitian. This has great similarity to Eq.\ (\ref{Aneq}), except for
the second leading term not vanishing.
However a simple invertible
smooth transformation $s:z\mapsto z-a_{k-1}/k$ yields a polynomial of this type.
In our setup, the transformation $s$ gives
a diffeomorphism $g:B\times \mathbb R \to B\times \mathbb R$
given by $(b,z)\mapsto (b,z -a_{k-1}(b)/k)$. This gives an
equivalence between $P$ and $\hat P:=P\circ g$.
Now $\hat P$ expands as
\begin{equation}
\label{phateq}
\hat P(b,z)=z^k+\hat a_{k-2}(b)z^{k-2}+\dots +\hat a_0(b)
\end{equation}
Let $\Xi=(\hat a_0,\dots,\hat a_{k-2}):B\to \mathbb R^{k-1}$;
then the miniversal
unfolding $F$ pulls back via $\Xi$
to the deformation $\Xi^*(F):B\times \mathbb C\to \mathbb C$ given
by $\Xi^*(F)(b,z)=F(\Xi(b),z)=\hat P(b,z)$. In other words $\hat P$ is the
deformation induced from the miniversal deformation $F$ via $\Xi$.
We define {\em the characteristic map} to be the coefficient map
$\Xi:B\to \mathbb C^{k-1}$ and the {\em characteristic region $R$}
to be the image of $\Xi$.
Let $\Xi^*(Y)\subset B\times \mathbb C$ be the zero locus of $\Xi^*(F)$,
then we get a pullback of
the map $\pi$: $\Xi^*(\pi):\Xi^*(Y)\to B$ by restricting the projection.
Notice that if the Hamiltonians $H(b)$ are traceless then $a_{k-1}\equiv 0$ and
$P=\hat P$.
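In practice the characteristic map is read off from the coefficients of the characteristic polynomial after the shift $s$. A small SymPy sketch for an illustrative $2\times 2$ family (again not one of the wire-network Hamiltonians) is:
\begin{verbatim}
import sympy as sp

b1, b2, z = sp.symbols('b1 b2 z', real=True)
w = sp.exp(sp.I * b1) + sp.exp(sp.I * b2)   # illustrative entry
H = sp.Matrix([[1, w], [sp.conjugate(w), 1]])
k = 2
Pz = sp.Poly(sp.expand((z * sp.eye(k) - H).det()), z)
a = Pz.all_coeffs()                         # [1, a_{k-1}, ..., a_0]
# the shift z -> z - a_{k-1}/k removes the subleading term
Phat = sp.Poly(sp.expand(Pz.as_expr().subs(z, z - a[1] / k)), z)
Xi = [sp.simplify(c) for c in Phat.all_coeffs()[2:]]
print(Xi)   # [-2*cos(b1 - b2) - 2] for this family
\end{verbatim}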
Summing up we have the following:
\begin{thm}
\label{mainthm}
The branched
cover $X\to B$ is equivalent via $g$ to the pull back of
the miniversal unfolding of the $A_{k-1}$
singularity along the characteristic map $\Xi$.
Moreover if the family of Hamiltonians is traceless, the cover
is the pull--back on the nose.
\end{thm}
\begin{cor}
The possible degeneracies are pull-backs of
those which appear in the miniversal unfolding of $A_{k-1}$.
Moreover, these singularities occur in the fibers over points of the real part of the discriminant.
\end{cor}
Since $A_{k-1}$ is a simple singularity the following theorem of Grothendieck applies.
\begin{thm}\cite{grothendieck}
The types of singularities which appear in the swallowtail of a simple singularity are exactly those
corresponding to the Dynkin diagrams obtained by deleting vertices and all the edges incident
to these vertices in the Dynkin diagram of the original singularity.
\end{thm}
If the resulting diagram is disconnected this means that there are several critical points with critical value $0$ in the fiber.
In fact there is a stratification of the swallowtail $\Sigma$ into strata $\Sigma(X_1,\dots, X_n)$
according to the types $X_i$. The above theorem then states which strata are non--empty.
If we take the $A_k$ Dynkin diagram and delete $l$ vertices, there are at most $l+1$ connected components,
all of type $A_r$ for some $r$, and the sum of their Milnor numbers is at most $k-l$. Deleting
vertices at the ends or next to each other, we can make the individual Milnor numbers smaller.
\begin{cor}
For nonloop graphs, the only possible types of singularities in the spectrum are $(A_{r_1}, \dots, A_{r_s})$ with $\sum r_i\leq k-s$.
\end{cor}
\begin{ex} In the unfolding of the $A_3$ singularity, we have an $A_3$ singularity at the origin
and $A_1$ singularities along the smooth part of the swallowtail, corresponding
to deleting two vertices of $A_3$. Along the cusps of the swallowtail there are $A_2$ singularities,
corresponding to the triple degeneracy
of the roots associated with the two right or the two left vertices, and over the double points there are $(A_1, A_1)$ singularities, corresponding to deleting the middle vertex of $A_3$.
\end{ex}
\begin{cor}
Near any singular point $x\in X$ of $P$ there is a
neighborhood of $x$ which is a deformation of
an $A_r$ singularity for some $r\leq k-1$.
\end{cor}
\begin{proof}
If $x$ is a singular point, then $y=\Xi(\pi(x))$ lies on the discriminant.
Picking a neighborhood
of $y$ and pulling it back to $B$ and to $X$, we obtain the desired deformation
by restricting to the component that $x$ lies in.
\end{proof}
\subsection{Characteristic region}
Since the families we consider are given by Hermitian matrices,
they have real eigenvalues. Hence, if $disc$ is the discriminant
function of $F$, then $disc\circ\Xi\geq 0$, as this function
is the product of the squared differences of the eigenvalues. Thus, we
get that the characteristic region is contained in the locus of $\Lambda$
over which the discriminant is non--negative.
Notice that
if $B$ is compact connected, then the image under $\Xi$ of $B$
is compact connected. This will be the case for graph Hamiltonians
where $B=T^n$; hence
the characteristic region will then also lie in the closure of a component of
$\mathbb R^{k-1}\setminus (D\cap \mathbb R^{k-1})$
over which the discriminant is positive.
Since the discriminant is a simple polynomial,
the sign changes when crossing the discriminant locus.
This entails that the intersection of $D$ with the characteristic region
is only along its boundary $Bd(R)$.
Thus if $p$ is an interior point of $R$
then all the fibers of $X$ over the inverse images
of $p$ under $\Xi$ are non--singular
and moreover there is a whole non--singular neighborhood of each fiber.
The fibers of $X$ that are singular all lie over points $b$ whose image
$\Xi(b)$
sits in $D\cap R\subset Bd(R)$.
Thus the characteristic region gives a useful aid in studying the singularities that occur.
If $R\cap D=\emptyset$ then there are no singularities.
In low dimensions this is also
a great visualization tool.
Notice that if $B$ is connected, isolated intersection points only occur for
constant maps, which means that there are no degrees of freedom.
This cannot happen in the crystal/wire case.
If there are non--isolated intersection points,
then their inverse images are singularities.
Namely, by the above, there are nearby fibers,
where the number of pre--images is $k$.
Over a fiber in a component with an $A_r$ singularity,
we hence know that there are transversal directions,
where one Eigenvalue splits into $r+1$ different pre--images. Or
going into the singularity $r+1$ energy level coalesce.
Thus over a point in the stratum $(A_{n_1},\dots, A_{n_l})$ there are
$k$ crossings of $n_1+1,\dots,n_l+1$ levels, respectively.
\subsection{Consistencies and necessary conditions}
There are several consistencies which one might exploit
for analytic or numerical solutions.
Any pre--image $b$ of a point $p$ in the boundary
of $R$ has to satisfy that $J_{\Xi}(b)$,
the Jacobian of $\Xi$ at $b$, does not have maximal rank.
This is of course not a sufficient condition.
We know that to get a singular point we need that $J_{\Xi}(b)=0$.
This again fits well with the
fact that the discriminant restricted to $R$ is $\geq 0$,
so that $disc\circ \Xi$ has a vanishing Jacobian there.
In order for an isolated singularity at $x\in X$, such as a Dirac point,
to occur, a necessary condition is that the fiber
$\Xi^{-1}(\Xi(\pi(x)))$ is discrete.
This takes care of the vertical direction,
but of course there should also be no curve through $x$
transversal to the fibers mapping to the discriminant.
All these types of behaviors can be found in the examples we give.
Another nice consistency check is given by the discriminant of $P$ considered
as a polynomial in $z$.
Since any singular point $x$ in the spectrum is a critical
point with critical value zero, we must have $P(x)=P_z(x)=0$,
which means that the discriminant satisfies $disc(P)(\pi(x))=0$.
Denoting the discriminant of the $A_{k-1}$ singularity by $disc$ as well,
we have $disc(P)(\pi(x))=disc(\Xi(\pi(x)))=0$.
\subsection{Standard von Neumann--Wigner Example}
We consider the family of Hamiltonians $H(a,b,c)=a\sigma_x+b\sigma_y+c\sigma_z$,
where $\sigma_x,\sigma_y,\sigma_z$ are the Pauli matrices.
This gives us a family with base $\mathbb R^3$. It is the full family of traceless Hermitian $2\times 2$ matrices.
The usual interpretation of von Neumann--Wigner is that for a single level crossing one can reduce to this
$2\times 2$ family. However, this is basically only true in a ``generic'' or abstract setting and not for any arbitrary particular family.
The original article \cite{vNW} does not claim this, but rather computed the co--dimension of the space of Hermitian matrices with degenerate
eigenvalues in the whole space of Hermitian matrices. This is where the pure dimension count takes place.
In our setting the calculation proceeds as follows.
The function $P$ on $\mathbb R^3\times \mathbb R$ is given by $P(a,b,c,z)=z^2-a^2-b^2-c^2$. The singularity is
$A_1$ which has a one dimensional base $\Lambda=\mathbb R$. The characteristic map is the map $\Xi:(a,b,c)\mapsto -a^2-b^2-c^2$.
The discriminant is just the point $0\in \mathbb R$ and we see that we have a level crossing over $\Xi^{-1}(0)=(0,0,0)$.
And indeed, the inverse image is zero dimensional and the codimension of its locus is $3$.
All other fibers of $\Xi$ are of codimension $1$ as one would expect from just a count of equations.
Here we neatly see how the na\"ive equation count fails over the special fiber.
The extra dimension drop can be explained using Picard--Lefschetz theory as a vanishing sphere.
Likewise the ``diabolical'' nature of these points ---that is the behavior of wave functions when moved around the conical singularity---
can be explained via the classical monodromy operator \cite{arnoldbook}.
The fact that the singularity is conical can readily be checked in our framework. Indeed $P$ has an isolated critical point at $(0,0,0,0)$
with value $0$ and signature $(---+)$ of the Hessian.
Finally, if one looks at the even larger family, $H(a,b,c,d)=a\sigma_x+b\sigma_y+c\sigma_z+d\,Id$ of all Hermitian $2\times 2$ matrices,
then one gets a family over $\mathbb R^4$ with $P(a,b,c,d,z)=z^2-2dz+d^2-a^2-b^2-c^2$. So in this situation one has to use the
shift $s: z\to z+d$ upon which $\hat P(a,b,c,d,z)=z^2-a^2-b^2-c^2$. The characteristic map $\Xi(a,b,c,d)=-a^2-b^2-c^2$,
and we see that singularities are over $\Xi^{-1}(0)=(0,0,0,d)$ so that the fiber is now one dimensional, but
still of codimension $3$. The singular locus in the spectrum is given by a conical singularity crossed a line and hence
not isolated. Indeed the Jacobian of $P$ vanishes along the line $(0,0,0,d,d)$ with critical value $0$
and the Hessian of $P$ is degenerate at these points.
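As an independent sanity check, this small computation can be reproduced symbolically. The following sketch is our own illustration (assuming the sympy library; it is not part of the original argument):
\begin{verbatim}
import sympy as sp

a, b, c, d, z = sp.symbols('a b c d z', real=True)
sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])
sz = sp.Matrix([[1, 0], [0, -1]])

H = a*sx + b*sy + c*sz + d*sp.eye(2)
P = (z*sp.eye(2) - H).det()        # characteristic polynomial det(z*Id - H)
print(sp.expand(P))                # z**2 - 2*d*z + d**2 - a**2 - b**2 - c**2
print(sp.expand(P.subs(z, z + d))) # shifted polynomial: z**2 - a**2 - b**2 - c**2
\end{verbatim}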
\section{Graph Hamiltonian and Wire network setup}
We now discuss the families that come about by considering Hamiltonians obtained
from finite graphs with (commutative) $C^*$ algebra weights on the edges. These
in turn arise from wire networks of real materials. Here the $C^*$--algebra in question
is the noncommutative n--torus ${\mathbb T}_{\Theta}^n$ where $\Theta$ is a skew symmetric matrix that
encodes the commutation relations of the $n$ unitary generators $U_i$ of ${\mathbb T}_{\Theta}$.
Physically it corresponds to a constant magnetic field $B$.
The initial setup for graph Hamiltonians works with a general $C^*$--algebra $\mathscr A$,
the relevant Hilbert space being an $\ell^2$ space.
In case $\mathscr A$ is commutative, by the Gel'fand--Naimark theorem, we get another description
of the algebra that $\mathscr A$ and the Hamiltonian generate
as the set of levels of a family of finite dimensional Hamiltonians parameterized over a base.
For concreteness, we will set ${\mathscr A}={\mathbb T}_{\Theta}$, but
also comment on how to treat the general case.
\subsection{Hamiltonian from a finite graph}
In order to set up the general theory for graph Hamiltonians,
we fix a finite graph $\bar \Gamma$,
a rooted spanning tree $\tau$ of $\bar \Gamma$, an order $<$ of the vertices of $\bar\Gamma$ such that
the root of $\tau$ is the first vertex, a skew symmetric matrix $\Theta$, and a morphism
$w:\{\text{Directed edges of } \bar\Gamma\}\to {\mathbb T}_{\Theta}^n$ which satisfies the following
\begin{enumerate}
\item $w(\vec{e})=w(\cev{e})^*$
if $\vec{e}$ and $\cev{e}$ are the two orientations of an edge $e$.
\item $w(\vec{e})w(\cev{e})=1$
\item $w(\vec{e})=1\in {\mathbb T}_{\Theta}^n$ if the underlying edge $e$ is in the spanning tree.
\end{enumerate}
Let $k$ be the number of vertices of $\bar\Gamma$. We will enumerate
the vertices $v_0,\dots, v_{k-1}$ according to their order; $v_0$ being the root.
Given this data, the Hamiltonian $H=H(\bar\Gamma,\tau,<,w)\in M_k(\TTheta^n)$ is the $k\times k$ matrix whose
entries in ${\mathbb T}_{\Theta}^n$ are
\begin{equation}
H_{ij}=\sum_{\text{directed edges $\vec{e}$ from $v_i$ to $v_j$}} w(\vec{e})
\end{equation}
\subsection{Family in the commutative case}
\label{coversection}
If $\Theta=0$ we can consider these Hamiltonians
as a family of Hamiltonians over the n--torus $T^n$ as follows.
First $\mathbb T^n_0$ is the $C^*$ algebra of continuous $\mathbb C$--valued functions on $T^n$ via
the Gel'fand--Naimark correspondence.
Considered as a space, each point of $\mathbb T^n_0$ is given by a character, that is, a $C^*$--algebra morphism
$\chi:\T^n_0\to \mathbb C$. To each such point, we associate the Hamiltonian $\hat \chi(H)\in M_k(\mathbb C)$
where $\hat \chi$ is the natural lift of $\chi$ to the matrix ring. Specifically,
\begin{equation}
(\hat \chi(H))_{ij}=\chi(H_{ij})
\end{equation}
The correspondence between points and characters is as follows.
Each point $t\in T^n$
gives rise to the evaluation map $ev(t):C^*(T^n)\to \mathbb C$, which sends a function $f$ to its value $f(t)$ at $t$; this map is a character.
The Gel'fand--Naimark theorem asserts that the correspondence $t\mapsto ev(t)$ is 1--1.
Using this correspondence, we get a Hamiltonian $H(t)$ for each $t$. Physically if we think that $T^n$ parameterizes
momenta, we obtain $H(t)$ by just plugging in the given momenta.
Using the formalism we developed, we get a topological cover $X\to T^n$ where the points over a base point $t$ are
the eigenvalues of $H(t)$ and furthermore a realization of this cover as a subspace
in the manifold $T^n \times \mathbb R$ and all of our analysis applies.
\subsubsection{$C^*$--geometry}
One can understand the topological cover $X\to T^n$ in $C^*$--geometry which yields
the basic connection of our analysis in this section to the previous one. Consider $\mathscr B_0\subset M_k(\T^n_0)$,
the algebra generated by $H\in M_k(\T^n_0)$ and the diagonal embedding of $\T^n_0$ into $M_k(\T^n_0)$ as scalars.
This algebra is still commutative and
again by
applying the Gel'fand--Naimark theorem, we obtain a compact Hausdorff space $X$, such that $\mathscr B_0$ is $C^*(X)$.
The main point is that
the cover of the torus given by the $C^*$ analysis from the inclusion $\T^n_0\to \mathscr B_0$ (see \cite{kkwk})
is exactly the cover $\pi:X\to T^n$ considered in the last section.
\begin{rmk}
One can readily generalize this situation to any commutative unital $C^*$ algebra $\mathscr{A}$. We then
get a Hamiltonian over the base space $B$ which satisfies $C^*(B)=\mathscr{A}$.
The role of the algebra
$\mathscr B_0$ is then played by the algebra in $M_k(\mathscr{A})$ generated by $H$ and the diagonal embedding of $\mathscr{A}$.
\end{rmk}
\subsection{Further characterization of $P$}
Again considering $P$ as a polynomial in $z$,
the coefficient functions $a_k$ in equation (\ref{peq}) can be given a graph theoretical interpretation.
For this it is convenient to introduce the graph $\bar\Gamma_{simp}$ and the weight function $w^+$ associated
to $(\bar\Gamma,w)$. The vertices of $\bar\Gamma_{simp}$ are just the vertices of $\Gamma$. The edges
of $\bar\Gamma_{simp}$ are simply the equivalence classes of edges of $\bar\Gamma$, where two edges are equivalent if they run between the same vertices. This identification induces a weight function $w^{+}$ on $\bar\Gamma_{simp}$,
where now $w^+(\vec{[e]})=\sum_{\vec{e'}\in [e]} w(\vec{e'})$. That is the sum over all edges connecting
the same two vertices as $e$. In this notation
\begin{equation}
H_{ij}=\begin{cases} 0 & \text{ if there is no edge between $v_i$ and $v_j$ in $\bar\Gamma_{simp}$}\\
w^+(\vec{[e]})& \text{ if there is a necessarily unique oriented edge $\vec{[e]}$}\\
&\text{ from $v_i$ to $v_j$ in $\bar\Gamma_{simp}$}\\
\end{cases}
\end{equation}
Plugging this into the usual determinant formula $$det(A)=\sum_{\sigma\in \mathbb{S}_k} sign(\sigma)a_{1\sigma(1)}\cdots a_{k\sigma(k)}$$
we can give the summand corresponding to $\sigma$ graph combinatorially. Decompose $\sigma$ into cycles $c_1,\dots, c_q$ of length $l_1,\dots, l_q$. Then each cycle corresponds to a unique cycle
of oriented edges in $\bar\Gamma_{simp}$. Explicitly if $c_j=(j_1j_2\dots j_{l_j})$ then the cycle of $\bar\Gamma_{simp}$ is
given by the unique directed edges from $v_{j_r}$ to $v_{j_{r+1}}$ and from $v_{j_{l_j}}$ to $v_{j_1}$ if
all these edges exist. In that case and if $l_j>1$ we set $w^+(c_j)=\prod w^+(\vec{[e]})$
where the product runs over all the oriented edges in that cycle. Otherwise set $w^+(c_j)=0$.
If $l_j=1$ then $w^+(c_j)=-z+\sum_{e}\left(w(\vec{e})+w(\cev{e})\right)$, where $e$ runs over the loop edges from
$v_{j_1}$ to itself, if such edges exist; if there are no such edges, set $w^+(c_j)=-z$.
In this notation:
\begin{equation}
\label{patheq}
P(t,z)=(-1)^k\sum_{\sigma\in \mathbb S_k}sign(\sigma) p_{\sigma}(t,z) \text{ with } p_{\sigma}(t,z) =\prod_{j=1}^q w^+(c_j)
\end{equation}
\subsubsection{Graphs with no small loops }
Assume that $\bar \Gamma$ has no small loops, that is edges which return to the same vertex.
Then all the diagonal entries $H_{ii}=0$ and hence $a_{k-1}=0$.
Furthermore, if $i$ is the number of cycles of length one and we assume that these
are the first $i$ cycles, then
\begin{equation}
\label{pathnoloopeq}
p_{\sigma} =(-z)^i\prod_{j=i+1}^qw^+(c_j)(t)
\end{equation}
where the product is now over the cycles of length $>1$.
One can also again read off that $a_{k-1}=0$. Namely, if $k-1$ cycles have length $1$ then all $k$ cycles
have length one.
Using the formula (\ref{pathnoloopeq}) the coefficient $a_{k-2}$ becomes
\begin{equation}
\label{aktwoeq}
a_{k-2}=-\sum_{e\in \bar\Gamma_{simp}} w^+(\vec{e})w^+(\cev{e})
\end{equation}
\subsubsection{Simply laced graphs with no small loops}
Thus if furthermore $\bar\Gamma$ is simply laced, that is there is at most one edge between two vertices, then $a_{k-2}=-|E(\bar \Gamma)|$ is simply minus the number of edges.
Summing up in this case:
\begin{equation}
P(t,z)=z^k-|E_{\bar \Gamma}|z^{k-2}+a_{k-3}(t)z^{k-3}+\dots +a_0(t)
\end{equation}
Applying the results of \S\ref{mainthmsec} in this situation yields the following.
\begin{thm}
If the graph $\bar \Gamma$ has no small loops the branched cover $X\to T^n$ is the pull back of
the miniversal unfolding of the $A_{k-1}$ singularity along the
characteristic map $\Xi: t\mapsto (a_0(t),\dots,a_{k-2}(t))$.
Otherwise it is equivalent to the pull--back.
The characteristic region is compact connected and lies in the
closure of a component of $\mathbb R^{k-1}\setminus (D\cap \mathbb R^{k-1})$,
the real part of the complement of the discriminant, over which the
discriminant is positive.
Moreover if $\bar \Gamma$ has no small loops and is simply laced, then $R$
is contained in the hyperplane
$a_{k-2}=-|E_{\bar\Gamma}|$ of $\Lambda=\mathbb R^{k-1}$.
\end{thm}
\subsection{Wire networks}
We briefly recall the relevant notions from \cite{kkwk} which we will need in the examples.
As in the introduction, we fix a graph $\Gamma$ embedded in $\mathbb R^n$ and a maximal translational symmetry group $L$,
and set $\bar\Gamma:=\Gamma/L$. Let $\pi:\Gamma\to\bar \Gamma$ denote the projection.
We will directly set the magnetic field form $\Theta=0$.
Let $V_{\Gamma}$ be the set of vertices of $\Gamma$ and
$V_{\bar \Gamma}$ be the vertices of $\bar \Gamma$ and consider $\mathscr{H}=\ell^2(V_{\Gamma})=\bigoplus_{v\in \bar \Gamma}\mathscr{H}_v$, where $\mathscr{H}_v=\ell^2(\pi^{-1}(v))$.
The translation group $L$ then naturally acts by translation operators on $\mathscr{H}$ preserving the summands.
This representation is then by commuting linearly independent unitaries and hence gives rise to a copy of $\T^n_0$.
Fix a spanning tree, with root $v_0$, and an order of the vertices. Let $T_{\vec{e}}$ be the translation operator along a vector $\vec{e}$.
Notice that the oriented edges $\vec{e}$ lift to unique vectors in $\mathbb R^n$ under $\pi^{-1}$. Let $T_{v_iv_0}=T_{v_0v_i}^{-1}$
be the total translation along the unique shortest edge path in the spanning tree from $v_0$ to $v_i$.
Then $T_{v_iv_0}$ gives an isometry $\mathscr{H}_0\to \mathscr{H}_{v_i}$, and hence we get an isometry
$\mathscr{H}\simeq \bigoplus_{v\in \bar \Gamma}\mathscr{H}_{v_0}:=\mathscr{H}_0$.
Then the weight of an oriented edge $\vec{e}$ from the vertex $v_i$ to the vertex $v_j$ is the translation operator
$ T_{v_0v_i}T_{\vec e}T_{v_iv_0}$. This defines the Harper Hamiltonian as the corresponding
graph Hamiltonian which acts on $\mathscr{H}_0$. It is shown in \cite{kkwk} that indeed these translation operators lie in the translations
generated by $L$ and hence give unitaries in $\T^n_0$.
Pulling back the translation operators of $L$ to $\mathscr{H}_0$, they together with $H$ generate the commutative
$C^*$ algebra $\mathscr B_0$.
The physical background for this data as explained in detail in \cite{kkwk,kkwk2} is as follows. Given one of the triply--periodic CMC surfaces, P (primitive), D (diamond) or G (Gyroid), one can consider its thickened or ``fat'' version. Its boundary then consists of two non--intersecting
surfaces, whence the name Double Gyroid, for instance. These surfaces give interfaces which appear
in nature. In particular, the Double Gyroid could recently be synthesized on the nano--scale \cite{Hillhouse}.
The structure contains three components, the ``fat'' surface or wall and two channels.
Urade et al.\ \cite{Hillhouse}
have also demonstrated a nanofabrication technique in which the channels are
filled with a metal, while the silica wall can be either left in place or removed. This yields
two wire networks, one in each channel. The graph we consider and call Gyroid graph
is the skeletal graph of one of these channels. The P, D, and G examples are the unique such surfaces
where the skeletal graph is symmetric and self--dual. The graph Hamiltonian is then the Harper Hamiltonian for one channel of this wire network.
The 2d--analogue of this structure is the honeycomb lattice underlying graphene.
\section{Calculations}
Since all calculations are for the base $T^n$ we will consider the function $P$ locally pulled back via the exponential map
$\exp:\mathbb R^n\to T^n$,
$(a_1,\dots, a_n)\mapsto (\exp(i a_1),\dots, \exp(i a_n))$.
In this notation given a point $(a_i)$ and its corresponding character $\chi$,
the translation operators $U_j$ corresponding to the generators of the algebra $\T^n_0$
get mapped to $\chi(U_j)=\exp(ia_j)$. Indeed under the Gel'fand--Naimark correspondence
the operator $U_j$ is the function $\exp(ia_j)$ for the coordinates of $T^n$ above. To simplify
the calculation, we will drop the $\chi$ and just write the function for the operator.
\subsection{The Gyroid}
\label{gyroidsec}
\subsubsection{The matrix and the function $P$}
As shown in \cite{kkwk}, the relevant graph for the Gyroid wire network
is the full square which is simply laced. We also fix a spanning tree shown in Figure \ref{spt_gyroid}
ordering the vertices as indicated, the root being $1$.
\begin{figure}
\includegraphics[width=.3\textwidth]{Gyroidspantree.pdf}
\hspace{1cm}
\includegraphics[width=.5\textwidth]{newrandomplotsm.pdf}
\caption {Spanning tree and characteristic region for the Gyroid (solid region). The curve
is the slice of the discriminant of the $A_3$ singularity at $a_2\equiv -6$}
\label{spt_gyroid}
\end{figure}
The Harper Hamiltonian is a function on $T^3=S^1\times S^1\times S^1$. We can visualize $T^3$ as a cube where opposite sides are identified.
The Harper Hamiltonian reads \cite{kkwk}
\begin{equation}
H=\left(
\begin{array}{cccc}
0&1&1&1\\
1&0&A&B^*\\
1&A^*&0&C\\
1&B&C^*&0
\end{array}
\right)
\end{equation}
where $A$, $B$ and $C$ are operators generating $\mathbb T^3_0$, each of which we can think of as a function
on $S^1$. We will rewrite them as $A=\exp(i a)$, $B=\exp(i b)$, $C=\exp(i c)$
with $a,b,c$ real.
The eigenvalues of $H$ are given by the roots of the characteristic polynomial:
\begin{equation}
P(a,b,c,z)=z^4-6z^2+a_1(a,b,c)z +a_0(a,b,c)
\label{charactpol}
\end{equation}
where
\begin{eqnarray}
\label{a0a1}
a_1&=&-2 \cos(a)-2 \cos(b)-2 \cos(c) -2 \cos(a+b+c)\nonumber\\
a_0 &=& 3-2\cos(a+b)-2 \cos(b+c) -2 \cos(a+c)\nonumber
\end{eqnarray}
give the characteristic map $\Xi:=(a_0,a_1): T^3\to \mathbb R^2$.
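These expressions are readily tested numerically. The following sketch is our own verification (assuming numpy): it compares the coefficients of $\det(z\,Id-H)$ with \eqref{a0a1} at random points of $T^3$, and in particular confirms the coefficient $-6=-|E|$ of $z^2$ predicted for a simply laced graph with six edges.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    a, b, c = rng.uniform(0, 2*np.pi, 3)
    A, B, C = np.exp(1j*a), np.exp(1j*b), np.exp(1j*c)
    H = np.array([[0, 1, 1, 1],
                  [1, 0, A, np.conj(B)],
                  [1, np.conj(A), 0, C],
                  [1, B, np.conj(C), 0]])
    coeffs = np.poly(np.linalg.eigvalsh(H))  # det(z*Id - H), highest power first
    a1 = -2*np.cos(a) - 2*np.cos(b) - 2*np.cos(c) - 2*np.cos(a + b + c)
    a0 = 3 - 2*np.cos(a + b) - 2*np.cos(b + c) - 2*np.cos(a + c)
    assert np.allclose(coeffs, [1, 0, -6, a1, a0], atol=1e-9)
\end{verbatim}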
\subsubsection{A quick look at the characteristic region.}
The characteristic region of the full square graph is depicted in Figure \ref{spt_gyroid}.
The curve shown in the figure is the discriminant locus which is explicitly given by
\begin{equation}
\label{disc}
20736\; a_0 - 4608 \;a_0^2 + 256 \;a_0^3 + 864\; a_1^2 - 864\; a_0 \;a_1^2 - 27 \;a_1^4=0
\end{equation}
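Up to the ordering of terms this is exactly the discriminant of the quartic \eqref{charactpol} in $z$, which gives a one--line symbolic check (our own aside, assuming sympy):
\begin{verbatim}
import sympy as sp

z, a0, a1 = sp.symbols('z a0 a1')
print(sp.discriminant(z**4 - 6*z**2 + a1*z + a0, z))
# agrees, term by term, with the left hand side of the equation above
\end{verbatim}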
The boundaries of the characteristic region are obtained as the collection of points $(a_0,a_1)$ for
$a=b=c$ and $a=b=-c$.
We see that the characteristic region is contained in the slice $a_2=-6$ of the $A_3$ singularity and intersects the discriminant
in exactly three isolated points, the two cusps and the double point of that slice of the swallowtail.
The two cusps are in the stratum of type $A_2$ and the double point is in the stratum of type $(A_1,A_1)$. As is quickly seen, and as we calculate below, the fibers over all these points are indeed discrete. For the $A_2$ singularities, this is just one point each, giving rise to two triple
crossings, and the fiber over $(A_1,A_1)$ consists of two points. Over each of these points
there are two double crossings and it turns out, see below, that these are
Dirac points.
This is a very special situation in that the points on the discriminant are
actually at singular points of the region.
From the singularities it is easily seen
that the image of the tangent spaces at the points in the fiber is $0$ and hence $J_{\Xi}$ vanishes.
\subsubsection{Classification of the critical points}
We first check the condition $\nabla P=0$. This yields the equations:
\begin{eqnarray*}
\frac{\partial P}{\partial a}&=&z (2 \sin (a+b+c)+2 \sin (a))+2 \sin (a+b)+2 \sin (a+c)=0\\
\frac{\partial P}{\partial b}&=&z (2 \sin (a+b+c)+2 \sin (b))+2 \sin (a+b)+2 \sin (b+c)=0\nonumber\\
\frac{\partial P}{\partial c}&=&z (2 \sin (a+b+c)+2 \sin (c))+2 \sin (a+c)+2 \sin (b+c)=0\nonumber\\
\frac{\partial P}{\partial z}&=&-2 \cos (a+b+c)-2 \cos (a)-2 \cos (b)-2 \cos (c)+4 z^3-12 z=0\nonumber
\end{eqnarray*}
To solve these equations, we note that by sum--to--product identities each of the first three factors; for instance
$\frac{\partial P}{\partial a}=4\sin (\frac{2a+b+c}{2})\left[z \cos (\frac{b+c}{2})+\cos (\frac{b-c}{2})\right]$.
Away from the zeros of these sine prefactors, the first three equations become
\begin{eqnarray*}
z \cos (\frac{b+c}{2})=-\cos (\frac{b-c}{2})\nonumber\\
z \cos (\frac{a+c}{2})=-\cos (\frac{a-c}{2})\nonumber\\
z \cos (\frac{a+b}{2})=-\cos (\frac{a-b}{2})\nonumber
\end{eqnarray*}
In the case that all cosines are different from zero, we can solve each equation for $z$ and set them equal. This leads to
\begin{eqnarray}
\cos (\frac{b-a+2c}{2}) = \cos (\frac{a-b+2c}{2})\nonumber\\
\cos (\frac{2a-b+c}{2}) = \cos (\frac{2a+b-c}{2})\nonumber
\end{eqnarray}
From the first of these equations we get $a=b\; \mbox{mod 2} \pi$, from the second
$b=c \;\mbox{mod 2} \pi$, and $z \cos(a) =-1$ for $\cos(a) \neq 0$. Plugging this back into the last of the original equations, we find
$$8 \cos^6(a)+4-12 \cos^2(a)=0$$ Among the solutions we pick those for which the characteristic polynomial $P(a,b,c,z) $ (see Eq. (\ref{charactpol})) is zero,
namely $\cos(a)=-1/z=\pm1$, i.e. $a=0,\pi$ and $z=\pm1$.
In the case that $\cos(a)=0$ we obtain
$$ 4 z^3-12z=0$$ which has solutions $z=0, \pm \sqrt{3}$. $z=0$ has to be discarded since it does not satisfy $P(a,b,c,z) =0$.
Summing up, the critical points are
\begin{enumerate}
\item $a=b=c=0 \;(\mbox{mod}\;2 \pi); z=-1$
\item $a=b=c=\pi \;(\mbox{mod}\;2 \pi); z=1$
\item $a=b=c=\frac{\pi}{2},\frac{3 \pi}{2}\;(\mbox{mod}\;2 \pi); z=\pm \sqrt{3}$
\end{enumerate}
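These four solutions can be confirmed numerically. In the sketch below (our own verification, assuming numpy; the gradient is computed by central finite differences rather than from the explicit formulas above) all four points satisfy $P=0$ and $\nabla P=0$ up to finite--difference error:
\begin{verbatim}
import numpy as np

def P(a, b, c, z):
    a1 = -2*np.cos(a) - 2*np.cos(b) - 2*np.cos(c) - 2*np.cos(a + b + c)
    a0 = 3 - 2*np.cos(a + b) - 2*np.cos(b + c) - 2*np.cos(a + c)
    return z**4 - 6*z**2 + a1*z + a0

def grad_P(x, h=1e-6):
    # central finite differences of P in all four variables
    g = np.zeros(4)
    for i in range(4):
        e = np.zeros(4); e[i] = h
        g[i] = (P(*(x + e)) - P(*(x - e))) / (2*h)
    return g

points = [np.array([0, 0, 0, -1.0]),
          np.array([np.pi, np.pi, np.pi, 1.0]),
          np.array([np.pi/2, np.pi/2, np.pi/2, np.sqrt(3)]),
          np.array([3*np.pi/2, 3*np.pi/2, 3*np.pi/2, -np.sqrt(3)])]
for x in points:
    assert abs(P(*x)) < 1e-9 and np.max(np.abs(grad_P(x))) < 1e-6
\end{verbatim}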
Looking at the image of these points under the characteristic map,
we see that $\Xi(0,0,0)=(-3,-8)$ is one cusp, which is an $A_2$ point with a triple degeneracy,
as is $\Xi(\pi,\pi,\pi)=(-3,8)$. For the other two points, $\Xi(\pi/2,\pi/2,\pi/2)=\Xi(3\pi/2,3\pi/2,3\pi/2)=(9,0)$,
and they are in the $(A_1,A_1)$ stratum. This means that these points are candidates for Dirac points,
which they indeed are.
To decide this, we calculate
the Hessian.
Plugging in $a=b=c=\frac{\pi}{2},\frac{3 \pi}{2}\;(\mbox{mod}\;2 \pi)$, $z=\pm \sqrt{3}$, it becomes
\begin{equation}
\mbox{Hess}=\left(
\begin{array}{cccc}
-4&-2&-2&0\\
-2&-4&-2&0\\
-2&-2&-4&0\\
0&0&0&24
\end{array}
\right)
\end{equation}
which has signature $(---+)$. Notice that the corresponding cone is also not tilted.
For the other two points, the Hessian vanishes, as expected.
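Differencing the numerical gradient of the previous snippet once more reproduces this Hessian and its signature; this is again our own sketch, not part of the original computation:
\begin{verbatim}
def hess_P(x, h=1e-4):
    # finite-difference Hessian, built from grad_P of the previous snippet
    Hm = np.zeros((4, 4))
    for i in range(4):
        e = np.zeros(4); e[i] = h
        Hm[i] = (grad_P(x + e) - grad_P(x - e)) / (2*h)
    return 0.5*(Hm + Hm.T)

Hm = hess_P(np.array([np.pi/2, np.pi/2, np.pi/2, np.sqrt(3)]))
print(np.round(Hm))            # the matrix displayed above
print(np.linalg.eigvalsh(Hm))  # approx [-8, -2, -2, 24]: signature (---+)
\end{verbatim}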
\begin{figure}
\includegraphics[width=\textwidth]{dispersiondiagonal.pdf}
\caption{Spectrum of the Gyroid Harper Hamiltonian for $a=b=c$}
\label{dispersiondiagonal}
\end{figure}
Since the singular points are all isolated, we can take any direction as transversal.
It turns out that the diagonal curve $C$ given by $a=b=c$ gives a transversal direction
for all 4 singular points at once.
The spectrum on this line, given in Figure \ref{dispersiondiagonal},
can be obtained by using group theory, cf.\ \cite{sym}.
This is how it was first found in \cite{Avron} where the authors
considered an equivalent family in another context. They also found no other singularities numerically.
Explicitly the spectrum along this diagonal is given by
\begin{eqnarray}
\lambda_1=\omega \exp(ia)+\bar{\omega} \exp(-ia)&& \lambda_2=\bar{\omega} \exp(ia)+\omega \exp(-ia)\nonumber\\
\lambda_{3,4}=\cos(a)\pm \sqrt{\cos^2(a)+3}
\end{eqnarray}
where $\omega=\exp(2\pi i/3)$. From this one can see a linear dispersion relation in the direction of the diagonal. Without further
analysis, one cannot deduce the dispersion relation in any other direction from these results.
By our previous analysis we have, however, proven {\em analytically}, that there are indeed no other singularities and furthermore
have determined that there is a linear dispersion relation in {\em all directions} at the points $(\pi/2,\pi/2,\pi/2)$
and $(3\pi/2,3\pi/2,3\pi/2)$ establishing that these are indeed
two Dirac points.
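The closed form along the diagonal can also be tested against direct diagonalization. In the sketch below (our own check, assuming numpy) the two spectra agree on a grid of momenta:
\begin{verbatim}
import numpy as np

w = np.exp(2j*np.pi/3)  # omega
for a in np.linspace(0, 2*np.pi, 201):
    A = np.exp(1j*a)
    H = np.array([[0, 1, 1, 1],
                  [1, 0, A, np.conj(A)],
                  [1, np.conj(A), 0, A],
                  [1, A, np.conj(A), 0]])
    spec = np.sort(np.linalg.eigvalsh(H))
    lams = np.sort([2*(w*A).real, 2*(np.conj(w)*A).real,
                    np.cos(a) + np.sqrt(np.cos(a)**2 + 3),
                    np.cos(a) - np.sqrt(np.cos(a)**2 + 3)])
    assert np.allclose(spec, lams, atol=1e-9)
\end{verbatim}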
It is interesting to note that the image curve $\Xi(C)$ runs first from $(-3,-8)$ to $(9,0)$ along
the boundary of the region,
continues as the boundary
curve to $(-3,8)$, and then turns back on itself to cover these two boundary pieces twice.
The nature of the two triple points is that they are isolated and they are
a pullback of the unfolding of the $A_2$ singularity. This is given by the image
of the characteristic map where now we consider the local neighborhood of the cusp point
in the slice as a miniversal unfolding of $A_2$. This is indeed possible and a standard
way of embedding the unfolding of $A_2$ into that of $A_3$ \cite{arnoldbook}.
\subsection{The honeycomb and diamond cases}
We treat the diamond and the honeycomb case in parallel.
The graphs $\bar \Gamma$ are given in Figure \ref{graphsDhoney}.
\begin{figure}
\includegraphics[width=.5\textwidth]{Fig4new.pdf}
\caption{The graphs $\bar \Gamma$ for the diamond (left) and the honeycomb case (right)}
\label{graphsDhoney}
\end{figure}
The Hamiltonians are
\begin{equation}
H_{hon}=\left(
\begin{matrix}
0&1+U+V\\
1+U^*+V^*&0
\end{matrix}
\right)
\end{equation}
and
\begin{equation}
H_D=\left(
\begin{matrix}
0&1+U+V+W\\
1+U^*+V^*+W^*&0
\end{matrix}
\right)
\end{equation}
We again use $U=\exp( i u),V=\exp( i v),W=\exp(i w)$.
The polynomials are $P(u,v,z)=z^2-3-2\cos(u)-2\cos(v)-2\cos(u-v)$ and
$P(u,v,w,z)=z^2-4-2\cos(u)-2\cos(v)-2\cos(w)-2\cos(u-v)-2\cos(u-w)-2\cos(v-w)$.
The characteristic regions in $\mathbb R$ are just the intervals $[-9,0]$ and $[-16,0]$. The discriminant
is the point $0$. From this we see that in both cases we have to have $a_0=0$ and
the singular locus is simply this fiber.
\subsubsection{The honeycomb case}
For the honeycomb,
the standard calculation shows that in this case $U=V^*$ and $U\in \{\rho_3,\bar \rho_3\}$, where $\rho_3:=\exp(2\pi i/3)$, which means that the fiber consists of 2 points.
These are the well known Dirac points $(\rho_3,\bar\rho_3),(\bar\rho_3,\rho_3)$.
We can check this explicitly:
$\nabla(P)=(2\sin(u)+2\sin(u-v),2\sin(v)-2\sin(u-v),2z)$, from which we see that $z=0$ and $u\equiv -v$,
so that $u-v\equiv 2u \;(\mbox{mod}\; 2\pi)$. Furthermore
\begin{equation}
Hess_{hon}=\left(
\begin{matrix}
2\cos(u)+2\cos(u-v)&-2\cos(u-v)&0\\
-2\cos(u-v)&2\cos(v)+2\cos(u-v)&0\\
0&0&2\\
\end{matrix}
\right)
\end{equation}
which has the correct signature $(--+)$ at the given points, from which we recover
the known result that these points are Dirac points.
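The same kind of numerical confirmation works here. The sketch below (our own, assuming numpy) checks $P=0$ at the two candidate points and recovers the signature $(--+)$ from a finite--difference Hessian:
\begin{verbatim}
import numpy as np

def P(u, v, z):
    return z**2 - 3 - 2*np.cos(u) - 2*np.cos(v) - 2*np.cos(u - v)

def fd_hessian(f, x, h=1e-4):
    # symmetric finite-difference Hessian of a scalar function
    n = len(x); Hm = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.eye(n)[i]*h; ej = np.eye(n)[j]*h
            Hm[i, j] = (f(*(x+ei+ej)) - f(*(x+ei-ej))
                        - f(*(x-ei+ej)) + f(*(x-ei-ej))) / (4*h*h)
    return Hm

for u0 in (2*np.pi/3, -2*np.pi/3):
    x = np.array([u0, -u0, 0.0])
    assert abs(P(*x)) < 1e-12
    print(np.linalg.eigvalsh(fd_hessian(P, x)))  # approx [-3, -1, 2]: (--+)
\end{verbatim}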
\subsubsection{The diamond case}
The equation for the fiber over $0$, $$-4-2\cos(u)-2\cos(v)-2\cos(w)-2\cos(u-v)-2\cos(u-w)-2\cos(v-w)=0$$
has been solved in \cite{kkwk2} and the solutions are given by $(u,v,w) =(\phi_i,\phi_j,\phi_k)$
with $\phi_i=\pi, \phi_j\equiv \phi_k+\pi\; \mbox{mod}\; 2\pi$ with $\{i,j,k\}=\{1,2,3\}$.
So in this case the fiber of the characteristic map is 1--dimensional and the pull--back
has singularities along a locus of dimension $1$, which also implies that there are no Dirac points.
Geometrically the singular locus consists of three circles pairwise intersecting in a point.
\subsubsection{The characteristic map and region}
For both the honeycomb and the diamond graph, the relevant singularity is $A_1$. Both these graphs
are not simply laced, so their image is not contained in a slice. The swallowtail is only one point $0$
and this is the stratum of type $A_1$.
In the honeycomb case the fiber over this point is discrete and consists of two points, while in the case of the diamond lattice the fiber is not discrete and it is given by three circles pairwise intersecting at a point. It turns out that in the honeycomb case the two candidates for Dirac points are indeed Dirac points, while for the $D$ case there is a non--trivial fiber which is essentially 1--dimensional.
Hence we do not get Dirac points, but rather spread out singularities.
\subsection{Three--vertex graphs}
In order to have some examples for the $A_2$ singularity and to show
the kind of behavior that is possible, we considered
a three vertex graph with either only simple edges, or one, two or all of the edges doubled.
The characteristic regions are seen in Figures \ref{trigA}-\ref{scatterABCD}.
Here we see that for the simply laced case, we get a slice, for one or two doubled edges,
we get parabolic regions, which intersect the boundary in two points, which are of type $A_1$
and finally, in the case of all edges being double, the characteristic region is a surface,
which is bounded by the discriminant and the line on which $a_1$ takes its maximal value, in this case $a_1=12$.
\subsubsection{Triangle with single bonds}
We consider the graph and the spanning tree given in Figure \ref{trigA}.
\begin{figure}
\includegraphics[width=.3\textwidth]{triangleA.pdf}
\hspace{1cm}
\includegraphics[width=.3\textwidth]{scatterA.pdf}
\caption {Spanning tree and characteristic line for a triangle with single bonds}
\label{trigA}
\end{figure}
The associated Harper Hamiltonian reads:
\begin{equation}
H=\left(
\begin{array}{cccc}
0&1&1\\
1&0&A\\
1&A^*&0
\end{array}
\right)
\end{equation}
where $A$ is an operator on $S^1$. We will rewrite it as $A=\exp(i a)$ with $a$ real.
The characteristic polynomial is:
\begin{equation}
P(a,z)=z^3-3z-2 \cos(a)
\end{equation}
The characteristic region is easy to calculate: since the graph is simply laced,
it is contained in the slice $a_1=-3$. The image of $a_0=-2\cos(a)$ is $[-2,2]$, thus $R=[-2,2]\times \{-3\}$ in $\mathbb R^2$.
Figure \ref{trigA} shows this
together with the zero locus of the discriminant.
From this we see that all possible singularities occur at $a_0=-2\cos(a)=\pm2$, that is $a\equiv 0,\pi \;(\mbox{mod}\;2\pi)$. Indeed, calculating $\nabla P(a,z)$, we use
\begin{eqnarray}
\frac{\partial P}{\partial a}&=&2 \sin(a)\\
\frac{\partial P}{\partial z}&=&3 z^2-3
\end{eqnarray}
These equations vanish simultaneously for the following choice of variables:
\begin{enumerate}
\item $z=\pm1$
\item $a=0, \pi \; (\mbox{mod} \;2 \pi)$
\end{enumerate}
Among all possible combinations, the choices $(z=-1,a=0)$ and $(z=1,a=\pi)$ are zeros
of $P(a,z)$. For these two points, the Hessian
\begin{equation}
\mbox{Hess}=\left(
\begin{array}{cccc}
2 \cos(a)&0\\
0&6z
\end{array}
\right)
\end{equation}
has non-vanishing determinant $\det(\mbox{Hess})=-12$ and signature $(-+)$.
So we find two Dirac points. Again the cone is not tilted.
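Once more this is easily confirmed numerically (our own sketch, assuming numpy):
\begin{verbatim}
import numpy as np

def P(a, z):
    return z**3 - 3*z - 2*np.cos(a)

for a0_, z0 in ((0.0, -1.0), (np.pi, 1.0)):
    assert abs(P(a0_, z0)) < 1e-12       # on the spectrum
    assert abs(2*np.sin(a0_)) < 1e-12    # dP/da = 0
    assert abs(3*z0**2 - 3) < 1e-12      # dP/dz = 0
    hess = np.diag([2*np.cos(a0_), 6*z0])
    print(np.linalg.det(hess), np.linalg.eigvalsh(hess))
    # det = -12; one negative and one positive eigenvalue: signature (-+)
\end{verbatim}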
\subsubsection{Triangle with one double bond}
We consider the graph and the spanning tree given in Figure \ref{scatterAB} where one of the bonds is a double bond.
The Harper Hamiltonian reads in this case
\begin{equation}
H=\left(
\begin{array}{cccc}
0&1&1\\
1&0&A+B\\
1&A^*+B^*&0
\end{array}
\right)
\end{equation}
where $A$, $B$ are operators on $S^1$. We will rewrite them as $A=\exp(i a)$, $B=\exp(i b)$ with $a,b$ real.
The characteristic polynomial is:
\begin{equation}
P(a,b,z)=z^3-(4+2\cos(a-b))z-2 \cos(a)-2 \cos(b)
\end{equation}
Again, we set $a_1=-(4+2\cos(a-b))$ and $a_0=-2 \cos(a)-2 \cos(b)$.
We see that this time the characteristic region is not contained in a slice, which
was not to be expected since the graph is not simply laced.
The region is depicted via a scatter plot in Figure \ref{scatterAB}. One reads off
that $R$ intersects with the discriminant locus in two points.
\begin{figure}
\includegraphics[width=.3\textwidth]{spt_triangleAB.pdf}
\hspace{1cm}
\includegraphics[width=.5\textwidth]{triangleABsm.pdf}
\caption{Spanning tree and characteristic region with double bond}
\label{scatterAB}
\end{figure}
To calculate $\nabla P(a,b,z)=0$ we use,
\begin{eqnarray}
\frac{\partial P}{\partial a}&=&2 \sin(a-b)z+2 \sin(a)\nonumber\\
\frac{\partial P}{\partial b}&=&-2 \sin(a-b)z+2 \sin(b)\nonumber\\
\frac{\partial P}{\partial z}&=&3 z^2-(4+2\cos(a-b))
\end{eqnarray}
From the first two equations, we get either $a=0,\pi$ and $b=0,\pi$, but for all combinations of those, the remaining two equations ($\frac{\partial P}{\partial z}=0$ and $P(a,b,z)=0$) cannot be simultaneously solved. Therefore the only possible solution for the first two equations is to take $a=-b$ and $z=-\frac{\sin(a)}{\sin(2a)}$ for $\sin(2a)\neq 0$. Putting this into the last equation yields the trigonometric equation
$$
3 \sin^2(a) -4 \sin^2(2a)-2\cos(2a)\sin^2(2a)=0
$$
which has the solutions $a= \pm \frac{ \pi}{3},\pm \frac{2 \pi}{3}$. These also lead to a vanishing of the characteristic polynomial.
So we get the following two solutions (the negative values lead to the same values for $a_0$ and $a_1$):
\begin{enumerate}
\item $a=\frac{ \pi}{3}, b=-\frac{ \pi}{3} \;\mbox{mod}\; 2 \pi, z=-1$
\item $a=\frac{ 2 \pi}{3}, b=-\frac{ 2 \pi}{3}\;\mbox{mod} \;2 \pi, z= 1$
\end{enumerate}
The Hessian is
\begin{equation}
\mbox{Hess}=\left(
\begin{array}{cccc}
2 \cos(a-b)z+2 \cos(a)&-2z\cos(a-b)&2 \sin(a-b)\\
-2z\cos(a-b)& 2 \cos(a-b)z+2 \cos(b)&-2 \sin(a-b)\\
2\sin(a-b)&-2\sin(a-b)&6z\\
\end{array}
\right)
\end{equation}
and has signatures $(++-)$ and $(--+)$, respectively. Here the cone is actually tilted.
Again there are two Dirac points.
\subsubsection{More variations}
In the same way, we can obtain information about possible Dirac points for the following graphs:
\begin{figure}[ht]
\centering
\includegraphics[width=.3\textwidth]{triangleABC.pdf}
\hspace{1cm}
\includegraphics[width=.5\textwidth]{scatterABCsm.pdf}
\caption{Spanning tree and scatter plot of characteristic region with triple bond}
\label{scatterABC}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=.3\textwidth]{triangleABD.pdf}
\hspace{1cm}
\includegraphics[width=.5\textwidth]{scatterABDsm.pdf}
\caption{Spanning tree and characteristic region with two double bonds}
\label{scatterABD}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=.3\textwidth]{triangleABCD.pdf}
\hspace{1cm}
\includegraphics[width=.5\textwidth]{scatterABCDsm.pdf}
\caption{Spanning tree and scatter plot of characteristic region with three double bonds}
\label{scatterABCD}
\end{figure}
For the triple bond case shown in Figure \ref{scatterABC} and the two double bonds in Figure \ref{scatterABD},
we see two isolated intersection points, but the fiber will be of dimension
$1$, so as in the D--case there will be no Dirac points.
Considering three double bonds (see Figure \ref{scatterABCD})
we see that now the intersection of $R$ with $D$ is along two of the boundaries
of $R$ and hence not only will the fiber have dimension $2$, but we also expect
to have horizontal directions which do not resolve the singularity. We leave this for further
investigation.
\subsection{The P and other Bravais cases}
Here the graph has small loops, and the characteristic polynomial has to be transformed.
It is simply a polynomial of degree $1$, $P(t,z)=z-\sum_i t_i$, so that after shifting $z$ we
are left with just $z=0$, which is not critical. Not surprisingly, there are no singularities.
\section{Conclusion and outlook}
We have developed a general method to analyze singularities in the spectra of smooth families of $k\times k$
Hamiltonians parameterized
by a base $B$
using singularity theory. In particular, we realized the spectrum as a singular submanifold $X$ of the smooth product $B\times \mathbb R$
as the zero set of a function $P$.
This led us to a simple characterization of Dirac points as $A_1$ or Morse singularities of $P$ with critical value $0$ whose signature is
$(-\dots-+)$ or $(+\dots +-)$. Furthermore we could represent $\pi:X\to B$ up to a given diffeomorphism on the ambient space
as the pull--back of the miniversal unfolding of the $A_{k-1}$ singularity via a characteristic map $\Xi$. This classifies all
the possible singularities of the fibers of $\pi$ as $(A_{n_1},\dots, A_{n_l})$.
The image of the characteristic map, called the characteristic region, allows one to read off which ones occur.
We then applied these techniques to the graph Hamiltonians and wire networks. Here we reproduce the known Dirac points
for graphene and make the surprising find that the Gyroid wire network also has Dirac points. We expect that
this should have practical applications.
The situation for the Gyroid is very special, as the characteristic region goes into the cusps and the self--intersection locus of the swallowtail
without prior contact to the ``walls''. Were this not the case, one would not expect isolated points.
We gave more graph examples to illustrate how special this behavior is.
Adding multiple edges, we expect to get a ``full'' region as in the case of the three double bonds in a triangle.
One exciting find is that there seems to be a
commutative/non--commutative
duality in the wire network families first stated in \cite{kkwk2}.
By this we mean the observation that there is a correspondence between the
locus of degenerate points in the commutative setting and
the locus of parameters where the corresponding non--commutative algebra
$\mathscr B_{\Theta}$ is not the full matrix algebra $M_k({\mathbb T}_{\Theta})$.
The correspondence is not 1--1, but the top dimensions agree and
there are further features that look dual. We wish to emphasize that
although the space $T^n$ appears in both settings as the parameter
space, it {\it a priori} plays two different roles.
In the case without
magnetic field, the parameters are quasimomenta, while in the case with magnetic field
they are the field's components.
It is intriguing to speculate that the non--commutative setting is
a model for a non--commutative unfolding of singularities and
that this furnishes the framework to make the duality explicit.
This will be a topic of further research.
There are several other directions of research that are immediate. First one can ask if in the graph case there are symmetries forcing
the degeneracies. This is pursued in \cite{sym} using a regauging groupoid action.
Second, a physically relevant question is how stable the singular fibers are with respect
to deformations of the Hamiltonian. For a 3-dimensional parameter space, simple singularities, i.e.,
Dirac points, are ``magnetic monopoles'' \cite{Berry} and are expected to be topologically stable. This,
and the evolution of other types of singularities, is further discussed in \cite{impure}.
We will also focus on making the
type of analysis explicit in the non--commutative geometry language.
There should be some kind of characteristic classes and parings much like in the
setting of the quantum Hall effect as presented in \cite{BE,Marcolli}.
\section*{Acknowledgments}
RK thankfully acknowledges
support from NSF DMS-0805881.
BK thankfully acknowledges support from the NSF under the grant PHY-0969689.
Any opinions, findings and conclusions or
recommendations expressed in this
material are those of the authors and do not necessarily
reflect the views of the National Science Foundation.
Part of this work was completed when RK was visiting the IAS in Princeton,
the IHES in Bures--sur--Yvette, the Max--Planck--Institute in Bonn and the University of Hamburg with a Humboldt fellowship. He gratefully acknowledges
their contribution. Likewise BK extends her gratitude to the Physics Department
of Princeton, where part of this work was completed and to the DESY theory group where the finishing touches for this article were made.
The authors furthermore thank D. Berenstein, A. Libgober, M.~Marcolli and T. Spencer
for discussions which were key to formalizing and finalizing our concepts.
One can find most of the material below in textbooks
such as \cite{Chav}, or in the survey article \cite{Kar}.
We begin with some basic formulas regarding our model space $(M_\kappa,
g_\kappa)$, the complete, simply connected space with constant sectional
curvature $-\kappa^2$.
First observe that, by the Cartan-Hadamard theorem, $M_\kappa$ is diffeomorphic
to $\R^n$, so we use global polar coordinates. In these
coordinates, the metric has the form
\begin {equation} \label{model-metric}
g_\kappa = dr^2 + \frac{1}{\kappa^2} \sinh^2 (\kappa r) d\theta^2,
\end {equation} where $d\theta^2$ is the round metric on
the unit sphere.
Using the expansion
$$\frac{\sinh(\kappa r)}{\kappa} = r + \frac{1}{3!} \kappa^2 r^3 + \frac{1}{5!}
\kappa^4 r^5 + \cdots,$$
we (formally) recover the Euclidean metric in polar coordinates as $\kappa \rightarrow
0^+$:
$$g_0 = dr^2 + r^2 d\theta^2 = \lim_{\kappa \rightarrow 0} \left [
dr^2 + \frac{1}{\kappa^2} \sinh^2 (\kappa r)d\theta^2 \right ]. $$
It is also convenient to observe that $\kappa^{-1} \sinh (\kappa r)>
r$ for all $\kappa>0$; geometrically, this says geodesics spread apart
more rapidly (in fact, exponentially more rapidly) in hyperbolic space
than in Euclidean space. From \eqref{model-metric} we see that
\begin {equation} \label {model-volumes1}
|\partial B_r|_\kappa = n\omega_n \kappa^{1-n} (\sinh (\kappa r)
)^{n-1} \end {equation}
and
\begin {equation} \label{model-volumes2}
|B_r|_\kappa = n\omega_n \kappa^{1-n}
\int_0^r (\sinh (\kappa t) )^{n-1} dt = v_\kappa(r),\end {equation}
where $B_r$ is a geodesic ball of radius $r$, and $\omega_n$
is the volume of an $n$-dimensional Euclidean unit ball. Later,
it will be convenient to invert the model volume function $v_\kappa(r)$,
and write its inverse as $r_\kappa(v)$, which we call the volume radius.
Again, we can recover the familiar Euclidean formulae by taking a
limit as $\kappa \rightarrow 0$:
$$|\partial B_r|_0 = n \omega_n r^{n-1}, \quad |B_r|_0 = \omega_n r^n
= v_0(r), \quad r_0(v) = \left ( \frac{v}{\omega_n} \right )^{1/n}.$$
The first eigenfunction $\psi_\kappa$ of a geodesic ball in the
model space $(M_\kappa, g_\kappa)$ is radial, and
so it satisfies
\begin {equation} \label{ball-eigen1}
-\lambda \psi_\kappa = \Delta_\kappa \psi_\kappa = (\sinh (\kappa r))^{1-n}
((\sinh (\kappa r))^{n-1} \psi_\kappa' )', \end {equation}
where we use $'$ to denote differentiation with respect to $r$. (Where
it can be understood from context, we suppress the subscript $\kappa$.)
If we change variables to volume and write $\psi^* (v) = \psi(r_\kappa(v))$
this equation becomes
$$-\lambda \psi^* (v) = n^2 \omega_n^2 \kappa^{2-2n}
\frac{d}{dv} \left ( \sinh^{2n-2}(\kappa r_\kappa(v)) \frac{d\psi^*}{dv} \right ),$$
which we can integrate once to obtain
\begin {equation} \label{ball-eigen2}
-(\psi^*)' (v) = n^{-2} \omega_n^{-2} \lambda \left [ \frac{\sinh(\kappa
r_\kappa (v))}{\kappa} \right ]^{2-2n} \int_0^v \psi^* (t) dt. \end {equation}
As before, we take a limit as $\kappa \rightarrow 0$ to recover
the Euclidean analogs of \eqref{ball-eigen1} and
\eqref{ball-eigen2}, which are (respectively)
$$-\lambda \psi_0 = \Delta_0 \psi_0 = r^{1-n} (r^{n-1} \psi_0')'$$
and
\begin {equation} \label {ball-eigen3}
-(\psi_0^*)'(v) = n^{-2} \omega_n^{-2/n} \lambda v^{-2 + 2/n}
\int_0^v \psi_0^* (t) dt.\end {equation}
\subsection{Isoperimetric inequalities} \label{isop-sec}
In this section we recall some isoperimetric inequalities for general
Riemannian manifolds. Throughout this section, we take $(M,g)$
to be a complete Riemannian manifold, and we usually place a
bound on its curvature. We also let $\Omega \subset
M$ be a domain with $\partial \Omega \in \mathcal{C}^\infty$ (though
this much regularity is rarely necessary), and with $\bar \Omega$
compact. A theorem of Beckenbach and Rad\'o \cite{BR} states
that if $(M,g)$ is a complete surface with nonpositive Gauss curvature
then
$$|\partial \Omega|_g^2 \geq 4\pi |\Omega|_g,$$
and equality can only occur if $(M,g)$ is the Euclidean plane and
$\Omega$ is a round disk.
The next major break-through is a theorem due to
Croke \cite{Croke}, which states that if $\Sect(g) \leq 0$ then
$$|\partial \Omega|_g \geq c_1(n) |\Omega|_g^{\frac{n-1}{n}},$$
where $c_1(n)$ is an explicit constant and $\Sect(g)$ is the
sectional curvature of $(M,g)$. This inequality is
only an equality when $n=4$ and $\Omega$ is a round ball
in Euclidean space. The next result we quote is due to Kleiner \cite{Klein},
and states that if $\dim(M) = 3$ and $\Sect(g) \leq -\kappa^2 \leq 0$
then $|\partial \Omega|_g \geq |\partial B_r|_\kappa$,
where $B_r$ is a geodesic ball in the model space $(M_\kappa,
g_\kappa)$, with $|\Omega|_g = |B_r|_\kappa$. One only has equality
in Kleiner's result if $\Omega$ is a model geodesic ball. It is worth remarking
that Kleiner's proof relies on the Gauss-Bonnet formula, so it
can only work in dimension three, while Croke's proof can only be
sharp in dimension four. To date, these are the only general results one
can find with no restriction on the size of $\Omega$.
More recently, Morgan and Johnson \cite{MJ} proved a result for compact
manifolds $(M,g)$, so long as $|\Omega|_g$ is sufficiently small. Their results
state that if $\Sect (g) \leq -\kappa^2$ and $|\Omega|_g$ is sufficiently
small then the same inequality $|\partial \Omega|_g \geq |\partial
B_r|_\kappa$ holds, where again $B_r$ is a geodesic ball in the
model space $(M_\kappa, g_\kappa)$ with $|B_r|_\kappa =
|\Omega|_g$. Later, Druet \cite{Druet1} strengthened the Morgan-Johnson
result to the point that one only needs a bound on the scalar curvature
$S_g$ of $g$, of the form $S_g \leq -n(n-1)\kappa^2$. He also
shows that these results hold when $(M,g)$ is complete
and $\Omega$ is contained in a small geodesic ball (whose radius
might depend on position). Again, one can
only have equality in either of these theorems if $\Omega$ is a
geodesic ball in the model space.
We can use our notation from the previous section to write these inequalities
as
\begin {equation} \label {isop-small-vol}
\Omega \textrm{ sufficiently small } \Rightarrow
|\partial \Omega|_g \geq n \omega_n \kappa^{1-n} (\sinh(\kappa
r_\kappa (v)))^{n-1}, \end {equation}
where $r_\kappa(v)$ is the volume radius function, which
inverts \eqref{model-volumes2}. Again, we can contrast this with the
Euclidean case, which states that $|\partial \Omega|_0 \geq n \omega_n^{1/n}
|\Omega|^{\frac{n-1}{n}}= n\omega_n (r_0(v))^{n-1}$.
Using these later forms of the isoperimetric inequality for small
volumes, Druet \cite{Druet2} and Fall \cite{Fall} proved a
Faber-Krahn theorem, and in fact obtained stability estimates. More
precisely, they showed that if $\Sect(g) \leq -\kappa^2$ and $|\Omega|_g$
is small, then $\lambda(\Omega) \geq \lambda(B_r)$,
where $B_r$ is the geodesic ball in the model space as
before. Moreover, they estimate the difference $\lambda(\Omega)
- \lambda(B_r)$, again when $|\Omega|_g$ is small.
\section {Rearrangements and reverse-H\"older inequalities}
\label{rearrange-sec}
In this section we discuss a rearrangement of the first eigenfunction
$\phi$ of $\Omega$, and use it to prove an integro-differential inequality
similar to that of Talenti \cite{Tal}. Next, we obtain inequalities which
generalize the results of Chiti \cite{Chiti1, Chiti2} to the Riemannian
setting. As outlined in our introduction, our standing hypotheses will
be that \eqref{isop-small-vol} holds. While the precise statements below
have not yet appeared in the
literature (to our knowledge), we suspect that much of this section is,
in the words of A. Treibergs, ``well-known to those who know it well.''
Recall that the first eigenfunction $\phi$ satisfies
\begin {equation} \label {eigenfunction1}
\lambda(\Omega) = \frac{\int_\Omega |\nabla \phi|^2 \,dm}
{\int_\Omega \phi^2 \,dm} = \inf \left \{ \frac{\int_\Omega
|\nabla u|^2 \,dm}{\int_\Omega u^2 \,dm} : u \in W^{1,2}_0(\Omega)
\right \}, \end {equation}
or, alternatively,
\begin {equation} \label {eigenfunction2}
\Delta_g \phi + \lambda(\Omega) \phi = 0, \quad \left.
\phi\right |_{\partial \Omega} = 0, \quad \phi> 0
\textrm{ inside }\Omega. \end {equation}
Let $m = \sup_\Omega \phi$, and for $0 \leq t \leq m$ define
\begin {equation} \label{dist-funct}
D_t = \{ \phi > t\}, \qquad \mu(t) = |D_t|_g.
\end {equation}
By the co-area formula, we have
\begin {equation} \label {coarea}
\mu(t) = \int_t^m \int_{\partial D_\tau} \frac{d\sigma}{|\nabla \phi|}
\,d\tau, \mbox{ so that } \mu'(t) = -\int_{\partial D_t}
\frac{d\sigma}{|\nabla \phi|} < 0.\end {equation}
Here $d\sigma$ is the $(n-1)$-dimensional volume element induced on
$\partial D_t$ by its inclusion in $\Omega$.
Therefore, $\mu$ is monotone, and so it has an inverse
function we call $\phi^*(v)$, defined by
$$\phi^*(v) = \inf \{t \in [0,m]: \mu(t) < v\}.$$
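Concretely, on a discretized domain $\phi^*$ is just the decreasing rearrangement of the sampled values of $\phi$, ordered by accumulated cell volume. A small illustrative sketch of ours:
\begin{verbatim}
import numpy as np

def decreasing_rearrangement(phi_values, cell_volumes):
    # sort the sampled values decreasingly and accumulate the volume
    # of the cells on which each value is attained
    order = np.argsort(phi_values)[::-1]
    vals = np.asarray(phi_values, dtype=float)[order]
    vols = np.cumsum(np.asarray(cell_volumes, dtype=float)[order])
    return vals, vols

# toy example: phi sampled on three cells of volumes 0.5, 0.2, 0.3
vals, vols = decreasing_rearrangement([0.4, 1.0, 0.7], [0.5, 0.2, 0.3])
print(vals)  # [1.0 0.7 0.4]
print(vols)  # [0.2 0.5 1.0]: phi^*(v)=1.0 for v<0.2, 0.7 for 0.2<=v<0.5, ...
\end{verbatim}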
While the following is an easy adaptation of equation (34) of \cite{Tal},
we include its proof for the reader's convenience.
\begin {lemma} Let $\Omega \subset (M, g)$ be a domain with compact closure,
smooth boundary, and sufficiently small that \eqref{isop-small-vol} holds.
Then the function $\phi^*$ satisfies
\begin {equation} \label {eigenfunction3}
-(\phi^*)'(v) \leq n^{-2} \omega_n^{-2} \lambda(\Omega)
\left [ \frac{\sinh(\kappa r_\kappa(v))}{\kappa} \right ]^{2-2n}
\int_0^v \phi^*(t) dt.
\end {equation}
Moreover, equality can only occur if $\Omega$ is isometric to a
geodesic ball in the model space $(M_\kappa, g_\kappa)$.
\end {lemma}
\begin {proof} Let $\lambda = \lambda(\Omega)$.
First observe
$$\lambda \int_{D_t} \phi dm =
- \int_{D_t} \Delta_g \phi dm = -\int_{\partial D_t} \frac{\partial \phi}
{\partial \eta} d\sigma = \int_{\partial D_t} |\nabla \phi| d\sigma,$$
where we have used the divergence theorem and the fact that $\phi$ is
constant on $\partial D_t$. Next we use Cauchy-Schwarz to see that
$$|\partial D_t|_g \leq \left [ \int_{\partial D_t} |\nabla \phi| d\sigma
\int_{\partial D_t} \frac{d\sigma}{|\nabla \phi|} \right ]^{1/2} =
\left [ -\lambda \mu'(t) \int_{D_t} \phi dm \right ]^{1/2}.$$
Squaring the above inequality and using \eqref{isop-small-vol} yields
\begin {equation} \label{integro-diff1}
-\lambda \mu'(t) \int_{D_t} \phi dm \geq |\partial D_t|_g^2
\geq |\partial B_r|_\kappa^2 = n^2 \omega_n^2 \kappa^{2-2n}
(\sinh(\kappa r_\kappa (v)))^{2n-2}. \end {equation}
Finally we change variables to $v = \mu(t)$, and use the fact that
$\mu'(t) = \frac{1}{(\phi^*)'(v)}$. This transforms \eqref{integro-diff1}
into
$$-(\phi^*)'(v) \leq n^{-2} \omega_n^{-2} \kappa^{2n-2} \lambda
(\sinh(\kappa r_\kappa(v)))^{2-2n} \int_0^v \phi^*(t) dt,$$
as claimed. \end {proof}
\noindent It is vital that \eqref{eigenfunction3} and \eqref{ball-eigen2} have
essentially the same right hand side.
\noindent We also record Talenti's original equality for the eigenfunction, which is
\begin {equation} \label {eigenfunction4}
-(\phi^*)' (v) \leq n^{-2} \omega_n^{-2/n} \lambda v^{-2 + 2/n} \int_0^v \phi^*(t) dt.
\end {equation}
This inequality holds when $\Omega \subset (M,g)$ is a domain with compact closure and
smooth boundary, with sufficiently small volume, and $S_g \leq 0$.
To see \eqref{eigenfunction4}, one can either evaluate a limit
of \eqref{eigenfunction3} as $\kappa \rightarrow 0$ or use the same
proof with the classical form of the isoperimetric inequality.
We can adapt the arguments of \cite{Chiti1, Chiti2} to our Riemannian
setting.
\begin {thm} \label{chiti-thm1}
Let $(M,g)$ and $\Omega$ be as above, so that \eqref{isop-small-vol}
applies. Let $B^* \subset (M_\kappa, g_\kappa)$ be a ball in the model space with the same fundamental frequency as $\Omega$, so that $\lambda(B^*) =
\lambda(\Omega) = \lambda$. Let
$\phi$ and $\psi$ be the first Dirichlet eigenfunctions of $\Omega$ and of $B^*$
respectively, normalized so that $\|\phi\|_{L^\infty(\Omega)}
= \|\psi\|_{L^\infty(B^*)}$. Then $\phi^*(v) \geq \psi^*(v)$ for
$0 \leq v \leq |B^*|_\kappa$, and equality can only occur for $v>0$ if
$\Omega$ is isometric to $B^*$. \end {thm}
\begin {proof} First observe that, by the Faber-Krahn inequality, we have
$|\Omega|_g \geq |B^*|_\kappa$, so that both functions $\psi^*$ and $\phi^*$
are well-defined on the interval $[0,|B^*|_\kappa]$. Moreover, if $|\Omega|_g
= |B^*|_\kappa$, then $\Omega$ must be isometric to $B^*$, and we have
nothing to prove. Therefore, we can take $|\Omega|_g > |B^*|_\kappa$
without loss of generality. By our normalization of $\psi$, we also know
$$\phi^*(0) = \psi^*(0) = m, \quad \psi^*(|B^*|_\kappa) = 0, \quad
\phi^* > 0 \textrm{ on }[0,|B^*|_\kappa].$$
Therefore, there exists $k>1$ such that $k \phi^*(v) > \psi^*(v)$
for all $v \in [0,|B^*|_\kappa]$. Define
$$k_0 = \inf \{ k > 1 : k \phi^*(v) > \psi^*(v)
\textrm{ on }[0,|B^*|_\kappa] \}.$$
If $k_0 = 1$ then we've completed the proof, and otherwise there
exists $v_0 \in (0,|B^*|_\kappa)$ such that $k_0 \phi^*(v_0)
= \psi^*(v_0)$. If we let
$$u^*(v) = \left \{ \begin {array}{rl} k_0\phi^*(v) & 0 \leq v \leq v_0 \\
\psi^*(v) & v_0 < v \leq |B^*|_\kappa, \end {array} \right. $$
then, by \eqref{eigenfunction3} and \eqref{ball-eigen2}, we have
\begin {equation} \label{chiti1}
-(u^*)'(v) \leq n^{-2} \omega_n^{-2} \lambda \left [
\frac{\sinh(\kappa r_\kappa(v))}{\kappa} \right ]^{2-2n}
\int_0^v u^*(t) dt .\end {equation}
Now define a radial test function on $B^*$ by $u(r) = u^*(v_\kappa(r))$. We
use the chain rule and
$$\frac{dv_\kappa}{dr} = n \omega_n \left ( \frac{\sinh
(\kappa r)}{\kappa} \right )^{n-1}$$
to see that
\begin {eqnarray*}
\int_{B^*} |\nabla u|^2 dm & = & \int_0^{|B^*|_\kappa}
n^2 \omega_n^2 \left [ \frac{\sinh (\kappa r_\kappa(v))}
{\kappa} \right ]^{2n-2} (-(u^*)'(v))^2 dv \\
& \leq & \lambda \int_0^{|B^*|_\kappa} (-(u^*)'(v))
\int_0^v u^*(\tau) d\tau \, dv \\
& = & \lambda \int_0^{|B^*|_\kappa} u^*(\tau) \int_\tau^{|B^*|_\kappa}
(-(u^*)'(v)) dv d\tau = \lambda \int_0^{|B^*|_\kappa} (u^*(\tau))^2d\tau \\
& = & \lambda \int_{B^*} u^2 dm.\end {eqnarray*}
However, this is impossible unless $u = \psi$, which would contradict
$k_0 > 1$.
\end {proof}
We can integrate the inequality in Theorem \ref{chiti-thm1}
to obtain the following (scale-invariant) corollary.
\begin {cor} \label{chiti-thm2}
Let $\Omega \subset (M,g)$ be as above, and let $B^*$
be the geodesic ball in the model space $(M_\kappa, g_\kappa)$
with $\lambda(\Omega) = \lambda(B^*)$. Let $\psi$ be the
first eigenfunction of $B^*$ and let $\phi$ be the first eigenfunction
of $\Omega$. Then for all $p>0$ we have
$$\frac{\|\phi\|_{L^p(\Omega)}}{\|\phi\|_{L^\infty(\Omega)}}
\geq \frac{\|\psi\|_{L^p(B^*)}}{\|\psi\|_{L^\infty(B^*)}}.$$
Equality can only occur if $\Omega$ is isometric to $B^*$.
\end {cor}
One can find a version of the following theorem, which reverses the
standard H\"older inequality, in the hyperbolic setting
in Section 9 of \cite{BL}. Both proofs utilize Chiti's method
from \cite{Chiti2}.
\begin {thm} With the same $\Omega$ as above and any choice
$0 < p < q < \infty$, there exists a
positive, finite constant $C = C(n,p,q,\kappa, \lambda)$ such that
the first eigenfunction $\phi$ of $\Omega$, with eigenvalue $\lambda$,
satisfies
\begin {equation} \label{reverse-holder1}
\left ( \int_\Omega \phi^p dm \right )^q \geq C \left (
\int_\Omega \phi^q dm \right )^p. \end {equation}
Equality can only occur if $\Omega$ is isometric to
$B^*$.
\end {thm}
In fact, it will be transparent from the proof that
\begin {equation} \label{holder-const1}
C = \frac{\left ( \int_{B^*} \psi^p dm \right )^q}
{\left ( \int_{B^*} \psi^q dm \right )^p},\end {equation}
where $B^*$ is the geodesic ball in the model space
with $\lambda(B^*) = \lambda = \lambda(\Omega)$,
and $\psi$ is its first eigenfunction.
\begin {proof} We use the same approach as in
the proof of Theorem \ref{chiti-thm1}, but this time normalize $\psi$
such that
$$\int_{B^*}\psi^p dm = \int_\Omega \phi^p dm.$$
Thus, by Corollary \ref{chiti-thm2} above, $\| \psi\|_{L^\infty(B^*)} \geq
\| \phi\|_{L^\infty(\Omega)}$, with equality if and only if $\Omega$
is isometric to $B^*$. We may therefore assume
\begin {equation} \label{rearrange1}
\psi^*(0) = \| \psi\|_{L^\infty(B^*)} >\|\phi\|_{L^\infty(\Omega)} = \phi^*(0).
\end {equation}
We also know, as before, that
$$\psi^*(|B^*|_\kappa) = 0, \quad \phi^* > 0 \textrm{ on }
[0, |B^*|_\kappa],$$
which combined with \eqref{rearrange1} tells us the graphs
of $\phi^*$ and $\psi^*$ must cross, and not just touch, at least once on the interval
$[0,|B^*|_\kappa]$. Define
$$v_0 = \sup \{ v \in (0, |B^*|_\kappa) : \phi^*(\tilde v) \leq \psi^*(\tilde v)
\textrm{ for all }\tilde v \in (0,v) \}, $$
so that we have
$$0 < v_0 < |B^*|_\kappa, \quad \psi^* \geq \phi^* \textrm{ in }[0, v_0],
\quad \phi^*(v_0) = \psi^*(v_0). $$
Additionally, there must exist $\delta>0$ such that $\phi^*(v) > \psi^*(v)$
for $v \in (v_0, v_0 + \delta)$.
We claim that actually $\phi^* > \psi^*$ in the interval $(v_0, |B^*|_\kappa]$.
Indeed, if this were not the case then there would exist $v_1$
such that
$$v_0 < v_1 < |B^*|_\kappa, \quad \psi^*(v_1) = \phi^*(v_1),
\quad \phi^*(v) > \psi^*(v) \textrm { for }v_0 < v< v_1.$$
This allows us to define a test function for $B^*$ as
$$u^*(v) = \left \{ \begin {array} {rl} \psi^*(v) & 0 \leq v \leq v_0 \\
\phi^*(v) & v_0 \leq v \leq v_1 \\ \psi^* (v) & v_1 \leq v \leq |B^*|_\kappa .
\end {array} \right . $$
As before, our test function satisfies \eqref{chiti1},
$$-(u^*)'(v) \leq n^{-2} \omega_n^{-2} \lambda \left [
\frac{\sinh(\kappa r_\kappa(v))}{\kappa} \right ]^{2-2n}
\int_0^v u^*(t) dt,$$
and we can define a radial test function $u$ on $B^*$ by
$u(r) = u^* (v_\kappa(r))$, which in turn satisfies
\begin {eqnarray*}
\int_{B^*} |\nabla u|^2 dm & = & \int_0^{|B^*|_\kappa}
n^2 \omega_n^2 \left [ \frac{\sinh (\kappa r_\kappa(v))}
{\kappa} \right ]^{2n-2} (-(u^*)'(v))^2 dv \\
& \leq & \lambda \int_0^{|B^*|_\kappa} (-(u^*)'(v))
\int_0^v u^*(\tau) d\tau \, dv \\
& = & \lambda \int_0^{|B^*|_\kappa} u^*(\tau) \int_\tau^{|B^*|_\kappa}
(-(u^*)'(v)) dv d\tau = \lambda \int_0^{|B^*|_\kappa} (u^*(\tau))^2d\tau \\
& = & \lambda \int_{B^*} u^2 dm.\end {eqnarray*}
As before, this is only possible if $u= \psi$, which contradicts
our assumption $\psi^*(0) > \phi^*(0)$.
So far, we have shown there exists $v_0 \in (0, |B^*|_\kappa)$
such that $\psi^*\geq\phi^*$ on $(0,v_0)$ and $\phi^* > \psi^*$
on $(v_0, |B^*|_\kappa)$. We extend $\psi^*$ to be zero on the
interval $[|B^*|_\kappa, |\Omega|_g]$, and claim that
\begin {equation} \label{chiti2}
v\in [0, |\Omega|_g] \Rightarrow
\int_0^v (\psi^*(\tau))^p d\tau \geq \int_0^v (\phi^*(\tau))^p d\tau.
\end {equation}
To prove this claim, we let
$$I(v) = \int_0^v (\psi^*(\tau))^p d\tau - \int_0^v
(\phi^*(\tau))^p d\tau$$
and observe
$$I(0) = I(|\Omega|_g) = 0, \quad I' (v) = (\psi^*(v))^p
- (\phi^*(v))^p.$$
Thus $I$ is increasing on the interval $[0, v_0)$ and decreasing
on the interval $(v_0, |\Omega|_g]$. It follows immediately that
$I(v) \geq 0$ for $0 \leq v \leq v_0$. If we had $I(v_1) < 0$ for some
$v_1 \in (v_0, |\Omega|_g)$ then, because $I$ is decreasing
in this interval, we would also have $I(|\Omega|_g) < 0$, which is a
contradiction. We conclude \eqref{chiti2}. It follows from an
inequality of Hardy, Littlewood, and P\'olya \cite{HLP} that for all $q>p$
we have
$$\left ( \int_\Omega \phi^q dm \right )^{1/q} \leq \left ( \int_{B^*} \psi^q
dm \right )^{1/q} = \frac{\left ( \int_{B^*} \psi^q dm \right )^{1/q} }
{\left ( \int_{B^*} \psi^p dm \right )^{1/p}} \cdot \left ( \int_{\Omega}
\phi^p dm \right )^{1/p},$$
which we can rearrange to read
$$\frac{\left (\int_\Omega \phi^p dm \right )^{1/p}}
{\left ( \int_\Omega \phi^q dm \right )^{1/q}} \geq \frac
{\left ( \int_{B^*} \psi^p dm \right )^{1/p}}{\left (\int_{B^*}
\psi^q dm \right )^{1/q}} = \tilde C.$$
Raising this inequality to the power $pq$, we then obtain
\[
\left ( \int_\Omega \phi^p dm \right )^q \geq \tilde C^{pq}
\left (\int_\Omega \phi^q dm \right )^p.
\qedhere\]
\end {proof}
In the case $S_g \leq 0$ we can extract the explicit dependence of the constant
$C$ in \eqref{reverse-holder1} on the eigenvalue $\lambda$. The
dependence on the eigenvalue in the hyperbolic case is more
challenging to understand, because the eigenfunctions
on geodesic balls do not scale in the curved setting (see, for
instance, Section 3 of \cite{BL}).
\begin {cor} Suppose $\Omega$ is a domain in $(M,g)$, where $S_g \leq 0$,
which is sufficiently small so that \eqref{isop-small-vol} applies.
Let $\phi$ be its first eigenfunction, with eigenvalue $\lambda$.
Then there is a constant $K = K(n,p,q)$ such that
\begin {equation} \label{reverse-holder2}
\left ( \int_\Omega \phi^p dm \right )^q \geq K
\lambda^{n(p-q)/2} \left ( \int_\Omega \phi^q dm \right )^p.
\end {equation}
\end {cor}
\begin {proof} This time our comparison domains are round
balls in Euclidean space, and the dilation of an eigenfunction
on a ball is an eigenfunction on the corresponding dilated ball.
We have, according to \eqref{holder-const1}
$$C = \frac{\left ( \int_{B^*} \psi^p dm \right )^q}
{\left ( \int_{B^*} \psi^q dm \right )^p}.$$
Denote the Euclidean radius of $B^*$ by $\rho$, and change variables
to the unit ball by defining the function $\tilde \psi(r)
= \psi (r\rho)$, so that
$$C = \rho^{n(q-p)} \frac{\left (\int_{B_1} \tilde \psi^p \,dm
\right )^q} {\left (\int_{B_1} \tilde \psi^q \,dm \right )^p}.$$
Now, $\tilde \psi$ is the first eigenfunction on the unit
ball in Euclidean space, and all that remains is to recall the
scaling law for eigenvalues: $\lambda (B^*) = \rho^{-2} \lambda(B_1)$.
Thus we see that
$$\rho = \left ( \frac{\lambda (B^*)}{\lambda(B_1)}
\right )^{-1/2} = \left ( \frac{\lambda (\Omega)}{\lambda(B_1)}
\right )^{-1/2},$$
and so
\[
C = \lambda^{-\frac{n}{2} (q-p)} \lambda(B_1)^{\frac{n}{2}
(q-p)} \frac{\left ( \int_{B_1} \tilde \psi^p \,dm \right )^q}
{\left ( \int_{B_1} \tilde \psi^q \,dm \right )^p}.
\qedhere
\]
\end {proof}
We will later use the case of $p=1$ and $q=2$, which reads
\begin {equation} \label{reverse-holder3}
\left ( \int_\Omega \phi \,dm \right )^2 \geq K \lambda^{-n/2}
\int_\Omega \phi^2 \,dm.\end {equation}
In the case of $\dim(M) = 2$ we recover an inequality of
Payne and Rayner \cite{PR}:
\begin {equation} \label{reverse-holder4}
\left ( \int_\Omega \phi \,dm \right )^2 \geq \frac{4\pi}{\lambda}
\int_\Omega \phi^2 \,dm.
\end {equation}
Here we have used the sharp version of the isoperimetric inequality
of Beckenbach and Rad\'o \cite{BR} for complete surfaces with
nonpositive Gauss curvature. It is also important to notice that in the
two-dimensional case we do not place any restriction on the size
of $\Omega$.
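As a consistency check on \eqref{reverse-holder4}, equality indeed holds
when $\Omega$ is a flat unit disk. In that case $\lambda = j^2$, where $j$
is the first positive zero of the Bessel function $J_0$, and
$\phi(r) = J_0(jr)$, so the classical identities
$$\int_0^1 J_0(jr)\, r\, dr = \frac{J_1(j)}{j}, \qquad
\int_0^1 J_0(jr)^2\, r\, dr = \frac{J_1(j)^2}{2}$$
give
$$\left ( \int_\Omega \phi\, dm \right )^2 = \frac{4\pi^2 J_1(j)^2}{j^2}
= \frac{4\pi}{\lambda} \int_\Omega \phi^2\, dm.$$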
The reverse Cauchy-Schwarz inequality \eqref{reverse-holder4} can be rewritten as a geometric
isoperimetric inequality for the (singular) conformal metric
$\tilde g = |\nabla \phi|^2 g$. We have the following corollary.
\begin {cor} Let $(M,g)$ be a surface with nonpositive Gauss curvature,
and let $\Omega$ be a domain with $\bar \Omega$ compact and
$\partial \Omega \in \mathcal{C}^\infty$. Place the (singular) conformal
metric $\tilde g = |\nabla \phi|^2 g$ on $\Omega$, where $\phi$ is the
first Dirichlet eigenfunction of $\Delta_g$ on $\Omega$. Then, with
respect to $\tilde g$, we have
$$\tilde L^2 \geq 4\pi \tilde A,$$
and equality can only occur if $\Omega$ is isometric to a flat
disk.
\end {cor}
\begin {proof} We begin with the left hand side of \eqref{reverse-holder4}. We
have
$$\left ( \int_\Omega \phi dm \right )^2 = \frac{1}{\lambda^2} \left ( \int_\Omega
\Delta \phi dm \right )^2 = \frac{1}{\lambda^2} \left ( \int_{\partial \Omega}
\frac{\partial \phi}{\partial \eta} d\sigma \right )^2 = \frac{1}{\lambda^2}
\left ( \int_{\partial \Omega} |\nabla \phi| d\sigma \right )^2 = \frac{\tilde L^2}
{\lambda^2},$$
where we have used the PDE satisfied by $\phi$, the divergence theorem, and
the fact that $\phi$ is constant on $\partial \Omega$.
On the other hand, the right hand side of \eqref{reverse-holder4} is
$$\frac{4\pi}{\lambda} \int_\Omega \phi^2 dm = \frac{4\pi}{\lambda^2}
\int_{\Omega} |\nabla \phi|^2 dm = \frac{4\pi \tilde A}{\lambda^2}.$$
The result follows. \end {proof}
\section {Monotonicity of the first eigenvalue} \label{main-thm-sec}
In this section we study the evolution of $\lambda$ as
$\Omega$ evolves.
A key ingredient is the reverse-H\"older inequality for the first eigenfunction
we developed in Section \ref{rearrange-sec}. Another key
ingredient is, naturally, the Hadamard variation formula for the first eigenvalue.
We consider a one-parameter family of
diffeomorphisms $\zeta (t,p) : (-\epsilon, \epsilon) \times M \rightarrow
M$, and let $\Omega_t = \zeta(t,\Omega)$. The family of
mappings $\zeta$ is the flow of the time-dependent vector field
$\chi$, where
\begin {equation} \label{flow-eqn}
\frac{\partial \zeta}{\partial t} (t,p) = \chi (t,p).\end {equation}
In this way, if $\Omega = \Omega_0$ satisfies our standing
hypotheses, then so will $\Omega_t$ for $t$ sufficiently small.
We let $\lambda(t) = \lambda(\Omega_t)$, and use a dot to denote
differentiation with respect to $t$. A classical theorem of
Hadamard \cite{Had} states that
\begin {equation} \label {had-var}
\dot \lambda(0) = - \int_{\partial \Omega} \langle \chi, \eta
\rangle \left ( \frac{\partial \phi}{\partial \eta} \right )^2 d\sigma,
\end {equation}
where $\phi$ is the first eigenfunction of $\Omega$, normalized so
that $\int_\Omega \phi^2 dm = 1$. We include the proof
for the reader's convenience.
\begin {proof} First we compute the time derivative
of the boundary values
of the normalized first eigenfunction $\phi$. Taking
a derivative of the condition
\[
\phi\big(t,\zeta(t,p)\big) = 0,\ p \in
\partial \Omega
\]
with respect to $t$ and using \eqref{flow-eqn}, we
obtain
\[
\dot\phi\big(t,\zeta(t,p)\big) +
\big\langle \nabla\phi\big(t, \zeta(t,p)\big),
\chi(p) \big\rangle = 0.
\]
Here and later, the gradient refers only to the
spatial derivative. Set $t=0$ and use the fact
that $\phi$ is constant along $\partial \Omega_t$ to
obtain
\begin {equation} \label {first-var-a}
\dot \phi(0,p)
= - \big\langle \nabla \phi (0,p), \chi(p)
\big\rangle = -\Big\langle \left.\frac{\partial \phi}
{\partial \eta} \right|_{(0,p)} \eta(p), \chi(p)
\Big\rangle, \quad p \in \partial \Omega.
\end {equation}
Next we take the derivative of the eigenfunction
equation
\begin {equation} \label {first-var-b}
\Delta \phi\big(t,\zeta(t,p)\big) + \lambda(t) \phi\big(t,\zeta(t,p)\big) = 0
\end {equation}
with respect to $t$. The chain rule gives
\begin {eqnarray*}
0 & = & \Delta \dot\phi + \langle \nabla \Delta \phi,
\chi \rangle + \lambda(t)
\left [ \dot \phi + \langle \nabla \phi,
\chi\rangle \right ] + \dot \lambda(t) \phi \\
& = & \Delta \dot\phi + \lambda(t) \dot\phi +
\dot\lambda(t) \phi ,
\end {eqnarray*}
where the second equality uses $\nabla \Delta \phi = -\lambda(t) \nabla \phi$
to cancel the two terms involving $\chi$.
Setting $t=0$ and rearranging yields
\begin {equation} \label{first-var-c}
\Delta \left. \dot\phi \right|_{t=0} + \lambda(0)
\left. \dot\phi \right|_{t=0}
= -\dot\lambda(0) \phi\big\vert_{t=0} \quad
\text{ in } \Omega.
\end {equation}
We multiply (\ref{first-var-b}), with $t=0$,
by $\dot\phi\big\vert_{t=0}$ and
multiply (\ref{first-var-c}) by $\phi$, subtract and
obtain
\begin {equation} \label {first-var-d}
\dot\lambda(0) \phi^2(0,p) = \dot\phi(0,p)
\Delta \phi(0,p) - \phi(0,p)
\Delta \dot\phi(0,p), \quad p \in \Omega.
\end {equation}
Integrate \eqref{first-var-d} over $\Omega$ and
use the fact that $\int_{\Omega_t} \phi^2 dm = 1$
to obtain
\begin {eqnarray*}
\dot\lambda(0) & = & \int_{\Omega} \dot\phi \Delta \phi
- \phi \Delta \dot\phi dm\\
& = & \int_{\partial \Omega} \dot\phi
\frac{\partial \phi}{\partial \eta} d\sigma-
\int_{\Omega} \big\langle \nabla \phi,
\nabla \dot\phi \big \rangle dm+
\int_{\Omega} \big\langle \nabla \phi,
\nabla \dot\phi \big\rangle dm- \int_{\partial \Omega}
\phi \frac{\partial \dot\phi}{\partial \eta} d\sigma \\
& = &\int_{\partial \Omega} \dot\phi
\frac{\partial \phi}{\partial \eta} d\sigma \\
& = & - \int_{\partial \Omega} \frac{\partial \phi}
{\partial\eta} \langle \nabla \phi, \chi \rangle d\sigma\\
& = & - \int_{\partial \Omega} \langle \chi,
\eta\rangle \left ( \frac{\partial \phi}{\partial \eta}
\right )^2 d\sigma,
\end {eqnarray*}
which is equation (\ref{had-var}) as claimed. In the second equality above we
integrated by parts, in the next to last we used \eqref{first-var-a}, and at the last step
we used the fact that $\phi$ is constant on $\partial \Omega$ (and hence $\nabla \phi =
\frac{\partial \phi}{\partial \eta} \eta$ there).
\end {proof}
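As a simple check of \eqref{had-var}, let $\Omega_t$ be the ball of radius
$t$ in $\R^n$ expanding with unit normal speed, so that $\chi = \eta$.
Scaling gives $\lambda(t) = t^{-2}\lambda(B_1)$, hence
$\dot\lambda = -2\lambda/t$; on the other hand, the Rellich identity
$$\int_{\partial \Omega_t} \langle x, \eta \rangle \left (
\frac{\partial \phi}{\partial \eta} \right )^2 d\sigma =
2\lambda \int_{\Omega_t} \phi^2\, dm,$$
with $\langle x, \eta \rangle \equiv t$ on the sphere and
$\int_{\Omega_t} \phi^2\, dm = 1$, shows the right hand side of
\eqref{had-var} is also $-2\lambda/t$.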
We will need to transform \eqref{reverse-holder3} and \eqref{reverse-holder4}
for our use later in bounding $\dot \lambda$.
\begin {lemma} Let $(M,g)$ be a complete Riemannian manifold with
nonpositive scalar curvature, and let $\Omega$ be a sufficiently small
domain in $M$ with $\bar \Omega$ compact and $\partial \Omega \in
\mathcal {C}^\infty$, so that \eqref{isop-small-vol} applies. Let $\phi$
be the first Dirichlet eigenfunction of $\Delta_g$ on $\Omega$, normalized
so that $\int_\Omega \phi^2 dm = 1$. Then
\begin {equation} \label {reverse-holder5}
K \lambda^{2-\frac{n}{2}} \leq \left ( \int_{\partial \Omega}
\frac{\partial \phi}{\partial \eta} d\sigma \right )^2,
\end {equation}
where $K$ is the same constant, depending only on $n$,
in \eqref{reverse-holder3}. In dimension two, this inequality reads
\begin {equation} \label {reverse-holder6}
4\pi \lambda \leq \left ( \int_{\partial \Omega} \frac{\partial \phi}
{\partial \eta} d\sigma \right )^2. \end {equation}
Equality can only occur if $\Omega$ is isometric to a flat ball in the
appropriate dimensional Euclidean space.
\end {lemma}
\begin {proof} By our normalization we have
\begin {eqnarray*}
K \lambda^{-n/2} & = & K \lambda^{-n/2} \int_\Omega \phi^2 dm
\leq \left ( \int_\Omega \phi dm \right )^2 \\
& = & \frac{1}{\lambda^2}
\left ( \int_\Omega \Delta \phi dm \right )^2 = \frac{1}{\lambda^2}
\left ( \int_{\partial \Omega} \frac{\partial \phi}{\partial \eta}
d\sigma \right )^2.\end {eqnarray*}
The result follows. \end {proof}
Recall that we have set $\Omega_t = \zeta(t,\Omega)$, where
$\zeta$ is a one-parameter family of diffeomorphisms on $M$. We
let $\lambda(t) = \lambda(\Omega_t)$, and we are assuming all
the hypotheses relevant to \eqref{isop-small-vol} hold.
\begin {thm} \label{evolution-thm4}
Let $\partial \Omega$ move with velocity $e^{w} \eta$, where
$\eta$ is the unit outward normal of $\partial \Omega$ and $w$ is
a bounded and continuous function. If $n = \dim (M) \geq 3$ then
\begin {equation} \label {evolution1}
\frac{d}{dt} \left [ \lambda^{\frac{n-2}{2}} \right ]
\leq - \left ( \frac{n-2}{2} \right ) \frac{K}{\int_{\partial \Omega}
e^{-w} d\sigma}, \end {equation}
where $K$ is the same constant in \eqref{reverse-holder3}, which
depends only on $n$. If $\dim (M) =2$ then
\begin {equation} \label{evolution2}
\frac{d}{dt} \log (\lambda) \leq -\frac{4\pi}{\int_{\partial \Omega}
e^{-w} d\sigma}. \end {equation}
In either case, equality can only occur if $\Omega$ is
isometric to a round ball in the appropriate dimensional Euclidean
space.
\end {thm}
\begin {proof} By Cauchy-Schwarz,
$$-\int_{\partial \Omega} \frac{\partial \phi}{\partial \eta}
d\sigma = \int_{\partial \Omega} \left ( e^{-w/2} \right ) \left ( e^{w/2}
\left | \frac{\partial \phi}{\partial \eta} \right | \right ) d\sigma \leq \left (
\int_{\partial \Omega} e^{-w} d\sigma \right )^{1/2} \left ( \int_{\partial \Omega}
e^w \left ( \frac{\partial \phi}{\partial \eta} \right )^2 d\sigma \right )^{1/2},$$
so that
\begin {equation} \label{cauchy-schwarz1}
\int_{\partial \Omega} e^w \left ( \frac{\partial \phi}{\partial \eta}
\right )^2 d\sigma \geq \frac{1}{\int_{\partial \Omega} e^{-w} d\sigma}
\left ( \int_{\partial \Omega} \frac{\partial \phi}{\partial \eta} d\sigma
\right )^2. \end {equation}
We first prove \eqref{evolution1}.
Using \eqref{had-var}, \eqref{cauchy-schwarz1},
and \eqref{reverse-holder5}, we see
\begin {eqnarray*}
-\dot \lambda & = & \int_{\partial \Omega} e^{w} \left (
\frac{\partial \phi}{\partial \eta} \right )^2 d\sigma \\
& \geq & \frac{1}{\int_{\partial \Omega} e^{-w} d\sigma}
\left ( \int_{\partial \Omega} \frac{\partial \phi}{\partial \eta}
d\sigma \right )^2 \\
& \geq & \frac{K \lambda^{2-\frac{n}{2}}}{\int_{\partial \Omega}
e^{-w} d\sigma }, \end {eqnarray*}
which we can rearrange to read
$$- \frac{d}{dt} \left [ \frac{2}{n-2} \lambda^{\frac{n-2}{2}}
\right ] = - \lambda^{\frac{n}{2} - 2} \dot \lambda \geq
\frac{K}{\int_{\partial \Omega} e^{-w} d\sigma}.$$
The proof of \eqref{evolution2} is very similar. This time we
replace \eqref{reverse-holder5} with \eqref{reverse-holder6}
to obtain
\begin {eqnarray*}
-\dot \lambda & = & \int_{\partial \Omega} e^w \left (
\frac{\partial \phi}{\partial \eta} \right )^2 d\sigma \\
& \geq & \frac{1}{\int_{\partial \Omega} e^{-w} d\sigma}
\left ( \int_{\partial \Omega} \frac{\partial \phi}{\partial \eta}
d\sigma \right )^2 \\
& \geq & \frac{4\pi \lambda }{\int_{\partial \Omega} e^{-w}
d\sigma} ,\end {eqnarray*}
which we can rearrange to read
$$- \frac{d}{dt} \log \lambda = -\frac{\dot \lambda}{\lambda}
\geq \frac{4\pi}{\int_{\partial \Omega} e^{-w} d\sigma}.$$
\end {proof}
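Integrating these differential inequalities in time makes the monotonicity
quantitative: so long as the standing hypotheses hold along the flow,
\eqref{evolution2} yields
$$\lambda(t) \leq \lambda(0) \exp \left ( -4\pi \int_0^t
\frac{ds}{\int_{\partial \Omega_s} e^{-w}\, d\sigma} \right )$$
in dimension two, and \eqref{evolution1} yields the analogous integrated
bound for $\lambda^{\frac{n-2}{2}}$ when $n \geq 3$.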
Now Theorem \ref{evolution-thm1} follows by taking $w=0$,
Theorem \ref{evolution-thm2} follows by taking $w= \log{k_g}$, and
Theorem \ref{evolution-thm3} follows by taking $w= \log{H}$.
Finally, we apply our technique to the case that $\Omega$ is the
conformal image of a Euclidean ball. We let $(M,g)$ be a complete
Riemannian manifold of dimension $n$ with $S_g \leq 0$ and
let $F:\R^n \rightarrow M$ be a conformal mapping. Let $B_t$
be the ball of radius $t$ in $\R^n$, and let $\Omega_t = F(B_t)$. Letting
$\lambda(t) = \lambda(B_t)$ and $\tilde \lambda(t) = \lambda (F(B_t))$,
we wish to compare $\lambda(t)$ to $\tilde \lambda(t)$. As $t$ increases,
$\partial B_t$ moves with velocity $\eta= \frac{\partial}{\partial r}$, and
(because $F$ is conformal) $\partial \Omega_t$ moves with velocity $|DF|
\tilde \eta$, where $\tilde \eta$ is the outward unit normal of $\Omega_t$.
We let $\phi$ be the first Dirichlet eigenfunction of $\Delta$ on $B_t$,
normalized so that $\int_{B_t} \phi^2 dm = 1$, and let $\tilde \phi$ be
the first Dirichlet eigenfunction of $\Delta_g$ on $\Omega_t$, normalized
so that $\int_{\Omega_t} \tilde \phi^2 dm = 1$. It will be convenient to define
$\psi = \tilde \phi \circ F$, and observe that $|\nabla \psi| = |DF| |\nabla \tilde \phi|$.
\begin {thm} \label{schwarz-lemma1}
Let $F:\R^n \rightarrow M$ be conformal, where $(M,g)$ is complete,
with $S_g \leq 0$ as above. If $n=2$ then
\begin {equation} \label {schwarz-ineq1}
\frac{d}{dt} \log (\tilde \lambda/\lambda) < 0 \end {equation}
unless $F$ is an isometry when restricted to $B_t$. If $n \geq 3$,
$t$ is small enough so that \eqref{isop-small-vol} applies to $\Omega_t$,
and $\int_{\partial B_t} |DF|^{n-2} d\sigma > |\partial B_t| = n \omega_n t^{n-1}$
then
\begin {equation} \label{schwarz-ineq2}
\frac{d}{dt} \left [ \tilde \lambda^{\frac{n-2}{2}} -
\lambda^{\frac{n-2}{2}} \right ] < 0. \end {equation}
\end {thm}
Notice that we recover (a special case of) the Laugesen-Morpurgo result
in \cite{LM} in dimension two. In higher dimensions, this theorem states that
if $F$ is a conformal map with a sufficiently large conformal factor then
$\tilde \lambda^{\frac{n-2}{2}}$ decreases more rapidly than
$\lambda^{\frac{n-2}{2}}$. Thus, our theorem is very much in the
spirit of the results in \cite{LM} and in \cite{BMMPR}.
\begin {proof} First observe that, because $\partial \Omega_t$
moves with velocity $|DF|\tilde \eta$, the Hadamard variation formula
becomes
\begin {equation} \label{had-var2}
-\dot {\tilde \lambda} = \int_{\partial \Omega_t} |DF| \left ( \frac{\partial
\tilde \phi}{\partial \tilde \eta} \right )^2 d\tilde \sigma = \int_{\partial B_t}
|DF|^{n-2} \left ( \frac{\partial \psi}{\partial \eta} \right )^2d\sigma.
\end {equation}
Thus, in dimension $n \geq 3$, the inequality \eqref{reverse-holder5}
gives
\begin {eqnarray*}
K \tilde \lambda^{\frac{4-n}{2}} & \leq &\left ( \int_{\partial
\Omega_t} |\nabla \tilde \phi | d\tilde \sigma \right )^2\\
& = & \left ( \int_{\partial B_t} |DF|^{n-2} |\nabla \psi| d\sigma
\right )^2 \\
& \leq & \int_{\partial B_t} |DF|^{n-2} d\sigma \cdot
\int_{\partial B_t} |DF|^{n-2} |\nabla \psi|^2 d\sigma \\
& = & - \dot{\tilde \lambda} \int_{\partial B_t} |DF|^{n-2} d\sigma,
\end {eqnarray*}
which we can rearrange to give
$$-\frac{d}{dt} \left [ \frac{2}{n-2} \tilde \lambda^{\frac{n-2}{2}} \right ]
= -\tilde \lambda^{\frac{n-4}{2}} \dot {\tilde \lambda} \geq
\frac{K}{\int_{\partial B_t} |DF|^{n-2} d\sigma}.$$
However, the equality case in Theorem \ref{evolution-thm1} tells us
$$-\frac{d}{dt} \left [ \frac{2}{n-2} \lambda^{\frac{n-2}{2}} \right ]
= \frac{K}{|\partial B_t|},$$
so \eqref{schwarz-ineq2} now follows from the inequality $\int_{\partial
B_t} |DF|^{n-2} d\sigma > |\partial B_t|$. In the two-dimensional
case, we use \eqref{reverse-holder6} to see
\begin {eqnarray*}
4\pi \tilde \lambda & \leq & \left ( \int_{\partial \Omega_t}
|\nabla \tilde \phi| d\tilde \sigma \right )^2 = \left (
\int_{\partial B_t} |\nabla \psi| d\sigma \right )^2 \\
& \leq & |\partial B_t| \int_{\partial B_t} |\nabla \psi|^2 d\sigma \\
& = & -\dot {\tilde \lambda} |\partial B_t|, \end {eqnarray*}
which we can rearrange to give
$$-\frac{d}{dt} \log \tilde \lambda = -\frac{\dot {\tilde \lambda}}{\tilde \lambda} \geq
\frac{4\pi}{|\partial B_t|} = - \frac{\dot \lambda}{\lambda} = -\frac{d}{dt}
\log \lambda,$$
where we have again used the equality case of Theorem \ref{evolution-thm1}. This
completes the proof of \eqref{schwarz-ineq1}.
\end {proof}
\begin {thebibliography}{999}
\bibitem {BR} E. F. Beckenbach and T. Rad\'o, \textsl{Subharmonic
functions and surfaces of negative curvature.\/} Trans. Amer. Math.
Soc. {\bf 35} (1933), 662--674.
\bibitem {BL} R. Benguria and H. Linde, \textsl{A second eigenvalue
bound for the Dirichlet Laplacian in hyperbolic space.\/} Duke
Math. J. {\bf 140} (2007), 245--279.
\bibitem{BMMPR} R. Burckel, D.
Marshall, D. Minda, P. Poggi-Corradini,
and T. Ransford. \textsl{Area, capacity, and
diameter versions of Schwarz's lemma.\/}
Conform. Geom. Dyn. {\bf 12} (2008), 133--151.
\bibitem {Chav} I. Chavel, \textsl{Riemannian Geometry: a
Modern Introduction\/},
Cambridge University Press, 2006.
\bibitem {Chav2} I. Chavel, \textsl{Eigenvalues in Riemannian
Geometry\/}, Academic Press, 1984.
\bibitem {CCL} C. H. Chen, S.-Y. Cheng, and K. H. Look.
\textsl{On the Schwarz lemma for complete K\"ahler manifolds.\/}
Scientia Sinica {\bf 22} (1979), 1238--1247.
\bibitem {Chiti1}G.\ Chiti, \textsl{An isoperimetric inequality for the
eigenfunctions of linear second order elliptic equations.\/}
Boll.\ Un.\ Mat.\ Ital.\ A {\bf 1} (1982), 145--151.
\bibitem{Chiti2}G.\ Chiti,
\textsl{A reverse H\"older inequality for the eigenfunctions of
linear second order elliptic operators.\/}
Z.\ Angew.\ Math.\ Phys.\ {\bf 33} (1982), 143--148.
\bibitem {Croke} C. Croke, \textsl{A sharp four dimensional
isoperimetric inequality.\/} Comment. Math. Helv. {\bf 59} (1984),
187--192.
\bibitem {Druet1} O. Druet, \textsl{Sharp local isoperimetric inequalities
involving the scalar curvature.\/} Proc. Amer. Math. Soc. {\bf 130} (2002),
2351--2361.
\bibitem {Druet2} O. Druet, \textsl{Asymptotic expansion of the Faber-Krahn
profile of a compact Riemannian manifold.\/} C. R. Math. Acad. Sci. Paris
{\bf 34} (2008), 1163--1167.
\bibitem {Fall} M. M. Fall, \textsl{Some local eigenvalue estimates involving
curvatures.\/} Calc. Var. PDE {\bf 36} (2009), 437--451.
\bibitem{Had} J. Hadamard. {\em
M\'emoire sur le probl\`eme d'analyse relatif
\`a l'\'equilibre des plaques
\'elastiques encastr\'ees.} M\'emoires pr\'esent\'es
par divers savants \`a l'Acad\'emie des Sciences.
{\bf 33} (1908).
\bibitem {HLP} G.\ H.\ Hardy, J.\ E.\ Littlewood, and G.\ P\'olya. \textsl{Some
simple inequalities satisfied by convex functions.} Messenger Math.
{\bf 58} (1929), 145--152.
\bibitem {Kar} H. Karcher, \textsl{Riemannian Comparison Constructions.\/}
in \textsl{Global Differential Geometry.\/} ed. by S. S. Chern, The Mathematical
Association of America (1989), 170--222.
\bibitem {Klein} B. Kleiner, \textsl{An isoperimetric comparison theorem.\/}
Invent. Math. {\bf 108} (1992), 37--47.
\bibitem {LM} R. Laugesen and C. Morpurgo, \textsl{Extremals for
eigenvalues of Laplacians under conformal mappings.\/} J. Funct. Anal.
{\bf 155} (1998), 64--108.
\bibitem {MJ} F. Morgan and D. Johnson, \textsl{Some sharp isoperimetric
theorems for Riemannian manifolds.\/} Indiana U. Math. J. {\bf 49} (2000),
1017--1041.
\bibitem{PR} L. Payne and M. Rayner, \textsl{
An isoperimetric inequality for the first
eigenfunction in the fixed membrane problem.\/}
Z. Angew. Math. Phys. {\bf 23} (1972), 13--15.
\bibitem {Tal} G. Talenti, \textsl{Elliptic equations and rearrangements.\/}
Ann.\ Scuola\ Norm.\ Sup.\ Pisa\ Cl.\ Sci.\ {\bf 3} (1976), 697--718.
\bibitem {Yau} S.-T. Yau. \textsl{A general Schwarz lemma for K\"ahler manifolds.\/}
Amer. J. Math. {\bf 100} (1978), 197--203.
\end {thebibliography}
\end{document}
Given the implications of geometry on the topology of a Riemannian manifold, it is natural to ask whether a given manifold admits a distinguished metric and to address the geometry of any such metric.
As a first attempt at seeking a distinguished metric, let's say that a Riemannian metric $g$ on a manifold $M$ has \emph{maximal symmetry} if for any other Riemannian metric $h$ on $M$ we have
$$ {\operatorname{Isom}}(M,h) \subset {\operatorname{Isom}}(M,\psi^*g), $$
for some diffeomorphism $\psi \in \operatorname{Diff}(M)$. Here $ {\operatorname{Isom}}(M,h)$ is the full isometry group of $(M,h)$. With this notion, the maximally symmetric metrics on the 2-sphere are, as expected, precisely the constant curvature metrics. This fact may be deduced from the works of H.~Poincar\'e and L.E.J.~Brouwer; however, we provide a brief justification here using modern techniques. Applying the (normalized) Ricci flow to any metric on $S^2$, one has that the Ricci flow converges to a metric of constant curvature \cite{Chow:TheRicciFlowOnThe2Sphere}. Furthermore, the Ricci flow preserves isometries \cite{Chen-Zhu:UniquenessRicciFlowCompleteNoncompact} and so the isometry group of the initial metric acts isometrically on the limit metric. As the round metric is unique up to scaling and diffeomorphism, we conclude that it has maximal symmetry.
Interestingly, this result does not hold for all spheres, as there exist finite groups acting on the sphere $S^n$ which cannot be realized as subgroups of $O(n+1)$. Thus, higher-dimensional spheres do not admit maximally symmetric metrics in this sense. This failure suggests that the notion of maximal symmetry above is too demanding; perhaps one needs to restrict attention to a subclass of all Riemannian metrics, for example.
In the setting of Lie groups, we define a notion of maximal symmetry among left-invariant metrics:
\begin{defin}\label{def.maxsym} A left-invariant Riemannian metric $g$ on a Lie group $G$ will be said to be \emph{maximally symmetric} if for any other left-invariant Riemannian metric $h$ on $G$ we have
$$ {\operatorname{Isom}}(G,h) \subset {\operatorname{Isom}}(G,\psi^*g), $$
for some $\psi\in \operatorname{Aut}(G)$.
\end{defin}
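The simplest illustration is the abelian group $\mathbf{R}^n$: every left-invariant metric is determined by an inner product on the Lie algebra, and $\operatorname{Aut}(\mathbf{R}^n)=GL(n,\mathbf{R})$ acts transitively on inner products. Thus any two left-invariant metrics $g$ and $h$ satisfy $h=\psi^*g$ for some $\psi\in\operatorname{Aut}(\mathbf{R}^n)$, and every left-invariant metric on $\mathbf{R}^n$ is maximally symmetric.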
This definition appears to be more robust. While it is still the case that not every Lie group admits a maximally symmetric left-invariant Riemannian metric, large classes of Lie groups do. Two questions naturally arise: (i) Which Lie groups admit maximally symmetric left-invariant metrics? (ii) Are left-invariant metrics that are geometrically distinguished (e.g., those with special curvature properties) maximally symmetric?
We briefly address question (i) in Section~\ref{sec: state of knowledge on max symm on Lie groups}. The primary focus of this paper is the Main Theorem above, which is an instance of question (ii).
\subsection*{Einstein motivations} The Main Theorem is partially motivated by the well-known Alekseevskii Conjecture and stability questions for the Ricci flow around Einstein metrics.
The Alekseevskii Conjecture asserts that every connected homogeneous Einstein manifold of negative Ricci curvature is diffeomorphic to $\mathbf{R}^n$. An only slightly stronger version of the conjecture is that every such Einstein manifold is a solvmanifold, i.e., it is isometric to a solvable Lie group with a left-invariant Riemannian metric. A major hurdle to proving the conjecture is to show that if a homogeneous space $G/K$ of a semisimple Lie group $G$ admits a left-invariant Einstein metric of negative Ricci curvature, then $K$ must be a maximal compact subgroup of $G$ (in which case $G/K$ is a symmetric space and is isometric to a solvmanifold, since an Iwasawa subgroup of $G$ acts simply transitively). Recently, P. Petersen and the second author \cite[Corollary 1.3]{JP:TowardsTheAlekseevskiiConjecture} showed that if $G$ is semisimple of noncompact type and $G/K$ admits a left-invariant Einstein metric $g$, then the identity component ${\operatorname{Isom}}_0(G/K,g)$ of the full isometry group consists only of $G$ itself. Thus any counterexample to the Alekseevskii Conjecture of this form would in a sense be \emph{minimally} symmetric. In particular, it would have minimal possible isotropy algebra among all possible left-invariant metrics on $G/K$. In contrast, the Main Theorem asserts that Einstein metrics of negative Ricci curvature on solvable Lie groups have maximal possible isotropy.
A second motivation for the Main Theorem above comes from studying the Ricci flow. It is an open question whether Einstein metrics on solvable Lie groups are stable under the Ricci flow \cite{Arroyo:TheRicciFlowInAClassOfSolvmanifolds,
JPW:LinearStabilityOfAlgebraicRicciSolitons,
WilliamsWu:DynamicalStabilityOfAlgebraicRicciSolitons}. If such spaces were stable, then one would be able to deduce that (locally) their isometry groups are maximal.
\subsection*{Outline of the proof of Main Theorem~\ref{mt}} We begin with an arbitrary left-invariant metric $h$ on $S$ and let $G$ be the full isometry group and $L$ the isotropy subgroup. The theorem is equivalent to the statement that $G/L$ admits a $G$-invariant Einstein metric, which is in turn equivalent to the condition that \emph{some} simply transitive solvable subgroup of $G$ admits a left-invariant Einstein metric whose isometry group includes $G$.
At the outset, we replace the original solvable group by one (that we will continue to denote by $S$ in this introductory
outline) in so-called ``standard position'' in $G$. Using results of \cite{Heber,LauretStandard} concerning the structure of Einstein solvmanifolds along with results of \cite{GordonWilson:IsomGrpsOfRiemSolv} concerning isometry groups of solvmanifolds, we show that $S$ can be written as a semi-direct product $S=S_1\ltimes S_2$, where $S_1$ is an Iwasawa subgroup of a semisimple Levi factor of $G$ and $S_2=S\cap \operatorname{Rad}(G)$. Moreover, this new solvable group $S$ admits an Einstein metric isometric to the one on the original solvable group.
The key step in proving that this Einstein metric is $G$-invariant is to prove the following lemma, which is perhaps of interest in its own right:
\begin{lemma}\label{lem} Suppose that $S$ is a semi-direct product $S=S_1\ltimes S_2$ of solvable Lie groups satisfying the following hypotheses:
\begin{itemize}
\item $S_1$ isomorphically embeds as an Iwasawa subgroup in a semisimple Lie group $G_1$.
\item The adjoint action of $S_1$ on the Lie algebra $\operatorname{Lie}(S_2) $ extends to a representation of $G_1$ on $\operatorname{Lie}(S_2)$.
\end{itemize}
Then, $S$ admits a left-invariant Einstein metric of negative Ricci curvature if and only if $S_2$ does. In this case, the Einstein metric $g$ on $S$ may be chosen so that its restriction to $S_2$ is Einstein, its restriction to $S_1$ is symmetric, and the Lie algebras of $S_1$ and $S_2$ are orthogonal.
\end{lemma}
The ``if'' statement and the final statement are elementary. On the other hand, the crucial ``only if'' statement exploits the deep relationship between left-invariant Einstein metrics of negative Ricci curvature on solvable Lie groups and geometric invariant theory. This relationship first appeared in the work of Heber \cite{Heber} and was subsequently refined by Lauret \cite{LauretStandard} and Nikolayevsky \cite{Nikolayevsky:EinsteinSolvmanifoldsandPreEinsteinDerivation}. The connection with geometric invariant theory arises as follows: The question of existence of an Einstein metric on a solvable group $S$ reduces to the question of existence of a nilsoliton metric on the nilradical $N$ of $S$. Denoting the Lie bracket of Lie$(N)$ by $\mu$, we may identify Lie$(N)$ with $( \mathbb R^n , \mu )$ and view $\mu$ as an element of $V = \wedge^2 (\mathbb R^n)^* \otimes \mathbb R^n$, the space of brackets on $\mathbb R^n$. The group $GL_n(\mathbb R)$ acts on $V$ and there exists a naturally defined subgroup $G_\phi \subset GL_n(\mathbb R)$ with the following property:
$$\mbox{The orbit } G_\phi \cdot \mu \subset V \mbox{ is closed if and only if } N \mbox{ admits a nilsoliton metric.} $$
In the notation of Lemma~\ref{lem}, the nilradical $N_2$ of $S_2$ is a normal subgroup of the nilradical $N$ of $S$. Denote by $\mu_2$ the Lie bracket of Lie$(N_2) \subset \operatorname{Lie}(N)$, and denote the associated group by $G_{\phi_2}$. To prove the ``only if'' statement, assume that $S$ admits a left-invariant Einstein metric of negative Ricci curvature. We show that $G_{\phi_2} \cdot \mu_2$ inherits the topological property of being closed from the analogous property of the orbit $G_\phi\cdot\mu$. In this way, $N_2$ obtains a nilsoliton metric and then $S_2$ obtains an Einstein metric.
The completion of the proof of the Main Theorem, given the Lemma, proceeds as follows: Since we have already established the existence of an Einstein metric on $S$, the forward direction of the lemma gives us an Einstein metric on $S_2$ and a nilsoliton metric on the nilradical $N_2$ of $S_2$. Using the fact that nilsoliton metrics on nilpotent Lie groups are maximally symmetric (see Section \ref{sec: state of knowledge on max symm on Lie groups}) and that $S_2$ and $N_2$ are normal in $G$, we find that some nilsoliton metric on $N_2$ -- and the resulting Einstein metric on $S_2$ -- are $\operatorname{Ad}(L)$-invariant and further $\operatorname{Ad}(G_1)$ acts by self-adjoint transformations on Lie$(N_2)$. It is then straightforward to show that the extension of this metric on $S_2$ to an Einstein metric on $S$, given in the easier direction of the Lemma, is $G$-invariant. (This Einstein metric on $S$ may differ from the original one by an automorphism).
\subsection*{Organization} The paper is organized as follows: In Section \ref{sec: state of knowledge on max symm on Lie groups} we address maximal symmetry metrics on Lie groups. The structure theory of Einstein solvmanifolds and their automorphism groups is reviewed in Section~\ref{einstauts}. Section~\ref{main} contains the proof of the Main Theorem~\ref{mt} modulo Lemma~\ref{lem}. The proof of the lemma is presented in Section \ref{sec: proof of key lemma} after first addressing the prerequisite Geometric Invariant Theory in Section \ref{sec: technical lemmas on GIT}. The question of Einstein extensions is further addressed in Section~\ref{exts}.
\section{Existence of maximally symmetric left-invariant metrics}\label{sec: state of knowledge on max symm on Lie groups}
In this section we address the question of which Lie groups admit maximally symmetric left-invariant Riemannian metrics, as defined in Definition~\ref{def.maxsym}.
\begin{prop}\label{prop.normal} Suppose $G$ satisfies:
\begin{enumerate}
\item $\operatorname{Aut}(G)$ has only finitely many components. (This condition is always satisfied if $G$ is simply-connected.)
\item For every left-invariant metric $h$ on $G$, ${\operatorname{Isom}}(G,h)< G\rtimes \operatorname{Aut}(G)$, where ${\operatorname{Isom}}(G,h)$ is the full isometry group of $h$.
\end{enumerate}
Then $G$ admits maximally symmetric left-invariant Riemannian metrics.
\end{prop}
The second hypothesis is equivalent to the condition that $G$ (more precisely, the group of left-translations of $G$) is a normal subgroup of ${\operatorname{Isom}}(G,h)$ for every choice of $h$.
\begin{proof}
Let $K<\operatorname{Aut}(G)$ be a maximal compact subgroup of $\operatorname{Aut}(G)$, and let $g$ be a $K$-invariant, left-invariant Riemannian metric. For $h$ any left-invariant metric, the second hypothesis implies that ${\operatorname{Isom}}(G,h)= G\rtimes L$ for some compact subgroup $L<\operatorname{Aut}(G)$. The first hypothesis guarantees that all maximal compact subgroups of $\operatorname{Aut}(G)$ are conjugate. Thus there exists $\tau\in\operatorname{Aut}(G)$ such that $\tau L\tau^{-1}<K$, and we have ${\operatorname{Isom}}(G,h)<{\operatorname{Isom}}(G,\tau^*g)$.
\end{proof}
\begin{cor}\label{cor.pos} Let $G$ be a connected Lie group satisfying any one of the following conditions:
\begin{itemize}
\item $G$ is compact and simple.
\item $G$ is semisimple of noncompact type.
\item $G$ is a simply-connected, completely solvable unimodular Lie group. (E.g., all simply-connected nilpotent Lie groups satisfy this condition.)
\end{itemize}
Then $G$ admits maximally symmetric left-invariant Riemannian metrics.
\end{cor}
\begin{proof} We apply Proposition~\ref{prop.normal}. The first hypothesis of the proposition is trivially satisfied in all three cases. The fact that each of these types of Lie groups satisfies the second hypothesis is proven in \cite{Ochiai-Takahashi}, \cite{G79}, and \cite{GordonWilson:IsomGrpsOfRiemSolv}, respectively. (See also \cite{Wilson} for the special case of nilpotent Lie groups.)
\end{proof}
In some cases included in Corollary~\ref{cor.pos}, one can identify a maximally symmetric left-invariant metric. For compact simple Lie groups, the maximally symmetric left-invariant metrics are precisely the bi-invariant metrics. For semisimple Lie groups of non-compact type, the first author has shown that the natural metric coming from the Killing form is maximally symmetric, see \cite{G79}. The second author proved the following:
\begin{prop}\cite{Jablo:ConceringExistenceOfEinstein}\label{completely_solv} If a completely solvable unimodular Lie group admits a left-invariant Ricci soliton metric $g$, then $g$ is maximally symmetric.
\end{prop}
While Corollary~\ref{cor.pos} gives large families of Lie groups that admit maximally symmetric left-invariant Riemannian metrics, the existence of such metrics is far from universal.
\begin{prop}\label{cmpt} There exist compact semisimple Lie groups $G$ that do not admit a maximally symmetric left-invariant Riemannian metric.
\end{prop}
\begin{proof}
Suppose that $g_0$ is a maximally symmetric left-invariant Riemannian metric on $G$. We first show that $g_0$ must be bi-invariant. Let $g$ be a bi-invariant metric on $G$. Then there exists $\tau\in\operatorname{Aut}(G)$ such that
$$G\rtimes\operatorname{Aut}(G)={\operatorname{Isom}}(G,g) < {\operatorname{Isom}}(G,\tau^*g_0).$$
In particular, $\tau^*g_0$ is invariant under all inner automorphisms and thus under right, as well as left translations. I.e., $\tau^*g_0$ is bi-invariant. But then $\tau^*g_0=g_0$, so $g_0$ is bi-invariant.
Definition~\ref{def.maxsym}, together with the fact that $g_0$ is invariant under $\operatorname{Aut}(G)$, thus implies that for every left-invariant metric $h$, we have ${\operatorname{Isom}}(G,h)<{\operatorname{Isom}}(G,g_0)=G\rtimes \operatorname{Aut}(G)$ and hence $G$ is normal in ${\operatorname{Isom}}(G,h)$. However, Ozeki \cite{Ozeki} proved that there exist left-invariant Riemannian metrics $h$ on some compact semisimple Lie groups $G$ such that the group of left-translations of $G$ is not a normal subgroup of ${\operatorname{Isom}}(G,h)$.
\end{proof}
\begin{remark} In the same article \cite{Ozeki} cited in the proof of Proposition~\ref{cmpt}, Ozeki showed that for every left-invariant metric $h$ on a compact semisimple Lie group $G$, there exists a normal subgroup $G'$ of ${\operatorname{Isom}}(G,h)$ that is isomorphic to $G$. It is thus easy to see that bi-invariant metrics $g$ on $G$ satisfy the following weaker notion of maximal symmetry: For every left-invariant Riemannian metric $h$ on $G$, the isometry group of $h$ is isomorphic to a subgroup of ${\operatorname{Isom}}(G,g)$.
\end{remark}
\begin{example}\label{ex.sl} Let $S$ be the connected, simply-connected solvable Lie group given by the direct product $S=S_1\times \mathbf{R}$ where $S_1$ is the Iwasawa subgroup of $SL(2,\mathbf{R})$, i.e., $S_1$ is the unique connected, simply-connected, non-abelian, two-dimensional solvable Lie group. We show that $S$ cannot admit a maximally symmetric left-invariant Riemannian metric. First note that for any left-invariant metric on a three-dimensional Lie group, the full isometry group must have dimension 3, 4 or 6, since the isotropy algebra must be isomorphic to a subalgebra of $\mathfrak{so}(3)$. Moreover, every three-dimensional manifold with a six-dimensional isometry group must have constant sectional curvature. Since $S$ does not admit a left-invariant metric of constant curvature, the isometry group of any left-invariant metric on $S$ must have dimension at most four. We will exhibit a pair of left-invariant metrics $h_1$ and $h_2$ on $S$ such that the identity components ${\operatorname{Isom}}_0(S,h_1)$ and ${\operatorname{Isom}}_0(S,h_2)$ are non-isomorphic four-dimensional Lie groups. If $S$ admitted a maximally symmetric left-invariant Riemannian metric $g$, then ${\operatorname{Isom}}(S,g)$ would have to contain isomorphic copies of both ${\operatorname{Isom}}_0(S,h_1)$ and ${\operatorname{Isom}}_0(S,h_2)$. This is impossible since ${\operatorname{Isom}}(S,g)$ can itself be at most four-dimensional.
We construct the two metrics.
We define $h_1$ to be the direct product of the hyperbolic metric on $S_1$ and a Euclidean metric on $\mathbf{R}$. Then $${\operatorname{Isom}}_0(S,h_1)= PSL(2,\mathbf{R})\times \mathbf{R}.$$ To define $h_2$, first consider a left-invariant metric $h$ on the universal cover $\widetilde{SL}(2,\mathbf{R})$ of $SL(2,\mathbf{R})$, defined by an $\operatorname{Ad}(K)$-invariant inner product on the Lie algebra $\mathfrak{sl}(2,\mathbf{R})$, where $K=SO(2,\mathbf{R})$. The identity component of the isometry group of $h$ is given by
$${\operatorname{Isom}}_0(\widetilde{SL}(2,\mathbf{R}), h)=(\widetilde{SL}(2,\mathbf{R})\times \tilde{K})/D.$$
(See \cite{G79}.)
Here $\tilde{K}\simeq \mathbf{R}$ is the connected subgroup of $\widetilde{SL}(2,\mathbf{R})$ with Lie algebra $\mathfrak{so}(2,\mathbf{R})$. The action of $(a,b)\in \widetilde{SL}(2,\mathbf{R})\times \tilde{K}$ on $c\in \widetilde{SL}(2,\mathbf{R})$ is given by $c\mapsto acb^{-1}$. The center of $\widetilde{SL}(2,\mathbf{R})$ is isomorphic to $\mathbf{Z}$ and is contained in $\tilde{K}$. The subgroup $D$ of $\widetilde{SL}(2,\mathbf{R})\times \tilde{K}$ is the image of the center under the embedding $z\mapsto (z, z)\in \tilde{K}\times \tilde{K}<\widetilde{SL}(2,\mathbf{R})\times \tilde{K}$. Viewing the Iwasawa group $S_1$ as a subgroup of $\widetilde{SL}(2,\mathbf{R})$, we see that $S_1\times \tilde{K}$ is a simply-transitive subgroup of ${\operatorname{Isom}}_0(\widetilde{SL}(2,\mathbf{R}), h)$ isomorphic to $S=S_1\times \mathbf{R}$. Thus the metric $h$ defines a left-invariant Riemannian metric $h_2$ on $S$ with $${\operatorname{Isom}}_0(S,h_2)\simeq(\widetilde{SL}(2,\mathbf{R})\times \tilde{K})/D.$$ This completes the proof that $S$ does not admit a maximally symmetric left-invariant Riemannian metric.
\end{example}
\begin{remark}
The metric $h_1$ in Example~\ref{ex.sl} is a solvsoliton, i.e., a left-invariant Ricci soliton on the solvable Lie group. Thus the Main Theorem~\ref{mt} fails when Einstein is replaced by Ricci soliton.
\end{remark}
\section{Background}\label{einstauts}
\begin{notconv} In the remainder of this article, we will always use the corresponding fraktur letter, with any appropriate subscripts or superscripts, to denote the Lie algebra of a given Lie group. E.g., if a Lie group is named $G_1$, its Lie algebra will be denoted $\mathfrak{g}_1$.
\end{notconv}
In this preliminary section, we first review existence and structural results for Einstein solvmanifolds of negative Ricci curvature. We then discuss a technique of Y. Nikolayevsky for determining whether a solvable Lie group admits such a metric. Finally we review the structure theory of isometry groups of arbitrary solvmanifolds.
\subsection{Solvable Lie groups admitting Einstein metrics of negative Ricci curvature}\label{subsec.einst}
We restrict our attention to non-flat homogeneous Einstein metrics. Any solvable Lie group admitting such a non-flat Einstein metric is necessarily non-unimodular \cite{DottiMiatello:TransitveGroupActionsAndRicciCurvatureProperties}.
\begin{defin}\label{std}\text{}
\begin{enumerate}
\item A solvable Lie group $S$ will be said to be of \emph{Einstein type} if it admits a left-invariant Einstein metric of negative Ricci curvature. We will also say that its Lie algebra $\mathfrak{s}$ is of Einstein type. We will say that the nilradical $N$ of $S$ (or the nilradical $\mathfrak{n}$ of $\mathfrak{s}$) is an Einstein nilradical.
\item A non-unimodular, metric solvable Lie algebra $(\mathfrak{s}, \la\,,\,\ra)$ is said to be \emph{standard} if it can be written as an orthogonal direct sum $\mathfrak{s}=\mathfrak{a}+\mathfrak{n}$ where $\mathfrak{a}$ is abelian and $\mathfrak{n}=[\mathfrak{s},\mathfrak{s}]$ is the nilradical.
\item A standard metric solvable Lie algebra is said to be of \emph{Iwasawa} type if $\operatorname{ad}(A)|_{\mathfrak{n}}$ is symmetric for all $A\in \mathfrak{a}$ and non-zero for all $0\neq A\in \mathfrak{a}$, and if there exists some $H\in\mathfrak{a}$ such that $\operatorname{ad}(H)|_{\mathfrak{n}}$ is positive-definite. We will say a solvable Lie algebra $\mathfrak{s}$ is of Iwasawa type if it admits an inner product satisfying these conditions.
\item A solvable Lie group (with a left-invariant Riemannian metric) is said to be standard, respectively of Iwasawa type, if the associated metric Lie algebra is standard, respectively of Iwasawa type.
\end{enumerate}
\end{defin}
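To keep these definitions concrete, consider $\mathfrak{s}=\operatorname{span}\{A,X\}$ with $[A,X]=X$, endowed with the inner product making $A,X$ orthonormal. Here $\mathfrak{a}=\mathbf{R}A$, $\mathfrak{n}=[\mathfrak{s},\mathfrak{s}]=\mathbf{R}X$, and $\operatorname{ad}(A)|_{\mathfrak{n}}$ is the identity, which is symmetric and positive-definite, so this metric Lie algebra is standard and of Iwasawa type; the associated simply-connected Lie group with this left-invariant metric is the real hyperbolic plane, and so $\mathfrak{s}$ is of Einstein type.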
In 1998, J. Heber \cite{Heber} extensively addressed the structure of standard Einstein solvmanifolds. Heber's work resulted in great interest in the question of whether all non-flat Einstein solvmanifolds are standard. A decade later J. Lauret \cite{LauretStandard} answered this question in the affirmative. The three propositions in this and the next subsection summarize the parts of the extensive work of Heber, Lauret and Y. Nikolayevsky that provide the background needed in the later sections of this paper.
\begin{notarem}\label{nota.ss}\text{}
\begin{enumerate}
\item Given a decomposition $\mathfrak{s}=\mathfrak{a}+\mathfrak{n}$ of a solvable Lie algebra $\mathfrak{s}$, where $\mathfrak{n}=\operatorname{Nilrad}(\mathfrak{s})$ and $\mathfrak{a}$ is an abelian complement, we will denote by $\operatorname{Der}(\mathfrak{s})^{\mathfrak{a}}$ the space of derivations commuting with $\operatorname{ad}_\mathfrak{s}(\mathfrak{a})$, i.e., the derivations that vanish on $\mathfrak{a}$. Via the isomorphism $\alpha\mapsto \alpha|_{\mathfrak{n}}$, we may identify $\operatorname{Der}(\mathfrak{s})^\mathfrak{a}$ with the space $\operatorname{Der}(\mathfrak{n})^\mathfrak{a}$ of all derivations of $\mathfrak{n}$ commuting with $\operatorname{ad}(\mathfrak{a})|_\mathfrak{n}$.
\item If $T$ is a semisimple endomorphism of a finite-dimensional vector space $V$, then $T$ can be uniquely decomposed as $T=T^\mathbf{R} + T^{i\mathbf{R}}$ where $T^\mathbf{R}$, respectively $T^{i\mathbf{R}}$, is a semisimple endomorphism with all eigenvalues real, respectively, purely imaginary, and where $T^\mathbf{R}$ and $T^{i\mathbf{R}}$ commute with $T$ (hence with each other). Moreover, $T^\mathbf{R}$ and $T^{i\mathbf{R}}$ commute with any other endomorphism that commutes with $T$. We will refer to $T^\mathbf{R}$ and $T^{i\mathbf{R}}$ as the \emph{real and imaginary parts} of $T$. If $D$ is a semisimple derivation of a Lie algebra $\mathfrak{g}$, then $D^\mathbf{R}$ and $D^{i\mathbf{R}}$ are also derivations of $\mathfrak{g}$.
\end{enumerate}
\end{notarem}
\begin{prop}[Heber \cite{Heber}]\label{prop.heb}
If a solvable Lie group $S$ admits a standard Einstein metric $g$, then \emph{every} Einstein metric on $S$ is isometric to $g$ up to scaling.
Let $\mathfrak{s}=\mathfrak{a}+\mathfrak{n} $ be a decomposition of $\mathfrak{s}$ as in Definition~\ref{std}. Then in the notation of~\ref{nota.ss}:
\begin{enumerate}
\item
$\operatorname{Der}(\mathfrak{s})=\operatorname{ad}_\mathfrak{s}(\mathfrak{n}) + \operatorname{Der}(\mathfrak{s})^{\mathfrak{a}}.$
Moreover, $\operatorname{Der}(\mathfrak{s})^{\mathfrak{a}}$ is reductive and decomposes as $\mathfrak{k}+\mathfrak{p}$ where the elements of $\mathfrak{k}$ are skew-symmetric and the elements of $\mathfrak{p}$ are symmetric relative to $g$. (We will often identify elements of $\operatorname{Der}(\mathfrak{s})^{\mathfrak{a}}$ with their restrictions to $\mathfrak{n}$.)
\item For $0\neq A\in\mathfrak{a}$, we have $0\neq \operatorname{ad}(A)^\mathbf{R}\in\mathfrak{p}$ and $\operatorname{ad}(A)^{i\mathbf{R}}\in \mathfrak{k}$. Let $\mathfrak{a}'=\{\operatorname{ad}(A)^\mathbf{R}:A\in \mathfrak{a}\}$ and let $\mathfrak{s}'$ be the semi-direct sum of $\mathfrak{a}'$ and $\mathfrak{n}$. Then $\mathfrak{s}'$ is of Iwasawa type and the associated simply-connected solvable Lie group acts simply-transitively on $S$.
\item Let $H$ be the unique element of $\mathfrak{a}$ such that $g(H, A)=\operatorname{trace}(\operatorname{ad}(A))$ for all $A\in\mathfrak{a}$. Then there exists $\lambda\in\mathbf{R}^+$ such that the eigenvalues of $\lambda\, \operatorname{ad}(H)^\mathbf{R}|_\mathfrak{n}$ are positive integers. Thus $\mathfrak{s}'$ is of Iwasawa type. The derivation $\operatorname{ad}(H)|_{\mathfrak{n}}^\mathbf{R}$ is sometimes called the \emph{Einstein derivation}.
\item Let $\mathfrak{c}$ be any abelian subspace of $\mathfrak{p}$ containing the Einstein derivation. Then the semi-direct product $\mathfrak{c}\ltimes \mathfrak{n}$ of $\mathfrak{c}$ with $\mathfrak{n}$ admits an Einstein inner product.
\end{enumerate}
\end{prop}
\begin{defin} A left-invariant Riemannian metric $g$ on a nilpotent Lie group $N$ is called a \emph{nilsoliton} if it is a Ricci soliton. This condition is equivalent to $Ric_g =c Id + D$ for some constant $c$ and some $D\in \operatorname{Der}(\mathfrak{n})$. (Here $Ric_g$ is the Ricci operator.)
\end{defin}
\begin {remark} In the definition above, the condition $Ric_g = c Id +D$ always produces a left-invariant Ricci soliton on a Lie group. In the case of nilpotent groups, all Ricci solitons satisfy this condition, but this is not true more generally for solvable Lie groups. If a left-invariant metric on a solvable Lie group satisfies $Ric_g = c Id +D$, it is called a solvsoliton.
It is known that a Ricci soliton on a solvable Lie group is isometric to a solvsoliton (on a possibly different solvable Lie group) \cite{Jablo:HomogeneousRicciSolitons}. To go between these two different solvable Lie structures, one uses the process of `modifications', see Definition \ref{def.stdpos}.
\end{remark}
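For instance, consider the three-dimensional Heisenberg group with the left-invariant metric making a basis $\{X,Y,Z\}$ of its Lie algebra $\mathfrak{n}$, with $[X,Y]=Z$, orthonormal. The standard formula for the Ricci operator of a nilmanifold gives $Ric_g=\operatorname{diag}(-\frac{1}{2},-\frac{1}{2},\frac{1}{2})$ in this basis, and one checks directly that
$$Ric_g = -\frac{3}{2}\, Id + D \quad \mbox{ with } D=\operatorname{diag}(1,1,2)\in\operatorname{Der}(\mathfrak{n}),$$
so this metric is a nilsoliton.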
\begin{prop}[Lauret \cite{LauretStandard,LauretNilsoliton}]\label{prop.Lau}
$\text{}$
\begin{enumerate}
\item Every Einstein solvmanifold of negative Ricci curvature is standard.
\item Let $N$ be a simply-connected nilpotent Lie group. Then $N$ is an Einstein nilradical if and only if $N$ admits a nilsoliton metric. If $(S,g)$ is any Einstein solvmanifold of negative Ricci curvature with nilradical $N$, then the restriction of $g$ to $N$ is a nilsoliton.
\item A nilpotent Lie group admits at most one nilsoliton metric, up to automorphism and scaling.
\end{enumerate}
\end{prop}
\begin{remark}\label{rem.maxred} Let $\mathfrak{s}$ be a solvable Lie algebra of Einstein type. By Proposition~\ref{prop.Lau}, every Einstein inner product on $\mathfrak{s}$ is standard. Given such an inner product, write $\mathfrak{s}=\mathfrak{a}+\mathfrak{n}$, where $\mathfrak{a}=\mathfrak{n}^\perp$. By Proposition~\ref{prop.heb}, $\operatorname{ad}_\mathfrak{s}(\mathfrak{a})$ is a fully reducible subalgebra of $\operatorname{ad}(\mathfrak{s})$. Moreover, $[\mathfrak{a},\mathfrak{n}]=\mathfrak{n}$, so $\mathfrak{s}$ has trivial center and $\operatorname{ad}_\mathfrak{s}(X)$ is a non-trivial nilpotent operator for every $X\in\mathfrak{n}$. Thus $\operatorname{ad}_\mathfrak{s}(\mathfrak a)$ is a maximal fully reducible subalgebra of $\operatorname{ad}(\mathfrak{s})$. By the work of Mostow \cite[Theorem 4.1]{Mostow:FullyReducibleSubgrpsOfAlgGrps}, all maximal fully reducible subalgebras of linear Lie algebras, in particular, of $\operatorname{ad}(\mathfrak{s})$, are conjugate by an inner automorphism. Since $\mathfrak s$ has no center, $\operatorname{ad}: \mathfrak s \to \mathfrak{gl}(\mathfrak s)$ is an isomorphism onto its image. Thus, the decomposition $\mathfrak{s}=\mathfrak{a}+\mathfrak{n}$ is unique up to conjugacy by an element of the nilradical and every maximal fully $\operatorname{ad}$-reducible subalgebra of $\mathfrak{s}$ is conjugate to $\mathfrak{a}$.
If $\mathfrak{a}$ is any maximal fully $\operatorname{ad}$-reducible subalgebra of $\mathfrak{s}$, we will refer to $\mathfrak{s}=\mathfrak{a}+\mathfrak{n}$ as a \emph{standard decomposition} of $\mathfrak{s}$. By the uniqueness statement above, given any standard decomposition $\mathfrak{s}=\mathfrak{a}+\mathfrak{n}$ of a solvable Lie algebra of Einstein type, there exists an Einstein metric for which $\mathfrak{a}\perp\mathfrak{n}$.
\end{remark}
\subsection{Nikolayevsky's technique}\label{subsec.nik}
\begin{defin}\label{def.pre-Einst} (See \cite{Nikolayevsky:EinsteinSolvmanifoldsandPreEinsteinDerivation}.)
A derivation $\varphi \in \operatorname{Der}(\mathfrak g)$ of a Lie algebra $\mathfrak{g}$ is a \textit{pre-Einstein derivation} if it is semisimple as an element of $\operatorname{End}(\mathfrak g)$ with all eigenvalues real, and satisfies
\begin{equation}\label{eqn:pre-Einstein deriv} \operatorname{trace}(\varphi A) = \operatorname{trace}(A) \quad \mbox{ for all } A\in \operatorname{Der}(\mathfrak{g}).
\end{equation}
\end{defin}
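To illustrate the definition, let $\mathfrak{g}$ be the three-dimensional Heisenberg algebra with basis $\{X,Y,Z\}$ and bracket $[X,Y]=Z$. Every derivation $A$ satisfies $AZ=(a_{11}+a_{22})Z$, where $a_{11},a_{22}$ are the first two diagonal entries of $A$, so $\operatorname{trace}(A)=2(a_{11}+a_{22})$. The map $\varphi=\operatorname{diag}(\alpha,\alpha,2\alpha)$ is a derivation with $\operatorname{trace}(\varphi A)=3\alpha(a_{11}+a_{22})$, and \eqref{eqn:pre-Einstein deriv} forces $\alpha=\frac{2}{3}$. Thus $\varphi=\frac{1}{3}\operatorname{diag}(2,2,4)$ is the pre-Einstein derivation of $\mathfrak{g}$, proportional to the nilsoliton derivation $\operatorname{diag}(1,1,2)$ found in Section~\ref{subsec.einst}.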
\begin{prop}[Nikolayevsky \cite{Nikolayevsky:EinsteinSolvmanifoldsandPreEinsteinDerivation}]\label{prop.nik}
$\text{}$
\begin{enumerate}
\item Every Lie algebra admits a pre-Einstein derivation $\varphi$, unique up to automorphism. The eigenvalues of $\varphi$ are rational.
\item If $N$ is an Einstein nilradical, then the Einstein derivation (see Proposition~\ref{prop.heb}(iii)) of every Einstein solvmanifold with nilradical $N$ is a positive multiple of a pre-Einstein derivation $\varphi$ of $\mathfrak{n}$.
\end{enumerate}
\end{prop}
\begin{prop}[Nikolayevsky \cite{Nikolayevsky:EinsteinSolvmanifoldsandPreEinsteinDerivation}]\label{prop.nik2}
Let $\mathfrak{n}$ be a nilpotent Lie algebra of dimension $n$. View the bracket $\mu:\mathfrak{n}\times\mathfrak{n}\to\mathfrak{n}$ as an element of $V=\wedge^2(\mathbb R^n)^*\otimes \mathbb R^n$. The group $GL_n(
\mathbf{R})$ acts on $V$ via $A.\nu(x,y)=A\nu(A^{-1}x,A^{-1}y)$ for $A\in GL_n(\mathbf{R})$, $\nu\in V$ and $x,y\in\mathbf{R}^n$, and this action gives rise to an action of the Lie algebra $\mathfrak{gl}_n(\mathbf{R})$ on $V$.
Fix a choice of pre-Einstein derivation $\varphi$ of $\mathfrak{n}$. Let $t: \mathfrak{gl}_n(\mathbf{R})\to \mathbf{R}$ be given by $t(A)=\operatorname{trace}(A\varphi)$ and let
$$\mathfrak g_\varphi = \mathfrak{sl}_n(\mathbb R) \cap \mathfrak z(\varphi) \cap \operatorname{Ker} t$$
where $\mathfrak z(\varphi)$ is the centralizer of $\varphi$ in $\mathfrak{gl}_n(\mathbf{R})$. Let $G_\varphi$ be the connected subgroup of $SL_n(\mathbf{R})$ with Lie algebra $\mathfrak g_\varphi$. The group $G_\varphi$ is fully reducible, and
the simply-connected Lie group $N$ with Lie algebra $\mathfrak{n}$ admits a nilsoliton metric if and only if the orbit $G_\varphi.\mu$ is closed in $V$.
\end{prop}
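Continuing the Heisenberg example, for $\varphi=\frac{1}{3}\operatorname{diag}(2,2,4)$ a direct computation shows that $\mathfrak g_\varphi$ consists of the block matrices $\operatorname{diag}(B,0)$ with $B\in\mathfrak{sl}_2(\mathbb R)$, so $G_\varphi\cong SL_2(\mathbf{R})$ acts on $\operatorname{span}\{X,Y\}$ and fixes $Z$. Writing the bracket as $\mu(x,y)=\omega(x,y)Z$, with $\omega$ the standard area form on $\operatorname{span}\{X,Y\}$, the $SL_2(\mathbf{R})$-invariance of $\omega$ gives $G_\varphi.\mu=\{\mu\}$. The orbit is a single point, hence closed, recovering the nilsoliton metric exhibited in Section~\ref{subsec.einst}.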
\begin{notarem}\label{note.nik}\text{}
\begin{enumerate}
\item Observe that the Lie algebra of the stabilizer of $\mu$ in $G_\varphi$ is precisely $\operatorname{Der}(\mathfrak{n})\cap \mathfrak{sl}(n,\mathbf{R})$.
\item As discussed in \cite{Nikolayevsky:EinsteinSolvmanifoldsandPreEinsteinDerivation}, the group $G_\varphi$ is pre-algebraic; it is the identity component of the algebraic, fully reducible subgroup $\hat G_\varphi < SL(n,\mathbf{R})$ given as follows: Let $V_1,\dots, V_k$ be the eigenspaces of $\varphi$ and $\lambda_1,\dots, \lambda_k$ the corresponding eigenvalues. By Proposition~\ref{prop.nik} the $\lambda_j$ are positive rational numbers. Let $m$ be the least positive integer for which all the $a_j:=m\lambda_j$ are integers. The group $\hat G_\varphi $ is defined by
$$\hat G_\varphi= \{(\alpha_1,\dots, \alpha_k)\in\prod_{j=1}^k\,GL(V_j)<GL(\mathfrak{n}): \prod_{j=1}^k\,\det(\alpha_j)= \prod_{j=1}^k\,\det(\alpha_j)^{a_j}=1\}.$$
\item The subgroup $G_\varphi$ of $SL(n,\mathbf{R})$ is self-adjoint with respect to any inner product for which $\varphi$ is symmetric.
\item We will sometimes abuse language and identify the bilinear map $\mu$ in Proposition~\ref{prop.nik2} with the Lie algebra $\mathfrak n$. For $X\in\mathbf{R}^n$, we will write $\operatorname{ad}_\mu(X):\mathbf{R}^n\to\mathbf{R}^n$ to mean the linear mapping of $\mathbf{R}^n$ associated with the bracket $\mu$.
\end{enumerate}
\end{notarem}
We conclude this section with a corollary of Propositions~\ref{prop.heb}, \ref{prop.Lau}, and \ref{prop.nik}.
\begin{cor}\label{cor.nik}
Let $N$ be a simply-connected nilpotent Lie group that admits a nilsoliton metric, and let $\varphi$ be the pre-Einstein derivation of $\mathfrak{n}$. Let $\mathfrak{a}$ be any abelian subalgebra of $\operatorname{Der}(\mathfrak{n})$, all of whose non-zero elements are semisimple with non-zero real part, and suppose that the pre-Einstein derivation $\varphi$ is given by $A^\mathbf{R}$ for some $A\in \mathfrak{a}$. Then $\mathfrak{a}\ltimes\mathfrak{n}$ is a solvable Lie algebra of Einstein type.
\end{cor}
Since the corollary requires a slight strengthening of Proposition~\ref{prop.heb}(iv), we include the proof here after first recalling results of Mostow concerning Cartan decompositions. For later use, we state Mostow's results in greater generality than needed for the proof of Corollary~\ref{cor.nik}.
\begin{notarem}\label{cd} In the language of Mostow \cite{Mostow:SelfAdjointGroups}, an ``fcc'' group is a Lie group with finitely many connected components. If $\hat{H}$ is an fcc group, every maximal compact subgroup $\hat{K}$ of $\hat{H}$ satisfies $\hat{H}=H\hat{K}$ where $H$ is the identity component of $\hat{H}$. In particular, $K:=\hat{K}\cap H$ is a maximal compact subgroup of $H$ and $\hat{K}/K$ is finite. Any two maximal compact subgroups of $\hat{H}$ are conjugate via an element of $H$. Generalizing the language of semisimple Lie group theory, we say that $\hat{H}=\hat{K}P$ is a \emph{Cartan decomposition} of $\hat{H}$ if: (i) $\hat{K}$ is a maximal compact subgroup of $\hat{H}$ and (ii) there exists a compact real form $\mathfrak{c}$ of the complexification $\mathfrak{h}^\mathbf{C}$ of $\mathfrak{h}$ such that $\mathfrak{k}=\mathfrak{h}\cap \mathfrak{c}$ and $P=\exp(\mathfrak{p})$, where $\mathfrak{p}=\mathfrak{h}\cap i\mathfrak{c}$. (Here $i=\sqrt{-1}$, and $\mathfrak{h}$ and $\mathfrak{k}$ are the Lie algebras of $\hat{H}$ and $\hat{K}$, respectively.) The existence of a Cartan decomposition implies that $\hat{H}$ is a reductive Lie group. However, not every reductive fcc group admits a Cartan decomposition. If a Cartan decomposition does exist, it is unique up to conjugation by elements of the identity component $H$ of $\hat{H}$.
Theorem 4.1 of \cite{Mostow:SelfAdjointGroups} states: If $\hat{H}$ and $\hat{H}'$ are fcc Lie groups with $\hat{H}<\hat{H}'$ and if both $\hat{H}$ and $\hat{H}'$ admit Cartan decompositions, then for each Cartan decomposition $\hat{H}=\hat{K}P$, there exists a compatible Cartan decomposition $\hat{H}'=\hat{K}'P'$. By \emph{compatible} we mean that $\hat{K}<\hat{K}'$ and $P<P'$.
\end{notarem}
\begin{proof}[Proof of Corollary~\ref{cor.nik}] Let $\mathfrak{a}_0$ be the one-dimensional Lie algebra $\mathbf{R}\varphi$ where $\varphi$ is the pre-Einstein derivation of $\mathfrak{n}$. By Propositions~\ref{prop.Lau} and~\ref{prop.nik}, $\mathfrak{s}_0:=\mathfrak{a}_0\ltimes\mathfrak{n}$ is a solvable Lie algebra of Einstein type. Let $g_0$ denote both an Einstein metric on $\mathfrak{s}_0$ and its restriction to $\mathfrak{n}$, which is a nilsoliton metric. By~\ref{nota.ss} and the hypotheses that $\mathfrak{a}$ is abelian and that $\varphi=A^\mathbf{R}$ for some $A\in\mathfrak{a}$, we have $\mathfrak{a}<\operatorname{Der}(\mathfrak{n})^{\mathfrak{a}_0}$. Again by~\ref{nota.ss}, $X^\mathbf{R}$ and $X^{i\mathbf{R}}\in \operatorname{Der}(\mathfrak{n})^{\mathfrak{a}_0}$ for all $X\in\mathfrak{a}$.
Let $\mathfrak{b}=\mathfrak{a}^\mathbf{R}+\mathfrak{a}^{i\mathbf{R}}$, where $\mathfrak{a}^\mathbf{R}=\{X^\mathbf{R}:X\in\mathfrak{a}\}$ and similarly for $\mathfrak{a}^{i\mathbf{R}}$. Note that $\mathfrak{b}=\mathfrak{a}^{i\mathbf{R}}+\mathfrak{a}^{\mathbf{R}}$ is a Cartan decomposition of $\mathfrak{b}$. By~\ref{cd}, there exists a Cartan decomposition $\mathfrak{k}+\mathfrak{p}$ of $\operatorname{Der}(\mathfrak{n})^{\mathfrak{a}_0}$ such that $\mathfrak{a}^{i\mathbf{R}}\subset \mathfrak{k}$ and $\mathfrak{a}^\mathbf{R} \subset \mathfrak{p}$. By the conjugacy of Cartan decompositions and Proposition~\ref{prop.heb}, there exists $\tau\in \operatorname{Aut}(\mathfrak{n})^{\mathfrak{a}_0}$ such that the elements of $\mathfrak{k}$, respectively $\mathfrak{p}$, are skew-symmetric, respectively symmetric, with respect to the nilsoliton metric $\tau^*(g_0)$ on $\mathfrak{n}$. Let $g$ denote the Einstein metric on $\mathfrak{s}_0$ determined by the nilsoliton metric $\tau^*(g_0)$. Since $\tau(\varphi)=\varphi$, the Einstein derivation of $(\mathfrak{s}_0,g)$ is again $\varphi$. By Proposition~\ref{prop.heb}, $\mathfrak{r}:=\mathfrak{a}^\mathbf{R}+\mathfrak{n}$ is of Einstein type. Let $M$ be the associated simply-connected Einstein manifold.
Since $\mathfrak{a}^{i\mathbf{R}}$ acts by skew-symmetric derivations, the isometry algebra of $M$ contains $\mathfrak{a}^{i\mathbf{R}}+\mathfrak{r}=\mathfrak{b}+\mathfrak{n}$, and it is easy to see that the simply-connected Lie group $S$ with Lie algebra $\mathfrak{a}+\mathfrak{n}<\mathfrak{b}+\mathfrak{n}$ acts simply-transitively on $M$. Thus $\mathfrak{a}+\mathfrak{n}$ is of Einstein type.
\end{proof}
\subsection{Isometry groups of solvmanifolds}\label{subsec.isom}
We review results of \cite{GordonWilson:IsomGrpsOfRiemSolv} concerning the structure of isometry groups of arbitrary solvmanifolds. We will restrict our attention here to simply-connected solvmanifolds, since all solvable Lie groups of Einstein type are simply-connected.
\begin{defin}\label{def.stdpos} Let $(\mathcal{M}, h)$ be a simply-connected Riemannian solvmanifold, let $G=I_0(\mathcal{M})$ be the identity component of the full isometry group of $\mathcal{M}$, and let $\mathfrak{g}$ be the Lie algebra of $G$. Let ${\mathcal R}={\mathcal R}(h)$ denote the collection of all simply-transitive solvable subgroups of $G$. Fix once and for all a base point $p\in {\mathcal M}$. For $R\in{\mathcal R}$, we will continue to denote by $h$ the left-invariant Riemannian metric on $R$ defined by identifying $R$ with ${\mathcal M}$ via $a\mapsto a(p)$ for $a\in R$.
\begin{enumerate}
\item For $R\in{\mathcal R}$, recall that the normalizer $ N_\mathfrak{g}(\mathfrak{r})$ of $\mathfrak{r}$ in $\mathfrak{g}$ is given by the semi-direct sum $N_\mathfrak{g}(\mathfrak{r})=\operatorname{Der}_{\operatorname{skew}}(\mathfrak{r},h)+\mathfrak{r}$ where $\operatorname{Der}_{\operatorname{skew}}(\mathfrak{r},h)$ is the space of skew-symmetric derivations of $(\mathfrak{r},h)$. The \emph{standard modification} $\mathfrak{r}'$ of $\mathfrak{r}$ with respect to $h$ is defined to be the orthogonal complement in $N_\mathfrak{g}(\mathfrak{r})$ of $\operatorname{Der}_{\operatorname{skew}}(\mathfrak{r},h)$ with respect to the Killing form. The connected subgroup $R'$ of $G$ with Lie algebra $\mathfrak{r}'$ will also be called the standard modification of $R$. Observe that $R'\in{\mathcal R}$.
\item We say that $R$ (or its Lie algebra $\mathfrak{r}$) is in \emph{standard position} in $G$ with respect to $h$ if it is equal to its own standard modification.
\end{enumerate}
\end{defin}
\begin{notarem}\label{remf} (See \cite{GordonWilson:IsomGrpsOfRiemSolv}.)
\begin{enumerate}
\item Let $\mathcal{F}$ be the collection of subgroups of $G$ that are maximal with respect to the property of having no non-trivial connected noncompact simple subgroups. Then the elements $F\in \mathcal{F}$ form a conjugacy class of subgroups of $G$ given as follows: Let $G=G_1G_2$ be any Levi decomposition of $G$ and write $G_1=G_{nc}G_c$, where $G_{nc}$ and $G_c$ are the maximal semisimple connected normal subgroups of $G$ of noncompact and compact type, respectively. Let $G_1= K_1A_1N_1$ be any Iwasawa decomposition of $G_1$ (in particular, $G_c<K_1$), and let $M_1$ be the centralizer of $A_1$ in $K_1\cap G_{nc}$. Set
\begin{equation}\label{eqf}F=(M_1A_1N_1)G_cG_2.\end{equation}
Then $F\in\mathcal{F}$, and every element of $\mathcal{F}$ is of this form.
\item A subgroup $S_1$ of $G$ of the form $S_1=A_1N_1$, where $K_1A_1N_1$ is an Iwasawa decomposition of some semisimple Levi factor of $G$, will be called an \emph{Iwasawa subgroup} of $G$.
\item In the notation of (i), the group $K_1$ is compact if and only if $G_{nc}$ has finite center. This condition is equivalent to the condition that the Lie algebra of some, hence any, maximal compact connected subgroup of $G$ is a maximal compactly embedded subalgebra of $\mathfrak{g}$. In this case, every maximal compact subgroup $U$ of $G$ is of the form $U=K_1(U\cap G_2)$ relative to some Levi and Iwasawa decompositions as in (i).
\end{enumerate}
\end{notarem}
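\begin{remark} By way of illustration (this example plays no role in the sequel), let $G=SL(2,\mathbf{R})$, so that $G_2$ and $G_c$ are trivial and $G_{nc}=G$. For the standard Iwasawa decomposition $G=K_1A_1N_1$ with $K_1=SO(2)$, $A_1$ the positive diagonal matrices of determinant one, and $N_1$ the upper triangular unipotent matrices, the centralizer of $A_1$ in $K_1$ is $M_1=\{\pm I\}$, and Equation~(\ref{eqf}) gives $F=M_1A_1N_1$, the full group of upper triangular matrices in $SL(2,\mathbf{R})$. This $F$ is solvable, hence contains no non-trivial connected noncompact simple subgroups, and it is maximal with this property, since the only closed subgroup of $SL(2,\mathbf{R})$ properly containing $F$ is $SL(2,\mathbf{R})$ itself.
\end{remark}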
\begin{prop}\label{prop.f}\cite{GordonWilson:IsomGrpsOfRiemSolv} Let $(\mathcal{M},h)$ be a simply-connected Riemannian solvmanifold and let $G=I_0(\mathcal{M})$ be the identity component of the full isometry group of $\mathcal{M}$. Then:
\begin{enumerate}
\item The collection of all simply transitive solvable subgroups of $G$ in standard position with respect to $h$ forms a (non-empty) conjugacy class $\mathcal{S}={\mathcal S}(h)$ of subgroups of $G$.
\item Given $R\in{\mathcal R}$, let $R'$ be the standard modification of $R$ and let $R''$ be the standard modification of $R'$. Then $R''$ is in standard position in $G$ with respect to $h$, and the normalizer of $R''$ in $G$ contains that of $R$.
\item (See the notation of~\ref{remf}.) For $S\in\mathcal{S}$, the normalizer $N_G(S)$ is an element of $\mathcal{F}$. Conversely, given any $F\in \mathcal{F}$, there exists $S\in\mathcal{S}$ such that $N_G(S)=F$.
\item Let $S\in\mathcal{S}$ and let $G=G_1G_2$ and $G_1=K_1A_1N_1$ be the Levi and Iwasawa decompositions associated with $F=N_G(S)$ as in~\ref{remf}. Then the Lie algebra of $S$ satisfies $$\mathfrak{a}_1+\mathfrak{n}_1+[\mathfrak{g},\mathfrak{g}_2]\subset \mathfrak{s}\subset Z(\mathfrak{m}_1)+ \mathfrak{a}_1 +\mathfrak{n}_1+\mathfrak{g}_2,$$ where $Z(\mathfrak{m}_1)$ is the center of the Lie algebra $\mathfrak{m}_1$ of $M_1$. In particular, $S$ contains an Iwasawa subgroup of $G$.
\end{enumerate}
\end{prop}
\section{The Main Theorem}\label{main}
Our goal is to prove the following theorem:
\begin{thm}\label{mainthm}
Let $R$ be a solvable Lie group of Einstein type and $h$ a left-invariant Riemannian metric on $R$. Let $\hat G={\operatorname{Isom}}(R,h)$ be the full isometry group and let $\hat L$ be the isotropy subgroup of $\hat G$ at the identity $e\in R$. Then $\hat G/\hat L$ admits a $\hat G$-invariant Einstein metric.
\end{thm}
Since $R$ acts simply-transitively on $\hat G/\hat L$, it will follow that $R$ admits an Einstein metric whose isometry group contains that of $h$, thus proving the Main Theorem~\ref{mt}.
The proof of Theorem~\ref{mainthm} has three main parts:
\begin{enumerate}
\item We apply a result of the second author to show that the normalizer in $\hat G$ of $R$ (or of any simply transitive solvable subgroup of $\hat G$) leaves an Einstein metric invariant. This result, along with a study of the structure of the isometry groups of left-invariant metrics on solvable Lie groups of Einstein type, enables us to reduce the theorem to Lemma~\ref{lem}.
\item We use Ricci curvature computations to prove the ``if'' statement and the final statement of Lemma~\ref{lem}.
\item We apply Nikolayevsky's technique, as outlined in Subsection~\ref{subsec.nik}, to prove the forward statement of Lemma~\ref{lem}.
\end{enumerate}
In this section we carry out the first two parts of the proof.
\subsection{Part (i) of the proof.}\label{subsec.part1}
In the notation of Theorem~\ref{mainthm}, we first show that the normalizer of $R$ in $\hat G$ leaves an Einstein metric invariant. Recall that the normalizer of $R$ in $\hat G$ is the group $\operatorname{Aut}_{\operatorname{orth}}(R,h)$ of orthogonal automorphisms of $(R,h)$.
\begin{prop}\label{prop.step1} Let $R$ be a solvable Lie group that admits an Einstein metric and let $h$ be any left-invariant metric on $R$. Then there exists an Einstein metric $g$ on $R$ such that
$${\operatorname{Isom}}(R,h)\cap \operatorname{Aut}(R)\subset {\operatorname{Isom}}(R,g).$$
\end{prop}
\begin{remark} We note that the proposition above holds more generally for solvsolitons, with the same proof, but we will not need this fact.
\end{remark}
\begin{proof} We apply results of \cite{Jablo:ConceringExistenceOfEinstein}. (That article addresses the more general setting of solvable Lie groups admitting solvsolitons but we only apply it to the special case of those admitting Einstein metrics.) There is a natural correspondence between Einstein (or solvsoliton) metrics on $R$ and so-called \emph{distinguished metrics}. Letting $\mathfrak{n}$ denote the nilradical of $\mathfrak{r}$, a distinguished metric and the corresponding Einstein metric agree on $\mathfrak{n}$ (and restrict to a nilsoliton metric on $\mathfrak{n}$), and the orthogonal complement of $\mathfrak{n}$ relative to both metrics is the same abelian algebra $\mathfrak{a}$. The two metrics differ only on $\mathfrak{a}$. The expression for the distinguished metric on $\mathfrak{a}$ will not be needed below. The Einstein metric is given on $\mathfrak{a}$ by
\begin{equation}\label{eq.la}g(A,A)=c\,\operatorname{trace}(S_A^2)
\end{equation}
where $c$ is a constant and where $S_A$ is the symmetric part of $\operatorname{ad}(A)|_{\mathfrak{n}}$ with respect to the given nilsoliton metric on $\mathfrak{n}$; i.e., $S_A=\frac{1}{2}(\operatorname{ad}(A)|_{\mathfrak{n}}+\operatorname{ad}(A)|_{\mathfrak{n}}^t)$.
Theorem 4.1 of \cite{Jablo:ConceringExistenceOfEinstein} (stated as Proposition~\ref{completely_solv} above) states that left-invariant solvsoliton metrics on completely solvable unimodular Lie groups are maximally symmetric. In our case, $R$ is not assumed to be either completely solvable or unimodular. However, the first step in the proof of Theorem 5.8 of \cite{Jablo:ConceringExistenceOfEinstein} applies to \emph{all} solvable Lie groups that admit solvsoliton metrics. It asserts that for any left-invariant metric $h$ on $R$, there exists a distinguished metric $g_0$ such that
$$\operatorname{Aut}_{\operatorname{orth}}(R,h)\subset {\operatorname{Isom}}(R,g_0).$$
Here $\operatorname{Aut}_{\operatorname{orth}}(R,h)$ denotes the group $\operatorname{Aut} (R) \cap {\operatorname{Isom}}(R,h)$. As $R$ is simply-connected, this group corresponds precisely to the orthogonal automorphisms of the Lie algebra, $\operatorname{Aut}(\mathfrak{r})\cap O(\mathfrak{r},h)$.
To complete the proof, we need only show that $\operatorname{Aut}_{\operatorname{orth}}(R, g_0)\subset \operatorname{Aut}_{\operatorname{orth}}(R,g)$ where $g$ is the Einstein metric associated with $g_0$.
Let $\tau\in \operatorname{Aut}(\mathfrak{r})\cap O(\mathfrak{r},g_0)$, and let $A\in\mathfrak{a}$. By Proposition~\ref{prop.heb}, we have $\operatorname{ad}(A)|_{\mathfrak{n}}=S_A + T_A$, where $T_A$ is a skew-symmetric derivation of $\mathfrak{n}$ and $S_A$, as defined above, is a symmetric derivation. Moreover, $S_A$ and $T_A$ both commute with $\operatorname{ad}(A)$ and hence with each other.
Since $\tau\in \operatorname{Aut}(\mathfrak{r})$, we have $\tau|_{\mathfrak{n}}\circ \operatorname{ad}(A)|_{\mathfrak{n}}=\operatorname{ad}(\tau(A))|_{\mathfrak{n}}\circ \tau|_{\mathfrak{n}}$, so $\operatorname{ad}(\tau(A))|_{\mathfrak{n}}=\tau|_{\mathfrak{n}}\circ \operatorname{ad}(A)|_{\mathfrak{n}}\circ \tau|_{\mathfrak{n}}^{-1}$. Since $\tau|_{\mathfrak{n}}$ is orthogonal, we also have that $S_{\tau(A)}=\tau|_{\mathfrak{n}}\circ S_A\circ \tau|_{\mathfrak{n}}^{-1}$.
It is now immediate from
Equation~(\ref{eq.la}) that $\tau\in O(\mathfrak{r},g)$.
\end{proof}
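\begin{remark} As a quick illustration of Equation~(\ref{eq.la}) (a sanity check only, not used in the sequel), consider real hyperbolic space $\mathbf{R}H^n$, realized as the solvable group with Lie algebra $\mathfrak{r}=\mathbf{R}A\ltimes\mathbf{R}^{n-1}$, where $\mathfrak{n}=\mathbf{R}^{n-1}$ is abelian and $\operatorname{ad}(A)|_{\mathfrak{n}}=\operatorname{Id}$. Then $S_A=\operatorname{Id}_{\mathfrak{n}}$ and $\operatorname{trace}(S_A^2)=n-1$. The Einstein metric of constant curvature $-1$ satisfies $\operatorname{Ric}=-(n-1)g$, and taking $c=\frac{1}{n-1}$ in Equation~(\ref{eq.la}) gives $g(A,A)=1$; that is, $A$ is a unit vector, as it must be for the metric of curvature $-1$.
\end{remark}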
Restricting our attention to the identity component $G$ of $\hat G$ for now, we next apply Proposition~\ref{prop.step1} to describe the subgroups of $G$ in standard position, in the language of Definition~\ref{def.stdpos}.
\begin{lemma}\label{lem.stdmodEinst} Let $R$ be a solvable Lie group of Einstein type and let $h$ be an arbitrary but fixed left-invariant metric on $R$ (not necessarily an Einstein metric). Let $G=I_0(R,h)$ and let $L$ be the isotropy subgroup of $G$ at the identity element. We use the notation of~\ref{remf} and Proposition~\ref{prop.f}. Then:
\begin{enumerate}
\item Each $S\in\mathcal{S}(h)$ is a solvable Lie group of Einstein type.
\item There exist a Levi decomposition $G=G_1G_2$ and an Iwasawa decomposition $G_1=K_1A_1N_1$ such that, in the notation of~\ref{remf},
$$L=K_1(L\cap G_2).$$
In particular, $K_1$ has finite center and $L$ is a maximal compact subgroup of $G$.
\item There exists a characteristic subgroup $S_2$ of $G$ contained in $G_2$ such that:
\begin{enumerate}
\item $\mathcal{S}(h)=\{S_1\ltimes S_2: S_1\mbox{\,is an Iwasawa subgroup of \,}G\}.$
\item $G_2=(L\cap G_2)\ltimes S_2$.
\item $\operatorname{Nilrad}(G)=\operatorname{Nilrad}(S_2)$.
\end{enumerate}
\item Let $F\in\mathcal{F}$. Then $F/(F\cap L)$ admits a left-invariant Einstein metric. (In particular, the Einstein metric on $R$ and on the associated element $S$ of ${\mathcal S}(h)$ can be chosen to be invariant under $N_G(S)\in\mathcal{F}$.)
\end{enumerate}
\end{lemma}
\begin{proof} (i) Write $\mathfrak{r}=\mathfrak{a}+\mathfrak{n}$ as in Definition~\ref{std}. Let $\mathfrak{h}=\operatorname{Der}_{\operatorname{skew}}(\mathfrak{r},h) +\mathfrak{r}$ (semi-direct sum) and let $H$ be the semi-direct product of $R$ with the group $\operatorname{Aut}_{\operatorname{orth}}(R,h)$ of orthogonal automorphisms of $(R,h)$. Then $H$ has Lie algebra $\mathfrak{h}$. By Proposition~\ref{prop.step1}, there exists an Einstein metric $g$ on $R$ invariant under $\operatorname{Aut}_{\operatorname{orth}}(R,h)$. Thus $H/\operatorname{Aut}_{\operatorname{orth}}(R,h)$ admits an $H$-invariant Einstein metric. Since the standard modification $R'$ of $R$ acts simply-transitively on $H/\operatorname{Aut}_{\operatorname{orth}}(R,h)$, this Einstein metric defines a left-invariant Einstein metric on $R'$. Thus $R'$ is of Einstein type. Continuing, the standard modification $S$ of $R'$ is also of Einstein type and, by Proposition~\ref{prop.f}, $S\in {\mathcal S}(h)$.
(ii) We have $\mathfrak{g}=\mathfrak{l}+\mathfrak{r}$ with $\mathfrak{l}\cap\mathfrak{r}=\{0\}$. Let $\mathfrak{u}$ be a maximal compactly embedded subalgebra of $\mathfrak{g}$ containing $\mathfrak{l}$. Suppose $X\in \mathfrak{u}\cap\mathfrak{r}$. Then $\operatorname{ad}(X)$ is semisimple with purely imaginary eigenvalues. By Propositions~\ref{prop.heb}(ii) and~\ref{prop.Lau}, it follows that all eigenvalues of $\operatorname{ad}(X)$ are zero and thus $X$ must be central. However, by Propositions~\ref{prop.heb}(iii) and ~\ref{prop.Lau}, the center of $\mathfrak{r}$ is trivial. Thus $\mathfrak{u}\cap\mathfrak{r}=\{0\}$, and $\mathfrak{l}$ is a maximal compactly embedded subalgebra of $\mathfrak{g}$. Statement (ii) now follows from~\ref{remf}(iii).
(iii) Define $\mathfrak{s}_2$ to be the orthogonal complement of $\mathfrak{l}\cap\mathfrak{g}_2$ in $\mathfrak{g}_2$ with respect to the Killing form $B_\mathfrak{g}$,
and let $S_2$ be the corresponding connected subgroup of $G$. Then $\operatorname{Nilrad}(\mathfrak{g})<\mathfrak{s}_2<\mathfrak{g}_2$. Since $[\mathfrak{g},\mathfrak{g}_2]<\operatorname{Nilrad}(\mathfrak{g})$, it follows that $\mathfrak{s}_2$ is a $\mathfrak{g}$-ideal. We first show that it is a characteristic ideal; i.e., that it is invariant under $\operatorname{Aut}(\mathfrak{g})$. By statement (ii) and the fact that $B_\mathfrak{g}(\mathfrak{g}_1,\mathfrak{g}_2)=0$ for any semisimple Levi factor $\mathfrak{g}_1$, we see that $\mathfrak{s}_2=\mathfrak{l}^\perp\cap \mathfrak{g}_2$ where $\mathfrak{l}^\perp$ is the orthogonal complement of $\mathfrak{l}$ with respect to $B_\mathfrak{g}$. Let $\tau\in\operatorname{Aut}(G)$. Then $\tau(L)$ is a maximal compact subgroup of $G$ and hence is conjugate to $L$; i.e., $\tau_*(\mathfrak{l})=\operatorname{Ad}(a)(\mathfrak{l})$ for some $a\in G$. For $X\in\mathfrak{s}_2$, we have
$$B_\mathfrak{g}(\mathfrak{l}, \tau_*(X))=B_\mathfrak{g}(\tau^{-1}_*(\mathfrak{l}),X)=B_\mathfrak{g}(\operatorname{Ad}(a^{-1})(\mathfrak{l}), X)=B_\mathfrak{g}(\mathfrak{l},\operatorname{Ad}(a)(X))=0$$
where the last equality uses the fact that $\mathfrak{s}_2$ is a $\mathfrak{g}$-ideal and thus is invariant under $\operatorname{Ad}(G)$. Thus
$\tau_*(\mathfrak{s}_2)\perp \mathfrak{l}$ with respect to $B_\mathfrak{g}$. Since also $\tau_*(\mathfrak{s}_2)<\mathfrak{g}_2$, we have $\tau_*(\mathfrak{s}_2)<\mathfrak{s}_2$, and equality must hold since $\tau_*$ is invertible. Thus $\mathfrak{s}_2$ is a characteristic ideal in $\mathfrak{g}$ and $S_2$ is a characteristic subgroup of $G$.
The fact that the Killing form of $\mathfrak{g}$ is negative-definite on $\mathfrak{l}$ implies that $\mathfrak{g}_2=(\mathfrak{l}\cap\mathfrak{g}_2)\ltimes \mathfrak{s}_2$. Thus $S_2$ satisfies condition (b).
For (c), since $\operatorname{Nilrad}(\mathfrak{g})$ is a nilpotent ideal of $\mathfrak{s}_2$, we have $\operatorname{Nilrad}(\mathfrak{g})<\operatorname{Nilrad}(\mathfrak{s}_2)$. For the opposite inclusion, note that any subspace of $\mathfrak{g}_2$ containing $\operatorname{Nilrad}(\mathfrak{g}_2)$ is a $\mathfrak{g}$-ideal since $[\mathfrak{g},\mathfrak{g}_2]<\operatorname{Nilrad}(\mathfrak{g})$. In particular, $\operatorname{Nilrad}(\mathfrak{s}_2)$ is a nilpotent ideal of $\mathfrak{g}$ and hence $\operatorname{Nilrad}(\mathfrak{s}_2)<\operatorname{Nilrad}(\mathfrak{g})$. Thus $S_2$ satisfies condition (c).
Finally we prove that condition (a) holds. Consider the Levi and Iwasawa decompositions in part (ii) of the Lemma and the corresponding group $F\in \mathcal{F}$ given by $F=(M_1A_1N_1)G_cG_2$. By (ii), $L\cap F=M_1G_c(L\cap G_2)$. Let $S\in{\mathcal S}(h)$ be the subgroup of $F$ in standard position, i.e., $\mathfrak{s}$ is the orthogonal complement of $\mathfrak{l}\cap \mathfrak{f}$ in $\mathfrak{f}$ relative to $B_\mathfrak{f}$. For $X\in\mathfrak{l}\cap \mathfrak{f}$ and $Y\in\mathfrak{g}_2$, we have
$$B_\mathfrak{f}(X,Y)=\operatorname{trace}\big((\operatorname{ad}(X)\circ\operatorname{ad}(Y))|_{\mathfrak{g}_2}\big)=B_\mathfrak{g}(X,Y).$$
It thus follows from the definition of $\mathfrak{s}_2$ that $\mathfrak{s}_2\perp(\mathfrak{l}\cap\mathfrak{f})$ relative to $B_\mathfrak{f}$ and hence $\mathfrak{s}_2<\mathfrak{s}$ by Definition~\ref{def.stdpos}. By Proposition~\ref{prop.f}, we also have that $\mathfrak{a}_1+\mathfrak{n}_1<\mathfrak{s}$. Write $\mathfrak{s}_1=\mathfrak{a}_1+\mathfrak{n}_1$. Since $\mathfrak{f}=(\mathfrak{l}\cap\mathfrak{f})+(\mathfrak{s}_1+\mathfrak{s}_2)$, we must have $\mathfrak{s}=\mathfrak{s}_1+\mathfrak{s}_2$, and then $S=S_1\ltimes S_2$. Thus we have found one element $S\in{\mathcal S}(h)$ of the form stated in condition (a). Condition (a) now follows from Proposition~\ref{prop.f}(i), the fact that $S_2$ is normal in $G$ and the fact that the Iwasawa subgroups of $G$ form a $G$-conjugacy class of subgroups.
(iv) is immediate from Proposition~\ref{prop.step1}.
\end{proof}
\begin{remark} One can also show directly, using Propositions~\ref{prop.heb} and~\ref{prop.Lau}, that the standard modification of $R$ with respect to $h$ is of Einstein type and moreover that it is given by $\mathfrak{r}'=\mathfrak{a}'+\mathfrak{n}$, where $\mathfrak{n}$ is the nilradical of both $\mathfrak{r}$ and $\mathfrak{r}'$. Moreover, by the proof of Theorem 3.5 of \cite{GordonWilson:IsomGrpsOfRiemSolv}, the fact that the standard modification $\mathfrak{r}'$ satisfies $\operatorname{Nilrad}(\mathfrak{r}')=\operatorname{Nilrad}(\mathfrak{r})$ implies that $R'$ is in standard position with respect to $h$. Thus for solvable Lie groups of Einstein type, only a single standard modification is needed to reach standard position. We will not need this fact, however.
\end{remark}
Lemma~\ref{lem.stdmodEinst} says that each $S\in{\mathcal S}(h)$ satisfies the hypotheses of the Key Lemma stated below.
\begin{key}[``Only if'' statement of Lemma~\ref{lem}.] \label{key}Suppose that $S$ is a solvable Lie group of Einstein type and that $S$ is a semi-direct product $S=S_1\ltimes S_2$ of subgroups satisfying the following hypotheses:
\begin{itemize}
\item $S_1$ isomorphically embeds as an Iwasawa subgroup in a semisimple Lie group $G_1$.
\item The adjoint action of $S_1$ on the Lie algebra $\operatorname{Lie}(S_2) $ extends to a representation of $G_1$ on $\operatorname{Lie}(S_2)$.
\end{itemize}
Then $S_2$ is of Einstein type.\end{key}
In the remainder of this section, we assume the Key Lemma and complete the proof of the Main Theorem~\ref{mainthm}. We will then prove the Key Lemma in a later section.
Assuming the Key Lemma, we have reduced the proof of the Main Theorem to the following proposition:
\begin{prop}\label{prop.key} Let $\hat G$ be a (not necessarily connected) Lie group, let $\hat L$ be a compact subgroup of $\hat G$ and denote by $G$ and $L$ the identity components of $\hat G$ and $\hat L$, respectively. Suppose that there exist a Levi decomposition $G=G_1G_2$, an Iwasawa decomposition $G_1=K_1S_1$, and a connected solvable normal subgroup $S_2$ of $\hat G$ satisfying the following:
\begin{enumerate}
\item $L=K_1(L\cap G_2)$ and $G_2=(L\cap G_2)S_2$;
\item The solvable group $S:=S_1\ltimes S_2$ acts simply transitively on $\hat G/\hat L$;
\item $S_2$ is of Einstein type.
\end{enumerate}
Then $\hat G/\hat L$ admits a left-invariant Einstein metric of negative Ricci curvature.
\end{prop}
(There is some redundancy in the hypotheses of the proposition; one can show that hypothesis (i) follows from the remaining hypotheses.) Note that this proposition together with the Key Lemma~\ref{key} form Lemma~\ref{lem} stated in the introduction.
The Main Theorem~\ref{mainthm} is an immediate consequence of Lemma~\ref{lem.stdmodEinst}, Lemma~\ref{key} and Proposition~\ref{prop.key}. The statement of Proposition~\ref{prop.key} is actually stronger than needed to prove the Main Theorem, since it does not assume that $S$ itself is of Einstein type; instead the conclusion of the proposition implies that $S$ is of Einstein type.
\begin{prop}\label{prop.full_aut} Let $S$ be a solvable Lie group of Einstein type and let $W$ be a maximal compact (not necessarily connected) subgroup of $\operatorname{Aut}(\mathfrak{s})$. Then:
\begin{enumerate}
\item There exists a left-invariant Einstein metric $g$ on $S$ that is $W$-invariant.
\item Let $H$ be the connected reductive subgroup of $\operatorname{Aut}(\mathfrak{s})$ with Lie algebra $\operatorname{Der}(\mathfrak{s})^\mathfrak{a}$, where $\mathfrak{s}=\mathfrak{a}+\mathfrak{n}$ is the orthogonal decomposition of $\mathfrak{s}$ given in Proposition~\ref{prop.heb} with respect to the Einstein metric $g$ in (i). Then $H$ is normalized by $W$. The Lie subgroup $\hat H:=W H$ of $\operatorname{Aut}(\mathfrak{s})$ has identity component $H$ and has only finitely many connected components.
\item In the language of~\ref{cd}, $\hat H$ has a Cartan decomposition $\hat H=WP$, with $W$ acting orthogonally and $P$ acting symmetrically on $\mathfrak{s}$ with respect to the inner product $g$.
\end{enumerate}
\end{prop}
\begin{proof} The first statement is immediate from Proposition~\ref{prop.step1}. For the second statement, note that $H$ is the identity component of $\operatorname{Aut}(\mathfrak{s})^\mathfrak{a}:=\{\tau\in \operatorname{Aut}(\mathfrak{s}): \tau|_{\mathfrak{a}}=\operatorname{Id}_\mathfrak{a}\}$. The group $W$ leaves $\mathfrak{a}$ invariant since it acts orthogonally relative to $g$ and thus normalizes $H$. The Lie algebra of $\hat H$ contains
$\operatorname{Der}(\mathfrak{s})^\mathfrak{a}$ and consists of derivations that leave $\mathfrak{a}$ invariant. By Proposition~\ref{prop.heb}, it follows that $\hat H$ has Lie algebra $\mathfrak{h}:=\operatorname{Der}(\mathfrak{s})^\mathfrak{a}$. This, together with the fact that $W$ is compact, yields (ii).
Proposition~\ref{prop.heb}(i) gives us a Cartan decomposition $H=(W\cap H)P$, where $P=\exp(\mathfrak{p})$, with $\mathfrak{p}$ consisting of all elements of $\mathfrak{h}=\operatorname{Der}(\mathfrak{s})^\mathfrak{a}$ that are symmetric with respect to $g$. Since $W$ acts orthogonally with respect to $g$, this Cartan decomposition extends to a Cartan decomposition $\hat H=W P$.
\end{proof}
\begin{ab}[{\bf Choosing an Einstein metric on $S_2$}]\label{lem.com} In the notation of Proposition~\ref{prop.key}, the Lie group $\hat G$ acts by conjugation on $S_2$. Let $\rho:\hat G\to \operatorname{Aut}(\mathfrak{s}_2)$ be the resulting action on $\mathfrak{s}_2$, so that $\rho_*(X)=\operatorname{ad}(X)|_{\mathfrak{s}_2}$. By Proposition~\ref{prop.step1}, there exists a left-invariant Einstein metric $h_2$ on $S_2$ invariant under the action of $\hat L$. We have a standard decomposition $\mathfrak{s}_2=\mathfrak{a}_2+\mathfrak{n}_2$ as in Definition~\ref{std}, with $\mathfrak{a}_2\perp \mathfrak{n}_2$ relative to $h_2$. The group $\hat L$ normalizes $\mathfrak{a}_2$ and $L$ acts trivially on $\mathfrak{a}_2$. By Proposition~\ref{prop.heb}, we have that $\mathfrak{g}=\mathfrak{n}_2 + Z_\mathfrak{g}(\mathfrak{a}_2)$, and $\rho_*(Z_\mathfrak{g}(\mathfrak{a}_2))$ lies in the reductive Lie algebra $\operatorname{Der}(\mathfrak{s}_2)^{\mathfrak{a}_2}$. In fact $Z_\mathfrak{g}(\mathfrak{a}_2)$ is itself a reductive Lie algebra complementary to $\mathfrak{n}_2$. Indeed, by Proposition~\ref{prop.heb}, we have $Z_\mathfrak{g}(\mathfrak{a}_2)\cap\mathfrak{n}_2=\{0\}$ and thus the projection $\mathfrak{g}\to \mathfrak{g}/\mathfrak{n}_2$ restricts to an isomorphism of $Z_\mathfrak{g}(\mathfrak{a}_2)$ with $\mathfrak{g}/\mathfrak{n}_2$. But it is easily seen that $\mathfrak{n}_2=\operatorname{Nilrad}(\mathfrak{g})$. Thus $\mathfrak{g}/\mathfrak{n}_2$, and hence $Z_\mathfrak{g}(\mathfrak{a}_2)$, is reductive. The derived algebra of $Z_\mathfrak{g}(\mathfrak{a}_2)$ is a semisimple Levi factor of $\mathfrak{g}$; we therefore replace $\mathfrak{g}_1$ by $[Z_\mathfrak{g}(\mathfrak{a}_2),Z_\mathfrak{g}(\mathfrak{a}_2)]$. This may result in replacing $L$ (thus $K_1$) and $S_1$ by conjugates, but that does not affect the validity of Proposition~\ref{prop.key}.
The Lie group $G_1\hat L$ is a reductive fcc group (see~\ref{cd}) with identity component $G_1(L\cap G_2)$. Let $G_1=K_1P_1$ be a Cartan decomposition. Noting that $K_1=L\cap G_1$, we see that $\hat L P_1$ is a Cartan decomposition of $G_1\hat L$. Let $S_2$ play the role of $S$ in Proposition~\ref{prop.full_aut}. Then the Lie group $\hat H$ in~\ref{prop.full_aut} contains $G_1\hat L$ and thus admits a Cartan decomposition $\hat H=WQ$ compatible with the Cartan decomposition $\hat L P_1$. By Proposition~\ref{prop.full_aut} and the uniqueness of Cartan decompositions up to conjugacy, there exists $\tau\in\hat H$ such that $W$, hence $\hat L$, acts orthogonally and $Q$, hence $P_1$, acts symmetrically on $\mathfrak{s}_2$ relative to the Einstein inner product $\tau^*h_2$. We set $g_2=\tau^* h_2$.
\end{ab}
\begin{ab}[{\bf Computing Ricci Curvature}]
We review the expression for the Ricci curvature on a Riemannian homogeneous space $G/K$. Let $\mathfrak{g}=\mathfrak{k}+\mathfrak{q}$ be a reductive decomposition (i.e., $\mathfrak{q}$ is an $\operatorname{Ad}_G(K)$-invariant complement of $\mathfrak{k}$), and let $\la\,,\,\ra$ be the Riemannian inner product on $\mathfrak{q}$. We may extend $\la\,,\,\ra$ to an $\operatorname{Ad}_G(K)$-invariant inner product on $\mathfrak{g}$ with $\mathfrak{k}\perp\mathfrak{q}$. For $X\in\mathfrak{g}$, write $X=X_\mathfrak{k} +X_\mathfrak{q}$ with $X_\mathfrak{k}\in\mathfrak{k}$ and $X_\mathfrak{q}\in\mathfrak{q}$. Let $\operatorname{Rc}:\mathfrak{q}\times \mathfrak{q}\to \mathbf{R}$ denote the Ricci tensor of $\la\,,\,\ra$, and let $\operatorname{Ric}:\mathfrak{q}\to\mathfrak{q}$ denote the corresponding Ricci operator; i.e., $\langle \operatorname{Ric}(X),Y\rangle =\operatorname{Rc}(X,Y)$ for $X,Y\in\mathfrak{q}$. Let $H\in\mathfrak{q}$ be the unique element such that $\langle H,X\rangle =\operatorname{trace}(\operatorname{ad}(X))$ for all $X\in\mathfrak{g}$. Let $B_\mathfrak{g}$ denote the
Killing form of $\mathfrak{g}$. Define $M:\mathfrak{q}\to\mathfrak{q}$ by
\begin{equation}\label{m}\langle M(X),Y\rangle=\sum_{i=1}^n\,-\frac{1}{2}\langle [X,X_i]_\mathfrak{q}, [Y,X_i]_\mathfrak{q}\rangle +\frac{1}{4}\sum_{i,j=1}^n\,\langle [X_i,X_j]_\mathfrak{q},X\rangle \langle [X_i,X_j]_\mathfrak{q},Y\rangle\end{equation}
where $\{X_1,\dots, X_n\}$ is an orthonormal basis of $\mathfrak{q}$. Then
\begin{equation}\label{ric}\operatorname{Rc}(X,Y) =\langle M(X),Y\rangle -\frac{1}{2}B_\mathfrak{g}(X,Y)-\frac{1}{2}\langle [H,X],Y\rangle -\frac{1}{2}\langle X, [H,Y]\rangle.\end{equation}
\end{ab}
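\begin{remark} To illustrate Equations~(\ref{m}) and~(\ref{ric}) in the simplest possible case (this computation is only a consistency check), take $K$ trivial and $\mathfrak{g}=\mathfrak{q}=\operatorname{span}\{A,X\}$ with $[A,X]=X$ and $\{A,X\}$ orthonormal; this is the hyperbolic plane presented as a solvable Lie group. Since $\operatorname{trace}(\operatorname{ad}(A))=1$ and $\operatorname{trace}(\operatorname{ad}(X))=0$, we have $H=A$. Equation~(\ref{m}) gives $\langle M(A),A\rangle=-\frac{1}{2}$, $\langle M(X),X\rangle=-\frac{1}{2}+\frac{1}{2}=0$, and $\langle M(A),X\rangle=0$, while $B_\mathfrak{g}(A,A)=1$ and $B_\mathfrak{g}(X,X)=B_\mathfrak{g}(A,X)=0$. Equation~(\ref{ric}) then yields
$$\operatorname{Rc}(A,A)=-\tfrac{1}{2}-\tfrac{1}{2}=-1,\qquad \operatorname{Rc}(X,X)=0-\tfrac{1}{2}-\tfrac{1}{2}=-1,\qquad \operatorname{Rc}(A,X)=0,$$
so that $\operatorname{Rc}=-\la\,,\,\ra$, as expected for the hyperbolic plane of curvature $-1$.
\end{remark}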
\begin{proof}[Proof of Proposition~\ref{prop.key}]
Let $\mathfrak{u}=Z_\mathfrak{g}(\mathfrak{a}_2)=\mathfrak{l} +(\mathfrak{p}_1+\mathfrak{a}_2)$ and $\mathfrak{q}=\mathfrak{p}_1+\mathfrak{a}_2+\mathfrak{n}_2$. Then $\mathfrak{u}$ is a reductive Lie algebra and $\mathfrak{g}=\mathfrak{u}+\mathfrak{n}_2=\mathfrak{l}+\mathfrak{q}$ with each term in the two decompositions being $\operatorname{Ad}_{\hat G}(\hat L)$-invariant. We define an inner product $\la\,,\,\ra$ on $\mathfrak{q}$ satisfying:
\begin{enumerate}
\item $\mathfrak{p}_1\perp\mathfrak{s}_2$;
\item The restriction of $\la\,,\,\ra$ to $\mathfrak{s}_2\times \mathfrak{s}_2$ is the Einstein inner product defined in~\ref{lem.com};
\item Writing the noncompact part $\mathfrak{g}_{nc}$ of $\mathfrak{g}_1$ as a direct sum $\mathfrak{g}_{nc}=\mathfrak{h}_1\oplus\dots\oplus\mathfrak{h}_r$ of simple ideals and letting $\mathfrak{p}_{1,i}=\mathfrak{p}_1\cap \mathfrak{h}_i$, then $\mathfrak{p}_{1,i}\perp\mathfrak{p}_{1,j}$ for $i\neq j$;
\item The restriction of $\la\,,\,\ra$ to $\mathfrak{p}_{1,i}\times\mathfrak{p}_{1,i}$ is a positive multiple $\alpha_iB_{\mathfrak{h}_i}$ of the Killing form of $\mathfrak{h}_i$.
\end{enumerate}
Any such inner product is automatically invariant under $\operatorname{Ad}_G(L)$, where $L=K_1(L\cap G_2)$ is the identity component of $\hat L$. Our goal is to choose the constants $\alpha_i$
in such a way that $\la\,,\,\ra$ is $\operatorname{Ad}_{\hat G}(\hat L)$-invariant and so that the resulting Riemannian metric on $\hat G/\hat L$ is Einstein.
Let $\operatorname{Rc}:\mathfrak{q}\times \mathfrak{q}\to \mathbf{R}$ denote the Ricci tensor of $\la\,,\,\ra$, let $\operatorname{Rc}_1$ and $\operatorname{Rc}_2$ denote, respectively, the Ricci tensors of $G_1/K_1$ and $S_2$ with respect to the Riemannian metrics defined by the restrictions of $\la\,,\,\ra$ to $\mathfrak{p}_1\times\mathfrak{p}_1$ and to $\mathfrak{s}_2\times\mathfrak{s}_2$, and let $\operatorname{Ric}_1$ and $\operatorname{Ric}_2$ denote the corresponding Ricci operators. Since $S_2$ is Einstein, we have $\operatorname{Ric}_2=c\,Id_{\mathfrak{s}_2}$ for some negative constant $c$. The metric on $G_1/K_1$ is symmetric and we have
\begin{equation}\label{rc1}\operatorname{Ric}_1=-\left(\frac{1}{\alpha_1}\operatorname{Id}_{\mathfrak{p}_{1,1}}\times\dots\times \frac{1}{\alpha_r}\operatorname{Id}_{\mathfrak{p}_{1,r}}\right).\end{equation}
We now compare the Ricci tensors $\operatorname{Rc}_1$ and $\operatorname{Rc}_2$ to the restrictions of $\operatorname{Rc}$ to $\mathfrak{p}_1$ and $\mathfrak{s}_2$, respectively. For $i=1,2$, we will write $H_i$ and $M_i$ for the expressions in Equation~\ref{ric} for $\operatorname{Rc}_i$. Since $G_{1}$ is semisimple, and $\mathfrak{g}_1\perp\mathfrak{s}_2$, we have
\begin{equation}\label{h} H_1=0 \mbox{ and } H=H_2\in\mathfrak{s}_2.\end{equation}
To compare $M$ and $M_i$, we use a computation carried out by Lauret and Lafuente in \cite{LauretLafuente:StructureOfHomogeneousRicciSolitonsAndTheAlekseevskiiConjecture}, Lemma 4.4. They considered the case of a reductive decomposition $\mathfrak{g}=\mathfrak{l}+\mathfrak{q}$ with $\mathfrak{q}=\mathfrak{h}+\mathfrak{n}$ where $\mathfrak{u}=\mathfrak{l}+\mathfrak{h}$ is a reductive Lie algebra and $\mathfrak{n}=\operatorname{Nilrad}(\mathfrak{g})$. In our notation, $\mathfrak{h}=\mathfrak{p}_1+\mathfrak{a}_2$ and $\mathfrak{n}=\mathfrak{n}_2$. For $X\in\mathfrak{g}$, write
$$\rho(X)=\operatorname{ad}(X)|_{\mathfrak{n}_2}.$$ Using Lauret--Lafuente's computation, along with the fact that $\operatorname{ad}(X)|_{\mathfrak{n}_2}$ is symmetric with respect to $\la\,,\,\ra$ for all $X\in\mathfrak{p}_1$ (see~\ref{lem.com}), we find that
\begin{equation}\label{m2}M_2=M|_{\mathfrak{s}_2\times\mathfrak{s}_2},\end{equation}
\begin{equation}\label{m1}\langle M_1(X),Y\rangle=\langle M(X),Y\rangle+\frac{1}{2}\operatorname{trace}(\rho(X)\rho(Y)) \mbox{ for } X,Y\in\mathfrak{p}_1\end{equation}
and
\begin{equation}\label{m12}\langle M(X),Y\rangle=0 \mbox{ for } X\in\mathfrak{p}_1 \mbox{ and } Y\in\mathfrak{s}_2.\end{equation}
(The third equation uses the fact that $\operatorname{trace}(\rho(X)\rho(Y))=0$ for $X\in\mathfrak{p}_1$ and $Y\in\mathfrak{a}_2$.)
The Killing forms satisfy
\begin{equation}\label{kf}B_ \mathfrak{g}|_{\mathfrak{s}_2\times\mathfrak{s}_2}=B_{\mathfrak{s}_2} \mbox{ and } B_\mathfrak{g}(X,Y)=B_{\mathfrak{g}_{1}}(X,Y)+\operatorname{trace}(\rho(X)\rho(Y)) \mbox{ for } X,Y\in\mathfrak{p}_1.\end{equation}
Equations~\ref{ric}--\ref{kf} yield
\begin{equation}\label{rc2}\operatorname{Rc}_2=\operatorname{Rc}|_{\mathfrak{s}_2\times\mathfrak{s}_2}\end{equation}
and
\begin{equation}\label{L}\operatorname{Rc}_1(X,Y)=\operatorname{Rc}(X,Y)+T(X,Y),\end{equation}
where
\begin{equation}\label{rho}T(X,Y)=\operatorname{trace}(\rho(X)\rho(Y)) \mbox{ for } X,Y\in\mathfrak{g}_{nc}.\end{equation}
Since $T$ is an $\operatorname{Ad}(G_{1})$-invariant bilinear form on $\mathfrak{g}_{nc}$, we have $T(\mathfrak{h}_i,\mathfrak{h}_j)=0$ for $i\neq j$ and, for each $i$, there exists a constant $\beta_i$ such that
\begin{equation}\label{beta}T|_{\mathfrak{h}_i\times\mathfrak{h}_i}=\beta_iB_{\mathfrak{h}_i}=\frac{\beta_i}{\alpha_i}\la\,,\,\ra|_{\mathfrak{p}_{1,i}\times\mathfrak{p}_{1,i}}.\end{equation}
Since $\rho|_{\mathfrak{p}_1}$ is symmetric, we have $\beta_i\geq 0$ with equality if and only if $[\mathfrak{h}_i,\mathfrak{g}_2]=0$.
We now define the constants $\alpha_i$, $i=1,\dots, r$, in condition (iv) by
\begin{equation}\label{alpha}\alpha_i=\frac{-1-\beta_i}{c}\end{equation}
and observe that $\alpha_i>0$ since $c<0$ and $\beta_i\geq 0$. By Equations~\ref{rc1}, \ref{rc2}, \ref{L}, \ref{beta} and \ref{alpha}, we have $\operatorname{Rc}=c\la\,,\,\ra$ on all of $\mathfrak{q}$. Thus we have constructed a $G$-invariant Einstein metric on $G/L$.
It remains only to show that the inner product $\la\,,\,\ra$ on $\mathfrak{q}$ is invariant under $\operatorname{Ad}_{\hat G}(\hat L)|_\mathfrak{q}$. Condition (ii) guarantees that the restriction of $\la\,,\,\ra$ to $\mathfrak{s}_2\times \mathfrak{s}_2$ is $\operatorname{Ad}_{\hat G}(\hat L)$-invariant, so it remains only to check the restriction to $\mathfrak{p}_1$. By~\ref{lem.com},
the Cartan decomposition $\mathfrak{g}_1=\mathfrak{k}_1+\mathfrak{p}_1$ is $\operatorname{Ad}_{\hat G}(\hat L)$-invariant. Thus, for each $\gamma\in \operatorname{Ad}_{\hat G}(\hat L)$, there exists a permutation $\sigma$ of $\{1,2,\dots,r\}$ such that $\gamma(\mathfrak{h}_{i})=\mathfrak{h}_{\sigma(i)}$ and $\gamma(\mathfrak{p}_{1,i})=\mathfrak{p}_{1,\sigma(i)}$ for all $i$. Since the automorphism $\gamma$ preserves the bilinear form $T$ in Equation~\ref{rho} and intertwines the Killing form of $\mathfrak{h}_i$ with that of $\mathfrak{h}_{\sigma(i)}$, Equation~\ref{beta} shows that $\beta_{\sigma(i)}=\beta_i$, and then Equation~\ref{alpha} implies $\alpha_{\sigma(i)}=\alpha_i$ for each $i$. Thus by Condition (iv), $\la\,,\,\ra$ is $\gamma$-invariant.
\end{proof}
\section{Technical lemmas on GIT} \label{sec: technical lemmas on GIT}
In this section, the groups of primary interest are fully reducible subgroups of $GL(V)$, where $V$ is a real or complex vector space. For a point $p\in V$, we are interested in understanding when the orbit $G\cdot p$ is closed in $V$.
\subsection{Preliminaries on fully reducible groups} We first recall the definition of fully reducible.
\begin{defin} A subgroup $G \subset GL(V)$ is called fully reducible if for any $G$-invariant subspace $W$ of $V$, there exists a complementary $G$-invariant subspace $W'$ of $V$.
\end{defin}
Observe that if a subspace $W$ is invariant under $G$, then it is also invariant under the Zariski closure $\overline G$ of $G$. In this way we see that $G$ being fully reducible implies $\overline G$ is also fully reducible. Further, if $G$ is connected and fully reducible, then $G$ may be written as $G=[G,G] Z(G)$, where $[G,G]$ is semisimple, $Z(G)$ is the center of $G$, and the elements of $Z(G)$ are semisimple (i.e.\ diagonalizable). This fact is well-known for algebraic groups and the proof is more or less the same in the case of connected Lie groups.
Let $H$ be an algebraic, fully reducible subgroup of an algebraic, fully reducible subgroup $G$ of $GL(V)$. It is well-known that the normalizer $N_G(H)$ and centralizer $Z_G(H)$ of $H$ in $G$ are also fully reducible and algebraic.
Further, $Z_G(H) \cdot H $ is a finite index subgroup of $N_G(H)$. At the Lie algebra level, this is precisely
$$N_\mathfrak g (\mathfrak h) = Z_\mathfrak g(\mathfrak h) + \mathfrak h.$$
\begin{defin} Let $k=\mathbb R$ or $\mathbb C$ and consider the multiplicative group $k^*$ of all non-zero elements.
A \textit{1-parameter subgroup} of $G$ is a homomorphism $\lambda : k^* \to G$, where we consider $k=\mathbb R$ for real Lie groups $G$ and $k=\mathbb C$ for complex Lie groups $G$. We say $\lambda$ is an algebraic 1-parameter subgroup if the map $\lambda$ is regular (i.e., a morphism of algebraic groups).
\end{defin}
\begin{remark} Let $\lambda$ be a 1-parameter subgroup of $G$. The image $\lambda(k^*)$ is a subgroup of $G$ and we will often abuse notation by denoting this subgroup simply by $\lambda$.
\end{remark}
Our definition of 1-parameter subgroup is somewhat restrictive as it does not include one-parameter subgroups of nilpotent Lie groups; however, this is a standard definition when studying reductive algebraic groups. Note that algebraic 1-parameter subgroups are fully reducible. (This can be worked out by hand, but it is also a special case of a general result of Mostow on regular representations of algebraic, reductive groups \cite{Mostow:FullyReducibleSubgrpsOfAlgGrps}.)
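For instance (a standard fact, recorded here only for concreteness), any integers $a_1,\dots,a_n$ determine an algebraic 1-parameter subgroup of $GL(n,k)$ via
$$\lambda(t)=\operatorname{diag}(t^{a_1},\dots,t^{a_n}),\qquad t\in k^*,$$
and, up to conjugation, every algebraic 1-parameter subgroup of $GL(n,k)$ is of this form; in particular, such subgroups visibly act fully reducibly on $k^n$.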
\subsection{Geometric Invariant Theory}
\begin{thm}[Hilbert-Mumford criterion]\label{thm: HMC - classical} Let $G$ be an algebraic, fully reducible subgroup of $GL(V)$.
Suppose the stabilizer subgroup $G_p$ is finite. Then $G\cdot p$ is closed if and only if $\lambda \cdot p$ is closed for all algebraic 1-parameter subgroups $\lambda$ of $G$.
\end{thm}
The theorem above was proven over $\mathbb C$ by Mumford and extended to real algebraic groups by Birkes \cite{Birkes:OrbitsofLinAlgGrps}. Applying the criterion twice, we have the following immediate consequence on the inheritance of closed orbits.
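To see how the criterion detects a non-closed orbit, consider the following standard example (recorded purely for illustration): let $G=SL(2,\mathbb C)$ act on $V=\mathfrak{sl}(2,\mathbb C)$ by conjugation and let $p=\left(\begin{smallmatrix} 0&1\\0&0\end{smallmatrix}\right)$. For the algebraic 1-parameter subgroup $\lambda(t)=\operatorname{diag}(t,t^{-1})$ we have $\lambda(t)\cdot p=t^2p\to 0$ as $t\to 0$, while $0\notin G\cdot p$ since conjugation preserves rank; thus $\lambda\cdot p$, and hence $G\cdot p$, is not closed.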
\begin{cor}\label{cor: HMC finite stab - inheritance of closed} Let $G$ be a fully reducible, algebraic group such that $G\cdot p$ is closed and $G_p$ is finite. For any fully reducible, algebraic subgroup $G'$ of $G$, we have that $G'\cdot p$ is closed. \end{cor}
\begin{proof}
The orbit $G\cdot p$ being closed implies that $\lambda \cdot p$ is closed for all algebraic 1-parameter subgroups $\lambda$ of $G$. Now consider only those $\lambda$ which are subgroups of $G'$. The stabilizer subgroup $G'_p \subset G_p$ is finite and so, applying the Hilbert-Mumford criterion, we see that $G'\cdot p$ is closed.
\end{proof}
The statement of the corollary is rather strong and, in fact, does not hold without the condition on the stabilizer; see, e.g., \cite[Example 6]{Jablo:GoodRepresentations}. However, the corollary generalizes in a partial and useful way, as we will see.
In the results below, we do not specify whether the ground field is $\mathbb R$ or $\mathbb C$ as knowing the result for one ground field implies that it holds for the other, cf.\ \cite{BHC,RichSlow}. The following are well-known.
\begin{prop}\label{prop: closed orbit of Gp vs G^0p} Let $G$ be a fully reducible, algebraic group. Denote the connected component of the identity (in the Hausdorff topology) by $G_0$. Then $G\cdot p$ is closed if and only if $G_0\cdot p$ is closed.
\end{prop}
We say a group $G$ is \textit{pre-algebraic} if $G$ is the connected component of the identity of an algebraic group. Obviously, if $G$ is pre-algebraic we have $G = (\overline G)_0$ where $\overline G$ is the Zariski closure of $G$. We note that a pre-algebraic group is fully reducible if and only if its Zariski closure is fully reducible; this follows from the fact that a (not necessarily connected) algebraic group is fully reducible if and only if its Lie algebra is so \cite{Mostow:FullyReducibleSubgrpsOfAlgGrps}.
\begin{prop}\label{prop: Gp closed implies stabilizer is reductive} Let $G$ be an algebraic group. The stabilizer subgroup $G_p$ of a point $p$ is an algebraic group. Further, if $G$ is fully reducible and the orbit $G\cdot p$ is closed, then $G_p$ is fully reducible as well.
\end{prop}
\begin{prop}\cite[Corollary 3.1]{Luna:AdherencesDOrbiteEtInvariants}\label{prop: luna on normalizer of a stab grp having closed orbit} Let $G$ be a fully reducible, algebraic group. Let $H$ be a fully reducible, algebraic subgroup of $G$ which stabilizes a point $p$. Then $G\cdot p$ is closed if and only if $N_G(H)\cdot p$ is closed.
\end{prop}
From this result, we have the following useful lemma.
\begin{lemma}\label{lemma: luna on centralizer of a stab grp having closed orbit} Let $G$ be a fully reducible, pre-algebraic group and $H$ a connected, fully reducible (not necessarily algebraic) subgroup of $G$ which stabilizes a point $p$. Then $G\cdot p$ is closed if and only if $Z_G(H)_0\cdot p$ is closed.
\end{lemma}
\begin{proof} We prove the lemma first in the special case that $G$ is algebraic. Let $\overline H$ denote the Zariski closure of $H$. First observe that $Z_G(H) = Z_G(\overline H)$. As $H$ is fully reducible, we have $H = [H,H] \ Z(H)$, where $Z(H)$ is the center of $H$. Using the fact that connected, linear semisimple groups are necessarily pre-algebraic, we have $[H,H]$ is pre-algebraic and so $\overline H = \overline{[H,H]} \ \overline{Z(H)}$, where $(\overline{[H,H]})_0=[H,H]$.
As $H$ is fully reducible we have that $\overline H$ is fully reducible. Additionally, $\overline H$ is algebraic and so
$$N_\mathfrak g(\overline{\mathfrak h}) = Z_\mathfrak g (\overline{\mathfrak h}) + \overline{\mathfrak h},$$
which implies $N_G(\overline H)_0 = Z_G(\overline H)_0 \cdot \overline{H}_0= Z_G(H)_0 \cdot H$.
As $H$ stabilizes $p$ we see that $N_G(\overline H)_0 \cdot p = Z_G(H)_0 \cdot p$. Noting that $Z_G(H)$ is algebraic and applying Propositions \ref{prop: closed orbit of Gp vs G^0p} \& \ref{prop: luna on normalizer of a stab grp having closed orbit}, the lemma follows for $G$ algebraic.
For the general case, consider a pre-algebraic group $G_0$ with Zariski closure $G$. By Proposition \ref{prop: closed orbit of Gp vs G^0p} and the work above, we have $G_0\cdot p$ is closed if and only if $Z_G(H)_0\cdot p$ is closed. However, $Z_G(H)_0 = Z_{G_0}(H)_0$ as they have the same Lie algebra and the lemma is proven.
\end{proof}
\begin{cor} Let $G$ be a fully reducible, pre-algebraic group. The orbit $G \cdot p \subset V$ is closed if and only if the stabilizer $H=(G_p)_0$ is fully reducible and $Z_G(H)_0\cdot p$ is closed.
\end{cor}
The following is a slight generalization of the Hilbert-Mumford criterion.
\begin{lemma} Let $G$ be a fully reducible, (pre-)algebraic group such that $H=(G_p)_0 \subset Z(G)$, the center of $G$.
Then $G\cdot p$ is closed if and only if $\lambda \cdot p$ is closed for all (pre-)algebraic 1-parameter subgroups $\lambda$ of $G$. \end{lemma}
We note that the condition on $H$ makes it a fully reducible subgroup automatically.
\begin{proof} Let $G$ be a pre-algebraic group with Zariski closure $\overline G$. Observe that $Z(G)\subset Z(\overline G)$. This fact together with Proposition \ref{prop: closed orbit of Gp vs G^0p} implies that it suffices to prove the lemma in the case that $G$ and $\lambda$ are algebraic.
We begin by decomposing $G$ as a product $G = IH$, where $I$ is a fully reducible, algebraic subgroup with finite stabilizer. The group $I$ is the connected Lie subgroup of $G$ whose Lie algebra is
$$\operatorname{Lie}(I) = \{ X\in \mathfrak g \ | \ \operatorname{trace}(XY)=0 \mbox{ for all } Y\in \mathfrak h \}.$$
Details for showing $I$ is fully reducible and algebraic are the same as those given in Section 6 of \cite{Jablo:ConceringExistenceOfEinstein}; see the discussion after Proposition 6.6. To see that $G = IH$ is a product of groups, it suffices to show $\mathfrak g = \mathfrak i + \mathfrak h$ is a direct sum of Lie algebras.
By hypothesis, $\mathfrak h$ commutes with all of $\mathfrak g$ and so commutes with $\mathfrak i$; hence, we simply need to show the sum $\mathfrak g = \mathfrak i + \mathfrak h$ is a vector space direct sum. As $H$ and $G$ are pre-algebraic subgroups of some $GL(V)$, there exists some inner product on $V$ under which $H$ and $G$ are self-adjoint \cite{Mostow:SelfAdjointGroups}. Using the resulting inner product on $\mathfrak{gl}(V)$, we see that $\mathfrak i$ is precisely the orthogonal complement of $\mathfrak h$ in $\mathfrak g$, whence we have the direct sum $\mathfrak g = \mathfrak i + \mathfrak h$.
Let $\lambda$ be a (pre-)algebraic 1-parameter subgroup of $G$. Observe that $\lambda = \lambda_1 \lambda_2 $, where $\lambda_1 $ is a (pre-)algebraic 1-parameter subgroup of $I$ and $\lambda_2$ is a (pre-)algebraic 1-parameter subgroup of $H$.
As $H$ stabilizes $p$, we see from Theorem~\ref{thm: HMC - classical} that $G\cdot p = I\cdot p$ is closed if and only if $\lambda \cdot p = \lambda_1 \cdot p$ is closed for all (pre-)algebraic 1-parameter subgroups $\lambda_1 $ of $I$ or, equivalently, for all (pre-)algebraic 1-parameter subgroups $\lambda$ of $G$.
\end{proof}
The next result on the inheritance of closed orbits is one of the main technical results needed in the proof of the Key Lemma (see Section \ref{sec: proof of key lemma}).
\begin{cor}\label{cor:inheritance of closed orbits} Let $G$ be a fully reducible, pre-algebraic group such that $(G_p)_0 \subset Z(G)$.
Consider a fully reducible, pre-algebraic subgroup $G'$ of $G$. If $G\cdot p$ is closed, then so is the orbit $G'\cdot p$.
\end{cor}
One proves this corollary in the same way that one proves Corollary \ref{cor: HMC finite stab - inheritance of closed}. We note that $(G'_p)_0$ being central in $G'$ follows from $(G_p)_0$ being central in $G$.
In the sequel, we make use of the following well-known proposition.
\begin{prop}\label{prop: normal subgroups have closed orbits} Let $G$ be a fully reducible, pre-algebraic subgroup of $GL(V)$ such that $G\cdot p$ is closed for some $p\in V$. If $N$ is a normal, fully reducible, pre-algebraic subgroup of $G$, then $N\cdot p$ is closed.
\end{prop}
\begin{remark}
As this result is known to many working in geometric invariant theory, a reference is hard to find and so we provide a short argument for completeness.
\end{remark}
\begin{proof} By Proposition \ref{prop: closed orbit of Gp vs G^0p}, we may assume that $G$ and $N$ are algebraic.
Since $N$ is a fully reducible, algebraic group acting on the closed variety $G\cdot p$, we know that there exists $g\in G$ such that $N\cdot gp$ is closed. In fact, this is true for `almost all' $g\in G$; this is the main result of \cite{Luna:ClosedOrbitsofReductiveGroups}. However, $N\cdot gp = g (N\cdot p)$, as $N$ is normal. The map $g:V\to V$ being a homeomorphism gives that $N\cdot p$ is closed as well.
\end{proof}
\section{Proof of the Key Lemma~\ref{key}}\label{sec: proof of key lemma}
The goal of this section is to prove the Key Lemma~\ref{key}, which we restate here for convenience:
\begin{lemma}[Key Lemma]\label{k} Let $S$ be a solvable Lie group of Einstein type. Suppose that $S$ can be written as a semi-direct product $S=S_1\ltimes S_2$ satisfying the following hypotheses:
\begin{itemize}
\item $S_1$ isomorphically embeds as an Iwasawa subgroup in a semisimple Lie group $G_1$. In particular, we can write $S_1=A_1N_1$, where $G_1=K_1A_1N_1$ is an Iwasawa decomposition.
\item The adjoint action of $S_1$ on the ideal $\mathfrak{s}_2$ of $\mathfrak{s}$ extends to a representation of $G_1$ on $\mathfrak{s}_2$. We thus view $S$ as a subgroup of $G_1\ltimes S_2$.
\end{itemize}
Then $S_2$ is of Einstein type.
\end{lemma}
\begin{hypoth}\label{first.simp} We may assume that $G_1$ is semisimple of noncompact type. Indeed, in the language of Lemma~\ref{k}, the group $S_1$ lies in the noncompact part $G_{nc}$ of $G_1$, and the hypotheses of the Lemma trivially remain true if we replace $G_1$ by $G_{nc}$.
\end{hypoth}
We will apply Nikolayevsky's technique, described in Subsection~\ref{subsec.nik}, to prove Lemma~\ref{k}.
\subsection{The pre-Einstein derivation of $N_2$.}\label{subsec.pre-einst}
\begin{nota}\label{nota.rho} The representation of $G_1$ on $\mathfrak{s}_2$ in Lemma~\ref{k} leaves the nilradical $\mathfrak{n}_2$ of $\mathfrak{s}_2$ invariant. We will denote by $\rho:\mathfrak{g}_1\to \operatorname{End}(\mathfrak{n}_2)$ the induced representation of the Lie algebra $\mathfrak{g}_1$ on $\mathfrak{n}_2$.
\end{nota}
\begin{lemma}\label{lem.jab} Let $H$ be a semisimple Lie group of noncompact type and let $\rho: H\to GL(V)$ be a finite-dimensional representation. Let $H=KS$ be an Iwasawa decomposition. If an element $T\in End(V)$ commutes with all elements of $\rho(S)$, then it commutes with all of $\rho(H)$.
\end{lemma}
\begin{proof} $H$ acts on $End(V)$ by $(h,T)\mapsto \rho(h)T\rho(h)^{-1}$ for $h\in H$, $T\in End(V)$. If $T$ commutes with $\rho(S)$, then the orbit of $T$ under the action of $H$ is compact. By \cite[Lemma 7.2]{Jablo:StronglySolvable}, every compact orbit of a finite-dimensional representation of a semisimple Lie group of noncompact type consists of a single point. The lemma follows.
\end{proof}
\begin{lemma}\label{lem.ady} Let $H$ be a connected Lie group and $N$ its nilradical. Let $W\in\mathfrak{h}$, and suppose that $\operatorname{ad}(W)|_\mathfrak{n}:\mathfrak{n}\to\mathfrak{n}$ is a non-singular derivation. Then the orbit of $W$ under $\operatorname{Ad}_H(N)$ is given by
$$\operatorname{Ad}_H(N)(W)=\{W+X: X\in\mathfrak{n}\}.$$
\end{lemma}
\begin{proof} We induct on the step size of the nilpotent Lie algebra $\mathfrak{n}$. If $\mathfrak{n}$ is abelian, then we have
$\operatorname{Ad}(\exp(Y))(W)=W+[Y,W]=W-\operatorname{ad}(W)(Y)$. Thus the lemma follows in this case from the non-singularity of $\operatorname{ad}(W)|_\mathfrak{n}$.
For the general case, let $X\in\mathfrak{n}$. We need to find $n\in N$ such that $\operatorname{Ad}_H(n)(W)=W+X$. Write $[N,N]$ for the normal subgroup of $H$ with Lie algebra $[\mathfrak{n},\mathfrak{n}]$, and let $\bar{H}=H/[N,N]$. Let $\pi:H\to \bar{H}$ be the homomorphic projection. Since $\bar{H}$ has abelian nilradical $\bar{N}=N/[N,N]$, there exists $\bar{n}_1\in \bar{N}$ such that $\operatorname{Ad}_{\bar{H}}(\bar{n}_1)(\bar{W})=\bar{W}+\bar{X}$. Choose $n_1\in N$ such that $\pi(n_1)=\bar{n}_1$. We then have
$$V:=\operatorname{Ad}(n_1)(W)\equiv W + X \pmod{[\mathfrak{n},\mathfrak{n}]}.$$
Set $U=(W+X)-V\in[\mathfrak{n},\mathfrak{n}]$. Note that $\operatorname{ad}(V)|_{\mathfrak{n}}$ is a non-singular derivation and thus restricts to a non-singular derivation of $[\mathfrak{n},\mathfrak{n}]$. Let $\mathfrak{s}$ be the subalgebra of $\mathfrak{h}$ given by $\mathfrak{s}=\mathbf{R} V+ [\mathfrak{n},\mathfrak{n}]$ and let $S$ denote the corresponding connected subgroup of $H$.
The Lie algebra $\mathfrak{s}$ has nilradical $[\mathfrak{n},\mathfrak{n}]$. Since the step size of $[\mathfrak{n},\mathfrak{n}]$ is less than that of $\mathfrak{n}$, the inductive hypothesis gives us an element $n_2$ of $[N,N]$ such that $\operatorname{Ad}_S(n_2)(V)=V+U$. Note that $\operatorname{Ad}_H(n_2)(V)=\operatorname{Ad}_S(n_2)(V).$ Let $n=n_2n_1$. We then have
$$\operatorname{Ad}_H(n)(W) =\operatorname{Ad}_H(n_2)(V)=V+U=W+X$$
and the lemma follows.
\end{proof}
\begin{prop}\label{prop.a2} We assume the hypotheses of Lemma~\ref{k} and~\ref{first.simp}. Then, letting $\mathfrak{n}_2=\operatorname{Nilrad}(\mathfrak{s}_2)$, there exists an abelian complement $\mathfrak{a}_2$ of $\mathfrak{n}_2$ in $\mathfrak{s}_2$ such that:
\begin{enumerate}
\item Letting $\mathfrak{a}=\mathfrak{a}_1+\mathfrak{a}_2$ and $\mathfrak{n}=\mathfrak{n}_1+\mathfrak{n}_2$, then $\mathfrak{s}=\mathfrak{a}+\mathfrak{n}$ is a standard decomposition of $\mathfrak{s}$. (See Remark~\ref{rem.maxred} for the definition of standard decomposition.)
\item $\mathfrak{a}_2$ commutes with $\mathfrak{g}_1$.
\item There exists an element $W=W_1+W_2\in \mathfrak{a}$ with $W_i\in \mathfrak{a}_i$, $i=1,2$, such that, in the notation of~\ref{nota.ss}, $\varphi:=\operatorname{ad}(W)|_{\mathfrak{n}}^\mathbf{R}$ is a pre-Einstein derivation of $\mathfrak{n}$ and $\varphi_2:=\operatorname{ad}(W_2)|_{\mathfrak{n}_2}^\mathbf{R}$ is a pre-Einstein derivation of $\mathfrak{n}_2$.
\item $\varphi_2$ is positive-definite.
\end{enumerate}
\end{prop}
\begin{proof}[Proof of Proposition~\ref{prop.a2}]
We view $\mathfrak{s}_1$ as a subalgebra of $\mathfrak{g}_1$. By the second hypothesis, the representation $X\mapsto \operatorname{ad}(X)|_{\mathfrak{s}_2}$ of $\mathfrak{s}_1$ extends to a representation $\rho: \mathfrak{g}_1\to End(\mathfrak{s}_2)$. Thus we may view $\mathfrak{s}$ as a subalgebra of the semi-direct sum $\mathfrak{g}_1\ltimes \mathfrak{s}_2$. Since $\rho(\mathfrak{s}_1)$ is an Iwasawa subalgebra of $\rho(\mathfrak{g}_1)$, we see that $\rho(\mathfrak{a}_1)$ consists of fully reducible elements. Since also $\mathfrak{a}_1$ acts fully reducibly on $\mathfrak{n}_1$, it follows that $\operatorname{ad}_\mathfrak{s}(\mathfrak{a}_1)$ consists of fully reducible elements. Let $\mathfrak{a}'$ be a maximal fully $\operatorname{ad}$-reducible abelian subalgebra of $\mathfrak{s}$ containing $\mathfrak{a}_1$, and let $\mathfrak{n}=\operatorname{Nilrad}(\mathfrak{s})$. By Remark~\ref{rem.maxred}, $\mathfrak{s}=\mathfrak{a}'+\mathfrak{n}$ is a standard decomposition.
Define $\mathfrak{a}'_2 := (\mathfrak n_1 + \mathfrak s_2) \cap \mathfrak a'$. Since $\mathfrak{s}=\mathfrak{a}_1+\mathfrak{n}_1+\mathfrak{s}_2$ (vector space direct sum) and $\mathfrak{a}_1\subset \mathfrak{a}'$, we have
$\mathfrak{a}'=\mathfrak{a}_1+\mathfrak{a}'_2$. From the facts that $\mathfrak a_1$ commutes with $\mathfrak{a}'$, normalizes each of $\mathfrak{n}_1$ and $\mathfrak{s}_2$, and contains an element which is non-singular on $\mathfrak n_1$, we see that $\mathfrak{a}'_2 \subset \mathfrak{s}_2$. Thus $\mathfrak{s}_2=\mathfrak{a}'_2+\mathfrak{n}_2$.
By Propositions~\ref{prop.heb} and \ref{prop.nik}, there exists an element $W'\in\mathfrak{a}'$ such that $(\operatorname{ad}(W')|_{\mathfrak{n}})^\mathbf{R}$ is a pre-Einstein derivation of $\mathfrak{n}$. Write $W'=W_1'+W_2'$ with $W_i'\in\mathfrak{a}'_i$.
Since $[\mathfrak{g}_1,\mathfrak{s}_2] \subset \mathfrak{n}_2$, there exists a subspace $\mathfrak{c}$ of $\mathfrak{s}_2$ complementary to $\mathfrak{n}_2$ such that $[\mathfrak{g}_1,\mathfrak{c}]=0$. Write $W'_2=W_2+X$ with $W_2\in\mathfrak{c}$ and $X\in \mathfrak{n}_2$. Since both $\mathfrak{c}$ and $\mathfrak{a}'_2$ commute with $\mathfrak{a}_1$, so does the element $X$; i.e., $X$ lies in the zero-eigenspace $\mathfrak{q}$ of $\operatorname{ad}(\mathfrak{a}_1)|_{\mathfrak{n}}$. The eigenspace $\mathfrak{q}$ is a subalgebra of $\mathfrak{n}_2$. Since $\mathfrak{a}'_2$ commutes with $\mathfrak{a}_1$, $\operatorname{ad}(\mathfrak{a}'_2)$ leaves $\mathfrak{q}$ invariant. Moreover, since $[W',\mathfrak{n}]=\mathfrak{n}$ and $[W_1',\mathfrak{q}]=0$, we must have $[W_2',\mathfrak{q}]=\mathfrak{q}$. Thus $\operatorname{ad}(W_2')|_{\mathfrak{q}}$ is a non-singular semisimple derivation of the nilpotent Lie algebra $\mathfrak{q}$. By Lemma~\ref{lem.ady} there exists an element $Y\in \mathfrak{q}$ such that $\operatorname{Ad}(\exp(Y))(W_2')=W_2'-X=W_2$. Set $W_1=W_1'$ and $W=W_1+W_2$. Observe that $W=\operatorname{Ad}(\exp(Y))(W')$, so $(\operatorname{ad}(W)|_{\mathfrak{n}})^\mathbf{R}$ is again a pre-Einstein derivation of $\mathfrak{n}$.
Let $\mathfrak{a}_2=\operatorname{Ad}(\exp(Y))(\mathfrak{a}'_2) \subset \mathfrak{s}_2$ and set $\mathfrak{a}=\mathfrak{a}_1+\mathfrak{a}_2 $.
We need to show that $\mathfrak{a}$ and $W$ satisfy conditions (i)--(iv). Noting that $\mathfrak{a}=\operatorname{Ad}(\exp(Y))(\mathfrak{a}')$, we see that $\mathfrak{a}+\mathfrak{n}$ is again a standard decomposition of $\mathfrak{s}$, and thus we have (i).
We next prove (iv). Let $\lambda$ be an eigenvalue of $\operatorname{ad}(W_2)|_{\mathfrak{n}_2}$ and $V_\lambda$ the corresponding eigenspace. Then $V_\lambda$ is $\rho(\mathfrak{g}_1)$-invariant since $W_2$ commutes with $\rho(\mathfrak{g}_1)$. Since $\mathfrak{g}_1$ is semisimple (so $\mathfrak{g}_1=[\mathfrak{g}_1,\mathfrak{g}_1]$ acts by traceless operators in any finite-dimensional representation), we must have
$\operatorname{trace}(\operatorname{ad}(X)|_{V_\lambda})=0$ for all $X\in\mathfrak{g}_1$ and thus
$$\operatorname{trace}(\operatorname{ad}(W)|_{V_\lambda})=\operatorname{trace}(\operatorname{ad}(W_2)|_{V_\lambda})=\lambda\,\dim(V_\lambda).$$ Since $\varphi=\operatorname{ad}(W)|_{\mathfrak{n}}^\mathbf{R}$ is the pre-Einstein derivation of the Einstein nilradical $\mathfrak{n}$, all eigenvalues of $\operatorname{ad}(W)|_{\mathfrak{n}_2}$ have positive real part by Proposition~\ref{prop.heb} and thus we must have $\operatorname{Real}(\lambda)>0$. This proves (iv).
We next prove that $\mathfrak{a}_2$ satisfies (ii). Since $W_2\in\mathfrak{c}$, we have $[W_2,\mathfrak{g}_1]=0$. On the other hand, $[W_2,\mathfrak{n}_2]=\mathfrak{n}_2$ by (iv). Hence $\mathfrak{g}_1+\mathfrak{a}_2$ is the zero-eigenspace of $\operatorname{ad}_{\mathfrak{g}_1\ltimes\mathfrak{s}_2}(W_2)$. All elements of $\operatorname{ad}_{\mathfrak{g}_1\ltimes \mathfrak{s}_2}(\mathfrak{a}_2)$ commute with $\operatorname{ad}_{\mathfrak{g}_1\ltimes\mathfrak{s}_2}(W_2)$ and thus leave $\mathfrak{g}_1+\mathfrak{a}_2$ invariant. Since $[\mathfrak{g}_1,\mathfrak{s}_2]\subset \mathfrak{n}_2$, we thus have
$[\mathfrak{a}_2,\mathfrak{g}_1]\subset (\mathfrak{g}_1+\mathfrak{a}_2)\cap \mathfrak{n}_2=(0)$, proving (ii).
(iii) We are left to prove that $\varphi_2$, as defined in (iii), is the pre-Einstein derivation of $\mathfrak{n}_2$; i.e., that
\begin{equation}\label{preEcond} \operatorname{trace}(\varphi_2 D)=\operatorname{trace}(D)\end{equation} for all $D\in \operatorname{Der}(\mathfrak{n}_2)$.
We use the shorthand notation $\operatorname{Der}$ for $\operatorname{Der}(\mathfrak{n}_2)$. As $\operatorname{Der}$ is the Lie algebra of the algebraic group $\operatorname{Aut}(\mathfrak{n}_2)$, it has a Levi decomposition
$$\operatorname{Der} = \operatorname{Der}_1 + \operatorname{Der}_2,$$
where $\operatorname{Der}_1$ is a maximal semisimple subalgebra and the radical $\operatorname{Der}_2$ splits as a semi-direct sum
$$\operatorname{Der}_2=\operatorname{Der}^{\operatorname{ab}}+\operatorname{Nilrad}(\operatorname{Der})$$
with $\operatorname{Der}^{\operatorname{ab}}$ an abelian subalgebra commuting with $\operatorname{Der}_1$. The subalgebra $\operatorname{Der}_1 + \operatorname{Der}^{\operatorname{ab}}$ is a maximal fully reducible subalgebra of $\operatorname{Der}$ (i.e., maximal among all subalgebras of $\operatorname{Der}$ that act fully reducibly on $\mathfrak{n}_2$), and $\operatorname{Der}^{\operatorname{ab}}$ consists of semisimple derivations of $\mathfrak{n}_2$. The elements of $\operatorname{Nilrad}(\operatorname{Der})$ are nilpotent derivations. By Mostow \cite[Theorem 4.1]{Mostow:FullyReducibleSubgrpsOfAlgGrps}, the maximal fully reducible subalgebra of $\operatorname{Der}$ is unique up to conjugation. Every pre-Einstein derivation $\psi$ of $\mathfrak{n}_2$ lies in the center of a maximal fully reducible subalgebra (see proof of \cite[Theorem 1.1(a)]{Nikolayevsky:EinsteinSolvmanifoldsandPreEinsteinDerivation}) and hence is conjugate to an element of $\operatorname{Der}^{\operatorname{ab}}$.
To prove that $\varphi_2$ is the pre-Einstein derivation, we are required to show that Equation~\ref{preEcond} holds for all $D\in\operatorname{Der}$. However, the next lemma (with $\varphi_2$ playing the role of $\sigma$) shows that it in fact suffices to verify Equation~\ref{preEcond} only for a select subset of derivations $D$. The lemma is motivated by the proof of Theorem 1 in \cite{Nikolayevsky:EinsteinSolvmanifoldsandPreEinsteinDerivation}.
\begin{lemma}\label{lemma:reducing pre-Einstein eqn} Let $\sigma$ be a semisimple derivation of $\mathfrak{n}_2$ with real eigenvalues. Let $\mathfrak r$ be any fixed choice of subalgebra of $\operatorname{Der}_1 + \operatorname{Der}^{ab}$ which contains $\sigma$ and $\operatorname{Der}^{ab}$. If $ \operatorname{trace}(\sigma D)=\operatorname{trace}(D) $ holds for all $D \in \mathfrak r$ (cf.\ Eqn \ref{preEcond}), then $\sigma$ is a pre-Einstein derivation of $\mathfrak{n}_2$.
\end{lemma}
\begin{remark} Although the pre-Einstein derivation is contained in $\operatorname{Der}^{ab}$, we are careful to note that, a priori, our subalgebra $\mathfrak r$ must explicitly contain both $\sigma$ and $\operatorname{Der}^{ab}$.
\end{remark}
\begin{proof} Let $\psi$ denote the unique pre-Einstein derivation contained in $\operatorname{Der}^{ab}$. As such, we know that
$$\operatorname{trace} (\psi D) = \operatorname{trace} (D) \quad \mbox{ for all } D\in \operatorname{Der}.$$
By hypothesis, we have that $\psi, \sigma \in \mathfrak r$ and that $\sigma$ satisfies Eqn \ref{preEcond} for $D\in\mathfrak r$. This yields
$$\operatorname{trace} (\psi - \sigma)^2 = \operatorname{trace} (\psi (\psi - \sigma)) - \operatorname{trace} (\sigma (\psi-\sigma)) = 0.$$
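In more detail (a routine verification): since $\psi-\sigma\in\mathfrak r\subset\operatorname{Der}$, the pre-Einstein property of $\psi$ gives $\operatorname{trace}(\psi(\psi-\sigma))=\operatorname{trace}(\psi-\sigma)$, while the hypothesis on $\sigma$ applied to $D=\psi-\sigma\in\mathfrak r$ gives $\operatorname{trace}(\sigma(\psi-\sigma))=\operatorname{trace}(\psi-\sigma)$; the two terms therefore cancel.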
Now recall that $\psi$ is diagonalizable over $\mathbb R$ as it is a pre-Einstein derivation. Together with the facts that $\sigma$ is diagonalizable over $\mathbb R$ (being semisimple with real eigenvalues, by hypothesis) and that $\psi$ and $\sigma$ commute (indeed $\psi\in\operatorname{Der}^{ab}$ commutes with all of $\operatorname{Der}_1+\operatorname{Der}^{ab}\supset\mathfrak r$), we see that $\psi - \sigma$ is diagonalizable over $\mathbb R$.
Finally, since $\psi-\sigma$ is diagonalizable over $\mathbb R$, the condition $\operatorname{trace} (\psi - \sigma)^2 = 0$ forces $\psi-\sigma=0$, i.e., $\psi = \sigma$. Hence $\sigma$ is a pre-Einstein derivation and so Eqn \ref{preEcond} holds for all derivations.
\end{proof}
To use the lemma above, we carefully select a subalgebra of $\operatorname{Der}_1 + \operatorname{Der}^{ab}$ which satisfies the hypotheses. Given any subalgebra $\mathfrak{b}$ of $\operatorname{Der}(\mathfrak{n}_2)$, denote by $\operatorname{Der}(\mathfrak{n}_2)^\mathfrak{b}$ the subalgebra of all derivations that commute with $\mathfrak{b}$.
\begin{lemma}\label{lem.extend} Let
$\mathfrak{e}=\{D\in \operatorname{Der}(\mathfrak{n})^\mathfrak{a}: D|_{\mathfrak{n}_1}=0\}.$ Then:
\begin{enumerate}
\item $\mathfrak{e}$ is an ideal in $\operatorname{Der}(\mathfrak{n})^\mathfrak{a}$.
\item $\Der(\n_2)^{\mathfrak{a}_2 + \rho(\mathfrak s_1)}=\{D|_{\mathfrak{n}_2}: D\in \mathfrak{e}\}$. (Here we are identifying $\mathfrak{a}_2$ with $\operatorname{ad}(\mathfrak{a}_2)|_{\mathfrak{n}_2}$.)
\item $\Der(\n_2)^{\mathfrak{a}_2 + \rho(\mathfrak s_1)}=\operatorname{Der}(\mathfrak{n}_2)^{\mathfrak{a}_2 + \rho(\mathfrak{g}_{1})}.$
\item $\mathfrak{e}$ acts fully reducibly on $\mathfrak{n}$ and $\Der(\n_2)^{\mathfrak{a}_2 + \rho(\mathfrak s_1)}$ acts fully reducibly on $\mathfrak{n}_2$.
\item Equation~\ref{preEcond} holds for every $D\in \Der(\n_2)^{\mathfrak{a}_2 + \rho(\mathfrak s_1)}$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) Since $\mathfrak{a}_2\subset \mathfrak{a}$ and $\mathfrak{n}_1$ is the zero-eigenspace of $\mathfrak{a}_2$ while $\mathfrak{n}_2=[\mathfrak{a}_2,\mathfrak{n}_2]$, all elements of $\operatorname{Der}(\mathfrak{n})^\mathfrak{a}$ leave each of $\mathfrak{n}_1$ and $\mathfrak{n}_2$ invariant. Thus $\mathfrak{e}$ is an ideal in $\operatorname{Der}(\mathfrak{n})^\mathfrak{a}$.
(ii) is elementary and (iii) follows from Lemma~\ref{lem.jab}.
(iv) By Proposition~\ref{prop.heb}, $\operatorname{Der}(\mathfrak{n})^\mathfrak{a}$ acts fully reducibly on $\mathfrak{n}$ and thus also on $\mathfrak{n}_2$. Since $\mathfrak{e}$ is an ideal in $\operatorname{Der}(\mathfrak{n})^\mathfrak{a}$, it also acts fully reducibly. (See \cite{Mostow:FullyReducibleSubgrpsOfAlgGrps}, p. 208.) The second statement in (iv) follows from (ii).
(v) Let $D\in\Der(\n_2)^{\mathfrak{a}_2 + \rho(\mathfrak s_1)}$. By (ii), $D$ extends to a derivation $\tilde{D}\in \operatorname{Der}(\mathfrak{n})$ satisfying $\tilde{D}|_{\mathfrak{n}_1}=0.$ Since $\varphi$ is the pre-Einstein derivation of $\mathfrak{n}$, we have (writing $\varphi_1=(\operatorname{ad}(W_1)|_{\mathfrak{n}})^\mathbf{R}$)
$$\operatorname{trace}(D) = \operatorname{trace}(\tilde{D}) = \operatorname{trace}(\varphi \tilde{D}) = \operatorname{trace}( \varphi_1 \tilde{D}) + \operatorname{trace}(\varphi_2 D)$$
$$= \operatorname{trace}(\varphi_1|_{\mathfrak n_2}D) + \operatorname{trace}(\varphi_2D).$$
Thus we need only show that $\operatorname{trace}(\varphi_1|_{\mathfrak{n}_2}D) =0$. By (iii), each eigenspace $V$ of $D$ in $\mathfrak{n}_2$ is preserved by $\rho(\mathfrak{g}_{1})$ and we have $\operatorname{trace}(\rho(X)|_V)=0$ for all $X\in\mathfrak{g}_{1}$, since every finite-dimensional representation of a semisimple Lie algebra acts by traceless operators. This holds in particular for $X=W_1$, and hence we have $\operatorname{trace}(\varphi_1|_V)=0$ on each such eigenspace. Thus $\operatorname{trace}(\varphi_1|_{\mathfrak n_2}D) =0$ as was to be shown. This completes the proof of the lemma.
\end{proof}
As $\Der(\n_2)^{\mathfrak{a}_2 + \rho(\mathfrak s_1)}$, $\mathfrak a_2$, and $\rho(\mathfrak g_1)$ all act fully reducibly on $\mathfrak{n}_2$ and commute, the subalgebra $\Der(\n_2)^{\mathfrak{a}_2 + \rho(\mathfrak s_1)} + \mathfrak a_2 + \rho(\mathfrak g_1)$ acts fully reducibly. Thus, there is some maximal fully reducible subalgebra $\operatorname{Der}_1 + \operatorname{Der}^{ab}$ which contains them all. Clearly, $\operatorname{Der}^{ab} \subset \Der(\n_2)^{\mathfrak{a}_2 + \rho(\mathfrak s_1)}$. Since $\varphi_2 \in \Der(\n_2)^{\mathfrak{a}_2 + \rho(\mathfrak s_1)}$, we may apply the lemmas above to see that, in fact, $\varphi_2$ is a pre-Einstein derivation of $\mathfrak{n}_2$. This completes the proof of Proposition \ref{prop.a2}.
\end{proof}
\begin{cor}\label{cor.final} To prove Lemma~\ref{k}, it suffices to show that $N_2$ admits a nilsoliton metric.
\end{cor}
Corollary~\ref{cor.final} follows from Proposition~\ref{prop.a2} and Corollary~\ref{cor.nik}.
We will carry out the proof of the existence of a nilsoliton metric on $N_2$ in the next subsection.
\subsection{Existence of a nilsoliton metric on $N_2$.}\label{subsec.nilsol}
By Proposition~\ref{prop.a2}, we know that $\mathfrak{s}_2$ can be written as $\mathfrak{a}_2+\mathfrak{n}_2$ where $\mathfrak a_2$ is abelian and $[\mathfrak{g}_1,\mathfrak{a}_2]=0$. Moreover, there exists an element $W=W_1+W_2\in \mathfrak{a}$ with $W_i\in \mathfrak{a}_i$, $i=1,2$ such that $\varphi:=\operatorname{ad}(W)|_{\mathfrak{n}}^\mathbf{R}$ is a pre-Einstein derivation of $\mathfrak{n}$ and $\varphi_2:=\operatorname{ad}(W_2)|_{\mathfrak{n}_2}^\mathbf{R}$ is a pre-Einstein derivation of $\mathfrak{n}_2$.
\begin{hypoth}\label{hyp} In addition to the hypotheses of Lemma~\ref{k} and our first simplification~\ref{first.simp}, we claim that it suffices to prove the existence of a nilsoliton metric on $N_2$ under the following additional hypotheses on $S$:
\begin{enumerate}
\item All eigenvalues of $\operatorname{ad}(W_2)|_{\mathfrak{n}_2}$ are real; equivalently, $\varphi_2=\operatorname{ad}(W_2)|_{\mathfrak{n}_2}$.
\item $\mathfrak{a}_2$ is one-dimensional; equivalently, $\mathfrak{a}_2=\mathbf{R} W_2$.
\end{enumerate}
\end{hypoth}
Indeed, let $\mathfrak{s}'_2$ be the semi-direct sum $\mathbf{R} \varphi_2\ltimes\mathfrak{n}_2$. By Remark~\ref{nota.ss} and Proposition~\ref{prop.a2}, $\varphi_2$ commutes with the action $\rho$ of $\mathfrak{g}_1$ on $\mathfrak{n}_2$. In particular, $\mathfrak{s}'=\mathfrak{s}_1\ltimes \mathfrak{s}'_2$ is a well-defined solvable Lie algebra and, by Corollary~\ref{cor.nik},
the corresponding simply-connected Lie group $S'$ is of Einstein type. We can again use Remark \ref{nota.ss} to form the semi-direct product $G_{1}\ltimes S'_2$. Since the nilradical $N_2$ of $S'_2$ coincides with that of $S_2$ and since $S'$ satisfies the hypotheses of Lemma~\ref{k}, the claim follows.
\begin{notarem}\label{note.nik2} (i) In the notation of Proposition~\ref{prop.a2}, we will identify $W_2$ with $\varphi_2$ and thus write $\varphi_2\in \mathfrak{a}_2$. We will similarly identify the pre-Einstein derivation $\varphi$ of $\mathfrak{n}$ with $W=W_1+\varphi_2$. (Note that $W_1=W_1^\mathbf{R}$ since $W_1\in\mathfrak{a}_1$ and $\mathfrak{s}_1=\mathfrak{a}_1+\mathfrak{n}_1$ is an Iwasawa algebra.)
(ii) We use the identifications in~\ref{note.nik}. Let $n=\dim(\mathfrak{n})$ and $n_i=\dim(\mathfrak{n}_i)$. We identify $\mathbf{R}^n$ with $\mathbf{R}^{n_1}\times\mathbf{R}^{n_2}$ and let
$$i:GL(\mathbf{R}^{n_2}) \to GL(\mathbf{R}^n)$$
be the associated embedding. Thus
$$i(\alpha)=\begin{bmatrix}I&0\\0&\alpha\end{bmatrix}$$
and $$i_*(X)=\begin{bmatrix}0&0\\0&X\end{bmatrix}$$
for $\alpha\in GL(n_2,\mathbf{R})$ and $X\in \mathfrak{gl}(n_2,\mathbf{R})$.
In the notation of Proposition~\ref{prop.nik2}, we write $\mu$ for the Lie bracket of $\mathfrak{n}=\mathfrak{n}_1+\mathfrak{n}_2$ and $\mu_i$ for the Lie bracket of $\mathfrak{n}_i$ and view $\mathfrak{n}$, respectively $\mathfrak{n}_i$, as the vector space $\mathbf{R}^n$, respectively $\mathbf{R}^{n_i}$, with bracket $\mu$, respectively $\mu_i$. Under the identification of $\mathbf{R}^n$ with $\mathbf{R}^{n_1}\times \mathbf{R}^{n_2}$, we may write
\begin{equation}\label{eq.mu}\mu = \mu_1 + \mu_{12} + \mu_2,\end{equation}
where $\mu_{12}$ denotes the adjoint action of $\mathfrak n_1$ on the ideal $\mathfrak n_2$. Note that
$$\mu_{12}(X,Y)=\rho(X)Y \quad \mbox{for all } X\in \mathfrak{n}_1,\ Y\in\mathfrak{n}_2,$$
where $\rho:\mathfrak{g}_1\to \operatorname{Der}(\mathfrak{s}_2)$ is the differential of the representation of $G_1$ in Lemma~\ref{k}.
\end{notarem}
Let $G_\varphi<SL(n,\mathbf{R})$ and $G_{\varphi_2}<SL(n_2,\mathbf{R})$ be the pre-algebraic groups defined in Proposition~\ref{prop.nik2}. To prove Lemma~\ref{k}, we need to show that $ G_{\varphi_2}\cdot\mu_2$ is closed. We will use Equation~\ref{eq.mu} and the fact that $G_\varphi\cdot\mu$ is closed.
\begin{lemma}\label{lem.rho}\text{}
\begin{enumerate}
\item $\rho(\mathfrak{g}_1)<\mathfrak{g}_{\varphi_2}$.
\item Let $\mathfrak{c}_\rho=Z_{\mathfrak{g}_{\varphi_2}}(\rho(\mathfrak{g}_1))$ be the centralizer of $\rho(\mathfrak{g}_1)$ in $\mathfrak{g}_{\varphi_2}$ and let $C_\rho$ be the corresponding connected subgroup of $G_{\varphi_2}$. Then $i(C_\rho)<G_{\varphi}.$
\item For $\alpha\in C_\rho$, we have
$$i(\alpha)\cdot\mu=\mu_1+\mu_{12}+\alpha\cdot\mu_2.$$
\end{enumerate}
\end{lemma}
\begin{proof} (i) is immediate since $\rho(\mathfrak{g}_1)$ consists of derivations of trace zero that commute with $\varphi_2$, as noted immediately following the statement of~\ref{hyp}. For (ii), let $X\in\mathfrak{c}_\rho$. Recalling that $\varphi=W_1+\varphi_2$ with $W_1\in\mathfrak{a}_1$, we see that
$$[\varphi,i_*X]_{\mathfrak{gl}(n,\mathbf{R})}=i_*([\rho(W_1),X]_{\mathfrak{gl}(n_2,\mathbf{R})}+ [\varphi_2,X]_{\mathfrak{gl}(n_2,\mathbf{R})})=0.$$
(Here $[\cdot\,\cdot]_{\mathfrak{gl}(m,\mathbf{R})}$ denotes the Lie bracket of $\mathfrak{gl}(m,\mathbf{R})$.) Thus $i_*(X)\in \mathfrak z(\varphi)$. Moreover $$\operatorname{trace}(\varphi \,i_*(X))=\operatorname{trace}(\varphi_2 X)=0.$$
(The first equality follows from the fact that each element of $\rho(\mathfrak{g}_1)$, in particular $\rho(W_1)$, restricts to a traceless representation on each eigenspace of $\varphi_2$, and the second equality is immediate from the definition of $\mathfrak{g}_{\varphi_2}$ in Proposition~\ref{prop.nik2}.) Hence $i_*X\in \mathfrak{g}_\varphi$. Finally, (iii) follows from Equation~\ref{eq.mu} and the fact that $\alpha$ commutes with $\rho(\mathfrak{g}_1)$.
\end{proof}
As noted in~\ref{note.nik}(i), the stabilizer of $\mu_2$ in $G_{\varphi_2}$ has Lie algebra $\operatorname{Der}(\mathfrak{n}_2)\cap \mathfrak{sl}(n_2,\mathbf{R})$. Let $H$ be any connected fully reducible subgroup of the stabilizer and let $C:=Z_{G_{\varphi_2}}(H)_0$. By Lemma~\ref{lemma: luna on centralizer of a stab grp having closed orbit}, $G_{\varphi_2}\cdot\mu_2$ is closed if and only if $C\cdot\mu_2$ is closed. If, moreover, we choose $H$ so that its Lie algebra contains $\rho(\mathfrak{g}_1)$, then by Lemma~\ref{lem.rho}, the latter condition is equivalent to $i(C)\cdot\mu$ being closed. In the following corollary, we make a choice of $H$.
\begin{cor}\label{cor.plan} Let $\mathfrak{d}=\Der(\n_2)^{\mathfrak{a}_2 + \rho(\mathfrak s_1)}\cap\mathfrak{sl}(n_2,\mathbf{R})$ and let $\mathfrak{h}=\rho(\mathfrak{g}_1)+\mathfrak{d}<\operatorname{Der}(\mathfrak{n}_2)$. Let $\mathfrak{c}=Z_{\mathfrak{g}_{\varphi_2}}(\mathfrak{h})$, and let $C$ be the corresponding connected subgroup of $G_{\varphi_2}$. Then the following are equivalent:
\begin{itemize}
\item $N_2$ admits a nilsoliton metric;
\item the orbit $C\cdot\mu_2$ is closed;
\item the orbit $i(C)\cdot\mu$ is closed.
\end{itemize}
\end{cor}
\begin{proof} By Lemma~\ref{lem.extend}, $\mathfrak{d}$ is fully reducible and it commutes with the fully reducible algebra $\rho(\mathfrak g_1)$; thus $\mathfrak{h}$ is fully reducible. Hence the equivalence of the first two conditions follows from Lemma~\ref{lemma: luna on centralizer of a stab grp having closed orbit} and Proposition~\ref{prop.nik2}. The equivalence of the second and third conditions follows from Lemma~\ref{lem.rho}(iii) since $C<C_\rho$.
\end{proof}
To complete the proof of the Key Lemma, we need to show that $i(C)\cdot\mu$ is closed. We exploit the notion of `inheritance of closed orbits', see Section \ref{sec: technical lemmas on GIT}.
\begin{lemma}\label{thm:refinement of Niko} Let $\mathfrak{g}_\varphi^{\mathfrak a}$ be the subalgebra of $\mathfrak{g}_\varphi$ consisting of all elements that commute with $\operatorname{ad}_\mathfrak{s}(\mathfrak{a})|_\mathfrak{n}$, and let $G_\varphi ^\mathfrak a$ be the corresponding connected subgroup of $G_\varphi$. Then $G_\varphi ^\mathfrak a$ is a pre-algebraic fully reducible subgroup of $G_\varphi$, and $G_\varphi ^\mathfrak a \cdot \mu$ is closed.
\end{lemma}
\begin{proof} By~\ref{note.nik}, $G_\varphi$ is fully reducible and pre-algebraic. By Definition~\ref{def.pre-Einst} and Proposition~\ref{prop.nik2}, $\operatorname{ad}(\mathfrak{a})\cap\ker(t)=\operatorname{ad}(\mathfrak{a})\cap \mathfrak{sl}(n,\mathbf{R})$ and hence there exists a codimension one subspace $\mathfrak{a}_0$ of $\mathfrak{a}$ such that $\operatorname{ad}(\mathfrak{a}_0)\subset \mathfrak{g}_\varphi$ and $\mathfrak{a}=\mathfrak{a}_0+\mathbf{R} \varphi$. Since all elements of $G_\varphi$ commute with $\varphi$, we have $G_\varphi^{\mathfrak a}=Z_{G_{\varphi}}(\operatorname{ad}(\mathfrak{a}_0))$. The group $\operatorname{Ad}(A_0)$, where $A_0:=\exp(\mathfrak{a}_0)$, is an abelian group of real semisimple transformations and hence is fully reducible. Hence $G_\varphi ^\mathfrak a$ is also pre-algebraic and fully reducible. By Luna's result Lemma~\ref{lemma: luna on centralizer of a stab grp having closed orbit}, the orbit $G_\varphi ^\mathfrak a \cdot \mu$ is closed.
\end{proof}
\begin{lemma}\label{prop.h} Let $F=\{X\in G_\varphi^\mathfrak a: X|_{\mathfrak{n}_1}=Id\}_0$. Then
\begin{enumerate}
\item $F$ is a fully reducible, pre-algebraic, normal subgroup of $G_\varphi ^\mathfrak a $.
\item $F\cdot \mu$ is closed.
\item The Lie algebra of the stabilizer $F_\mu$ coincides with $\mathfrak{e}\cap \mathfrak{sl}(n,\mathbf{R})$ where $\mathfrak{e}$ is the subalgebra of $\operatorname{Der}(\mathfrak{n})$ defined in Lemma~\ref{lem.extend}.
\item Write $E=(F_\mu)_0$; then $(Z_F(E))_0\cdot \mu$ is closed.
\item $i(C)<(Z_F(E))_0$.
\end{enumerate}
\end{lemma}
\begin{proof} Since $G_\varphi^{\mathfrak a}$ commutes with $\varphi_2$, it normalizes each of $\mathfrak{n}_1$ and $\mathfrak{n}_2$. Statement (i) is thus immediate. Statement (ii) follows from the closedness of the orbit $G_\varphi ^\mathfrak{a} \cdot \mu $ together with Proposition \ref{prop: normal subgroups have closed orbits}, and Statement (iii) follows from Lemma~\ref{lem.extend}(i). (iv) is a consequence of Lemma~\ref{lemma: luna on centralizer of a stab grp having closed orbit}. Finally, (v) follows from the definition of $C$ in Corollary~\ref{cor.plan}; in fact $i(C)=(Z_F(E))_0\cap i(C_\rho)$.
\end{proof}
To complete the proof of the Key Lemma, note that $i(C)$ is a fully reducible, pre-algebraic group. We apply Corollary~\ref{cor:inheritance of closed orbits} with $(Z_F(E))_0$ playing the role of $G$ and $i(C)$ playing the role of $G'$ to conclude that $i(C)\cdot\mu$ is closed. The Key Lemma now follows from Corollary~\ref{cor.plan}.
\section{Extensions of soliton metrics}\label{exts}
A long-standing and important question is to understand when a solvable or nilpotent Lie group admits an Einstein or Ricci soliton metric. The first characterization of the existence of Ricci soliton metrics on nilpotent Lie groups was due to Lauret \cite{LauretNilsoliton}.
\begin{thm}[Lauret] A nilpotent Lie group $N$ admits a Ricci soliton metric if there exists an abelian group $A$ acting reducibly on $\mathfrak n$ such that $A\ltimes N$ admits an Einstein metric.
\end{thm}
This was later extended to solvable Lie groups for which the Ricci soliton is a so-called `algebraic Ricci soliton' \cite{Lauret:SolSolitons} and in \cite{Jablo:HomogeneousRicciSolitons} it was shown that all Ricci solitons on solvable Lie groups are algebraic. Thus we have the following.
\begin{thm}[Lauret, Jablonski]\label{lj} A solvable Lie group $S$ admits a Ricci soliton metric if there exists an abelian group $A$ acting reducibly on $\mathfrak s$ such that $A\ltimes S$ admits an Einstein metric.
\end{thm}
Upon inspection of the structure of these groups, one can drop the a priori condition that $A$ act reducibly, as it can be deduced.
All known examples of non-compact, homogeneous Einstein and Ricci soliton spaces are isometric to solvable Lie groups with left-invariant Riemannian metrics. Naturally, one asks if results analogous to Theorem~\ref{lj} are possible if $A$ is replaced with some non-abelian solvable Lie group. The Key Lemma above (Lemma \ref{key}) is one such extension where a nilpotent Lie group is extended by a solvable group with some conditions on the action of the extension.
\begin{question} Consider an extension $S_1 \ltimes N_2$ as in the Key Lemma. Can we drop the hypothesis that the adjoint representation of $S_1$ on $\mathfrak n_2$ extend to a representation of the full semisimple group to which $S_1$ belongs?
\end{question}
\begin{prop} There exists a solvable group $S_1$ and a nilpotent group $N_2$ such that $S = S_1\ltimes N_2$ admits an Einstein metric, but $N_2$ does not admit a Ricci soliton metric.
\end{prop}
To build such an example, we begin by defining the bracket relations for a nilpotent Lie algebra which will be the nilradical $\mathfrak n$ of $\mathfrak s$. Let $\mathfrak n = span\{ e_0, \dots , e_8,z_1,z_2 \}$ as a vector space. We define a 2-step nilpotent Lie bracket structure on $\mathfrak n$ according to the following relations.
$$\begin{array}{ll}
\left[e_1,e_2\right] = \sqrt 8 \ z_1 \hspace{1cm} { } & \left[e_0,e_1\right] = \sqrt 8 \ z_2 \\
\left[e_3,e_4\right] = \sqrt{12} \ z_1 & \\
\left[e_5,e_6\right] = \sqrt 3 \ z_1 & \left[e_5,e_8\right] = 3 \ z_2 \\
\left[e_7,e_8\right] = \sqrt 3 \ z_1 & \left[e_6,e_7\right] = 3 \ z_2
\end{array}$$
Using skew-symmetry and bilinearity, these relations completely determine the Lie bracket. Notice that the center of $\mathfrak n$ is spanned by $\{z_1,z_2\}$: each $e_i$, $0\le i\le 8$, appears in at least one of the relations above, so no $e_i$ is central.
We now extend the Lie bracket on $\mathfrak n$ to a solvable Lie algebra of dimension one greater. Let $\mathfrak s = \mathfrak a \ltimes \mathfrak n$ where $\mathfrak a$ is 1-dimensional and spanned by an element $A$ acting on $\mathfrak n$ by
$$\operatorname{ad} A|_{\mathfrak n} = \operatorname{diag}\{21,17,21,19,19,19,19,19,19,38,38\},$$
where the diagonal matrix is relative to the basis $\{ e_0, \dots , e_8,z_1,z_2 \}$ of $\mathfrak n$.
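As a consistency check, observe that $\operatorname{ad} A$ is indeed a derivation of $\mathfrak n$: for every defining relation $[e_i,e_j]=c\,z_k$ above, the eigenvalues of the two factors sum to the eigenvalue $38$ of the center, namely
$$[e_1,e_2]:\ 17+21=38, \qquad [e_0,e_1]:\ 21+17=38, \qquad [e_3,e_4]:\ 19+19=38,$$
and likewise $19+19=38$ for the four remaining relations among $e_5,\dots,e_8$.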
If we choose $\{A,e_0,\dots,e_8,z_1,z_2\}$ to be an orthonormal basis of $\mathfrak s$, then the Lie group $S$ with Lie algebra $\mathfrak s$ has a left-invariant Einstein metric.
Now observe that $\mathfrak s_1 = span\{A,e_0\}$ is isomorphic to the Iwasawa algebra of the semisimple Lie algebra $\mathfrak{sl}(2,\mathbb R)$. Consider $\mathfrak n_2 = span\{e_1,\dots,e_8,z_1,z_2\}$ and observe that $\mathfrak n_2$ is an ideal. In this way, we have decomposed the group $S$ into a semi-direct product
$$S= S_1\ltimes N_2.$$
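(A one-line verification of the claim about $\mathfrak s_1$: from the diagonal matrix above, $[A,e_0]=21\,e_0$, so $\mathfrak s_1$ is the non-abelian two-dimensional Lie algebra, which is exactly the Iwasawa algebra of $\mathfrak{sl}(2,\mathbb R)$.)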
All that remains is to prove $N_2$ does not admit a left-invariant Ricci soliton. By Proposition 4.4 of \cite{Jablo:ModuliOfEinsteinAndNoneinstein}, we see that $N_2$ does not admit such a metric. (In the notation of that work, we have $k=2$ and $n=1$.)
\section{Introduction}
If the fundamental group of a compact topological 4-manifold $X$ satisfies the Null Disc Lemma ($NDL$) of M.~H.~Freedman \cite{FQ}, then all topological $s$-cobordisms on $X$ are products $X \times I$, and $X$ possesses an exact surgery sequence \cite{FreedmanTeichner, KQ}. In this paper, we extend the existence of the $s$-cobordism surgery sequence to finitary connected sums $X$ of orientable $NDL$ pieces $X_i$. In particular, we obtain exactness at the topological $s$-cobordism structure set $\mathcal{S}^s_\mathrm{TOP}(X)$ and calculate it to be trivial under certain conditions on the fundamental groups of aspherical $X_i$. In addition, we extend the existence of the $s$-cobordism surgery sequence to aspherical 4-manifolds with fundamental group satisfying the Farrell--Jones Conjecture.
\subsection{The topological $s$-cobordism surgery sequence}
\begin{defn}[Freedman]
A discrete group $G$ is \textbf{$NDL$ (or good)} if the $\pi_1$-Null Disc Lemma holds for it (see \cite{FreedmanTeichner} for details). The class $NDL$ is closed under the operations of forming subgroups, extensions, and filtered colimits.
\end{defn}
This class contains all groups of subexponential growth, as well as some groups of exponential growth \cite{FQ, FreedmanTeichner, KQ}.
\begin{thm}[Freedman--Quinn, Freedman--Teichner, Krushkal--Quinn]
The class $NDL$ contains all virtually polycyclic groups and all groups of subexponential growth.
\end{thm}
\begin{exm}
Here are some exotic examples in $NDL$. The semidirect product $\mathbb{Z}^2 \rtimes_\alpha \mathbb{Z}$ with $\alpha=\left(\begin{smallmatrix}2 & 1 \\ 1 & 1\end{smallmatrix}\right)$ is polycyclic but has exponential growth. For all integers $n \neq -1, 0, 1$, the Baumslag--Solitar group $BS(1,n) = \mathbb{Z}[1/n] \rtimes_n \mathbb{Z}$ is finitely presented and solvable but not polycyclic. Grigorchuk's infinite 2-group $G$ is finitely generated but not finitely presented and has intermediate growth.
\end{exm}
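For instance, the exponential growth of $\mathbb{Z}^2 \rtimes_\alpha \mathbb{Z}$ may be verified directly: the characteristic polynomial of $\alpha$ is $\lambda^2 - 3\lambda + 1$, whose roots are
$$\lambda_\pm = \frac{3 \pm \sqrt{5}}{2}, \qquad \lambda_+ \approx 2.618 > 1,$$
and a semidirect product $\mathbb{Z}^n \rtimes_A \mathbb{Z}$ has exponential growth whenever the matrix $A$ has an eigenvalue of modulus greater than one.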
Recall that, unless specified in the notation, the structure sets $\mathcal{S}_\mathrm{TOP}^h$ and normal invariants $\mathcal{N}_\mathrm{TOP}$ are homeomorphisms on the boundary (that is, rel $\partial$) \cite[\S6.2]{HillmanBook}.
\begin{defn}
Let $Z$ be a non-empty compact connected topological 4-manifold. Denote the fundamental group $\pi := \pi_1(Z)$ and orientation character $\omega := w_1(\tau_Z)$. We declare that \textbf{$Z$ has class $SES^h$} if there exists an exact sequence of based sets:
\[
\mathcal{N}_\mathrm{TOP}(Z \times I) \xrightarrow{~\sigma_5^h~} L_5^h(\pi,\omega) \xrightarrow{~\partial~} \mathcal{S}_\mathrm{TOP}^h(Z) \xrightarrow{~\eta~} \mathcal{N}_\mathrm{TOP}(Z) \xrightarrow{~\sigma_4^h~} L_4^h(\pi,\omega).
\]
\end{defn}
The subclass $SES^h_+$ additionally requires compatible actions of certain groups in $K$- and $L$-theory (Defn.~\ref{defn:SESplus}).
This exact sequence has been proven for the above groups \cite[Thm.~11.3A]{FQ}.
\begin{thm}[Freedman--Quinn]\label{thm:FTKQ}
Let $X$ be a compact connected topological manifold of dimension 4. If $\pi_1(X)$ has class $NDL$, then $X$ has class $SES^h_+$.
\end{thm}
Here is our main theorem, extending their hard result by a soft technique. From a controlled-topology point of view, one may interpret the result as gaining control for surgery over a certain finite tree of groups whose edge groups are trivial.
\begin{thm}\label{thm:main}
Let $X$ be a compact connected topological manifold of dimension 4.
\begin{enumerate}
\item
Suppose the fundamental group $\pi_1(X)$ is a free product of groups of class $NDL$. If $X$ is non-orientable, assume $\pi_1(X)$ is 2-torsionfree. Then there exists $r \geq 0$ such that the $r$-th stabilization $X \# r(S^2 \times S^2)$ has class $SES^h_+$.
\item
Suppose $X$ has the homotopy type of a connected sum $X_1 \# \cdots \# X_n$ such that each $X_i$ has class $SES^h_+$. If $X$ is non-orientable, assume that $\pi_1(X)$ is 2-torsionfree. Then the homotopy connected sum $X$ has class $SES^h_+$. Moreover, the following induced function is a bijection:
\[
\#: \prod_{i=1}^n \mathcal{S}_\mathrm{TOP}^h(X_i) \longrightarrow \mathcal{S}_\mathrm{TOP}^h(X).
\]
\end{enumerate}
\end{thm}
Indeed, it turns out that a limited form of surgery does work for free groups.
\begin{exm}
Suppose $X$ is a closed connected topological 4-manifold with free fundamental group: $\pi_1(X)=F_n$. Then a fixed stabilization $X \# r(S^2 \times S^2)$ has a topological $s$-cobordism surgery sequence, for some $r \geq 0$ depending on $X$.
\end{exm}
The proof of our theorem consists of two steps: first, homology split along each essential 3-sphere \cite{Weinberger_fibering}; then, perform a neck exchange trick \cite{FQ} to replace homology 3-spheres with genuine ones (cf.~\cite{KLT2, JK_RP4}). The first step is possible because the high-dimensional splitting obstruction group \cite{Cappell_free} has recently been shown to vanish \cite{CD2}. No direct surgeries are performed; only cobordisms are attached. Our techniques do not show triviality of $s$-cobordisms.
\begin{rem}
The stable surgery sequence of S.~Cappell and J.~Shaneson \cite{CS1} holds for all closed connected topological 4-manifolds $X$. However, the amount of $S^2 \times S^2$ stabilization used is not fixed in the stable structure set $\ol{\mathcal{S}}_\mathrm{TOP}(X)$ \cite{KT}.
\end{rem}
\begin{rem}
The analogous decomposition, $\#$, holds in certain high dimensions, by recent work of J.~Davis and F.~Connolly \cite{CD2}. Namely, suppose $m > 4$ and $m \equiv 0, 3 \pmod{4}$. If each non-empty compact connected $m$-dimensional manifold $X_i$ is oriented (resp. non-orientable and $\pi_1(X_i)$ is 2-torsionfree), then $\#$ is a bijection.
\end{rem}
In fact, these two remarks can be combined for the stable version of Theorem~\ref{thm:main}. Note the hypothesis below has no $NDL$ restriction on the fundamental groups $\Gamma_i$.
\begin{thm}\label{thm:main_stable}
Let $X$ be a compact connected orientable topological manifold of dimension 4.
Suppose the fundamental group $\pi_1(X)$ is a free product of groups $\Gamma_1, \ldots, \Gamma_n$. Then there exist compact connected topological 4-manifolds $X_1, \ldots, X_n$ with each fundamental group $\pi_1(X_i)$ isomorphic to $\Gamma_i$ such that there is a bijection:
\[
\#: \prod_{i=1}^n \ol{\mathcal{S}}_\mathrm{TOP}^h(X_i) \longrightarrow \ol{\mathcal{S}}_\mathrm{TOP}^h(X).
\]
Moreover, these $X_i$ are unique up to $(S^2 \times S^2)$-stabilization and re-ordering.
\end{thm}
Here are other caveats, which place our main theorem into historical context.
\begin{rem}
A homotopy decomposition into a connected sum need not exist. A counterexample to the homotopy Kneser conjecture with $\pi_1(X) = G_3 * G_5$ where $G_p := C_p \times C_p$ was constructed by M.~Kreck, W.~L\"uck, and P.~Teichner \cite{KLT2}.
\end{rem}
\begin{rem}
Given a homotopy decomposition into a connected sum, a homeomorphism decomposition need not exist. There exist infinitely many examples of non-orientable closed topological 4-manifolds homotopy equivalent to a connected sum ($X = \mathbb{RP}^4 \# \mathbb{RP}^4$) that are not homeomorphic to a non-trivial connected sum \cite{JK_RP4, BDK}. Hence $\#$ is not always a bijection in the case $\pi_1(X) = D_\infty \in NDL$.
\end{rem}
\begin{rem}
For certain groups $\pi_1(X)$ unknown in $NDL$, such as poly-surface groups, results on exactness at $\mathcal{N}_\mathrm{TOP}(X)$ are found in \cite{HillmanBook, Khan_smoothable, HegRep, CavSpa}.
\end{rem}
\begin{rem}
The modular group $PSL(2,\mathbb{Z}) \cong C_2 * C_3$ is a tantalizing example of a free product of $NDL$ groups. It has a discrete cofinite-area action on $\mathbb{H}^2$. Our theorem in the non-orientable case includes neither it nor $SL(2,\mathbb{Z}) \cong C_4 *_{C_2} C_6$. The group $PSL(2,\mathbb{Z})$ plays a key role in the orientable case of free products \cite{CD2}.
\end{rem}
Let us conclude this subsection with an application to fibering of 5-manifolds. Partial results were obtained in \cite{Weinberger_fibering, Khan_fibering}. The proof is located in Section~\ref{sec:main_proofs}.
\begin{thm}\label{thm:fibering}
Let $M$ be a closed topological 5-manifold. Let $X$ be a closed topological 4-manifold of class $SES^h_+$. Suppose $f: M \to S^1$ is a continuous map such that the induced infinite cyclic cover $\overline{M} = \mathrm{hofiber}(f)$ is homotopy equivalent to $X$. If the Farrell--Siebenmann fibering obstruction $\tau(f) \in \mathrm{Wh}_1(\pi_1 M)$ vanishes, then $f$ is homotopic to a topological $s$-block bundle projection with pseudofiber $X$.
\end{thm}
\noindent If $X$ satisfies the $s$-cobordism conjecture, then we obtain a fiber bundle projection.
\begin{quest}
Do similar results hold for the non-orientable 4-manifold $X = \mathbb{RP}^4 \# \mathbb{RP}^4 \# \mathbb{RP}^4$? If $X$ had class $SES^h_+$ then $\mathcal{S}_\mathrm{TOP}^h(X)$ would be countably infinite.
\end{quest}
\subsection{Topological $s$-rigidity for 4-dimensional manifolds}
\begin{defn}
A compact topological manifold $Z$ is \textbf{topologically rigid} if, for all compact topological manifolds $M$, any homotopy equivalence $h: M \to Z$, with restriction $\partial h: \partial M \to \partial Z$ a homeomorphism, is homotopic to a homeomorphism.
\end{defn}
Recall the Borel conjecture is proven for certain good groups \cite[Thm.~11.5]{FQ}.
\begin{thm}[Freedman--Quinn]
Suppose $Z$ is an aspherical compact topological 4-manifold such that $\pi_1(Z)$ is virtually polycyclic. Then $Z$ is topologically rigid.
\end{thm}
The following crystallographic examples include the 4-torus $T^4$. It turns out that there are only finitely many examples in any dimension (e.g., see~\cite[Thm.~21]{Farkas}).
\begin{exm}
Suppose $\Gamma$ is a Bieberbach group of rank 4, that is, a torsionfree lattice in the Lie group $\mathrm{Isom}(\mathbb{E}^4)$. Then $Z = \mathbb{R}^4/\Gamma$ is topologically rigid (cf.~\cite{FH}).
\end{exm}
Let us now turn our attention to a weaker form of rigidity for general groups.
\begin{defn}
A compact topological manifold $Z$ is \textbf{topologically $s$-rigid} if, for all compact topological manifolds $M$, any homotopy equivalence $h: M \to Z$, with restriction $\partial h: \partial M \to \partial Z$ a homeomorphism, is itself topologically $s$-bordant rel $\partial M$ to a homeomorphism. It suffices that the Whitehead group $\mathrm{Wh}_1(\pi_1 Z)$ vanishes and the topological $s$-cobordism structure set $\mathcal{S}_\mathrm{TOP}^s(Z)$ is a singleton.
\end{defn}
The following important basic observation does not seem to have appeared in the literature.\footnote{Other authors use these hypotheses when $Z$ is oriented and aspherical (cf.~\cite[Defn.~1.2]{HKT_BS}).} In particular, we do not assume that the fundamental group is $NDL$.
\begin{thm}\label{thm:rigidity}
Let $Z$ be a compact topological 4-manifold with fundamental group $\pi$ and orientation character $\omega: \pi \to \{\pm 1\}$. Suppose the surgery obstruction map $\sigma_4^s: \mathcal{N}_\mathrm{TOP}(Z) \to L_4^s(\pi,\omega)$ is injective and $\sigma_5^s: \mathcal{N}_\mathrm{TOP}(Z \times I) \to L_5^s(\pi,\omega)$ is surjective. If $\mathrm{Wh}_1(\pi)=0$ then $Z$ is topologically $s$-rigid. Also $Z$ has class $SES^s_+$.
\end{thm}
We sharpen an observation of J.~Hillman \cite[Lem.~6.10]{HillmanBook} to include map data.
\begin{cor}\label{cor:unstable_rigidity}
Let $Z$ be a compact topological 4-manifold. Suppose the product $Z \times S^1$ is topologically rigid. If $\mathrm{Wh}_1(\pi_1 Z)=0$ then $Z$ is topologically $s$-rigid.
\end{cor}
This allows us to generalize a theorem of J.~Hillman for surface bundles over surfaces \cite[Thm.~6.15]{HillmanBook}. His conclusion was that the source and target are abstractly $s$-cobordant. Our new feature is $s$-rigidity of the homotopy equivalence.
\begin{exm}
Suppose $Z$ is a compact topological 4-manifold that is the total space of a topological fiber bundle of aspherical surfaces over an aspherical surface. Then $Z$ is topologically $s$-rigid, as follows. By \cite[Theorem~6.2]{HillmanBook}, the group $\mathrm{Wh}_1(\pi_1 Z)$ vanishes. By the proof of \cite[Theorem~6.15]{HillmanBook}, the set $\mathcal{S}_\mathrm{TOP}^s(Z \times S^1)$ is a singleton. Now apply Corollary~\ref{cor:unstable_rigidity}. Alternatively, we can use Corollary~\ref{cor:algebraic_rigidity} and the recently established validity of $FJ_L$ for polysurface groups \cite{BFL_lattice}.
\end{exm}
In the topology of high-dimensional manifolds, the following class of fundamental groups has been of intense interdisciplinary interest for at least the past two decades.
\begin{defn}
Denote $FJ_L$ as the class of groups $\Gamma$ that are $K$-flat and satisfy the Farrell--Jones Conjecture in $L$-theory \cite{FJ_iso}. That is, the elements $\Gamma$ of $FJ_L$ satisfy $\mathrm{Wh}_1(\Gamma \times \mathbb{Z}^n) = 0$ and $H_n^\Gamma(E_{\matheurm{all}}\Gamma, E_{\matheurm{vc}}\Gamma; \underline{\mathbb{L}}_\mathbb{Z}^{-\infty}) = 0$ for all $n \geq 0$ (see~\cite{DL1}).
\end{defn}
We shall focus on the torsionfree case. This has nice subclasses \cite{FJ_GLmR, BL_CAT0, BFL_lattice}.
\begin{thm}[Farrell--Jones, Bartels--L\"uck, Bartels--Farrell--L\"uck]
Let $\Gamma$ be a discrete torsionfree group. Then $\Gamma$ has class $FJ_L$ if:
\begin{itemize}
\item $\Gamma$ is the fundamental group of a complete $A$-regular Riemannian manifold with all sectional curvatures non-positive, or
\item $\Gamma$ is hyperbolic with respect to the word metric, or
\item $\Gamma$ admits a cocompact proper action by isometries on a complete finite-dimensional $\mathrm{CAT}(0)$ metric space, or
\item $\Gamma$ is a virtually polycyclic group (equivalently, a virtually poly-$\mathbb{Z}$ group), or
\item $\Gamma$ is a cocompact lattice in a virtually connected Lie group.
\end{itemize}
\end{thm}
We state our $s$-cobordism answer to the Borel conjecture for fundamental groups of exponential growth.
\begin{cor}\label{cor:algebraic_rigidity}
Suppose $Z$ is an aspherical compact topological 4-manifold such that $\pi_1(Z)$ has class $FJ_L$. Then $Z$ is topologically $s$-rigid. Also $Z$ has class $SES^h_+$.
\end{cor}
\begin{exm}
Topological $s$-rigidity occurs if $Z-\partial Z$ is complete finite-volume hyperbolic. That is, $Z - \partial Z = \mathbb{H}^4/\Gamma$ for some torsionfree lattice $\Gamma$ in $\mathrm{Isom}(\mathbb{H}^4)$.
\end{exm}
\begin{exm}
A non-Riemannian example of topological $s$-rigidity is the closed 4-manifold $Z$ of M.~Davis \cite{DavisM_aspherical}. The universal cover $\widetilde{Z}$ is a complete $\mathrm{CAT}(0)$ metric space. Most strikingly, $\widetilde{Z}$ is contractible but not homeomorphic to $\mathbb{R}^4$.
\end{exm}
The next example involves multiple citations, so we give a formal proof later. Currently, due to $\mathrm{Nil}$ summands, it is unknown if its Whitehead group vanishes.
\begin{cor}\label{cor:mappingtorus}
Suppose $Z$ is the mapping torus of a homeomorphism of an aspherical closed 3-manifold $K$. If $\mathrm{Wh}_1(\pi_1 Z)=0$ then $Z$ is topologically $s$-rigid.
\end{cor}
Now, let us pass to connected sums, which fail to be aspherical if there is more than one factor. The next statement shall follow from Theorems~\ref{thm:main} and \ref{thm:rigidity}. Below, we write $\mathrm{cdim}(G)$ for the cohomological dimension of any discrete group $G$.
\begin{cor}\label{cor:connectsum_vanishingsecondhomotopy}
Let $n > 0$. For each $1 \leq i \leq n$, let $X_i$ be a compact oriented topological 4-manifold. If each fundamental group $\Gamma_i := \pi_1(X_i)$ is torsionfree of class $FJ_L$ with $\mathrm{cdim}(\Gamma_i) \leq 4$, and each mod-two second homotopy group vanishes: $\pi_2(X_i) \otimes \mathbb{Z}_2 = 0$, then the connected sum $X := X_1 \# \cdots \# X_n$ is topologically $s$-rigid.
\end{cor}
Next, we illustrate the basic but important example of non-aspherical oriented factors $X_i=S^1 \times S^3$. Here, the connected sum $X$ has free fundamental group $F_n$.\footnote{Since the infinite cyclic group $\mathbb{Z}$ is $NDL$ and we know the Postnikov stage $G/TOP^{[5]} = K(\mathbb{Z}/2,2) \times K(\mathbb{Z},4)$, a cohomology argument gives exactness at $\mathcal{N}_\mathrm{TOP}(X) = [X/\partial X, G/TOP]_+$. If $\pi_1(X)$ is a free group but we do not assume that $X$ is a connected sum, then partial results on exactness at $\mathcal{N}_\mathrm{TOP}(X)$ are found in \cite{KL} \cite[\S6.2]{HillmanBook}. Other non-aspherical (but $NDL$) examples of topological rigidity in dimension 4 are discussed in \cite[Thm.~0.9, Thm.~0.11]{KreckLueck}.}
\begin{exm}
Let $n>0$. Recall $\mathrm{Wh}_1(\mathbb{Z})=0$. Then, by Corollary~\ref{cor:connectsum_vanishingsecondhomotopy}, the closed 4-manifold $X = \# n(S^1 \times S^3)$ has class $SES^h_+$ and is topologically $s$-rigid.
\end{exm}
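For completeness, the computation of the fundamental group here is elementary: the gluing 3-sphere in a connected sum of 4-manifolds is simply connected, so the Seifert--van Kampen theorem gives
$$\pi_1\big(\# n(S^1 \times S^3)\big) \cong \underbrace{\mathbb{Z} * \cdots * \mathbb{Z}}_{n} \cong F_n.$$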
Finally, we specialize Corollary~\ref{cor:connectsum_vanishingsecondhomotopy} to the setting of the Borel conjecture.
\begin{cor}
Let $n > 0$. For each $1 \leq i \leq n$, suppose $X_i$ is an aspherical compact oriented topological 4-manifold with fundamental group $\Gamma_i := \pi_1(X_i)$ of class $FJ_L$. Then the connected sum $X := X_1 \# \cdots \# X_n$ is topologically $s$-rigid.\qed
\end{cor}
Here is an outline of the rest of the paper. Foundations are laid in Sections~\ref{sec:structuresets}--\ref{sec:Weinberger}, where we expand work of Cappell and Weinberger in dimension four.
Applications are made in Sections~\ref{sec:main_proofs}--\ref{sec:rigidity_proofs}, where we prove the stated results of the Introduction. The reader may find most of our notation and terminology in Kirby--Taylor \cite{KT}.
\subsection*{Acknowledgments}
The author thanks Masayuki Yamasaki for his kind hospitality. This paper was conceived in May 2009 at the Okayama University of Science. Jonathan Hillman and Andrew Ranicki provided expert e-mail consultation. Finally, the author is grateful to the Hausdorff Research Institute for Mathematics. Through discussions there in October 2009 at the Rigidity Trimester Program, the earlier results on topological $s$-rigidity were extended to the class of $FJ_L$ summands.
\section{The language of structure sets}\label{sec:structuresets}
To start, the following equivalence relations play prominent roles in Section~\ref{sec:Weinberger}.
\begin{defn}
Let $Z$ be a compact topological manifold. Let $h: M \to Z$ and $h': M' \to Z$ be continuous maps. A \textbf{topological bordism} $H: h \to h'$ rel $\partial Z$ consists of a continuous map $|H|: W \to Z \times I$ and a compact topological manifold $W$ such that $H|_M=h$ and $H|_{M'}=h'$ and $\partial W = M \cup_{\partial Z} M'$.
Moreover, we call $H: h \to h'$ a \textbf{$h$-bordism} if $(W;M,M')$ is an $h$-cobordism. Furthermore, we call $H: h \to h'$ an \textbf{$s$-bordism} if $(W;M,M')$ is an $s$-cobordism.
\end{defn}
Next, we relativize the surgical language in the Introduction (cf.~\cite{Wall}).
\begin{defn}
Let $Z$ be a topological manifold such that the boundary $\partial Z$ is collared. Let $\partial_0 Z$ be a compact locally flat codimension-zero submanifold of $\partial Z$. The pair $(Z,\partial_0 Z)$ is called a \textbf{$\mathrm{TOP}$ manifold pair}. Write $\partial_1 Z := \partial Z - \mathrm{int}\,\partial_0 Z$. The induced triple $(Z,\partial_0 Z,\partial_1 Z)$ is called a \textbf{$\mathrm{TOP}$ manifold triad}.
\end{defn}
Here is the precise definition of the relative structure set that we use in proofs.
\begin{defn}
Let $(Z,\partial_0 Z)$ be a compact $\mathrm{TOP}$ 4-manifold pair. The \textbf{structure set $\mathcal{S}_\mathrm{TOP}^h(Z,\partial_0 Z)$} consists of $\sim$-equivalence classes of continuous maps $h: (M,\partial_0 M,\partial_1 M) \to (Z,\partial_0 Z, \partial_1 Z)$ of compact $\mathrm{TOP}$ 4-manifold triads such that:
\begin{itemize}
\item
$|h|: M \to Z$ is a homotopy equivalence with $M \subset \mathbb{R}^\infty$,
\item
$\partial_0 h: \partial_0 M \to \partial_0 Z$ is a $\mathbb{Z}[\Gamma_0]$-homology equivalence, where $\Gamma_0 := \pi_1(\partial_0 Z)$, and
\item
$\partial_1 h: \partial_1 M \to \partial_1 Z$ is a homeomorphism.
\end{itemize}
We call such $h: (M, \partial_0 M) \to (Z, \partial_0 Z)$ a \textbf{homotopy--homology equivalence}.
Here, $h \sim h'$ if there exists a $\mathrm{TOP}$ bordism $H: h \to h'$ such that:
\begin{itemize}
\item $|H|: W \to Z \times I$ is a homotopy equivalence,
\item
$\partial_0 H: \partial_0 W \to \partial_0 Z \times I$ is a $\mathbb{Z}[\Gamma_0]$-homology equivalence, and
\item
$\partial_1 H: \partial_1 W \to \partial_1 Z \times I$ is a homeomorphism.
\end{itemize}
We call such $H: h \to h'$ a \textbf{homotopy--homology $h$-bordism}.
\end{defn}
The 4-dimensional relative surgery sequence is defined carefully as follows.
\begin{defn}
Let $(Z,\partial_0 Z)$ be a compact $\mathrm{TOP}$ 4-manifold pair. Denote the fundamental groupoids $\Gamma := \pi_1(Z)$ and $\Gamma_0 := \pi_1(\partial_0 Z)$. Denote the orientation character $\omega := w_1(\tau_Z): \Gamma \to \{\pm 1\}$. We declare that \textbf{$(Z,\partial_0 Z)$ has class $SES^h$} if there exists an exact sequence of based sets:
\begin{multline*}
\mathcal{N}_\mathrm{TOP}(Z \times I, \partial_0 Z \times I) \xrightarrow{~\sigma_5^h~} L_5^h(\Gamma,\Gamma_0,\omega) \xrightarrow{~\partial~}\\ \mathcal{S}_\mathrm{TOP}^h(Z, \partial_0 Z) \xrightarrow{~\eta~} \mathcal{N}_\mathrm{TOP}(Z, \partial_0 Z) \xrightarrow{~\sigma_4^h~} L_4^h(\Gamma,\Gamma_0,\omega).
\end{multline*}
\end{defn}
Last is the enhancement to include actions of certain groups in $K$- and $L$-theory.
\begin{defn}\label{defn:SESplus}
Furthermore, we declare that \textbf{$(Z,\partial_0 Z)$ has class $SES^h_+$} if, for all elements $h \in \mathcal{S}_\mathrm{TOP}^h(Z,\partial_0 Z)$ and $t \in \mathrm{Wh}_1(\Gamma)$ and $x \in L_5^h(\Gamma,\Gamma_0,\omega)$, there exist:
\begin{itemize}
\item an action of the group $\mathrm{Wh}_1(\Gamma)$ on the set $\mathcal{S}_\mathrm{TOP}^h(Z,\partial_0 Z)$ such that:
\begin{itemize}
\item
there is an $h$-bordism $F: W \to Z \times I$ rel $\partial Z$ from $h: M \to Z$ to $t(h): M' \to Z$ with Whitehead torsion $\tau(W;M,M')=t$, and
\end{itemize}
\item an action of the group $L_5^h(\Gamma,\Gamma_0,\omega)$ on the set $\mathcal{S}_\mathrm{TOP}^h(Z,\partial_0 Z)$ such that:
\begin{itemize}
\item there exists a normal bordism $F$ from $h$ to $x(h)$ with $\sigma_5^h(F) = x$, and
\item the equation $\partial(x) = x(\mathrm{id}_Z)$ holds.
\end{itemize}
\end{itemize}
\end{defn}
Before moving on, we consider the stable version of the above structure set.
\begin{defn}
Let $(Z,\partial_0 Z)$ be a compact $\mathrm{TOP}$ 4-manifold pair.
The \textbf{stable structure set $\ol{\mathcal{S}}_\mathrm{TOP}^h(Z,\partial_0 Z)$} consists of $\ol{\sim}$-equivalence classes of homotopy--homology equivalences $h: (M, \partial_0 M) \to (Z\# r(S^2 \times S^2), \partial_0 Z)$ for any $r \geq 0$.
Here, we define $h \ol{\sim} h'$ if there exist $s, s' \geq 0$ and a homotopy--homology $h$-bordism $H: h \# \mathrm{id}_{s(S^2 \times S^2)} \to h' \# \mathrm{id}_{s'(S^2 \times S^2)}$.
\end{defn}
The next theorem was proven by S.~Cappell and J.~Shaneson \cite{CS1} (cf.~\cite{KT}).
\begin{thm}[Cappell--Shaneson]\label{thm:CappellShaneson}
Let $(Z,\partial_0 Z)$ be a compact $\mathrm{TOP}$ 4-manifold pair. Denote the fundamental groupoids $\Gamma := \pi_1(Z)$ and $\Gamma_0 := \pi_1(\partial_0 Z)$ and orientation character $\omega: \Gamma \to \{\pm 1\}$. Then there is an exact sequence of based sets:
\begin{multline*}
\mathcal{N}_\mathrm{TOP}(Z \times I, \partial_0 Z \times I) \xrightarrow{~\sigma_5^h~} L_5^h(\Gamma,\Gamma_0,\omega) \xrightarrow{~\partial~}\\ \ol{\mathcal{S}}_\mathrm{TOP}^h(Z, \partial_0 Z) \xrightarrow{~\eta~} \mathcal{N}_\mathrm{TOP}(Z, \partial_0 Z) \xrightarrow{~\sigma_4^h~} L_4^h(\Gamma,\Gamma_0,\omega).
\end{multline*}
Moreover, the abelian group $L_5^h(\Gamma,\Gamma_0,\omega)$ acts on the set $\ol{\mathcal{S}}^h_\mathrm{TOP}(Z,\partial_0 Z)$ in such a way that the above map $\partial$ is equivariant.
\end{thm}
\section{A Weinberger-type homology splitting theorem}\label{sec:Weinberger}
Now we are ready to improve the $\Lambda$-splitting theorem of S.~Weinberger \cite{Weinberger_fibering} by slightly modifying his proof. In essence, Theorems~\ref{thm:main} and \ref{thm:main_stable} are corollaries of it.
\begin{defn}
In the setting below, the homotopy equivalence $h: M \to X$ is \textbf{$\mathbb{Z}[\Gamma_0]$-split} if $h$ is topologically transverse to $X_0$ and its restriction $h_0: h^{-1}(X_0) \to X_0$ is a $\mathbb{Z}[\Gamma_0]$-homology equivalence (hence $h-h_0: h^{-1}(X-X_0) \to X-X_0$ is also).
\end{defn}
\begin{thm}\label{thm:generalized_weinberger}
Let $X$ be a non-empty compact connected topological 4-manifold. Let $X_0$ be a closed connected incompressible separating topological 3-submanifold of $X$. The decomposition of manifolds $X = X_1 \cup_{X_0} X_2$ induces the decomposition of fundamental groups $\Gamma = \Gamma_1 *_{\Gamma_0} \Gamma_2$. Define a closed simply-connected 8-manifold
\[
Q := \mathbb{CP}^4 \# (S^3 \times S^5) \# (S^3 \times S^5).
\]
Let $M$ be a compact topological 4-manifold. Suppose $h: M \to X$ is a homotopy equivalence such that the restriction $\partial h: \partial M \to \partial X$ is a homeomorphism.
\begin{enumerate}
\item
Assume $(*)$: the group $\Gamma_0$ has class $NDL$ and the 4-manifold pairs $(X_1,X_0)$ and $(X_2,X_0)$ have class $SES^h_+$.
Then $h$ is topologically $s$-bordant rel $\partial M$ to a $\mathbb{Z}[\Gamma_0]$-split homotopy equivalence $h''': M''' \to X$ if and only if $h \times \mathrm{id}_Q$ is homotopic rel $\partial M \times Q$ to a split homotopy equivalence along $X_0 \times Q$.
\item
Do not assume Hypothesis $(*)$. Then, for some $r \geq 0$, the $r$-th stabilization $h \# \mathrm{id}_{r(S^2 \times S^2)}$ is homotopic rel $\partial M$ to a $\mathbb{Z}[\Gamma_0]$-split homotopy equivalence $h''': M''' \to X\# r(S^2 \times S^2)$ if and only if $h \times \mathrm{id}_Q$ is homotopic rel $\partial M \times Q$ to a split homotopy equivalence along $X_0 \times Q$.
\end{enumerate}
Moreover, there is an analogous statement if $X_0$ is two-sided and non-separating.
\end{thm}
Note the map $\Gamma_0 \to \Gamma$ is injective, but the amalgam $\Gamma$ need not have class $NDL$. Observe the 8-manifold $Q$ has both Euler characteristic and signature equal to one.
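The latter is a routine computation, which we record for convenience: $\chi(\mathbb{CP}^4)=5$ and $\chi(S^3 \times S^5)=0$, while for closed $8$-manifolds $\chi(A \# B)=\chi(A)+\chi(B)-\chi(S^8)=\chi(A)+\chi(B)-2$, so that
$$\chi(Q)=5+0+0-2-2=1.$$
Similarly, the signature is additive under connected sum and $\sigma(S^3 \times S^5)=0$, whence $\sigma(Q)=\sigma(\mathbb{CP}^4)=1$.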
\begin{cor}[Weinberger]\label{cor:weinberger}
In the previous theorem, instead of $(*)$, assume $(**)$: $\partial X$ is empty and the fundamental group $\Gamma$ has class $NDL$. Then $h$ is homotopic to a $\mathbb{Z}[\Gamma_0]$-split homotopy equivalence along $X_0$ if and only if $h \times \mathrm{id}_Q$ is homotopic to a split homotopy equivalence along $X_0 \times Q$.
\end{cor}
\begin{proof}
Since $\Gamma$ has class $NDL$, the subgroups $\Gamma_0, \Gamma_1, \Gamma_2$ have class $NDL$. Then, by \cite{FreedmanTeichner, KQ}, the 4-manifold pairs $(X_i, X_0)$ have class $SES^h_+$. Hence Hypothesis $(**)$ implies Hypothesis $(*)$.
Now, since $\Gamma \in NDL$, by \cite{FreedmanTeichner, KQ}, the $\mathrm{TOP}$ $s$-cobordism of Theorem \ref{thm:generalized_weinberger}(1) is a product.
\end{proof}
\begin{rem}
Weinberger's theorem (Cor.~\ref{cor:weinberger}) \cite[Thm.~1]{Weinberger_fibering} was stated in a limited form. The only applicable situations were injective amalgamated products $\Gamma = \Gamma_1 *_{\Gamma_0} \Gamma_0 = \Gamma_1$ and $\Gamma = C_2 * C_2 = D_\infty$ in class $NDL$. (The second case was applied in \cite{JK_RP4, BDK}.) We effectively delete the last phrase in his proof. Earlier, there was a homology splitting result of M.~Freedman and L.~Taylor \cite{FreedmanTaylor} which required that $\Gamma = \Gamma_0 *_{\Gamma_0} \Gamma_0 = \Gamma_0$ but did not require that $\Gamma$ have class $NDL$.
\end{rem}
Next, we modify Weinberger's clever cobordism argument, adding a few details. We suppress the orientation characters $\omega$ needed in the non-orientable case.
\begin{proof}[Proof of Theorem~\ref{thm:generalized_weinberger}(1)]
For brevity, we shall write ${}^Q$ for either $\times Q$ or $\times \mathrm{id}_Q$.
$(\Longrightarrow)$: Since $\dim(M_0^Q)=11 >4$, this follows from the $\mathrm{TOP}$ $s$-cobordism theorem and the handlebody version of the Quillen plus construction \cite{KS}.
$(\Longleftarrow)$: Suppose $h^Q$ is homotopic to a split homotopy equivalence along $X_0^Q$. By topological transversality \cite{FQ}, we may assume, up to homotopy rel $\partial M$, that $h: M \to X$ is $\mathrm{TOP}$ transverse to $X_0$. There is an induced decomposition of compact manifolds $M = M_1 \cup_{M_0} M_2$, where for all $j=0,1,2$ the restrictions $h_j := h|M_j: M_j \to X_j$ are degree one $\mathrm{TOP}$ normal maps and $\partial h_j$ are homeomorphisms.
Since $Q$ has both Euler characteristic and signature equal to 1, by periodicity and product formula \cite{Wall}, for all $i=1,2$, the relative surgery obstructions vanish:
\[
\sigma_*(h_i,h_0) = \sigma_*(h_i,h_0) \cdot \sigma^*(Q) = \sigma_*(h_i^Q,h_0^Q) = 0.
\]
Then, by exactness at $\mathcal{N}_\mathrm{TOP}$ in Hypothesis $(*)$, for all $i=1,2$, there exists a $\mathrm{TOP}$ normal bordism $F_i: (W_i,\partial W_i) \to (X_i,X_0)$ from $(h_i,h_0): (M_i,M_0) \to (X_i,X_0)$ to a homotopy--homology equivalence $(h'_i,\partial h'_i): (M'_i,\partial M'_i) \to (X_i,X_0)$.
Consider the $\mathrm{TOP}$ normal map $F: (W,M'_0;M') \to (X \times I,X_0 \times 1;X')$ of triads, where the restriction $\partial_1 F: M' \to X'$ is a homotopy equivalence:
\begin{eqnarray*}
F &:=& F_1 \cup_{h_1} (h \times \mathrm{id}_I) \cup_{h_2} F_2\\
W &:=& W_1 \cup_{M_1} (M \times I) \cup_{M_2} W_2\\
M'_0 &:=& \partial W_1 \cup_{M_0} \partial W_2\\
M' &:=& M'_1 \sqcup (M \times 0) \sqcup M'_2\\
X' &:=& (X_1 \times 1) \sqcup (X \times 0) \sqcup (X_2 \times 1).
\end{eqnarray*}
Select a homotopy $H: M^Q \times I \to X^Q$ from $h^Q$ to a split homotopy equivalence $g = g_1 \cup_{g_0} g_2$ along $X_0^Q$. By topological transversality \cite{FQ}, we may assume that both $F^Q$ and $H$ are transverse to $X_0^Q$. Define a degree one $\mathrm{TOP}$ normal map
\begin{eqnarray*}
G_0 &:=& (h_0^Q \times I) \cup_{(h_0^Q \times 0)} H_0\\
G_i &:=& F_i^Q \cup_{(h_i^Q \times 1)} (h_i^Q \times I) \cup_{(h_i^Q \times 0)} H_i.
\end{eqnarray*}
Note $F^Q \cup_h H = G_1 \cup_{G_0} G_2$.
Define a degree one $\mathrm{TOP}$ normal map
\[
\bar{G}_i := (G_i,\partial_1 F_i^Q \cup_{h_0^Q \times 1} G_0;h_i' \sqcup g_i)
\]
such that the restriction $\partial_1 \bar{G}_i = h_i' \sqcup g_i$ is a homotopy equivalence. Observe there are defined surgery obstructions
\begin{eqnarray*}
x &:=& \sigma_*(F,\partial_0 F) \in L_5^h(\Gamma,\Gamma_0)\\
x_i &:=& \sigma_*(\bar{G}_i, \partial_0 \bar{G}_i) \in L_{13}^h(\Gamma_i,\Gamma_0).
\end{eqnarray*}
Denote the inclusion-induced homomorphism $j_i: L_5^h(\Gamma_i,\Gamma_0) \to L_5^h(\Gamma,\Gamma_0)$. Then, by periodicity, sum formula, and $L_*^h(\Gamma_0,\Gamma_0)=0$ \cite{Wall}, we obtain:
\[
x = \sigma_*(F^Q, \partial_0 F^Q) = \sigma_*(F^Q \cup_h H,\partial_0 F^Q) = j_1(x_1) + j_2(x_2).
\]
In particular, we obtain that $x \in L_5^h(\Gamma,\Gamma_0)$ is the image of a surgery obstruction $x^B \in L_5^B(\Gamma,\Gamma_0)$ uniquely determined by $F^Q$, where the decoration subgroup is
\[
B := j_1\mathrm{Wh}_1(\Gamma_1) + j_2\mathrm{Wh}_1(\Gamma_2) \subseteq \mathrm{Wh}_1(\Gamma).
\]
Next, by existence of an $L_5^h$-action in Hypothesis $(*)$, for each $i=1,2$, there exists a $\mathrm{TOP}$ normal bordism $F'_i: (W'_i,\partial W'_i) \to (X_i,X_0)$ from $(h'_i,\partial h'_i)$ to $(h''_i, \partial h''_i)$ with surgery obstruction $\sigma_*(F'_i,\partial F'_i) = -x_i$ such that $h''_i: (M''_i, \partial_1 M''_i) \to (X_i, X_0)$ is a homotopy--homology equivalence. Now consider the $\mathrm{TOP}$ normal map $F': (W',M_0'';M'') \to (X \times I, X_0 \times 1; X')$ of triads, where the restriction $\partial_1 F': M'' \to X'$ is a homotopy equivalence:
\begin{eqnarray*}
F' &:=& F'_1 \cup_{h'_1} F \cup_{h'_2} F'_2\\
W' &:=& W'_1 \cup_{M'_1} W \cup_{M'_2} W'_2\\
M''_0 &:=& \partial W'_1 \cup_{\partial M'_1} M'_0 \cup_{\partial M'_2} \partial W'_2\\
M'' &:=& M''_1 \sqcup (M \times 0) \sqcup M''_2.
\end{eqnarray*}
Note the surgery obstruction vanishes:
\[
\sigma_*(F',\partial_0 F')=j_1(-x_1)+x+j_2(-x_2)=0 \in L_5^B(\Gamma,\Gamma_0).
\]
Since the Null Disc Lemma holds for $\Gamma_0$, by 5-dimensional relative surgery \cite{Wall, FQ}, there is a normal bordism $G$ rel $M''$ to a $B$-torsion homotopy equivalence
\[
F'': (W'',M'''_0;M'') \longrightarrow (X \times I, X_0 \times 1; X')
\]
of triads such that for each $i=1,2$ the restriction $\partial F'_i: \partial M'_i \to X_0$ is a $\mathbb{Z}[\Gamma_0]$-homology equivalence. Hence $F''$ is a $B$-torsion $\mathrm{TOP}$ $h$-bordism from $h$ to a $\mathbb{Z}[\Gamma_0]$-split homotopy equivalence $(\partial_+ F'';\partial F'_1): (\partial_+W'';\partial M'_0) \to (X;X_0)$ along $X_0$.
Finally, consider the Whitehead torsion of the achieved $h$-cobordism rel $\partial M$:
\[
t := \tau(W'';M,\partial_+W'') \in B.
\]
Then there exist $t_i \in \mathrm{Wh}(\Gamma_i)$ such that $t = j_1(t_1) + j_2(t_2)$.
By existence of a $\mathrm{Wh}_1$-action in Hypothesis $(*)$, for all $i=1,2$, there exist $h$-bordisms $F''_i ~\mathrm{rel}~ \partial$ such that torsion of the domain $h$-cobordism is $j_i(-t_i)$. Therefore, by the sum formula, we obtain a topological $s$-bordism $F''' := F''_1 \cup F'' \cup F''_2$ rel $\partial M$ from $h$ to a $\mathbb{Z}[\Gamma_0]$-split homotopy equivalence $h''' := \partial_+F'''$ along $X_0$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:generalized_weinberger}(2)]
The argument in the stable case, Part (2), is similar to the unstable case, Part (1). The places where we used the hypothesis that $(X_1, X_0)$ and $(X_2, X_0)$ have class $SES^h_+$ can be replaced with the use of Theorem~\ref{thm:CappellShaneson}. Moreover, the places where we used the hypothesis that $\Gamma_0$ has class $NDL$ had target $X_0 \times I$ for surgery problems, and so can be replaced with the use of Theorem~\ref{thm:CappellShaneson}.
Realization of elements of the Whitehead group by $h$-cobordisms on any given compact 4-manifold is the same as in high dimensions \cite[p.~90]{RS}. Finally, by \cite[Theorem~9.1]{FQ}, any $\mathrm{TOP}$ $s$-cobordism $W'''$ on a compact 4-manifold admits a $\mathrm{TOP}$ handlebody structure. Then we proceed as in the proof of the high-dimensional $s$-cobordism theorem (e.g., see \cite[Thm.~6.19]{RS}), except we resolve double-point singularities of immersed Whitney 2-discs via Norman tricks \cite[Lem.~1]{Norman}. We conclude, for some $r \geq 0$, that the sum stabilization $W''' \natural r(S^2 \times S^2 \times I)$ (defined on \cite[p.~107]{FQ}) is homeomorphic to the product $(X \# r(S^2 \times S^2)) \times I$.
\end{proof}
\section{Proofs for the surgery sequence}\label{sec:main_proofs}
Again, we suppress the orientation characters $\omega$ used in the non-orientable case.
We start with a puncturing lemma. Section~\ref{sec:Weinberger} contains the terminology for pairs.
\begin{lem}\label{lem:puncture}
Let $Z$ be a non-empty compact connected topological 4-manifold. Write $pZ := Z - \mathrm{int}\,D^4$. If $Z$ has class $SES^h_+$, then $(pZ, S^3)$ has class $SES^h_+$.
\end{lem}
\begin{proof}
Denote the fundamental group $\Gamma := \pi_1(Z)$. First, let $(M,\partial_0 M)$ be a compact topological 4-manifold pair, and let $(f,\partial_0 f): (M,\partial_0 M) \to (pZ,S^3)$ be a degree-one $\mathrm{TOP}$ normal map of pairs that restricts to a homeomorphism $\partial_1 f: \partial_1 M \to \partial Z$. Suppose the relative surgery obstruction vanishes: $\sigma_4^h(f)=0 \in L_4^h(\Gamma,1)$. Recall the geometric exact sequence of C.T.C.~Wall \cite[Cor.~3.1.1]{Wall}:
\[
\mathbb{Z} = L_4^h(1) \xra{~\varepsilon~} L_4^h(\Gamma) \longrightarrow L_4^h(\Gamma,1) \longrightarrow L_3^h(1) = 0.
\]
Then $\partial_0 f: \partial_0 M \to S^3$ is $\mathrm{DIFF}$ normally bordant to a $\mathbb{Z}$-homology equivalence $g: \Sigma \to S^3$. Since any closed oriented 3-manifold $\Sigma$ is parallelizable, by a theorem of M.~Freedman \cite[Cor.~9.3C]{FQ}, it follows that there exists a $\mathrm{TOP}$ normal null-bordism of $g$ over $D^4$. Thus $(f,\partial_0 f)$ is $\mathrm{TOP}$ normally bordant, as a pair relative to $\partial_1 M$, to a degree-one map $f': M' \to Z$ such that $\partial f': \partial_1 M \to \partial Z$ is a homeomorphism. Moreover, by connected sum with copies of the $\mathrm{TOP}$ $E_8$-manifold or its reverse, we may assume that the absolute surgery obstruction vanishes: $\sigma_4^h(f') = 0 \in L_4^h(\Gamma)$. By hypothesis, $f'$ is $\mathrm{TOP}$ normally bordant to a homotopy equivalence $h: M'' \to Z$. We may assume that $h$ is transverse to a point $z \in Z$ and that $h^{-1}\{z\}$ is a singleton. Thus $(f,\partial_0 f)$ is normally bordant to a homotopy equivalence $(ph,\mathrm{id}): (pM'',S^3) \to (pZ,S^3)$. Therefore we obtain exactness at the normal invariants $\mathcal{N}_\mathrm{TOP}(pZ,S^3)$.
Next, define an appropriate action of $L_5^h(\Gamma,1)$ on $\mathcal{S}_\mathrm{TOP}^h(pZ,S^3)$ as follows. By puncturing at a transversal singleton $\{z\} \subset Z$ with connected preimage, we obtain a function $p: \mathcal{S}_\mathrm{TOP}^h(Z) \to\mathcal{S}_\mathrm{TOP}^h(pZ,S^3)$. By the existence of a 1-connected $\mathrm{TOP}$ $h$-cobordism from a homology 3-sphere $\Sigma$ to the genuine one \cite[Cor.~9.3C]{FQ}, it follows that $p$ is surjective. By the topological plus construction \cite[Thm.~11.1A]{FQ}, applied to any homology $h$-cobordism of $S^3$ to itself, it follows that $p$ is injective. By hypothesis, there is an appropriate action of $L_5^h(\Gamma)$ on $\mathcal{S}_\mathrm{TOP}^h(Z)$. This extends, via the bijection $p$, to an action of $L_5^h(\Gamma)$ on $\mathcal{S}_\mathrm{TOP}^h(pZ,S^3)$. For any orientation character $\omega$, there is a unique $k \geq 0$ such that Wall's exact sequence becomes
\[
0 \longrightarrow L_5^h(\Gamma) \xra{~\iota~} L_5^h(\Gamma,1) \longrightarrow k\mathbb{Z} = \mathrm{Ker}(\varepsilon) \longrightarrow 0.
\]
(Here $k=0$ if and only if $\omega=1$, equivalently, $Z$ is orientable.) Since these groups are abelian, we obtain a non-canonical isomorphism
\[
\varphi: L_5^h(\Gamma,1) \longrightarrow L_5^h(\Gamma) \oplus k\mathbb{Z}.
\]
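For completeness, one standard way to obtain such a splitting (the argument is not specific to the present setting): writing $p: L_5^h(\Gamma,1) \to k\mathbb{Z}$ for the quotient map, the group $k\mathbb{Z}$ is free abelian, so $p$ admits a section $s$, and
\[
\varphi(y) := \bigl(\iota^{-1}(y - s(p(y))),\, p(y)\bigr)
\]
is an isomorphism, since $y - s(p(y)) \in \mathrm{Ker}(p) = \mathrm{Im}(\iota)$ by exactness; it is non-canonical because it depends on the choice of $s$.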
The relevant action of $L_4^h(1)$ on the homology structure set $\mathcal{S}_\mathrm{TOP}^{h\mathbb{Z}}(S^3)$ via twice-punctured $E_8$-manifolds restricts/extends to an action of $k\mathbb{Z}$ on $\mathcal{S}_\mathrm{TOP}^h(pZ,S^3)$. Thus, via the isomorphism $\varphi$, we obtain an appropriate action of $L_5^h(\Gamma,1)$, given by concatenation of the actions. Therefore, we conclude $(pZ,S^3)$ has class $SES^h_+$.
\end{proof}
At last, we are ready to establish our main theorem using homology splitting.
For any non-empty compact connected 4-manifold $Z$, we use the following notation:
\begin{eqnarray*}
pZ &:=& Z - \mathrm{int}\,D^4\\
\widetilde{\mathcal{N}}_\mathrm{TOP}(Z) &:=& \mathcal{N}_\mathrm{TOP}(Z) / (E_8 \to S^4)\\
\widetilde{L}^h_4(\pi_1 Z) &:=& \mathrm{Cok}\left(\varepsilon: L^h_4(1) \longrightarrow L^h_4(\pi_1 Z)\right).
\end{eqnarray*}
\begin{proof}[Proof of Theorem~\ref{thm:main}]
Since $\Gamma := \pi_1(X)$ is isomorphic to a free product $\Gamma_1 * \cdots * \Gamma_n$, by an existence theorem of J.~Hillman \cite{HillmanFreeProduct} (cf.~\cite{KLT1, KurMat}), there exist $r \geq 0$ and closed topological 4-manifolds $X_1,\ldots,X_n$ with each $\pi_1(X_i)$ isomorphic to $\Gamma_i$ such that $X \# r(S^2 \times S^2)$ is homeomorphic to $X_1 \# \cdots \# X_n$. For Part~(1), since each $\Gamma_i$ has class $NDL$, by Theorem~\ref{thm:FTKQ}, we obtain that each $X_i$ has class $SES^h_+$. For Part~(2), this is assumed of the $X_i$, and the $SES^h_+$ property only depends on the homotopy type of $X$. Therefore we may assume that $X = X_1 \# \cdots \# X_n$ with each $X_i$ of class $SES^h_+$. Write $\Gamma_i := \pi_1(X_i)$ for each fundamental group.
We induct on $n > 0$. Assume, for some $n \geq 1$, that every connected sum of $n-1$ compact connected topological 4-manifolds of class $SES^h_+$ has class $SES^h_+$, where in the non-orientable case we assume 2-torsionfree fundamental group. Write
\[
X' := X_1 \# \cdots \# X_{n-1} \qquad \Gamma' := \Gamma_1 * \cdots * \Gamma_{n-1}.
\]
Hence $X = X' \# X_n$ and $\Gamma = \Gamma' * \Gamma_n$. By hypothesis, both $X'$ and $X_n$ have class $SES^h_+$. Then, by Lemma~\ref{lem:puncture}, the pairs $(pX',S^3)$ and $(pX_n,S^3)$ have class $SES^h_+$. Next, we shall show that our original 4-manifold has class $SES^h_+$:
\[
X = pX' \cup_{S^3} pX_n.
\]
First, the $K$-theory splitting obstruction group vanishes \cite{Waldhausen_Rings}, and, by a recent vanishing result \cite{CR, BL_CAT0, CD2}, so do the $L$-theory obstruction groups:\footnote{If $\Gamma$ is 2-torsionfree, then $\mathrm{UNil}_*^h=0$ by Cappell's earlier result \cite{Cappell_unitary} \cite[Lem.~II.10]{Cappell_split}. Furthermore, we require $\Gamma$ to be 2-torsionfree in the non-orientable case, due to semi-periodicity.}
\begin{eqnarray*}
\widetilde{\mathrm{Nil}}_0(\mathbb{Z};\mathbb{Z}[\Gamma'-1],\mathbb{Z}[\Gamma_n-1]) &=& 0\\
\mathrm{UNil}_4^h(\mathbb{Z};\mathbb{Z}[\Gamma'-1],\mathbb{Z}[\Gamma_n-1]) &=& 0\\
\mathrm{UNil}_5^h(\mathbb{Z};\mathbb{Z}[\Gamma'-1],\mathbb{Z}[\Gamma_n-1]) &=& 0.
\end{eqnarray*}
So observe, by Stallings' theorem for Whitehead groups of free products \cite{Stallings} and the Mayer--Vietoris type exact sequence for $L$-theory groups \cite{Cappell_unitary}, that:
\begin{eqnarray*}
\mathrm{Wh}_1(\Gamma) &=& \mathrm{Wh}_1(\Gamma') \oplus \mathrm{Wh}_1(\Gamma_n)\\
\widetilde{L}_4^h(\Gamma) &=& \widetilde{L}_4^h(\Gamma') \oplus \widetilde{L}_4^h(\Gamma_n)\\
\widetilde{L}_5^h(\Gamma) &=& \widetilde{L}_5^h(\Gamma') \oplus \widetilde{L}_5^h(\Gamma_n).
\end{eqnarray*}
Here, from the Mayer--Vietoris sequence for any free product $G = G_1 * G_2$, we write
\begin{eqnarray*}
\widetilde{L}^h_5(G) &:=& \mathrm{Ker}\left(\partial: L^h_5(G) \longrightarrow L_4^h(1) \right).
\end{eqnarray*}
Second, since $\mathcal{N}_\mathrm{TOP}(S^3)$ and $\widetilde{\mathcal{N}}_\mathrm{TOP}(S^3 \times I)$ are singletons, by $\mathrm{TOP}$ transversality \cite{FQ} and by attaching thickened normal bordisms, we obtain:
\begin{eqnarray*}
\widetilde{\mathcal{N}}_\mathrm{TOP}(X) &=& \widetilde{\mathcal{N}}_\mathrm{TOP}(X') \times \widetilde{\mathcal{N}}_\mathrm{TOP}(X_n).
\end{eqnarray*}
So, since the surgery sequence for both $X'$ and $X_n$ is exact at $\mathcal{N}_\mathrm{TOP}$, the surgery sequence for the connected sum $X$ is exact at $\mathcal{N}_\mathrm{TOP}$.
Third, since $(pX', S^3)$ and $(pX_n, S^3)$ have class $SES^h_+$ and the splitting obstruction groups vanish, by Theorem~\ref{thm:generalized_weinberger}(1), any homotopy equivalence to $X$ is $\mathrm{TOP}$ $s$-bordant rel $\partial M$ to a $\mathbb{Z}$-homology split map along $S^3$. That is, the top part of the $s$-bordism is a homotopy equivalence whose preimage of $S^3$ is a $\mathbb{Z}$-homology 3-sphere $\Sigma$. Thus the following inclusion is an equality (compare \cite[Thm.~3]{Cappell_free}):
\[
\subseteq~:~ \mathcal{S}_\mathrm{TOP}^{\text{$\mathbb{Z}$-split}}(X;S^3) \longrightarrow \mathcal{S}_\mathrm{TOP}^h(X).
\]
By \cite[Corollary~9.3C]{FQ}, there exists a $\mathrm{TOP}$ $\mathbb{Z}$-homology $h$-cobordism $(W;\Sigma,S^3)$ such that $W$ is 1-connected. Furthermore, there exists an extension of the degree one normal map $\Sigma \to S^3$ to a degree one normal map $W \to S^3 \times I$. Thus, by attaching the thickened normal bordism, the following inclusion is an equality:
\[
\subseteq~:~ \mathcal{S}_\mathrm{TOP}^{\text{split}}(X;S^3) \longrightarrow \mathcal{S}_\mathrm{TOP}^{\text{$\mathbb{Z}$-split}}(X;S^3).
\]
(The process of this last equality is called \emph{neck exchange}, cf.~\cite{KLT2, JK_RP4}.)
Therefore the following map $\#$, given by interior connected sum, is surjective:
\[
\#~:~ \mathcal{S}_\mathrm{TOP}^h(X') \times \mathcal{S}_\mathrm{TOP}^h(X_n) \longrightarrow \mathcal{S}_\mathrm{TOP}^h(X).
\]
In order to show that $\#$ is injective, suppose $h_1 \# h_2$ is $\mathrm{TOP}$ $h$-bordant to $h'_1 \# h'_2$, say by a map $H: W \to X \times I$. Since $S^3 \times I$ is a 1-connected 4-manifold \cite{FQ}, and $\partial H$ is split along $S^3 \times \partial I$, by the relative 5-dimensional form of Cappell's nilpotent normal cobordism construction \cite{Cappell_free, Cappell_split}, there exists a $\mathrm{TOP}$ normal bordism rel $\partial W$ from $H$ to an $h$-bordism $H': W' \to X \times I$ split along $S^3 \times I$. So $H' = H'_1 \# H'_2$. Therefore $\#$ is injective.
Now $\mathrm{Wh}_1(\Gamma)$ and $\widetilde{L}_5^h(\Gamma)$ can be given product actions on $\mathcal{S}_\mathrm{TOP}^h(X)$. The latter extends to an action of $L_5^h(\Gamma)$ by attaching a thickened multiple of a twice-punctured $E_8$ manifold along $S^3$.
Hence the surgery sequence for $X$ is exact at $\mathcal{S}^h_\mathrm{TOP}$ and $L_5^h$. This completes the induction. Therefore arbitrary connected sums $X=X_1 \# \cdots \# X_n$ have class $SES^h_+$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main_stable}]
By the stable prime decomposition of Kreck--L\"uck--Teichner \cite{KLT1}, there exist 4-manifolds $X_i$, unique up to stabilization and permutation, with fundamental groups $\Gamma_i$ such that $X$ is $(S^2 \times S^2)$-stably homeomorphic to $X_1 \# \cdots \# X_n$. By a theorem of Waldhausen \cite{Waldhausen_Rings} and a recent calculation of Connolly--Davis \cite{CD2}, the algebraic $K$- and $L$-theory splitting obstruction groups associated to each connecting 3-sphere vanish:
\[
\widetilde{\mathrm{Nil}}_0=0 \;\text{ and }\; \mathrm{UNil}_5^h = 0.
\]
Therefore, by Theorem~\ref{thm:generalized_weinberger}(2), using Cappell's high-dimensional splitting theorem \cite{Cappell_free, Cappell_split}, we obtain inductively that $\#$ is a bijection.
\end{proof}
The following argument is partly based on Farrell's 1970 ICM address \cite{Farrell_ICM1970}.
\begin{proof}[Proof of Theorem~\ref{thm:fibering}]
One repeats the mapping torus argument of the proof of \cite[Theorem~5.6]{Khan_fibering}, constructing a homotopy equivalence $h: X \to X$ using $f$. Since the achieved homotopy equivalence $g: M \to X \rtimes_h S^1$ has Whitehead torsion $\tau(g) = \tau(f) = 0$, there are no splitting obstructions. Since $X$ has class $SES^h_+$, the proof of splitting $g$ along $X$ holds \cite[Thm.~5.4]{Khan_fibering}; one no longer requires that $M$ and $X$ be $\mathrm{DIFF}$ manifolds. Therefore the argument of \cite[Theorem~5.6]{Khan_fibering} shows that $f: M \to S^1$ is homotopic to a $\mathrm{TOP}$ $s$-block bundle projection.
\end{proof}
\section{Proofs for topological rigidity}\label{sec:rigidity_proofs}
The following elementary argument is similar to J.~Hillman's \cite[Cor.~6.7.2]{HillmanBook}.
\begin{proof}[Proof of Theorem~\ref{thm:rigidity}]
First, we show that the $s$-cobordism structure set $\mathcal{S}_\mathrm{TOP}^s(Z)$ is a singleton. Let $M$ be a compact topological 4-manifold, and let $h: M \to Z$ be a simple homotopy equivalence that restricts to a homeomorphism $\partial h: \partial M \to \partial Z$. Then the surgery obstruction $\sigma_4^s(\eta(h)) \in L_4^s(\pi,\omega)$ vanishes. Since $\sigma_4^s$ is injective, there exists a $\mathrm{TOP}$ normal bordism $F: W \to Z \times I$ to $\eta(h)$ from the identity $\mathrm{id}_Z$. Since $\sigma_5^s$ is surjective, there exists a $\mathrm{TOP}$ normal bordism $F': W' \to Z \times I$ to $\mathrm{id}_Z$ from $\mathrm{id}_Z$ with opposite surgery obstruction: $\sigma_5^s(F') = -\sigma_5^s(F)$. Hence the union
\[
F'' := F' \cup_{\mathrm{id}_Z} F: W' \cup_Z W \longrightarrow Z \times I
\]
is a $\mathrm{TOP}$ normal bordism to $\eta(h)$ from $\mathrm{id}_Z$ with vanishing surgery obstruction: $\sigma_5^s(F'')=0$. Therefore, by 5-dimensional $\mathrm{TOP}$ surgery theory \cite{Wall, KS}, we obtain that $F''$ is $\mathrm{TOP}$ normally bordant $\!\!~\mathrm{rel}~ \partial$ to a simple homotopy equivalence $F''': (W''';Z,M) \to (Z \times I; Z\times 0, Z \times 1)$ of manifold triads. Therefore we have found a $\mathrm{TOP}$ $s$-bordism to $h$ from $\mathrm{id}_Z$. That is, $\mathcal{S}_\mathrm{TOP}^s(Z)$ is a singleton $\{*\}$.
Next, observe that trivially we obtain an exact sequence of based sets:
\[
\mathcal{N}_\mathrm{TOP}(Z \times I) \xrightarrow{~\sigma_5^s~} L_5^s(\pi,\omega) \xrightarrow{~\partial~} \{*\} \xrightarrow{~\eta~} \mathcal{N}_\mathrm{TOP}(Z) \xrightarrow{~\sigma_4^s~} L_4^s(\pi,\omega).
\]
We declare the action of $L_5^s(\pi,\omega)$ on $\mathcal{S}_\mathrm{TOP}^s(Z)$ to be trivial.
Finally, if $\mathrm{Wh}_1(\pi)=0$, then homotopy equivalences to $Z$ are simple, and so $Z$ is topologically $s$-rigid.
\end{proof}
We employ a case of a lemma of Hillman \cite[Lem.~6.8]{HillmanBook}, providing its details.
\begin{proof}[Proof of Corollary~\ref{cor:unstable_rigidity}]
Let $k \geq 0$. By the Mayer--Vietoris sequence in homology, the Shaneson sequence in $L$-theory \cite{ShanesonSequence}, and the Ranicki assembly map \cite[p.~148]{RanickiTOP}, the following diagram commutes with right-split exact rows:
\[\begin{CD}
H_{5+k}(Z;\mathbb{L}_0) @>{i_*}>> H_{5+k}(Z \times S^1;\mathbb{L}_0) @>{\partial}>> H_{4+k}(Z;\mathbb{L}_0)\\
@VV{A_{5+k}^s(Z)}V @VV{A_{5+k}^s(Z \times S^1)}V @VV{A_{4+k}^h(Z)}V\\
L_{5+k}^s(Z) @>{i_*}>> L_{5+k}^s(Z \times S^1) @>{\partial}>> L_{4+k}^h(Z).
\end{CD}\]
Moreover, the algebraic right-splitting is given by multiplying local or global quadratic complexes by the symmetric complex of the circle. This choice of splitting commutes with the connective assembly maps $A_{5+k}^s(Z \times S^1)$ and $A_{4+k}^h(Z)$.
Assume $Z \times S^1$ is topologically rigid. Then $\mathcal{S}_\mathrm{TOP}^s(Z \times S^1)=\{*\}$. So, by Wall's surgery exact sequence \cite[\S10]{Wall} and Ranicki's identification of the surgery obstruction map with the assembly map \cite[Prop.~18.3(1)]{RanickiTOP} via topological transversality \cite{FQ}, we obtain that $A_{5}^s(Z \times S^1)$ is injective and $A_{6}^s(Z \times S^1)$ is surjective. Hence, using $k=0$ in the above diagram and the right-splitting, $\sigma_4^h = A_4^h(Z)$ is injective. Also, using $k=1$ in the above diagram, $\sigma_{5}^h = A_{5}^h(Z)$ is surjective. Therefore, by Theorem~\ref{thm:rigidity}, we obtain that $\mathcal{S}_\mathrm{TOP}^h(Z)=\{*\}$. Hence, since $\mathrm{Wh}_1(\pi_1 Z)=0$ by hypothesis, we conclude that $Z$ is topologically $s$-rigid.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:algebraic_rigidity}]
Denote $\Gamma := \pi_1(Z)$. Via topological transversality \cite{FQ}, there are commutative squares with bijective left vertical maps \cite[Prop.~18.3(1)]{RanickiTOP}:
\[\begin{CD}
\mathcal{N}_\mathrm{TOP}(Z) @>{\sigma_4^s}>> L_4^s(\Gamma)\\
@VV{\cap [Z]_{\mathbb{L}^0}}V @A{A_4^\Gamma}AA\\
H_4(Z;\mathbb{L}_0) @>{u_4}>> H_4(B\Gamma;\mathbb{L}_0)
\end{CD}
\qquad
\begin{CD}
\mathcal{N}_\mathrm{TOP}(Z \times I) @>{\sigma_5^s}>> L_5^s(\Gamma)\\
@VV{\cap [Z]_{\mathbb{L}^0}}V @A{A_5^\Gamma}AA\\
H_5(Z;\mathbb{L}_0) @>{u_5}>> H_5(B\Gamma;\mathbb{L}_0)
\end{CD}
\]
Here, we are using the identification $\mathcal{N}_\mathrm{TOP}(Z)=[Z/\partial Z,G/TOP]_+$. Since $Z$ is aspherical, the bottom horizontal maps are isomorphisms. Since $\Gamma$ is torsionfree with $\mathrm{cdim}(\Gamma)=4$ and has class $FJ_L$, we have $\mathrm{Wh}_1(\Gamma)=0$, the map $A_4^\Gamma$ is a monomorphism, and $A_5^\Gamma$ is an isomorphism. Hence $\sigma_4^s$ is injective and $\sigma_5^s$ is surjective. Therefore, by Theorem~\ref{thm:rigidity}, we obtain that $Z$ is topologically $s$-rigid and has class $SES^h_+$.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:mappingtorus}]
Let $\alpha: K \to K$ be the homeomorphism.
It follows from the homotopy sequence of a fibration that $Z = K \rtimes_\alpha S^1$ is aspherical. By a recent theorem\footnote{Their proof depends on G.~Perelman's affirmation of Thurston's Geometrization Conjecture (cf.~\cite{AndersonMT_Survey}) and on individual casework of S.~Roushon and P.~K\"uhl.} of Bartels--Farrell--L\"uck \cite{BFL_lattice}, we obtain that $\Gamma_0 := \pi_1(K)$ has class $FJ_L$.
Write $\Gamma := \pi_1(Z)$. Then $\Gamma = \Gamma_0 \rtimes_\alpha \mathbb{Z}$. By the excisive Wang sequence and the Shaneson Wang-type sequence, there is a commutative diagram with exact rows:
\[\begin{CD}
H_n(B\Gamma_0;\mathbb{L}) @>{1-\alpha_*}>> H_n(B\Gamma_0; \mathbb{L}) @>{i_*}>> H_n(B\Gamma; \mathbb{L}) @>{\partial}>> H_{n-1}(B\Gamma_0;\mathbb{L})\\
@VV{A_n^{\Gamma_0}}V @VV{A_n^{\Gamma_0}}V @VV{A_n^{\Gamma}}V @VV{A_{n-1}^{\Gamma_0}}V\\
L_n^{s={-\infty}}(\Gamma_0) @>{1-\alpha_*}>> L_n^s(\Gamma_0) @>{i_*}>> L_n^s(\Gamma) @>{\partial}>> L_{n-1}^{h=s}(\Gamma_0)
\end{CD}\]
Since $\Gamma_0$ is torsionfree and has class $FJ_L$, the non-connective assembly maps $A_*^{\Gamma_0}$ are isomorphisms. Hence, by the five lemma, the non-connective assembly maps $A_*^\Gamma$ are isomorphisms. Using topological transversality and Poincar\'e duality, similar to the proof of Corollary~\ref{cor:algebraic_rigidity}, by Theorem~\ref{thm:rigidity}, we obtain that $\mathcal{S}_\mathrm{TOP}^s(Z) = \{*\}$.
Hence, since $\mathrm{Wh}_1(\pi_1 Z)=0$, we conclude that $Z$ is topologically $s$-rigid.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:connectsum_vanishingsecondhomotopy}]
Since each $X_i$ is orientable and has class $SES^h_+$, by Theorem~\ref{thm:main}, we obtain that $X$ has class $SES^h_+$ and the following function is a bijection:
\[
\#~:~ \prod_{i=1}^n \mathcal{S}_\mathrm{TOP}^h(X_i) \longrightarrow \mathcal{S}_\mathrm{TOP}^h(X).
\]
Next, let $1 \leq i \leq n$. Consider the connective assembly map components \cite{TW}:
\begin{eqnarray*}
A_4 = \left(\begin{smallmatrix}I_0 & \kappa_2\end{smallmatrix}\right) &:&
H_4(B\Gamma_i; \mathbb{L}_0) = H_0(B\Gamma_i;\mathbb{Z}) \oplus H_2(B\Gamma_i;\mathbb{Z}_2) \longrightarrow L_4^h(\Gamma_i)\\
A_5 = \left(\begin{smallmatrix}I_1 & \kappa_3\end{smallmatrix}\right) &:&
H_5(B\Gamma_i; \mathbb{L}_0) = H_1(B\Gamma_i;\mathbb{Z}) \oplus H_3(B\Gamma_i;\mathbb{Z}_2) \longrightarrow L_5^h(\Gamma_i).
\end{eqnarray*}
Assume $\Gamma_i$ is torsionfree and $\pi_2(X_i)\otimes \mathbb{Z}_2 = 0$. Since $\Gamma_i$ has class $FJ_L$ and $\mathrm{cdim}(\Gamma_i) \leq 4$, we obtain that $A_4$ is a monomorphism and $A_5$ is an isomorphism.
Recall the universal covering $\widetilde{X}_i \to X_i$ is classified by a unique homotopy class of map $u: X_i \to B\Gamma_i$, which induces an isomorphism on fundamental groups.
Since $X_i$ is a closed oriented topological manifold, using topological transversality \cite{FQ}, the Quinn--Ranicki $H$-space structure on $G/TOP$, and Poincar\'e duality with respect to the $\mathbb{L}^0$-orientation \cite{RanickiTOP}, we obtain induced homomorphisms
\begin{eqnarray*}
u_4' &:& \mathcal{N}_\mathrm{TOP}(X_i) \cong [(X_i)_+, G/TOP]_+ \cong H_4(X_i;\mathbb{L}_0) \xra{~u_*~} H_4(B\Gamma_i;\mathbb{L}_0)\\
u_5' &:& \mathcal{N}_\mathrm{TOP}(X_i \times I) \cong [(X_i)_+ \wedge S^1,G/TOP]_+ \cong H_5(X_i;\mathbb{L}_0) \xra{~u_*~} H_5(B\Gamma_i;\mathbb{L}_0)
\end{eqnarray*}
such that the surgery obstruction map factors: $\sigma_4^h = A_4 \circ u_4'$ and $\sigma_5^h = A_5 \circ u_5'$. Recall the Hopf sequence, which is obtained from the Leray--Serre spectral sequence:
\[
H_3(X_i;\mathbb{Z}_2) \xra{u_3} H_3(B\Gamma_i;\mathbb{Z}_2) \longrightarrow H_2(\widetilde{X}_i;\mathbb{Z}_2) \longrightarrow H_2(X_i;\mathbb{Z}_2) \xra{u_2} H_2(B\Gamma_i;\mathbb{Z}_2) \longrightarrow 0.
\]
Since $H_2(\widetilde{X}_i;\mathbb{Z}_2) = \pi_2(X_i) \otimes \mathbb{Z}_2=0$, we have $\mathrm{Ker}(u_2)=0$ and $\mathrm{Cok}(u_3)=0$.
Hence
\begin{eqnarray*}
\mathrm{Ker}(\sigma_4^h) &=& \mathrm{Ker}(u_4') = \mathrm{Ker}(u_0) \oplus \mathrm{Ker}(u_2) = 0\\
\mathrm{Cok}(\sigma_5^h) &=& \mathrm{Cok}(u_5') = \mathrm{Cok}(u_1) \oplus \mathrm{Cok}(u_3) = 0. \end{eqnarray*}
Therefore, since $X_i$ has class $SES^h_+$ and $\mathrm{Wh}_1(\Gamma_i)=0$, we obtain that $\mathcal{S}_\mathrm{TOP}^s(X_i)$ is a singleton. Thus, since $\#$ is a bijection, the Whitehead group $\mathrm{Wh}_1(\Gamma)$ and $s$-cobordism structure set $\mathcal{S}_\mathrm{TOP}^s(X)$ of $X = X_1 \# \cdots \# X_n$ are singletons also.
\end{proof}
\bibliographystyle{alpha}
\section{Model}
Our analysis is based on the Majorana Single Charge Transistor (MSCT), \cite{zazunov2011,hutzen2012}
which results from the usual Majorana setup of a quantum wire with strong spin-orbit interaction in a
magnetic field, but where the coupled superconductor is mesoscopic and floating, with a charging energy $E_C$,
where $E_C \ll \Delta_{TS}$,
with $\Delta_{TS}$ being the proximity induced gap of the topological superconductor. We furthermore
assume that $E_C$ is large compared with all other energy scales, notably the tunnel couplings $t_1$ and $t_2$ to
the leads, temperature $T$ and applied voltage bias $V$. This system is described by the Hamiltonian
$H = H_{el} + H_T + H_C$.
The leads are treated as non-interacting reservoirs,
$H_{el} = \sum_{j,k,\sigma} \epsilon_{jk} c_{jk\sigma}^\dagger c_{jk\sigma}$,
where $c_{jk\sigma}$ are electron operators for leads $j=1,2$, momenta $k$ and spins $\sigma=\uparrow,\downarrow$, with the
dispersion $\epsilon_{jk}$. The coupling between the leads and the superconductor is restricted to
tunnelling into the Majorana states $\gamma_1$ and $\gamma_2$, and we explicitly exclude the possibility of exciting
quasiparticles. \cite{plugge2016}
The tunnelling Hamiltonian can then be written as \cite{hutzen2012}
$H_T = \sum_{k} ( t_1 c_{1k\downarrow}^\dagger \eta_1 + i t_2 c_{2k\uparrow}^\dagger \eta_2 ) + \text{H.c.}$
We note that tunnelling through the Majoranas is spin polarized, \cite{sticlet2012,oreg2010}
for instance, with opposite spins for both Majoranas if the magnetic field is applied perpendicular to the spin-orbit
polarization direction, as written here. The spin polarization may also be non-antiparallel, if the magnetic field is tilted or if there exists a mixture of Rashba and Dresselhaus spin orbit coupling. For the purposes of this paper, it is only important that the coupling to the leads no longer has the spin degree of freedom. This allows us to effectively eliminate the spin index in the notations and we write $c_{1k}=c_{1k\downarrow}$ and $c_{2k}=c_{2k\uparrow}$. Furthermore
$t_1$ and $t_2$ are the tunnelling amplitudes and
$\eta_{1,2} = d \pm \mathrm{e}^{-i\chi} d^\dagger$, with $d = (\gamma_1+i\gamma_2)/\sqrt{2}$. The form of the $\eta_{1}$ and $\eta_2$ operators takes into account that
tunnelling between Majorana and lead can occur over two channels: by removal of an electron from the fermionic state
$d$ obtained by the superposition of $\gamma_1$ and $\gamma_2$ (normal tunnelling), or by splitting a Cooper pair and transferring one electron to the lead
and the other electron to the $d$ state (anomalous tunnelling) as shown in Fig. \ref{elandscape}. The removal of a Cooper pair is expressed by
$\mathrm{e}^{-i\chi}$, where $\chi$ is the superconducting phase operator, which obeys $[N_C,\mathrm{e}^{-i\chi}]= -\mathrm{e}^{-i\chi}$
where $N_C$ is the Cooper pair number operator. We have deliberately omitted Andreev tunnelling processes from our analysis for two reasons. First, their amplitude is proportional to $t^2/\Delta$ and so is much smaller than the amplitude, $t$, relevant for the considered processes. Second, an Andreev process changes the number of charges on the nanowire by $\pm 2$, leaving the system in an excited state that needs further relaxation, and so the Andreev processes exist only at higher orders.
Finally, the charging state of the Majorana wire is given by
$H_C = E_C (2N_C + n_d - n_g)^2$, where $n_d = d^\dagger d$ and $n_g$ is a constant controlled by the gate voltage $V_g$.
In contrast to Ref. \onlinecite{hutzen2012} we do not consider any Josephson coupling to a further superconductor.
In this work, we tune $n_g$ to the value $n_g = 2n - \frac{1}{2}$, with $n$ an integer. This results in a charging
ground state degeneracy between the states $(N_C=n,n_d=0)$ and $(N_C=n-1,n_d=1)$, with the next excited states
at $(N_C=n,n_d=1)$ and $(N_C=n-1,n_d=0)$ having an excitation energy $2E_C$, as shown in Fig. \ref{elandscape}. Note that we have neglected any Majorana interaction energy, $H_{int}=\epsilon_m\left(n_d-1/2\right)$, which would break the ground state degeneracy. The energy $\epsilon_m$ is proportional to the Majorana wave function overlap and can be made exponentially small by a sufficiently large system size. Although this must be balanced by the requirement of maintaining a large $E_C$, this is not an issue since degeneracy can be restored by retuning $n_g$ to $n_g=2n-\frac{1}{2}-\frac{\epsilon_m}{2E_C}$. While this retuning does cause a splitting between the first excited states of $4\epsilon_m$, the degeneracy of these states is inessential to our results and as long as $\epsilon_m \ll E_C$ this perturbation to the excited state energy is of no consequence.
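As an arithmetic check of this level scheme, evaluating $H_C$ at $n_g = 2n - \frac{1}{2}$ gives
\begin{align*}
E_C\left(2n + 0 - n_g\right)^2 &= \frac{E_C}{4} = E_C\left(2(n-1) + 1 - n_g\right)^2,\\
E_C\left(2n + 1 - n_g\right)^2 &= \frac{9E_C}{4} = E_C\left(2(n-1) + 0 - n_g\right)^2,
\end{align*}
so the two ground states are indeed degenerate and the first excited states lie $2E_C$ above them, as shown in Fig. \ref{elandscape}.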
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig_2}
\caption{\label{elandscape}
Charging energy against total number of electrons, $N_e$, in the nanowire for $n_g = 2n-\frac{1}{2} $.
Filled (empty) circles represent states with $n_d=1$ ($n_d=0$). Both the ground and excited states are degenerate,
with transitions between them via normal tunnelling (a)
and anomalous tunnelling (b). Note that there is no process mediating
transitions between the two excited states.
}
\end{figure}
Further excited states appear only at energy $4E_C$ above the first excited states and are neglected
in the present low energy description.
The resulting situation is reminiscent of the large interaction limit of the Anderson model with
two-fold degenerate ground and first excited states, yet with the restriction that there is no
direct scattering process connecting the two excited states because they have different total particle numbers,
$2n-2$ and $2n+1$. This excludes the virtual spin-flip type processes, that dominate Kondo physics, arising
from the usual mapping of the Anderson model on the Kondo model, and the resulting physics for the present
situation is fundamentally different. To discriminate it from the Kondo type behaviour obtained by a mutual coupling
of several such Majorana wires through a common superconductor,
\cite{fu2010,zazunov2011,hutzen2012,zazunov2012,wang2013,ulrich2015,sau2015,plugge2015,plugge2016} and from the behaviour of independent Majorana states,
we call the effective
model obtained from an analogous mapping the \emph{Kondorana model} because it combines Kondo and Majorana
properties on an equal footing, but exhibits exciting new physics. We note in passing that the other charging degeneracy point, $n_g = 2n+\frac{1}{2}$, also results in a many-body state similar to the one found in this work, due to particle--hole-like symmetry.\cite{plugge2015}
We construct the effective model through a Schrieffer-Wolff transformation, whose details are provided
in Appendix A.
As indicated in Fig. \ref{elandscape},
the normal particle tunnelling between leads and Majoranas generates the high energy excitations. Since there is no
direct transition between both excited states, the virtual excitations into the high energy sector generate
an $n_d$ dependent scattering potential between electrons, including a teleportation type scattering
across the Majorana wire $\sim c_{1k}^\dagger c_{2k'}$, \cite{fu2010} but do not cause any change in the Majorana parity.
The resulting effective Hamiltonian is
\begin{align}
H_\text{eff}
&= H_{el}
+ \frac{1}{\sqrt{2}} \sum_k \left[\bigl(J^{(1)}_\pm c_{1k}^\dagger + i J^{(2)}_\pm c_{2k}^\dagger \bigr)S^+ + \text{H.c.}\right]
\nonumber\\
&+ \sum_{k,k'}
\Bigl[
J_z^{(11)}c^\dagger_{1k}c_{1k'} +J_z^{(22)}c^\dagger_{2k}c_{2k'}
\nonumber \\
&+ J_z^{(12)} i \bigl(c^\dagger_{1k}c_{2k'} -c^\dagger_{2k'}c_{1k}\bigr)
\Bigr]S^z,
\label{eq:Heff}
\end{align}
where $S^+ = \sqrt{2} d^\dagger \mathrm{e}^{-i\chi}$, $S^- = \sqrt{2} d \mathrm{e}^{i\chi}$, $S^z = 2n_d -1$
are pseudo-spin operators, and $J^{(j)}_\pm = t_{j}$, $J^{(jj')}_z = t_j t_{j'}/2E_C$ for $j,j'=1,2$. A Zeeman-like term arising from the Schrieffer-Wolff transformation is omitted in Eq. (1) since it can be eliminated in the same way as $\epsilon_m$ (see above and Appendix A).
This Hamiltonian differs from the Kondo Hamiltonian in two essential ways.
Firstly, it cannot be written down as a pure spin-spin interaction because it involves
the tunnelling terms $J_\pm^{(j)}$ which create and annihilate electrons while flipping $S$. Secondly, the $S^z$ term couples in parallel to an $s^z$ and $s^y$ electron
pseudo-spin: Since the tunnelling electrons have a spin polarization locked to the lead,
we can define a lead-spin pseudo-spin with projections
$s \in \{s_+,s_-\} = \{(j=1,\downarrow),(j=2,\uparrow)\}$
and operators
$s^{\alpha}_{k,k'} = c^\dagger_{ks} \sigma^\alpha_{s,s'} c_{k's'}$ for $\sigma^\alpha$
the Pauli matrices (with $\sigma^0$ the unit matrix).
This allows us to write the $S^z$ term as
$ \sum_{k,k'} [\frac{1}{2} (J_z^{(11)} + J_z^{(22)}) s^0_{kk'} + \frac{1}{2}(J_z^{(11)} - J_z^{(22)}) s^z_{kk'}
+J_z^{(12)} s^y_{kk'}]S^z$. This special form, mainly the appearance of the $s^y_{kk'}$ term,
leads to a behaviour of the Kondorana model that is entirely different from the usual Kondo physics.
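As a consistency check on this pseudo-spin language, a short computation from the definitions (using that $\mathrm{e}^{\pm i\chi}$ commutes with $d$ and lowers or raises the Cooper pair number by one) shows that
\[
[S^+,S^-] = 2S^z, \qquad [S^z,S^\pm] = \pm 2S^\pm, \qquad \left(S^\pm\right)^2 = 0,
\]
so $S^\pm$ and $S^z$ act as Pauli-type pseudo-spin operators on the two degenerate charge states, with $S^+$ mapping $(N_C=n,n_d=0)$ to $(N_C=n-1,n_d=1)$ and $S^-$ acting inversely.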
\section{Renormalization} The non-Kondo behaviour of the model becomes evident if we
consider a renormalization group analysis. Since $H_\text{eff}$ describes free electrons
that are coupled to a single localized pseudo-spin $S$, the poor man's scaling technique \cite{anderson1970}
provides a transparent approach to the physics while being accurate.
The renormalization incorporates the modification of the coupling constants $J$ by virtual
scattering processes to high energies. The corresponding diagrams for the $J_\pm^{(j)}$
coefficients are shown in Fig. \ref{jpmscaling}, and details of the calculation are given in Appendix B. We obtain the scaling equations
\begin{equation} \label{eq:scaling}
\frac{d}{d\ell} \begin{pmatrix} J^{(1)}_\pm \\ J^{(2)}_\pm \end{pmatrix}
=
-2 \rho
\begin{pmatrix}
J^{(11)}_z & -J^{(12)}_z \\
-J^{(12)}_z & J^{(22)}_z
\end{pmatrix}
\begin{pmatrix} J^{(1)}_\pm \\ J^{(2)}_\pm \end{pmatrix},
\end{equation}
where $\ell \sim -\ln(D/D_0)$ and $\rho \sim 1/D_0$, with $D$ the running cutoff energy and $D_0$ the initial electron bandwidth.
From a similar analysis we find that $d J^{(jj')}_z/d\ell = 0$ for all $j,j'$,
which is a consequence of the fact that there are no $S^\pm s^\mp$ terms in $H_\text{eff}$ and there are thus
no $J^{(j)}_\pm$ terms contributing to the $J_z^{(jj')}$ renormalization.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig_3}
\caption{\label{jpmscaling}
Scattering channels which contribute to renormalization of $J_\pm^{(1)}$. Diagrams (a) and (b) show particle mediated scattering via the left and right leads, respectively. Similarly, (c) and (d) depict hole mediated scattering. Curved lines represent lead electrons, whilst the straight lines correspond to the nanowire with dashed and solid lines denoting $S_z=-1$ and $S_z=+1$, respectively.
}
\end{figure}
The renormalization flow of $J^{(j)}_\pm$ is governed by the eigenvalues of the
matrix in Eq. \eqref{eq:scaling}, which remain constant due to the invariance of the $J^{(jj')}_z$.
Since $J^{(jj')}_z = t_j t_{j'}/2E_C$ we find that the matrix has the eigenvalues $0$ and
$\lambda = J^{(11)}_z + J^{(22)}_z = (t_1^2+t_2^2)/2 E_C$, such that
\begin{equation} \label{eq:scaling_result}
\begin{pmatrix} J^{(1)}_\pm(\ell) \\ J^{(2)}_\pm(\ell) \end{pmatrix}
=
\frac{t_1^2-t_2^2}{t_1^2+t_2^2} \begin{pmatrix} t_1 \\ -t_2 \end{pmatrix} \mathrm{e}^{-2 \rho \lambda \ell}
+
\frac{2 t_1 t_2}{t_1^2+t_2^2} \begin{pmatrix} t_2 \\ t_1 \end{pmatrix}.
\end{equation}
The scaling therefore interpolates between the bare $J^{(j)}_\pm$ values and the fixed points
$\bar{J}^{(1)}_\pm = 2 t_1 t_2^2 / (t_1^2 + t_2^2)$ and
$\bar{J}^{(2)}_\pm = 2 t_1^2 t_2 / (t_1^2 + t_2^2)$, as shown in Fig. \ref{rgflow}, and does not display the weak or strong coupling
behaviour of a regular Kondo system. Although the fixed point is finite and the Hamiltonian maintains its form, the resulting state has an involved non-local many-body structure. This is exemplified by the fact that the tunnel coupling asymmetry, $t_1>t_2$, is reversed such that $t_1^*<t_2^*$ at the fixed point, showing that even for local coupling, the entire system including the leads is involved. Indeed, the state revealed above is highly non-local, extending over both leads regardless of nanowire length, and is comprised of lead electrons, Majorana modes and the superconducting condensate. We believe that such a state surpasses the threshold of being merely described as dressed and requires the many-body epithet.
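The closed-form solution \eqref{eq:scaling_result} can be cross-checked against a direct numerical integration of Eq. \eqref{eq:scaling}; the following minimal Python sketch does so, with purely illustrative parameter values (not those of Ref. \onlinecite{parameters}):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

t1, t2, E_C, rho = 0.1, 0.01, 1.0, 1.0      # illustrative values only

# Matrix of J_z^{(jj')} couplings entering Eq. (2)
M = np.array([[t1 * t1, -t1 * t2],
              [-t1 * t2, t2 * t2]]) / (2.0 * E_C)
lam = (t1**2 + t2**2) / (2.0 * E_C)          # nonzero eigenvalue of M
ell = np.linspace(0.0, 3.0 / (2.0 * rho * lam), 200)

# Numerical integration of dJ/dl = -2 rho M J
sol = solve_ivp(lambda l, J: -2.0 * rho * (M @ J), (ell[0], ell[-1]),
                [t1, t2], t_eval=ell, rtol=1e-10, atol=1e-12)

# Closed-form solution, Eq. (3)
a = (t1**2 - t2**2) / (t1**2 + t2**2)
b = 2.0 * t1 * t2 / (t1**2 + t2**2)
J1 = a * t1 * np.exp(-2.0 * rho * lam * ell) + b * t2
J2 = -a * t2 * np.exp(-2.0 * rho * lam * ell) + b * t1

print(np.max(np.abs(sol.y - np.vstack([J1, J2]))))   # agreement ~ 1e-10
\end{verbatim}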
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{fig_4}
\caption{\label{rgflow}
Change in anomalous tunnel couplings, $J_\pm^{(j)}$, with scaling parameter $\ell$, as given by Eq. \eqref{eq:scaling_result} for $t_1 = 0.1$meV, $t_2=0.01$meV and further parameters \cite{parameters} based on a recent experiment. \cite{albrecht2016} The solid purple and dashed red lines show $J_\pm^{(1)}$ and $J_\pm^{(2)}$, respectively. The vertical dashed black line is the value of $\ell$ corresponding to the crossover temperature $T_c$. The couplings display a rapid, exponential change from their initial values of $t_1$ and $t_2$ as $\ell$ increases. This high temperature sensitivity even far above $T_c$ is due to the large ratio $t_1/t_2=10$. }
\end{figure}
It must be stressed that the existence of the 0 eigenvalue is a direct consequence of the
$S^z s^y$ term in the Hamiltonian which incorporates the teleportation contribution unique
to the Majorana system, and that the scaling equations \eqref{eq:scaling_result} would be hard
to obtain in any other system. In the absence of the $S^z s^y$ term, the renormalization would flow to the regular weak coupling limit of the ferromagnetic Kondo model.
A significant consideration is whether or not the fixed point $\bar{J}^{(j)}_\pm$ will be reached in practice.
The final value for $\ell$ is determined by the cutoff scale $D$ and the renormalization stops
when $D$ becomes equal to the thermal energy $k_B T$ or any voltage bias applied to the system.
The crossover scale from the bare to the renormalized values is obtained by setting $2 \rho \lambda \ell \sim 1$,
which resolves to
\begin{equation}\label{T_c}
k_B T_c \sim D_0 \mathrm{e}^{- 1/2\rho\lambda} = D_0 \mathrm{e}^{-E_C D_0/ (t_1^2+t_2^2)}.
\end{equation}
But, since the $J^{(j)}_\pm$ only renormalize for $t_1 \neq t_2$,
this only makes sense for substantially different $t_1$ and $t_2$, as otherwise the changes
in $J^{(j)}_\pm$ are small. Substituting realistic system parameters \cite{parameters} into Eq. \eqref{T_c}, we notice that the flow is very slow, and practically the fixed point is never reached. Due to this it is also of little relevance if the fixed point remains finite when further corrections beyond poor man's scaling are taken into account. Such corrections have an even slower renormalization flow and are always cut off before becoming important.
\section{Transport} A straightforward verification of the behaviour predicted by Eq. \eqref{eq:scaling_result} can be achieved
by measuring the two terminal conductance of the topological superconductor through the Majorana states. Neglecting terms
in Eq.~\eqref{eq:Heff} proportional to $J_z^{(jj')}$, which are a factor $t/E_C$ smaller than the anomalous tunnelling
processes, the effective tunnelling Hamiltonian is given by
$H_T = \sum_k [J_\pm^{(1)}c^\dagger_{1k}f+J_\pm^{(2)}c_{2k}^\dagger f + \text{h.c.}]$, where we define
the composite fermion, $f=d^\dagger e^{-i\chi}$, and where the amplitudes $J_\pm^{(j)}$ are the results of
the renormalization flow.
If we further treat the leads in the wide-band limit, then the situation is exactly analogous to a non-interacting resonant
level model, for which the (peak maximum) differential conductance at $eV<k_BT$ is \cite{haug1996}
\begin{equation}\label{conductance}
G =K\frac{|J^{(1)}_\pm|^2|J^{(2)}_\pm|^2}{|J^{(1)}_\pm|^2+|J^{(2)}_\pm|^2},
\end{equation}
where $K= \frac{\pi^2 e^2\rho}{h k_BT}$, with $e$ the electronic charge and $h$ Planck's constant. In principle, this conductance offers two signatures of the many-body state found above. First, at constant temperature, $T$, the variation of conductance with changing $t_1,t_2$ asymmetry is markedly different for $T\gg T_c$ and $T\ll T_c$. In the former case, the conductance is $G = K\frac{t_1^2t_2^2}{t_1^2+t_2^2}$,
whereas, at low temperatures, we find that
\begin{equation}\label{lowG}
G \simeq K\frac{4t_1^2t_2^2\left[t_1^2t_2^2+\left(t_1^2-t_2^2\right)^2\mathrm{e}^{-\alpha}\right]}{\left(t_1^2+t_2^2\right)^3} \hspace{5pt} \text{at $T\ll T_c$},
\end{equation}
\noindent where $\alpha = \frac{\ln(k_BT/D_0)}{\ln(k_BT_c/D_0)}$. However, for realistic system parameters, Eq. \eqref{lowG} implies that, even though $T_c$ may be just about realisable in experiments, the temperature at which true fixed point behaviour is achieved is several orders of magnitude lower.
We therefore propose a further test for the existence of a many-body state, at $T>T_c$. Fixing the system parameters, but varying $T$, results in a distinctive signature, as shown in Fig. \ref{GTvsT}. Here we plot the product of $G$ and $T$, to remove the direct $1/T$ dependence in Eq. \eqref{conductance}.
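For reference, curves of this kind can be generated directly from Eqs. \eqref{eq:scaling_result} and \eqref{conductance}, up to the overall prefactor $KT$; in the sketch below, $D_0$ and $E_C$ are placeholder values rather than the parameters of Ref. \onlinecite{parameters}, so only the qualitative behaviour is meaningful:
\begin{verbatim}
import numpy as np

t1, t2 = 0.1, 0.01        # meV, as in Fig. 4
D0, E_C = 1.0, 1.0        # meV; assumed placeholder values
kB = 8.617e-2             # meV / K
Tc = (D0 / kB) * np.exp(-E_C * D0 / (t1**2 + t2**2))   # Eq. (4)

a = (t1**2 - t2**2) / (t1**2 + t2**2)
b = 2.0 * t1 * t2 / (t1**2 + t2**2)

def G_times_T(T, scaled=True):
    """G*T up to the T-independent factor K*T, from Eqs. (3) and (5)."""
    if scaled:   # couplings renormalized down to the thermal cutoff
        alpha = np.log(kB * T / D0) / np.log(kB * Tc / D0)
        J1 = a * t1 * np.exp(-alpha) + b * t2
        J2 = -a * t2 * np.exp(-alpha) + b * t1
    else:        # bare couplings
        J1, J2 = t1, t2
    return J1**2 * J2**2 / (J1**2 + J2**2)

for T in Tc * np.array([1e1, 1e3, 1e5]):
    print(T, G_times_T(T), G_times_T(T, scaled=False))
\end{verbatim}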
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{fig_5}
\caption{\label{GTvsT}
Variation of conductance amplitude with temperature, adjusted to account for generic $1/T$ dependence, as given by Eqs. \eqref{eq:scaling_result} and \eqref{conductance}.
The solid blue line depicts the result for a many-body state whilst the dashed orange line corresponds
to the bare tunnel couplings. The dot-dashed magenta line is a high temperature ($T\gg T_c$), high asymmetry ($t_1\gg t_2$) approximation, $G=K t_2^2\left(\mathrm{e}^{-2\alpha}-4\mathrm{e}^{-\alpha}+4\right)$, where $\alpha = \frac{\ln(k_BT/D_0)}{\ln(k_BT_c/D_0)}$. The parameters used \cite{parameters} imply
$T_c = 2$ mK.
}
\end{figure}
It is remarkable that, even at temperatures well above $T_c$, there is a clear difference between the scaled result and the result found from the bare tunnel couplings. That the influence of the many-body state extends to such high temperatures is a result of the strong $J_\pm^{(jj')}$ dependence in both the numerator and denominator of Eq. (6). Observation of the characteristic behaviour shown in Fig. \ref{GTvsT} appears to be within reach of current experiments and would provide compelling testament to the importance of many-body effects in this system.
\section{Conclusions}
A floating topological superconductor is an obvious Majorana-based system in which to explore Kondo physics. The Majorana modes of the superconducting condensate are analogous to magnetic impurities or quantum dots in the traditional Kondo effect, whilst the metallic leads fill the role of the electronic continuum.
One might initially expect this system to exhibit something close to conventional Kondo physics, possibly with a slight modification due to the Majorana modes. However, as we have shown in this work, this is not the case. Using a Schrieffer-Wolff transformation and a scaling analysis to eliminate the high energy sectors in the wire and leads respectively, we have demonstrated that the system flows to an intermediate fixed point, rather than the strong or weak coupling of the Kondo model. Indeed, we have shown that the presence of Majoranas is essential for this intermediate scaling to exist.
The distinct behaviour arising from the interplay of Kondo and Majorana physics motivates our use of the term \textit{Kondorana} to describe our model and the many-body state that arises from it. We have suggested possible experimental transport signatures of this state which are, in principle, within reach of current experiments.
\section{Acknowledgements} We thank C. J. F. Carroll, R. Egger, K. Grove-Rasmussen, C. A. Hooley, M. Scheurer, P. Wahl and A. A. Zyuzin for valuable discussions and comments. IJvB acknowledges studentship funding from EPSRC under grant no. EP/I007002/1.
\section{Appendix A: Derivation of Effective Hamiltonian}
To find an effective low-energy theory of the full Hamiltonian, we carry out a Schrieffer-Wolff transformation
defined by the unitary transformation $H_{\text{eff}} = e^{W}He^{-W}$. This transformation is chosen such that
it eliminates the tunnelling processes into the high energy sector of the model and replaces them by
effective low-energy processes created by virtual high energy excursions. With our choice of tuning the gate
to the degenerate ground states $(N_C=n,n_d=0)$ and $(N_C=n-1,n_d=1)$, the direct tunnelling terms provide the
excitations to the high energy sector, given by the Hamiltonian
\begin{equation}
H_1 = \sum_k ( t_1 c_{1k}^\dagger d + i t_2 c_{2k}^\dagger d ) + \mathrm{H.c.},
\end{equation}
whereas the low energy sector is described by
\begin{align}
H_0 = H_{el} + H_C + \sum_k ( t_1 c_{1k}^\dagger \mathrm{e}^{-i\chi} d^\dagger + i t_2 c_{2k}^\dagger \mathrm{e}^{-i \chi} d^\dagger) + \mathrm{H.c.}
\end{align}
Expanding the unitary transformation in $W$ leads then to the effective Hamiltonian
\begin{equation}
H_{\text{eff}} = H_0 + \frac{1}{2}\left[W,H_1\right],
\end{equation}
in which we have required $\left[W,H_0\right] = -H_1$ such that the first order high energy excitations
vanish. This requirement has the solution
\begin{equation}
W = \sum_k \Xi(\epsilon_k) \bigl(t_1 c^\dagger_{1k}d -i t_2 c^\dagger_{2k}d \bigr) - \text{H.c.},
\end{equation}
with $\Xi(\epsilon_k) = [\epsilon_k-E_C(4N_C+1-2n_g)]^{-1}$.
In deriving this result we have neglected
Andreev type processes of the form $c^\dagger c^\dagger \mathrm{e}^{-i\chi}$. Such processes are indeed generated
at second order in tunnelling, but since they change the number of charges on the wire by 2, they
always lead to high energy excitations and contribute to the low energy theory only at order
$\mathcal{O}(t_j^3/E_C^2)$.
Since it is always possible to choose real $t_1, t_2$ (any phase can be absorbed by shifting the
phases of the lead electrons through a gauge transformation), we then find that the
effective Hamiltonian is given by
\begin{align}
&H_{\text{eff}}
=
E_C (2N_C +n_d -n_g)^2
\nonumber\\
&+
\sum_k
\Bigl[
\epsilon_k
\bigl( c^\dagger_{1k}c_{1k}+c^\dagger_{2k}c_{2k}\bigr)
\nonumber\\
&+
\bigl(
t_{1} c^\dagger_{1k}d^\dagger e^{-i\chi}
+
it_{2}c^\dagger_{2k}d^\dagger e^{-i\chi}
+ \text{H.c.}
\bigr)
\Bigr]
\nonumber\\
&+\sum_{k,k'} \Xi(\epsilon_k)
\Bigl[
t_1^2c^\dagger_{1k}c_{1k'}
+
t^2_2c^\dagger_{2k}c_{2k'}
\nonumber\\
&-
\delta_{k,k'}\bigl(t_1^2+t_2^2\bigr) n_d
+
it_1t_2
\bigl(c^\dagger_{1k}c_{2k'}-c^\dagger_{2k}c_{1k'}\bigr)
\Bigr].
\end{align}
The term with $\delta_{k,k'}$ in the last line produces an energy shift for the
$n_d$ level, similar to an overlap integral between the two Majorana wave functions, or a Zeeman-type term for the pseudo-spin $S^z$ in Eq. \eqref{eq:Heff}.
If $\rho(\epsilon)$ is the density of states and $D_0$ is the electron bandwidth
such that $\rho \sim 1/D_0$, we can estimate the magnitude of this term
as
\begin{align}
&(t_1^2+t_2^2)\sum_{k,k'} \delta_{k,k'} \Xi(\epsilon_k)
=
(t_1^2+t_2^2) \int d\epsilon \rho(\epsilon) \Xi(\epsilon)
\nonumber\\
&\sim
-\frac{t_1^2+t_2^2}{D_0} \ln\left[\frac{D_0-E_C(4N_C+1-2n_g)}{D_0+E_C(4N_C+1-2n_g)}\right].
\label{eq:nd_energy_ren}
\end{align}
For $E_C < D_0$ this term is on the order of $\mathcal{O} \left(t_j^2 E_C/ D_0^2\right)$ and thus smaller
than all other considered energies. Yet it can always be removed by a slight adjustment
of $n_g$ through the gate voltage since it plays the same role as the charging energy,
and we shall set it to zero henceforth.
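To make the order estimate explicit: writing $c := E_C(4N_C+1-2n_g) = \pm 2E_C$ for the two degenerate ground states and assuming $|c| \ll D_0$, the logarithm expands as
\[
\ln\left[\frac{D_0-c}{D_0+c}\right] \approx -\frac{2c}{D_0},
\]
so the shift \eqref{eq:nd_energy_ren} is of order $(t_1^2+t_2^2)E_C/D_0^2$, as claimed above.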
For the remaining effective theory the $\epsilon_k$ term in $\Xi(\epsilon_k)$ is
unimportant since it causes only small corrections for the low-energy properties, and we shall
drop it in the following and use the approximation $\Xi(\epsilon_k) = \Xi = - [E_C(4N_C+1-2n_g)]^{-1}$.
We finally notice that with $n_g = 2n-1/2$,
\begin{equation} \label{eq:NC_nd_id}
4N_C+1-2n_g =
\begin{cases}
+2 & \text{for $(N_c=n,n_d=0)$},\\
-2 & \text{for $(N_c=n-1,n_d=1)$},
\end{cases}
\end{equation}
which allows us to write $\Xi = \left(2n_d-1\right)/2E_C$ for these two states.
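Indeed, inserting the two cases of Eq. \eqref{eq:NC_nd_id} into $\Xi = -[E_C(4N_C+1-2n_g)]^{-1}$ gives $\Xi = -1/2E_C$ for $(N_c=n,n_d=0)$ and $\Xi = +1/2E_C$ for $(N_c=n-1,n_d=1)$, both of which are reproduced by $(2n_d-1)/2E_C$.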
With these results we find that the effective Hamiltonian
takes the form of Eq. \eqref{eq:Heff}, with the latter identities leading to the
$S^z$ pseudo-spin term. Deviations from Eq. \eqref{eq:NC_nd_id}, such as by tuning $n_g$
slightly away from $2n-1/2$ due to compensation of Eq. \eqref{eq:nd_energy_ren} or due to
the neglected dependence of $\Xi$ on $\epsilon_k$ cause only corrections that either
remain proportional to $S^z$ or are independent of $S^z$ and consist only of renormalizations
of the chemical potentials in the leads. Equation \eqref{eq:Heff} therefore represents the
generic effective Hamiltonian of the system.
\section{Appendix B: Renormalization of Couplings }
Poor man's scaling consists in a renormalization group approach in which excitations to high energy states are successively
integrated out, and the bandwidth is effectively reduced. In the following we label with $q,q'$ these high energy states
and with $k,k'$ the initial and final low energy states. The renormalization proceeds by directly producing corrections
to the Hamiltonian.
We follow a diagrammatic variant of poor man's scaling. The first point to note is that the $J_z^{(jj')}$ couplings are invariant under scaling. The reason for this can be understood by considering Fig. \ref{jzscaling}, which shows the two vertex process contributing to the scaling of $J_z^{(11)}$.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{fig_6}
\caption{\label{jzscaling} The lowest order particle mediated process contributing to the scaling of $J_z^{(11)}$. The line between the two vertices denotes a particle excited to the high energy shell. Note that the two scattering events commute, so the pathway shown here is exactly cancelled by a corresponding hole-mediated process and does not contribute to scaling.
}
\end{figure}
Neither of the two vertices causes a change in Majorana parity, i.e.\ an $S^z$ spin flip, and they therefore commute. The result is that the hole-mediated version of the depicted process will result in exact cancellation. Indeed, since only terms in the Hamiltonian proportional to $J_\pm^{(j)}$ change $S^z$, and such terms constitute terminal vertices, as shown in Fig. \ref{jpmscaling}, there are no scattering diagrams, to any order, that result in scaling of $J_z^{(jj')}$.
We now turn to the scaling of $J_\pm^{(1)}$, for which the first order scattering processes are shown in Fig. \ref{jpmscaling}. The particle mediated channels, with excitations $q,q'$ into an energy shell of width $\delta D$ at the upper band edge, lead to the following correction of the Hamiltonian,
\begin{align}
&\delta H_p = \sum_{q,q'}\Bigl[
J_z^{(11)}S^zc^\dagger_{1k}c_{1q'}\left(E-H_0\right)^{-1}J_\pm^{(1)}c^\dagger_{1q}\frac{1}{\sqrt{2}}S^+
\nonumber\\
&+
iJ_z^{(12)}S^zc^\dagger_{1k}c_{2q'}\left(E-H_0\right)^{-1}iJ_\pm^{(2)}c^\dagger_{2q}\frac{1}{\sqrt{2}}S^+
\Bigr]
\nonumber\\
&= \sum_{q,q'}\Bigl[
J_z^{(11)}J_\pm^{(1)}c^\dagger_{1k}c_{1q'}\left(E-D\right)^{-1}c^\dagger_{1q}\frac{1}{\sqrt{2}}S^+
\nonumber\\
&-
J_z^{(12)}J_\pm^{(2)}c^\dagger_{1k}c_{2q'}\left(E-D\right)^{-1}c^\dagger_{2q}\frac{1}{\sqrt{2}}S^+
\Bigr],
\end{align}
where $E$ is the energy at which the system is probed and $D$ is the running bandwidth of the leads.
We have used the fact that $S^zS^+=S^+$ and written $H_0c^{\dagger}S^+=Dc^{\dagger}S^+$, since $|\delta D| \ll D$ and $S^+$ corresponds to zero energy excitations. Summing over the high energy interval $|\delta D|$, using the fact that $E-D \simeq -D$ and that
far above the Fermi surface $c_{1q'}c^\dagger_{1q} = \delta_{qq'}$, we find that the particle mediated contribution to the scaling is
\begin{equation}
\delta H_p = -\frac{\rho|\delta D|}{D}\left[J_z^{(11)}J_\pm^{(1)}-J_z^{(12)}J_\pm^{(2)}\right]c^\dagger_{1k}\frac{1}{\sqrt{2}}S^+,
\end{equation}
where $\rho$ is the density of states in the leads. A similar analysis for the hole mediated terms provides an identical result,
such that the total Hamiltonian associated with the two vertex events corresponding to $J_\pm^{(1)}$ is therefore
\begin{equation}
\delta H_{\text{2v}}
= -\frac{2\rho|\delta D|}{D}\left[J_z^{(11)}J_\pm^{(1)}-J_z^{(12)}J_\pm^{(2)}\right]\frac{1}{\sqrt{2}}c^\dagger_{1k}S^+.
\end{equation}
Comparing this with Eq. \eqref{eq:Heff}, we see that renormalization group flow equation for $J_\pm^{(1)}$ is
\begin{equation}\label{eqn:1flow}
\frac{d}{d\ell}J_\pm^{(1)} = - 2\rho\left[J_z^{(11)}J_\pm^{(1)}-J_z^{(12)}J_\pm^{(2)}\right].
\end{equation}
The derivation of the scaling for $J_\pm^{(2)}$ is essentially identical and, with Eq. \eqref{eqn:1flow} gives the result shown in Eq. \eqref{eq:scaling}.
\vfill
\section{Introduction}
Derivative-free optimization (DFO) algorithms have the advantage of handling numerical optimization problems where the function $f: \mathbb{R}^{n} \to \mathbb{R}$ to be minimized (without loss of generality) can be seen as a black-box that is only able to return an objective function value $f(\ensuremath{\mathbf{x}})$ for a given input vector $\ensuremath{\mathbf{x}}$. This context arises in many numerical optimization problems. Indeed, first, the function that needs to be optimized can result from a computer simulation where the source code might be too complex to exploit or might not be available to the person who has to do the optimization (this is typical in industry, where often only executables of the code are provided). Hence automatic differentiation to compute the gradient is not conceivable. Second, gradients can be non-exploitable because the function can be ``rugged'', that is, noisy, very irregular, etc.
Among DFO, we distinguish \emph{function-value-free} (FVF) algorithms that do not exploit the exact objective function value but only comparisons between candidate solutions. The Nelder-Mead simplex algorithm is one of the oldest deterministic FVF algorithms \cite{NelderMead:65}.
While the distinction between DFO and FVF algorithms is rarely made, it has some importance both in theory and practice because FVF algorithms are invariant under composition of the objective function with a strictly increasing function and hence can be seen as more robust.
We here focus on a particular class of probabilistic or randomized comparison-based (or FVF) algorithms that adapt a mean vector (thought of as the favorite solution) and step-size. A general framework for those methods has been formalized under the name \emph{comparison based step-size adaptive randomized search} (CB-SARS) \cite{methodology-paper}. Those methods find their roots among the first papers published on randomized FVF algorithms in the 60's \cite{Matyas:1965,Schumer:68,Devroye:72,Rechenberg}. They were, later on, further developed in the Evolution Strategies (ES) community. The nowadays state-of-the-art Covariance Matrix Adaptation Evolution Strategy (CMA-ES), where, in addition to the step-size, a full covariance matrix is adapted (allowing to solve efficiently ill-conditioned problems), ensued from the developments on CB-SARS\ \cite{hansen2001}. Note that contrary to some common preconception, randomized FVF algorithms (in particular CMA-ES) are competitive also for ``local'' optimization and can show superior performance compared to the standard BFGS or the NEWUOA \cite{newuoaReport:2007} algorithm on \emph{unimodal} functions provided they are significantly non-separable and non-convex \cite{auger:colloquegiens}.
We investigate the convergence of one of the earliest CB-SARS, introduced independently by Rechenberg under the name $(1+1)$-ES with one-fifth success rule \cite{Rechenberg}, by Devroye as the compound random search \cite{Devroye:72} and by Schumer and Steiglitz as step-size adaptive random search \cite{Schumer:68}.
Formally, let $\ensuremath{\mathbf{X}_\k} \in \mathbb{R}^{n}$ be the mean of a multivariate normal distribution representing the favorite solution at the current iteration $t$. A new solution centered in $\ensuremath{\mathbf{X}_\k}$ and following a multivariate normal distribution with standard deviation $\ensuremath{\sigma_\k}$ (corresponding also to the step-size) is sampled:
\begin{equation}\label{eq:sampling}
\ensuremath{\mathbf{X}_\k}^{1} = \ensuremath{\mathbf{X}_\k} + \ensuremath{\sigma_\k} \ensuremath{\mathbf{U}_{\k}}^{1}
\end{equation}
where $\ensuremath{\mathbf{U}_{\k}}^{1}$ follows a standard multivariate normal distribution, i.e., $\ensuremath{\mathbf{U}_{\k}}^{1} \sim \ensuremath{\mathcal{N}}(0, I_n)$.
The new solution is evaluated on the objective function $f$ and compared to $\ensuremath{\mathbf{X}_\k}$. If it is better than $\ensuremath{\mathbf{X}_\k}$ (in this case we talk about a success), it becomes $\ensuremath{\mathbf{X}_{\k+1}}$; otherwise it is rejected:
\begin{equation}\label{eq:update-mean}
\ensuremath{\mathbf{X}_{\k+1}} = \ensuremath{\mathbf{X}_\k} + \ensuremath{\sigma_\k} \ensuremath{\mathbf{U}_{\k}}^{1} 1_{\{f(\ensuremath{\mathbf{X}_\k}^{1}) \leq f(\ensuremath{\mathbf{X}_\k}) \}} \enspace .
\end{equation}
As for the step-size, it is increased in case of success and decreased otherwise \cite{Schumer:68,Devroye:72,Rechenberg}. We denote by $\gamma > 1$ the increase factor and introduce a parameter $q \in \mathbb{R}^{+}_{>}$ such that the decrease factor equals $\gamma^{-1/q}$. Overall the step-size update reads
\begin{equation}\label{eq:update-ss}
\ensuremath{\sigma_{\k+1}} = \ensuremath{\sigma_\k} \gamma 1_{\{f(\ensuremath{\mathbf{X}_\k}^{1}) \leq f(\ensuremath{\mathbf{X}_\k}) \}} + \ensuremath{\sigma_\k} \gamma ^{-1/q} 1_{\{f(\ensuremath{\mathbf{X}_\k}^{1})> f(\ensuremath{\mathbf{X}_\k}) \}} \enspace .
\end{equation}
The idea of maintaining a probability of success around $1/5$ was proposed in \cite{Schumer:68,Devroye:72,Rechenberg}. The constant $1/5$ is a trade-off between the asymptotically (in $n$) optimal success probabilities on the sphere function $f(\ensuremath{\mathbf{x}}) = \| \ensuremath{\mathbf{x}} \|^{2}$, where it is approximately $0.27$ \cite{Schumer:68,Rechenberg}, and on the corridor function\footnote{The corridor function is defined as $f(\ensuremath{\mathbf{x}}) = \ensuremath{\mathbf{x}}_{1}$ for $- b < \ensuremath{\mathbf{x}}_{2} < b, \ldots, -b < \ensuremath{\mathbf{x}}_{n} < b$ (where $b >0$) and $+ \infty$ otherwise.} \cite{Rechenberg}. One implementation of the step-size update with a target probability of success of $1/5$ is to set $q = 4$, i.e., a decrease factor of $\gamma^{-1/4}$\footnote{Assuming indeed a probability of success of $1/5$ and having set $q=4$ we find that $E[ \ln \ensuremath{\sigma_{\k+1}} | \ensuremath{\sigma_\k} ] = \ln \ensuremath{\sigma_\k} + \frac15 \ln \gamma + \frac45 \ln \gamma^{-1/4} = \ln \ensuremath{\sigma_\k} $, i.e., the step-size is stationary.}.
In the sequel we call the algorithm following equations \eqref{eq:sampling}, \eqref{eq:update-mean} and \eqref{eq:update-ss} the $(1+1)$-ES with generalized one-fifth success rule, or in short the $(1+1)$-ES, as there is no ambiguity in this paper that the step-size mechanism adopted is the generalized one-fifth success rule.
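For concreteness, the following minimal Python sketch (our own illustration, not code used in this paper; the choices $\gamma = 2^{1/4}$, $q=4$, the dimension and the sphere function are illustrative assumptions) implements equations \eqref{eq:sampling}, \eqref{eq:update-mean} and \eqref{eq:update-ss}:
\begin{verbatim}
# Minimal sketch of the (1+1)-ES with generalized one-fifth success rule.
import numpy as np

def one_plus_one_es(f, x0, sigma0, gamma=2.0 ** 0.25, q=4.0,
                    iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    x, sigma = np.array(x0, dtype=float), float(sigma0)
    for _ in range(iters):
        u = rng.standard_normal(x.shape)   # U_t^1 ~ N(0, I_n)
        candidate = x + sigma * u          # X_t^1 = X_t + sigma_t U_t^1
        if f(candidate) <= f(x):           # success: accept, increase sigma
            x, sigma = candidate, sigma * gamma
        else:                              # failure: keep x, decrease sigma
            sigma *= gamma ** (-1.0 / q)
    return x, sigma

# On the sphere f(x) = ||x||^2, both ||x|| and sigma become tiny,
# illustrating the linear convergence discussed below.
x, sigma = one_plus_one_es(lambda z: np.dot(z, z), np.ones(10), 1.0)
print(np.linalg.norm(x), sigma)
\end{verbatim}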
CB-SARS\ algorithms are observed to typically converge linearly towards local optima on a wide class of functions. Linear convergence of single runs is illustrated in Figure~\ref{fig:simul} for the $(1+1)$-ES on the simple sphere function $f(\ensuremath{\mathbf{x}}) = \| \ensuremath{\mathbf{x}} \|^{2}$. We observe that both the distance to the optimum $\| \ensuremath{\mathbf{X}_\k}\|$ and the step-size $\ensuremath{\sigma_\k}$ converge linearly at the same rate, in the sense that the logarithm of $\| \ensuremath{\mathbf{X}_\k} \|$ (or $\ensuremath{\sigma_\k}$) divided by $t$ converges to $- {\rm CR}$ (where $-{\rm CR}$ corresponds to the slope of the line observed in the second stage of the convergence).
Despite overwhelming empirical evidence of the linear convergence of CB-SARS\ algorithms, and although the methods are relatively old, few formal proofs of their linear convergence actually exist. A variant of the $(1+1)$-ES presented here was however studied by J\"agersk\"upper\footnote{In this variant, the step-size is kept constant for a period of several iterations before being increased or decreased depending on the observed probability of success during the period.} who proved, on the sphere function and on some convex-quadratic functions, lower and upper bounds (on the time to reduce the error by a given fraction) that imply linear convergence \cite{Jens:2007,jens:gecco:2006,jens:2005,jens:tcs:2006}. The linear convergence of another CB-SARS\ using so-called self-adaptation as step-size adaptation mechanism was also proven on the sphere function \cite{TCSAnne04}.
We study in this paper the global linear convergence of the $(1+1)$-ES on a class of unimodal functions. More precisely, convergence is investigated on functions $h$ that are the composition of a positively homogeneous function $f$ with degree $\alpha$ by a strictly increasing transformation $g$, i.e., $f$ satisfies $f(\rho(\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{x}^{\star}})) = \rho^{\alpha} f(\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{x}^{\star}}) $ for all $\rho >0$ and all $\ensuremath{\mathbf{x}}$, where $\alpha >0$ and $\ensuremath{\mathbf{x}^{\star}}$ is the global optimum of the function (we assume that $f$ is strictly positive except in $\ensuremath{\mathbf{x}^{\star}}$ where it can be zero). This class of functions is a subset of the scaling-invariant functions \cite{methodology-paper}.
Under the assumption that $f$ is continuously differentiable, plus some mild additional assumptions, we prove global linear convergence of the $(1+1)$-ES optimizing $h$ provided $\gamma > 1$ and the condition
$$
\frac12 \left( \frac{1}{\gamma^{\alpha}} + \gamma^{\alpha/q} \right) < 1
$$
is satisfied.
(This latter condition translates the fact that the step-size increases on a linear function, in the sense that the expectation of the inverse of the step-size change raised to the power $\alpha$ is smaller than $1$ on a linear function: there, success has probability $1/2$, so this expectation equals $\frac12 ( \gamma^{-\alpha} + \gamma^{\alpha/q} )$.)
More formally, under the conditions sketched above and assuming w.l.o.g.\ that $\ensuremath{\mathbf{x}^{\star}}$ is zero, we prove the existence of ${\rm CR} > 0$ such that from any initial condition $(\ensuremath{\mathbf{X}}_{0}, \sigma_{0})$, almost surely
$$
\frac{1}{t} \ln \frac{\| \ensuremath{\mathbf{X}_\k} \| }{ \| \ensuremath{\mathbf{X}}_{0} \|} \xrightarrow[t \to \infty]{} - {\rm CR} \mbox{ and }\,\, \frac{1}{t} \ln \frac{\ensuremath{\sigma_\k}}{\sigma_{0}} \xrightarrow[t \to \infty]{} - {\rm CR} \enspace
$$
hold. We provide an explicit expression for the convergence rate, namely
$$
{\rm CR} = - \ln \gamma \left( \frac{q+1}{q} {\rm PS} - \frac1q \right)
$$
where ${\rm PS}$ is the asymptotic probability of success.
We also prove that in expectation from any initial condition $(\ensuremath{\mathbf{X}}_{0},\sigma_{0}) = (\ensuremath{\mathbf{x}},\sigma)$
$$
E_{\frac{\ensuremath{\mathbf{x}}}{\sigma}} \ln \frac{\| \ensuremath{\mathbf{X}_{\k+1}} \|}{\| \ensuremath{\mathbf{X}_\k} \|} \xrightarrow[t \to \infty]{} - {\rm CR} \mbox{ and } \,\, E_{\frac{\ensuremath{\mathbf{x}}}{\sigma}} \ln \frac{\ensuremath{\sigma_{\k+1}}}{\ensuremath{\sigma_\k}} \xrightarrow[t \to \infty]{} - {\rm CR} \enspace.
$$
We finally quantify the speed of convergence of the step-size in the two previous equations: we prove a Central Limit Theorem associated with the first equation, and then
prove that $ E_{\frac{\ensuremath{\mathbf{x}}}{\sigma}} \ln \frac{\ensuremath{\sigma_{\k+1}}}{\ensuremath{\sigma_\k}}$ converges geometrically fast towards $-{\rm CR}$.
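The expression of ${\rm CR}$ can be illustrated numerically. Observe that, by construction of the update \eqref{eq:update-ss}, the identity $\frac1t \ln \frac{\ensuremath{\sigma_\k}}{\sigma_{0}} = \ln \gamma \left( \frac{q+1}{q} \widehat{\rm PS}_{t} - \frac1q \right)$ holds exactly on every run, where $\widehat{\rm PS}_{t}$ is the empirical frequency of successes up to iteration $t$; the content of the results above is thus the almost sure convergence of $\widehat{\rm PS}_{t}$ towards ${\rm PS}$. The following sketch (our own illustration, with arbitrary parameter choices) checks this identity on the sphere function:
\begin{verbatim}
# Estimate the convergence rate CR on the sphere and compare it with
# -ln(gamma) * ((q+1)/q * PS - 1/q), PS being the measured success rate.
import numpy as np

rng = np.random.default_rng(1)
n, gamma, q, iters = 10, 2.0 ** 0.25, 4.0, 5000
f = lambda z: np.dot(z, z)
x, sigma, successes = np.ones(n), 1.0, 0
for _ in range(iters):
    cand = x + sigma * rng.standard_normal(n)
    if f(cand) <= f(x):
        x, sigma, successes = cand, sigma * gamma, successes + 1
    else:
        sigma *= gamma ** (-1.0 / q)

ps = successes / iters                      # empirical success frequency
cr_measured = -np.log(sigma) / iters        # -(1/t) ln(sigma_t / sigma_0)
cr_formula = -np.log(gamma) * ((q + 1) / q * ps - 1.0 / q)
print(ps, cr_measured, cr_formula)          # equal up to rounding
\end{verbatim}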
Our proof technique follows the methodology developed in \cite{methodology-paper}, exploiting the fact that the $(1+1)$-ES is a scale-invariant CB-SARS\ so that linear convergence on scaling-invariant functions can be turned into the stability study of the homogeneous Markov chain $\ensuremath{\mathbf{Z}_{\k}}=\ensuremath{\mathbf{X}_\k} / \ensuremath{\sigma_\k}$.
More precisely, we study the $\psi$-irreducibility, Harris-recurrence, positivity and geometric ergodicity of $(\ensuremath{\mathbf{Z}_{\k}})_{t \in \ensuremath{{\mathbb{{N}}}}}$. We use for this standard tools for the analysis of Markov chain Monte Carlo algorithms, in particular Foster-Lyapunov drift conditions \cite{Tweedie:book1993}.
This paper is organized as follows. In Section~\ref{sec:NMC} we summarize the results from the companion paper \cite{methodology-paper}\ setting the framework for the theoretical analysis, i.e., allowing us to define the normalized Markov chain that needs to be studied for proving the convergence. In addition, we define the objective functions under study and set some first assumptions. In Section~\ref{sec:stability} we study the normalized chain $(\ensuremath{\mathbf{Z}_{\k}})_{t \in \ensuremath{{\mathbb{{N}}}}}$, namely its $\varphi$-irreducibility and aperiodicity, investigate small sets, and prove its geometric ergodicity; this constitutes the core part of the study. Using those results, we finally prove in Section~\ref{sec:linear-convergence} the global linear convergence of the $(1+1)$-ES with generalized one-fifth success rule, almost surely and in expectation, and provide an explicit expression for the convergence rate. Last, we discuss our findings in Section~\ref{sec:discussion}.
\paragraph*{Notations} We denote by $\mathcal{N}(0,I_{n})$ a standard multivariate normal distribution, i.e., with mean vector $0$ and identity covariance matrix. Its density is denoted $p_{{\mathcal{N}}}$. Given a set $C$ we denote its complementary $C^{c}$. We denote $\mathbb{R}^{n}_{\neq}$ the set $\mathbb{R}^{n}$ minus the null vector and $\mathbb{R}^{+}_{>}$ the set of strictly positive real numbers. The set of strictly increasing functions $g$ from $\mathbb{R}$ to $\mathbb{R}$, or from a subset of $\mathbb{R}$ to $\mathbb{R}$, is denoted $\ensuremath{\mathcal{M}}$. Given an objective function $f$ we denote by $\mathcal{L}_{c}$ the level set $\{ \ensuremath{\mathbf{x}}, f(\ensuremath{\mathbf{x}}) = c\}$ and by $\bar{\mathcal{L}}_{c}$ the corresponding sublevel set, i.e., $\bar{\mathcal{L}}_{c}= \{ \ensuremath{\mathbf{x}}, f(\ensuremath{\mathbf{x}}) \leq c \}$. We denote by $\ensuremath{{\mathbb{{N}}}}$ the set of natural numbers including $0$, i.e., $\ensuremath{{\mathbb{{N}}}}=\{0,1,\ldots \}$, and by $\mathbb{N}_{>}$ the set $\{1, 2, \ldots \}$. The Euclidean norm of a vector $\ensuremath{\mathbf{x}}$ is denoted $\| \ensuremath{\mathbf{x}} \|$. The ball of center $\ensuremath{\mathbf{x}}$ and radius $r$ is denoted $\mathbf{B}(\ensuremath{\mathbf{x}},r)$.
\section{Normalized Markov Chain and Objective Function Assumptions}\label{sec:NMC}
In this section we summarize the main results from \cite{methodology-paper}\ allowing us to define, on the class of scaling-invariant functions, the normalized Markov chain $\ensuremath{\mathbf{X}_\k}/\ensuremath{\sigma_\k}$. The study of the stability of this latter chain will imply the global linear convergence of the $(1+1)$-ES with generalized one-fifth success rule.
\subsection{The (1+1)-ES as a Comparison-Based Step-Size Adaptive Randomized Search}
In this section we recall the general definition of a step-size adaptive randomized search (SARS) and of a CB-SARS, and then state that the $(1+1)$-ES is a CB-SARS.
A SARS\ algorithm is identified with a sequence of random vectors $(\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k})_{t \in \ensuremath{{\mathbb{{N}}}}}$ where $\ensuremath{\mathbf{X}_\k} \in \mathbb{R}^{n}$ and $\ensuremath{\sigma_\k} \in \mathbb{R}^{+}_{>}$. The vector $(\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k})$ is the state of the algorithm at iteration $t$ and $\Omega = \mathbb{R}^{n} \times \mathbb{R}^{+}_{>} $ is its state space. Let $\ensuremath{\mathbb{U}}$ be a subset of $\mathbb{R}^{m}$, called the sampling space, and let $\ensuremath{\mathbb{U}^{p}}=\ensuremath{\mathbb{U}} \times \ldots \times \ensuremath{\mathbb{U}}$ for $p \in \ensuremath{{\mathbb{{N}}}}_{>}$. Given $(\ensuremath{\mathbf{X}}_{0},\sigma_{0}) \in \Omega$, the sequence $(\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k})$ is inductively defined via
$$
(\ensuremath{\mathbf{X}_{\k+1}},\ensuremath{\sigma_{\k+1}}) = \mathcal{F}^{f}((\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k}),\ensuremath{\mathbf{U}_{\k}})
$$
where $\mathcal{F}$ is a measurable function and $(\ensuremath{\mathbf{U}_{\k}})_{t \in \ensuremath{{\mathbb{{N}}}}}$ is an independent and identically distributed (i.i.d.) sequence of random vectors of $\ensuremath{\mathbb{U}^{p}}$. The objective function $f$ is also an input argument to the update function $\mathcal{F}$; it is however fixed over time, hence it is denoted as an upper-script of $\mathcal{F}$. A comparison-based SARS\ is a particular case of SARS\ where candidate solutions are (i) sampled from $(\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k})$ using a solution function $\mathcal{S}ol$, and (ii) evaluated on the objective function and ordered. The order of the candidate solutions is then \emph{solely} used for updating the state $(\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k})$ of the algorithm. Formally, let us first define the solution function and the ordering function.
\begin{definition}[$\mathcal{S}ol$ function]\label{def:Sol}
A $\mathcal{S}ol$ function used to create candidate solutions is a measurable function mapping $\Omega \times \ensuremath{\mathbb{U}}$ into $\mathbb{R}^{n}$, i.e.,
$$
\mathcal{S}ol: \Omega \times \ensuremath{\mathbb{U}} \mapsto \mathbb{R}^{n} \enspace.
$$
\end{definition}
\begin{definition}[$\mathcal{O}rd$ function]\label{def:Ord} The ordering function $\mathcal{O}rd$ maps $\mathbb{R}^{p}$ to $\mathfrak{S}(p)$, the set of permutations of $p$ elements, and returns for any set of real values $(f_{1},\ldots,f_{p})$ the permutation of ordered indexes. That is, $\mathcal{S} = \mathcal{O}rd(f_{1},\ldots,f_{p}) \in \mathfrak{S}(p)$ where
$$
f_{\mathcal{S}(1)} \leq \ldots \leq f_{\mathcal{S}(p)} \enspace.
$$
When more convenient we might denote $\mathcal{O}rd((f_{i})_{i=1,\ldots,p})$ instead of $\mathcal{O}rd(f_{1},\ldots,f_{p})$. When needed for the sake of clarity, we might use the notations $\mathcal{O}rd^{f}$ or $\mathcal{S}^{f}$ to emphasize the dependency in $f$.
\end{definition}
Given a permutation $\mathcal{S} \in \mathfrak{S}(p)$, the star operator $*$ defines the action of $\mathcal{S}$ on the coordinates of a vector $\ensuremath{\mathbf{U}}=(\ensuremath{\mathbf{U}}^{1},\ldots,\ensuremath{\mathbf{U}}^{p})$ belonging to $\ensuremath{\mathbb{U}^{p}}$ as
\begin{equation}
\begin{aligned}
\mathfrak{S}(p) \times \ensuremath{\mathbb{U}^{p}} \to & \ensuremath{\mathbb{U}^{p}} \\
(\mathcal{S}, \ensuremath{\mathbf{U}}) \mapsto & \mathcal{S}*\ensuremath{\mathbf{U}} = \left(\ensuremath{\mathbf{U}}^{\mathcal{S}(1)},\ldots,\ensuremath{\mathbf{U}}^{\mathcal{S}(p)} \right) \enspace.
\end{aligned}
\end{equation}
A CB-SARS\ can now be defined using a solution function $\mathcal{S}ol$, the ordering function and the star operator.
\begin{definition}[CB-SARS\ minimizing $f:\mathbb{R}^{\dim} \to \mathbb{R}$]\label{def:SSAES} Let $p \in \ensuremath{{\mathbb{{N}}}}_{>}$ and $\ensuremath{\mathbb{U}^{p}}=\ensuremath{\mathbb{U}} \times \ldots \times \ensuremath{\mathbb{U}}$ where $\ensuremath{\mathbb{U}}$ is a subset of $\mathbb{R}^{m}$. Let $p_{\ensuremath{\mathbf{U}}}$ be a probability distribution defined on $\ensuremath{\mathbb{U}^{p}}$ where each $\ensuremath{\mathbf{U}}$ distributed according to $p_{\ensuremath{\mathbf{U}}}$ has a representation $(\ensuremath{\mathbf{U}}^{1},\ldots,\ensuremath{\mathbf{U}}^{p})$ (each $\ensuremath{\mathbf{U}}^{i} \in \ensuremath{\mathbb{U}}$). Let $\mathcal{S}ol$ be a solution function as in Definition~\ref{def:Sol}. Let $\mathcal{G}_{1}: \Omega \times \ensuremath{\mathbb{U}^{p}} \mapsto \mathbb{R}^{n} $ and $\mathcal{G}_{2}: \mathbb{R}^{+}_{>} \times \ensuremath{\mathbb{U}^{p}} \mapsto \mathbb{R}^{+}_{>}$ be two measurable mappings and denote $\mathcal{G}=(\mathcal{G}_{1},\mathcal{G}_{2})$.
A CB-SARS\ is determined by the quadruplet $(\mathcal{S}ol,\mathcal{G},\ensuremath{\mathbb{U}^{p}},p_{\ensuremath{\mathbf{U}}})$ from which the recursive sequence $(\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k}) \in \mathbb{R}^{n} \times \mathbb{R}^{+}_{>}$ is defined via $(\ensuremath{\mathbf{X}}_{0},\sigma_{0}) \in \mathbb{R}^{\dim} \times \mathbb{R}^{+}_{>}$ and for all $t$:
\begin{align}\label{eq:sol}
\ensuremath{\mathbf{X}_\k}^{i} & = \mathcal{S}ol((\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k}),\ensuremath{\mathbf{U}_{\k}}^{i}) \,, i=1,\ldots,p \\\label{eq:perm}
\mathcal{S} & = \mathcal{O}rd(f(\ensuremath{\mathbf{X}_\k}^{1}),\ldots,f(\ensuremath{\mathbf{X}_\k}^{p})) \in \mathfrak{S}(p)\\\label{eq:G1}
\ensuremath{\mathbf{X}_{\k+1}} & = \mathcal{G}_{1}\left( (\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k}), \mathcal{S}*\ensuremath{\mathbf{U}_{\k}} \right)\\\label{eq:G2}
\ensuremath{\sigma_{\k+1}} & = \mathcal{G}_{2}\left(\ensuremath{\sigma_\k}, \mathcal{S}*\ensuremath{\mathbf{U}_{\k}} \right)
\end{align}
where $(\ensuremath{\mathbf{U}_{\k}})_{t \in \ensuremath{{\mathbb{{N}}}}}$ is an i.i.d.\ sequence of random vectors on \ensuremath{\mathbb{U}^{p}}\ distributed according to $p_{\ensuremath{\mathbf{U}}}$, $\mathcal{O}rd$ is the ordering function as in Definition~\ref{def:Ord}.
\end{definition}
In the next lemma we state that the $(1+1)$-ES with generalized one-fifth success rule is a CB-SARS\ and define its different components. The proof is immediate and hence omitted.
\begin{lemma}
The $(1+1)$-ES with generalized one-fifth success rule satisfies Definition~\ref{def:SSAES} with $p=2$, $\ensuremath{\mathbb{U}}=\mathbb{R}^{n}$, $\ensuremath{\mathbb{U}^{p}}=\mathbb{R}^{n} \times \mathbb{R}^{n}$. Its solution function equals
$$
\mathcal{S}ol: ((\ensuremath{\mathbf{x}},\sigma),\ensuremath{\mathbf{u}}) \in (\mathbb{R}^{n} \times \mathbb{R}^{+}_{>}) \times \ensuremath{\mathbb{U}} \mapsto \ensuremath{\mathbf{x}} + \sigma \ensuremath{\mathbf{u}} \enspace.
$$
The sampling distribution is $p_{\ensuremath{\mathbf{U}}}(\ensuremath{\mathbf{u}}_{1},\ensuremath{\mathbf{u}}_{2})=p_{{\mathcal{N}}}(\ensuremath{\mathbf{u}}_{1}) \delta_{0}(\ensuremath{\mathbf{u}}_{2})$ where $p_{{\mathcal{N}}}$
is the density of a standard multivariate normal distribution and $\delta_{0}$ is the Dirac-delta function. The update function $\mathcal{G}=(\mathcal{G}_{1},\mathcal{G}_{2})$ equals
\begin{equation}\label{eq:Goneplusone}
\mathcal{G}((\ensuremath{\mathbf{x}},\sigma),\ensuremath{\mathbf{y}}) =
\left( \begin{smallmatrix} \mathcal{G}_{1}((\ensuremath{\mathbf{x}},\sigma),\ensuremath{\mathbf{y}}) \\
\mathcal{G}_{2}(\sigma,\ensuremath{\mathbf{y}})
\end{smallmatrix} \right)
= \left( \begin{smallmatrix} \ensuremath{\mathbf{x}} + \sigma \ensuremath{\mathbf{y}}^{1} \\
\sigma
\left( (\gamma - \gamma ^{-1/q}) 1_{\{ \ensuremath{\mathbf{y}}^{1} \neq 0 \}} + \gamma ^{-1/q} \right)
\end{smallmatrix} \right) \enspace.
\end{equation}
\end{lemma}
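The structure of the lemma can be made concrete with the following sketch (our own illustration; parameter values are arbitrary), which implements $\mathcal{S}ol$, $\mathcal{O}rd$, the star operator and $\mathcal{G}$ of \eqref{eq:Goneplusone} and recovers the direct updates \eqref{eq:update-mean} and \eqref{eq:update-ss}:
\begin{verbatim}
# The (1+1)-ES written as the CB-SARS of Definition def:SSAES: p = 2,
# u^2 = 0 (Dirac mass at 0), Sol((x, sigma), u) = x + sigma u and
# G = (G1, G2) as in eq. (Goneplusone).
import numpy as np

gamma, q = 2.0 ** 0.25, 4.0

def Sol(x, sigma, u):
    return x + sigma * u

def G(x, sigma, y1):
    eta = gamma if np.any(y1 != 0) else gamma ** (-1.0 / q)
    return x + sigma * y1, sigma * eta      # (G1, G2)

def step(f, x, sigma, rng):
    u = (rng.standard_normal(x.shape), np.zeros_like(x))  # p_U = p_N x delta_0
    perm = np.argsort([f(Sol(x, sigma, ui)) for ui in u],
                      kind="stable")        # Ord: rank candidates by f
    y = tuple(u[i] for i in perm)           # star operator: Perm * U
    return G(x, sigma, y[0])

rng = np.random.default_rng(2)
x, sigma = np.ones(5), 1.0
for _ in range(1000):
    x, sigma = step(lambda z: np.dot(z, z), x, sigma, rng)
print(np.linalg.norm(x), sigma)   # behaves as the direct implementation
\end{verbatim}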
The solution and update functions associated to the $(1+1)$-ES have a specific structure that is useful for proving invariance properties of the algorithm. We state those properties in the following lemma and omit the proof, which is also immediate.
\begin{lemma}\label{lem:PP}
Let $(\mathcal{S}ol,\mathcal{G},\ensuremath{\mathbb{U}^{p}},p_{\ensuremath{\mathbf{U}}})$ be the quadruplet associated to the CB-SARS\ $(1+1)$-ES with generalized one-fifth success rule. Then the following properties are satisfied:\\ For all $\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}_{0} \in \mathbb{R}^{n}$ for all $\sigma > 0$, for all $\ensuremath{\mathbf{u}} \in \ensuremath{\mathbb{U}}$, $\ensuremath{\mathbf{y}} \in \ensuremath{\mathbb{U}^{p}}$
\begin{align}\label{eq:ouap1}
\mathcal{S}ol((\ensuremath{\mathbf{x}}+\ensuremath{\mathbf{x}}_{0},\sigma),\ensuremath{\mathbf{u}}) & = \mathcal{S}ol((\ensuremath{\mathbf{x}},\sigma),\ensuremath{\mathbf{u}}) + \ensuremath{\mathbf{x}}_{0} \\\label{eq:ouap2}
\mathcal{G}_{1}((\ensuremath{\mathbf{x}}+\ensuremath{\mathbf{x}}_{0},\sigma),\ensuremath{\mathbf{y}}) & = \mathcal{G}_{1}((\ensuremath{\mathbf{x}},\sigma),\ensuremath{\mathbf{y}}) + \ensuremath{\mathbf{x}}_{0} \enspace.
\end{align}
For all $\alpha >0$, $(\ensuremath{\mathbf{x}},\sigma) \in \Omega$, $\ensuremath{\mathbf{u}}^{i} \in \ensuremath{\mathbb{U}}$, $\ensuremath{\mathbf{y}} \in \ensuremath{\mathbb{U}^{p}}$
\begin{align}\label{eq:SIsol}
\mathcal{S}ol((\ensuremath{\mathbf{x}},\sigma),\ensuremath{\mathbf{u}}^{i}) & = \alpha \mathcal{S}ol \left( \left(\frac{\ensuremath{\mathbf{x}}}{\alpha}, \frac{\sigma}{\alpha}\right), \ensuremath{\mathbf{u}}^{i} \right) \\
\label{eq:SIG1}
\mathcal{G}_{1}( (\ensuremath{\mathbf{x}},\sigma),\ensuremath{\mathbf{y}}) & = \alpha \mathcal{G}_{1}\left(\left(\frac{\ensuremath{\mathbf{x}}}{\alpha},\frac{\sigma}{\alpha}\right),\ensuremath{\mathbf{y}}\right)\\\label{eq:SIG2}
\mathcal{G}_{2}(\sigma,\ensuremath{\mathbf{y}}) & = \alpha \mathcal{G}_{2}\left(\frac{\sigma}{\alpha},\ensuremath{\mathbf{y}}\right)\enspace.
\end{align}
\end{lemma}
\subsection{Invariances}
As a direct consequence of the fact that the $(1+1)$-ES is comparison-based, it is invariant to monotonically increasing transformations of the objective function. That is, for any $g \in \ensuremath{\mathcal{M}}$, the sequences $(\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k})_{t \in \ensuremath{{\mathbb{{N}}}}}$ optimizing $g \circ f$ and optimizing $f$ are almost surely equal (see Proposition~2.4 in \cite{methodology-paper}). In addition the $(1+1)$-ES is translation and scale-invariant, as detailed below.
Translation invariance implies identical behavior on a function $h(\ensuremath{\mathbf{x}})$ or any of its translated version $\ensuremath{\mathbf{x}} \mapsto h(\ensuremath{\mathbf{x}} - \mathbf{x}_{0})$.
It is formally defined for a SARS\ using a group homomorphism from the group $(\mathbb{R}^{n},+)$ to the group $(\mathcal{A}(\Omega),\circ)$ of invertible mappings from the state space $\Omega$ to itself, endowed with the function composition $\circ$. More precisely, a SARS\ is translation invariant if there exists a group homomorphism $\Phi \in \rm Homo( (\mathbb{R}^{n},+), (\mathcal{A}(\Omega), \circ) )$ such that for any objective function $f$, for any $\mathbf{x}_{0} \in \mathbb{R}^{n}$, for any $(\ensuremath{\mathbf{x}},\sigma) \in \Omega$ and for any $\ensuremath{\mathbf{u}} \in \ensuremath{\mathbb{U}^{p}}$
\begin{equation}\label{eq:TI}
\mathcal{F}^{f(\ensuremath{\mathbf{x}})}((\ensuremath{\mathbf{x}},\sigma),\ensuremath{\mathbf{u}}) = \underbrace{[\Phi(\mathbf{x}_{0})]^{-1}}_{\Phi(-\mathbf{x}_{0})} \left( \mathcal{F}^{f\left( \ensuremath{\mathbf{x}} - \mathbf{x}_{0} \right)}(\Phi(\mathbf{x}_{0})(\ensuremath{\mathbf{x}},\sigma),\ensuremath{\mathbf{u}}) \right)
\enspace.
\end{equation}
The $(1+1)$-ES is translation invariant and the associated group homomorphism equals $\Phi(\ensuremath{\mathbf{x}}_{0})(\ensuremath{\mathbf{x}},\sigma) = (\ensuremath{\mathbf{x}} + \mathbf{x}_{0},\sigma)$. This property is a consequence of \eqref{eq:ouap1} and \eqref{eq:ouap2} (see Proposition~2.7 in \cite{methodology-paper}). Similarly, scale-invariance, which translates the fact that an algorithm has no intrinsic notion of scale, is defined via homomorphisms from the group $(\mathbb{R}^{+}_{>},.)$ (where $.$ denotes the multiplication in $\mathbb{R}$) to the group $ (\mathcal{A}(\Omega), \circ) $. More precisely, a SARS\ is scale-invariant if there exists a homomorphism $\Phi \in \rm Homo((\mathbb{R}^{+}_{>},.),(\mathcal{A}(\Omega),\circ))$ such that for any $f$, for any $\alpha > 0$, for any $(\ensuremath{\mathbf{x}},\sigma) \in \Omega$ and for any $\ensuremath{\mathbf{u}} \in \ensuremath{\mathbb{U}^{p}}$
\begin{equation}\label{eq:SI}
\mathcal{F}^{f(\ensuremath{\mathbf{x}})}((\ensuremath{\mathbf{x}},\sigma),\ensuremath{\mathbf{u}}) = \underbrace{[\Phi(\alpha)]^{-1}}_{\Phi(1/\alpha)} \left( \mathcal{F}^{f\left(\alpha \ensuremath{\mathbf{x}} \right)}(\Phi(\alpha)(\ensuremath{\mathbf{x}},\sigma),\ensuremath{\mathbf{u}}) \right)
\enspace.
\end{equation}
The $(1+1)$-ES is scale-invariant and the associated group homomorphism equals $\Phi(\alpha) (\ensuremath{\mathbf{x}},\sigma) = (\ensuremath{\mathbf{x}}/\alpha, \sigma / \alpha) $. This property is a consequence of the properties \eqref{eq:SIG1} and \eqref{eq:SIG2} (see Proposition~2.9 in \cite{methodology-paper}).
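As a quick empirical illustration of the comparison-based invariance (our own sketch; the transformation $g(y)=e^{y}-1$ and all parameter values are illustrative choices), running the $(1+1)$-ES on $f$ and on $g \circ f$ with the same random seed produces exactly the same sequence $(\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k})$:
\begin{verbatim}
# Comparison-based invariance: identical trajectories on f and on g o f.
import numpy as np

def run(h, seed, iters=200, n=5, gamma=2.0 ** 0.25, q=4.0):
    rng = np.random.default_rng(seed)
    x, sigma, trace = np.ones(n), 1.0, []
    for _ in range(iters):
        cand = x + sigma * rng.standard_normal(n)
        if h(cand) <= h(x):       # only comparisons of h-values are used
            x, sigma = cand, sigma * gamma
        else:
            sigma *= gamma ** (-1.0 / q)
        trace.append((x.copy(), sigma))
    return trace

f = lambda z: np.dot(z, z)
g_of_f = lambda z: np.exp(f(z)) - 1.0   # g(y) = e^y - 1, strictly increasing
same = all(np.array_equal(xa, xb) and sa == sb
           for (xa, sa), (xb, sb) in zip(run(f, 7), run(g_of_f, 7)))
print(same)   # should print True: identical trajectories
\end{verbatim}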
\subsection{Normalized Markov Chain on Scaling-Invariant Functions}\label{sec:defZ}
A class of functions that plays a specific role for CB-SARS\ is the class of scaling-invariant functions, defined as follows: for all $\rho > 0$ and all $\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}} \in \mathbb{R}^{n}$
\begin{equation}\label{eq:scaling-invariant}
f(\rho(\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{x}^{\star}})) \leq f(\rho(\ensuremath{\mathbf{y}} - \ensuremath{\mathbf{x}^{\star}})) \Leftrightarrow f(\ensuremath{\mathbf{x}}-\ensuremath{\mathbf{x}^{\star}}) \leq f(\ensuremath{\mathbf{y}} - \ensuremath{\mathbf{x}^{\star}}) \enspace,
\end{equation}
where $\ensuremath{\mathbf{x}^{\star}} \in \mathbb{R}^{n}$. Such a function is said to be scaling-invariant w.r.t.\ $\ensuremath{\mathbf{x}^{\star}}$. A linear function, or any $g \circ f$ where $f$ is a norm and $g \in \ensuremath{\mathcal{M}}$, is scaling-invariant. Some non-quasi-convex functions are also scaling-invariant. Scaling-invariant functions are essentially unimodal; formally, they do not admit any strict local extremum (see Proposition~3.2 in \cite{methodology-paper}).
We assume that we are given a scaling-invariant function w.r.t.\ $\ensuremath{\mathbf{x}^{\star}} = 0$ (w.l.o.g.). Then, for a translation and scale-invariant CB-SARS\ defined by the quadruplet $(\mathcal{S}ol,\mathcal{G},\ensuremath{\mathbb{U}^{p}},p_{\ensuremath{\mathbf{U}}})$, where scale-invariance is a consequence of the properties \eqref{eq:SIsol}, \eqref{eq:SIG1} and \eqref{eq:SIG2}, the normalized sequence $(\ensuremath{\mathbf{X}_\k}/\ensuremath{\sigma_\k})_{t \in \ensuremath{{\mathbb{{N}}}}}$ is a homogeneous Markov chain (Proposition~4.1 in \cite{methodology-paper}). This Markov chain can be defined independently of $(\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k})$, provided $\ensuremath{\mathbf{Z}}_{0} = \frac{\ensuremath{\mathbf{X}}_{0}}{\sigma_{0}}$, via
\begin{align}
\ensuremath{\mathbf{Z}_{\k}}^{i} & = \mathcal{S}ol( (\ensuremath{\mathbf{Z}_{\k}},1),\ensuremath{\mathbf{U}_{\k}}^{i}), i=1,\ldots,p \\
\mathcal{S} & = \mathcal{O}rd(f(\ensuremath{\mathbf{Z}_{\k}}^{1}),\ldots,f(\ensuremath{\mathbf{Z}_{\k}}^{p}))\\
\ensuremath{\mathbf{Z}_{\k+1}} & = G(\ensuremath{\mathbf{Z}_{\k}},\mathcal{S}*\ensuremath{\mathbf{U}_{\k}})
\end{align}
where the transition function $G$ equals for all $\ensuremath{\mathbf{z}} \in \mathbb{R}^{n}$ and $\ensuremath{\mathbf{y}} \in \ensuremath{\mathbb{U}^{p}}$
\begin{equation}
G(\ensuremath{\mathbf{z}},\ensuremath{\mathbf{y}}) = \frac{\mathcal{G}_{1}((\ensuremath{\mathbf{z}},1),\ensuremath{\mathbf{y}})}{\mathcal{G}_{2}(1,\ensuremath{\mathbf{y}})}
\enspace .
\end{equation}
According to the previous equation, the transition function $G$ for the normalized chain $(\ensuremath{\mathbf{Z}_{\k}}=\frac{\ensuremath{\mathbf{X}_\k}}{\ensuremath{\sigma_\k}})_{t \in \ensuremath{{\mathbb{{N}}}}}$ associated to the $(1+1)$-ES on scaling-invariant functions is given by
\begin{equation*}
G(\ensuremath{\mathbf{z}},\ensuremath{\mathbf{y}})=\frac{\ensuremath{\mathbf{z}} + \ensuremath{\mathbf{y}}^{1}}{ (\gamma - \gamma ^{-1/q}) 1_{\{ \ensuremath{\mathbf{y}}^{1} \neq 0 \}} + \gamma ^{-1/q} }
\end{equation*}
where the selected step $\ensuremath{\mathbf{y}}=(\ensuremath{\mathbf{y}}^{1},\ensuremath{\mathbf{y}}^{2})$ is according to the f-ranking of the solutions $\ensuremath{\mathbf{z}}+\ensuremath{\mathbf{u}}^{1}$ and $\ensuremath{\mathbf{z}}+\ensuremath{\mathbf{u}}^{2}$,
i.e.,
$
f(\ensuremath{\mathbf{z}} + \ensuremath{\mathbf{y}}^{1}) \leq f(\ensuremath{\mathbf{z}} + \ensuremath{\mathbf{y}}^{2})
$.
However, since $\ensuremath{\mathbf{u}}^{2} = 0$ (because $p_{\ensuremath{\mathbf{U}}}(\ensuremath{\mathbf{u}}^{1},\ensuremath{\mathbf{u}}^{2})=p_{\ensuremath{\mathcal{N}}}(\ensuremath{\mathbf{u}}^{1})\delta_{0}(\ensuremath{\mathbf{u}}^{2})$), $\ensuremath{\mathbf{y}}^{1} = \ensuremath{\mathbf{u}}^{1} 1_{\{f(\ensuremath{\mathbf{z}}+\ensuremath{\mathbf{u}}^{1}) \leq f(\ensuremath{\mathbf{z}})\}} $. In addition since $\ensuremath{\mathbf{U}_{\k}}^{1} \sim \ensuremath{\mathcal{N}}(0,I_n)$, the event ${\{ \ensuremath{\mathbf{Y}_\k}^{1} \neq 0 \}}$ is almost surely equal to the event $\{ \ensuremath{\mathbf{Y}_\k}^{1} = \ensuremath{\mathbf{U}_{\k}}^{1} \}$ and hence almost surely equal to the event ${\{f(\ensuremath{\mathbf{Z}_{\k}} + \ensuremath{\mathbf{U}_{\k}}^{1})\leq f(\ensuremath{\mathbf{Z}_{\k}}) \}}$.
Overall the Markov chain $(\ensuremath{\mathbf{Z}_{\k}})_{t\in \ensuremath{{\mathbb{{N}}}}}$ satisfies $\ensuremath{\mathbf{Z}}_{0} = \frac{\ensuremath{\mathbf{X}}_{0}}{\sigma_{0}} $ and given $(\ensuremath{\mathbf{U}}_{t}^{1})_{t \in \ensuremath{{\mathbb{{N}}}}}$ i.i.d with $\ensuremath{\mathbf{U}}_{t}^{1} \sim \ensuremath{\mathcal{N}}(0,I_n)$
\begin{equation}\label{eq:transitionZ}
\boxed{
\ensuremath{\mathbf{Z}_{\k+1}} = \frac{\ensuremath{\mathbf{Z}_{\k}} + \ensuremath{\mathbf{U}_{\k}}^{1} 1_{\{f(\ensuremath{\mathbf{Z}_{\k}} + \ensuremath{\mathbf{U}_{\k}}^{1})\leq f(\ensuremath{\mathbf{Z}_{\k}}) \}} }{ (\gamma - \gamma ^{-1/q}) 1_{\{f(\ensuremath{\mathbf{Z}_{\k}} + \ensuremath{\mathbf{U}_{\k}}^{1})\leq f(\ensuremath{\mathbf{Z}_{\k}}) \}} + \gamma ^{-1/q} } \enspace. }
\end{equation}
Following \cite{methodology-paper}, we introduce the notation $\ensuremath{\eta^{\star}}$ for the step-size change, i.e.,
\begin{equation}\label{eq:sschangeETA}
\ensuremath{\eta^{\star}} = (\gamma - \gamma^{-1/q}) 1_{\{ f(\ensuremath{\mathbf{X}_\k}^{1}) \leq f(\ensuremath{\mathbf{X}_\k}) \}} + \gamma^{-1/q}
\end{equation}
and recall that on scaling-invariant functions, the step-size change starting from $(\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k})$ is the same as the step-size change starting from $(\ensuremath{\mathbf{Z}_{\k}},1)$ (see Eq.~(4.7) in \cite{methodology-paper}), such that
\begin{equation}\label{eq:ping}
\ensuremath{\mathbf{Z}_{\k+1}} = \frac{\ensuremath{\mathbf{Z}_{\k}} + \ensuremath{\mathbf{U}_{\k}}^{1} 1_{\{f(\ensuremath{\mathbf{Z}_{\k}} + \ensuremath{\mathbf{U}_{\k}}^{1})\leq f(\ensuremath{\mathbf{Z}_{\k}}) \}}}{\ensuremath{\eta^{\star}}} \enspace .
\end{equation}
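To build intuition on this normalized chain, the following sketch (our own illustration with arbitrary parameters) simulates \eqref{eq:transitionZ} directly on the sphere function; started far away, $\| \ensuremath{\mathbf{Z}_{\k}} \|$ comes back to a bounded region, which is the behavior that the stability analysis of Section~\ref{sec:stability} formalizes:
\begin{verbatim}
# Direct simulation of the normalized chain Z_t of eq. (transitionZ)
# on f(z) = ||z||^2 (gamma = 2^{1/4} and q = 4 are illustrative choices).
import numpy as np

rng = np.random.default_rng(3)
gamma, q = 2.0 ** 0.25, 4.0
f = lambda z: np.dot(z, z)

z = 100.0 * np.ones(10)          # start with ||Z_0|| large
for _ in range(5000):
    u = rng.standard_normal(10)
    success = f(z + u) <= f(z)
    eta = gamma if success else gamma ** (-1.0 / q)  # eta* of eq. (sschangeETA)
    z = ((z + u) if success else z) / eta
print(np.linalg.norm(z))         # moderate: the chain does not diverge
\end{verbatim}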
\subsection{Objective Function Assumptions}\label{sec:scaling-inv}
We consider scaling-invariant functions as formally defined by \eqref{eq:scaling-invariant}, where in addition we assume that $\ensuremath{\mathbf{x}^{\star}}=0$. This can be done w.l.o.g.\ because the $(1+1)$-ES is translation invariant. This assumption is sufficient to build the normalized Markov chain $(\ensuremath{\mathbf{Z}_{\k}})_{t \in \ensuremath{{\mathbb{{N}}}}}$ (see Section~\ref{sec:defZ}). However, for studying its stability we will make further hypotheses on $f$.
We will consider a particular class of scaling-invariant functions, namely positively homogeneous functions. Formally a positively homogeneous function with degree $\alpha$ satisfies the following definition.
\begin{definition}[Positively homogeneous functions]\label{def:poshf}
A function $f:\mathbb{R}^\dim \mapsto \mathbb{R}$ is said to be positively homogeneous with degree $\alpha$ if
for all $\rho >0$ and for all $\ensuremath{\mathbf{x}} \in \mathbb{R}^\dim$, $f(\rho \ensuremath{\mathbf{x}}) = \rho^{\alpha} f(\ensuremath{\mathbf{x}})$.
\end{definition}
Remark that positive homogeneity is not always preserved if $f$ is composed with a non-decreasing transformation. We will in addition make the following assumptions on the objective function:
\begin{assumption}\label{ass:f}
The function $f:\mathbb{R}^{\dim} \to \mathbb{R}$ is positively homogeneous with degree $\alpha$ and $f(\ensuremath{\mathbf{x}}) > 0$ for all $\ensuremath{\mathbf{x}} \neq 0$.
\end{assumption}
This assumption implies that the function $f$ has a \emph{unique optimum}, located w.l.o.g.\ at $0$ (if the optimum $\ensuremath{\mathbf{x}^{\star}}$ is not $0$, consider $\tilde f(\ensuremath{\mathbf{x}}) = f(\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{x}^{\star}})$), as seen in point (i) of the next lemma. Remark that with this assumption we exclude linear functions.
In the next lemma, we state some properties of positive homogeneous functions satisfying Assumptions~\ref{ass:f}. We denote for $c \geq 0$, $\bar{\mathcal{L}}_{c} = \{ \ensuremath{\mathbf{x}}, f(\ensuremath{\mathbf{x}}) \leq c \}$ the sublevel set of $f$ associated to $c$ and $\mathcal{L}_{c}= \{ \ensuremath{\mathbf{x}}, f(\ensuremath{\mathbf{x}}) = c\}$ its level set. The hypersphere surface of radius $r$ is denoted $\mathbb{S}_{r}$, that is $\mathbb{S}_{r}=\{ \ensuremath{\mathbf{x}} , \| \ensuremath{\mathbf{x}} \| = r \}$.
\begin{lemma}\label{lem:propf}
Let $f$ be a positively homogeneous function with degree $\alpha >0$ such that $f(\ensuremath{\mathbf{x}}) > 0$ for all $\ensuremath{\mathbf{x}} \neq 0$ and $f(\ensuremath{\mathbf{x}})$ is finite for every $\ensuremath{\mathbf{x}} \in \mathbb{R}^\dim$. Then the following holds:
\begin{itemize}
\item[(i)] $\lim_{t \to 0} f(t \ensuremath{\mathbf{x}})=0$ and assuming that $f(\ensuremath{\mathbf{0}})=0$, for all $\ensuremath{\mathbf{s}} \neq 0 $, the function $f_{\ensuremath{\mathbf{s}}}: t \in [0, + \infty[ \mapsto f(t \ensuremath{\mathbf{s}})$ is continuous, strictly increasing and converges to $+ \infty$ when $t$ goes to $+ \infty$.
\item[(ii)] If $f$ is lower semi-continuous, then $\bar{\mathcal{L}}_{c}$ is compact.
\end{itemize}
\end{lemma}
{\em Proof.}
(i) Since $f(t \ensuremath{\mathbf{x}}) = t^{\alpha} f(\ensuremath{\mathbf{x}})$, fixing $\ensuremath{\mathbf{x}}$ and taking the limit for $t$ to zero we have that $\lim_{t \to 0} f(t \ensuremath{\mathbf{x}})=0$. For any $\ensuremath{\mathbf{s}}$, the function $f_{\ensuremath{\mathbf{s}}}$ satisfies $f_{\ensuremath{\mathbf{s}}}(t) = t^{\alpha} f(\ensuremath{\mathbf{s}})$. It is thus continuous on $[0,+\infty[$, strictly increasing and converges to infinity when $t$ goes to infinity. \\
(ii) Since $f$ is lower semi-continuous, the inverse images of sets of the form $(-\infty,c]$ are closed sets. Hence $\bar{\mathcal{L}}_{c} = f^{-1}((-\infty,c])$ is closed. Let us consider the sphere $\mathbb{S}_{r}$ for $r>0$. Since $f$ is lower semi-continuous, there exists $\ensuremath{\mathbf{x}}_{0} \in \mathbb{S}_{r} $ such that $\inf_{\ensuremath{\mathbf{x}} \in \mathbb{S}_{r}} f(\ensuremath{\mathbf{x}}) = f(\ensuremath{\mathbf{x}}_{0})$. Since $f(\ensuremath{\mathbf{x}}) > 0$ for $\ensuremath{\mathbf{x}} \neq 0$, $f(\ensuremath{\mathbf{x}}_{0}) =: m \neq 0$. Hence we have $\bar{\mathcal{L}}_{m} \subset \mathbf{B}(0,r)$: indeed, if $f(\ensuremath{\mathbf{x}}) \leq m$ with $\| \ensuremath{\mathbf{x}} \| > r$, then by homogeneity $f(\ensuremath{\mathbf{x}}) = (\| \ensuremath{\mathbf{x}} \|/r)^{\alpha} f( r \ensuremath{\mathbf{x}} / \| \ensuremath{\mathbf{x}} \|) \geq (\| \ensuremath{\mathbf{x}} \|/r)^{\alpha} m > m$, a contradiction. Because $f$ is homogeneous with degree $\alpha$, we have thus $\bar{\mathcal{L}}_{m \sigma^{\alpha}} \subset \mathbf{B}(0,r \sigma)$ for all $\sigma >0$. Hence, for any $c$ we can include $\bar{\mathcal{L}}_{c}$ in a ball, which proves that it is bounded and hence compact. \hfill \endproof
A positively homogeneous function satisfies for all $\ensuremath{\mathbf{x}} \neq 0$
\begin{equation}\label{eq:pos-hom-relation}
f(\ensuremath{\mathbf{x}}) = \| \ensuremath{\mathbf{x}} \|^{\alpha} f \left( \ensuremath{\mathbf{x}} / \| \ensuremath{\mathbf{x}} \| \right) \enspace.
\end{equation}
From this latter relation it follows that $f$ is continuous on $\mathbb{S}_{1}=\{ \ensuremath{\mathbf{x}} \in \mathbb{R}^{\dim}, \| \ensuremath{\mathbf{x}} \|=1 \}$ if and only if $f$ is continuous on $\mathbb{R}^{n}_{\neq}$.
Assuming continuity on $\mathbb{R}^{n}_{\neq}$, we denote in the sequel by $m$ the minimum of $f$ on $\mathbb{S}_{1}$ and by $M$ its maximum, that is
\begin{align}\label{eq:min-max}
m & =\min_{\ensuremath{\mathbf{z}} \in \mathbb{S}_{1}}f(\ensuremath{\mathbf{z}}) \\\label{eq:min-max2}
M & = \max_{\ensuremath{\mathbf{z}} \in \mathbb{S}_{1}} f(\ensuremath{\mathbf{z}}) \enspace.
\end{align}
The following lemma will be used several times when investigating the stability of the normalized Markov chain $\ensuremath{\mathbf{Z}} = (\ensuremath{\mathbf{Z}_{\k}})_{t \in \ensuremath{{\mathbb{{N}}}}}$.
\begin{lemma}\label{lem:too}
Let $f$ satisfy Assumptions~\ref{ass:f} and $f$ be continuous on $\mathbb{S}_{1}$. Then for all $\ensuremath{\mathbf{z}} \neq 0$
\begin{equation}\label{eq:LB-UB}
\| \ensuremath{\mathbf{z}} \| m^{1/\alpha} \leq f(\ensuremath{\mathbf{z}})^{1/\alpha} \leq \| \ensuremath{\mathbf{z}} \| M^{1/\alpha} \enspace,
\end{equation}
where $m$ and $M$ are defined in \eqref{eq:min-max} and \eqref{eq:min-max2}.
Hence, $f(\ensuremath{\mathbf{z}}) \to 0 $ when $\| \ensuremath{\mathbf{z}} \| \to 0$, $f(\ensuremath{\mathbf{z}}) \to \infty$ when $\| \ensuremath{\mathbf{z}} \|$ goes to $\infty$ and $|\ln \| \ensuremath{\mathbf{z}} \|| f(\ensuremath{\mathbf{z}})^{1/\alpha} \to 0$ when $\| \ensuremath{\mathbf{z}} \| \to 0$.
\end{lemma}
\begin{proof}
By homogeneity, for all $\ensuremath{\mathbf{z}} \neq 0$, we have $f(\ensuremath{\mathbf{z}})=f\left(\| \ensuremath{\mathbf{z}} \| \frac{\ensuremath{\mathbf{z}}}{\|\ensuremath{\mathbf{z}}\|} \right) = \| \ensuremath{\mathbf{z}} \|^{\alpha}f\left(\frac{\ensuremath{\mathbf{z}}}{\|\ensuremath{\mathbf{z}}\|} \right)$. Since $f$ is continuous on the compact $\mathbb{S}_{1}$, $m = \min_{\ensuremath{\mathbf{z}} \in \Sphere_{1}}f(\ensuremath{\mathbf{z}}) > 0$ and $M =\max_{\ensuremath{\mathbf{z}} \in \Sphere_{1}} f(\ensuremath{\mathbf{z}}) > 0$ and $M < \infty$. We hence have
$$
\| \ensuremath{\mathbf{z}} \|^{\alpha} \underbrace{\min_{\ensuremath{\mathbf{z}} \in \mathbb{S}_{1}}f(\ensuremath{\mathbf{z}})}_{m} \leq f(\ensuremath{\mathbf{z}}) \leq \| \ensuremath{\mathbf{z}} \|^{\alpha} \underbrace{\max_{\ensuremath{\mathbf{z}} \in \mathbb{S}_{1}} f(\ensuremath{\mathbf{z}})}_{M}
$$
and thus $\| \ensuremath{\mathbf{z}} \| m^{1/\alpha} \leq f(\ensuremath{\mathbf{z}})^{1/\alpha} \leq \| \ensuremath{\mathbf{z}} \| M^{1/\alpha} $.
Since $\|\ensuremath{\mathbf{z}}\| |\ln \| \ensuremath{\mathbf{z}} \|| \to 0$ when $\|\ensuremath{\mathbf{z}} \| \to 0$, we hence obtain that $|\ln \| \ensuremath{\mathbf{z}} \|| f(\ensuremath{\mathbf{z}})^{1/\alpha} \to 0$.
\end{proof}
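As a quick numerical illustration of \eqref{eq:LB-UB} (our own sketch; the convex quadratic below, with $\alpha = 2$, $m = 1$ and $M = 9$, is an illustrative choice), the bounds can be checked on random points:
\begin{verbatim}
# Check m^(1/alpha) ||z|| <= f(z)^(1/alpha) <= M^(1/alpha) ||z||
# for f(x) = sum_i c_i x_i^2, positively homogeneous with degree 2.
import numpy as np

rng = np.random.default_rng(4)
c = np.array([1.0, 4.0, 9.0])
alpha, m, M = 2.0, c.min(), c.max()    # extremes of f on the unit sphere
z = rng.standard_normal((1000, 3)) * rng.uniform(0.1, 10.0, (1000, 1))
r = np.linalg.norm(z, axis=1)
fz = (z * z) @ c
assert np.all(r * m ** (1 / alpha) <= fz ** (1 / alpha))
assert np.all(fz ** (1 / alpha) <= r * M ** (1 / alpha))
print("bounds (LB-UB) hold on 1000 random points")
\end{verbatim}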
The following lemma is a consequence of the previous one and will be useful in the sequel.
\begin{lemma}\label{lem:si}
Let $f$ satisfy Assumptions~\ref{ass:f} and be continuous on $\Sphere_{1}$. Then for all $\rho > 0$, the ball centered at $0$ with radius $\rho$ is included in the sublevel set of level $\rho^{\alpha} M$, i.e.,
\begin{equation}\label{eq:bart1}
\mathbf{B}(0,\rho) \subset \bar{\mathcal{L}}_{\rho^{\alpha} M} \enspace.
\end{equation}
For all $K > 0$, the sublevel set of level $K$ is included in the ball centered at $0$ with radius $(K/m)^{1/\alpha}$, i.e.,
\begin{equation}\label{eq:bart2}
\bar{\mathcal{L}}_{K} \subset \mathbf{B}(0,{(K/m)^{1/\alpha}}) \enspace.
\end{equation}
\end{lemma}
\begin{proof}
From Lemma~\ref{lem:too} we have that for all $\ensuremath{\mathbf{z}}$, $m \| \ensuremath{\mathbf{z}} \|^{\alpha} \leq f(\ensuremath{\mathbf{z}}) \leq M \| \ensuremath{\mathbf{z}} \|^{\alpha} $. Let $\ensuremath{\mathbf{z}} \in \mathbf{B}(0,\rho)$, then $f(\ensuremath{\mathbf{z}}) \leq M \rho^{\alpha}$, i.e., $\ensuremath{\mathbf{z}} \in \bar{\mathcal{L}}_{\rho^{\alpha} M}$. Let $\ensuremath{\mathbf{z}} \in \bar{\mathcal{L}}_{K}$, then $m \| \ensuremath{\mathbf{z}} \|^{\alpha} \leq f(\ensuremath{\mathbf{z}}) \leq K$ and hence $\| \ensuremath{\mathbf{z}} \| \leq (K/m)^{1/\alpha} $.
\end{proof}
Last, we recall Euler's homogeneous function theorem.
\begin{theorem}[Euler's homogeneous function theorem]
Suppose that the function $f: \mathbb{R}^{n} \backslash \{0\} \mapsto \mathbb{R}$ is continuously differentiable. Then $f$ is positively homogeneous of degree $\alpha$ if and only if
$$
\mathbf{x} \cdot \nabla f(\mathbf{x})= \alpha f(\mathbf{x}) \mbox{ for all } \mathbf{x} \neq 0 \enspace.
$$
\end{theorem}
\noindent This theorem implies that if $f$ is positively homogeneous and continuously differentiable, and if $f(\ensuremath{\mathbf{x}}) > 0$ for $\ensuremath{\mathbf{x}} \neq 0$ (i.e., Assumption~\ref{ass:f}), then
\begin{equation}\label{prop-grad}
\nabla f(\ensuremath{\mathbf{x}}) \neq 0 \mbox{ for all } \ensuremath{\mathbf{x}} \neq 0 \enspace.
\end{equation}
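As an illustration of this relation (our own sketch, not part of the analysis), consider $f(\ensuremath{\mathbf{x}}) = ( \sum_i c_{i} \ensuremath{\mathbf{x}}_{i}^{2} )^{3/2}$ with $c_{i} > 0$, which is positively homogeneous with degree $\alpha = 3$ and satisfies Assumption~\ref{ass:f}:
\begin{verbatim}
# Check Euler's relation x . grad f(x) = alpha f(x) with an analytic gradient.
import numpy as np

c = np.array([1.0, 4.0, 9.0])
g = lambda x: np.dot(c, x * x)                # degree-2 building block
f = lambda x: g(x) ** 1.5                     # degree alpha = 3
grad_f = lambda x: 3.0 * g(x) ** 0.5 * c * x  # chain rule: 1.5 g^0.5 * 2 c x

x = np.array([0.3, -1.2, 0.7])
print(np.dot(x, grad_f(x)), 3.0 * f(x))       # equal up to rounding
\end{verbatim}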
\subsection{State Space for the Normalized Markov Chain}
The state space for the normalized Markov chain $\ensuremath{\mathbf{Z}}=(\ensuremath{\mathbf{Z}_{\k}})_{t\in\ensuremath{{\mathbb{{N}}}}}$ is {\it a priori} $\mathbb{R}^{\dim}$. However, if we start from $\ensuremath{\mathbf{Z}}_{0}=0$, we stay at $0$ forever, i.e., $\ensuremath{\mathbf{Z}_{\k}}=0$ for all $t$. This is due to the fact that the $(1+1)$-ES cannot accept worse solutions and $0$ is the global optimum of $f$. This would preclude the chain $\ensuremath{\mathbf{Z}}$ from being irreducible w.r.t.\ a non-singular measure. We therefore exclude $0$ from the state space, which is now equal to $\mathcal{Z}= \mathbb{R}^{\dim}_{\neq}$.
\section{Study of the Normalized Chain}\label{sec:stability}
We study in this section different properties of the homogeneous Markov chain $\ensuremath{\mathbf{Z}}=(\ensuremath{\mathbf{Z}_{\k}})_{t \in \ensuremath{{\mathbb{{N}}}}}$ defined in Section~\ref{sec:defZ}. Those properties will imply linear convergence of the $(1+1)$-ES as we will see in Section~\ref{sec:linear-convergence}. We start in the next section by expressing the transition kernel of the Markov chain.
\subsection{Transition Probability Kernel}
We follow standard notations and terminology for a time homogeneous Markov chain $(\ensuremath{\mathbf{Z}_{\k}})_{t \in \ensuremath{{\mathbb{{N}}}}}$ on a topological space $\mathcal{Z}$. The Borel sets of $\mathcal{Z}$ are denoted $\mathcal{B}(\mathcal{Z})$. A kernel $T$ is any function on $\mathcal{Z} \times \mathcal{B}(\mathcal{Z})$ such that $T(.,A)$ is measurable for all $A \in \mathcal{B}(\mathcal{Z})$ and $T(\ensuremath{\mathbf{z}},.)$ is a measure for all $\ensuremath{\mathbf{z}} \in \mathcal{Z}$. The transition probability kernel for $(\ensuremath{\mathbf{Z}_{\k}})_{t \in \ensuremath{{\mathbb{{N}}}}}$ is a kernel $P$ such that $P(.,A)$ is a non-negative measurable function for all $A \in \mathcal{B}(\mathcal{Z})$ and the measure $P(\ensuremath{\mathbf{z}},.)$ for all $\ensuremath{\mathbf{z}}$ is a probability measure. It is defined as
$$
P(\ensuremath{\mathbf{z}},A) = P_{\ensuremath{\mathbf{z}}}(\ensuremath{\mathbf{Z}}_{1} \in A) \enspace,
$$
where $P_{\ensuremath{\mathbf{z}}}$ denotes the probability law of the chain under the initial condition $\ensuremath{\mathbf{Z}}_{0} = \ensuremath{\mathbf{z}}$. Similarly $E_{\ensuremath{\mathbf{z}}}$ denotes the expectation of the chain under the initial condition $\ensuremath{\mathbf{Z}}_{0} = \ensuremath{\mathbf{z}}$. If a probability $\mu$ on $(\mathcal{Z},\mathcal{B}(\mathcal{Z}))$ is the initial distribution, the probability law and expectation under $\mu$ are denoted $P_{\mu}$ and $E_{\mu}$.
The \emph{$t$-step transition probability law} is defined iteratively by setting $P^{0}(\ensuremath{\mathbf{z}},A)=\delta_{\ensuremath{\mathbf{z}}}(A)$ and, for $t \geq 1$, inductively by
$$
P^{t}(\ensuremath{\mathbf{z}},A)=\int P(\ensuremath{\mathbf{z}},d\ensuremath{\mathbf{y}})P^{t-1}(\ensuremath{\mathbf{y}},A) \enspace.
$$
The relation $P^{t}(\ensuremath{\mathbf{z}},A) = P_{\ensuremath{\mathbf{z}}}( \ensuremath{\mathbf{Z}}_{t} \in A ) $ holds. With an abuse of notations similar to \cite[p~56]{Tweedie:book1993}, we will also for instance denote $\Pr(\ensuremath{\mathcal{N}} \in A) $ or $\Pr(f(\ensuremath{\mathbf{z}}+\ensuremath{\mathcal{N}}) > f(\ensuremath{\mathbf{z}}))$ for the probability of the events $\{ \ensuremath{\mathcal{N}} \in A \}$, $\{ f(\ensuremath{\mathbf{z}}+\ensuremath{\mathcal{N}}) > f(\ensuremath{\mathbf{z}}) \}$ (where $\ensuremath{\mathcal{N}}$ will typically be a standard multivariate normal random vector) without specifically defining the space where $\ensuremath{\mathcal{N}}$ lives, which could be the space where $\ensuremath{\mathbf{Z}}$ is defined or another space. Similarly, $E[1_{\{ f(\ensuremath{\mathbf{z}}+\ensuremath{\mathcal{N}}) > f(\ensuremath{\mathbf{z}}) \}}]$ will be used for the expectation of the random variable $1_{\{ f(\ensuremath{\mathbf{z}}+\ensuremath{\mathcal{N}}) > f(\ensuremath{\mathbf{z}}) \}}$.
We derive in the next proposition an expression for the transition kernel of $(\ensuremath{\mathbf{Z}_{\k}})_{t \in \ensuremath{{\mathbb{{N}}}}}$ when $f$ is a scaling-invariant function.
\begin{proposition}\label{prop:kernelplus}
Let $f:\mathbb{R}^{n} \mapsto \mathbb{R}$ be a scaling-invariant function and let $\ensuremath{\mathbf{Z}}$ be the Markov chain defined in \eqref{eq:transitionZ}. Its transition probability kernel is given for all $\ensuremath{\mathbf{z}} \in \mathcal{Z}= \mathbb{R}^{\dim}_{\neq}$ and $A \in \mathcal{B}(\mathcal{Z})$ by
\begin{align}
P(\ensuremath{\mathbf{z}},A)& = \int 1_{A}(\ensuremath{\mathbf{u}}) {q}(\ensuremath{\mathbf{z}},\ensuremath{\mathbf{u}}) d \ensuremath{\mathbf{u}}
+ 1_{A}(\ensuremath{\mathbf{z}} \gamma^{1/q}) \Pr(f(\ensuremath{\mathbf{z}}+\ensuremath{\mathcal{N}}) > f(\ensuremath{\mathbf{z}}) )
\end{align}
where ${q}(\ensuremath{\mathbf{z}},\ensuremath{\mathbf{u}})=1_{\{f(\ensuremath{\mathbf{u}}) \leq f(\ensuremath{\mathbf{z}}/\gamma) \}}(\ensuremath{\mathbf{u}}) p_{{\mathcal{N}}}(\gamma \ensuremath{\mathbf{u}} - \ensuremath{\mathbf{z}}) \gamma $ with $p_{{\mathcal{N}}}$ the density of a standard multivariate normal distribution and $\ensuremath{\mathcal{N}} \sim \mathcal{N}(0,I_{n})$.
\end{proposition}
{\em Proof.}
Given $\ensuremath{\mathbf{Z}}_{0} = \ensuremath{\mathbf{z}}$, and $\ensuremath{\mathbf{U}}_{0}^{1} = \ensuremath{\mathcal{N}}$ where $\ensuremath{\mathcal{N}} \sim \ensuremath{\mathcal{N}}(0,I_n)$, $\ensuremath{\mathbf{Z}}_{1}$ satisfies
$
\ensuremath{\mathbf{Z}}_{1} = \frac{\ensuremath{\mathbf{z}} + \ensuremath{\mathcal{N}} 1_{\{ f( \ensuremath{\mathbf{z}} + \ensuremath{\mathcal{N}}) \leq f(\ensuremath{\mathbf{z}}) \}}}{(\gamma - \gamma^{-1/q})1_{\{f(\ensuremath{\mathbf{z}}+ \ensuremath{\mathcal{N}}) \leq f(\ensuremath{\mathbf{z}}) \}} + \gamma^{-1/q}}
$.
Hence, the transition probability kernel satisfies $P(\ensuremath{\mathbf{z}},A)=$
$$
\Pr \left( \left\{ \frac{\ensuremath{\mathbf{z}} + \ensuremath{\mathcal{N}}}{\gamma} \in A \right\} \cap \{f(\ensuremath{\mathbf{z}}+ \ensuremath{\mathcal{N}}) \leq f(\ensuremath{\mathbf{z}}) \} \right) + \Pr \left( \left\{ \frac{\ensuremath{\mathbf{z}}}{\gamma^{-1/q}} \in A \right\} \cap \{f(\ensuremath{\mathbf{z}}+ \ensuremath{\mathcal{N}}) > f(\ensuremath{\mathbf{z}}) \} \right) \enspace,
$$
and thus satisfies
\begin{align*}
P(\ensuremath{\mathbf{z}},A)&= \int 1_{A} \left( \frac{\ensuremath{\mathbf{z}} + \ensuremath{\mathbf{u}}}{\gamma} \right) 1_{\{f(\ensuremath{\mathbf{z}}+\ensuremath{\mathbf{u}}) \leq f(\ensuremath{\mathbf{z}}) \}}(\ensuremath{\mathbf{u}}) p_{{\mathcal{N}}}(\ensuremath{\mathbf{u}}) d\ensuremath{\mathbf{u}}
+ 1_{A}(\ensuremath{\mathbf{z}} \gamma ^{1/q}) \Pr(f(\ensuremath{\mathbf{z}}+\ensuremath{\mathcal{N}}) > f(\ensuremath{\mathbf{z}}) )\\
& = \int 1_{A}(\bar \ensuremath{\mathbf{u}}) \underbrace{1_{\{f(\gamma \bar \ensuremath{\mathbf{u}}) \leq f(\ensuremath{\mathbf{z}}) \}}(\bar \ensuremath{\mathbf{u}}) p_{{\mathcal{N}}}(\gamma \bar \ensuremath{\mathbf{u}} - \ensuremath{\mathbf{z}}) \gamma}_{{q}(\ensuremath{\mathbf{z}},\bar \ensuremath{\mathbf{u}})} d \bar \ensuremath{\mathbf{u}}
+ 1_{A}(\ensuremath{\mathbf{z}} \gamma ^{\frac1q}) \Pr(f(\ensuremath{\mathbf{z}}+\ensuremath{\mathcal{N}}) > f(\ensuremath{\mathbf{z}}) ). \endproof
\end{align*}
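The decomposition of $P(\ensuremath{\mathbf{z}},\cdot)$ into an absolutely continuous part and a singular part can be checked by simulation. The following sketch (our own illustration, in dimension $n=1$ with $f(x)=x^{2}$ and $\ensuremath{\mathbf{z}}=2$, all arbitrary choices) verifies that the mass put on the point $\ensuremath{\mathbf{z}} \gamma^{1/q}$ equals $\Pr(f(\ensuremath{\mathbf{z}}+\ensuremath{\mathcal{N}}) > f(\ensuremath{\mathbf{z}}))$, which here has the closed form $\Pr(\ensuremath{\mathcal{N}} > 0) + \Pr(\ensuremath{\mathcal{N}} < -4)$:
\begin{verbatim}
# Monte Carlo check of the singular mass in Proposition prop:kernelplus
# for n = 1, f(x) = x^2, z = 2: Z_1 = z * gamma^(1/q) iff f(z + N) > f(z).
import math
import numpy as np

rng = np.random.default_rng(5)
z, trials = 2.0, 200_000
u = rng.standard_normal(trials)           # N ~ N(0, 1)
failure = (z + u) ** 2 > z ** 2           # event {f(z + N) > f(z)}
mass_singular = failure.mean()            # fraction of moves to z * gamma^(1/q)

# Closed form: (2 + N)^2 > 4  iff  N > 0 or N < -4.
exact = 0.5 + 0.5 * math.erfc(4.0 / math.sqrt(2.0))
print(mass_singular, exact)               # agree up to Monte Carlo error
\end{verbatim}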
\subsection{Irreducibility, Small Sets and Aperiodicity}\label{sec:ISSA}
A Markov chain $\ensuremath{\mathbf{Z}}=(\ensuremath{\mathbf{Z}_{\k}})_{t \in \ensuremath{{\mathbb{{N}}}}}$ on a state space $\mathcal{Z}$ is said to be $\varphi$-irreducible if there exists a measure $\varphi$ on $\mathcal{Z}$ such that for all $A \in \mathcal{B}(\mathcal{Z})$, $\varphi(A) > 0$ implies that $P_{\ensuremath{\mathbf{z}}}(\tau_{A} < \infty) > 0$ for all $\ensuremath{\mathbf{z}} \in \mathcal{Z}$, where $\tau_{A}=\min \{t > 0 : \ensuremath{\mathbf{Z}_{\k}} \in A\}$ is the first return time to $A$. An equivalent definition is: for all $\ensuremath{\mathbf{z}} \in \mathcal{Z}$ and for all $A \in \mathcal{B}(\mathcal{Z})$
$$
\varphi(A) > 0 \Rightarrow \exists \, \, t_{\ensuremath{\mathbf{z}},A} \in \ensuremath{{\mathbb{{N}}}} \text{ such that} \,\, P_{\ensuremath{\mathbf{z}}}(\ensuremath{\mathbf{Z}}_{ t_{\ensuremath{\mathbf{z}},A}} \in A) > 0 \enspace.
$$
Given that a chain $\ensuremath{\mathbf{Z}}$ is $\varphi$-irreducible, there exists a maximal irreducibility measure $\psi$, and all maximal irreducibility measures are equivalent (see \cite[Proposition~4.4.2]{Tweedie:book1993}).
The class of sets of positive $\psi$-measure is denoted
$$
\mathcal{B}^{+}(\mathcal{Z}) := \{ A \in \mathcal{B}(\mathcal{Z}) : \psi(A) > 0 \} \enspace.
$$
In the sequel we continue to denote by $\psi$ the maximal irreducibility measure; hence if $\ensuremath{\mathbf{Z}}$ is said to be $\psi$-irreducible, it means that it is $\varphi$-irreducible for some $\varphi$ and that $\psi$ is a maximal irreducibility measure.
A set $A$ is \emph{full} if $\psi(A^{c}) = 0 $ and \emph{absorbing} if $P(\ensuremath{\mathbf{z}},A) =1$ for $\ensuremath{\mathbf{z}} \in A$.
In addition, a set $C$ is a \emph{small set} if there exists $t \in \ensuremath{{\mathbb{{N}}}}$ and a non-trivial measure $\nu_{t}$ on $\mathcal{B}(\mathcal{Z})$ such that for all $\ensuremath{\mathbf{z}} \in C$
\begin{equation}
P^{t}(\ensuremath{\mathbf{z}},A) \geq \nu_{t}(A) \, , A \in \mathcal{B}(\mathcal{Z}) \enspace.
\end{equation}
The small set is then called a $\nu_{t}$-small set.
Consider a small set $C$ satisfying the previous equation with $\nu_{t}(C) >0$ and denote $\nu = \nu_{t}$. The chain is called aperiodic if the g.c.d.\ of the set
$$
E_{C} = \{ k \geq 1: C \text{ is a } \nu_{k} \mbox{-small set with } \nu_{k} = \alpha_{k} \nu \mbox{ for some } \alpha_{k} >0 \}
$$
is one for some (and then for every) small set $C$.
We now establish the $\varphi$-irreducibility, identify some small sets and show the aperiodicity of the normalized chain associated to the \ensuremath{(1+1)}\xspace-ES.
\subsubsection{$\varphi$-irreducibility}
We denote $\mu_{\rm Leb}$ the Lebesgue measure on $\mathcal{Z} = \mathbb{R}^{\dim}_{\neq}$.
We prove in the next proposition that the normalized Markov chain associated to the \ensuremath{(1+1)}\xspace-ES is irreducible with respect to the Lebesgue measure.
\begin{proposition}Assume that $f$ satisfies Assumptions~\ref{ass:f} and is continuous on $\mathbb{R}^{n}_{\neq}$. Assume that $\gamma > 1$. Then, the Markov chain $\ensuremath{\mathbf{Z}}$ associated to the \ensuremath{(1+1)}\xspace-ES is irreducible w.r.t. the Lebesgue measure $\mu_{\rm Leb}$.
\end{proposition}
\begin{proof}
Let $\ensuremath{\mathbf{z}} \in \mathcal{Z}$ and let $\bar{\mathcal{L}}_{f(\ensuremath{\mathbf{z}}/\gamma)}^{f}$ be the sublevel set $\{ \ensuremath{\mathbf{u}} \in \mathcal{Z}, f(\ensuremath{\mathbf{u}}) \leq f(\ensuremath{\mathbf{z}}/\gamma) \} = \{ \ensuremath{\mathbf{u}} \in \mathcal{Z}, f( \gamma \ensuremath{\mathbf{u}}) \leq f(\ensuremath{\mathbf{z}}) \}$. Let $A \in \mathcal{B}(\mathcal{Z})$ be such that $\mu_{\rm Leb}(A) > 0$. By the regularity of the Lebesgue measure we can find a compact $K \subset A$ such that $\mu_{\rm Leb}(K) > 0$. Since $K \subset A$, for all $\ensuremath{\mathbf{z}}$, $P(\ensuremath{\mathbf{z}},A) \geq P(\ensuremath{\mathbf{z}},K)$.
If (i) $K \subset \bar{\mathcal{L}}_{f(\ensuremath{\mathbf{z}}/\gamma)}^{f}$, then $ P(\ensuremath{\mathbf{z}},K) = \int_{K} \gamma p_{{\mathcal{N}}}(\gamma \ensuremath{\mathbf{u}} - \ensuremath{\mathbf{z}}) d\ensuremath{\mathbf{u}} >0$ since the map $\ensuremath{\mathbf{u}} \in \mathbb{R}^{\dim} \mapsto p_{{\mathcal{N}}}(\gamma \ensuremath{\mathbf{u}} - \ensuremath{\mathbf{z}})$ is strictly positive.
If (ii) $K$ is not included in $ \bar{\mathcal{L}}_{f(\ensuremath{\mathbf{z}}/\gamma)}^{f} $, we first let the chain move away from the origin. Whenever the sampled $\ensuremath{\mathcal{N}}$ satisfies $f(\ensuremath{\mathbf{z}}+\ensuremath{\mathcal{N}}) > f(\ensuremath{\mathbf{z}})$ (an event of strictly positive probability: by Lemma~\ref{lem:propf}, $ \bar{\mathcal{L}}_{f(\ensuremath{\mathbf{z}})}^{f} $ is bounded, so it suffices to sample outside a ball containing it), we have $\ensuremath{\mathbf{Z}}_{1}=\ensuremath{\mathbf{z}} \gamma^{1/q}$, which is at a larger distance from $0$ (as we assumed that $\gamma > 1$). By repeating this, we build a sequence $\ensuremath{\mathbf{Z}_{\k}}= \ensuremath{\mathbf{z}} \gamma^{t/q}$ with $f(\ensuremath{\mathbf{Z}_{\k}}) = \gamma^{t \alpha/q} f(\ensuremath{\mathbf{z}})$, hence $f(\ensuremath{\mathbf{Z}_{\k}})$ and $f(\ensuremath{\mathbf{Z}_{\k}} / \gamma)$ go to $\infty$. The set $K$ being compact, we can find a ball $\mathbf{B}(0,\rho)$ such that $K \subset \mathbf{B}(0,\rho)$. In addition, from Lemma~\ref{lem:si} we know that for all $\rho$, $\mathbf{B}(0,\rho) \subset \bar{\mathcal{L}}_{\rho^{\alpha}M}^{f}$; hence, choosing $t$ large enough such that $\bar{\mathcal{L}}_{\rho^{\alpha} M}^{f} \subset \bar{\mathcal{L}}_{f(\ensuremath{\mathbf{Z}_{\k}}/\gamma)}^{f}$, we have $K \subset \bar{\mathcal{L}}_{f(\ensuremath{\mathbf{Z}_{\k}}/\gamma)}^{f}$ and, by (i), $P(\ensuremath{\mathbf{z}}\gamma^{t/q},K) > 0$.
Thus $P^{t+1}(\ensuremath{\mathbf{z}},A) \geq P^{t+1}(\ensuremath{\mathbf{z}},K) \geq \Pr(f(\ensuremath{\mathbf{z}} + \ensuremath{\mathcal{N}}) > f(\ensuremath{\mathbf{z}})) \ldots \Pr\left(f(\ensuremath{\mathbf{z}} \gamma^{\frac{t-1}{q}}+\ensuremath{\mathcal{N}}) > f(\ensuremath{\mathbf{z}} \gamma^{\frac{t-1}{q}}) \right) P(\ensuremath{\mathbf{z}}\gamma^{t/q},K) \geq \theta^{t} P(\ensuremath{\mathbf{z}}\gamma^{t/q},K)> 0$, where $\theta > 0$ is a lower bound on $\bar{\ensuremath{\mathbf{z}}} \mapsto \Pr(f(\bar{\ensuremath{\mathbf{z}}} + \ensuremath{\mathcal{N}}) > f(\bar{\ensuremath{\mathbf{z}}}))$ over a ball that includes $ \bar{\mathcal{L}}_{f(\ensuremath{\mathbf{Z}_{\k}})}^{f} $.
Indeed, since $f(\ensuremath{\mathbf{Z}}_{k})$ increases, for all $k \leq t$ we have $\ensuremath{\mathbf{Z}}_{k} \in \bar{\mathcal{L}}_{f(\ensuremath{\mathbf{Z}_{\k}})}^{f}$, and according to Lemma~\ref{lem:si} there exists $R$ such that $ \bar{\mathcal{L}}_{f(\ensuremath{\mathbf{Z}_{\k}})}^{f} \subset B(0,R) $. Then $ \{ \ensuremath{\mathcal{N}} \in B(0,2R)^{c} \} \subset \{ f(\bar{\ensuremath{\mathbf{z}}} + \ensuremath{\mathcal{N}}) > f(\bar{\ensuremath{\mathbf{z}}}) \}$ for all $\bar{\ensuremath{\mathbf{z}}}$ in $\bar{\mathcal{L}}_{f(\ensuremath{\mathbf{Z}_{\k}})}^{f} $, so we can take $\theta = \Pr( \ensuremath{\mathcal{N}} \in B(0,2R)^{c} )$.
\end{proof}
\subsubsection{Small Sets and Aperiodicity}\label{small-sets}
We investigate small sets for the \ensuremath{(1+1)}\xspace-ES, assuming that $f$ is positively homogeneous with degree $\alpha$, that $f(\ensuremath{\mathbf{x}}) >0$ for $\ensuremath{\mathbf{x}} \neq 0$ and that $f$ is continuous on $\mathbb{R}^{n}_{\neq}$. Consider the sets $D_{[l_{1},l_{2}]}$ with $ 0 < l_{1} < l_{2} $ defined as
\begin{equation}\label{eq:yang}
D_{[l_{1},l_{2}]} : = \{ \ensuremath{\mathbf{z}} \in \mathcal{Z}, l_{1} \leq f(\ensuremath{\mathbf{z}}) \leq l_{2} \} \enspace.
\end{equation}
Because $f$ is continuous, the sets $D_{[l_{1},l_{2}]} = f^{-1}([l_{1},l_{2}])$ are closed, and by Lemma~\ref{lem:si} they are also bounded, so that the sets $D_{[l_{1},l_{2}]}$ are compact. We prove in this section that the sets $D_{[l_{1},l_{2}]}$ are small sets for the Markov chain $\ensuremath{\mathbf{Z}}$.
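For instance, for the positively homogeneous function $f(\ensuremath{\mathbf{z}}) = \| \ensuremath{\mathbf{z}} \|^{\alpha}$ (an illustration only, not used in the sequel), the set $D_{[l_{1},l_{2}]}$ is the closed annulus
$$
D_{[l_{1},l_{2}]} = \{ \ensuremath{\mathbf{z}} \in \mathcal{Z}, \, l_{1}^{1/\alpha} \leq \| \ensuremath{\mathbf{z}} \| \leq l_{2}^{1/\alpha} \} \enspace,
$$
which is visibly compact and bounded away from $0$.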
\begin{lemma}\label{lem:smallset}
Assume that $f$ is positively homogeneous with degree $\alpha$, $f(\ensuremath{\mathbf{x}})>0$ for $\ensuremath{\mathbf{x}} \neq 0$ and $f$ is continuous on $\mathbb{R}^{n}_{\neq}$. Assume that $\gamma > 1$. Let $D_{[l_{1},l_{2}]}$ be a set of the type \eqref{eq:yang} with $0<l_{1} < l_{2}$.
Let $t_{0} \geq 1$ and let $R > 0$ be such that $ \bar{\mathcal{L}}_{\gamma^{\frac{\alpha t_{0}}{q}} l_{2}} \subset B(0,R)$ (see Lemma~\ref{lem:si}). Then for all $\ensuremath{\mathbf{z}} $ in $D_{[l_{1},l_{2}]}$ and for all $t \leq t_{0}$
\begin{equation}\label{eq:maj-proba}
\Pr \left( f( \ensuremath{\mathbf{z}} \gamma^{\frac{t}{q}} + \ensuremath{\mathcal{N}} ) > f \left(\ensuremath{\mathbf{z}} \gamma^{\frac{t}{q}} \right) \right) \geq
\Pr \left( \ensuremath{\mathcal{N}} \in B(0,2R)^{c} \right) =:\theta_{\ref{lem:smallset}}
\end{equation}
where $\ensuremath{\mathcal{N}} \sim \ensuremath{\mathcal{N}}(0,I_n)$.
For all $\ensuremath{\mathbf{z}} \in D_{[l_{1},l_{2}]}$ and $t \leq t_{0}$, the following minorization holds, for $A \in \mathcal{B}(\mathcal{Z})$
\begin{equation}\label{eq:ntplusone-smallset}
P^{t+1}(\ensuremath{\mathbf{z}},A) \geq\theta_{\ref{lem:smallset}}^{t} \gamma \delta_{t} \int 1_{A \cap D_{[l_{1},l_{2}]}}(\ensuremath{\mathbf{u}}) 1_{ \{ f(\gamma \ensuremath{\mathbf{u}}) \leq \gamma^{{t \alpha}/{q}} l_{1} \} }(\ensuremath{\mathbf{u}}) d \ensuremath{\mathbf{u}} = : \nu_{t+1}(A) \enspace,
\end{equation}
where $\delta_{t} = \min_{(\ensuremath{\mathbf{z}}, \ensuremath{\mathbf{u}}) \in D_{[l_{1},l_{2}]}^{2}} p_{{\mathcal{N}}}( \gamma \ensuremath{\mathbf{u}} - \ensuremath{\mathbf{z}} \gamma^{t/q} ) > 0$.
In addition $\nu_{t+1}$ is a non-trivial measure if $t > q$ and hence $D_{[l_{1},l_{2}]}$ is a $\nu_{t+1}$-small set provided $t > q$.
\end{lemma}
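To illustrate the role of the condition $t > q$, consider again the example $f(\ensuremath{\mathbf{z}}) = \| \ensuremath{\mathbf{z}} \|^{\alpha}$ (not used in the sequel): the constraint $f(\gamma \ensuremath{\mathbf{u}}) \leq \gamma^{t\alpha/q} l_{1}$ then reads $\| \ensuremath{\mathbf{u}} \| \leq \gamma^{t/q - 1} l_{1}^{1/\alpha}$, so that $\nu_{t+1}(A)$ equals $\theta_{\ref{lem:smallset}}^{t} \gamma \delta_{t}$ times the Lebesgue measure of $A$ intersected with the annulus
$$
\{ \ensuremath{\mathbf{u}} \in \mathcal{Z}, \, l_{1}^{1/\alpha} \leq \| \ensuremath{\mathbf{u}} \| \leq \min ( l_{2}^{1/\alpha}, \gamma^{t/q - 1} l_{1}^{1/\alpha} ) \} \enspace,
$$
which is non-degenerate exactly when $\gamma^{t/q - 1} > 1$, i.e., when $t > q$.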
\begin{proof}
Note first that for all $t \leq t_{0}$ and all $\ensuremath{\mathbf{z}} \in D_{[l_{1},l_{2}]}$, $\bar{\mathcal{L}}_{f(\ensuremath{\mathbf{z}} \gamma^{t/q})} \subset \bar{\mathcal{L}}_{\gamma^{{\alpha t_{0}}/{q}} l_{2}} $ (we use here that $\gamma > 1$). We now claim that if $\ensuremath{\mathcal{N}} \in B(0,2R)^{c}$, then $ f( \ensuremath{\mathbf{z}} \gamma^{{t}/{q}} + \ensuremath{\mathcal{N}} ) > f \left(\ensuremath{\mathbf{z}} \gamma^{{t}/{q}} \right)$. Indeed, for all $\ensuremath{\mathbf{z}}$ in $D_{[l_{1},l_{2}]}$, the point $\ensuremath{\mathbf{z}} \gamma^{\frac{t}{q}} $ lies inside $\bar{\mathcal{L}}_{f(\ensuremath{\mathbf{z}} \gamma^{t/q})} \subset\bar{\mathcal{L}}_{\gamma^{{\alpha t_{0}}/{q}} l_{2}} \subset B(0,R)$ (by definition of $R$); hence if $\ensuremath{\mathcal{N}} \in B(0,2R)^{c}$, then $\ensuremath{\mathbf{z}} \gamma^{\frac{t}{q}} + \ensuremath{\mathcal{N}}$ is outside $B(0,R)$, i.e., outside $\bar{\mathcal{L}}_{f \left(\ensuremath{\mathbf{z}} \gamma^{{t}/{q}} \right)}$, i.e., $f( \ensuremath{\mathbf{z}} \gamma^{{t}/{q}} + \ensuremath{\mathcal{N}} ) > f \left(\ensuremath{\mathbf{z}} \gamma^{{t}/{q}} \right)$. This proves \eqref{eq:maj-proba}. We now prove \eqref{eq:ntplusone-smallset}. We lower bound the probability $P^{t+1}(\ensuremath{\mathbf{z}},A)=P_{\ensuremath{\mathbf{z}}}( \ensuremath{\mathbf{Z}}_{t+1} \in A) $ by the probability to reach $A$ in $t+1$ steps starting from $\ensuremath{\mathbf{z}}$, with no success at the first $t$ iterations (indices $0$ to $t-1$) followed by a success at iteration $t$:
\begin{align*}
P^{t+1}(\ensuremath{\mathbf{z}},A) \geq \underbrace{P_{\ensuremath{\mathbf{z}}} \left( \{ \ensuremath{\mathbf{Z}}_{t+1} \in A \} \cap \{ f( \ensuremath{\mathbf{Z}}_{t} + \ensuremath{\mathbf{U}}_{t}^{1}) \leq f( \ensuremath{\mathbf{Z}}_{t}) \} \bigcap_{k=0}^{t-1} \{ f( \ensuremath{\mathbf{Z}}_{k} + \ensuremath{\mathbf{U}}_{k}^{1}) > f(\ensuremath{\mathbf{Z}}_{k}) \} \right)}_{A_{1}} \enspace.
\end{align*}
However, on the event $ \{ f( \ensuremath{\mathbf{Z}}_{k} + \ensuremath{\mathbf{U}}_{k}^{1}) > f(\ensuremath{\mathbf{Z}}_{k}) \}$ we have $\ensuremath{\mathbf{Z}}_{k+1} = \ensuremath{\mathbf{Z}}_{k} \gamma^{1/q} $, so that, given $\ensuremath{\mathbf{Z}}_{0} = \ensuremath{\mathbf{z}}$, the following equality between events holds
\begin{multline*}
\{ \ensuremath{\mathbf{Z}}_{t+1} \in A \} \cap \{ f( \ensuremath{\mathbf{Z}}_{t} + \ensuremath{\mathbf{U}}_{t}^{1}) \leq f( \ensuremath{\mathbf{Z}}_{t} ) \}
\bigcap_{k=0}^{t-1} \{ f( \ensuremath{\mathbf{Z}}_{k} + \ensuremath{\mathbf{U}}_{k}^{1}) > f(\ensuremath{\mathbf{Z}}_{k}) \} = \\
\left\{ \frac{\ensuremath{\mathbf{z}} \gamma^{t/q} + \ensuremath{\mathbf{U}}_{t}^{1}}{\gamma} \in A \right\}
\cap \left\{ f( \ensuremath{\mathbf{z}} \gamma^{t/q} + \ensuremath{\mathbf{U}}_{t}^{1}) \leq f( \ensuremath{\mathbf{z}} \gamma^{t/q} ) \right\}
\bigcap_{k=0}^{t-1} \left\{ f( \ensuremath{\mathbf{z}} \gamma^{k/q} + \ensuremath{\mathbf{U}}_{k}^{1}) > f(\ensuremath{\mathbf{z}} \gamma^{k/q}) \right\} \enspace.
\end{multline*}
Hence by independence of the $(\ensuremath{\mathbf{U}}_{k})_{0 \leq k \leq t}$, $A_{1} =$
\begin{align*}
&
P_{\ensuremath{\mathbf{z}}} \left( \left\{ \frac{\ensuremath{\mathbf{z}} \gamma^{t/q} + \ensuremath{\mathbf{U}}_{t}^{1}}{\gamma} \in A \right\} \cap \{ f( \ensuremath{\mathbf{z}} \gamma^{\frac{t}{q}} + \ensuremath{\mathbf{U}}_{t}^{1}) \leq f(\ensuremath{\mathbf{z}} \gamma^{\frac{t}{q}} ) \} \right) \prod_{k=0}^{t-1} P_{\ensuremath{\mathbf{z}}} \left( f( \ensuremath{\mathbf{z}} \gamma^{\frac{k}{q}} + \ensuremath{\mathbf{U}}_{k}^{1}) > f(\ensuremath{\mathbf{z}} \gamma^{\frac{k}{q}}) \right)
\intertext{using now \eqref{eq:maj-proba}}
& \geq P_{\ensuremath{\mathbf{z}}} \left( \left\{ \frac{\ensuremath{\mathbf{z}} \gamma^{t/q} + \ensuremath{\mathbf{U}}_{t}^{1}}{\gamma} \in A \right\} \cap \{ f( \ensuremath{\mathbf{z}} \gamma^{t/q} + \ensuremath{\mathbf{U}}_{t}^{1}) \leq f( \ensuremath{\mathbf{z}} \gamma^{t/q} ) \} \right) \theta_{\ref{lem:smallset}}^{t} \\
& = \theta_{\ref{lem:smallset}}^{t} \int 1_{A} \left( \frac{\ensuremath{\mathbf{z}} \gamma^{t/q} + \ensuremath{\mathbf{u}}}{\gamma} \right) 1_{\{ f( \ensuremath{\mathbf{z}} \gamma^{t/q} + \ensuremath{\mathbf{u}}) \leq f( \ensuremath{\mathbf{z}} \gamma^{t/q} ) \}}(\ensuremath{\mathbf{u}}) p_{{\mathcal{N}}}(\ensuremath{\mathbf{u}}) d \ensuremath{\mathbf{u}} \\
& = \theta_{\ref{lem:smallset}}^{t} \int 1_{A} \left( \bar \ensuremath{\mathbf{u}} \right) 1_{\{ f( \gamma \bar \ensuremath{\mathbf{u}}) \leq f( \ensuremath{\mathbf{z}} \gamma^{t/q} ) \}}(\bar \ensuremath{\mathbf{u}}) p_{{\mathcal{N}}}( \gamma \bar \ensuremath{\mathbf{u}} - \ensuremath{\mathbf{z}} \gamma^{t/q}) \gamma d \bar \ensuremath{\mathbf{u}} \\
& \geq \theta_{\ref{lem:smallset}}^{t} \int 1_{A \cap D_{[l_{1},l_{2}]}} \left( \bar \ensuremath{\mathbf{u}} \right) 1_{\{ f( \gamma \bar \ensuremath{\mathbf{u}}) \leq f( \ensuremath{\mathbf{z}} \gamma^{t/q} ) \}}(\bar \ensuremath{\mathbf{u}}) p_{{\mathcal{N}}}( \gamma \bar \ensuremath{\mathbf{u}} - \ensuremath{\mathbf{z}} \gamma^{t/q}) \gamma d \bar \ensuremath{\mathbf{u}} \enspace .
\end{align*}
For all $\ensuremath{\mathbf{z}} \in D_{[l_{1},l_{2}]}$ and all $\bar \ensuremath{\mathbf{u}}$, $ 1_{\{ f( \gamma \bar \ensuremath{\mathbf{u}}) \leq f( \ensuremath{\mathbf{z}} \gamma^{t/q} ) \}}(\bar \ensuremath{\mathbf{u}}) = 1_{\{ f( \gamma \bar \ensuremath{\mathbf{u}}) \leq \gamma^{{t\alpha}/{q}} f( \ensuremath{\mathbf{z}} ) \}}(\bar \ensuremath{\mathbf{u}}) \geq 1_{\left\{ f( \gamma \bar \ensuremath{\mathbf{u}}) \leq \gamma^{{t\alpha}/{q}} l_{1} \right\}}(\bar \ensuremath{\mathbf{u}}) $, and for all $(\ensuremath{\mathbf{z}},\bar \ensuremath{\mathbf{u}}) \in D_{[l_{1},l_{2}]}^{2}$, $p_{{\mathcal{N}}}( \gamma \bar \ensuremath{\mathbf{u}} - \ensuremath{\mathbf{z}} \gamma^{t/q}) \geq \min_{(\ensuremath{\mathbf{z}},\bar \ensuremath{\mathbf{u}}) \in D_{[l_{1},l_{2}]}^{2}} p_{{\mathcal{N}}}( \gamma \bar \ensuremath{\mathbf{u}} - \ensuremath{\mathbf{z}} \gamma^{t/q} )$. Since $D_{[l_{1},l_{2}]}$ is compact, the set $\{\gamma \bar \ensuremath{\mathbf{u}} - \ensuremath{\mathbf{z}} \gamma^{t/q}, (\ensuremath{\mathbf{z}},\bar \ensuremath{\mathbf{u}}) \in D_{[l_{1},l_{2}]}^{2} \}$ is also compact, and the continuous, strictly positive density $p_{{\mathcal{N}}}$ attains a strictly positive minimum on it: there exists $\delta_{t} > 0$ such that $\min_{(\ensuremath{\mathbf{z}},\bar\ensuremath{\mathbf{u}}) \in D_{[l_{1},l_{2}]}^{2}} p_{{\mathcal{N}}}( \gamma \bar \ensuremath{\mathbf{u}} - \ensuremath{\mathbf{z}} \gamma^{t/q} ) \geq \delta_{t}$.
Hence
$$
P^{t+1}(\ensuremath{\mathbf{z}},A) \geq A_{1} \geq \theta_{\ref{lem:smallset}}^{t} \delta_{t} \gamma \int 1_{A \cap D_{[l_{1},l_{2}]}}(\ensuremath{\mathbf{u}})1_{\left\{ f( \gamma \ensuremath{\mathbf{u}}) \leq \gamma^{{t\alpha}/{q}} l_{1} \right\}}(\ensuremath{\mathbf{u}}) d\ensuremath{\mathbf{u}}
$$
which is a non-trivial measure if $t > q$. Indeed, for $\ensuremath{\mathbf{u}} \in D_{[l_{1},l_{2}]}$ we have $f(\ensuremath{\mathbf{u}}) \geq l_{1}$ and hence $f(\gamma \ensuremath{\mathbf{u}}) = \gamma^{\alpha} f(\ensuremath{\mathbf{u}}) \geq \gamma^{\alpha} l_{1}$, so the two indicator functions in the integrand can be simultaneously non-zero only if $\gamma^{\alpha} l_{1} \leq \gamma^{t \alpha/q} l_{1}$; for $t > q$, the set $D_{[l_{1},l_{2}]} \cap \{ \ensuremath{\mathbf{u}} : f(\gamma \ensuremath{\mathbf{u}}) \leq \gamma^{t\alpha/q} l_{1} \}$ has positive Lebesgue measure.
\end{proof}
Remark that the constant $\theta_{\ref{lem:smallset}}$ defined in \eqref{eq:maj-proba} and used in \eqref{eq:ntplusone-smallset} depends on $t_{0}$, as the radius $R$ of the ball containing the sublevel set $ \bar{\mathcal{L}}_{\gamma^{{\alpha t_{0}}/{q}} l_{2}}$ depends on $t_{0}$. To prove the aperiodicity of the chain we construct a joint minorization measure $\nu$ that works for two consecutive integers, whose greatest common divisor is hence $1$, and that satisfies $\nu(D_{[l_{1},l_{2}]}) > 0$. More precisely we prove the following proposition.
\begin{proposition}\label{prop:D-6-7smallSets}
Assume that $f$ is positively homogeneous with degree $\alpha$, $f(\ensuremath{\mathbf{x}})>0$ for $\ensuremath{\mathbf{x}} \neq 0$ and $f$ is continuous on $\mathbb{R}^{n}_{\neq}$. Assume that $\gamma > 1$. Let $D_{[l_{1},l_{2}]}$ be a set of the type \eqref{eq:yang} with $0 < l_{1} < l_{2}$. Let $\bar{q} = \lfloor q \rfloor +1 $. Then for all $\ensuremath{\mathbf{z}} \in D_{[l_{1},l_{2}]}$ and all $A \in \mathcal{B}(\mathcal{Z})$
\begin{align}\label{hom1}
P^{\bar{q}+1}(\ensuremath{\mathbf{z}}, A) & \geq \zeta^{\bar{q}} \nu(A) \\\label{hom2}
P^{\bar{q}+2}(\ensuremath{\mathbf{z}}, A) & \geq \zeta^{\bar{q}+1} \nu(A)
\end{align}
where $\nu$ is the measure defined by
$\nu(A) = \delta \gamma \int 1_{A \cap D_{[l_{1},l_{2}]}} ( \ensuremath{\mathbf{u}})1_{\left\{ f( \gamma \ensuremath{\mathbf{u}}) \leq \gamma^{\bar{q} \alpha/q} l_{1} \right\}}(\ensuremath{\mathbf{u}}) d \ensuremath{\mathbf{u}}$ with $\delta = \min \{ \delta_{\bar{q}}, \delta_{\bar{q}+1} \}$ and $\zeta$ is the constant $\theta_{\ref{lem:smallset}}$ in \eqref{eq:maj-proba} for $t_{0}=\bar{q}+2$.
In addition $\nu(D_{[l_{1},l_{2}]}) > 0$ which implies that the chain $\ensuremath{\mathbf{Z}}$ is aperiodic.
\end{proposition}
\begin{proof}
From Lemma~\ref{lem:smallset}
\begin{align}\label{eq:tchoug}
P^{\bar{q}+1}(\ensuremath{\mathbf{z}},A) & \geq \zeta^{\bar{q}} \delta_{\bar{q}} \gamma \int 1_{A \cap D_{[l_{1},l_{2}]}}(\ensuremath{\mathbf{u}}) 1_{ \{ f(\gamma \ensuremath{\mathbf{u}}) \leq \gamma^{ {\bar{q} \alpha}/{q}} l_{1} \} }(\ensuremath{\mathbf{u}}) d \ensuremath{\mathbf{u}} \enspace, \\ \label{eq:tchoug2}
P^{\bar{q}+2}(\ensuremath{\mathbf{z}},A) & \geq \zeta^{\bar{q}+1} \gamma \delta_{\bar{q}+1} \int 1_{A \cap D_{[l_{1},l_{2}]}}(\ensuremath{\mathbf{u}}) 1_{ \{ f(\gamma \ensuremath{\mathbf{u}}) \leq \gamma^{{(\bar{q}+1) \alpha}/{q}} l_{1} \} }(\ensuremath{\mathbf{u}}) d \ensuremath{\mathbf{u}} \enspace.
\end{align}
Since $\delta \leq \delta_{\bar{q}} $ we find using \eqref{eq:tchoug} that
$$
P^{\bar{q}+1}(\ensuremath{\mathbf{z}},A) \geq \zeta^{\bar{q}} \delta \gamma \int 1_{A \cap D_{[l_{1},l_{2}]}}(\ensuremath{\mathbf{u}}) 1_{ \{ f(\gamma \ensuremath{\mathbf{u}}) \leq \gamma^{{\bar{q} \alpha}/{q}} l_{1} \} }(\ensuremath{\mathbf{u}}) d \ensuremath{\mathbf{u}},
$$
which is exactly \eqref{hom1}. Using in addition that $\delta \leq \delta_{\bar{q}+1}$ and that $1_{ \{ f(\gamma \ensuremath{\mathbf{u}}) \leq \gamma^{{(\bar{q}+1) \alpha}/{q}} l_{1} \} } (\ensuremath{\mathbf{u}}) \geq 1_{ \{ f(\gamma \ensuremath{\mathbf{u}}) \leq \gamma^{{\bar{q} \alpha}/{q}} l_{1} \} }(\ensuremath{\mathbf{u}}) $, which we inject in \eqref{eq:tchoug2}, we find \eqref{hom2}.
Since $\bar{q}/q > 1$, $\nu(D_{[l_{1},l_{2}]}) > 0$. As two consecutive integers have $1$ as g.c.d., the g.c.d.\ of $\bar{q}+1$ and $\bar{q}+2$ is one and hence the chain $\ensuremath{\mathbf{Z}}$ is aperiodic.
\end{proof}
\subsection{Geometric Ergodicity}
In this section we derive the geometric ergodicity of the chain $\ensuremath{\mathbf{Z}}$. Geometric ergodicity will imply the other stability properties needed, namely positivity and Harris recurrence, whose definitions are recalled below. First, let us recall that
a $\sigma$-finite measure $\pi$ on $\mathcal{B}(\mathcal{Z})$ with the property
$$
\pi(A) = \int_{\mathcal{Z}} \pi(d \ensuremath{\mathbf{z}}) P(\ensuremath{\mathbf{z}},A), \, A \in \mathcal{B}(\mathcal{Z})
$$
is called invariant. A $\varphi$-irreducible chain admitting an invariant probability measure is called a \emph{positive} chain.
Harris recurrence is a concept ensuring that a chain visits the state space sufficiently often. It is defined for $\psi$-irreducible chains as follows: a $\psi$-irreducible Markov chain is \emph{Harris-recurrent} if for all $A \subset \mathcal{Z}$ with $\psi(A) > 0$ and for all $\ensuremath{\mathbf{z}} \in \mathcal{Z}$, the chain started from $\ensuremath{\mathbf{z}}$ visits $A$ infinitely often with probability $1$; formally, $P_{\ensuremath{\mathbf{z}}}(\eta_{A} = \infty) = 1$ where $\eta_{A}$ is the \emph{occupation time} of $A$, i.e., $\eta_{A} = \sum_{t=1}^{\infty} 1_{\ensuremath{\mathbf{Z}_{\k}} \in A}$. A (Harris-)recurrent chain admits a unique (up to constant multiples) invariant measure \cite[Theorem~10.0.4]{Tweedie:book1993}.
For a function $V \geq 1$, the $V$-norm for a signed measure $\nu$ is defined as
$$
\| \nu \|_{V} = \sup_{k: |k| \leq V} | \nu(k)| = \sup_{k: |k| \leq V} | \int k(\ensuremath{\mathbf{y}}) \nu( d \ensuremath{\mathbf{y}})| \enspace.
$$
Geometric ergodicity expresses the fact that convergence to the invariant measure takes place at a geometric rate. Different notions of geometric ergodicity exist (see \cite{Tweedie:book1993}) and we will consider the form that appears in the following theorem. For any $V$, $PV$ is defined as $PV(\ensuremath{\mathbf{z}}) := \int P(\ensuremath{\mathbf{z}}, d\ensuremath{\mathbf{y}}) V(\ensuremath{\mathbf{y}})$.
\begin{theorem}
(Geometric Ergodic Theorem \cite[Theorem 15.0.1]{Tweedie:book1993})
Suppose that the chain $\ensuremath{\mathbf{Z}}$ is $\psi$-irreducible and aperiodic. Then the following three conditions are equivalent:
(i) The chain $\ensuremath{\mathbf{Z}}$ is positive recurrent with invariant probability measure $\pi$, and there exist some petite set $C \in \mathcal{B}^{+}(\mathcal{Z}) $, constants $\rho_{C} < 1$, $M_{C} < \infty$ and $P^{\infty}(C) > 0$ such that for all $\ensuremath{\mathbf{z}} \in C$
$$
|P^{t}(\ensuremath{\mathbf{z}},C) - P^{\infty}(C) | \leq M_{C} \rho_{C}^{t}.
$$
(ii) There exists some petite set $C$ and $\kappa > 1$ such that
$$
\sup_{\ensuremath{\mathbf{z}} \in C} E_{\ensuremath{\mathbf{z}}}[\kappa^{\tau_{C}}] < \infty \enspace.
$$
(iii) There exists a petite set $C \in \mathcal{B}(\mathcal{Z})$, constants $b < \infty$, $\vartheta < 1$ and a function $V \geq 1$ finite at some one $\ensuremath{\mathbf{z}}_{0} \in \mathcal{Z}$ satisfying
\begin{equation}\label{eq:driftgerd}
PV(\ensuremath{\mathbf{z}}) \leq \vartheta V(\ensuremath{\mathbf{z}}) + b 1_{C}(\ensuremath{\mathbf{z}}), \ensuremath{\mathbf{z}} \in \mathcal{Z}.
\end{equation}
Any of these three conditions implies that the following two statements hold. The set $S_{V}=\{ \ensuremath{\mathbf{z}}: V(\ensuremath{\mathbf{z}}) < \infty\}$ is absorbing and full, where $V$ is any solution to \eqref{eq:driftgerd}. Furthermore, there exist constants $r>1$, $R < \infty$ such that for any $\ensuremath{\mathbf{z}} \in S_{V}$
\begin{equation}\label{eq:C-Vnorm}
\sum_{t} r^{t} \| P^{t}(\ensuremath{\mathbf{z}},.) - \pi \|_{V} \leq R V(\ensuremath{\mathbf{z}}) \enspace.
\end{equation}
\end{theorem}
The drift operator is defined as
$
\Delta V(\ensuremath{\mathbf{z}}) = PV(\ensuremath{\mathbf{z}}) - V(\ensuremath{\mathbf{z}})
$.
The inequality \eqref{eq:driftgerd} is called a drift condition that can be re-written as
$$
\Delta V(\ensuremath{\mathbf{z}}) \leq \underbrace{(\vartheta - 1)}_{<0} V(\ensuremath{\mathbf{z}}) + b 1_{C}(\ensuremath{\mathbf{z}}) \enspace.
$$
$P$ is then said to admit a drift towards the set $C$.
The previous theorem uses the notion of petite sets, but small sets are also petite sets (see Section~5.5.2 in \cite{Tweedie:book1993}). In the sequel we will prove a geometric drift towards a small set $C$, which hence implies a geometric drift towards a petite set. This will subsequently imply the existence of an invariant probability measure and Harris recurrence
\cite{Tweedie:book1993}.
\subsubsection{Geometric Drift Condition for Positively Homogenous Functions}
In this section we investigate drift conditions for functions that are a monotonically increasing transformation of a positively homogeneous function, i.e., $h = g \circ f$ for $f$ a positively homogeneous function with degree $\alpha$ and $g \in \ensuremath{\mathcal{M}}$.
We have shown that the sets $D_{[l_{1},l_{2}]}$ are small sets for $\ensuremath{\mathbf{Z}}$ (under the assumptions of Lemma~\ref{lem:smallset}). Hence, proving negativity of the drift outside a small set requires proving negativity both for $f(\ensuremath{\mathbf{z}})$ ``large'' and for $f(\ensuremath{\mathbf{z}})$ close to $0$.
We are going to prove that under some regularity assumptions on $f$, the function
\begin{equation*}
V(\ensuremath{\mathbf{z}})= f(\ensuremath{\mathbf{z}}) 1_{\{ f(\ensuremath{\mathbf{z}}) \geq 1 \}} + \frac{1}{f(\ensuremath{\mathbf{z}})} 1_{\{ f(\ensuremath{\mathbf{z}}) < 1 \}}
\end{equation*}
satisfies a geometric drift condition for the \ensuremath{(1+1)}\xspace-ES algorithm provided $\gamma > 1$ and the expected inverse of the step-size change to the $\alpha$ on \emph{linear functions} is strictly smaller than one, which directly translates into:
$$
\frac12 \left( \frac{1}{\gamma^{\alpha}} + \gamma^{\alpha/q} \right) < 1 \enspace.
$$
Given the shape of the small sets proven in Section~\ref{small-sets}, to establish a geometric drift condition it is enough to prove that the limit of $PV/V$ is strictly smaller than $1$ when $\ensuremath{\mathbf{z}}$ goes to $0$ and to $\infty$:
\begin{lemma}\label{lem:CSdrift}
Let $f$ be a positively homogeneous function with degree $\alpha$ such that $f(\ensuremath{\mathbf{x}}) > 0$ for $\ensuremath{\mathbf{x}} \neq 0$ and $f$ is continuous on $\mathbb{R}^{n}_{\neq}$. Assume that $\gamma > 1$. Let $V$ be a function finite at some $\ensuremath{\mathbf{z}}_{0} \in \mathcal{Z}$, with $V \geq 1$,
that satisfies
\begin{equation}\label{eq:toub}
\lim_{\| \ensuremath{\mathbf{z}} \| \to \infty} \frac{PV(\ensuremath{\mathbf{z}})}{V(\ensuremath{\mathbf{z}})} < 1 \mbox{ and } \lim_{\|\ensuremath{\mathbf{z}}\| \to 0} \frac{PV(\ensuremath{\mathbf{z}})}{V(\ensuremath{\mathbf{z}})} < 1 \enspace.
\end{equation}
Then $V$ is a geometric drift in the sense of \eqref{eq:driftgerd} for the \ensuremath{(1+1)}\xspace-ES.
\end{lemma}
\begin{proof}
According to Lemma~\ref{lem:smallset}, the sets $D_{[l_{1},l_{2}]}=\{ \ensuremath{\mathbf{z}} \in \mathcal{Z}, l_{1} \leq f(\ensuremath{\mathbf{z}}) \leq l_{2} \}$ with $0 < l_{1} < l_{2}$ are small sets for $\ensuremath{\mathbf{Z}}$.
The limit \eqref{eq:toub} (left) gives that for all $\epsilon > 0$ small enough, there exists $a_{2}$ such that for $\| \ensuremath{\mathbf{z}}\| \geq a_{2}$, $PV(\ensuremath{\mathbf{z}}) \leq (1 - \epsilon) V(\ensuremath{\mathbf{z}}) $. According to \eqref{eq:LB-UB}, this implies the existence of $l_{2}$ such that for all $\ensuremath{\mathbf{z}}$ with $f(\ensuremath{\mathbf{z}}) \geq l_{2}$, $PV(\ensuremath{\mathbf{z}}) \leq (1 - \epsilon) V(\ensuremath{\mathbf{z}}) $. Similarly, the limit \eqref{eq:toub} (right) gives that for all $\epsilon > 0$ small enough, there exists $a_{1} > 0$ such that for $\| \ensuremath{\mathbf{z}} \| \leq a_{1}$, $PV(\ensuremath{\mathbf{z}}) \leq (1 - \epsilon) V(\ensuremath{\mathbf{z}})$, and hence, again by \eqref{eq:LB-UB}, the existence of $l_{1}$ such that for all $\ensuremath{\mathbf{z}}$ with $f(\ensuremath{\mathbf{z}}) \leq l_{1}$, $PV(\ensuremath{\mathbf{z}}) \leq (1 - \epsilon) V(\ensuremath{\mathbf{z}})$. Hence, taking $\vartheta = 1-\epsilon$ for $\epsilon$ small enough, we have $PV(\ensuremath{\mathbf{z}}) \leq \vartheta V(\ensuremath{\mathbf{z}})$ outside the small set $D_{[l_{1},l_{2}]}$, which yields \eqref{eq:driftgerd} with $C = D_{[l_{1},l_{2}]}$ (the term $b 1_{C}$ accounting for the values of $PV$ on $D_{[l_{1},l_{2}]}$).
\end{proof}
\paragraph{Technical Results}
Before establishing the main proposition of this section, we derive a few technical results.
\begin{lemma}\label{lem:yop}
Assume that $f$ is continuous on $\mathbb{R}^{n}$ and that $f(\ensuremath{\mathbf{n}}) > f(0)$ for all $\ensuremath{\mathbf{n}} \neq 0$. Then
$
\lim_{\| \ensuremath{\mathbf{z}} \| \to 0}
\Pr(f(\ensuremath{\mathbf{z}}+\mathcal{N}) > f(\ensuremath{\mathbf{z}})) = 1
\enspace,
$
where $\mathcal{N} \sim \mathcal{N}(0,I_n)$.
\end{lemma}
\begin{proof}
We express the probability $ \Pr(f(\ensuremath{\mathbf{z}}+\mathcal{N}) > f(\ensuremath{\mathbf{z}})) $ using the density of $\mathcal{N}$:
$$
\Pr(f(\ensuremath{\mathbf{z}}+\mathcal{N}) > f(\ensuremath{\mathbf{z}})) = \int 1_{\{f(\ensuremath{\mathbf{z}}+\ensuremath{\mathbf{n}}) - f(\ensuremath{\mathbf{z}}) > 0\}}(\ensuremath{\mathbf{n}}) p_{{\mathcal{N}}}(\ensuremath{\mathbf{n}}) d\ensuremath{\mathbf{n}} \enspace.
$$
For all $\ensuremath{\mathbf{n}}$ except $\ensuremath{\mathbf{n}}=0$, $ 1_{\{f(\ensuremath{\mathbf{z}}+\ensuremath{\mathbf{n}}) - f(\ensuremath{\mathbf{z}}) > 0\}}(\ensuremath{\mathbf{n}}) $ converges to $1_{\{f( \ensuremath{\mathbf{n}}) - f(0) > 0 \}}(\ensuremath{\mathbf{n}})$ when $\ensuremath{\mathbf{z}}$ goes to $0$ (the function $t \mapsto 1_{\{ t > 0 \}}(t) $ is discontinuous at $t=0$, and for $\ensuremath{\mathbf{n}}=0$ the argument $f(\ensuremath{\mathbf{z}}+\ensuremath{\mathbf{n}})-f(\ensuremath{\mathbf{z}})$ tends to this discontinuity point as $\ensuremath{\mathbf{z}}$ goes to $0$, so we cannot conclude about the limit there). Hence by the dominated convergence theorem, $\Pr(f(\ensuremath{\mathbf{z}}+\mathcal{N}) > f(\ensuremath{\mathbf{z}}))$ converges to $\Pr(f(\mathcal{N}) > f(0)) = 1$.
\end{proof}
\begin{lemma}\label{lem:limitps}
Assume that $f$ is a positively homogeneous function with degree $\alpha$ and $f(\ensuremath{\mathbf{x}}) >0$ for all $\ensuremath{\mathbf{x}} \neq 0$ and assume that $f$ is continuously differentiable. Then
\begin{align}\label{eq:loop1}
& \lim_{\|\ensuremath{\mathbf{z}}\| \to \infty} \Pr ( f(\ensuremath{\mathbf{z}} + \mathcal{N}) > f(\ensuremath{\mathbf{z}}) ) = \frac{1}{2} \\\label{eq:loop2}
&\lim_{\|\ensuremath{\mathbf{z}}\| \to \infty} \Pr \left( \{ f(\ensuremath{\mathbf{z}} + \mathcal{N}) \leq f(\ensuremath{\mathbf{z}}) \} \cap \{ f(\ensuremath{\mathbf{z}}+\mathcal{N}) \geq \gamma^{\alpha} \} \right) = \frac{1}{2}
\end{align}
where $\mathcal{N} \sim \mathcal{N}(0,I_n)$.
\end{lemma}
\begin{proof}
We investigate first the limit \eqref{eq:loop1} and want to prove that
\begin{equation}\label{eq:topp}
\forall \epsilon, \exists\ T > 0, \mbox{ such that for all } \ensuremath{\mathbf{z}} \mbox{ with } \| \ensuremath{\mathbf{z}} \| > T, | \Pr( f(\ensuremath{\mathbf{z}} + \mathcal{N}) > f(\ensuremath{\mathbf{z}}) ) - \frac12 | < \epsilon \enspace.
\end{equation}
Let us fix one arbitrary $\epsilon$ for the rest of the proof and use the homogeneity property to write
$ \Pr ( f(\ensuremath{\mathbf{z}} + \mathcal{N}) > f(\ensuremath{\mathbf{z}}) ) = \Pr( f(\frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{1/\alpha}} + \frac{\mathcal{N}}{f(\ensuremath{\mathbf{z}})^{1/\alpha}} ) > 1 )$. Since $f$ is continuously differentiable, the mean value theorem gives us the existence for all $\ensuremath{\mathbf{n}} \in \mathbb{R}^{\dim}$ of $c_{\ensuremath{\mathbf{n}}} \in [0,1]$ such that
\begin{equation}\label{zoup1}
f\left( \frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} + \frac{\ensuremath{\mathbf{n}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} \right) = \underbrace{f\left(\frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}}\right)}_{=1} + \nabla f \left( \frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} + c_{\ensuremath{\mathbf{n}}} \frac{\ensuremath{\mathbf{n}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} \right) . \frac{\ensuremath{\mathbf{n}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} \enspace.
\end{equation}
The event $\{ f(\frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{\frac{1}{\alpha}}} + \frac{\mathcal{N}}{f(\ensuremath{\mathbf{z}})^{\frac{1}{\alpha}}} ) > 1 \}$ thus equals $\{ \nabla f \left( \frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} + c_{\mathcal{N}} \frac{\mathcal{N}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} \right) . \frac{\mathcal{N}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} > 0 \}$, which is in turn equal to the event $\{ \nabla f \left( \frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} + c_{\mathcal{N}} \frac{\mathcal{N}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} \right) . \mathcal{N} > 0 \} $ (the positive factor $f(\ensuremath{\mathbf{z}})^{-1/\alpha}$ does not change the sign). Let us define the function $g_{\ref{lem:limitps}}: \mathcal{L}_{1} \times [0,1] \to [0,1] $ as follows
\begin{equation}\label{eq:fun1}
g_{\ref{lem:limitps}}(\ensuremath{\mathbf{u}},v) = \Pr \left( \nabla f (\ensuremath{\mathbf{u}} + c_{\mathcal{N}} v \mathcal{N}). \mathcal{N} > 0 \right)
\end{equation}
such that $\Pr(f(\ensuremath{\mathbf{z}} + \mathcal{N}) > f(\ensuremath{\mathbf{z}})) = g_{\ref{lem:limitps}} \left( \frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{1/\alpha}}, \frac{1}{f(\ensuremath{\mathbf{z}})^{1/\alpha}} \right) $. (Given the definition domain of $g_{\ref{lem:limitps}}$, we have assumed that $\ensuremath{\mathbf{z}}$ is large enough so that $f(\ensuremath{\mathbf{z}}) \geq 1$.) We now prove the continuity of $g_{\ref{lem:limitps}}$, which we express in its integral form as
$$
g_{\ref{lem:limitps}}(\ensuremath{\mathbf{u}},v) = \int 1_{\{ \nabla f (\ensuremath{\mathbf{u}} + c_{\ensuremath{\mathbf{n}}} v \ensuremath{\mathbf{n}}). \ensuremath{\mathbf{n}} > 0 \}}(\ensuremath{\mathbf{n}}) p_{{\mathcal{N}}}(\ensuremath{\mathbf{n}}) d \ensuremath{\mathbf{n}} \enspace.
$$
Because we have assumed that the differential of $f$ is continuous, for all $\ensuremath{\mathbf{n}}$ the function $(\ensuremath{\mathbf{u}},v) \mapsto \nabla f (\ensuremath{\mathbf{u}} + c_{\ensuremath{\mathbf{n}}} v \ensuremath{\mathbf{n}}). \ensuremath{\mathbf{n}} $ is continuous. The indicator function has a single discontinuity point, which could be reached if $\nabla f (\ensuremath{\mathbf{u}} + c_{\ensuremath{\mathbf{n}}} v \ensuremath{\mathbf{n}}). \ensuremath{\mathbf{n}} = 0$; we thus exclude the point $\ensuremath{\mathbf{n}}=0$. In addition, by Property~\eqref{prop-grad}, $\nabla f(\ensuremath{\mathbf{u}} + c_{\ensuremath{\mathbf{n}}} v \ensuremath{\mathbf{n}}) \neq 0$ if $\ensuremath{\mathbf{u}} + c_{\ensuremath{\mathbf{n}}} v \ensuremath{\mathbf{n}} \neq 0$. Hence, given $(\ensuremath{\mathbf{u}}_{0},v_{0})$ where we want to prove the continuity of $g_{\ref{lem:limitps}}$, let $\mathfrak{N}=\{ \ensuremath{\mathbf{n}} \,|\, \ensuremath{\mathbf{n}} = \lambda \ensuremath{\mathbf{u}}_{0}, \lambda \in \mathbb{R} \}$ (a set of null measure provided $n \geq 2$); then for all $\ensuremath{\mathbf{n}} \in \mathbb{R}^{n} \backslash \mathfrak{N}$, the function $(\ensuremath{\mathbf{u}},v) \mapsto 1_{\{ \nabla f (\ensuremath{\mathbf{u}} + c_{\ensuremath{\mathbf{n}}} v \ensuremath{\mathbf{n}}). \ensuremath{\mathbf{n}} > 0 \}}(\ensuremath{\mathbf{n}}) p_{{\mathcal{N}}}(\ensuremath{\mathbf{n}})$ is continuous at $(\ensuremath{\mathbf{u}}_{0},v_{0}) $, and by the dominated convergence theorem we deduce the continuity of $g_{\ref{lem:limitps}}$ on $\mathcal{L}_{1} \times [0,1]$. By symmetry of $p_{{\mathcal{N}}}(\ensuremath{\mathbf{n}})$, for all $\ensuremath{\mathbf{u}}$,
$$
g_{\ref{lem:limitps}}(\ensuremath{\mathbf{u}},0)= \int 1_{\{ \nabla f(\ensuremath{\mathbf{u}}).\ensuremath{\mathbf{n}} > 0 \}}(\ensuremath{\mathbf{n}}) p_{{\mathcal{N}}}(\ensuremath{\mathbf{n}}) d\ensuremath{\mathbf{n}} = 1/2 \enspace.
$$
Since $g_{\ref{lem:limitps}}$ is continuous on a compact, it is uniformly continuous and hence there exists $\beta > 0$ such that for all $v \leq \beta$, and $\ensuremath{\mathbf{u}} \in \mathcal{L}_{1}$, $|g_{\ref{lem:limitps}}(\ensuremath{\mathbf{u}},v) - g_{\ref{lem:limitps}}(\ensuremath{\mathbf{u}},0)| \leq \epsilon$ \enspace.
Taking $T' = \frac{1}{\beta}$, we then have that if $f(\ensuremath{\mathbf{z}})^{1/\alpha} \geq T' $, then $|g_{\ref{lem:limitps}}(\frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{1/\alpha}},\frac{1}{f(\ensuremath{\mathbf{z}})^{1/\alpha}}) - \frac{1}{2} | \leq \epsilon$. From Lemma~\ref{lem:too} (lower bound $f(\ensuremath{\mathbf{z}}) \geq m \| \ensuremath{\mathbf{z}} \|^{\alpha}$), if $\| \ensuremath{\mathbf{z}} \| \geq T:=T'/m^{1/\alpha} $, then $f(\ensuremath{\mathbf{z}})^{1/\alpha} \geq T' $ and hence $|g_{\ref{lem:limitps}}(\frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{1/\alpha}},\frac{1}{f(\ensuremath{\mathbf{z}})^{1/\alpha}}) - \frac{1}{2} | \leq \epsilon$. We have thus proven \eqref{eq:topp}, which proves \eqref{eq:loop1} in the case $n \geq 2$. The case $n=1$ is even simpler and boils down to looking at the limit when $v$ goes to $0$ of $\int 1_{\{ f( \ensuremath{\mathbf{u}} + v \ensuremath{\mathbf{n}} ) >1 \}}(\ensuremath{\mathbf{n}}) p(\ensuremath{\mathbf{n}}) d \ensuremath{\mathbf{n}} = \int_{\mathbb{R}^{+}} 1_{\{ f'(\ensuremath{\mathbf{u}}) + o(1) > 0 \}}(\ensuremath{\mathbf{n}}) p(\ensuremath{\mathbf{n}}) d \ensuremath{\mathbf{n}} + \int_{\mathbb{R}^{-}} 1_{\{ - f'(\ensuremath{\mathbf{u}}) + o(1) > 0 \}}(\ensuremath{\mathbf{n}}) p(\ensuremath{\mathbf{n}}) d \ensuremath{\mathbf{n}}$. Using the dominated convergence theorem, this latter limit equals $1/2$.\\
\noindent In a similar manner we investigate the limit \eqref{eq:loop2}. Using \eqref{zoup1}, we define in a similar manner on $\mathcal{L}_{1} \times [0,1[ $ the function
$$
h_{\ref{lem:limitps}}(\ensuremath{\mathbf{u}},v) = \int 1_{\{ \nabla f ( \ensuremath{\mathbf{u}} + c_{\ensuremath{\mathbf{n}}} v \ensuremath{\mathbf{n}}). \ensuremath{\mathbf{n}} < 0 \}}(\ensuremath{\mathbf{n}}) 1_{\{ f( \ensuremath{\mathbf{u}}/v + \ensuremath{\mathbf{n}}) \geq \gamma^{\alpha} \}}(\ensuremath{\mathbf{n}}) p_{{\mathcal{N}}}(\ensuremath{\mathbf{n}}) d \ensuremath{\mathbf{n}} \enspace.
$$
Similarly we prove the continuity of $h_{\ref{lem:limitps}}$ on $\mathcal{L}_{1} \times ]0,1[ $ and extend it by continuity at $v = 0$ using the fact that for all $\ensuremath{\mathbf{n}}$, $\lim_{v \to 0} 1_{\{ f( \ensuremath{\mathbf{u}}/v + \ensuremath{\mathbf{n}}) \geq \gamma^{\alpha} \}} = 1 $. We then find that
$$
h_{\ref{lem:limitps}}(\ensuremath{\mathbf{u}},0) = \int 1_{\{ \nabla f ( \ensuremath{\mathbf{u}}). \ensuremath{\mathbf{n}} < 0 \}}(\ensuremath{\mathbf{n}}) p_{{\mathcal{N}}}(\ensuremath{\mathbf{n}}) d \ensuremath{\mathbf{n}} = \frac12 \enspace.
$$
As in the previous case, we prove that for all $\epsilon$ there exists $T>0$ such that
$$
\mbox{for all } \| \ensuremath{\mathbf{z}} \| > T, | \Pr \left( \{ f(\ensuremath{\mathbf{z}} + \mathcal{N}) \leq f(\ensuremath{\mathbf{z}}) \} \cap \{ f(\ensuremath{\mathbf{z}}+\mathcal{N}) \geq \gamma^{\alpha} \} \right) - \frac12 | < \epsilon
$$
which proves \eqref{eq:loop2}. We omit the details as the proof follows the same lines as above.
\end{proof}
\begin{lemma}\label{lem-integra-V}
Let $f$ be a positively homogeneous function with degree $\alpha$ satisfying $f(\ensuremath{\mathbf{x}}) > 0$ for all $\ensuremath{\mathbf{x}} \neq 0$. Assume that $f$ is continuous on $\mathbb{R}^{n}_{\neq}$. Let $\mathcal{N}$ denote a standard multivariate normal distribution. Then for all $\ensuremath{\mathbf{z}}$
\begin{align}\label{eq:u2-1}
& \mbox{If } \alpha \leq 1, E \left[ f(\ensuremath{\mathbf{z}}+\mathcal{N}) \right] \leq M (\| \ensuremath{\mathbf{z}} \| + E(\| \mathcal{N} \|))^{\alpha} < + \infty \enspace, \\\label{eq:u2-1prime}
& \mbox{If } \alpha \geq 1, E \left[ f(\ensuremath{\mathbf{z}}+\mathcal{N}) \right] \leq M (\| \ensuremath{\mathbf{z}} \| + E[\| \ensuremath{\mathcal{N}} \|^{\alpha}]^{1/\alpha})^{\alpha} < + \infty \enspace.
\end{align}
If in addition $\alpha < n$, then there exists a constant $c_{\ref{lem-integra-V}}$ such that for all $\ensuremath{\mathbf{z}} $
\begin{equation}\label{eq:u2-2}
E \left[ \frac{1}{f(\ensuremath{\mathbf{z}}+ \mathcal{N})} \right] < c_{\ref{lem-integra-V}} \enspace.
\end{equation}
Consequently if $\alpha < n$, the function $V(\ensuremath{\mathbf{z}})= f(\ensuremath{\mathbf{z}}) 1_{\{ f(\ensuremath{\mathbf{z}}) \geq 1 \}} + \frac{1}{f(\ensuremath{\mathbf{z}})} 1_{\{ f(\ensuremath{\mathbf{z}}) < 1 \}}$ satisfies for all $\ensuremath{\mathbf{z}} \in \mathbb{R}^{n}_{\neq}$
\begin{equation}\label{eq:u2-3}
\int V(\ensuremath{\mathbf{y}}) P(\ensuremath{\mathbf{z}}, d \ensuremath{\mathbf{y}}) < \infty \enspace.
\end{equation}
\end{lemma}
\begin{proof}
We start by proving \eqref{eq:u2-1} and \eqref{eq:u2-1prime}. Note first that $E[\| \ensuremath{\mathcal{N}} \|^{\alpha}] < \infty$ for all $\alpha > 0$ (a consequence of the form of the density of a multivariate normal distribution). According to Lemma~\ref{lem:too}, $f(\ensuremath{\mathbf{z}}+ \mathcal{N}) \leq M \| \ensuremath{\mathbf{z}} + \mathcal{N} \|^{\alpha}$. From the triangle inequality we obtain
\begin{equation}\label{eq:TIne}
\| \ensuremath{\mathbf{z}} + \mathcal{N} \|^{\alpha} \leq ( \| \ensuremath{\mathbf{z}} \| + \| \mathcal{N} \|)^{\alpha} \enspace.
\end{equation}
For $\alpha < 1$, the map $x \in [0,+\infty[ \mapsto x^{\alpha}$ being concave, we obtain from Jensen's inequality that $E[ ( \| \ensuremath{\mathbf{z}} \| + \| \mathcal{N} \|)^{\alpha}] \leq (\| \ensuremath{\mathbf{z}} \| + E[\| \mathcal{N} \|])^{\alpha} $, which proves \eqref{eq:u2-1} (the case $\alpha=1$ holding with equality).
For $\alpha \geq 1$, we can apply the Minkowski inequality, which states that $E[( \| \ensuremath{\mathbf{z}} \| + \| \mathcal{N} \|)^{\alpha}]^{1/\alpha} \leq E[\| \ensuremath{\mathbf{z}} \|^{\alpha}]^{1/\alpha} + E[\| \ensuremath{\mathcal{N}} \|^{\alpha}]^{1/\alpha} = \| \ensuremath{\mathbf{z}} \| + E[\| \ensuremath{\mathcal{N}} \|^{\alpha}]^{1/\alpha}$. Hence $E[ ( \| \ensuremath{\mathbf{z}} \| + \| \mathcal{N} \|)^{\alpha}] \leq
(\| \ensuremath{\mathbf{z}} \| + E[\| \ensuremath{\mathcal{N}} \|^{\alpha}]^{1/\alpha})^{\alpha} $. Overall, using the upper bound on $f(\ensuremath{\mathbf{z}}+ \mathcal{N})$ and \eqref{eq:TIne}, we find \eqref{eq:u2-1prime}.
We now prove \eqref{eq:u2-2}. In the sequel we write integrals of positive functions that are possibly infinite; we actually prove that the functions are integrable (so that the integrals are finite) and that the integral admits a bound independent of $\ensuremath{\mathbf{z}}$. Using Lemma~\ref{lem:too},
\begin{equation}\label{youm1}
A_{\ref{lem-integra-V}}=E \left[ \frac{1}{f(\ensuremath{\mathbf{z}}+ \mathcal{N})} \right] \leq E \left[ \frac{1}{m \| \ensuremath{\mathbf{z}} + \mathcal{N} \|^{\alpha}} \right]
= \frac{1}{m} \int_{\mathbb{R}^{n}} \frac{1}{\| \ensuremath{\mathbf{z}} + \ensuremath{\mathbf{y}}\|^{\alpha}} p_{{\mathcal{N}}}(\ensuremath{\mathbf{y}}) d\ensuremath{\mathbf{y}}
\end{equation}
Using the change of variables $\ensuremath{\mathbf{\tilde{y}}} = \ensuremath{\mathbf{z}} + \ensuremath{\mathbf{y}}$,
\begin{equation}\label{youm2}
\int_{\mathbb{R}^{n}} \frac{1}{\| \ensuremath{\mathbf{z}} + \ensuremath{\mathbf{y}}\|^{\alpha}} p_{{\mathcal{N}}}(\ensuremath{\mathbf{y}}) d\ensuremath{\mathbf{y}} = \int_{\mathbb{R}^{n}} \frac{1}{\| \ensuremath{\mathbf{\tilde{y}}} \|^{\alpha}} p_{{\mathcal{N}}}( \ensuremath{\mathbf{\tilde{y}}} - \ensuremath{\mathbf{z}}) d \ensuremath{\mathbf{\tilde{y}}} \enspace.
\end{equation}
The previous integral is possibly infinite because the function $\ensuremath{\mathbf{\tilde{y}}} \mapsto \frac{1}{\| \ensuremath{\mathbf{\tilde{y}}} \|^{\alpha}}$ has a singularity at zero. Let us study the integrability close to zero, i.e., investigate
$$
\int_{B(0,1)} \frac{1}{\| \ensuremath{\mathbf{y}} \|^{\alpha}} p_{{\mathcal{N}}}( \ensuremath{\mathbf{y}} - \ensuremath{\mathbf{z}}) d \ensuremath{\mathbf{y}} \leq K_{\ref{lem-integra-V}} \int_{B(0,1)} \frac{1}{\| \ensuremath{\mathbf{y}} \|^{\alpha}} d \ensuremath{\mathbf{y}}
$$
where $K_{\ref{lem-integra-V}} $ is an upper bound on the density $p_{{\mathcal{N}}}$ (hence independent of $\ensuremath{\mathbf{z}}$).
Using spherical coordinates for $n\geq 2$
\begin{multline*}
\int_{B(0,1)} \frac{1}{\| \ensuremath{\mathbf{y}} \|^{\alpha}} d \ensuremath{\mathbf{y}} = \int_{0}^{1} \frac{r^{n-1}}{r^{\alpha}} d r \prod_{i=1}^{n-2} \int_{0}^{\pi} \sin^{n-1-i}(\varphi_{i}) d \varphi_{i} \int_{0}^{2 \pi} d \varphi_{n-1} \\ \leq 2 (\pi)^{n-1} \int_{0}^{1} r^{n-1-\alpha} d r \enspace.
\end{multline*}
The latter integral is finite if and only if $n-1-\alpha > -1$, i.e., $\alpha < n$. For $n=1$ we directly obtain $\int_{-1}^{1} \frac{1}{| \ensuremath{\mathbf{y}} |^{\alpha}} d \ensuremath{\mathbf{y}} < \infty $ if $\alpha < 1$.
To prove that $A_{\ref{lem-integra-V}}$ is bounded for all $\ensuremath{\mathbf{z}}$ by a constant independent of $\ensuremath{\mathbf{z}}$, we write
\begin{align*}
\int_{\mathbb{R}^{n}} \frac{1}{\| \ensuremath{\mathbf{y}} \|^{\alpha}} p_{{\mathcal{N}}}(\ensuremath{\mathbf{y}} - \ensuremath{\mathbf{z}}) d \ensuremath{\mathbf{y}} & \leq K_{\ref{lem-integra-V}} \int_{\mathbb{R}^{n}} \frac{1}{\| \ensuremath{\mathbf{y}} \|^{\alpha}} 1_{\{ \| \ensuremath{\mathbf{y}} \| \leq 1 \}} d \ensuremath{\mathbf{y}} + \int_{\mathbb{R}^{n}} \frac{1}{\| \ensuremath{\mathbf{y}} \|^{\alpha}} 1_{\{ \| \ensuremath{\mathbf{y}} \| \geq 1 \}} p_{{\mathcal{N}}}(\ensuremath{\mathbf{y}} - \ensuremath{\mathbf{z}}) d \ensuremath{\mathbf{y}} \\
& \leq K_{\ref{lem-integra-V}} \int_{B(0,1)} \frac{1}{\| \ensuremath{\mathbf{y}} \|^{\alpha}} d \ensuremath{\mathbf{y}} + \underbrace{\int_{\mathbb{R}^{n}} p_{{\mathcal{N}}}(\ensuremath{\mathbf{y}} - \ensuremath{\mathbf{z}}) d \ensuremath{\mathbf{y}}}_{=1}
\end{align*}
Hence $\int_{\mathbb{R}^{n}} \frac{1}{\| \ensuremath{\mathbf{y}} \|^{\alpha}} p_{{\mathcal{N}}}(\ensuremath{\mathbf{y}} - \ensuremath{\mathbf{z}}) d \ensuremath{\mathbf{y}}$ is bounded by a constant independent of $\ensuremath{\mathbf{z}}$ if $\alpha < n$. Using this with \eqref{youm1} and \eqref{youm2} proves that $A_{\ref{lem-integra-V}}$ is bounded by a constant independent of $\ensuremath{\mathbf{z}}$.\\
Finally we prove \eqref{eq:u2-3}. Using the expression of the Markov chain given in \eqref{eq:transitionZ} and denoting by $\mathcal{N}$ a standard normal distribution, we find that $\int V(\ensuremath{\mathbf{y}}) P(\ensuremath{\mathbf{z}},d\ensuremath{\mathbf{y}}) =$
\begin{multline*}
E\left[ f \left( \frac{\ensuremath{\mathbf{z}}+\mathcal{N}}{\gamma} \right) 1_{\{ f(\ensuremath{\mathbf{z}}+\mathcal{N}) \leq f(\ensuremath{\mathbf{z}}) \}} 1_{\{f(\frac{\ensuremath{\mathbf{z}}+ \mathcal{N}}{\gamma}) \geq 1 \}} \right]
+ E \left[ \frac{1_{\{f(\ensuremath{\mathbf{z}} + \mathcal{N}) \leq f(\ensuremath{\mathbf{z}}) \}} 1_{\{ f((\ensuremath{\mathbf{z}} + \mathcal{N})/\gamma ) < 1 \}}}{f\left(\frac{\ensuremath{\mathbf{z}}+\mathcal{N}}{\gamma}\right)} \right] + \\
E \left[ f\left(\frac{\ensuremath{\mathbf{z}}}{\gamma^{-1/q}}\right) 1_{\{ f(\ensuremath{\mathbf{z}}+\mathcal{N}) > f(\ensuremath{\mathbf{z}}) \}} 1_{\{f(\ensuremath{\mathbf{z}}/\gamma^{-\frac1q}) \geq 1 \}} \right]
+
E \left[ \frac{1_{\{f(\ensuremath{\mathbf{z}} + \mathcal{N}) > f(\ensuremath{\mathbf{z}}) \}} 1_{\{ f(\ensuremath{\mathbf{z}}/\gamma^{-1/q}) < 1 \}}}{f(\ensuremath{\mathbf{z}}/\gamma^{-1/q})} \right] \\
\leq \frac{1}{\gamma^{\alpha}} E \left[ f(\ensuremath{\mathbf{z}}+ \mathcal{N}) \right] + \frac{1}{\gamma^{-\frac{\alpha}{q}}} f(\ensuremath{\mathbf{z}}) + \gamma^{\alpha} E \left[ \frac{1}{f(\ensuremath{\mathbf{z}}+ \mathcal{N})} \right] + \gamma^{-\frac{\alpha}{q}} \frac{1}{f(\ensuremath{\mathbf{z}})} \enspace.
\end{multline*}
Using now \eqref{eq:u2-1}, \eqref{eq:u2-1prime} and \eqref{eq:u2-2} in the previous inequality we find \eqref{eq:u2-3}.
\end{proof}
\paragraph{Sufficient conditions for geometric ergodicity}
We are now ready to establish the main result of this section, namely a sufficient condition for geometric ergodicity. We need to make some further assumptions on the objective function, which we gather in Assumption~\ref{ass:f2}.
\begin{assumption}\label{ass:f2}
The function $f: \mathbb{R}^{n} \to [0,+\infty[ $ satisfies Assumptions~\ref{ass:f}, i.e., is a positively homogeneous function with degree $\alpha$ and $f(\ensuremath{\mathbf{x}}) > 0$ for all $\ensuremath{\mathbf{x}} \neq 0$.\\
The function $f$ is continuously differentiable and $\alpha < n$.
There exist $k \in \ensuremath{{\mathbb{{N}}}}_{>}$ and $c_{0}, \ldots, c_{k}$ in $\mathbb{R}$ such that for all $\tilde{\ensuremath{\mathbf{z}}} \in \mathcal{L}_{1}$, $\ensuremath{\mathbf{y}} \in \mathbb{R}^{n}$, $c_{\tilde{\ensuremath{\mathbf{z}}}}, c_{\ensuremath{\mathbf{y}}} \in [0,1]$
\begin{equation}\label{eq:assOPOmajgradient}
\| \nabla f(\tilde{\ensuremath{\mathbf{z}}}+ c_{\tilde{\ensuremath{\mathbf{z}}}} c_{\ensuremath{\mathbf{y}}} \ensuremath{\mathbf{y}}) \|^{2} \leq c_{0} + \sum_{i=1}^{k} c_{i} \| \ensuremath{\mathbf{y}} \|^{i} \enspace.
\end{equation}
\end{assumption}
In the next lemma, we verify that convex-quadratic functions satisfy the previous assumptions if $n \geq 3$.
\begin{lemma}\label{lem:CQ}
Let $f(\ensuremath{\mathbf{x}}) = \frac12 \ensuremath{\mathbf{x}}^{T} H \ensuremath{\mathbf{x}}$ with $H$ symmetric positive definite. It satisfies Assumptions~\ref{ass:f2} if $n \geq 3$.
\end{lemma}
\begin{proof}
The function $f$ is positively homogeneous with degree $2$; hence, to satisfy the assumption $\alpha < n$, we need $n \geq 3$. Moreover, it is continuously differentiable and satisfies $\nabla f(\ensuremath{\mathbf{x}}) = H \ensuremath{\mathbf{x}} $. Hence $\| \nabla f (\tilde{\ensuremath{\mathbf{z}}}+ c_{\tilde{\ensuremath{\mathbf{z}}}} c_{\ensuremath{\mathbf{y}}} \ensuremath{\mathbf{y}}) \|^{2} \leq | \! | \! | H | \! | \! |^{2} \| \tilde{\ensuremath{\mathbf{z}}}+ c_{\tilde{\ensuremath{\mathbf{z}}}} c_{\ensuremath{\mathbf{y}}} \ensuremath{\mathbf{y}} \|^{2} \leq K ( \| \tilde{\ensuremath{\mathbf{z}}} \| + \| \ensuremath{\mathbf{y}} \| )^{2} \leq K (K_{1} + \| \ensuremath{\mathbf{y}} \|)^{2} = K K_{1}^{2} + 2 K K_{1} \| \ensuremath{\mathbf{y}} \| + K \| \ensuremath{\mathbf{y}} \|^{2} $, where $| \! | \! | . | \! | \! |$ is the matrix norm induced by the euclidean norm $\| . \|$, $K$ is a bound for $| \! | \! | H | \! | \! |^{2}$ and $K_{1}$ a bound for the norm of the elements of $\bar{\mathcal{L}}_{1}$. Hence \eqref{eq:assOPOmajgradient} is satisfied with $k=2$.
\end{proof}
We are now ready to state the main result of this section.
\begin{theorem}\label{prop:OPO}
Consider $(\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k})_{t \in \ensuremath{{\mathbb{{N}}}}}$, a (1+1)-ES with generalized one-fifth success rule as defined in \eqref{eq:sampling}, \eqref{eq:update-mean} and \eqref{eq:update-ss} optimizing $h = g \circ f$ where $g \in \ensuremath{\mathcal{M}}$ and $f: \mathbb{R}^{n} \to [0,+\infty[$ satisfies Assumptions~\ref{ass:f2}. Let $\ensuremath{\mathbf{Z}}=(\ensuremath{\mathbf{Z}_{\k}}=\ensuremath{\mathbf{X}_\k}/\ensuremath{\sigma_\k})_{t \in \ensuremath{{\mathbb{{N}}}}}$ be the Markov chain associated to the (1+1)-ES optimizing $h$ defined in \eqref{eq:transitionZ}. Then the function
\begin{equation}\label{eq:driftOPO}
V(\ensuremath{\mathbf{z}})= f(\ensuremath{\mathbf{z}}) 1_{\{ f(\ensuremath{\mathbf{z}}) \geq 1 \}} + \frac{1}{f(\ensuremath{\mathbf{z}})} 1_{\{ f(\ensuremath{\mathbf{z}}) < 1 \}}
\end{equation}
satisfies a drift condition for geometric ergodicity (in the sense of \eqref{eq:driftgerd}) for the Markov chain $\ensuremath{\mathbf{Z}}$ if $\gamma > 1$ and
\begin{equation}\label{eq:inc-linear}
\frac12 \left( \frac{1}{\gamma^{\alpha}} + \gamma^{\alpha/q} \right) < 1 \enspace.
\end{equation}
\end{theorem}
The theorem calls for a few remarks. The LHS of \eqref{eq:inc-linear} corresponds to the expectation of the inverse of the step-size change to the $\alpha$ on a linear function (i.e., $f(\ensuremath{\mathbf{x}}) = \ensuremath{\mathbf{a}} . \ensuremath{\mathbf{x}} + b $ for $\ensuremath{\mathbf{a}} \in \mathbb{R}^{\dim}$ and $b \in \mathbb{R}$). Using the notation $\ensuremath{\eta^{\star}}$ introduced in \eqref{eq:sschangeETA} for the step-size change, condition \eqref{eq:inc-linear} requires that on linear functions
\begin{equation}\label{eq:inc-linearETA}
E[ 1/({\ensuremath{\eta^{\star}}})_{\rm linear}^{\alpha}] < 1 \enspace,
\end{equation}
which translates into a step-size increase on linear functions. This condition is similar to the one found to prove geometric ergodicity for the $(1,\lambda)$-ES with self-adaptation \cite{TCSAnne04}. For an algorithm without elitist selection, condition \eqref{eq:inc-linearETA} is the only one formulated on the step-size change to guarantee geometric ergodicity: it ensures that the limit of $PV/V$ is smaller than $1$ for $\ensuremath{\mathbf{z}}$ to infinity (see the proof). For the $(1+1)$-ES another condition appears, due to the limit of $PV/V$ at zero (see the details in the proof), reflected in the fact that $\gamma^{-\alpha/q} < 1$, i.e., the step-size should decrease in case of failure. This translates for the one-fifth success rule into $\gamma > 1$. Note that we also need the condition $\gamma > 1$ for the irreducibility, the small sets and the aperiodicity.
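As a quick numerical illustration (indicative values only), take the classical one-fifth success rule setting $q=4$ and a positively homogeneous function of degree $\alpha = 2$: condition \eqref{eq:inc-linear} then reads
$$
\frac12 \left( \gamma^{-2} + \gamma^{1/2} \right) < 1 \enspace.
$$
It is satisfied for instance for $\gamma = 2$, since $\frac12 ( 2^{-2} + \sqrt{2} ) \approx 0.83 < 1$, whereas it fails for too large an increase factor, e.g., $\gamma = 4$ gives $\frac12 ( 4^{-2} + 2 ) \approx 1.03$.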
\begin{proof}
Using the definition of $V$ we can write $PV(\ensuremath{\mathbf{z}}) = E[V(\ensuremath{\mathbf{Z}_{\k+1}}) | \ensuremath{\mathbf{Z}_{\k}}=\ensuremath{\mathbf{z}}]$ as
$$
PV(\ensuremath{\mathbf{z}}) =E\left[f(\ensuremath{\mathbf{Z}_{\k+1}})1_{\{ f(\ensuremath{\mathbf{Z}_{\k+1}}) \geq 1\}} + \frac{1}{f(\ensuremath{\mathbf{Z}_{\k+1}})} 1_{\{ f(\ensuremath{\mathbf{Z}_{\k+1}}) < 1 \}} | \ensuremath{\mathbf{Z}_{\k}}=\ensuremath{\mathbf{z}} \right] \enspace.
$$
According to Lemma~\ref{lem:CSdrift}, we need to study the limits of $PV(\ensuremath{\mathbf{z}})/V(\ensuremath{\mathbf{z}})$ as $\| \ensuremath{\mathbf{z}} \|$ goes to $0$ and to $\infty$.
\paragraph*{Investigating the limit of $PV/V$ for $\ensuremath{\mathbf{z}}$ to infinity}
We first investigate the limit for $\| \ensuremath{\mathbf{z}} \| $ to infinity and consider $\ensuremath{\mathbf{z}}$ large enough, in particular we can assume that
\begin{equation}\label{eq:case1}
f(\ensuremath{\mathbf{z}}) \geq \max{\{1,\gamma^{\alpha/q}\}}
\end{equation}
and hence $V(\ensuremath{\mathbf{z}}) = f(\ensuremath{\mathbf{z}})$. Then
\begin{equation}\label{eq:driftzlarge}
\frac{PV(\ensuremath{\mathbf{z}})}{V(\ensuremath{\mathbf{z}})} =
\underbrace{E\left[ \frac{f(\ensuremath{\mathbf{Z}_{\k+1}})1_{\{ f(\ensuremath{\mathbf{Z}_{\k+1}}) \geq 1 \}} }{f(\ensuremath{\mathbf{z}})} | \ensuremath{\mathbf{Z}_{\k}}=\ensuremath{\mathbf{z}} \right]}_{A(\ensuremath{\mathbf{z}})}
+
\underbrace{\frac{E\left[\frac{1}{f(\ensuremath{\mathbf{Z}_{\k+1}})} 1_{\{f(\ensuremath{\mathbf{Z}_{\k+1}}) \leq 1 \}} | \ensuremath{\mathbf{Z}_{\k}}=\ensuremath{\mathbf{z}} \right]}{f(\ensuremath{\mathbf{z}})}}_{B(\ensuremath{\mathbf{z}})}
\end{equation}
\noindent Throughout this proof we denote by $\mathcal{N}$ the multivariate normal distribution used at iteration $t$ to sample a new candidate solution. Namely, the update of $\ensuremath{\mathbf{Z}_{\k}}$ reads:
$$
\boxed{\ensuremath{\mathbf{Z}_{\k+1}} = \frac{\ensuremath{\mathbf{Z}_{\k}}+ \mathcal{N}}{\gamma} 1_{\{ f(\ensuremath{\mathbf{Z}_{\k}} + \mathcal{N}) \leq f(\ensuremath{\mathbf{Z}_{\k}}) \}} + \frac{\ensuremath{\mathbf{Z}_{\k}}}{\gamma^{-1/q}} 1_{\{ f(\ensuremath{\mathbf{Z}_{\k}} + \mathcal{N}) > f(\ensuremath{\mathbf{Z}_{\k}}) \}}} \enspace .
$$
Let us first investigate the term $A(\ensuremath{\mathbf{z}})$ introduced in \eqref{eq:driftzlarge}. It is equal to
\begin{align*}
& E \left[ \frac{f(\frac{\ensuremath{\mathbf{z}} + \mathcal{N}}{\gamma})}{f(\ensuremath{\mathbf{z}})} 1_{\{ f(\ensuremath{\mathbf{z}} + \mathcal{N}) \leq f(\ensuremath{\mathbf{z}}) \}} 1_{\{ f(\frac{\ensuremath{\mathbf{z}} + \mathcal{N}}{\gamma}) \geq 1 \}} \right] + E \left[ \frac{f(\frac{\ensuremath{\mathbf{z}}}{\gamma^{-\frac1q}})}{f(\ensuremath{\mathbf{z}})} 1_{\{ f(\ensuremath{\mathbf{z}} + \mathcal{N}) > f(\ensuremath{\mathbf{z}}) \}} \underbrace{1_{\{ f(\frac{\ensuremath{\mathbf{z}}}{\gamma^{-1/q}}) \geq 1\}}}_{=1 \text{ (see} \eqref{eq:case1})} \right] \\
& = \frac{1}{\gamma^{\alpha}} \underbrace{E \left[ f\left(\frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}}+ \frac{\mathcal{N}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}}\right) 1_{\{ f(\ensuremath{\mathbf{z}} + \mathcal{N}) \leq f(\ensuremath{\mathbf{z}}) \}} 1_{\{ f(\frac{\ensuremath{\mathbf{z}} + \mathcal{N}}{\gamma}) \geq 1 \}} \right]}_{A_{1}} + \gamma^{\alpha/q} \underbrace{E \left[ 1_{\{ f(\ensuremath{\mathbf{z}} + \mathcal{N}) > f(\ensuremath{\mathbf{z}}) \}} \right]}_{A_{2}} \enspace.
\end{align*}
Using Lemma~\ref{lem:limitps} we obtain that $A_{2}$ converges to $1/2$ when $\ensuremath{\mathbf{z}}$ goes to $\infty$. Let us now handle the term $A_{1}$. Using the mean value theorem we have the existence of $c_{\mathcal{N}} \in [0,1]$ such that
\begin{equation}\label{eq:withmvt}
f\left( \frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} + \frac{\mathcal{N}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} \right) = \underbrace{f\left(\frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}}\right)}_{=1} + \nabla f \left( \frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} + c_{\mathcal{N}} \frac{ \mathcal{N}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} \right) . \frac{ \mathcal{N}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} \enspace.
\end{equation}
Hence the term $A_{1}$ can be decomposed in two terms:
\begin{multline}
A_{1} = \underbrace{E \left[
1_{\{ f(\ensuremath{\mathbf{z}} + \mathcal{N}) \leq f(\ensuremath{\mathbf{z}}) \}} 1_{\{ f(\frac{\ensuremath{\mathbf{z}} + \mathcal{N}}{\gamma}) \geq 1 \}} \right]}_{A_{11}}
+\\
\frac{1}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} \underbrace{ E \left[\nabla f \left( \frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} + c_{\mathcal{N}} \frac{ \mathcal{N}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} \right) . \mathcal{N} 1_{\{ f(\ensuremath{\mathbf{z}} + \mathcal{N}) \leq f(\ensuremath{\mathbf{z}}) \}} 1_{\{ f(\frac{\ensuremath{\mathbf{z}} + \mathcal{N}}{\gamma}) \geq 1 \}} \right]}_{A_{12}} \enspace.
\end{multline}
The term $A_{11}$ equals
$$
A_{11} = \int 1_{\{ f(\ensuremath{\mathbf{z}} + \ensuremath{\mathbf{n}}) \leq f(\ensuremath{\mathbf{z}}) \}}(\ensuremath{\mathbf{n}}) 1_{\{ f(\frac{\ensuremath{\mathbf{z}} + \ensuremath{\mathbf{n}}}{\gamma}) \geq 1 \}}(\ensuremath{\mathbf{n}}) p_{{\mathcal{N}}}(\ensuremath{\mathbf{n}}) d\ensuremath{\mathbf{n}} \enspace.
$$
According to Lemma~\ref{lem:limitps}, the term $A_{11}$ converges to $1/2$ when $\ensuremath{\mathbf{z}}$ goes to $\infty$. Note that for all $\ensuremath{\mathbf{n}}$, the indicator $1_{\{ f(\frac{\ensuremath{\mathbf{z}} + \ensuremath{\mathbf{n}}}{\gamma}) \geq 1 \}}(\ensuremath{\mathbf{n}})$ converges to $1$ for $\ensuremath{\mathbf{z}}$ to $\infty$.
We now take care of the term $A_{12}$ and prove that $| A_{12}|$ is bounded which will imply that $A_{12} \frac{1}{f(\ensuremath{\mathbf{z}})^{1/\alpha}}$ converges to zero when $\ensuremath{\mathbf{z}}$ goes to $\infty$.
\begin{multline*}
|A_{12}| \leq E \left[ | \nabla f \left( \frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} + c_{\mathcal{N}} \frac{ \mathcal{N}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} \right) . \mathcal{N} | \right] \leq E \left[ \|\nabla f \left( \frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} + c_{\mathcal{N}} \frac{ \mathcal{N}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} \right) \| \| \mathcal{N} \| \right] \\
\leq E \left[ \|\nabla f \left( \frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} + c_{\mathcal{N}} \frac{ \mathcal{N}}{f(\ensuremath{\mathbf{z}})^{\frac1\alpha}} \right) \|^{2} \right]^{\frac12}
\underbrace{E \left[
\| \mathcal{N} \|^{2} \right]^{\frac12}}_{=\sqrt{n}} \nonumber \enspace .
\end{multline*}
For the last two inequalities we have applied the Cauchy-Schwarz inequality. We denote $\tilde{\z}=\frac{\ensuremath{\mathbf{z}}}{f(\ensuremath{\mathbf{z}})^{1/\alpha}}$ and $c_{\tilde{\z}} = \frac{1}{f(\ensuremath{\mathbf{z}})^{1/\alpha}} $ and apply \eqref{eq:assOPOmajgradient}. Hence we find
$$
E \left[ \|\nabla f \left( \tilde{\z} + c_{\mathcal{N}} c_{\tilde{\z}} \mathcal{N} \right) \|^{2} \right]
\leq
c_{0} + \sum_{i=1}^{k} c_{i} E[ \| \mathcal{N} \|^{i} ]
=: M_{\ref{prop:OPO}}
$$
and it follows that $|A_{12}| \leq \sqrt{M_{\ref{prop:OPO}} n} $. Hence
$$
\lim_{\| \ensuremath{\mathbf{z}} \| \to \infty} A(\ensuremath{\mathbf{z}}) = \frac12 \left( \frac{1}{\gamma^{\alpha}} + \gamma^{\alpha/q} \right) \enspace.
$$
We investigate now the term $B(\ensuremath{\mathbf{z}})$ defined in \eqref{eq:driftzlarge}.
$$
B(\ensuremath{\mathbf{z}}) = \frac{1}{f(\ensuremath{\mathbf{z}})} \underbrace{E \left[ \frac{1_{\{f(\ensuremath{\mathbf{z}}+\mathcal{N})\leq f(\ensuremath{\mathbf{z}}) \}} 1_{\{ f(\frac{\ensuremath{\mathbf{z}}+ \mathcal{N}}{\gamma}) \leq 1 \}}}{f(\frac{\ensuremath{\mathbf{z}}+ \mathcal{N}}{\gamma})} \right]}_{B_{11}}
+ \frac{1}{f(\ensuremath{\mathbf{z}})} \underbrace{E \left[ \frac{1_{\{f(\ensuremath{\mathbf{z}}+\mathcal{N})> f(\ensuremath{\mathbf{z}}) \}} 1_{\{ f(\ensuremath{\mathbf{z}}/\gamma^{-\frac1q}) \leq 1 \}} }{f(\ensuremath{\mathbf{z}}/\gamma^{-\frac1q})} \right]}_{=0 \mbox{ as } f(\ensuremath{\mathbf{z}}/\gamma^{-\frac1q}) > 1 }
$$
Let us now take care of the term $B_{11}$ which is upper bounded by
$$
B_{11} \leq E \left[ \frac{1}{f(\frac{\ensuremath{\mathbf{z}}+ \mathcal{N}}{\gamma})} \right] = \gamma^{\alpha} E \left[ \frac{1}{f(\ensuremath{\mathbf{z}}+ \mathcal{N})} \right] < \gamma^{\alpha} c_{\ref{lem-integra-V}} \enspace,
$$
where for the latter term we have used Lemma~\ref{lem-integra-V}. Overall we find that
$$
0 \leq B(\ensuremath{\mathbf{z}}) < \frac{1}{f(\ensuremath{\mathbf{z}})} \gamma^{\alpha} c_{\ref{lem-integra-V}} \xrightarrow[\|\ensuremath{\mathbf{z}}\| \to \infty]{} 0
$$
where the latter limit comes from the fact that $\frac{1}{f(\ensuremath{\mathbf{z}})}$ converges to zero when $\ensuremath{\mathbf{z}}$ goes to infinity.
Overall we have proven that
\begin{equation}\label{eq:cond-infinity}
\lim_{\|\ensuremath{\mathbf{z}}\| \to \infty} \frac{PV(\ensuremath{\mathbf{z}})}{V(\ensuremath{\mathbf{z}})} = \frac12 \left( \frac{1}{\gamma^{\alpha}} + \gamma^{\alpha/q} \right) \enspace.
\end{equation}
\newcommand{A^{f<1}}{A^{f<1}}
\newcommand{B^{f<1}}{B^{f<1}}
\paragraph{Investigating the limit of $PV/V$ for $\ensuremath{\mathbf{z}}$ to zero}
We now investigate the limit for $\| \ensuremath{\mathbf{z}} \|$ to zero and consider thus $\ensuremath{\mathbf{z}}$ small enough, in particular we can assume
$$
f(\ensuremath{\mathbf{z}}) < \min \{ 1 , \gamma^{-\alpha/q} \}
$$
and hence $1/V(\ensuremath{\mathbf{z}}) = f(\ensuremath{\mathbf{z}}) $.
The quantity $PV(\ensuremath{\mathbf{z}})/V(\ensuremath{\mathbf{z}})$ writes
$$
\frac{PV(\ensuremath{\mathbf{z}})}{V(\ensuremath{\mathbf{z}})} =
\underbrace{f(\ensuremath{\mathbf{z}}) E \left[ f(\ensuremath{\mathbf{Z}_{\k+1}}) 1_{\{f(\ensuremath{\mathbf{Z}_{\k+1}}) \geq 1 \}} | \ensuremath{\mathbf{Z}_{\k}}=\ensuremath{\mathbf{z}} \right]}_{C(\ensuremath{\mathbf{z}})}
+
\underbrace{E \left[ \frac{f(\ensuremath{\mathbf{z}})}{f(\ensuremath{\mathbf{Z}_{\k+1}})} 1_{\{ f(\ensuremath{\mathbf{Z}_{\k+1}}) < 1 \}} | \ensuremath{\mathbf{Z}_{\k}} = \ensuremath{\mathbf{z}} \right]}_{D(\ensuremath{\mathbf{z}})} \enspace.
$$
Let us investigate the term $C(\ensuremath{\mathbf{z}})$:
\begin{multline}
C(\ensuremath{\mathbf{z}}) = f(\ensuremath{\mathbf{z}}) E \left[ f\left(\frac{\ensuremath{\mathbf{z}}+ \mathcal{N}}{\gamma}\right) 1_{\{f(\ensuremath{\mathbf{z}}+ \mathcal{N}) \leq f(\ensuremath{\mathbf{z}}) \}} 1_{\{ f((\ensuremath{\mathbf{z}}+ \mathcal{N})/\gamma) \geq 1 \}} \right] \\ + f(\ensuremath{\mathbf{z}}) E \left[f(\gamma^{\frac1q} \ensuremath{\mathbf{z}}) 1_{\{ f(\ensuremath{\mathbf{z}}+\mathcal{N}) > f(\ensuremath{\mathbf{z}}) \}} 1_{\{ f(\gamma^{\frac1q} \ensuremath{\mathbf{z}}) \geq 1 \}} \right]
\end{multline}
and hence
$$
0 \leq C(\ensuremath{\mathbf{z}}) \leq f(\ensuremath{\mathbf{z}}) E \left[ f\left(\frac{\ensuremath{\mathbf{z}}+ \mathcal{N}}{\gamma}\right) \right] + f(\ensuremath{\mathbf{z}})^{2} \gamma^{\alpha/q} \enspace.
$$
According to \eqref{eq:u2-1} and \eqref{eq:u2-1prime}, for $\| \ensuremath{\mathbf{z}} \|$ small enough (hence staying in a bounded region), $E \left[ f\left(\frac{\ensuremath{\mathbf{z}}+ \mathcal{N}}{\gamma}\right) \right]$ is a bounded function of $\ensuremath{\mathbf{z}}$
and thus $C(\ensuremath{\mathbf{z}})$ converges to zero when $\ensuremath{\mathbf{z}}$ goes to $0$:
\begin{equation}
\lim_{\ensuremath{\mathbf{z}} \to 0} C(\ensuremath{\mathbf{z}}) = 0 \enspace.
\end{equation}
Let us investigate the term $D(\ensuremath{\mathbf{z}})$:
\begin{align*}
D(\ensuremath{\mathbf{z}}) & = f(\ensuremath{\mathbf{z}}) E \left[ \frac{ 1_{\{f(\ensuremath{\mathbf{z}}+ \mathcal{N}) \leq f(\ensuremath{\mathbf{z}}) \}} 1_{\{ f((\ensuremath{\mathbf{z}}+ \mathcal{N})/\gamma) < 1 \}}}{f((\ensuremath{\mathbf{z}}+ \mathcal{N})/\gamma)} \right] +
f(\ensuremath{\mathbf{z}}) E \left[ \frac{1_{\{f(\ensuremath{\mathbf{z}}+ \mathcal{N}) > f(\ensuremath{\mathbf{z}}) \}}}{f(\ensuremath{\mathbf{z}}/\gamma^{-1/q})} \underbrace{1_{\{ f(\frac{\ensuremath{\mathbf{z}}}{\gamma^{-\frac1q}}) < 1 \}}}_{=1} \right] \\
& = f(\ensuremath{\mathbf{z}}) E \left[ \frac{1_{\{f(\ensuremath{\mathbf{z}}+ \mathcal{N}) \leq f(\ensuremath{\mathbf{z}}) \}} 1_{\{ f((\ensuremath{\mathbf{z}}+ \mathcal{N})/\gamma) < 1 \}}}{f((\ensuremath{\mathbf{z}}+ \mathcal{N})/\gamma)} \right] +
\gamma^{-\alpha/q} E \left[ 1_{\{f(\ensuremath{\mathbf{z}}+ \mathcal{N}) > f(\ensuremath{\mathbf{z}}) \}} \right] \\
& = \underbrace{f(\ensuremath{\mathbf{z}}) \gamma^{\alpha} E \left[ \frac{1_{\{f(\ensuremath{\mathbf{z}}+ \mathcal{N}) \leq f(\ensuremath{\mathbf{z}}) \}} 1_{\{ f((\ensuremath{\mathbf{z}}+ \mathcal{N})/\gamma) < 1 \}}}{f(\ensuremath{\mathbf{z}}+ \mathcal{N})} \right]}_{D_{1}(\ensuremath{\mathbf{z}})}
+ \gamma^{-\alpha/q} \Pr( f(\ensuremath{\mathbf{z}}+ \mathcal{N}) > f(\ensuremath{\mathbf{z}}) ) \enspace
\end{align*}
The term $D_{1}(\ensuremath{\mathbf{z}})$ is upper bounded by $ f(\ensuremath{\mathbf{z}}) \gamma^{\alpha} E \left[ \frac{1}{f(\ensuremath{\mathbf{z}}+ \mathcal{N})} \right]$ and using Lemma~\ref{lem-integra-V} we find that for all $\ensuremath{\mathbf{z}}$, $ E \left[ \frac{1}{f(\ensuremath{\mathbf{z}}+ \mathcal{N})} \right]$ is bounded by a constant. Hence, since $f(\ensuremath{\mathbf{z}})$ converges to $0$ when $\ensuremath{\mathbf{z}}$ goes to $0$ (Lemma~\ref{lem:too}), so does $D_{1}(\ensuremath{\mathbf{z}})$. Since, according to Lemma~\ref{lem:yop}, $\Pr( f(\ensuremath{\mathbf{z}}+ \mathcal{N}) > f(\ensuremath{\mathbf{z}}) )$ converges to $1$ when $\ensuremath{\mathbf{z}}$ goes to $0$, we find that
\begin{equation}
\lim_{\ensuremath{\mathbf{z}} \to 0} D(\ensuremath{\mathbf{z}}) = \gamma^{-\alpha/q} \enspace.
\end{equation}
Overall, we have proven that
\begin{equation}\label{eq:cond-zero}
\lim_{\ensuremath{\mathbf{z}} \to 0} PV(\ensuremath{\mathbf{z}})/V(\ensuremath{\mathbf{z}}) = \gamma^{-\alpha/q} \enspace.
\end{equation}
According to Lemma~\ref{lem:CSdrift}, we obtain a drift condition for geometric ergodicity if the limits in \eqref{eq:cond-infinity} and \eqref{eq:cond-zero} are strictly smaller than $1$, i.e., if
$$
\frac12 \left( \frac{1}{\gamma^{\alpha}} + \gamma^{\alpha/q} \right) < 1
$$
and $\gamma^{-\alpha/q} < 1$. This latter condition is equivalent to $\gamma > 1$.
\end{proof}
\subsection{Harris Recurrence and Positivity}
Harris recurrence and positivity of the chain $\ensuremath{\mathbf{Z}}$ follow from the geometric drift proven in Theorem~\ref{prop:OPO}. Indeed, recall the following drift result for Harris recurrence:
\begin{theorem}[Drift condition for Harris recurrence, Theorem 9.1.8 in \cite{Tweedie:book1993}]
Suppose $\ensuremath{\mathbf{Z}}$ is a $\psi$-irreducible chain. If there exists a petite set $C$ and a function $V$ which is unbounded off petite sets such that
\begin{equation}\label{eq:DriftHR}
\Delta V(\ensuremath{\mathbf{z}}) \leq 0, \ensuremath{\mathbf{z}} \in C^{c}
\end{equation}
holds, then $\ensuremath{\mathbf{Z}}$ is Harris recurrent.
\end{theorem}
In the previous theorem, a function $V: \mathcal{Z} \mapsto \mathbb{R}_{+}$ is unbounded off petite sets for $\ensuremath{\mathbf{Z}}$ if for any $c < \infty$, the sublevel set $\mathcal{L}_{c} = \{ \ensuremath{\mathbf{y}} : V(\ensuremath{\mathbf{y}}) \leq c \} $ is petite (see \cite[Section 8.4.2]{Tweedie:book1993}).
From \cite[Theorem~10.4.4]{Tweedie:book1993} a recurrent chain admits a unique (up to constant multiples) invariant measure. The positivity is deduced from another drift condition as expressed in the following theorem.
\begin{theorem}[From Theorem 13.0.1 in \cite{Tweedie:book1993}]
Suppose that $\ensuremath{\mathbf{Z}}$ is an aperiodic Harris recurrent chain with invariant measure $\pi$. The following are equivalent:
\begin{enumerate}
\item The chain is positive Harris: that is, the unique invariant measure $\pi$ is finite.
\item There exists some petite set $C$, some $b < \infty$ and a non-negative function $V$ finite at some $\ensuremath{\mathbf{z}}_{0} \in \mathcal{Z}$, satisfying
\begin{equation}\label{eq:driftPOS}
\Delta V(\ensuremath{\mathbf{z}}) \leq -1 + b 1_{C}(\ensuremath{\mathbf{z}}), \ensuremath{\mathbf{z}} \in \mathcal{Z} \enspace.
\end{equation}
\end{enumerate}
\end{theorem}
Using those two theorems we deduce the corollary that under the conditions of Theorem~\ref{prop:OPO} the chain $\ensuremath{\mathbf{Z}}$ is positive Harris recurrent:
\begin{corollary}\label{coro:HR-P}
Consider a (1+1)-ES with generalized one-fifth success rule as defined in \eqref{eq:sampling}, \eqref{eq:update-mean} and \eqref{eq:update-ss} optimizing $h = g \circ f$ where $g \in \ensuremath{\mathcal{M}}$, $f: \mathbb{R}^{n} \to [0,+\infty[$ satisfies Assumptions~\ref{ass:f2}.
Let $\ensuremath{\mathbf{Z}}=(\ensuremath{\mathbf{Z}_{\k}}=\ensuremath{\mathbf{X}_\k}/\ensuremath{\sigma_\k})_{t \in \ensuremath{{\mathbb{{N}}}}}$ be the Markov chain associated to the (1+1)-ES optimizing $h$ defined in \eqref{eq:transitionZ}.
If $\gamma > 1$ and $\frac12 \left( \frac{1}{\gamma^{\alpha}} + \gamma^{\alpha/q} \right) < 1$, then $\ensuremath{\mathbf{Z}}$ is positive Harris recurrent.
\end{corollary}
\begin{proof}
The assumptions on $f$ ensure that the chain is $\varphi$-irreducible and aperiodic (see Section~\ref{sec:ISSA}) and the geometric drift function exhibited in Theorem~\ref{prop:OPO} obviously satisfies \eqref{eq:DriftHR} and \eqref{eq:driftPOS}.
It is unbounded off petite sets as the sublevel sets are small sets for $\ensuremath{\mathbf{Z}}$.
\end{proof}
\section{Linear Convergence of the $(1+1)$-ES with the Generalized One-fifth Success Rule}\label{sec:linear-convergence}
Using the properties derived on the normalized chain $(\ensuremath{\mathbf{Z}_{\k}})_{t \in \ensuremath{{\mathbb{{N}}}}}$ we can now prove the global linear convergence of the $(1+1)$-ES with generalized one-fifth success rule. Linear convergence is formulated almost surely and in expectation. In a last part we characterize how fast the stationary regime where linear convergence takes place is reached. Common to the linear convergence results is the integrability of $\ensuremath{\mathbf{z}} \mapsto \ln \| \ensuremath{\mathbf{z}} \|$ with respect to the invariant probability measure $\pi$ of the chain $\ensuremath{\mathbf{Z}}$ that we investigate in the next section.
\subsection{Integrability w.r.t. the Stationary Measure}
To verify the integrability of $\ensuremath{\mathbf{z}} \mapsto \ln \| \ensuremath{\mathbf{z}} \|$ with respect to the invariant probability measure $\pi$, we use \eqref{eq:C-Vnorm} which is a consequence of the existence of a geometric drift. More formally we derive the following general technical lemma.
\begin{lemma}\label{lem:intV}
Let $V$ be a geometric drift function for $\ensuremath{\mathbf{Z}}$ in the sense of \eqref{eq:driftgerd} and $\pi$ its invariant probability measure. Assume that there exists $\ensuremath{\mathbf{z}} \in S_{V}$ such that $\int V(\ensuremath{\mathbf{y}}) P(\ensuremath{\mathbf{z}},d\ensuremath{\mathbf{y}}) < \infty$, then $V$ is integrable against $\pi$:
$
\| \pi \|_{V} = \int V(\ensuremath{\mathbf{z}}) \pi(d \ensuremath{\mathbf{z}}) < + \infty \enspace.
$
\end{lemma}
\begin{proof}
From the inequality \eqref{eq:C-Vnorm}, there exists $R< \infty$ and $\rho <1$ such that for any $\ensuremath{\mathbf{z}} \in S_{V}$
\begin{equation}\label{totoo}
\| P(\ensuremath{\mathbf{z}}, .) - \pi \|_{V} \leq \rho R V(\ensuremath{\mathbf{z}}) \enspace.
\end{equation}
Consider an increasing sequence of simple positive functions $V_{k}$ such that for each $\ensuremath{\mathbf{z}}$, $V_{k}(\ensuremath{\mathbf{z}})$ converges to $V(\ensuremath{\mathbf{z}})$. Then we know that $\int V(\ensuremath{\mathbf{z}}) \pi(d \ensuremath{\mathbf{z}}) = \lim_{k} \int V_{k}(\ensuremath{\mathbf{z}}) \pi (d \ensuremath{\mathbf{z}})$ where the latter limit always exists but may be infinite.
From the triangular inequality we deduce that for all $k$
\begin{align}\label{eq:toppo}
\int V_{k}(\ensuremath{\mathbf{y}}) \pi(d\ensuremath{\mathbf{y}}) & = \int V_{k}(\ensuremath{\mathbf{y}}) \pi(d\ensuremath{\mathbf{y}}) - \int V_{k}(\ensuremath{\mathbf{y}}) P(\ensuremath{\mathbf{z}},d\ensuremath{\mathbf{y}}) + \int V_{k}(\ensuremath{\mathbf{y}}) P(\ensuremath{\mathbf{z}},d\ensuremath{\mathbf{y}}) \\
& \leq | \int V_{k}(\ensuremath{\mathbf{y}}) \pi(d\ensuremath{\mathbf{y}}) - \int V_{k}(\ensuremath{\mathbf{y}}) P(\ensuremath{\mathbf{z}},d\ensuremath{\mathbf{y}}) | + \int V_{k}(\ensuremath{\mathbf{y}}) P(\ensuremath{\mathbf{z}},d\ensuremath{\mathbf{y}}) \\
& \leq \| P(\ensuremath{\mathbf{z}},.) - \pi \|_{V} + \int V(\ensuremath{\mathbf{y}}) P(\ensuremath{\mathbf{z}},d\ensuremath{\mathbf{y}})
\end{align}
where for the last inequality we have used the fact that $0 \leq V_{k} \leq V$ and the definition of $\| P(\ensuremath{\mathbf{z}},.) - \pi \|_{V}$ namely
$
\| P(\ensuremath{\mathbf{z}},.) - \pi \|_{V} = \sup_{g : |g| \leq V} | \int g(\ensuremath{\mathbf{y}}) (P(\ensuremath{\mathbf{z}},d\ensuremath{\mathbf{y}}) - \pi(d\ensuremath{\mathbf{y}})) |.
$
The fact that $ \int V_{k}(\ensuremath{\mathbf{y}}) P(\ensuremath{\mathbf{z}},d\ensuremath{\mathbf{y}}) \leq \int V(\ensuremath{\mathbf{y}}) P(\ensuremath{\mathbf{z}},d\ensuremath{\mathbf{y}}) $ is a consequence of $ V_{k} \leq V$.
In addition, $\int V(\ensuremath{\mathbf{y}}) P(\ensuremath{\mathbf{z}},d\ensuremath{\mathbf{y}}) < \infty$ according to the assumptions. Using \eqref{totoo} we find that for all $k$
$$
\int V_{k}(\ensuremath{\mathbf{y}}) \pi(d\ensuremath{\mathbf{y}}) \leq R V(\ensuremath{\mathbf{z}}) + \int V(\ensuremath{\mathbf{y}}) P(\ensuremath{\mathbf{z}},d\ensuremath{\mathbf{y}}) < \infty
$$
And hence $\int V \pi = \lim_{k} \int V_{k}(\ensuremath{\mathbf{y}}) \pi(d\ensuremath{\mathbf{y}}) < \infty.$
\end{proof}
We can now apply the previous lemma to the drift function $V(\ensuremath{\mathbf{z}}) = f(\ensuremath{\mathbf{z}}) 1_{\{f(\ensuremath{\mathbf{z}}) \geq 1\}} + \frac{1}{f(\ensuremath{\mathbf{z}})}1_{\{f(\ensuremath{\mathbf{z}}) < 1\}} $ assuming that $f$ satisfies Assumptions~\ref{ass:f2} and that the sufficient conditions for a geometric drift of Theorem~\ref{prop:OPO} are satisfied (the fact that $PV(\ensuremath{\mathbf{z}})$ is finite for one $\ensuremath{\mathbf{z}}$ in $S_{V}$ comes from Lemma~\ref{lem-integra-V}). We now prove that $| \ln \| \ensuremath{\mathbf{z}} \| |$ is upper bounded by a constant times $V(\ensuremath{\mathbf{z}})$, which, together with the previous lemma, implies the integrability of $\ln \| \ensuremath{\mathbf{z}} \|$ w.r.t.\ the stationary measure $\pi$.
\begin{lemma}\label{lem:integrability}
Assume that $f$ satisfies Assumptions~\ref{ass:f} and is continuous on $\mathbb{R}^{n}_{\neq}$. Then there exists a constant $K_{\ref{lem:integrability}}$ such that
\begin{equation}\label{homer}
| \ln \| \ensuremath{\mathbf{z}} \| | \leq K_{\ref{lem:integrability}} V(\ensuremath{\mathbf{z}}) \enspace.
\end{equation}
Assume that $f$ satisfies Assumptions~\ref{ass:f2} and that $\gamma > 1$ and $\frac12 (1/\gamma^{\alpha} + \gamma^{\alpha/q}) < 1 $. Then $\ln \| \ensuremath{\mathbf{z}} \|$ is integrable w.r.t. the stationary measure $\pi$ of $\ensuremath{\mathbf{Z}}$.
\end{lemma}
\begin{proof}
For $\| \ensuremath{\mathbf{z}} \|$ close to $0$, $f(\ensuremath{\mathbf{z}})$ is close to $0$ and thus $V(\ensuremath{\mathbf{z}}) = \frac{1}{f(\ensuremath{\mathbf{z}})}$. Hence there exists $c$ such that
$$
\frac{| \ln \| \ensuremath{\mathbf{z}} \| |}{V(\ensuremath{\mathbf{z}})} = f(\ensuremath{\mathbf{z}}) \, | \ln \| \ensuremath{\mathbf{z}} \| | \leq c \, \| \ensuremath{\mathbf{z}} \|^{\alpha} | \ln \| \ensuremath{\mathbf{z}} \| |
$$
where the latter term is bounded since $ \| \ensuremath{\mathbf{z}} \|^{\alpha} | \ln \| \ensuremath{\mathbf{z}} \| |$ goes to $0$ when $\ensuremath{\mathbf{z}}$ goes to $0$. Note that for the inequality we have used the upper bound in \eqref{eq:LB-UB}.
For $\ensuremath{\mathbf{z}}$ large, $V(\ensuremath{\mathbf{z}}) = f(\ensuremath{\mathbf{z}}) $ and using $| \ln \| \ensuremath{\mathbf{z}} \| | \leq \| \ensuremath{\mathbf{z}} \|^{\alpha} \leq c' f(\ensuremath{\mathbf{z}})$ for some constant $c' > 0$ (again we use \eqref{eq:LB-UB}) we find that for $\ensuremath{\mathbf{z}}$ large $ | \ln \| \ensuremath{\mathbf{z}} \| | \leq c' V(\ensuremath{\mathbf{z}})$.
Since $|\ln \| \ensuremath{\mathbf{z}} \||$ is continuous and hence bounded on all $\| \ensuremath{\mathbf{z}}\|$ in an interval $[a,b]$ with $0<a<b<+\infty$ then there exists $ \tilde{c}$ such that
$|\ln \| \ensuremath{\mathbf{z}} \|| 1_{\{ a \leq \| \ensuremath{\mathbf{z}} \| \leq b \}} \leq \tilde{c} \leq \tilde{c} V(\ensuremath{\mathbf{z}}) 1_{\{ a \leq \| \ensuremath{\mathbf{z}} \| \leq b \}} $. Overall we have proven that \eqref{homer} holds.
Since according to Lemma~\ref{lem-integra-V} $\int V(\ensuremath{\mathbf{y}}) P(\ensuremath{\mathbf{z}},d\ensuremath{\mathbf{y}}) < + \infty$ for $\ensuremath{\mathbf{z}} \in \mathbb{R}^{n}_{\neq}$, when the conditions of Theorem~\ref{prop:OPO} are satisfied, we deduce from the previous lemma that $V$ is integrable w.r.t. $\pi$ and hence $| \ln \| \ensuremath{\mathbf{z}} \| |$ is integrable with respect to $\pi$.
\end{proof}
\subsection{Asymptotic Probability of Success}
We investigate now the asymptotic probability of success that comes into play in the convergence rate of the algorithm. Success is defined as the event that a candidate solution is at least as good as the current solution $\ensuremath{\mathbf{X}_\k}$, i.e., the probability of success reads
$
P_{\frac{\ensuremath{\mathbf{x}}}{\sigma}} \left(f ( \ensuremath{\mathbf{X}}_{t} + \ensuremath{\sigma_\k} \ensuremath{\mathbf{U}_{\k}}^{1} ) \leq f( \ensuremath{\mathbf{X}}_{t}) \right)
$
and due to the scale-invariant property of $f$, this latter quantity can be expressed with the Markov chain $\ensuremath{\mathbf{Z}}$ as
$$
P_{\ensuremath{\mathbf{z}}} \left( f( \ensuremath{\mathbf{Z}_{\k}} + \ensuremath{\mathbf{U}_{\k}}^{1} ) \leq f(\ensuremath{\mathbf{Z}_{\k}}) \right) = E_{\ensuremath{\mathbf{z}}} \left[ 1_{\{ f( \ensuremath{\mathbf{Z}_{\k}} + \ensuremath{\mathbf{U}_{\k}}^{1} ) \leq f(\ensuremath{\mathbf{Z}_{\k}}) \}} \right] \enspace.
$$
The convergence of the probability of success is a consequence of the positivity and aperiodicity and can be deduced from \cite[Theorem~14.0.1]{Tweedie:book1993}.
\begin{proposition}[Asymptotic probability of success]
Let the $(1+1)$-ES with generalized one-fifth success rule optimize $h = g \circ f$ where $g \in \ensuremath{\mathcal{M}}$ and $f$ satisfies Assumptions~\ref{ass:f2}. Assume that $\gamma > 1$ and $\frac12 (1/\gamma^{\alpha} + \gamma^{\alpha/q}) < 1 $. Let $\pi$ be the invariant probability measure of the normalized Markov chain $\ensuremath{\mathbf{Z}}$.
Then for any initial condition $(\ensuremath{\mathbf{x}},\sigma) \in \mathbb{R}^{n}_{\neq} \times \mathbb{R}^{+}_{>}$, the following holds
\begin{equation}\label{eq:limProbaSuccess}
{\rm PS}:=
\lim_{t \to \infty}
P_{\frac{\ensuremath{\mathbf{x}}}{\sigma}} \left(f ( \ensuremath{\mathbf{X}}_{t} + \ensuremath{\sigma_\k} \ensuremath{\mathbf{U}_{\k}}^{1} ) \leq f( \ensuremath{\mathbf{X}}_{t}) \right)
= \int 1_{\{f(\ensuremath{\mathbf{y}} + \ensuremath{\mathbf{n}}) \leq f(\ensuremath{\mathbf{y}})\}}(\ensuremath{\mathbf{y}},\ensuremath{\mathbf{n}}) \pi(d\ensuremath{\mathbf{y}}) p_{{\mathcal{N}}}(\ensuremath{\mathbf{n}}) d \ensuremath{\mathbf{n}} \enspace.
\end{equation}
\end{proposition}
\begin{proof}
From \cite[Theorem~14.0.1]{Tweedie:book1993}, given a $\psi$-irreducible and aperiodic chain, given a function $k \geq 1$, if the chain is positive recurrent with invariant probability measure $\pi$ and $\pi(k) = \int \pi(d\ensuremath{\mathbf{z}}) k(\ensuremath{\mathbf{z}}) < \infty$, then for any $\ensuremath{\mathbf{z}} \in S_{\tilde V}= \{ \ensuremath{\mathbf{z}} : \tilde V(\ensuremath{\mathbf{z}}) < \infty \} $ where the function $\tilde V$ is an extended-valued function satisfying
\begin{equation}\label{eq:driftGEO-ERGO}
\Delta \tilde V(\ensuremath{\mathbf{z}}) \leq - k(\ensuremath{\mathbf{z}}) + b 1_{C}(\ensuremath{\mathbf{z}}) \enspace
\end{equation}
for some petite set $C$ and $b \in \mathbb{R}$, $\|P^{t}(\ensuremath{\mathbf{z}},.) - \pi \|_{k} \to 0$ holds. We take here $k(\ensuremath{\mathbf{z}})=1$ and hence the geometric drift proven in Theorem~\ref{prop:OPO} implies also \eqref{eq:driftGEO-ERGO}. From Corollary~\ref{coro:HR-P}, the chain is positive and hence the function $k(\ensuremath{\mathbf{y}})=1$ is integrable w.r.t. $\pi$.
Remark that
$$
P_{\ensuremath{\mathbf{z}}} \left( f( \ensuremath{\mathbf{Z}_{\k}} + \ensuremath{\mathbf{U}_{\k}}^{1} ) \leq f(\ensuremath{\mathbf{Z}_{\k}}) \right)
= \int w(\ensuremath{\mathbf{y}}) P^{t}(\ensuremath{\mathbf{z}},\ensuremath{\mathbf{y}}) d\ensuremath{\mathbf{y}}
$$
where $w(\ensuremath{\mathbf{y}}) = \int 1_{\{f(\ensuremath{\mathbf{y}} + \ensuremath{\mathbf{n}}) \leq f(\ensuremath{\mathbf{y}}) \}}(\ensuremath{\mathbf{n}}) p_{{\mathcal{N}}}(\ensuremath{\mathbf{n}}) d \ensuremath{\mathbf{n}} $, then $w(\ensuremath{\mathbf{y}}) \leq 1$ and hence from \cite[Theorem 14.0.1]{Tweedie:book1993} we deduce that ($\ensuremath{\mathbf{z}} = \ensuremath{\mathbf{x}} / \sigma$)
$$
| P_{\ensuremath{\mathbf{z}}} \left( f( \ensuremath{\mathbf{Z}_{\k}} + \ensuremath{\mathbf{U}_{\k}}^{1} ) \leq f(\ensuremath{\mathbf{Z}_{\k}}) \right) - \int w(\ensuremath{\mathbf{y}}) \pi(d \ensuremath{\mathbf{y}}) | \leq \| P^{t}(\ensuremath{\mathbf{z}},.) - \pi \|_{\ensuremath{\mathbf{y}} \mapsto 1} \xrightarrow[t \to \infty]{} 0 \enspace.
$$
\end{proof}
We also derive a Law of Large Numbers for the asymptotic probability of success.
\begin{proposition}\label{prop:asCVPS}
Let the $(1+1)$-ES with generalized one-fifth success rule optimize $h = g \circ f$ where $g \in \ensuremath{\mathcal{M}}$ and $f$ satisfies Assumptions~\ref{ass:f2}. Assume that $\gamma > 1$ and $\frac12 (1/\gamma^{\alpha} + \gamma^{\alpha/q}) < 1 $. Let $(\ensuremath{\mathbf{Z}_{\k}}=\ensuremath{\mathbf{X}_\k}/\ensuremath{\sigma_\k})_{t \in \ensuremath{{\mathbb{{N}}}}}$ be the normalized Markov chain associated to $(\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k})_{t \in \ensuremath{{\mathbb{{N}}}}}$. Then for all initial condition $(\ensuremath{\mathbf{X}}_{0},\sigma_{0})$
\begin{equation}\label{eq:LLN-AS}
\frac{1}{t} \sum_{k=0}^{t-1} 1_{\{ f( \ensuremath{\mathbf{X}}_{k} + \sigma_{k} \ensuremath{\mathbf{U}}_{k}^{1} ) \leq f( \ensuremath{\mathbf{X}}_{k}) \}}
=
\frac{1}{t} \sum_{k=0}^{t-1} 1_{\{ f( \ensuremath{\mathbf{Z}}_{k} + \ensuremath{\mathbf{U}}_{k}^{1} ) \leq f( \ensuremath{\mathbf{Z}}_{k}) \}}
\xrightarrow[t \to \infty]{} {\rm PS}
\end{equation}
where the asymptotic probability of success ${\rm PS}$ is defined in \eqref{eq:limProbaSuccess}.
\end{proposition}
\begin{proof}
From Corollary~\ref{coro:HR-P}, the chain is positive and Harris recurrent. The function $\ensuremath{\mathbf{y}} \mapsto w(\ensuremath{\mathbf{y}}) = \int 1_{\{f(\ensuremath{\mathbf{y}} + \ensuremath{\mathbf{n}}) \leq f(\ensuremath{\mathbf{y}}) \}}(\ensuremath{\mathbf{n}}) p_{{\mathcal{N}}}(\ensuremath{\mathbf{n}}) d \ensuremath{\mathbf{n}} $ being integrable w.r.t.\ the stationary measure $\pi$, we can thus apply the Law of Large Numbers (\cite[Theorem 17.0.1]{Tweedie:book1993}) that gives us \eqref{eq:LLN-AS}.
\end{proof}
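The Law of Large Numbers \eqref{eq:LLN-AS} suggests a simple Monte Carlo estimator of ${\rm PS}$: simulate the normalized chain $\ensuremath{\mathbf{Z}}$ and average the success indicators. Below is a minimal Python sketch, assuming the sphere function $f(\ensuremath{\mathbf{z}}) = \| \ensuremath{\mathbf{z}} \|^{2}$ and standard Gaussian steps (the function name and the default values are ours):
\begin{verbatim}
import numpy as np

def estimate_ps(n=20, gamma=np.exp(1.0 / 3.0), q=4,
                iters=100000, seed=1):
    # Empirical frequency of successes along the normalized chain Z;
    # it converges to PS by the Law of Large Numbers.
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)        # arbitrary initial condition
    successes = 0
    for _ in range(iters):
        u = rng.standard_normal(n)    # candidate step, N(0, I_n)
        if np.dot(z + u, z + u) <= np.dot(z, z):  # success
            z = (z + u) / gamma       # accept; step-size increases
            successes += 1
        else:
            z = gamma**(1.0 / q) * z  # reject; step-size decreases
    return successes / iters

print(estimate_ps())  # typically below 1/(q+1) = 1/5 here
\end{verbatim}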
\subsection{Almost Sure Linear Convergence}
Almost sure linear convergence derives from the application of a Law of Large Numbers (LLN) (see Theorem~5.2 in \cite{methodology-paper}). The assumptions needed to apply a LLN to $(\ensuremath{\mathbf{Z}_{\k}})_{t \in \ensuremath{{\mathbb{{N}}}}}$ are positivity, Harris-recurrence and integrability of $\ln \| \ensuremath{\mathbf{z}} \|$. We are now ready to prove the almost sure linear convergence of the $(1+1)$-ES with generalized one-fifth success rule.
\begin{theorem}
Let the $(1+1)$-ES with generalized one-fifth success rule optimize $h = g \circ f$ where $g \in \ensuremath{\mathcal{M}}$ and $f$ satisfies Assumptions~\ref{ass:f2}. Assume that $\gamma > 1$ and $\frac12 (1/\gamma^{\alpha} + \gamma^{\alpha/q}) < 1 $. Then for all initial condition $(\ensuremath{\mathbf{X}}_{0}, \sigma_{0})$ almost sure linear convergence for the mean vector and for the step-size holds, i.e.,
\begin{align}\label{eq:asCVx}
& \frac{1}{t} \ln \frac{\| \ensuremath{\mathbf{X}}_{t} \|}{\| \ensuremath{\mathbf{X}}_{0} \|} \xrightarrow[t \to \infty]{} \ln \gamma \left( \frac{q+1}{q} {\rm PS} - \frac1q \right)
\\\label{eq:asCVsigma}
& \frac{1}{t} \ln \frac{\sigma_{t} }{ \sigma_{0} } \xrightarrow[t \to \infty]{} \ln \gamma \left( \frac{q+1}{q} {\rm PS} - \frac1q \right) \enspace .
\end{align}
where ${\rm PS}$ is the asymptotic probability of success defined in \eqref{eq:limProbaSuccess}.
\end{theorem}
\begin{proof}
The proof follows the same lines as the proof of Theorem~5.2 in \cite{methodology-paper}.
We start by re-writing the log-progress:
\begin{multline*}
\frac{1}{t} \ln \frac{\| \ensuremath{\mathbf{X}}_{t} \|}{\| \ensuremath{\mathbf{X}}_{0} \|} = \frac1t \sum_{k=0}^{t-1} \ln \frac{\| \ensuremath{\mathbf{X}}_{k+1} \|}{\| \ensuremath{\mathbf{X}}_{k} \|}
= \frac1t \sum_{k=0}^{t-1} \ln \frac{\sigma_{k+1} \| \ensuremath{\mathbf{Z}}_{k+1}\| }{\sigma_{k} \| \ensuremath{\mathbf{Z}}_{k} \|}
= \frac1t \sum_{k=0}^{t-1} \ln \frac{\mathcal{G}_{2}(1,\mathbf{Y}_{k}) \| \ensuremath{\mathbf{Z}}_{k+1}\| }{ \| \ensuremath{\mathbf{Z}}_{k} \|} \\
= \underbrace{\frac1t \sum_{k=0}^{t-1} \ln \| \ensuremath{\mathbf{Z}}_{k+1} \|}_{A} - \underbrace{\frac1t \sum_{k=0}^{t-1} \ln \| \ensuremath{\mathbf{Z}}_{k} \|}_{B} + \underbrace{\frac1t \sum_{k=0}^{t-1} \ln \mathcal{G}_{2}(1,\mathbf{Y}_{k})}_{C}
\end{multline*}
where we have used the scale-invariant property \eqref{eq:SIG2} and the fact that $\mathbf{Y}_{k} = \mathcal{S}_{(\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k})}(\ensuremath{\mathbf{U}}_{k}) * \ensuremath{\mathbf{U}_{\k}} = \mathcal{S}_{(\ensuremath{\mathbf{Z}}_{k},1)}(\ensuremath{\mathbf{U}}_{k})$.
Since $\ln \| \ensuremath{\mathbf{z}} \|$ is integrable w.r.t.\ the stationary measure $\pi$ we can apply the LLN to the terms $A$ and $B$ and we find that they both converge towards $\int \ln \| \ensuremath{\mathbf{z}} \| \pi(d \ensuremath{\mathbf{z}})$ such that $A$ minus $B$ converges to zero. Let us investigate the term $C$, since
$
\mathcal{G}_{2}(1,\ensuremath{\mathbf{Y}_\k}) = (\gamma - \gamma^{-1/q}) 1_{\{ f( \ensuremath{\mathbf{Z}}_{t} + \ensuremath{\mathbf{U}}_{t}^{1}) \leq f(\ensuremath{\mathbf{Z}}_{t}) \}} + \gamma^{-1/q}
$
we find that
$
\ln \mathcal{G}_{2}(1,\ensuremath{\mathbf{Y}_\k}) = \ln \gamma ( ( 1 + 1/q ) 1_{\{ f( \ensuremath{\mathbf{Z}}_{t} + \ensuremath{\mathbf{U}}_{t}^{1}) \leq f(\ensuremath{\mathbf{Z}}_{t}) \}} - \frac1q ).
$
Therefore
\begin{multline}\label{eq:tchoumb}
C = \frac1t \sum_{k=0}^{t-1} \ln \mathcal{G}_{2}(1,\mathbf{Y}_{k}) = \ln \gamma \left( 1 + \frac1q \right) \frac1t \sum_{k=0}^{t-1} 1_{\{ f( \ensuremath{\mathbf{Z}}_{k} + \ensuremath{\mathbf{U}}_{k}^{1}) \leq f(\ensuremath{\mathbf{Z}}_{k}) \}} - \frac1q \ln \gamma \xrightarrow[]{} \\
\ln \gamma \left( 1 + \frac1q \right) \int 1_{\{ f(\ensuremath{\mathbf{z}} + \ensuremath{\mathbf{u}}) \leq f(\ensuremath{\mathbf{z}}) \}}(\ensuremath{\mathbf{z}},\ensuremath{\mathbf{u}}) \pi( d \ensuremath{\mathbf{z}}) p_{{\mathcal{N}}}(\ensuremath{\mathbf{u}}) d \ensuremath{\mathbf{u}} - \frac1q \ln \gamma = \ln \gamma \left[ \left(1+\frac1q\right) {\rm PS} - \frac1q \right]
\end{multline}
where we have used Proposition~\ref{prop:asCVPS} for the latter limit.
The limit \eqref{eq:asCVsigma} follows from the fact that
$$
\frac1t \ln \frac{\sigma_{t}}{\sigma_{0}} = \frac1t \sum_{k=0}^{t-1} \ln \frac{\sigma_{k+1}}{\sigma_{k}} = \frac1t \sum_{k=0}^{t-1} \ln \mathcal{G}_{2}(1,\mathbf{Y}_{k})
$$
with as above $\mathbf{Y}_{k} = \mathcal{S}_{(\ensuremath{\mathbf{X}_\k},\ensuremath{\sigma_\k})}(\ensuremath{\mathbf{U}}_{k}) * \ensuremath{\mathbf{U}_{\k}} = \mathcal{S}_{(\ensuremath{\mathbf{Z}}_{k},1)}(\ensuremath{\mathbf{U}}_{k})$. Using~\eqref{eq:tchoumb} we obtain \eqref{eq:asCVsigma}.
\end{proof}
We define the convergence rate as minus the almost sure limit of the logarithm of $\| \ensuremath{\mathbf{X}_\k} \|$ or of $\sigma_{t}$, which corresponds to minus the expectation of the logarithm of the step-size change w.r.t. the stationary distribution, i.e.,
\begin{equation}\label{eq:CR}
\boxed{{\rm CR} := - E_{\pi} \left[ \ln \ensuremath{\eta^{\star}} \right]
=
- \ln \gamma \left( \frac{q+1}{q} {\rm PS} - \frac1q \right) } \enspace.
\end{equation}
Figure~\ref{fig:simul} presents some convergence graphs of the $(1+1)$-ES with generalized $1/5$ success rule. The slope of the linear decrease observed in log scale (after a small adaptation period on the left graph) corresponds to $- {\rm CR}$.
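Such simulations are easy to reproduce. The following Python sketch, again assuming the sphere function $f(\ensuremath{\mathbf{x}}) = \| \ensuremath{\mathbf{x}} \|^{2}$ (the function name and the default values are ours), runs the $(1+1)$-ES with generalized one-fifth success rule and estimates ${\rm CR}$ as minus the average log step-size change:
\begin{verbatim}
import numpy as np

def empirical_cr(n=20, gamma=np.exp(1.0 / 3.0), q=4,
                 sigma0=1.0, iters=20000, seed=3):
    # Run the (1+1)-ES on the sphere and estimate
    # CR = - lim (1/t) ln(sigma_t / sigma_0).
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    sigma = sigma0
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(n)  # candidate solution
        if np.dot(y, y) <= np.dot(x, x):        # success
            x, sigma = y, gamma * sigma         # step-size increases
        else:                                   # failure
            sigma = gamma**(-1.0 / q) * sigma   # step-size decreases
    return -np.log(sigma / sigma0) / iters

print(empirical_cr())  # positive when linear convergence takes place
\end{verbatim}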
\paragraph{Sign of the convergence rate}
Convergence will take place if ${\rm CR} > 0$. We prove in the next proposition an alternative expression for the convergence rate that allows us to conclude that ${\rm CR} > 0$.
\begin{proposition}\label{prop:SignCR}
Let the $(1+1)$-ES with generalized one-fifth success rule optimize $h = g \circ f$ where $g \in \ensuremath{\mathcal{M}}$ and $f$ satisfies Assumptions~\ref{ass:f2}. Assume that $\gamma > 1$ and $\frac12 (1/\gamma^{\alpha} + \gamma^{\alpha/q}) < 1 $. Let ${\rm CR}$ be the convergence rate of the algorithm given in \eqref{eq:CR}. Then
\begin{align}\label{eq:CR2}
{\rm CR} & = - \frac{1}{\alpha} E_{\pi} \left(\ln \frac{f(\ensuremath{\mathbf{Z}_{\k}}+\ensuremath{\mathbf{U}_{\k}}^{1}1_{\{f(\ensuremath{\mathbf{Z}_{\k}}+\ensuremath{\mathbf{U}_{\k}}^{1}) \leq f(\ensuremath{\mathbf{Z}_{\k}}) \} }) }{f(\ensuremath{\mathbf{Z}_{\k}})} \right) \\
& = - \frac{1}{\alpha}
\int \ln
\frac{f(\ensuremath{\mathbf{z}}+ \ensuremath{\mathbf{n}} 1_{f(\ensuremath{\mathbf{z}}+\ensuremath{\mathbf{n}}) \leq f(\ensuremath{\mathbf{z}})})}
{f(\ensuremath{\mathbf{z}})}
p_{{\mathcal{N}}}(\ensuremath{\mathbf{n}}) p_{\pi}(\ensuremath{\mathbf{z}}) d\ensuremath{\mathbf{z}} d \ensuremath{\mathbf{n}}
\end{align}
where $p_{\pi}$ is the density of the invariant probability measure $\pi$ with respect to the Lebesgue measure.
Consequently ${\rm CR} > 0$, i.e., linear \emph{convergence} indeed takes place.
\end{proposition}
\begin{proof}
Because $\pi$ is the invariant probability measure of $\ensuremath{\mathbf{Z}}$, if $\ensuremath{\mathbf{Z}}_{0} \sim \pi$ then for all $t$, $\ensuremath{\mathbf{Z}_{\k}} \sim \pi$ such that
$
E_{\pi} \left[ \ln \frac{f(\ensuremath{\mathbf{Z}_{\k+1}})}{f(\ensuremath{\mathbf{Z}_{\k}})} \right]
= 0.
$
On the other hand, since $\ensuremath{\mathbf{Z}_{\k+1}} = (\ensuremath{\mathbf{Z}_{\k}}+\ensuremath{\mathbf{U}_{\k}}^{1}1_{\{f(\ensuremath{\mathbf{Z}_{\k}}+\ensuremath{\mathbf{U}_{\k}}^{1}) \leq f(\ensuremath{\mathbf{Z}_{\k}}) \} })/\ensuremath{\eta^{\star}}$ we deduce that
$$
E_{\pi} \left[ \ln \frac{f(\ensuremath{\mathbf{Z}_{\k+1}})}{f(\ensuremath{\mathbf{Z}_{\k}})} \right] = E_{\pi} \left[ - \alpha \ln \ensuremath{\eta^{\star}} + \ln \frac{f(\ensuremath{\mathbf{Z}_{\k}}+\ensuremath{\mathbf{U}_{\k}}^{1}1_{\{f(\ensuremath{\mathbf{Z}_{\k}}+\ensuremath{\mathbf{U}_{\k}}^{1}) \leq f(\ensuremath{\mathbf{Z}_{\k}}) \} }) }{f(\ensuremath{\mathbf{Z}_{\k}})} \right] = 0
$$
Thus
\begin{align}
{\rm CR} = - E_{\pi} \left[ \ln \ensuremath{\eta^{\star}} \right] & = - \frac{1}{\alpha} E_{\pi} \left(\ln \frac{f(\ensuremath{\mathbf{Z}_{\k}}+\ensuremath{\mathbf{U}_{\k}}^{1}1_{\{f(\ensuremath{\mathbf{Z}_{\k}}+\ensuremath{\mathbf{U}_{\k}}^{1}) \leq f(\ensuremath{\mathbf{Z}_{\k}}) \} }) }{f(\ensuremath{\mathbf{Z}_{\k}})} \right) \\
& = - \frac{1}{\alpha}
\int \ln
\frac{f(\ensuremath{\mathbf{z}}+ \ensuremath{\mathbf{n}} 1_{f(\ensuremath{\mathbf{z}}+\ensuremath{\mathbf{n}}) \leq f(\ensuremath{\mathbf{z}})})}
{f(\ensuremath{\mathbf{z}})}
p_{{\mathcal{N}}}(\ensuremath{\mathbf{n}}) p_{\pi}(\ensuremath{\mathbf{z}}) d\ensuremath{\mathbf{z}} d \ensuremath{\mathbf{n}}
\end{align}
where in the previous equation we have used the fact that according to \cite[Theorem~10.4.9]{Tweedie:book1993}, $\pi$ and the maximal irreducibility measure for $\ensuremath{\mathbf{Z}}$ are equivalent. Hence $\pi$ is equivalent to the Lebesgue measure. We denote by $p_{\pi}$ its density.
Since $\frac{f(\ensuremath{\mathbf{z}}+ \ensuremath{\mathbf{n}} 1_{f(\ensuremath{\mathbf{z}}+\ensuremath{\mathbf{n}}) \leq f(\ensuremath{\mathbf{z}})})}
{f(\ensuremath{\mathbf{z}})} \leq 1 $ we see that ${\rm CR} \geq 0$. However ${\rm CR} =0$ is impossible as it would imply that $\ln
\frac{f(\ensuremath{\mathbf{z}}+ \ensuremath{\mathbf{n}} 1_{f(\ensuremath{\mathbf{z}}+\ensuremath{\mathbf{n}}) \leq f(\ensuremath{\mathbf{z}})})}
{f(\ensuremath{\mathbf{z}})}
p_{{\mathcal{N}}}(\ensuremath{\mathbf{n}}) p_{\pi}(\ensuremath{\mathbf{z}}) = 0$ almost everywhere.
\end{proof}
The fact that the convergence rate ${\rm CR}$ is strictly positive is equivalent to having the asymptotic probability of success satisfying ${\rm PS} < 1/(q+1)$. In the case where the target probability of success is $1/5$ (as proposed in \cite{Schumer:68,Rechenberg}), this implies ${\rm PS} < 1/5$. Hence we find that when convergence occurs the asymptotic probability of success is strictly smaller than $1/5$.
In the case of a non elitist algorithm, it is not easy to obtain the sign of the convergence rate and one needs to resort to numerical simulation (see \cite{TCSAnne04}).
\subsection{Linear convergence in expectation}
Linear convergence in expectation is formulated in the next theorem. The proof follows the lines of Theorem~5.3 in \cite{methodology-paper}.
\begin{theorem}
Let the $(1+1)$-ES with generalized one-fifth success rule optimize $h = g \circ f$ where $g \in \ensuremath{\mathcal{M}}$ and $f$ satisfies Assumptions~\ref{ass:f2}. Assume that $\gamma > 1$ and $\frac12 (1/\gamma^{\alpha} + \gamma^{\alpha/q}) < 1 $. Then for all initial condition $(\ensuremath{\mathbf{X}}_{0}, \sigma_{0})=(\ensuremath{\mathbf{x}}_{0},\sigma_{0}) \in \mathbb{R}^{n}_{\neq} \times \mathbb{R}^{+}_{>}$
\begin{equation}\label{eq:convEXPX}
\lim_{t \to \infty} E_{\frac{\ensuremath{\mathbf{x}}_{0}}{\sigma_{0}}} \left[\ln \frac{\|\ensuremath{\mathbf{X}_{\k+1}}\|}{\| \ensuremath{\mathbf{X}_\k} \|} \right] = - {\rm CR}
\end{equation}
and
\begin{equation}\label{eq:convEXPsigma}
\lim_{t \to \infty} E_{\frac{\ensuremath{\mathbf{x}}_{0}}{\sigma_{0}}} \left[\ln \frac{\sigma_{t+1}}{\sigma_{t}}\right] = - {\rm CR} \enspace.
\end{equation}
\end{theorem}
\begin{proof}
The conditions of Theorem~5.3 of \cite{methodology-paper}\ are satisfied. Hence we can conclude linear convergence in expectation for all initial conditions $(\ensuremath{\mathbf{x}}_{0},\sigma_{0})$ such that $V(\ensuremath{\mathbf{x}}_{0}/\sigma_{0}) < \infty$ where $V$ is a function satisfying
\begin{equation}\label{eq:toptop}
\Delta V(\ensuremath{\mathbf{z}}) \leq - ( | \ln \| \ensuremath{\mathbf{z}} \| | + 1) + b 1_{C}(\ensuremath{\mathbf{z}}) \enspace.
\end{equation}
Let us show that the previous condition is satisfied for a function proportional to the geometric drift function of Theorem~\ref{prop:OPO}. This will hence imply that the initial condition $(\ensuremath{\mathbf{x}}_{0},\sigma_{0})$ can be taken in $\mathbb{R}^{n}_{\neq} \times \mathbb{R}^{+}_{>}$.
Indeed, consider the function $\tilde V$ given in \eqref{eq:driftOPO} (to avoid ambiguity we denote $\tilde V$ the function originally denoted $V$). We have proven that it satisfies a drift condition for geometric ergodicity, that is, there exists some petite set $C$, some constants $b< \infty$ and $0< \vartheta <1$ such that
\begin{equation}\label{eq:intern1}
\Delta \tilde V \leq (\vartheta -1) \tilde V(\ensuremath{\mathbf{z}}) + b 1_{C}(\ensuremath{\mathbf{z}}) \enspace.
\end{equation}
Since in Lemma~\ref{lem:integrability} we have proven that $|\ln\| \ensuremath{\mathbf{z}} \| | \leq K_{\ref{lem:integrability}} \tilde V(\ensuremath{\mathbf{z}})$ and $\tilde V \geq 1$, the following inequality holds: $ |\ln\| \ensuremath{\mathbf{z}} \| | + 1 \leq (K_{\ref{lem:integrability}} + 1) \tilde V(\ensuremath{\mathbf{z}})$. We deduce that $(\vartheta - 1) (K_{\ref{lem:integrability}}+1) \tilde V(\ensuremath{\mathbf{z}}) \leq (\vartheta - 1) (| \ln \| \ensuremath{\mathbf{z}} \| | + 1) $ and hence
\begin{equation}\label{eq:intern2}
(\vartheta -1) (K_{\ref{lem:integrability}}+1)\tilde V(\ensuremath{\mathbf{z}}) /(1 - \vartheta) \leq - (|\ln \| \ensuremath{\mathbf{z}} \| |+1)
\end{equation}
Let us take $V = (K_{\ref{lem:integrability}}+1)/(1 - \vartheta) \tilde V$, \eqref{eq:intern1} implies that
$$
\Delta V \leq (\vartheta -1) V(\ensuremath{\mathbf{z}}) + b \frac{K_{\ref{lem:integrability}}+1}{1 - \vartheta} 1_{C}(\ensuremath{\mathbf{z}}) \leq - (|\ln \| \ensuremath{\mathbf{z}} \||+1) + b \frac{K_{\ref{lem:integrability}}+1}{1 - \vartheta} 1_{C}(\ensuremath{\mathbf{z}})
$$
where we have used \eqref{eq:intern2} for the latter inequality. Since $V(\ensuremath{\mathbf{z}}) < \infty$ whenever $f(\ensuremath{\mathbf{z}}) < \infty$ and $1/f(\ensuremath{\mathbf{z}}) < \infty$ we deduce from Theorem~5.3 in \cite{methodology-paper}\ that the limits \eqref{eq:convEXPX} and \eqref{eq:convEXPsigma} hold for all $\ensuremath{\mathbf{x}}_{0}/\sigma_{0}$ such that $f(\ensuremath{\mathbf{x}}_{0}/\sigma_{0}) < \infty$ and $1/f(\ensuremath{\mathbf{x}}_{0}/\sigma_{0}) < \infty$, i.e., for all $(\ensuremath{\mathbf{x}}_{0},\sigma_{0}) \in \mathbb{R}^{n}_{\neq} \times \mathbb{R}^{+}_{>}$.
\end{proof}
\subsection{Consequences of Geometric Ergodicity: Adaptivity at a Geometric Rate}
The geometric ergodicity means that the invariant probability distribution is reached geometrically fast. It implies that from any starting point in $\mathbb{R}^{n}_{\neq} \times \mathbb{R}^{+}_{>}$, the expected (step-size) log progress $E_{\frac{\ensuremath{\mathbf{x}}_{0}}{\sigma_{0}}} [ \ln \frac{\ensuremath{\sigma_{\k+1}}}{\ensuremath{\sigma_\k}} ] $ approaches minus the convergence rate $-{\rm CR}$ geometrically fast. More precisely we have the following theorem:
\begin{theorem}
Let the $(1+1)$-ES with generalized one-fifth success rule optimize $h = g \circ f$ where $g \in \ensuremath{\mathcal{M}}$ and $f$ satisfies Assumptions~\ref{ass:f2}. Assume that $\gamma > 1$ and $\frac12 (1/\gamma^{\alpha} + \gamma^{\alpha/q}) < 1 $. Then there exists $r > 1$ and $R < \infty$ such that for all initial condition $(\ensuremath{\mathbf{X}}_{0}, \sigma_{0})=(\ensuremath{\mathbf{x}}_{0},\sigma_{0}) \in \mathbb{R}^{n}_{\neq} \times \mathbb{R}^{+}_{>}$
\begin{equation}
\sum_{t} r^{t} |E_{\frac{\ensuremath{\mathbf{x}}_{0}}{\sigma_{0}}} \ln \frac{ \ensuremath{\sigma_{\k+1}} }{\ensuremath{\sigma_\k}} + {\rm CR} | \leq R V(\ensuremath{\mathbf{x}}_{0}/\sigma_{0}) \enspace.
\end{equation}
This equation implies in particular that for any initial condition $(\ensuremath{\mathbf{x}}_{0},\sigma_{0}) \in \mathbb{R}^{n}_{\neq} \times \mathbb{R}^{+}_{>} $
\begin{equation}
|E_{\frac{\ensuremath{\mathbf{x}}_{0}}{\sigma_{0}}} \ln \frac{ \ensuremath{\sigma_{\k+1}} }{ \ensuremath{\sigma_\k} } + {\rm CR} | r^{t} \xrightarrow[t \to \infty]{} 0
\end{equation}
where $r$ is independent of the starting point.
\end{theorem}
\begin{proof}
Under the assumptions of the theorem we have proven in Theorem~\ref{prop:OPO} that $V$ defined in \eqref{eq:driftOPO} satisfies a geometric drift function. Hence according to \eqref{eq:C-Vnorm} there exists $R>0$ and $r > 1$ such that for any starting point $\ensuremath{\mathbf{z}}_{0}$ in the set $S_{V}= \{ \ensuremath{\mathbf{z}} : V(\ensuremath{\mathbf{z}}) < \infty \} $
\begin{equation}\label{young}
\sum_{t} r^{t} \| P^{t}(\ensuremath{\mathbf{z}}_{0},.) - \pi \|_{V} \leq R V(\ensuremath{\mathbf{z}}_{0})
\end{equation}
where $\| \nu \|_{V} = \sup_{g: |g| \leq V} | \nu(g)|$. Remark that if $V$ satisfies a geometric drift condition, then $k V$ satisfies also a geometric drift condition for any constant $k \geq 1$.
Consider the function $g(\ensuremath{\mathbf{z}}) = \ln \gamma \left( \frac{q+1}{q} E[1_{\{f(\ensuremath{\mathbf{z}} + \mathcal{N}) \leq f(\ensuremath{\mathbf{z}}) \}}] - \frac1q \right) $. Then $|g(\ensuremath{\mathbf{z}})| \leq \ln \gamma (q+2)/q $ is bounded and hence $|g(\ensuremath{\mathbf{z}})| \leq \underbrace{ \ln \gamma (q+2)/q}_{k} V(\ensuremath{\mathbf{z}})$. Note that $\int g \, d\pi = \ln \gamma \left( \frac{q+1}{q} {\rm PS} - \frac1q \right) = - {\rm CR}$. In addition, $E_{\ensuremath{\mathbf{z}}_{0}}[\ln \ensuremath{\sigma_{\k+1}}/\ensuremath{\sigma_\k}]=E_{\ensuremath{\mathbf{z}}_{0}}[g(\ensuremath{\mathbf{Z}_{\k}})] = P^{t}(\ensuremath{\mathbf{z}}_{0},.)(g)$ which yields
\begin{multline*}
| E_{\ensuremath{\mathbf{z}}_{0}}\left[\ln \frac{\ensuremath{\sigma_{\k+1}}}{\ensuremath{\sigma_\k}}\right] + {\rm CR} | = | \int g(\ensuremath{\mathbf{z}}) P^{t}(\ensuremath{\mathbf{z}}_{0},d\ensuremath{\mathbf{z}}) - \int \pi(d\ensuremath{\mathbf{z}}) g(\ensuremath{\mathbf{z}}) | = \\ | (P^{t}(\ensuremath{\mathbf{z}}_{0},.) - \pi)(g)| \leq \| P^{t}(\ensuremath{\mathbf{z}}_{0},.) - \pi \|_{k V} \enspace
and thus according to \eqref{young} there exists $R> 0$ and $r >1$ such that for any $(\ensuremath{\mathbf{x}}_{0},\sigma_{0})$
\begin{equation}
\sum_{t} r^{t} | E_{\frac{\ensuremath{\mathbf{x}}_{0}}{\sigma_{0}}}\left[\ln \frac{\ensuremath{\sigma_{\k+1}}}{\ensuremath{\sigma_\k}}\right] + {\rm CR} | \leq R V\left(\frac{\ensuremath{\mathbf{x}}_{0}}{\sigma_{0}} \right) \enspace.
\end{equation}
This equation implies in particular that, for any initial condition $(\ensuremath{\mathbf{x}}_{0},\sigma_{0})$
\begin{equation}
| E_{\frac{\ensuremath{\mathbf{x}}_{0}}{\sigma_{0}}}\left[\ln \frac{\ensuremath{\sigma_{\k+1}}}{\ensuremath{\sigma_\k}}\right] + {\rm CR} | r^{t} \xrightarrow[t \to \infty]{} 0
\end{equation}
where $r$ is independent of the starting point.
\end{proof}
Remark that we only derived a result for the log-step-size progress in the previous theorem. We believe that a similar result for the log-progress $\ln \| \ensuremath{\mathbf{X}_{\k+1}} \|/\| \ensuremath{\mathbf{X}_\k} \|$ can also be derived. However it appears to be more technical to control the corresponding $g$-function (see proof) by $V$.
Our last result derives from the Central Limit Theorem (CLT), which is also a consequence of the geometric ergodicity.
It describes the speed of convergence of the result obtained from applying the LLN. We first recall a CLT result for Markov chains extracted from \cite{Tweedie:book1993}. Given a function $g$, we set $S_{t}(g) = \sum_{k=1}^{t} g(\ensuremath{\mathbf{Z}}_{k}) $.
\begin{theorem}[Theorem 17.0.1, Theorem 16.0.1 in \cite{Tweedie:book1993}]\label{theo:CLT}
Suppose that $\ensuremath{\mathbf{Z}}$ is a positive Harris chain with invariant probability measure $\pi$ and is aperiodic. Suppose that $\ensuremath{\mathbf{Z}}$ satisfies a geometric drift in the sense of \eqref{eq:driftgerd}. Let $g$ be a function on $\mathcal{Z}$ that satisfies $g^{2} \leq V$ and let $\bar g$ denote the centered function $\bar g = g - \int g d \pi$. Then the constant
$$
\gamma_{g}^{2} = E_{\pi}[\bar g^{2}(\ensuremath{\mathbf{Z}}_{0}) ] + 2 \sum_{k=1}^{\infty} E_{\pi}[ \bar g(\ensuremath{\mathbf{Z}}_{0}) \bar g (\ensuremath{\mathbf{Z}}_{k})]
$$
is well defined, non-negative and finite, and coincides with the asymptotic variance
$$
\lim_{t \to \infty} \frac1t E_{\pi}[(S_{t}(\bar g))^{2}] = \gamma_{g}^{2} \enspace.
$$
If $\gamma_{g}^{2} > 0$ then the Central Limit Theorem holds for the function $g$, that is for any initial condition $\ensuremath{\mathbf{z}}_{0}$
$$
\lim_{t \to \infty} P_{\ensuremath{\mathbf{z}}_{0}} \left\{ (t \gamma_{g}^{2})^{-1/2} S_{t}(\bar g) \leq x \right\} = \int_{- \infty}^{x} \frac{1}{\sqrt{2 \pi}} e^{-u^{2}/2} d u \enspace .
$$
If $\gamma_{g}^{2} = 0$, then $\lim_{t \to \infty} \frac{1}{\sqrt{t}} S_{t}(g) = 0 $ a.s.
\end{theorem}
\begin{theorem}\label{CLT:OPO}
Let the $(1+1)$-ES with generalized one-fifth success rule optimize $h = g \circ f$ where $g \in \ensuremath{\mathcal{M}}$ and $f$ satisfies Assumptions~\ref{ass:f2}. Assume that $\gamma > 1$ and $\frac12 (1/\gamma^{\alpha} + \gamma^{\alpha/q}) < 1 $. Then for any initial condition $(\ensuremath{\mathbf{x}}_{0},\sigma_{0}) \in \mathbb{R}^{n}_{\neq} \times \mathbb{R}^{+}_{>}$ and any $x \in \mathbb{R}$
$$
\lim_{t \to \infty} P_{\frac{\ensuremath{\mathbf{x}}_{0}}{\sigma_{0}}} \left\{ \frac{\sqrt{t}}{\gamma_{g}} \left( \frac{1}{t} \ln \frac{\ensuremath{\sigma_\k}}{\sigma_{0}} + {\rm CR} \right) \leq x \right\} = \frac{1}{\sqrt{2\pi}} \int_{- \infty}^{x} \exp(- u^{2}/2) du
$$
where $\gamma_{g}^{2}= \lim_{t \to \infty} \frac1t E_{\pi}[(\ln \frac{\sigma_{t}}{\sigma_{0}} + t \, {\rm CR})^{2}] > 0$.
\end{theorem}
\begin{proof}
We have seen that $\frac{1}{t} \ln \frac{\ensuremath{\sigma_\k}}{\sigma_{0}} = \frac{1}{t} \sum_{k=0}^{t-1} \ln \frac{\sigma_{k+1}}{\sigma_{k}} = \frac{1}{t} \sum_{k=0}^{t-1} g_{\ref{CLT:OPO}}(\ensuremath{\mathbf{Z}}_{k},\ensuremath{\mathbf{U}}_{k}^{1}) $ with
$$
g_{\ref{CLT:OPO}}(\ensuremath{\mathbf{Z}}_{k},\ensuremath{\mathbf{U}}_{k}^{1}) = \ln \gamma \left( \left( \frac{1+q}{q} \right) 1_{ \{ f(\ensuremath{\mathbf{Z}}_{k}+\ensuremath{\mathbf{U}}_{k}^{1}) \leq f(\ensuremath{\mathbf{Z}}_{k}) \}} - \frac{1}{q} \right) \enspace.
$$
Because the definition domain of $g_{\ref{CLT:OPO}}$ is $\mathbb{R}^{n}_{\neq} \times \mathbb{R}^{n}$, we cannot directly compare it to the geometric drift function $V$ and verify whether $g^{2} \leq V$. However let us consider not only $\ensuremath{\mathbf{Z}_{\k}}$ but the couple $(\ensuremath{\mathbf{Z}_{\k}},\ensuremath{\mathbf{U}_{\k}}^{1})$ where the $\ensuremath{\mathbf{U}_{\k}}^{i}$ are i.i.d.\ distributed according to $\ensuremath{\mathcal{N}}(0,I_n)$. Clearly $(\ensuremath{\mathbf{Z}_{\k}},\ensuremath{\mathbf{U}_{\k}}^{1})$ is an homogeneous Markov Chain that will inherit the properties of $\ensuremath{\mathbf{Z}_{\k}}$. Typically the chain is $\varphi$-irreducible with respect to the Lebesgue measure on $\mathbb{R}^{n}_{\neq} \times \mathbb{R}^{n}$, aperiodic and $D_{[l_{1},l_{2}]} \times \mathbb{R}^{n}$ are some small sets for the chain. The function $\tilde V(\ensuremath{\mathbf{z}},\ensuremath{\mathbf{u}}) = V(\ensuremath{\mathbf{z}})$ (with $V$ defined in \eqref{eq:driftOPO}) satisfies a geometric drift in the sense of \eqref{eq:driftgerd} as from the small set shape we see that we only need to control the chain outside $D_{[l_{1},l_{2}]}$ while $\ensuremath{\mathbf{u}} \in \mathbb{R}^{n}$. The invariant probability distribution of the chain is $\pi \otimes p$.
To be able to apply Theorem~\ref{theo:CLT}, we need to verify that $g_{\ref{CLT:OPO}}^{2} \leq K_{\ref{CLT:OPO}} \tilde V$ with $K_{\ref{CLT:OPO}}$ a constant larger than $1$ (the same arguments as before hold here as well, namely if $\tilde V$ satisfies a geometric drift condition, then every multiple of $\tilde V$ also satisfies a geometric drift condition, provided the multiplication constant is larger than one to still ensure that the function is larger than $1$).
Let us now remark that $g_{{\ref{CLT:OPO}}}^{2} \leq ((\ln \gamma) (2+q)/q)^{2} $ and hence $g_{{\ref{CLT:OPO}}}^{2} \leq (((\ln \gamma) (2+q)/q)^{2} + 1) \tilde V$.
Using Theorem~\ref{theo:CLT}, we know that $\gamma_{g}^{2}$ is well defined and cannot equal $0$: otherwise it would imply that $\lim_{t \to \infty} \frac{1}{\sqrt{t}} \ln \frac{\ensuremath{\sigma_\k}}{\sigma_{0}} = 0$, which would contradict the fact that $\lim_{t \to \infty}\frac{1}{t} \ln \frac{\ensuremath{\sigma_\k}}{\sigma_{0}} = - {\rm CR} \neq 0$. Hence $\gamma_{g}^{2} > 0$ and we conclude using Theorem~\ref{theo:CLT}.
\end{proof}
\section{Discussion}\label{sec:discussion}
\begin{figure}
\centering
\includegraphics[height=0.3\textwidth]{CV-OnePlusOne-Sphere-20D}
\includegraphics[height=0.3\textwidth]{CV-OnePlusOne-Sphere-20D-ss1}
\caption{\label{fig:simul} Convergence simulations of the $(1+1)$-ES with generalized one-fifth success rule on spherical functions $f(\ensuremath{\mathbf{x}})= g(\| \ensuremath{\mathbf{x}} \|)$ for $g \in \ensuremath{\mathcal{M}}$ in dimension $\dim = 20$ (parameters $\gamma=\exp(1/3)$ and $q=4$).
Each plot is in log scale and depicts in black the distance to optimum, i.e., $\| \ensuremath{\mathbf{X}_\k}\|$, in blue the respective step-size $\ensuremath{\sigma_\k}$ and in magenta the norm of the normalized chain $\| \ensuremath{\mathbf{Z}_{\k}}\|$. The $x$-axis is the number of function evaluations corresponding to the iteration index $t$. On the left the simulation is voluntarily started with a too small step-size equal to $\sigma_{0}=10^{-6}$ to illustrate the adaptivity ability of the algorithm. On the right, the initial step-size equals $1$.
}
\end{figure}
Using the methodology developed in \cite{methodology-paper}, we have proven the global linear convergence of the $(1+1)$-ES with generalized one-fifth success rule on functions that write $h = g \circ f $ where $f$ is a continuously differentiable, positively homogeneous function of degree $\alpha$ (satisfying an additional mild condition on the gradient norm) and $g$ is a strictly increasing function. This class of functions includes non-quasi-convex functions and non-continuous functions, an untypical setting for proving linear convergence of optimization algorithms in general.
Linear convergence holds under the condition that the step-size increases in case of success, i.e., $\gamma > 1$ and that
\begin{equation}\label{cond:GE}
\frac12 \left( 1/\gamma^{\alpha} + \gamma^{\alpha/q} \right) < 1 \enspace.
\end{equation}
In particular, this condition only depends on the function via $\alpha$ and is thus the same for \emph{any} $g \circ f$ where $f$ is positively homogeneous with degree $\alpha$ and continuously differentiable (and satisfies Assumption~\ref{ass:f2}).
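To make condition \eqref{cond:GE} concrete, the following sketch (ours, not from the paper; the parameter values mirror those of Figure~\ref{fig:simul}) checks it numerically for several degrees $\alpha$:
\begin{verbatim}
# Minimal sketch (illustrative only): check condition (cond:GE),
# 0.5 * (gamma**(-alpha) + gamma**(alpha/q)) < 1, for given parameters.
import math

def condition_holds(gamma: float, q: float, alpha: float) -> bool:
    return 0.5 * (gamma ** (-alpha) + gamma ** (alpha / q)) < 1.0

gamma, q = math.exp(1.0 / 3.0), 4.0   # values used in the simulations
for alpha in (1.0, 2.0, 4.0, 8.0):
    print(alpha, condition_holds(gamma, q, alpha))
\end{verbatim}
For $\gamma=\exp(1/3)$ and $q=4$, the condition holds for moderate $\alpha$ but eventually fails as $\alpha$ grows, illustrating that it depends on the function only through the degree of homogeneity.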
Because on a linear function the probability of success equals $1/2$, the condition in \eqref{cond:GE} corresponds to the expected inverse of the step-size change to the power $\alpha$--on a linear function--being strictly smaller than $1$. In other words, the step-size should increase on a linear function. While this latter condition seems a reasonable requirement for an adaptive step-size algorithm, let us point out that some algorithms like the $(1,2)$-ES with self-adaptation fail to satisfy it (see \cite{hansen2006ecj} for a thorough analysis of this problem). We believe that the fact that linear convergence on the class of functions investigated in this paper is related to increasing the step-size on linear functions illustrates the strong need to study simple models like the linear function when designing CB-SARS\ algorithms.
Our statements for the linear convergence hold for any initial solution and any initial step-size. This latter property reflects the main advantage of \emph{adaptive} step-size methods: the initial step-size does not need to be too carefully chosen to ensure good convergence properties.
Note that methods like simulated annealing or simultaneous perturbation stochastic approximation (SPSA) \cite{Spall:1992} do not share this nice property and are very sensitive to the choice of some parameters that unfortunately need to be adjusted by the user.
The adaptation phase, i.e., how long it takes until linear convergence is ``observed'' (see Figure~\ref{fig:simul}, left), is theoretically related to the convergence speed of $(\ensuremath{\mathbf{Z}_{\k}})_{t \in \ensuremath{{\mathbb{{N}}}}}$ to its stationary distribution. We have proven a geometric drift that ensures that this convergence is geometrically fast, with a geometric rate independent of the initial condition.
Previous attempts to analyze CB-SARS\ always focused on much smaller classes of functions. The sphere function was analyzed in \cite{TCSAnne04,Jens:2007,jens:gecco:2006}, and a specific class of convex quadratic functions was also analyzed in \cite{jens:2005,jens:tcs:2006}. Our proof is more general: it holds on a wider class of functions that also includes convex-quadratic functions. Indeed, in Lemma~\ref{lem:CQ} we have seen that convex-quadratic functions satisfy Assumption~\ref{ass:f2} with $\alpha=2$ if $n \geq 2$. Hence linear convergence holds on convex-quadratic functions if $n \geq 2$ under the condition that $\frac12 \left( 1/\gamma^{2} + \gamma^{2/q} \right) < 1$. This latter condition can be relaxed by observing that for any $f_{H} ( \ensuremath{\mathbf{x}}) = \frac 12 \ensuremath{\mathbf{x}}^{T} H \ensuremath{\mathbf{x}}$, we have $f_{H} ( \ensuremath{\mathbf{x}}) = g_{\alpha} \left( f_{H}^{\alpha/2}(\ensuremath{\mathbf{x}}) \right)$ for $g_{\alpha}: x \in \mathbb{R}^{+} \mapsto x^{2/\alpha} $ ($g_{\alpha} \in \ensuremath{\mathcal{M}}$). The function $\ensuremath{\mathbf{x}} \mapsto f_{H}^{\alpha/2}(\ensuremath{\mathbf{x}}) $ is positively homogeneous with degree $\alpha$ and stays continuously differentiable if $\alpha \geq 2$. Hence linear convergence will hold for a given $(\gamma, q) $ if there exists $2 \leq \alpha \leq n$ such that $\frac12 \left( 1/\gamma^{\alpha} + \gamma^{\alpha/q} \right) < 1$.
We have obtained an explicit expression for the convergence rate of the algorithm:
\begin{equation}\label{flora-is-crying}
{\rm CR} = - \ln \gamma \left( \frac{q+1}{q} {\rm PS} - \frac1q \right)
\end{equation}
where ${\rm PS}$ is the asymptotic probability of success.
This formula implies that when convergence occurs the probability of success is strictly smaller than $1/(q+1)$, i.e., strictly smaller than $1/5$ using the traditional $1/5$ as target success probability (corresponding to $q=4$).
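As an illustration (ours; in practice ${\rm PS}$ would be estimated by Monte Carlo simulation of the stationary distribution), the sign change of ${\rm CR}$ at ${\rm PS}=1/(q+1)$ is immediate from \eqref{flora-is-crying}:
\begin{verbatim}
# Minimal sketch (illustrative only): convergence rate as a function of
# the asymptotic probability of success PS, cf. the formula above.
import math

def convergence_rate(gamma: float, q: float, ps: float) -> float:
    return -math.log(gamma) * ((q + 1.0) / q * ps - 1.0 / q)

gamma, q = math.exp(1.0 / 3.0), 4.0
# CR > 0 (i.e., convergence) exactly when PS < 1/(q+1) = 1/5 here.
for ps in (0.05, 0.10, 0.19, 0.20, 0.25):
    print(f"PS={ps:.2f}  CR={convergence_rate(gamma, q, ps):+.4f}")
\end{verbatim}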
While we have proven here that ${\rm CR} > 0$, i.e., that linear convergence indeed holds, for algorithms that do not guarantee the monotonicity of $f(\ensuremath{\mathbf{X}_\k})$ the sign of the ``convergence'' rate usually cannot be obtained analytically, and one needs to resort to Monte Carlo simulations \cite{TCSAnne04}.
Besides the sign of the convergence rate, one would like to extract more properties of ${\rm CR}$, like its dependence on the dimension, or on the condition number for convex-quadratic functions. This seems hard to achieve with the present approach, as ${\rm CR}$ depends on the stationary distribution of the normalized chain, about which little is known except its existence. However, Monte Carlo simulations are natural and always possible to estimate those dependencies. The present paper gives a rigorous framework to perform those Monte Carlo simulations.
Note that using more ad-hoc techniques, it is possible to obtain some dependencies on the dimension or the condition number \cite{Jens:2007,jens:gecco:2006,jens:2005,jens:tcs:2006}.
Though its convergence proof ``resisted'' for more than 40 years, the algorithm analyzed here is simple and relatively straightforward, as witnessed by the fact that it was proposed very early and by various researchers in parallel. We want to emphasize, however, that nowadays this algorithm should mainly serve an academic purpose. Indeed, more robust comparison-based adaptive algorithms exist, namely the CMA-ES algorithm, where in addition to the step-size a full covariance matrix is adapted \cite{hansen2001}.
Last we want to emphasize two points:
1) A common misconception is that randomized methods are good for global optimization and bad for local optimization. The present paper, by proving global linear convergence for a CB-SARS, disproves this binary view. In addition, comparisons of the CMA-ES algorithm--the state-of-the-art comparison-based adaptive algorithm--with BFGS and NEWUOA also show that CMA-ES is competitive on (unimodal) composites of convex-quadratic functions provided they are significantly non-separable and non-convex \cite{auger:colloquegiens}. This result does not come as a surprise, as CB-SARS\ and CMA-ES were designed first as robust local searches and carefully investigated to optimally solve simple functions like the sphere, the linear function and convex-quadratic functions.
2) The present paper illustrates that the theory of Markov Chains with discrete time and continuous state space is useful and powerful for the analysis of CB-SARS. We believe that the present analysis can be extended further for the case of stochastic functions or for the case of algorithms where a covariance matrix is adapted in addition to a step-size.
\newpage
\bibliographystyle{plain}
\section{Appendix}
\subsection{Useful facts}
\textbf{Fact 1} \textit{(Hybrid Mechanism, Corollary 4.8 in \cite{chan2011private})
The hybrid mechanism is $\epsilon$-differentially private and, at time $t$, has an error bounded with probability at least $1-\delta$ by $\frac{1}{\epsilon}\cdot (\log t)^{1.5}\cdot \log \frac{1}{\delta}$.}
\textbf{Fact 2} \textit{(Chernoff-Hoeffding bound)
Let $X_1,...,X_t$ be a sequence of real-valued random variables with common range $[0,1]$, and such that $\mathbb{E}[X_t|X_1,...,X_{t-1}]=\mu$. Let $S_t = \sum_{i=1}^{t}X_i$. Then for all $a\geq 0$,}
\begin{equation}
P(S_t\geq t\mu + a)\leq e^{-2a^2/t},\quad P(S_t\leq t\mu - a)\leq e^{-2a^2/t}\notag
\end{equation}
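As a quick numerical sanity check (ours, not part of the proofs), Fact 2 can be verified by simulation for Bernoulli rewards:
\begin{verbatim}
# Minimal sketch: compare the empirical upper tail of S_t with the
# Chernoff-Hoeffding bound e^{-2 a^2 / t} of Fact 2.
import math, random

t, mu, a, trials = 200, 0.5, 15.0, 20000
hits = 0
for _ in range(trials):
    s = sum(random.random() < mu for _ in range(t))  # Bernoulli(mu) sum
    hits += (s >= t * mu + a)
print("empirical P(S_t >= t*mu + a) ~", hits / trials)
print("Hoeffding bound e^{-2a^2/t}  =", math.exp(-2 * a * a / t))
\end{verbatim}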
\subsection{Lemmas}
\begin{lemma} [Estimated Performance of Algorithm 2] \quad\quad \quad \quad
i) The estimated number of plays satisfies:
$$n_{j}^{\text{avg}}(t)-c_0 \leq\hat{n}_{i,j}(t)\leq n_{j}^{\text{avg}}(t)+c_0;$$
ii) $\hat{ {Y}}_{i,j} $ is an unbiased estimation with
$\mathbb{E}[\hat{ {Y}}_{i,j}(t)]=\mu_j$;
\\iii) The variance of $\hat{ {Y}}_{i,j}(t)$ is bounded by :
$$\text{Var}[\hat{ {Y}}_{i,j}(t)]\leq\frac{\hat{n}_{i,j}(t)+c_i}{M\hat{n}_{i,j}(t)^2}\sigma^2;$$
iv) The error between $\hat{ {Y}}_{i,j}(t)$ and $\hat{ {X}}_{i,j}(t)$ can be bounded by: $$h_{\hat{n}_{i,j}(t)}=\frac{1}{\epsilon}\cdot \log ^{1.5}({\hat{n}_{i,j}})\cdot \log \frac{1}{\delta}\cdot \frac{1}{\hat{n}_{i,j}}$$
\\
where $n_{j}^{\text{avg}}(t)=\frac{1}{M}\sum_{\tau=1}^{t}\mathbf{1}_M^T\mathbf{\xi}_j(\tau)$ is the average, over the $M$ agents, of the number of times arm $j$ has been pulled up to time $t$.
\end{lemma}
\begin{proof}
Let $\lambda_j$ be the $j$-th largest eigenvalue of $P$ and $\mathbf{u}_j$ be the eigenvector corresponding to $\lambda_j$.
Then $\lambda_1=1$ and $\mathbf{u}_1=\mathbf{1}_M/\sqrt{M}$. We define
\begin{eqnarray}
\nu_{pk}^{+sum} = \sum_{d=1}^{M}u_p^du_k^d\mathbbm {1}((\mathbf{u}_p\mathbf{u}_k^\top )_{ii}\geq 0)\\
\nu_{pk}^{-sum} = \sum_{d=1}^{M}u_p^du_k^d\mathbbm {1}((\mathbf{u}_p\mathbf{u}_k^\top )_{ii}\leq 0)
\end{eqnarray}
To capture the topology of the communication graph, we define the two parameters $c_0$ and $c_i$ as:
\begin{eqnarray}
c_0 &=& \sqrt{M}\sum_{p=2}^{M}\frac{|\lambda_p|}{1-|\lambda_p|}\\
c_i &=& M\sum_{p=1}^{M}\sum_{k=2}^{M}\frac{|\lambda_p||\lambda_k|}{1-|\lambda_p||\lambda_k|}a_{pk}(i)
\end{eqnarray}
where
\begin{eqnarray}
a_{pk}(i)=\left\{\begin{matrix}
\nu_{pk}^{+sum}(\mathbf{u}_p\mathbf{u}_k^\top )_{kk}~,\text{if}~\lambda_p\lambda_k\geq 0~\&~ (\mathbf{u}_p\mathbf{u}_k^\top )_{ii}\geq 0\\
\nu_{pk}^{-sum}(\mathbf{u}_p\mathbf{u}_k^\top )_{ii}~,\text{if}~\lambda_p\lambda_k\geq 0~\&~ (\mathbf{u}_p\mathbf{u}_k^\top )_{ii}\leq 0\\
\max\left\lbrace |\nu_{pk}^{-sum}|,\nu_{pk}^{+sum}\right\rbrace, \text{if} ~\lambda_p\lambda_k<0
\end{matrix}\right.\notag
\end{eqnarray}
We start with the first statement. From \eqref{eq9} it follows that
\begin{eqnarray}
\hat{\mathbf{n}}_j(t)&=&P^t\hat{\mathbf{n}}_j(0)+\sum_{\tau=0}^{t-1}P^{t-\tau}\mathbb{\xi}_j(\tau)\notag\\
&=&\sum_{\tau=0}^{t-1}[\frac{1}{M}\mathbf{1}_M\mathbf{1}_M^\top\mathbb{\xi}_j(\tau)+\sum_{p=2}^{M}\lambda_p^{t-\tau}\mathbf{u}_p\mathbf{u}_p^\top\mathbb{\xi}_j(\tau)]\notag\\
\label{eq20}&=&n_j^{\text{avg}}(t)\mathbf{1}_M
+\sum_{\tau=0}^{t-1}\sum_{p=2}^{M}
\lambda_p^{t-\tau}\mathbf{u}_p\mathbf{u}_p^\top\mathbb{\xi}_j(\tau)
\end{eqnarray}
We now bound the second term on the right-hand side of \eqref{eq20}:
\begin{eqnarray}
\sum_{\tau=0}^{t-1}\sum_{p=2}^{M}
\lambda_p^{t-\tau}\mathbf{u}_p\mathbf{u}_p^\top\mathbb{\xi}_j(\tau)
\leq \sum_{\tau=0}^{t-1}\sum_{p=2}^{M}
|\lambda_p^{t-\tau}|\left \|\mathbf{u}_p \right \|_2^2\left \|\mathbb{\xi}_j(\tau) \right \|_2\notag\\
\leq\sqrt{M}\sum_{\tau=0}^{t-1}\sum_{p=2}^{M}|\lambda_p^{t-\tau}|\leq c_0
\end{eqnarray}
This completes the proof of statement i).
Similarly, for statement ii) we have:
\begin{equation}
\hat{\mathbf{y}}_j(t)=P^t\hat{\mathbf{y}}_j(0)+\sum_{\tau=0}^{t-1}P^{t-\tau}\mathbf{r}_j(\tau)=\sum_{\tau=0}^{t-1}P^{t-\tau}\mathbf{r}_j(\tau)
\end{equation}
Taking expectations on both sides of the above equality, we obtain $\mathbb{E}[\hat{\mathbf{y}}_j(t)]=\mu_j\sum_{\tau = 0}^{t-1}P^{t-\tau}\mathbb{\xi}_j(\tau)=\mu_j\hat{\mathbf{n}}_j(t)$, which proves statement ii).
For the third statement, we have
\begin{eqnarray}\label{eq23}
\text{Cov}[\hat{\mathbf{y}}_j(t)]&=&\sum_{\tau=0}^{t-1}(P^{t-\tau})\Sigma(\tau)\Sigma(\tau)^\top (P^{t-\tau})^\top\notag\\
&=&\sum_{\tau=0}^{t-1}\sum_{p=1}^{M}\sum_{k=1}^{M}\lambda_p^{t-\tau}\lambda_k^{t-\tau}\mathbf{u}_p\mathbf{u}_p^\top\Sigma(\tau)\Sigma(\tau)^\top\mathbf{u}_k\mathbf{u}_k^\top\notag\\
&=& \underset{\textcircled{1}}{\underbrace{ \sigma^2\sum_{\tau=0}^{t-1}\sum_{p=1}^{M}\sum_{k=1}^{M}(\lambda_p\lambda_k)^{t-\tau}\varsigma_{pk}(\tau)(\mathbf{u}_p\mathbf{u}_k^\top)}}\notag\\
&+&\underset{\textcircled{2}}{\underbrace{\frac{1}{M}\sum_{\tau=0}^{t-1}\sum_{p=1}^{M}\lambda_p^{t-\tau}\mathbf{u}_p\mathbf{u}_p^\top\Sigma(\tau)\Sigma(\tau)^\top\mathbf{1}_M\mathbf{1}_M^\top}}
\end{eqnarray}
where $\Sigma(\tau)=\sigma\,\text{diag}(\mathbb{\xi}_j(\tau))$ and $\varsigma_{pk}(\tau)=\mathbf{u}_p^\top\text{diag}(\mathbb{\xi}_j(\tau))\mathbf{u}_k$.
We examine the $ii$-th entry of \eqref{eq23} and denote by \textcircled{1} and \textcircled{2} the $ii$-th entries of the first and second terms in \eqref{eq23}, respectively.
\begin{eqnarray}\label{eq24}
\textcircled{1} &\leq& \sigma^2\sum_{\tau=0}^{t-1}\sum_{p=1}^{M}\sum_{k=1}^{M}|\lambda_p\lambda_k|^{t-\tau}|\varsigma_{pk}(\tau)(\mathbf{u}_p\mathbf{u}_k^\top)_{ii}|\notag\\
&\leq& \sigma^2\sum_{\tau=0}^{t-1}\sum_{p=1}^{M}\sum_{k=1}^{M}|\lambda_p\lambda_k|^{t-\tau}a_{pk}(i)\notag\\
&\leq&\sigma^2\sum_{p=1}^{M}\sum_{k=2}^{M}\frac{|\lambda_p||\lambda_k|}{1-|\lambda_p||\lambda_k|}a_{pk}(i)=\sigma^2\frac{c_i}{M}
\end{eqnarray}
\begin{eqnarray}\label{eq25}
\textcircled{2} =\frac{\sigma^2}{M}[(\sum_{\tau=0}^{t-1}\sum_{p=1}^{M}\lambda_p^{t-\tau}\mathbf{u}_p\mathbf{u}_p^\top\mathbf{\xi}_j(\tau))\mathbf{1}_M^\top]_{ii}=\sigma^2\frac{\hat{n}_{i,j}(t)}{M}
\end{eqnarray}
Combining \eqref{eq24} and \eqref{eq25}, we establish statement iii).
Statement iv) follows directly from Lemma 1. This completes the proof of Lemma 2.
\end{proof}
\subsection{Proof of Theorem 2}
\begin{proof}
Using Lemma 1, we rewrite the bound into the following equations:
\begin{eqnarray}
\label{eq1}P(X\geq Y+h_n)\leq \delta\\
\label{eq2}P(X\leq Y-h_n)\leq \delta
\end{eqnarray}
Recall that $\epsilon$ is the differential privacy parameter. The regret incurred during the time horizon $T$ can be analyzed as the sum of the regret caused by playing suboptimal arms and that caused by recomputing the arm index through federated learning. We denote by $n_{i,j}(T)$ the number of times a suboptimal arm $j$ is played by agent $i$, and by $c_{t,n} = \sqrt{\frac{2\log t}{n}}$ the UCB confidence index. Our proof steps mostly follow the analysis of the UCB algorithm \cite{auer2002finite}. Considering a suboptimal arm $j\neq1$ of agent $i$, let $\tau_{i,j}(m)$ be the time at which the agent makes the $m$-th switch to arm $j$, and $\tau'_{i,j}(m)$ the time at which the agent leaves arm $j$ and turns to another one. Then, we have,
\begin{equation}\label{subnum}
\begin{array}{ll}
n_{i,j}(T)\notag\\
\leq 1+\sum_{m=1}^{T}\left| \tau'_{i,j}(m)-\tau_{i,j}(m)\right|
I\left\lbrace \text{Play arm}~j~ \text{at time~}\tau_{i,j}(m)\right\rbrace \notag\\
\leq 1+\sum_{m=1}^{T}\left| \tau'_{i,j}(m)-\tau_{i,j}(m)\right|\notag\\
I\left\lbrace \sum_{i=1}^{M}I_{i,j}(\tau_{i,j}(m)-1)\geq \sum_{i=1}^{M}I_{i,1}(\tau_{i,j}(m)-1)\right\rbrace\notag\\
\leq l+\sum_{m=1}^{T}\sum_{p=0}^{\infty}2^pI\left\lbrace{ \sum_{i=1}^{M}I_{i,j}(\tau_{i,j}(m)+2^p-2)\geq}\right.\notag\\ \phantom{}\left.{ \sum_{i=1}^{M}I_{i,1}(\tau_{i,j}(m)+2^p-2),n_{i,j}(\tau_{i,j}(m)-1)\geq l}\right\rbrace\notag\\
\leq l+\sum_{m=1}^{T}\sum_{p=0}^{\infty}2^pI\left\lbrace{ \sum_{i=1}^{M}I_{i,j}(m+2^p-2)\geq}\right.\notag\\ \phantom{}\left.{ \sum_{i=1}^{M}I_{i,1}(m+2^p-2),n_{i,j}(m-1)\geq l}\right\rbrace\notag\\
\leq l+\sum_{m=1}^{\infty}\sum_{p:\,m+2^p\leq T} 2^p \sum_{n_{i,1}=1}^{m+2^p}\sum_{n_{i,j}=l}^{m+2^p}\notag\\
I\left\lbrace {\sum_{i=1}^{M}( {X}_{i,j}(m+2^p)+ c_{m+2^p,n_{i,j}}+h_{n_{i,j}})\geq}\right.\notag\\ \phantom{}\left.{ \sum_{i=1}^{M}( {X}_{i,1}(m+2^p)+c_{m+2^p,n_{i,1}}+h_{n_{i,1}})}\right\rbrace\notag
\end{array}
\end{equation}
In Algorithm 1, if an arm is selected for the $p$-th consecutive time (without switching to any other arm in between), it will be played for the next $2^p$ slots; the second inequality uses this fact. In the second-to-last inequality, we replace $\tau_{i,j}(m)$ by $m$, which clearly yields an upper bound.
It should be noted that at each time step $t$ when the index is updated, we have $I_{1,j}(t)=I_{2,j}(t)=...=I_{i,j}(t)=...=I_{M,j}(t)=\frac{1}{M}\sum_{i=1}^{M}I_{i,j}(t)$. That means all agents select the same arm according to the central update results. Hence, in equation \eqref{subnum}, the event $\sum_{i=1}^{M}({X}_{i,j}(m+2^p)+c_{m+2^p,n_{i,j}}+h_{n_{i,j}})\geq \sum_{i=1}^{M}( {X}_{i,1}(m+2^p)+c_{m+2^p,n_{i,1}}+h_{n_{i,1}})$ implies that for each agent $i$ at least one of the following events holds:
\begin{eqnarray}
\label{ev1}{X}_{i,1}(m+2^p)\leq\mu_{i,1}-c_{m+2^p,n_{i,1}}-h_{n_{i,1}}\\
\label{ev2}{X}_{i,j}(m+2^p)\geq\mu_{i,j}+c_{m+2^p,n_{i,j}}+h_{n_{i,j}}\\
\label{ev3}\mu_{i,1}\le \mu_{i,j}+2c_{m+2^p,n_{i,j}}+2h_{n_{i,j}}
\end{eqnarray}
Now, using the Chernoff-Hoeffding bound, we get:
\begin{eqnarray}
\Pr(\eqref{ev1}) &=& \Pr({X}_{i,1}(m+2^p)\leq\mu_{i,1}-c_{m+2^p,n_{i,1}}-h_{n_{i,1}})\notag\\
&\leq&\Pr({X}_{i,1}(m+2^p)\leq {Y}_{i,1}(m+2^p)-h_{n_{i,1}}\notag\\
&\vee& {Y}_{i,1} (m+2^p)\leq\mu_{i,1}-c_{m+2^p,n_{i,1}})\notag\\
&\leq&\Pr({X}_{i,1}(m+2^p)\leq {Y}_{i,1}(m+2^p)-h_{n_{i,1}})\notag\\
&+& \Pr( {Y}_{i,1} (m+2^p)\leq\mu_{i,1}-c_{m+2^p,n_{i,1}})\notag\\
&\leq&\delta + (m+2^p)^{-4}
\end{eqnarray}
Similarly, we can use \eqref{eq1} and the Chernoff-Hoeffding bound to prove a bound on \eqref{ev2}:
\begin{eqnarray}
\Pr (\eqref{ev2}) &=& \Pr({X}_{i,j}(m+2^p)\geq\mu_{i,j}+c_{m+2^p,n_{i,j}}+h_{n_{i,j}})\notag\\
&\leq&\Pr({X}_{i,j}(m+2^p)\geq {Y}_{i,j}(m+2^p)+h_{n_{i,j}} \notag\\
&\vee& {Y}_{i,j} (m+2^p)\geq\mu_{i,j}+c_{m+2^p,n_{i,j}})\notag\\
&\leq&\Pr({X}_{i,j}(m+2^p)\geq {Y}_{i,j}(m+2^p)+h_{n_{i,j}}) \notag\\
&+& \Pr( {Y}_{i,j} (m+2^p)\geq\mu_{i,j}+c_{m+2^p,n_{i,j}})\notag\\
&\leq&\delta + (m+2^p)^{-4}
\end{eqnarray}
To prove a bound on \eqref{ev3}, we look for a minimum value of $n_{i,j}$ for which \eqref{ev3} is always false. This event being false means that $\Delta_{i,j}\geq 2c_{m+2^p,n_{i,j}}+2h_{n_{i,j}}$, which holds whenever the following two conditions hold for some $0\leq\beta_0\leq 1$.
\begin{eqnarray}
\label{ev4}\beta_0 \Delta_{i,j}\ge 2c_{m+2^p,n_{i,j}}\\
\label{ev5}(1-\beta_0)\Delta_{i,j}\ge 2h_{n_{i,j}}
\end{eqnarray}
For $n_{i,j}\geq \left \lceil \frac{8\log T}{\Delta_{i,j}^2 \beta_0^2} \right \rceil$, condition \eqref{ev4} holds. Condition $\eqref{ev5}$ is equivalent to
\begin{equation}\label{pev5}
n_{i,j}\geq \beta_1\cdot \log^{1.5} (n_{i,j})
\end{equation}
where
\begin{eqnarray}
\beta_1 = \frac{2}{(1-\beta_0) \Delta_{i,j}}\cdot \frac{1}{\epsilon}\log{\frac{1}{\delta}}
\end{eqnarray}
Letting $x=-\log (n_{i,j})/1.5$, the above inequality can be rewritten in the standard transcendental algebraic inequality form:
\begin{equation}
e^{-x}\geq -1.5x\beta_1^{\frac{1}{1.5}}
\end{equation}
The solution can be given by the Lambert W function, so
\begin{eqnarray}
n_{i,j} &\geq& \exp(-1.5(W(-1,\frac{-1}{1.5\beta_1^{\frac{1}{1.5}}})))\notag\\
&\approx& \beta_1^{2.25}=(\frac{2}{(1-\beta_0) \Delta_{i,j}}\cdot \frac{1}{\epsilon}\log {\frac{1}{\delta}})^{2.25}
\end{eqnarray}
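As a numerical illustration (ours, not part of the proof), one can check that $n_{i,j} = \lceil \beta_1^{2.25}\rceil$ indeed satisfies the inequality $n_{i,j}\geq \beta_1 \log^{1.5}(n_{i,j})$:
\begin{verbatim}
# Minimal sketch: verify that n = ceil(beta_1 ** 2.25) satisfies
# n >= beta_1 * (log n)^{1.5}, the condition solved above via Lambert W.
import math

def satisfies(n: int, beta_1: float) -> bool:
    return n >= beta_1 * math.log(n) ** 1.5

for beta_1 in (2.0, 5.0, 20.0, 50.0):
    n = math.ceil(beta_1 ** 2.25)
    print(f"beta_1={beta_1:5.1f}  n={n:6d}  ok={satisfies(n, beta_1)}")
\end{verbatim}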
Here we choose $\delta = (m+2^p)^{-4}$, so that $\log\frac{1}{\delta}\leq 4\log T$, which yields
\begin{equation}
n_{i,j}\geq \max[(\frac{8}{(1-\beta_0) \Delta_{i,j}}\cdot \frac{1}{\epsilon}\log{T})^{2.25}, \frac{8\log T}{\Delta_{i,j}^2 \beta_0^2} ]
\end{equation}
In summary, the total number of suboptimal plays satisfies
\begin{equation}
\begin{array}{ll}
\mathbb{E}[n(T)] = \sum_{i=1}^{M}\sum_{j>1}^{K}\mathbb{E}[n_{i,j}(T)]\notag\\
=\sum_{i=1}^{M}\sum_{j>1}^{K}[\max[(\frac{8}{(1-\beta_0) \Delta_{i,j}}\cdot \frac{1}{\epsilon}\log T)^{2.25},
\phantom{}\left \lceil \frac{8\log T}{\Delta_{i,j}^2 \beta_0^2} \right \rceil]]\notag\\
+\sum_{m=1}^{\infty}\sum_{p=0}^{\infty}2^p\sum_{n_{i,1}=1}^{m+2^p}\sum_{n_{i,j}=1}^{m+2^p}4(m+2^p)^{-4}\notag\\
\leq \sum_{i=1}^{M}\sum_{j>1}^{K}(4+\max[(\frac{8}{(1-\beta_0) \Delta_{i,j}}\cdot \frac{1}{\epsilon}\log T)^{2.25},
\phantom{}\left \lceil \frac{8\log T}{\Delta_{i,j}^2 \beta_0^2} \right \rceil])\notag\\
\leq MK(4+\max[(\frac{8}{(1-\beta_0) \Delta_{i,j}}\cdot \frac{1}{\epsilon}\log T)^{2.25},
\phantom{}\left \lceil \frac{8\log T}{\Delta_{i,j}^2 \beta_0^2}\right \rceil])
\end{array}
\end{equation}
The expected regret then satisfies
\begin{eqnarray}
R^C(T) &\leq&\Delta_{max}\mathbb{E}[n(T)]\notag\\
&\leq& \Delta_{max} MK(4+\max[(\frac{8}{(1-\beta_0) \Delta_{i,j}}\cdot \frac{1}{\epsilon}\log T)^{2.25},
\phantom{}\left \lceil \frac{8\log T}{\Delta_{i,j}^2 \beta_0^2}\right \rceil])
\end{eqnarray}
This completes the proof.
\end{proof}
\subsection{Proof of Theorem 4}
\begin{proof}
The proof is similar to that of Theorem 2. The regret incurred during the time horizon $T$ can be analyzed through the plays of suboptimal arms ($j\neq 1$):
\begin{equation}
\begin{array}{ll}
n(T)=1 + \sum_{i=1}^{M}\sum_{t=K+1}^{T}n_{i,j}(t)\notag\\
= 1 + \sum_{i=1}^{M}\sum_{t=1}^{T}I\{a_i(t)=j\}\notag\\
\leq l + \sum_{i=1}^{M}\sum_{t=1}^{T}I\{I_{i,j}(t)\geq I_{i,1}(t),n_j^{\text{avg}}\geq l\}\notag\\
\leq l+\sum_{t=1}^{T}I\left\lbrace {\sum_{i=1}^{M}\hat{ {X}}_{i,j}(t)+c_{t,n_{i,j}}+h_{n_{i,j}}} \right.\notag\\ \phantom{}\left.{\geq \sum_{i=1}^{M}\hat{ {X}}_{i,1}(t)+c_{t,n_{i,1}}+h_{n_{i,1}},n_j^{\text{avg}}\geq l }\right\rbrace
\end{array}
\end{equation}
At time slot $t$, an individual agent $i$ will choose a suboptimal arm only if the event $\left\lbrace {\sum_{i=1}^{M}\hat{ {X}}_{i,j}(t)+c_{t,\hat{n}_{i,j}}+h_{\hat{n}_{i,j}} \geq \sum_{i=1}^{M}\hat{ {X}}_{i,1}(t)+c_{t,\hat{n}_{i,1}}+h_{\hat{n}_{i,1}} }\right\rbrace $ holds.
This indicates that at least one of the following three conditions must hold:
\begin{eqnarray}
\label{ev6}\hat{ {X}}_{i,1}\leq \mu_1-c_{t,\hat{n}_{i,1}}-h_{\hat{n}_{i,1}}\\
\label{ev7}\hat{ {X}}_{i,j}\geq \mu_j+c_{t,\hat{n}_{i,j}}+h_{\hat{n}_{i,j}}\\
\label{ev8}\mu_1< \mu_j+2c_{t,\hat{n}_i,j}+2h_{\hat{n}_{i,j}}
\end{eqnarray}
According to Lemma 2, $c_{t,\hat{n}_{i,j}}=\sigma\sqrt{\frac{\hat{n}_{i,j}(t)+c_i}{M\hat{n}_{i,j}(t)}\cdot \frac{2\rho\log T}{\hat{n}_{i,j}(t)}}$.
Using Fact 2, the union bound, and the Chernoff-Hoeffding bound, we can bound the probability of \eqref{ev6}:
\begin{eqnarray}
\Pr(\eqref{ev6})&=&\Pr(\hat{ {X}}_{i,1}\leq \mu_1-c_{t,\hat{n}_{i,1}}-h_{\hat{n}_{i,1}})\notag\\
&\leq&\Pr (\hat{ {X}}_{i,1}\leq\hat{ {Y}}_{i,1}-h_{\hat{n}_{i,1}})+\notag\\
&\phantom{}&\Pr\left( z\geq \frac{\mathbb{E}[\hat{ {Y}}_{i,1}]+c_{t,\hat{n}_{i,1}}-\mu_1}{\sqrt{\text{Var}(\hat{ {Y}}_{i,1})}}\right) \notag\\
&\leq&\delta+\Pr( z\geq \frac{c_{t,\hat{n}_{i,1}}}{\sqrt{\text{Var}(\hat{ {Y}}_{i,1})}})\notag\\
&\leq& \delta+\frac{1}{2}\exp(-\frac{c_{t,\hat{n}_{i,1}}^2}{2\text{Var}(\hat{ {Y}}_{i,1})})\leq\delta+\frac{1}{2t^\rho}
\end{eqnarray}
where $z$ is a standard Gaussian random variable. The last inequality follows from the tail bound for the error function and statement iii) of Lemma 2.
Similarly, we can prove the bound of event \eqref{ev7}:
\begin{equation}
\Pr(\eqref{ev7}) \leq\delta+\frac{1}{2t^\rho}
\end{equation}
We can choose $\delta =\frac{1}{2}t^{-\rho} $, leading to:
\begin{eqnarray}
\Pr(\eqref{ev6})\leq t^{-\rho}\\
\Pr(\eqref{ev7})\leq t^{-\rho}
\end{eqnarray}
To prove a bound on \eqref{ev8}, we look for a minimum value of $\hat{n}_{i,j}$ for which \eqref{ev8} is always false. This event being false means that $\Delta_{i,j}\geq 2c_{t,\hat{n}_{i,j}}+2h_{\hat{n}_{i,j}}$, which holds whenever the following two conditions hold for some $0\leq\beta_0\leq 1$.
\begin{eqnarray}
\label{ev11}\beta_0 \Delta_{i,j}\ge 2c_{t,\hat{n}_{i,j}}\\
\label{ev12}(1-\beta_0)\Delta_{i,j}\ge 2h_{\hat{n}_{i,j}}
\end{eqnarray}
For $l = \left \lceil \frac{c_0}{\beta_0^2}+\frac{8\sigma^2\rho(1+c_i)\log T}{M\beta_0^2\Delta_{i,j}^2} \right \rceil$, condition \eqref{ev11} holds.
The appropriate choice of $\hat{n}_{i,j}$ for which $\eqref{ev12}$ holds can be the same as in Theorem 2, since \eqref{ev12} only involves the DP error.
\begin{eqnarray}
\hat{n}_{i,j} &\geq& \beta_1^{2.25}=(\frac{2}{(1-\beta_0) \Delta_{i,j}}\cdot \frac{1}{\epsilon}\log {\frac{1}{\delta}})^{2.25}
\end{eqnarray}
Since we choose $\delta = \frac{1}{2}t^{-\rho}$, this yields
\begin{equation}
\begin{array}{ll}
\hat{n}_{i,j}\geq \max[(\frac{2+2\rho \log T }{\epsilon(1-\beta_0)\Delta_{i,j}})^{2.25},
\phantom{}\left \lceil \frac{c_0}{\beta_0^2}+\frac{8\sigma^2\rho(1+c_i)\log T}{M\beta_0^2\Delta_{i,j}^2} \right \rceil]
\end{array}
\end{equation}
In summary, the total number of suboptimal plays satisfies
\begin{equation}
\begin{array}{ll}
\mathbb{E}[n(T)]=\notag\\%&=& \sum_{i=1}^{M}\sum_{j>1}^{K}\mathbb{E}[n_{i,j}(T)]\notag\\
\sum_{i=1}^{M}\sum_{j>1}^{K}[\max[(\frac{2+2\rho \log T }{\epsilon(1-\beta_0)\Delta_{i,j}})^{2.25},
\phantom{}\left \lceil \frac{c_0}{\beta_0^2}+\frac{8\sigma^2\rho(1+c_i)\log T}{M\beta_0^2\Delta_{i,j}^2} \right \rceil]]\notag\\
\phantom{}+\sum_{t=1}^{T}\sum_{n_{i,1}=1}^{t}\sum_{n_{i,j}=1}^{t}2t^{-\rho}\notag\\
\leq \sum_{i=1}^{M}\sum_{j>1}^{K}(\frac{2\rho}{\rho -1}+\max[(\frac{2+2\rho \log T }{\epsilon(1-\beta_0)\Delta_{i,j}})^{2.25},
\phantom{}\left \lceil\frac{c_0}{\beta_0^2}+\frac{8\sigma^2\rho(1+c_i)\log T}{M\beta_0^2\Delta_{i,j}^2} \right \rceil])\notag\\
\leq \frac{2MK\rho}{\rho -1}+\sum_{i=1}^{M}\sum_{j>1}^{K}\max[(\frac{2+2\rho \log T }{\epsilon(1-\beta_0)\Delta_{i,j}})^{2.25},
\phantom{}\left \lceil \frac{c_0}{\beta_0^2}+\frac{8\sigma^2\rho(1+c_i)\log T}{M\beta_0^2\Delta_{i,j}^2} \right \rceil]
\end{array}
\end{equation}
Using $R^D(T)\leq\Delta_{max}\cdot\mathbb{E}[n(T)]$, we complete the proof.
\end{proof}
\section{Introduction}
The promise of distributed computing is to improve the efficiency and robustness of machine learning tasks by leveraging communication networks to share the computational load, leading to a compelling vision of world-wide computing \cite{1}. However, no matter how compelling this vision is, it cannot be realized before we address a number of challenges, an important one of which is privacy.
In this paper, we consider privacy vs. learning trade-offs for wireless recommendation systems, which are among the most popular learning algorithms in the consumer domain and are considered a key application of edge-based wireless distributed systems \cite{song2017making} \cite{song2018itw} \cite{8849556}. As a use case, we consider a chain of stores, such as a fast-food chain, whose branches make local recommendations to their customers, but which then wishes to aggregate the overall client responses to provide new recommendations or launch new products. We assume that the client responses - what items they like and how much - are the private data we want to protect.
We pose our problem within the federated learning framework proposed by Google \cite{konevcny2016federated}, which addresses the privacy challenge by maintaining the user data locally, while
combining learning models among the distributed agents. In particular, we consider a federated multi-armed bandit (MAB) setup, where each distributed agent could be a local store that makes recommendations, while the aggregator is the parent company. The question we explore is, can we leverage the aggregator to better inform what recommendations to make at the distributed agents, without compromising the user data privacy.
We consider in particular a distributed version of the UCB algorithm: we assume that each agent (store) makes a number of recommendations locally and calculates a sequence of local average reward values. To combine the local models, we need to reveal the average values sequence to the aggregator, without compromising the privacy of the data. We do so by leveraging differential privacy (DP) \cite{dwork2014algorithmic} techniques that preserve privacy of reward sequences. Maintaining privacy amounts to adding a form of noise,
which can affect which items the aggregator decides to recommend next, and which in turn can lead to a higher regret. This paper investigates this privacy/regret trade-off.
\subsection{Related Work}
The MAB algorithm is widely used in recommendation systems due to its simplicity and efficiency \cite{li2010contextual}\cite{zeng2016online}. Auer \textit{et al.} \cite{auer2002finite} developed the UCB algorithm, an index-based policy relying on the average reward plus an upper confidence bound. Another mainstream approach is the sampling-based approach \cite{agrawal2012analysis}, which, instead of computing a deterministic index, uses a sample generated by a Bayesian estimator.
There has been a growing literature that extends the MAB problem to multi-user settings. Liu and Zhao \cite{5535151} consider a distributed bandit problem with \textit{collisions}: choosing the same arm simultaneously leads to a reduced reward for two or more agents. Similar approaches can be found in \cite{6763073} \cite{rosenski2016multi}, which utilize different matching algorithms to avoid collisions. Later work \cite{martinez2019decentralized} makes use of gossip algorithms or running-consensus methods to maintain an approximation of the average value between agents and their neighbors. However, few works have considered accommodating privacy considerations in the learning process.
There is also a very rich literature on differential privacy, mostly applied in deep learning\cite{abadi2016deep} and information theory fields. For decision-making problems, Tossou and Dimitrakakis present algorithms for differentially private stochastic MAB \cite{10.5555/3016100.3016190}. The work in \cite{mishra2015nearly} also investigates this problem. However, all these works operate under a single user setting. As far as we know, our federated private bandit algorithm is the first work that considers both differential privacy and communication in cooperative bandit problems.
\subsection{Main Contributions}
Our work proposes a new bandit learning framework, the \textit{federated private bandits} that combines differential privacy with multi-agent bandit learning. Our key contributions are as follows.
i) We introduce a federated private bandit framework. For each agent, we apply an ($\epsilon,\delta$)-differentially private variant of the UCB scheme. Specifically, the \textit{hybrid mechanism} \cite{chan2011private} is used to track a non-private reward sequence for each agent and to output a private sum reward. The agents then use this private sum reward plus a relaxation of the upper confidence bound to update the arm index.
ii) We consider two multi-agent settings: (a) the DP-Master-worker UCB (a master-worker structure): an external central node can observe all individual agent models and returns an aggregated one to all agents; (b) the DP-Decentralized UCB (fully decentralized with a networked structure): the agents average their model with their neighbors' information using a \textit{consensus algorithm}, without the help of a central node. In both methods, the real reward sequences are kept private.
iii) We analyze both the privacy and regret performance of our federated private UCB algorithms and characterize the influence of communication and privacy on decision making. In particular, we evaluate the trade-off between the privacy and regret.
\section{System Model and Problem Formulation}
We consider a federated recommendation system with $M$ subsystems or agents, where each agent can make recommendations to its local users. We allow the agents to communicate either through a central node (master-worker structure) or directly with their neighbors (networked fully decentralized structure), to aggregate their knowledge of the user preferences. We discuss both the `master-worker' distributed structure and the fully decentralized structure in this paper. All $M$ nodes are associated with $K$ arms (e.g., movies, ads, news, or items) from an arm set $\mathbf{A}: =\left\lbrace 1,2,...,K\right\rbrace $ that can be recommended to the users.
\subsection{Federated Private Bandit Framework}
The above system model can be formulated as a $K$-armed bandit problem with $M$ distributed agents. At time slot $t$, each agent chooses and pulls an arm from the set of $K$ arms, and then the arm $j \in \mathbf{A}$ chosen by agent $i \in [M]$ generates an i.i.d. reward $r_{i,j}(t)$ from a fixed but unknown distribution at time $t$. We denote by $\mu_{i,j}$ the unknown mean of reward distribution. In our model, the reward distribution of each arm is the same for each agent, i.e., for all arms $1\leq j \leq K$, $\mu_{1,j}=\mu_{2,j}=...=\mu_{i,j}=...=\mu_{M,j}$, and thus in the rest of the paper we use $\mu_j$ for simplicity.
The arm that agent $i$ plays at time $t$ is denoted as $a_i(t)\in \mathbf{A}$. Let $q_i(t)$ be the communication message sent by agent $i$ and $q_{-i}(t)$ be the messages received by agent $i$ at time $t$. Here, messages can be learning model parameters which will be specified later. Then the policy $\pi_i(t)$ for agent $i$ can be viewed as a mapping from the collected history set to the action set. That is, $\pi_i(t):\mathit{H}_{i}(t)\rightarrow\mathbf{A}$, where the history $\mathit{H}_{i}(t)$ gathers actions, rewards, and message exchange of the past $\mathit{H}_i(t)=\left\lbrace{(a_i(1),r_{i,a_i(1)}(1),q_{-i}(1)),...,(a_i(t-1),r_{i,a_i(t-1)}(t-1),}\right.\\\left.{ q_{-i}(t-1)) }\right\rbrace $. The overall objective of the $M$ agents is to maximize the expected sum reward over a finite time horizon $T$:
$\mathbb{E}[\sum_{t=1}^{T}\sum_{i=1}^{M}r_{i,a_i(t)}(t)]$.
Without loss of generality, we can assume that arm $1$, with mean $\mu_1$, is always the best arm for each agent. The suboptimality gap can then be defined as $\Delta_{j} : = \mu_{1} - \mu_{j}$ for any arm $j\not=1$.
Let $n_{i,j}(t)$ be the number of times arm $j$ is pulled by agent $i$ up to time $t$, then the number of times arm $j$ is pulled by all the agents in the network up to time $t$ can be calculated as $n_{j}(t):=\sum_{i=1}^{M }n_{i,j}(t)$.
The learning goal is to minimize the overall expected regret, which is defined as the expected reward difference between the best arm and the online learning policies of the agents. For policies with action $a_i(t)$ ($\forall i\in [M], \forall t$), the overall expected regret is defined as
\begin{equation}
\mathit{R}(T) = TM\mu_1-\mathbb{E}[\sum_{t=1}^{T}\sum_{i=1}^{M}\mu_{i,a_i(t)}(t)]
=\sum_{j=2}^{K}\Delta_j\mathbb{E}[n_j(T)]
\end{equation}
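As a toy illustration (ours), the overall regret of a given run follows directly from this definition:
\begin{verbatim}
# Minimal sketch: empirical overall regret of a run, matching
# R(T) = sum_{j>1} Delta_j * n_j(T) for known arm means.
mu = [0.9, 0.6, 0.5]                  # arm means; arm 1 (index 0) is best
# plays[i][t] = arm chosen by agent i at time t (toy trace, M=2, T=4)
plays = [[0, 1, 0, 2], [1, 0, 0, 0]]

n = [sum(row.count(j) for row in plays) for j in range(len(mu))]
regret = sum((mu[0] - mu[j]) * n[j] for j in range(1, len(mu)))
print("n_j(T) =", n, " regret =", round(regret, 2))
\end{verbatim}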
\subsection{Differential Privacy}\label{DP}
We use differential privacy as our privacy metric and briefly review some background material in the following.
\begin{definition}[Differential Private Bandit Algorithm]
A bandit algorithm $\pi_i$ for agent $i$ is ($\epsilon,\delta$)-differentially private if for all two neighboring reward sequences $\mathbf{r}(t)=\left\lbrace r_{i,a_i(1)}(1),...,r_{i,a_i(t)}(t)\right\rbrace $ and $\mathbf{r}'(t)=\left\lbrace r'_{i,a_i(1)}(1),...,r'_{i,a_i(t)}(t)\right\rbrace $ (i.e., that differ on at most 1 position), for all subsets $\mathcal{S}\subseteq \mathcal{A}$, and for all measurable image subsets $\mathcal{Q}$ of $q_i(t)$, the following holds:
\begin{equation}
\begin{array}{llll}
\Pr\{a_i(t)\in\mathcal{S},q_i(t)\in\mathcal{Q}|\mathbf{r}(t)\}\leq \\ \exp{(\epsilon)}\Pr\{a_i(t)\in\mathcal{S},q_i(t)\in\mathcal{Q}|\mathbf{r}'(t)\}+\delta.
\end{array}
\end{equation}
We say the algorithm of the system is ($\epsilon,\delta$)-differentially private if (2) holds for all agents.
\end{definition}
Intuitively, for our bandit problem, if the reward $r_{i,j}(\tau)$ for arm $j$ and agent $i$ is the private information, the definition above implies that we want the algorithm to protect the arm's reward realization $r_{i,j}(\tau)$ against an adversary, even if the adversary can observe the output actions $a_i(1),a_i(2),\ldots,a_i(t)$, the transmitted information $q_i(1),q_i(2),\ldots,q_i(t)$, and the other reward realizations.
A commonly used differential privacy scheme is the \textit{Laplace} mechanism, which simply adds \textit{Laplace} noise $N\sim Lap(\frac{s}{\epsilon})$ to the private data communicated, where $s$ is the sensitivity of the released statistic. In our problem, we employ a more sophisticated differential privacy mechanism, termed the hybrid mechanism, which we briefly describe next.
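As a minimal sketch (ours; any routine sampling Laplace noise would do), the Laplace mechanism reads:
\begin{verbatim}
# Minimal sketch of the Laplace mechanism: release a statistic of
# sensitivity s under epsilon-DP by adding Lap(s / epsilon) noise.
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon):
    return value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# e.g., privately release a sum of rewards in [0, 1] (sensitivity 1):
private_sum = laplace_mechanism(value=42.0, sensitivity=1.0, epsilon=0.5)
\end{verbatim}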
The Hybrid Mechanism\cite{chan2011private} is a \textit{tree based aggregation} scheme that releases private statistics over a data sequence. Consider a reward sequence $\mathbf{r}=(r(1),r(2),...,r(T))$, where at each time $t$ a new $r(t)\in[0,1]$ is inserted. Assume we want to output the partial (up to time $t$) sum $s(t)=\sum_{i=1}^{t}r(i)$ while ensuring that the sequence $\mathbf{r}$ is $(\epsilon,\delta)$-private.
The Hybrid mechanism outputs partial sums at times $t=2^k,k=1,2,\ldots$. For the time period between $2^k$ and $2^{k+1}$, the mechanism constructs a binary tree $B(t)$ that has the inputs $r(i)$ as leaves, while every other node stores a partial sum and the root contains the sum from $2^k$ to $2^{k+1}-1$. The mechanism outputs a private sum $L(t)$ by adding \textit{Laplace} noise of scale $\frac{1}{\epsilon}$, i.e., $Lap(\frac{1}{\epsilon})$, to a set of nodes that ``cover'' all the inputs.
Compared to the straightforward approach of adding noise to each sample $r(i)$, this method outputs partial sums that satisfy the same differential privacy guarantees while adding less noise overall: because of the logarithmic tree depth, only a logarithmic amount of noise enters any given sum.
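The following sketch (ours) illustrates the underlying idea with a simplified binary mechanism -- an illustration of the principle, not the exact construction of \cite{chan2011private}: each dyadic block of inputs receives one fresh Laplace noise term, so that any prefix sum is covered by only $O(\log t)$ noisy blocks.
\begin{verbatim}
# Simplified binary mechanism (illustrative): private prefix sums where
# each dyadic block of inputs carries a single Laplace noise term.
import numpy as np

class BinaryMechanism:
    def __init__(self, epsilon):
        self.eps, self.blocks, self.t = epsilon, {}, 0

    def insert(self, r):
        """Insert r(t) in [0, 1]; return the private prefix sum s(t)."""
        self.t += 1
        # Merge finished blocks, mirroring carries in binary counting.
        s, level = r, 0
        while level in self.blocks:
            s += self.blocks.pop(level)[0]
            level += 1
        self.blocks[level] = (s, np.random.laplace(0.0, 1.0 / self.eps))
        # The stored blocks exactly cover inputs 1..t.
        return sum(v + z for v, z in self.blocks.values())

mech = BinaryMechanism(epsilon=1.0)
for t in range(1, 9):
    print(t, round(mech.insert(0.5), 3))   # true prefix sum is 0.5 * t
\end{verbatim}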
\section{Federated Private Multi-Armed Bandits}
In this section, we present two algorithms for the federated private bandit problem under different settings and provide their performance analysis. Our algorithms combine the non-private UCB algorithm \cite{auer2002finite} with the hybrid $(\epsilon,\delta)$ differential privacy technique.
In the UCB algorithm, at time slot $t$, each arm $j$ of agent $i$ updates an estimate of the index $I_{i,j}(t)$, which is calculated as the sum of the empirical mean $Y_{i,j}(t)$ and an upper confidence bound:
$I_{i,j}(t)=Y_{i,j}(t)+\sqrt{\frac{2\text{log}t}{n_{i,j}(t)}}$.
Here, $Y_{i,j}(t)={y_{i,j}(t)}/{n_{i,j}(t)}$, where $y_{i,j}(t)$ is the sum of observed rewards and $n_{i,j}(t)$ is the total number of times that arm $j$ has been pulled until time $t$.
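For reference, a minimal non-private, single-agent UCB sketch (ours; the variable names are not from the paper) reads:
\begin{verbatim}
# Minimal single-agent UCB: play each arm once, then maximize the index
# Y[j] + sqrt(2 * log(t) / n[j]) with Bernoulli rewards.
import math, random

K, T = 3, 2000
mu = [0.9, 0.6, 0.5]                  # unknown arm means
n, y = [0] * K, [0.0] * K             # pull counts and reward sums

for t in range(1, T + 1):
    if t <= K:
        j = t - 1                     # initialization: play each arm once
    else:
        j = max(range(K), key=lambda a: y[a] / n[a]
                + math.sqrt(2 * math.log(t) / n[a]))
    r = float(random.random() < mu[j])
    n[j] += 1
    y[j] += r
print("pull counts:", n)
\end{verbatim}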
To achieve differential privacy, we apply a DP mechanism as shown in Figure 1. In particular, we instantiate the hybrid mechanism $H_{i,j}$ for each arm $j$ at each agent $i$, which keeps track of the non-private empirical mean $Y_{i,j}$ and outputs a private mean $X_{i,j}$. Here $X_{i,j}=s_{i,j}/n_{i,j}$ and $s_{i,j}$ is the private sum reward. The agents select actions based on the private mean $X_{i,j}$ instead of the empirical mean $Y_{i,j}$, thus ensuring that the actions are also differentially private.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.20]{hm2.png}
\caption{Graphical model for the hybrid mechanism $H_{i,j}$ of agent $i$. }
\label{fig:label}
\end{figure}
We present two federated learning algorithms. The first, termed DP-Master-worker UCB algorithm, employs the DP mechanism to compute the individual arm index, which consists of a private mean as well as an additional privacy-induced uncertainty term. The central node then aggregates the indices and returns an aggregated index that is used for arm selection. The second, termed DP-Decentralized UCB algorithm, employs the same DP mechanism, but the agents estimate the index by averaging their neighbors' inputs.
\subsection{DP-Master-worker UCB algorithm}
In Algorithm 1, each arm $j$ of each agent $i$ uses the DP mechanism $H_{i,j}$ (shown in Figure 1) to maintain a private total reward $s_{i,j}$. The communication phase begins when the counter $\eta =2^p~ \text{for}~p=0,1,...$. The individual arm index of each agent $i$ is first updated using the private mean, the upper confidence bound and the additional noise due to privacy (Line 12). Then, the central node averages all the private indices to compute an average index, which leads to the same best-arm selection for all $M$ agents. Each agent $i$ starts from the common index and privately updates it. For each agent, if an arm is pulled for the $p$-th time consecutively (without switching to any other arm in between), it will also be played for the next $2^p$ time slots.
\begin{algorithm}
\caption{DP-Master-worker UCB algorithm}
\begin{algorithmic}[1]
\State \textbf{Initialization}: Set $t~=~0$ and counter $\eta~=~1$;
\State For each arm $j,~1\leq j\leq K$ of each agent $i,~1\leq i\leq M$, instantiate DP mechanisms $H_{i,j}$.
\State \textbf{Input:} The differential privacy parameter $\epsilon $;
\While {$t \leq T$}
\For {agent $i$ to $M$}
\If {$t \leq K$}
\State Play arm $a_{i}(t) = t$, observe reward $r_{i,a_{i}(t)}(t)$
\State Insert $r_{i,a_{i}(t)}(t)$ to the DP mechanism $H_{i,a_{i}(t)}$
\EndIf
\If {$\eta = 2^p$ for $ p = 0,1,...$}
\State Update total reward $s_{i,j}(t)$ using $H_{i,j}$
\State Update $\upsilon _{i,j} = \frac{ 1}{\epsilon}\log\frac{1}{\delta}\log^{1.5}n_{i,j}(t)$
\State Update index $I_{i,j}(t)=X_{i,j}+\sqrt{\frac{2\log t}{n_{i,j}(t)}}+\frac{\upsilon_{i,j}(t)}{n_{i,j}(t)}$
\State \textit{/*Begin communication phase}
\State Send index $I_{i,j}(t)$ to the central node
\State Receive the averaged index $I^{avg}_{j}(t)$ of $j$ arms
\State \textit{/*End communication phase}
\State Pull best arm $a^*_{i}(t)={argmax}_jI^{avg}_j(t)$
\If {$a^*_i(t) \neq a^*_i(t-1) $}
\State Reset $\eta = 1$;
\EndIf
\Else\State {$a^*_i(t) = a^*_i(t-1) $}
\EndIf
\State Play arm $a^*_i(t)$, observe the reward $r_{i,a^*_i(t)}(t)$
\State Insert $r_{i,a^*_i(t)}(t)$ to the DP mechanism $H_{i,a^*(t)}$
\State Update $t = t+1 , \eta=\eta+1$
\EndFor
\For {The central node}
\State \textit{/*Begin communication phase}
\State Receive index sequence $\left\lbrace I_{1,j}...I_{M,j}\right\rbrace $ of $j$ arms
\State Compute and return back $I^{avg}_j=\frac{1}{M}\sum_{i=1}^{M}I_{i,j}$
\State \textit{/*End communication phase}
\EndFor
\EndWhile
\end{algorithmic}
\end{algorithm}
We next analyze the algorithm performance. Theorem 1 provides the privacy performance of Algorithm 1.
\begin{lemma}[Privacy error bound]
\label{lemma:errorbound}
The error between the empirical mean $Y_{i,j}$ and private mean $X_{i,j}$ after $n_{i,j}$ times of plays is bounded as $|Y_{i,j}-X_{i,j}| \leq h_{n_{i,j}}$ with probability at least $1-\delta$, where $h_{n_{i,j}}$ is the error incurred by the private mechanism calculated as $h_{n_{i,j}}=\frac{1}{\epsilon}\cdot \log ^{1.5}({n_{i,j}})\cdot \log \frac{1}{\delta}\cdot \frac{1}{n_{i,j}}$.
\end{lemma}
\begin{proof}
This follows directly from Fact 1 (Appendix A \cite{2005.06670}), together with the fact that the hybrid mechanism remains $(\epsilon,\delta)$-DP after any number $n_{i,j}$ of plays: since only one arm is pulled at each time, only one mechanism is affected.
\end{proof}
\begin{theorem}[Privacy of Algorithm 1]
Algorithm 1 is $(\epsilon,\delta)$-differentially private after $T$ timeslots with $\delta=T^{-4}$.
\end{theorem}
\begin{proof}
Proposition 2.1 of \cite{dwork2014algorithmic} proves the \textit{post-processing} property of DP mechanisms: the composition of a mapping $f$ with an $(\epsilon, \delta)$- differentially private algorithm is also $(\epsilon, \delta)$ differentially private. Using Lemma~1, the hybrid mechanism is $(\epsilon, \delta)$- differentially private. Moreover, our Algorithm 1 can be seen as a mapping from the averaged output of the hybrid mechanism to the action. This completes our proof.
\end{proof}
Theorem 2 gives the regret of Algorithm 1. Here we only give a proof sketch; the complete proof can be found in Appendix C \cite{2005.06670}.
\begin{theorem}[Regret of Algorithm 1]
\label{thm:1}
The learning regret of Algorithm 1 is
\begin{equation}
\begin{array}{ll}
R^C(T)\leq MK\Delta_{max}
(4+\max[(\frac{8\log{T}}{\epsilon(1-\beta_0) \Delta_{min}} )^{2.25}, \left \lceil \frac{8\log{T}}{\Delta_{min}^2 \beta_0^2} \right \rceil])\notag
\end{array}
\end{equation}
for some $0< \beta_0< 1$, where $\Delta_{max}=\max\left\lbrace \Delta_j\right\rbrace $, $\Delta_{min}=\min\left\lbrace \Delta_j\right\rbrace $, $\epsilon$ is the parameter for $(\epsilon,\delta)$ privacy, and $\delta=T^{-4}$.
\end{theorem}
\textit{Proof outline.} The regret incurred during the time horizon $T$ is caused by playing suboptimal arms. We first bound the amount of error between the private and empirical means that are caused by the DP mechanism. Using this bound and Lemma~\ref{lemma:errorbound}, we estimate the number of times that we play suboptimal arms. We show that after a sufficient number of times $O(\frac{MK\log^{1.5}T}{\epsilon\Delta^2_{min}})$, a suboptimal arm will not be selected with high probability.
\textit{Remark:} Through the central node we obtain $O(MK\log^{2.25}(T))$ regret. The DP mechanism mainly increases the number of exploration rounds. If we do not use the DP mechanism, the $O(\log^{2.25}T)$ term vanishes: after $\frac{8\log T}{\Delta_{min}^2}$ plays, the suboptimal arms are selected with low probability, and we achieve a $O(MK\log T)$ regret. Note that in Theorem \ref{thm:1}, the parameter $\epsilon$ reflects the trade-off between privacy and regret, where privacy increases as $\epsilon$ decreases.
\subsection{DP-Decentralized UCB algorithm}
\begin{algorithm}
\caption{DP-Decentralized UCB algorithm}
\begin{algorithmic}[1]
\State \textbf{Initialization}: Set $t=0$; $\hat{\mathbf{n}}_j(0)=\left\lbrace \hat{n}_{1,j}(0),...,\hat{n}_{M,j}(0)\right\rbrace $, $\hat{\mathbf{s}}_j(0)=\left\lbrace\hat{s}_{1,j}(0),...,\hat{s}_{M,j}(0)\right\rbrace$
\State For each arm $j,~1\leq j\leq K$ of each agent $i,~1\leq i\leq M$, instantiate DP Mechanisms $H_{i,j}$.
\State \textbf{Input:} The differential privacy parameter $\epsilon $; matrix $P$ represents the network structure; $\rho\geq1$;
\While {$t \leq T$}
\For {agent $i$ to $M$}
\If {$t \leq K$}
\State play arm $a_{i}(t) = t$, observe the reward $r_{a_{i}(t)}(t)$
\State Insert $r_{a_{i}(t)}(t)$ to the DP mechanism $H_{i,a_{i}(t)}$
\Else
\State \textit{/*Begin the communication phase}
\State Update the estimated play numbers:
\State $\hat{\mathbf{n}}_j(t)=P\hat{\mathbf{n}}_j(t-1)+P\mathbf{\eta}_j(t-1)$
\State Update the additional private error term:
\State $\hat{\upsilon }_{i,j}(t) = \frac{ 1}{\epsilon}\log\frac{1}{\delta}\log^{1.5}\hat{n}_{i,j}(t)$
\State Update the estimated total rewards:
\State $\hat{\mathbf{s}}_j(t)=P{\hat{\mathbf{s}}}_j(t-1)$
\State \textit{/*End the communication phase.}
\State Update the arm index :
\State $I_{i,j}(t)=\hat{{X}}_{i,j}
+\sqrt{2\rho \frac{\hat{n}_{i,j}(t)+c_i}{M\hat{n}_{i,j}(t)}\cdot \frac{\log t}{\hat{n}_{i,j}(t)}}
+\frac{\hat{\upsilon} _{i,j}(t)}{\hat{n}_{i,j}(t)}$
\State Select the best arm
$a_{i}(t)={argmax}_jI_{i,j}(t)$
\State Observe the reward $r_{a_{i}(t)}(t)$
\State Insert $r_{a_{i}(t)}(t)$ to the DP mechanism $H_{i,a_{i}(t)}$
\State Update $s_{i,a_i(t)}(t)$ using DP mechanism $H_{i,a_i(t)}$
\State $t=t+1$
\EndIf
\EndFor
\EndWhile
\end{algorithmic}
\end{algorithm}
In Algorithm 2, the agents average their model with their neighbors' models at each time $t$, instead of aggregating their values with the help of a central node. We assume that each agent maintains bi-directional communication with a set of neighboring agents. We consider Gaussian distributions for each arm's reward, i.e., the reward at arm $j$ is sampled from a Gaussian distribution with mean $\mu_{j}$ and variance $\sigma^2$. We assume that the variance $\sigma^2$ is known and is the same at each arm. We use a \textit{consensus algorithm} that captures the effect of the additional private information an agent receives through communication with other agents. We represent the network as a graph where nodes are agents and edges connect neighboring agents. A discrete-time consensus algorithm can be expressed as:
\begin{equation}
\mathbf{x}(t+1)=P\mathbf{x}(t),
\end{equation}
where $x$ is the quantity we want the agents to agree on, and $P$ is a row stochastic matrix given by
\begin{equation}
P = \mathit{I}_M -\frac{\kappa}{d_{max}}L.
\end{equation}
Here, $\mathit{I}_M$ is the identity matrix of order $M$, $d_{max}=\max_i \deg(i)$, $i\in \left\lbrace 1,...,M\right\rbrace $, where $\deg(i)$ is the degree of agent $i$; $\kappa\in[0,1]$ is a step-size parameter and $L$ is the Laplacian matrix of the communication graph. Without loss of generality, we assume that the eigenvalues of $P$ are ordered as $\lambda_1=1\ge\lambda_2\geq...\geq\lambda_M\ge-1$.
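As an illustration (ours), $P$ can be constructed for a cycle graph -- the topology used in our experiments -- and checked to be row stochastic with spectrum in $[-1,1]$:
\begin{verbatim}
# Minimal sketch: build P = I - (kappa / d_max) * L for a cycle graph
# of M agents and inspect its row sums and eigenvalues.
import numpy as np

M, kappa = 20, 0.5
A = np.zeros((M, M))
for i in range(M):                    # cycle graph adjacency matrix
    A[i, (i + 1) % M] = A[i, (i - 1) % M] = 1
L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
P = np.eye(M) - (kappa / A.sum(axis=1).max()) * L

print(np.allclose(P.sum(axis=1), 1))            # row stochastic
print(np.sort(np.linalg.eigvalsh(P))[[0, -1]])  # lambda_M and lambda_1
\end{verbatim}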
For our federated private MAB problem, we use the following definitions, that are similar to the definitions in Algorithm 1. Let $\hat{s}_{i,j}$ be the estimated total private reward, $\hat{y}_{i,j}$ be the estimated total true reward of arm $j$ at agent $i$, and $\hat{n}_{i,j}$ be the estimated total number of times that the arm $j$ has been played by agent $i$.
Let $\hat{{X}}_{i,j} = {\hat{s}_{i,j}}/{\hat{n}_{i,j}}$ be the estimated private mean, and $\hat{{Y}}_{i,j} = {\hat{y}_{i,j}}/{\hat{n}_{i,j}}$ be the estimated empirical mean.
Without taking into account differential privacy, the consensus algorithm will update $\hat{y}_{i,j}$ and $\hat{n}_{i,j}$ as follows:
\begin{eqnarray}
\label{eq9}\mathbf{\hat{n}}_{j}(t+1)=P\mathbf{\hat{n}}_{j}(t)+P\mathbf{\xi}_j(t)\\
\label{eq10}\mathbf{\hat{y}}_{j}(t+1)=P\mathbf{\hat{y}}_{j}(t)+P\mathbf{r}_j(t),
\end{eqnarray}
where ${\xi}_{i,j}(t)=I(a_i(t)=j)$ indicates whether arm $j$ is played by agent $i$ at time slot $t$, and $r_{i,j}(t)$ is the reward of the corresponding action, generated by the distribution $N(\mu_j,\sigma^2)$. $\mathbf{\hat{n}}_{j}(t),\mathbf{\xi}_j(t), \mathbf{\hat{y}}_{j}(t), \mathbf{r}_j(t)$ are vectors collecting the values $\hat{n}_{i,j}(t), \xi_{i,j}(t), \hat{y}_{i,j}(t), r_{i,j}(t)$ for $i= 1,...,M$, respectively. We note that under our DP mechanism, an agent cannot observe the reward sequences. Thus, we use the following equation to update the private total rewards instead of \eqref{eq10}:
\begin{equation}
\mathbf{\hat{s}}_{j}(t+1)=P\mathbf{\hat{s}}_{j}(t).
\end{equation}
The above equation captures the fact that only the private total reward $\mathbf{\hat{s}}_{j}(t)$ can be broadcasted through the network graph, not $\mathbf{r}_j(t)$. We still keep \eqref{eq9} because we only aim to keep the reward values private and not the numbers $\mathbf{\hat{n}}_{j}$.
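One round of these updates can be sketched as follows (ours; $\hat{\mathbf{s}}_j$ is assumed to already contain each agent's own private reward update, produced locally by its hybrid mechanism):
\begin{verbatim}
# Minimal sketch of one round of updates (5) and (7).
import numpy as np

def consensus_step(P, n_hat, s_hat, xi):
    # Counts mix together with the new plays xi (eq. (5)); the private
    # sums s_hat, already updated locally via H_{i,j}, simply mix (eq. (7)).
    return P @ (n_hat + xi), P @ s_hat

# Toy check for M = 3 agents: a doubly stochastic P preserves totals.
P = np.full((3, 3), 1.0 / 3.0)
n_hat, s_hat = np.zeros(3), np.array([1.2, 0.7, 2.1])
xi = np.array([1.0, 0.0, 1.0])        # agents 1 and 3 pulled arm j
print(consensus_step(P, n_hat, s_hat, xi))
\end{verbatim}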
Each arm $j$ of each agent $i$ uses the same DP mechanism $H_{i,j}$ as in Algorithm 1 to maintain a private total reward. The communication phase occurs at each timeslot to update the estimated play numbers $\hat{n}_{i,j}(t)$ and the total reward $\hat{s}_{i,j}(t)$ using (5) and (7). Agent $i$ selects the arm with the maximum index, given by:
\begin{equation}
I_{i,j}(t)=\hat{{X}}_{i,j}
+\sqrt{2\rho \frac{\hat{n}_{i,j}(t)+c_i}{M\hat{n}_{i,j}(t)}\cdot \frac{\log t}{\hat{n}_{i,j}(t)}}
+\frac{\hat{\upsilon} _{i,j}(t)}{\hat{n}_{i,j}(t)},
\end{equation} where $c_0,c_i$ are parameters representing the network structure and $\rho>1$ is the exploration parameter.
From (8) we notice that the estimation performance, the network structure, and the exploration parameter all affect the learning performance.
\begin{theorem}[Privacy of Algorithm 2]
Algorithm 2 is $(\epsilon,\delta)$-differentially private after $T$ timeslots with $\delta = \frac{1}{2}T^{-\rho}$.
\end{theorem}
The proof of Theorem 3 is similar to that of Theorem 1.
\begin{theorem}[Regret of Algorithm 2]
The learning regret of Algorithm 2 is
\begin{equation}\label{eq17}
\begin{array}{ll}
R^D(T) \leq \frac{2MK\rho\Delta_{max}}{\rho -1}+\sum_{i=1}^{M}\sum_{j>1}^{K}\max[(\frac{2+2\rho\log{T}}{\epsilon(1-\beta_0)\Delta_{j} }) ^{2.25}, \notag\\
\left \lceil\frac{c_0}{\beta_0^2}+\frac{8\sigma^2\rho(1+c_i)\log T}{\beta_0^2\Delta_{j}^2} \right \rceil]
\end{array}
\end{equation}
for some $0< \beta_0 < 1, ~\rho\geq1$, where $\Delta_{max}=\max\left\lbrace \Delta_j\right\rbrace $, $\Delta_{min}=\min\left\lbrace \Delta_j\right\rbrace $ and $\epsilon$ is the parameter for $(\epsilon,\delta)$ privacy, $\delta = \frac{1}{2}T^{-\rho}$ and $c_i$, $c_0$ are parameters of the network graph.
\end{theorem}
\textit{Proof outline}. The regret is mainly caused by the estimation variance due to communication and by the privacy requirements. Using Lemma 1 and Lemma 2 (provided in Appendix B \cite{2005.06670}), we first bound the error between the estimated private mean and the empirical mean; the communication cost is also reflected in this bound. Using this bound, we estimate the number of times suboptimal arms are selected, which completes the proof. The complete proof can be found in Appendix D \cite{2005.06670}.
\textit{Remark:} From Theorem 4 we obtain $O(MK\log^{2.25}T)$ regret. Both the communication and the privacy mechanism result in an expansion of the exploration phase.
The DP mechanism leads to an additional $O(\log^{2.25}T)$ regret term that grows as the privacy parameter $\epsilon$ decreases. The federated learning setup introduces constants $c_0$ and $c_i$ into the regret, which depend on the network topology. In particular, $c_0$ is proportional to the network scale and $c_i$ depends on the number of neighbors of agent $i$: the sparser the network connection, the larger $c_i$ and the regret. A larger exploration parameter $\rho$ also implies more exploration rounds.
\section{Experiments}
In this section, we perform numerical simulations to verify and analyze the performance of Algorithm 2. We choose $M=20$ and $K=10$.
The 20 agents are connected according to a \textit{cycle graph}, which is a fully decentralized setting.
Figure 2(a) shows the impact of varying the privacy parameter $\epsilon$ in $\{1.5,2,5\}$ with fixed $\rho=2$: the regret increases as $\epsilon$ decreases, i.e., as the privacy requirement becomes stronger. Figure 2(b) shows the impact of varying the exploration parameter $\rho$ in $\{1.2,2,4\}$ with fixed $\epsilon = 2$; as expected, the regret increases with $\rho$. These results demonstrate the trade-off between the regret (recommendation accuracy) and privacy.
\begin{figure}[htbp]
\centering
\subfigure[Regret as a function privacy parameter $\epsilon$.]{
\includegraphics[width=3.8cm]{exp1.png}
}
\quad
\subfigure[Regret as a function of exploration parameter $\rho$.]{
\includegraphics[width=4.1cm]{exp2.png}
}
\caption{Regret performance of Algorithm 2.}
\end{figure}
\section{Conclusion}
In this paper, we proposed a distributed MAB framework for recommendation systems that incorporates differential privacy. At each distributed agent, we use an ($\epsilon,\delta$) differentially private variant of UCB scheme to ensure that agents do not reveal information on the reward values. We designed algorithms for two multi-agent settings: the DP-Master-worker UCB algorithm and the DP-Decentralized UCB algorithm each capturing a different communication network connecting agents. We analyzed both the privacy and regret performance and showed how the need for communication and privacy can influence the decision making performance of the agents.
\newpage
\section{Introduction}
The ever-growing quantity of content posted online requires more and more moderators to monitor this content. Fast and accurate moderation is highly beneficial to online platforms, but it is increasingly expensive and difficult to maintain. Therefore, the automated detection of abusive online content is an important research topic.
Corpora allowing the development of such methods often focus on \textit{single comments} without any conversational context~\cite{Pavlopoulos2017,Razavi2010}.
Yet, recent works~\cite{Papegnies2019,Yin2009} suggest that considering the \textit{entire conversation} thread might improve the automatic detection of abusive content. However, the development of such methods is currently limited by the lack of large-scale corpora of conversations. Corpora containing full conversations exist, but they have a limited number of messages or are not publicly available~\cite{Napoles2017,Cecillon2019}.
\newcite{Karan2019} offer a solution with \textit{PreTox}, a large corpus of discussion threads from Wikipedia talk pages. However, the quality of their semi-automatically generated annotations might be problematic, as the authors report a Precision of only $51$\%.
In this paper, we reconstruct a large-scale corpus of messages from English Wikipedia talk pages, structured as full conversations and provided with high-quality abuse annotations. The resulting corpus contains roughly 193k conversations and 383k messages annotated as abusive or not. To encourage further development of context- and thread-based methods in the area of abusive content detection, we publish this corpus and the source code used for its extraction, and make them freely available.
The other objective of this work is to improve replicability and ease the comparison of classification methods. In this context, we introduce an open-source benchmarking platform that we developed.
The contribution of this work is threefold. First, we match two existing corpora of Wikipedia messages and develop a pipeline to create a large publicly available corpus of conversations. Messages are provided together with detailed information such as the message type, author, talk page and high quality annotations. Second, we present a common comparison platform grouping approaches and methods, to stimulate research on the automatic detection of abusive content. Third, we illustrate the interest of our corpus and platform by assessing existing abuse detection methods.
The rest of this article is organized as follows. First, in Section~\ref{sec:RelatedWork}, we describe the existing corpora related to our proposed one, and how they are used in the literature. Then, we describe our corpus in Section~\ref{sec:ProposedCorpus}, as well as the reconstruction pipeline we propose for its constitution. We present a benchmarking platform and some results we obtain on our corpus in Section~\ref{sec:ProposedBenchmark}. Finally, in Section~\ref{sec:Conclusion}, we summarize our results and present some perspectives.
\section{Related Work}
\label{sec:RelatedWork}
In this section, we introduce the corpora of Wikipedia messages related to abuse detection. We review how they are used in the literature, and stress the limitations of these corpora as well as the works leveraging them.
\subsection{Wikipedia Talk Pages}
\label{subsec:WPtp}
A \textit{talk page} is a discussion page where users can argue and discuss topics relative to a specific Wikipedia page. Every Wikipedia user and article has a related talk page, identified by a unique \texttt{page\_id}. But Wikipedia does not provide a standard post system such as those commonly used in online forums. Instead, the talk page is similar to a regular Wikipedia article page, or a wiki page in general: in theory, users have the ability to edit it by adding, modifying or removing text anywhere. However, in practice, a set of writing and formatting conventions\footnote{\label{note:convention}\url{https://en.wikipedia.org/wiki/Help:Talk_pages\#Replying_to_an_existing_thread}} allows users to give structure to the various conversations taking place on the talk page. For instance, when a user adds his own post, he indents it so as to indicate its hierarchical level in the conversation tree. Figure~\ref{fig:ConventionFormat} shows an example of a Wikipedia conversation under the form of the rendered talk page and the corresponding Wikicode (Wikipedia markup language). Note that a talk page generally contains several conversations at once.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=\columnwidth]{format_convention.eps}
\caption{Part of the \textit{Japan} Wikipedia article talk page: rendered page (left) and corresponding Wikicode (right).}
\label{fig:ConventionFormat}
\end{center}
\vspace{-2mm}
\end{figure}
Like for article pages, Wikipedia stores the changes corresponding to each edit, as a \textit{revision} entry containing the text of the page after the edit. Each revision is identified by a unique number called \texttt{rev\_id}.
\subsection{Wikipedia Comment Corpus}
\label{subsec:WCC}
As part of the Wikipedia Detox research project, \newcite{Wulczyn2017} proposed the \textit{Wikipedia Comment Corpus} (WCC), a corpus of discussion comments from English Wikipedia talk pages. These comments are extracted using the revision history of each considered talk page. The authors consider the textual differences between two consecutive revisions of the talk page, and distinguish two cases depending on the importance of these changes. If the modification is significant, they assume a new comment was posted, which is identified by its own \texttt{rev\_id}. Otherwise, they suppose an existing comment was modified, and apply these changes without updating its \texttt{rev\_id}. Therefore, a given \texttt{rev\_id} is associated with \textit{at most} one comment, and one comment with \textit{exactly} one \texttt{rev\_id}. However, it is important to note that one large edit can correspond, in practice, to a user writing \textit{several} new posts in distinct conversations of the same page. In this case, all these posts are mistakenly gathered in a single WCC comment.
\newcite{Wulczyn2017} used a public dump of the English Wikipedia full history made available in January 2016 to create their corpus, which contains more than 63M comments posted between 2004 and 2015. From this massive corpus, they sampled 3 smaller datasets that they annotated for different types of abuse:
\begin{itemize}
\item \textit{personal attack}: abusive content directed at somebody's person rather than providing evidence;
\item \textit{aggression}: malicious remark to a person or group on characteristics such as religion, nationality or gender;
\item \textit{toxicity}: comment that can make other people want to leave the conversation.
\end{itemize}
It is important to note that each comment in the three datasets is \textit{explicitly} annotated as abusive or not. By comparison, in the abuse detection literature, datasets are often annotated by considering comments flagged by moderators as abusive, whereas the rest of the comments are deemed non-abusive \textit{by default}, without further check. It is then possible for the non-abusive label to be assigned to {\it abusive} comments just because they were missed by the human moderators \textit{e.g.}~\cite{Papegnies2017,Delort2011,Karan2019}. Having explicitly annotated non-abusive comments makes WCC a more reliable corpus, on this aspect.
Information on the datasets is summarized in Table~\ref{tab:InfoCorpus}. The \textit{Personal attack} and \textit{Aggression} datasets contain exactly the same 115k comments while the \textit{Toxicity} dataset contains more comments (159k). Among them, 77k appear in all three datasets. The prevalence of abusive comments in the \textit{Personal attack} dataset is 13.4\%, 14.7\% in the \textit{Aggression} dataset, and 11.5\% in the \textit{Toxicity} dataset. This prevalence does not reflect the data though, as \newcite{Wulczyn2017} oversampled comments from blocked users to enhance the variety of abusive comments. The original abuse rate in Wikipedia comments is around 1\%.
\begin{table}[!ht]
\centering
\begin{tabularx}{\columnwidth}{|X|r|r|l|}
\hline
\textbf{Dataset} & \textbf{Comments} & \textbf{Percentage} & \textbf{Type of} \\
& & \textbf{abusive} & \textbf{annotation} \\
\hline
\textbf{Personal} & 115,864 & 13.4 \% & binary \\
\textbf{attack} & & & \\
\hline
\textbf{Aggression} & 115,864 & 14.7 \% & binary and \\
& & & numerical \\
\hline
\textbf{Toxicity} & 159,686 & 11.5 \% & binary and \\
& & & numerical \\
\hline
\end{tabularx}
\caption{Main properties of the three datasets constituting the Wikipedia Comment Corpus (WCC).}
\label{tab:InfoCorpus}
\end{table}
As the \textit{Wikipedia Comment Corpus} (WCC) is one of the largest available corpora of human-annotated comments, it is used in many works. \newcite{Wulczyn2017} themselves tackle the problem of detecting personal attacks in Wikipedia comments. They experiment with logistic regression and multi-layer Perceptron classifiers using word or character $n$-gram features and report a $96.59$ AUC score on the \textit{Personal attack} dataset. \newcite{Pavlopoulos2017} apply deep learning methods to the moderation task. They experiment with various methods such as Convolutional Neural Network (CNN) operating on word embeddings, Recurrent Neural Network (RNN) and several variants of RNN using an attention mechanism. Their results on the \textit{Personal attack} and \textit{Toxicity} datasets outperform previously reported results with an AUC score up to $98.42$ on the \textit{Toxicity} dataset. \newcite{Grondahl2018} propose a comparative analysis of state-of-the-art hate speech detection models and apply them to the \textit{Personal attack} dataset. They experiment with Logistic Regression (LR) and Multi-Layer Perceptrons (MLP) operating on character $n$-grams, a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) approach. They report results ranging from $85$ to $87$\% in terms of macro-averaged $F1$-score. \newcite{Mishra2018} propose various methods to detect abusive content in the \textit{Personal attack} dataset and achieve their best results with a method using context-aware representations for characters. This method is referred to as the {\it Context hidden-state + char $n$-grams} method, and obtains a $89.35$ and $87.44$ macro-averaged $F1$-score on the \textit{Toxicity} and \textit{Personal attack} datasets, respectively.
Furthermore, \newcite{Dixon2018} show that classifiers can have unfair biases toward certain people and propose methods to mitigate unintended bias in text classification models. They illustrate this statement by applying their methods to the \textit{Toxicity} and \textit{Personal attack} datasets.
Table~\ref{tab:RecapPerformances} summarizes the performances reported in the previously cited articles. The second column refers to the datasets used to train and test the classifier. We can see that the \textit{Aggression} dataset is not used by any of the listed methods. The third column indicates, when available, the percentage of comments in each of the train/development/test sets.
Although the reported classification performances are quite good, most of these models can be fooled by basic obfuscation or adversarial methods. \newcite{Hosseini2017} demonstrate the efficiency of such an attack against the \textit{Google Perspective} API. \newcite{Grondahl2018} present some basic, but efficient, evasion methods. Most of them induce a significant decrease in classification performance. The word-based models are the most vulnerable ones: they can be completely fooled by manually introducing or removing typos, punctuation and spaces in comments. Because of this possible vulnerability, it can be interesting to rely on more information than only the textual content of each comment.
\begin{table*}[!ht]
\centering
\begin{tabular} { |m{7.9cm}|m{3.4cm}|l|l|l| }
\hline
\textbf{Method} & \textbf{Dataset} & \textbf{Split} & \textbf{Metric} & \textbf{Score} \\
\hline
Logistic Regression \cite{Wulczyn2017} & Personal attack & 60/20/20 $^*$ & AUC & 96.24 \\
\hline
Multi-layer perceptrons \cite{Wulczyn2017} & Personal attack & 60/20/20 $^*$ & AUC & 96.59 \\
\hline
RNN \cite{Pavlopoulos2017} & Toxicity & N/A & AUC & 98.42 \\
\hline
RNN + attention mechanism \cite{Pavlopoulos2017} & Personal attack & N/A & AUC & 97.46 \\
\hline
RNN + attention mechanism \cite{Pavlopoulos2017} & Toxicity/Personal attack & N/A & AUC & 98.22 \\
\hline
LSTM \cite{Grondahl2018} & Personal attack & N/A & F1-score & 85 \\
\hline
CNN + GRU \cite{Grondahl2018} & Personal attack & N/A & F1-score & 87 \\
\hline
Context hidden-state+char $n$-grams \cite{Mishra2018} & Personal attack & 60/40 $^+$ & F1-score & 87.44 \\
\hline
Context hidden-state+char $n$-grams \cite{Mishra2018} & Toxicity & 60/40 $^+$ & F1-score & 89.35 \\
\hline
\end{tabular}
\caption{Works leveraging the WCC, with the obtained performances. The \textit{Split} column corresponds to the percentage of comments in the train/test or train/development/test sets. Symbols ($^*$,$^+$) denote a similar split used by several methods.}
\label{tab:RecapPerformances}
\vspace{-4mm}
\end{table*}
\subsection{WikiConv}
\label{subsec:WikiConv}
\textit{WikiConv}~\cite{Hua2018} is a large public corpus based on Wikipedia talk pages extracted from a July 2018 Wikipedia dump. This corpus contains \textit{full conversations}, and not only \textit{isolated comments} like WCC. In this corpus, we call ``messages'' the textual elements constituting a conversation.
The structure of a conversation is retrieved by considering the revision history of the talk page containing it. This history is viewed as a sequence of \textit{conversational actions}. A conversational action is an object representing one operation performed by a user on a talk page. It is composed of many attributes about the action, the talk page, the conversation and its structure. Additionally, actions are categorized into 5 types: conversation thread \textit{creation}, new message \textit{addition}, existing message \textit{modification}, message \textit{deletion}, and deleted message \textit{restoration}. All the attributes are listed and described on the WikiConv authors' GitHub repository\footnote{\href{https://github.com/conversationai/wikidetox/tree/master/wikiconv}{https://github.com/conversationai/wikidetox/tree/master/wikiconv}}. As mentioned before, when performing these actions, the Wikipedia users respect a set of formatting conventions. Hua \textit{et al}. define a heuristic leveraging this knowledge to identify actions, retrieve their description, and determine their type. This pipeline only relies on visual markup clues, so it is language-independent and can be applied to any version of Wikipedia archives, as long as the formatting conventions stay the same. The largest component of the corpus is from the English Wikipedia but the pipeline is also applied to Chinese, Russian, Greek and German Wikipedia.
A WikiConv message is the textual content associated to an action, i.e. the text that was added, removed, or edited. A single revision of a talk page can consist of several actions, and therefore result in several WikiConv messages. Because of that, the \texttt{rev\_id}, which is used as a unique comment identifier in WCC, can be shared by multiple actions in WikiConv. Instead, a WikiConv message is uniquely identified by an \texttt{action\_id}. Another major difference with WCC is that WikiConv contains the full history of the conversation, with each successive version of a message in case it is edited, and not only its final form. Moreover, when several new posts are added in one revision, those are not merged into a single comment as in WCC, but represented by separate WikiConv messages. It is therefore quite common that multiple WikiConv messages correspond to the same WCC comment. Figure~\ref{fig:RevidExample} illustrates a typical example where the added text is split over two different levels of indentation. In WCC, all the text is concatenated into a single comment while in WikiConv, $2$ messages (and therefore actions) are created, each corresponding to a different indentation level. The $2$ actions are distinct (different \texttt{action\_id}) but they have the same \texttt{rev\_id}.
Finally, the most important difference with WCC is that the messages are not annotated for abuse.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.93\columnwidth]{revidExample.eps}
\caption{Illustration of the issue regarding the \texttt{rev\_id} between the WCC and WikiConv. Figure available at \href{https://doi.org/10.6084/m9.figshare.11302385}{10.6084/m9.figshare.11302385} under CC-BY license.}
\label{fig:RevidExample}
\end{center}
\end{figure}
Moreover,~\newcite{Hua2018} use the Google Perspective API\footnote{\href{https://www.perspectiveapi.com}{https://www.perspectiveapi.com}} to score the toxicity of the messages in WikiConv. Based on machine learning models, this API scores messages for several types of abuse, \textit{e.g.}, toxicity, profanity, threat, insult. In WikiConv, all messages are scored on their toxicity and severe toxicity. These two scores are provided as attributes of each message in the corpus.
This corpus is quite recent, so only a few studies currently use it. We report one published paper by \newcite{Chang2019}, which explores the potential of a small subset of conversations sourced from WikiConv to predict future derailment in online conversations. In the next section, we introduce a corpus extending the work of \newcite{Hua2018} on WikiConv.
\subsection{PreTox}
\label{subsec:PreTox}
In a recent work, \newcite{Karan2019} proposed \textit{PreTox}, a corpus based on WikiConv (see previous section). \textit{PreTox} is composed of complete discussion threads with semi-automatically generated toxicity annotations. Karan \textit{et al.} rely on a heuristic to flag toxic messages. This heuristic combines two types of information: 1) whether or not the message was deleted by someone else; and 2) the scores generated using the Google Perspective API and provided along with the WikiConv corpus. A binary toxicity annotation is created for each message using this heuristic. Flagged messages are deemed toxic and, unlike WCC, all remaining messages are considered as non-toxic. \newcite{Karan2019} report an annotation Precision of 51\% for their semi-automated method on a test set of 100 manually annotated messages. Thus, human annotations can be expected to be markedly more accurate than the semi-automatically generated annotations of \textit{PreTox}.
\subsection{Discussion}
\label{subsec:Discussion}
In this section, we reviewed the corpora related to the detection of abusive messages in Wikipedia talk pages. However, they all have some weaknesses. WCC contains high quality annotations for $3$ types of abusive content, but does not provide any conversational structure. On the contrary, WikiConv provides full conversations but without annotations. PreTox seeks to extend the latter by semi-automatically annotating the messages, but this process is not accurate enough. In the next section, we address these issues by matching WCC and WikiConv so as to combine their advantages, and thus compensate for their individual drawbacks.
Moreover, we also reviewed the works leveraging these corpora, and highlighted a major issue, as shown in Table~\ref{tab:RecapPerformances}: the lack of a standard protocol for evaluating the performances of abuse detection tools. This flaw concerns both the evaluation metric used and the way the data is divided into train/development/test subsets. In addition, none of the listed works provide an open-source version of their code. Therefore, it is extremely complicated to have a comparative overview of all proposed approaches, which certainly constitutes a major obstacle to progress in this research area.
\section{Proposed Corpus}
\label{sec:ProposedCorpus}
The context of messages ({\it i.e.} the messages surrounding a targeted message in a conversation) is ignored by many existing abusive content detection methods~\cite{Pavlopoulos2017,Razavi2010,Djuric2015}, while it seems to have a positive influence on classification performances~\cite{Papegnies2019,Yin2009}. An annotated conversation corpus could allow the use of such information at a large scale, in order to develop context-based methods that take advantage of conversational structure and dynamics to detect abusive content.
In this work we propose \textit{Wikipedia Abusive Conversations} (WAC), a corpus of messages from Wikipedia integrating conversational information and high quality human annotations. WAC is a combination of the first two corpora described in Section~\ref{sec:RelatedWork}~and takes advantage of their complementarity. It is based on the messages and conversations structure from WikiConv~\cite{Hua2018} and the human annotations for 3 different types of abusive content from the WCC~\cite{Wulczyn2017}. The textual elements constituting conversations in WAC are called ``messages'', as in WikiConv, since they correspond to WikiConv messages matched with WCC annotations.
This reconstruction task is not trivial because of the way that comments and messages are identified in the two corpora. As explained before and shown on Figure~\ref{fig:RevidExample}, there is no guarantee of \texttt{rev\_id} uniqueness for WikiConv messages, making it difficult to match them with WCC comments. WAC provides a large collection of conversations including at least one human-annotated message per conversation. It is divided into 3 datasets annotated for \textit{Personal attack}, \textit{Aggression} and \textit{Toxicity}. In total, it contains approximately 193k conversations consisting of 4.9 million messages, among which 383k are annotated. It is publicly available online\footnote{DOI: \href{https://doi.org/10.6084/m9.figshare.11299118}{\texttt{10.6084/m9.figshare.11299118}}}.
\subsection{Reconstruction Pipeline}
\label{subsec:Pipeline}
We now describe the reconstruction process we developed in order to gather information from the existing corpora~\cite{Wulczyn2017,Hua2018} into a new one and extract useful information.
The pipeline is detailed only for the \textit{Personal attack} dataset, but is the same for the other two datasets. Its source code is open source, and publicly available online\footnote{\href{https://github.com/CompNet/WikiSynch}{https://github.com/CompNet/WikiSynch}}.
The reconstruction process is divided into 5 main steps. It begins with the extraction of the annotation from WCC. The second step is to retrieve messages from WikiConv. The third step consists in filtering these messages in order to keep only the relevant talk pages. The fourth step is the conversation reconstruction. The last step, the most important and difficult one, consists in uniquely identifying all the annotated messages in the conversation. Figure~\ref{fig:ReconstructionPipeline} shows the whole pipeline, discussed through this section.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=\textwidth]{fig_pipeline_v2.eps}
\caption{Representation of our pipeline applied to reconstruct the conversation of an annotated comment. Only the textual content of the actions is displayed. The right side shows the annotated comment and its 10 associated human judgments (J\_1, ..., J\_10). The letters in the WCC table stand for \textit{quoting\_attack} (Q), \textit{recipient\_attack} (R), \textit{third\_party\_attack} (T), \textit{other\_attack} (O), \textit{attack} (A). Messages with a red frame have the same \textit{rev\_id} as the annotated comment. Message colors match both the pages and the conversation containing them. Figure available at \href{https://doi.org/10.6084/m9.figshare.11302385}{10.6084/m9.figshare.11302385} under CC-BY license.}
\label{fig:ReconstructionPipeline}
\end{center}
\end{figure*}
\subsubsection{Annotation Extraction}
The first step is to extract annotations from the WCC. This corpus provides 10 judgments per annotated comment. Each judgment provides multiple annotations depending on the dataset. The \textit{Personal attack} dataset has $5$ binary annotations: \textit{quoting\_attack, recipient\_attack, third\_party\_attack, other\_attack} and the more general \textit{attack}. The \textit{Aggression} and \textit{Toxicity} datasets also provide such a general binary score (\textit{aggression} and \textit{toxicity}, respectively). Additionally, they provide an \textit{aggression\_score} and a \textit{toxicity\_score}, ranging from $-2$ (very abusive) to $2$ (very healthy), $0$ being neutral.
We aggregate these 10 judgments to determine the gold annotation of all the annotated messages. For the binary annotations, we compute the majority annotation among crowdworkers to determine the gold standard. For the scores, we compute the average value among all crowdworkers.
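As an illustration, this aggregation step can be sketched as follows; the judgment field names are hypothetical and do not necessarily match the released WCC column names.
\begin{verbatim}
from statistics import mean

def aggregate_judgments(judgments):
    """Fuse the 10 per-comment crowdworker judgments into gold labels.

    Binary fields take the majority label (ties fall back to the
    non-abusive label 0); numeric scores take the average.
    """
    gold = {}
    for key in judgments[0]:
        values = [j[key] for j in judgments]
        if all(v in (0, 1) for v in values):     # binary annotation
            gold[key] = int(sum(values) > len(values) / 2)
        else:                                    # numeric score
            gold[key] = mean(values)
    return gold

# Example: 10 judgments for one comment from the Toxicity dataset.
example = [{"toxicity": 1, "toxicity_score": -1.0}] * 7 \
        + [{"toxicity": 0, "toxicity_score": 0.5}] * 3
print(aggregate_judgments(example))
# {'toxicity': 1, 'toxicity_score': -0.55}
\end{verbatim}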
In WAC, we call \textit{annotated messages} the annotated comments extracted from WCC.
The right side of Figure~\ref{fig:ReconstructionPipeline} shows an example of this step applied to a comment annotated for \textit{Personal attack}. Based on the 10 human judgments from WCC, the gold annotations for each of the $5$ annotated types of attack are determined.
\subsubsection{Talk Pages Retrieval}
The second step is to retrieve the data from WikiConv\footnote{DOI: \href{https://doi.org/10.6084/m9.figshare.7376003}{\texttt{10.6084/m9.figshare.7376003}}}. Because WCC contains data from the English Wikipedia, we only consider the English part of WikiConv, which is composed of approximately $91$M distinct conversations. We group all messages by their respective \texttt{page\_id}. Note that at this stage, messages are not ordered, and not structured as conversations, as a single talk page can contain multiple conversations. This is for instance the case for the blue talk page in Figure~\ref{fig:ReconstructionPipeline}.
At this step, we also filter the messages based on their type: creations, additions, and modifications are retained while deletions and restorations are filtered out.
Indeed, deletions often concern abusive messages, which are already outnumbered by non-abusive messages in WCC, so applying the deletions would unbalance the corpus even more. By doing this filtering, we also retain a maximum of annotated messages in our corpus.
In the example displayed in Figure~\ref{fig:ReconstructionPipeline}, the colors of the WikiConv messages match the talk page on which they appear. We can distinguish $4$ pages: purple, blue, green and grey. However, the grey page contains only $1$ message which corresponds to a deletion. Thus, this message is removed and after Step 2, only $3$ talk pages remain.
\subsubsection{Talk Pages Filtering}
Among the WikiConv talk pages retrieved at the previous step, only a fraction contains a message that is annotated in WCC. This filtering step aims at keeping only the pages containing at least one such message, in order to retain only the relevant talk pages. However, it is important to understand that the annotated message can be in any of the conversations taking place on the concerned talk page. In order to perform such a filtering, we rely on the \texttt{rev\_id}, the id of the revision from which the message was extracted. The retained pages are all the pages in which at least one message has the same \texttt{rev\_id} as an annotated comment. As previously mentioned, this attribute is available in both WCC and WikiConv but, unlike WCC, WikiConv is likely to associate the same \texttt{rev\_id} with several distinct messages (or rather, these messages correspond to a single comment in the WCC). Therefore, there are more messages in WikiConv having the \texttt{rev\_id} of an annotated WCC comment than the actual number of annotated comments. This issue is addressed later at Step~$5$. Messages with a red frame in Figure~\ref{fig:ReconstructionPipeline} are messages having the same \texttt{rev\_id} as the annotated comment. After Step~3, only the blue page is retained, as it is the only one containing messages with the wanted \texttt{rev\_id}. Messages in this page are still unordered.
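A minimal sketch of this filtering step, assuming each WikiConv message is represented here as a dictionary exposing its \texttt{rev\_id} (the actual corpus is stored in larger serialized records):
\begin{verbatim}
def filter_relevant_pages(messages_by_page, annotated_rev_ids):
    """Keep only the talk pages containing at least one message whose
    rev_id matches that of an annotated WCC comment."""
    return {
        page_id: msgs
        for page_id, msgs in messages_by_page.items()
        if any(m["rev_id"] in annotated_rev_ids for m in msgs)
    }

pages = {
    "p1": [{"rev_id": 42, "text": "Thank you"}],
    "p2": [{"rev_id": 7, "text": "See the section above"}],
}
print(filter_relevant_pages(pages, {42}))   # only page "p1" is kept
\end{verbatim}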
\subsubsection{Conversation Reconstruction}
The fourth step consists in reconstructing the conversation. For each page remaining after the previous step, we reconstruct the distinct conversations taking place on this specific page, using the attributes available in WikiConv.
The reconstruction process starts by retrieving all the messages corresponding to the creation of a new conversation. In Figure~\ref{fig:ReconstructionPipeline}, there are two such messages out of the five messages of this page. To retrieve all conversation creations, the \texttt{type} attribute is not sufficient, because some messages that start a new conversation are categorized as additions rather than creations.
Then, based on the \texttt{replyTo\_id} attribute, which identifies the message being replied to, it is possible to link the messages of a conversation and thus reconstruct its structure. As a result, the structure of each conversation on the page is modeled as a graph of actions, a message being the textual content of an action. Figure~\ref{fig:GraphActions} is an example of a conversation reconstructed during this step. The left part shows the textual content of all the actions in this conversation. The very first action is the creation of the conversation. Then, all the source-reply relationships are modeled using tabulations. An action is a reply to the nearest previous action with one less tabulation. For instance, Actions~7 and~3 are replies to Action~2, which is itself a reply to Action~1. The right part of Figure~\ref{fig:GraphActions} shows the corresponding graph of actions.
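The reply-graph construction can be sketched as follows, assuming each action is a dictionary with \texttt{action\_id} and \texttt{replyTo\_id} fields, the latter being empty for actions starting a conversation:
\begin{verbatim}
from collections import defaultdict

def build_action_graph(actions):
    """Return a parent -> children adjacency map over action ids.
    Actions starting a conversation (replyTo_id is None) are grouped
    under the None key."""
    children = defaultdict(list)
    for act in actions:
        children[act["replyTo_id"]].append(act["action_id"])
    return children

# Mirrors the example conversation above: actions 3 and 7 reply to
# action 2, which itself replies to action 1 (the creation).
actions = [
    {"action_id": 1, "replyTo_id": None},
    {"action_id": 2, "replyTo_id": 1},
    {"action_id": 3, "replyTo_id": 2},
    {"action_id": 7, "replyTo_id": 2},
]
print(dict(build_action_graph(actions)))
# {None: [1], 1: [2], 2: [3, 7]}
\end{verbatim}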
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=\columnwidth]{graph_actions.eps}
\caption{An example of a conversation and its corresponding graph of actions. Arrows indicate which action each action replies to. Figure available at \href{https://doi.org/10.6084/m9.figshare.11302385}{10.6084/m9.figshare.11302385} under CC-BY license.}
\label{fig:GraphActions}
\end{center}
\end{figure}
During this reconstruction step, modification messages are considered as replies to the original message they are editing, in order to keep track of all the content added to the talk page and not only the final form of each message. Moreover, a lot of messages categorized as modifications can actually be considered as additions. Indeed, a typical behavior of Wikipedia users is to reply to a message by adding text straight into the message they want to reply to instead of creating a new one. Thus, some conversations take place in a single message which is successively modified by multiple users. However, even if the conversation visually takes place in a single message, technically, an action is created and saved for each successive edit of the message. So, the graph of actions that we produce is exactly the same as if all the replies had been posted as successive and distinct messages. This behavior justifies the need to consider modifications as full messages, and not as a simple state of a message at a given time.
In the example of Figure~\ref{fig:ReconstructionPipeline}, the page retained at the previous step contains $2$ conversations modeled by two distinct shades of blue. These conversations are reconstructed at Step~4 and actions are ordered as they appear on the original talk page.
\subsubsection{Annotated Messages Retrieval}
We now have reconstructed all the conversations appearing in pages known to contain at least one annotated message. However, some of these conversations may not contain any such message, as several conversations generally coexist on the same talk page. The last step is therefore to filter them out. As stated in Step~$3$, a given \texttt{rev\_id} can be associated with several WikiConv messages, whereas it points to a unique WCC comment. This is a major issue for us because the \texttt{rev\_id} is the only attribute available to match WCC comments to WikiConv messages. In order to figure out which of the messages with equivalent \texttt{rev\_id} is actually the comment annotated in WCC, we compute the \textit{Longest Common Sequence} (LCS) between the original annotated comment and each message in our corpus having the same \texttt{rev\_id}. We consider that the message with the longest common sequence corresponds to the annotated comment. Approximately $36$\% of our annotated messages are concerned by this issue, most of them having only $2$ or $3$ messages with similar \texttt{rev\_id}.
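A sketch of this disambiguation step, interpreting LCS as the longest common subsequence computed by dynamic programming (the exact string-matching variant used is an assumption):
\begin{verbatim}
def lcs_length(a, b):
    """Length of the longest common subsequence of two strings,
    via dynamic programming in O(len(a) * len(b))."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ca == cb
                       else max(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

def match_annotated_comment(wcc_text, candidates):
    """Among WikiConv messages sharing the annotated comment's rev_id,
    keep the one with the longest common sequence with the WCC text."""
    return max(candidates, key=lambda m: lcs_length(wcc_text, m["text"]))

candidates = [{"text": "Thank you"},
              {"text": "I edited the article above"}]
print(match_annotated_comment("Thank you", candidates))
# {'text': 'Thank you'}
\end{verbatim}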
Once every annotated message has been uniquely identified, we filter all the conversations reconstructed at Step~$4$ to only keep those containing at least one annotated message. In the example of Figure~\ref{fig:ReconstructionPipeline}, both conversations reconstructed at Step~4 contain a message with the \texttt{rev\_id} of the annotated comment, the messages with a red frame. The LCS is computed between both conversation messages and the annotated comment. As a result, the message ``Thank you'' is identified as the actual annotated message and its conversation is retained while the other is discarded. In the end, we get the conversation containing the annotated message and its associated gold annotation computed from WCC.
The described pipeline is applied for all annotation types (\textit{i.e., Personal attack, Aggression and Toxicity}) to create the 3 distinct datasets constituting our WAC corpus. Each dataset is composed of conversations containing at least one annotated message. Three files containing the annotations are released along with the corpus, each file corresponding to a dataset.
\subsection{Description}
A number of annotated comments from the original WCC datasets are discarded during the reconstruction pipeline described in Section~\ref{subsec:Pipeline}. This is mostly due to missing data in WikiConv or WCC. Some lost comments are also comments associated with a deletion or restoration in WikiConv, which are discarded by the reconstruction pipeline. However, $97.97$\% of the original annotated WCC comments are retained in WAC. In total, the corpus contains more than $2.2$ million unique messages split into $168{,}827$ unique conversations. The number of annotated messages and the division of annotations in all three datasets is summarized in Table~\ref{tab:SizeRepartition}. As mentioned in Section~\ref{subsec:WCC}, one annotated message can be annotated for different types of abuse. Hence the $382{,}665$ total annotations from the last line of Table~\ref{tab:SizeRepartition} are assigned to a total of $193{,}265$ distinct messages.
\begin{table}[!ht]
\centering
\begin{tabularx}{\columnwidth}{ |X|c|c|c| }
\hline
\textbf{Dataset} & \textbf{Annotated} & \textbf{Abuse} &
\textbf{Non-abuse} \\
& \textbf{messages} & & \\
\hline
Personal & 113,174 & 14,934 & 98,240 \\
attack & & (13.20\%) & (86.80\%) \\
\hline
Aggression & 113,174 & 16,331 & 96,843 \\
& & (14.43\%) & (85.57\%) \\
\hline
Toxicity & 156,317 & 19,700 & 136,617 \\
& & (12.60\%) & (87.40\%) \\
\hline\hline
Total & 382,665 & 50,965 & 331,700 \\
& & (13.31\%) & (86.69\%) \\
\hline
\end{tabularx}
\caption{Number of annotated messages and distribution of annotations in the proposed \textit{Wikipedia Abusive Conversations} (WAC) corpus.}
\label{tab:SizeRepartition}
\end{table}
Wikipedia messages are usually longer than other types of online posts such as tweets or chat messages. Messages in our corpus have an average length of more than $1{,}000$ characters.
As shown in Figure~\ref{fig:GraphActions}, the structure of each conversation can be modeled as a graph. On average, there are $13$ messages in a conversation. The distribution of the conversations length is shown in Figure~\ref{fig:ConvLength}. Note that the \textit{y}-axis scale is logarithmic for readability reasons. We can observe that some conversations contain more than $1{,}000$ messages but for a large majority, conversations are only $1$- to $20$-message long.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=\columnwidth]{length_conversation.pdf}
\caption{Distribution of the conversation lengths in \textit{Wikipedia abusive Conversations}, expressed in number of messages. The \textit{y}-axis scale is logarithmic. Figure available at \href{https://doi.org/10.6084/m9.figshare.11302385}{10.6084/m9.figshare.11302385} under CC-BY license.}
\label{fig:ConvLength}
\end{center}
\end{figure}
Figure~\ref{fig:AnnotatedPos} shows the distribution of the relative position of annotated messages in conversations. This position is expressed as the percentage of messages posted \textit{before} the annotated message in the conversation. Only conversations with at least $5$ messages are considered in this figure. This distribution shows that annotated messages are well distributed over all positions in the conversations, except at the very end of the conversation where a lot more annotated messages appear. This observation holds whether the annotated message is abusive or not. For abusive messages, this position can potentially be explained by the fact that abusive comments are quickly deleted from Wikipedia~\cite{Hua2018}, before creating many reactions.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=\columnwidth]{pos_conversation.pdf}
\caption{Position of the annotated messages in the conversation, expressed in percentage of the messages appearing \textit{before} the annotated message. Only conversations with $5$ and more messages are considered. Figure available at \href{https://doi.org/10.6084/m9.figshare.11302385}{10.6084/m9.figshare.11302385} under CC-BY license.}
\label{fig:AnnotatedPos}
\end{center}
\end{figure}
Conversations can also be divided into multiple sub-conversations, a sub-conversation being a sequence of messages in which each message is an answer to the previous one. For instance, the conversation represented in Figure~\ref{fig:GraphActions} is composed of $8$ messages and $2$ sub-conversations. On average, there are $3$ sub-conversations per conversation in the corpus. However, they only contain approximately 4 messages on average, which is quite limited for a conversation.
This value highlights the fact that Wikipedia talk pages are not used in the same way as forums or social media. Indeed, many messages are informative messages explaining what changes have been made to the article, or suggestions on how to edit the article associated with the talk page. Most of the time, these messages do not call for answers, which can explain the relatively small number of messages per sub-conversation that we observe.
\section{Proposed Benchmarking Platform}
\label{sec:ProposedBenchmark}
In Section~\ref{subsec:Discussion}, we highlighted some issues with the evaluation and comparison of current abuse detection methods. To overcome this problem, we propose a common benchmark platform, described in Section~\ref{s:plateform}. Then, we assess, in Section~\ref{s:usage}, existing detection methods to illustrate the interest of our corpus and platform.
\begin{table*}[!htb]
\centering
\begin{tabular} { |p{3cm}|r|r|r|r|r|r|r|r|r| }
\hline
\textbf{Ground Truth} & \multicolumn{3}{c|}{\textbf{Perspective - Toxicity}} & \multicolumn{3}{c|}{\textbf{Perspective - Severe toxicity}} & \multicolumn{3}{c|}{\textbf{Hybrid method}} \\
\cline{2-10}
\textbf{Dataset} & Prec. & Recall &
$F$-measure & Prec. & Recall &
$F$-measure & Prec. & Recall &
$F$-measure \\
\hline
Personal attack & 84.96 & 86.96 & 85.96 & 54.05 & 93.06 & 68.38 & 81.24 & 70.47 & 75.47 \\
\hline
Aggression & 82.89 & 87.49 & 85.13 & 53.71 & 92.52 & 67.96 & 81.75 & 70.23 & 75.55 \\
\hline
Toxicity & 84.68 & 90.82 & 87.64 & 53.49 & 94.29 & 68.26 & 74.17 & 74.77 & 74.47 \\
\hline
\end{tabular}
\caption{Macro Precision, Recall and $F$-measure obtained by the $3$ tested methods. }
\label{tab:PerfsResults}
\end{table*}
\subsection{Platform Description}
\label{s:plateform}
An important issue is that the performances reported in different works are often almost impossible to compare, since systems may be evaluated on different datasets, and many different metrics are used to perform these evaluations. Even if the corpus used is identical, which is the case with works on WCC for example, the way it is split into train/development/test differs, and is often not precisely described by the authors.
All these issues hinder replicability. In this section, we present a benchmarking platform that we developed in order to address them. It is an open-source tool available online\footnote{\href{https://github.com/CompNet/Alert}{https://github.com/CompNet/Alert}}, aimed at grouping classification methods in the area of automatic abusive content detection in order to ease and stimulate the replication of reported performances. Moreover, we take advantage of our new corpus to address the difficulties in the comparison of the results. All the methods of the platform are assessed using the corpus we developed (WAC). We propose a split into train (60\%), development (20\%) and test (20\%) sets for each of the 3 datasets of WAC. This split was randomly generated and is publicly available online. Using this split for all the methods ensures that all the results are obtained with the same data and are thus truly comparable. Additionally, we leave open the possibility to implement and add further metrics to the methods if needed, the tool being designed to ease the addition of new metrics. Different variants of the $F$-Measure as well as the Area Under the ROC Curve are currently implemented, since they are the metrics mainly used by the methods listed in Table~\ref{tab:RecapPerformances}.
As mentioned before, the source code of all the works presented in Section~\ref{sec:RelatedWork} is not publicly available, so we could not include them in our platform. Instead, we focused on our previously published abuse detection method~\cite{Cecillon2019}. It is a hybrid approach combining two distinct methods previously proposed by our team~\cite{Papegnies2019}. The first one is content-based and relies on a set of features describing exclusively the textual content of the messages to perform the classification. Though this approach does not require a corpus of conversations to be tested, it is still interesting to assess it on the WAC corpus because of its size. Indeed, the reported performances for this method were obtained on a small dataset of less than $3{,}000$ comments.
The second approach is graph-based, and requires full conversations in order to be applied. It consists in extracting conversational networks modeling the interactions between users, before computing topological measures to describe these graphs, and using them as features during the classification process. Hence, this method completely ignores the content of the messages and only relies on the structure and dynamics of the conversation. It is typically the kind of methods for which the WAC was created. Our hybrid tool combines both text- and graph-based features.
Our benchmarking platform is currently available online, but more existing approaches still need to be implemented. Indeed, while it currently contains only two methods, the objective is to include more methods using \textit{Wikipedia conversations} to make it a comparative platform for the many approaches in the area of automatic abuse detection.
\subsection{Usage Example}
\label{s:usage}
We now illustrate the interest of our corpus and platform to assess the performance of the Google Perspective API as well as our own hybrid method. The way the data is split between train, development, and test sets is described in a file provided with the data. We assess the performance in terms of Precision, Recall and $F$-measure, separately for the $3$ datasets constituting WAC ({\it Personal attack}, {\it Aggression} and {\it Toxicity}). Our results are presented in Table~\ref{tab:PerfsResults}.
Each message in WikiConv is provided with two scores computed through the Google Perspective API: a \texttt{toxicity} and a \texttt{severe\_toxicity} score. To evaluate their quality, we first convert them into binary classes (\textit{abusive} vs. \textit{non-abusive}), by using the equal error thresholds calculated by \newcite{Hua2018} following the methodology of \newcite{Wulczyn2017}. A message is considered toxic if its \texttt{toxicity} score is above $0.64$ and severely toxic if its \texttt{severe\_toxicity} score is above $0.92$. Unsurprisingly, the best $F$-measure for the \texttt{toxicity} score is obtained on the \textit{Toxicity} dataset. However, the performances obtained for the other $2$ datasets are not much lower. Based on this observation, we can hypothesize that the method used to generate the \texttt{toxicity} and \texttt{severe\_toxicity} scores may not really distinguish between \textit{Personal attack}, \textit{Aggression} and \textit{Toxicity}, and relies on a more general definition of abuse. The \texttt{severe\_toxicity} score yields a higher Recall than the \texttt{toxicity} one for all $3$ abuse types, but the Precision is only around 54\%. This poor Precision is due to a lot of toxic messages being mistaken for severely toxic messages. This confirms our assumption from Section~\ref{sec:ProposedCorpus}, \textit{i.e.} \textit{PreTox} annotations (largely based on the Google Perspective API) are less accurate than human annotations.
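The score-to-label conversion itself is straightforward; a sketch using the thresholds quoted above:
\begin{verbatim}
def perspective_labels(message):
    """Binarize the Perspective scores shipped with WikiConv using
    the equal error thresholds of Hua et al. (2018)."""
    return {
        "toxic": message["toxicity"] > 0.64,
        "severely_toxic": message["severe_toxicity"] > 0.92,
    }

print(perspective_labels({"toxicity": 0.71, "severe_toxicity": 0.30}))
# {'toxic': True, 'severely_toxic': False}
\end{verbatim}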
For the hybrid method implemented in our platform, performances are similar for the $3$ datasets, with a $F$-measure around 75\%. This puts our method between the two variants of the Google Perspective API. There is a clear drop in performance compared to the results obtained in our previous work, on a different corpus~\cite{Cecillon2019}. This can be explained by several factors. First, the text-based part of our method relies on very standard features and could be improved by using more sophisticated ones. Second, the graph-based part was designed to operate on chat messages, and therefore to handle very large and linear conversations. In WAC, conversations have a limited size and are not linear, which greatly decreases the efficiency of this method. Therefore, there is room for improvement, and we plan to adjust both parts to better handle the characteristics of Wikipedia talk pages. In any case, our goal in this paper was only to illustrate the usefulness of our platform and corpus, and we leave the improvement of our classifier to future work.
\section{Conclusion and Future Work}
\label{sec:Conclusion}
In this paper, we introduced a large corpus of 383k annotated user messages along with the conversations they appear in. We presented the pipeline that we developed to link two existing corpora of Wikipedia comments and extract high quality labels and thread-level information.
So far, the development of context-based methods in the area of abusive comment detection has been limited by the lack of large annotated corpora of conversations. This new publicly available corpus opens perspectives for new work and for extending existing work. For example, content-based methods could incorporate information about the conversation and its structure. Furthermore, the large number of messages in the corpus allows it to be used with any machine learning approach.
In the second part, we presented a tool that we developed to assess some detection methods on this new corpus. A future work is to further develop this platform by integrating more methods into it. The objective is to make it a comparison platform for classification methods using the conversational corpus we proposed.
\section{Bibliographical References}
\label{main:ref}
\bibliographystyle{lrec}
\section{Introduction}
Overlapping speech detection (OSD) is a key component for speaker diarization and speech separation. Speaker diarization seeks to match a time frame of speech to the corresponding speaker identity. Agglomerative hierarchical clustering on speaker embeddings has been one of the main approaches for speaker diarization \cite{DBLP:journals/taslp/MiroBEFFV12}\cite{DBLP:conf/icassp/Garcia-RomeroSS17}\cite{DBLP:conf/icassp/WangDWMM18}\cite{DBLP:journals/corr/abs-2012-14952}. Audios are split into segments and speaker embeddings are extracted from each of the segments. Unsupervised clustering is performed on the speaker embeddings to obtain speaker identities\cite{DBLP:conf/interspeech/ZhengLSL19a}. One problem of the clustering approach is that speaker embeddings extracted from small segments can be biased by the speech content \cite{DBLP:conf/interspeech/ZhengLS20}. Furthermore, it lacks the ability to handle overlapping speech.
Recently, an end-to-end speaker diarization approach was proposed, which incorporates overlapping speech detection into the process of inferring speaker labels at any given frame time \cite{DBLP:conf/asru/FujitaKHXNW19}\cite{DBLP:conf/interspeech/FujitaKHNW19}\cite{DBLP:conf/interspeech/HoriguchiF0XN20}\cite{DBLP:journals/corr/abs-2003-02966}. When tested on datasets with overlapping speech, the end-to-end approach achieved better performance than the embedding-based clustering methods. The improvement is not unexpected, since the baseline clustering method does not have the capability to handle overlapping speech, which makes up a considerable proportion of the test set. It remains unclear how accurate the system is in detecting overlapping speech. Given its importance in the speaker diarization system, it is worthwhile to single out the OSD component for extensive study.
When overlapping speech occurs, identifying the speakers of interest is not enough. It is also necessary to separate the overlapping speech in order to obtain clean automatic speech recognition (ASR) results corresponding to the target speakers. Therefore, a speech separation or target speaker extraction component is usually applied to overlapping speech segments.
A vast body of literature can be found on speech separation \cite{DBLP:conf/interspeech/WangLSWCLHLPNG20}\cite{DBLP:conf/icassp/DelcroixZKON18}\cite{DBLP:journals/taslp/LuoM19}\cite{DBLP:journals/taslp/WangNW14}\cite{DBLP:journals/taslp/WangC18a}. A major challenge faced by researchers studying speech separation is that it may cause distortions to the original speech and result in degradation of ASR performance \cite{DBLP:conf/interspeech/WangMWSWHSWJL19}. This is a discouraging factor for real applications, since overlapping speech only contributes a small portion of the entire duration, and harming the rest of the non-overlapping speech may seem unworthy. Furthermore, performing multi-target speaker extraction on non-overlapping speech may result in undesired ``ghost speech'', which will contaminate the final ASR deliverable. Therefore, knowing when to perform speech separation is crucial.
In this paper we aim to detect overlapping speech segments in microphone array data. A microphone array-based beamforming process separates speech into multiple beams according to the localization of the corresponding speakers. Therefore, when an overlapping segment is detected, separated speech is produced alongside. The only further action required is to select the optimal beam(s), depending on the most probable localization of the speaker(s). \cite{anguera2007acoustic} is one of the first works focusing on a beamforming approach for speaker diarization. It provided a non-neural-network-based approach to take advantage of multi-microphone data. In \cite{DBLP:conf/interspeech/ZhengS21} we proposed a speaker diarization system based on microphone array data. We showed that integrating spatial spectrum information can lead to remarkable improvement of the system. An OSD component was mentioned in the proposed system but was not elaborated on. In this paper we discuss in detail the OSD component mentioned in previous work. We propose a neural network architecture called BeamTransformer, which manages to take advantage of the beamformer's ability to utilize multi-microphone data and the transformer's power in context sequence modeling.
Several previous efforts on overlapping speech detection are worth addressing. \cite{DBLP:conf/interspeech/YoshiokaECXA18} proposed a BLSTM-based unmixing transducer that transforms an input multi-channel acoustic signal into a fixed number of time-synchronous audio streams. The AMI corpus is one of the few publicly available corpora containing annotations to generate labels for overlapping speech detection \cite{mccowan2005ami}. \cite{DBLP:conf/interspeech/AndreiCB17} is among the first efforts to train deep neural networks on the OSD task. The authors reported a 0.77 precision and 0.68 recall on 25ms frames on the AMI corpus. \cite{DBLP:conf/specom/KunesovaHZR19} seeks to improve speaker diarization by detecting overlapping speech and reports a 0.71 precision and 0.48 recall. \cite{DBLP:conf/icassp/BullockBG20} claims to have made significant improvements, observing a 0.92 precision and 0.48 recall on AMI.
According to the performances reported by the aforementioned works, frame-level overlapping speech detection is still far from being a reliable component in a speaker diarization and automatic speech recognition system. It is suspected that a 25ms frame of input features does not contain enough useful information for the neural network to make a reliable prediction on the OSD task. One frame of information may be sufficient for a ``0 vs 1'' task, such as voice activity detection, but the ``1 vs more'' task requires more.
We choose to experiment on one-second segments instead of frames for several reasons. First, inferring on small segments allows us to take into account the sequence context information, which turns out to be helpful for inputs from multiple beams. The segment-level classification reports a more convincing result than the frame-level performance mentioned above. With accuracy, recall, and precision hovering around 90 percent, we can be more confident in relying on the outputs of the OSD component in practical speaker diarization and ASR systems.
Second, the final goal of overlap detection is not simply inferring speaker labels, but to separate and recognize speech from the targeted speakers of concern. Overlapping speech are usually processed by target speaker extraction and speech separation, both of which require segment-level inputs. Hence a frame-level responsiveness is trivial.
Third, the frame-level OSD task requires accurate labeling of every frame, which is highly impractical. \cite{DBLP:conf/interspeech/AndreiCB17} reported that the AMI corpus contains labeling errors in terms of OSD annotations. The segment-level OSD task, on the other hand, is much more tolerant of the quality of annotations.
\section{Methods}
This section is organized as follows. In Section 2.1 we discuss the methods used to extract acoustic and spatial features from microphone array data. In Section 2.2 the architecture of BeamTransformer is thoroughly discussed. In Section 2.3 a complementary component used to process spatial features is described.
\subsection{Features}
The acoustic and spatial features we utilize are extracted from our previously proposed Circular Differential Directional Microphone Array (CDDMA). The detailed design of the CDDMA is discussed more thoroughly in \cite{DBLP:conf/interspeech/ZhengS21}\cite{huangdifferential}. The look-direction of each beamformer is uniformly distributed around a circle to cover the whole space. The output signals of the beamformers are spatially separated from each other. All the directional elements are uniformly distributed on a circle with directions pointing outward. The CDDMA beamformer is given as below:
\begin{equation}\label{solution}
\textbf{h}_{cddma}(\omega) = \textbf{R}^{H}( \omega, \boldsymbol{\theta }) [\textbf{R}( \omega, \boldsymbol{\theta }) \textbf{R}^{H}( \omega, \boldsymbol{\theta })]^{-1} \textbf{c}_{\boldsymbol{\theta }} .
\end{equation}
where the vector $\textbf{c}_{\boldsymbol{\theta }}$ defines the acoustic properties of the beamformer, such as its beampattern; the constraint matrix $\textbf{R}( \omega, \boldsymbol{\theta })$ of size $\mathit{N} \times \mathit{M} $ is constructed from the directional microphone steering vectors, which exploit the acoustics of the microphone elements. As shown in \cite{huangdifferential}, the CDDMA beamformer displays significant improvement in terms of white noise gain (WNG), which measures the efficacy of suppressing spatially uncorrelated noise, and directivity factor (DF), which quantifies how the microphone array performs in reverberant environments\cite{brandstein2013microphone}\cite{benesty2018fundamentals}.
SRP-PHAT features based on the CDDMA-beamformer are extracted to represent spatial localization of the speakers \cite{dibiase2000high}. We formalize the microphone array signals received per frame as:
\begin{equation}\label{eq:1}
\textbf{x}\left( \omega, \theta \right) = [x_{1}, x_{2},\ \cdots \ x_{M}]^{T},
\end{equation}
where the superscript $^{T}$ represents the transpose operator, $\omega = 2 \pi \mathit{f} $ is the angular frequency, $\mathit{f}$ is the temporal frequency, $\theta$ is the incident angle of the signal, and $x_{m}$ represents the signal of each microphone. For each candidate incident angle $\theta$, we design the corresponding CDDMA beamformer to target the direction $\theta$, denoted as $\textbf{h}_{cddma}(\omega, \theta)$, and we calculate the transient steering response power (SRP) at the $n^{th}$ frame as below:
\begin{equation}\label{eq:1}
\textbf{P}_{n} \left( \theta \right) = \int_{-\infty}^{+\infty} | \textbf{x}\left( \omega, \theta \right)^{H} \textbf{h}_{cddma}(\omega, \theta)|^{2} d \omega.
\end{equation}
The local estimate of the incident angle for the current frame is given by:
\begin{equation}\label{eq:1}
\hat{\theta}_{n} = \underset{\theta}{argmax} \ \textbf{P}_{n} \left( \theta \right)
\end{equation}
Based on the estimate of SRP at each frame, we can form a spatial spectrum as below:
\begin{equation}\label{eq:2}
\mathcal{B} \left( \theta , n \right) = \textbf{P}_{n} \left( \theta \right), \forall \ n \in \mathbb{N}^+
\end{equation}
A smoothing function is applied to the estimated angle $\hat\theta$ of each frame to filter out noisy direction-of-arrival estimates.
The extracted SRP-PHAT feature has a dimension of 120 per frame, each bin representing 3 angular degrees. In addition, the log energy received by each of the microphones is included. Hence the spatial feature we use for each frame has a total dimension of 128.
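To make the feature construction concrete, the following numpy sketch assembles the 128-dimensional spatial feature for one STFT frame. The beamformer weights are random placeholders standing in for the CDDMA solutions defined above, and the STFT size is an illustrative assumption.
\begin{verbatim}
import numpy as np

M, F = 8, 257       # microphones, frequency bins (assumed STFT size)
n_angles = 120      # 3-degree resolution over 360 degrees

rng = np.random.default_rng(0)
X = rng.standard_normal((F, M)) + 1j * rng.standard_normal((F, M))
# H[k] would hold the CDDMA beamformer steered at angle 3*k degrees;
# random placeholders are used here instead of the actual solution.
H = rng.standard_normal((n_angles, F, M)) \
  + 1j * rng.standard_normal((n_angles, F, M))

# Steered response power per candidate angle: |x^H h|^2 summed over
# frequency (a discrete version of the SRP integral above).
beam_out = np.einsum("fm,kfm->kf", X.conj(), H)   # (n_angles, F)
srp = np.sum(np.abs(beam_out) ** 2, axis=1)       # (120,)

# Per-microphone log energy completes the 128-dim frame feature.
log_energy = np.log(np.sum(np.abs(X) ** 2, axis=0) + 1e-8)  # (8,)
feature = np.concatenate([srp, log_energy])
print(feature.shape, "estimated angle:", 3 * int(np.argmax(srp)), "deg")
\end{verbatim}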
For acoustic features, filterbank features are extracted from each beam formed by the CDDMA.
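As a rough illustration of the spatial-feature pipeline above, a numpy sketch is given below; array shapes, function names, and the discrete approximation of the frequency integral by a sum over STFT bins are our own assumptions:
\begin{verbatim}
import numpy as np

def srp_spectrum(X, H):
    """Per-frame steered response power over candidate angles.

    X : (T, F, M) complex STFT of the M microphone signals.
    H : (A, F, M) CDDMA beamformer weights for A candidate angles.
    Returns the (T, A) spatial spectrum B(theta, n).
    """
    # y[t, a, f] = x(t, f)^H h(a, f); power summed over frequency
    Y = np.einsum('tfm,afm->taf', X.conj(), H)
    return (np.abs(Y) ** 2).sum(axis=-1)

def spatial_feature(X, H, eps=1e-8):
    """120-dim SRP (3 degrees per bin, A=120 assumed) plus
    8 per-microphone log energies -> 128 dims per frame."""
    srp = srp_spectrum(X, H)                             # (T, 120)
    log_e = np.log((np.abs(X) ** 2).sum(axis=1) + eps)   # (T, 8)
    return np.concatenate([srp, log_e], axis=-1)         # (T, 128)
\end{verbatim}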
\subsection{BeamTransformer Architecture}
Figure \ref{fig:beamtransformer} displays the architecture of a BeamTransformer. The pointing directions of beams 1 to 8 are distributed evenly in a circular manner. Therefore, the pointing directions of two adjacent beams are separated by 45 degrees, which also means that beam 1 and beam 5 point in opposite directions.
A 1D convolution Pre-Net is applied to beam-wise filterbank features. The Pre-Net shortens the input length from $L$ to $\frac{L}{4}$, which reduces the computation cost of the following transformers. Beams from opposite directions are paired up, and each of the four pairs is fed to a transformer encoder. The main motivation for re-combining beams in this manner is to reduce the number of parameters and the computation cost of the transformer while retaining as much uncorrelated spatial information as possible to help identify sequential differences among beams. Opposite beams, such as beam 1 and beam 5, have the smallest correlation, since the beamforming for speech separation has been designed to achieve a relatively high front-to-back ratio. Therefore, beams in opposite directions should display more disparity than adjacent ones, regardless of the relative localization of overlapped speakers. The outputs of the transformer encoders are then stacked together and go through a Post-Net, which consists of a simple feed-forward network.
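The following PyTorch sketch mirrors this description; layer sizes, kernel widths, and head counts are our own assumptions, since the actual hyper-parameters are not given in the text:
\begin{verbatim}
import torch
import torch.nn as nn

class BeamTransformer(nn.Module):
    """Sketch: 8 beams -> Pre-Net -> 4 opposite-beam pairs
    -> per-pair transformer encoders -> Post-Net."""
    def __init__(self, feat_dim=40, d_model=128, n_layers=2):
        super().__init__()
        # strided 1D conv Pre-Net shortens L to roughly L/4
        self.pre = nn.Conv1d(feat_dim, d_model, kernel_size=5,
                             stride=4, padding=2)
        proto = nn.TransformerEncoderLayer(2 * d_model, nhead=4,
                                           batch_first=True)
        self.encoders = nn.ModuleList(
            [nn.TransformerEncoder(proto, n_layers) for _ in range(4)])
        self.post = nn.Sequential(nn.Linear(8 * d_model, 2 * d_model),
                                  nn.ReLU())

    def forward(self, beams):            # beams: (B, 8, F, L)
        z = [self.pre(beams[:, b]) for b in range(8)]  # 8 x (B, D, L/4)
        pairs = [torch.cat([z[b], z[b + 4]], dim=1)    # opposite beams
                 for b in range(4)]                    # 4 x (B, 2D, L/4)
        enc = [self.encoders[i](p.transpose(1, 2))     # (B, L/4, 2D)
               for i, p in enumerate(pairs)]
        stacked = torch.cat(enc, dim=-1)               # (B, L/4, 8D)
        return self.post(stacked)                      # (B, L/4, 2D)
\end{verbatim}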
Figure \ref{fig:overlap} and Figure \ref{fig:nonoverlap} illustrate the key feature that Beam-Transformer is trying to identify. When there are two overlapped speakers (Figure \ref{fig:overlap}), the target beams pointing to the corresponding speakers have different dominant harmonic structure of speech in the spectrogram. When there is one single speaker (Figure \ref{fig:nonoverlap}), on the other hand, there is only one dominant harmonic structure of speech, which is in the target beam pointing to the direction of speaker. As the beam direction deviates from the direction of speech, the harmonic structure gets monotonically weaker, as the signals are more suppressed by CDDMA beamforming.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{BeamTransformer.png}
\caption{Architecture of BeamTransformer.}
\label{fig:beamtransformer}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{BeamTransformerOverlap.png}
\caption{Spectrogram of beams from overlapping segments. Two overlapping speakers are located near Beam 1 and 4, respectively.}
\label{fig:overlap}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{BeamTransformerNonoverlap.png}
\caption{Spectrogram of beams from non-overlapping segments. One single speaker is located near Beam 6.}
\label{fig:nonoverlap}
\end{figure}
\subsection{SpatialNet Architecture}
SpatialNet is the architecture used to train on the spatial spectrum input. The input dimension is 128 per frame, comprising the 120-dimensional SRP-PHAT feature and the 8-dimensional microphone-array log energy, as mentioned above. It has the same Pre-Net, Post-Net, and transformer encoder structures as BeamTransformer. The only difference is that SpatialNet does not require a re-combination of beams.
Since BeamTransformer takes the acoustic signals as input and SpatialNet takes only spatial information, they can be effective complements to each other. In Figure \ref{fig:spatialnet} we show how the SpatialNet and BeamTransformer are combined. The outputs of Post-Nets from both sides are concatenated together. A frame-alignment process is applied beforehand.
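A minimal sketch of the combination step is given below; the paper does not specify the frame-alignment process, so linear interpolation to the shorter sequence is our own assumption:
\begin{verbatim}
import torch
import torch.nn.functional as F

def combine(beam_out, spatial_out):
    """Concatenate Post-Net outputs after aligning frame rates.

    beam_out    : (B, Lb, Db) from BeamTransformer's Post-Net
    spatial_out : (B, Ls, Ds) from SpatialNet's Post-Net
    """
    aligned = F.interpolate(spatial_out.transpose(1, 2),
                            size=beam_out.shape[1],
                            mode='linear', align_corners=False)
    return torch.cat([beam_out, aligned.transpose(1, 2)], dim=-1)
\end{verbatim}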
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{spatialspectrum.jpg}
\caption{Spatial information such as SRP-PHAT is extracted and fed into SpatialNet. The output of SpatialNet is concatenated to the Post-Net component of BeamTransformer.}
\label{fig:spatialnet}
\end{figure}
\begin{table}[h]
\centering
\caption{Tensor dimensions after each components of BeamTransformer and SpatialNet. F is the feature dimension. L represents the length of inputs, in terms of total number of frames. D represents the specified hidden dimension.}
\begin{tabular}{|c|c|c|}
\hline
& BeamTransformer & SpatialNet \\
\hline
Input & $F \times L \times$ 8 & $128 \times L \times 1$\\
\hline
Pre-Net & $D \times \frac{L}{4} \times 8$ & $D \times L \times 1$ \\
\hline
Recombination of beams & $(D \times \frac{L}{4} \times 2) \times 4 $ & $D \times L \times 1$\\
\hline
Transformer Encoders & $4D \times \frac{L}{2}$ & $D \times L$ \\
\hline
Post-Net & $D \times \frac{L}{2}$ & $D \times L$ \\
\hline
Mean Pooling & $D \times 1 $ & $D \times 1$ \\
\hline
\end{tabular}
\label{tab:network}
\end{table}
\section{Experiments}
\subsection{Corpus}
In this paper we experiment on both the publicly available AMI corpus and self-recorded AliMeeting corpus.
The AMI corpus data are recorded using a circular array with omni-directional microphones. In order to utilize the microphone array data of the AMI corpus, the spatial spectrum is estimated based on a conventional circular differential microphone array (CDMA) instead of the CDDMA. We follow the AMI Diarization Setup in \cite{DBLP:journals/corr/abs-2012-14952} to split up the train set and test set.
The AliMeeting corpus contains 175 real meetings with human annotations. Each of the meetings has a duration of around 30 minutes. There are 4 participants in each of the meetings, 182 speakers in total. Overlapping speech accounts for about 40 percent of all speech data in the corpus. The train set consists of 156 meetings and the test set contains 19 meetings. All speakers and meeting rooms in the test set are unseen in the train set. Overlapping and non-overlapping segments of at least one second are selected for both the training and testing stages.
\subsection{Experiment Setup}
The baseline system is trained on the original single-channel audio input without beamforming. 40-dimensional filterbank features are extracted and fed into a similar network structure: a Pre-Net, one 6-layer transformer encoder, and a Post-Net. Since there is only one channel, no re-combination of beams is involved and there is only one transformer encoder.
We also experimented with using only the spatial feature and no acoustic features: a SpatialNet is trained using the 128-dimensional spatial feature.
BeamTransformers with 40-dimensional filterbanks and 160-dimensional high-resolution filterbanks are compared to find out how much more information BeamTransformer can mine from a more detailed sequential input.
Finally, we combine BeamTransformer and SpatialNet by stacking the outputs of Post-Nets from both networks. Since the acoustic features and spatial features are relatively independent, it is expected that such combination should see a noticeable improvement.
\subsection{Results}
Table \ref{tab:amiresults} and Table \ref{tab:aliresults} display results from the AMI corpus and the AliMeeting corpus, respectively. SpatialNet denotes the experiments using only the spatial feature and no acoustic features. BT(High-Res) stands for BeamTransformer with high-resolution filterbank inputs. BT+SpatialNet denotes the combination of BeamTransformer and SpatialNet.
BeamTransformer+SpatialNet is the best-performing system on both the AMI corpus and the AliMeeting corpus, immediately followed by BeamTransformer with high-resolution features. A relative improvement of $15-16\%$ over the baseline in both precision and recall is observed on both corpora.
\begin{table}[t]
\caption{The performance of overlapping segments detection on AMI corpus (\%).
}
\label{tab:amiresults}
\centering
{
\begin{tabular}{lcccc}
\toprule
& Accuracy & Precision & Recall & Fscore \\
\midrule
Baseline & 83.4 & 75.4 & 74.6 & 75.0 \\
\midrule
SpatialNet & 84.6 & 76.9 & 77.0 & 77.0 \\
\midrule
BeamTransformer & 88.2 & 83.1 & 81.0 & 82.0 \\
\midrule
BT(High-Res) & 90.0 & 85.5 & 84.5 & 85.0 \\
\midrule
\textbf{BT+SpatialNet} & \textbf{91.7} & \textbf{87.8} & \textbf{87.0} & \textbf{87.3} \\
\bottomrule
\end{tabular}
}
\vspace{-2mm}
\end{table}
\begin{table}[t]
\caption{The performance of overlapping speech segments detection on AliMeeting corpus (\%). Accuracy, Precision, Recall, and Fscore are denoted as A, P, R, Fs, respectively.
}
\label{tab:aliresults}
\centering
{
\begin{tabular}{lc}
\toprule
& Metrics - A, P, R, Fs, lengths \\
\midrule
Baseline & 79.4(A), 71.9(P), 70.7(R), 71.3(Fs), 1s \\
& 82.9(A), 76.6(P), 75.1(R), 75.8(Fs), 2s \\
\midrule
SpatialNet & 81.1(A), 73.5(P), 73.6(R), 73.6(Fs), 1s \\
& 85.1(A), 79.2(P), 79.1(R), 79.2(Fs), 2s \\
\midrule
BeamTransformer & 83.6(A), 77.7(P), 76.0(R), 76.8(Fs), 1s \\
& 87.8(A), 83.7(P), 81.9(R), 82.8(Fs), 2s \\
\midrule
BT(High-Res) & 85.2(A), 80.0(P), 78.4(R), 79.2(Fs), 1s \\
& 89.7(A), 86.1(P), 84.9(R), 85.5(Fs), 2s \\
\midrule
\textbf{BT+SpatialNet} & 87.1(A), 82.5(P), 81.2(R), 81.8(Fs), 1s \\
& \textbf{91.2(A), 88.1(P), 87.2(R), 87.6(Fs), 2s} \\
\bottomrule
\end{tabular}
}
\vspace{-2mm}
\end{table}
\section{Conclusions}
In this paper we investigated microphone array-based approaches in detecting overlapping speech segments. We discussed the importance of OSD in the system of speaker diarization and speech separation, and the lack of reliability in current state-of-the-art performances of frame-level OSD. We found that utilizing segment-level sequence information, along with spatial information, significantly improves the performance of overlapping speech detection. To achieve our goal, we propose a specific architecture named BeamTransformer that takes advantage of beamformer's ability in spatial filtering and transformer's renowned edge in capturing sequential knowledge.
More work can be done in the future. The reported performance was achieved by training on only 80-90 hours of data. Multi-microphone data simulation and augmentation techniques could be investigated to quickly enlarge the size of the training data. Furthermore, even though the overlapping speech has been separated into different beams, a single-channel target-speaker extraction approach can be applied to the corresponding beam. This way we can largely remedy the negative effects when OSD goes wrong.
\bibliographystyle{IEEEtran}
\section{\MakeUppercase{Introduction}}
\label{sec:intro}
In order to gain understanding about the human brain, various technologies have recently been introduced, such as electroencephalography (EEG), tomography, magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI). They provide scientists with data about the temporal and spatial activity inside the brain. Functional magnetic resonance imaging is a brain imaging method that measures the blood oxygenation level. It detects changes in this level that are believed to be related to neurotransmitter activity. This enables the study of brain functioning, pathological trait detection and treatment response monitoring. The method localises brain function well, and thus is useful in detecting differences in subject brain responses \cite{MATTHEWS2004, HUETTEL2009}.
Deeper understanding about the simultaneous activities in the brain begins with a decomposition of the data. Independent component analysis (ICA) has been extensively used to analyze fMRI data. It tries to decompose the data into multiple components that are mixed in the original data. Basically, there are two ways to perform ICA: group ICA and individual ICA \cite{CALHOUN2009}. Group ICA is performed on the data matrix including all the participants' fMRI data, and individual ICA is applied on each dataset of each participant. Among datasets of different participants, group ICA tends to need more assumptions which are not required by individual ICA \cite{CONG2013}.
For individual ICA, once the components for each participant are known, one expects to find the components that are most common among the participants.
Therefore, clustering the spatial maps extracted by ICA is a necessary step in the individual ICA approach to find common spatial information across different participants in fMRI research.
ICA decomposes the individual datasets and creates components that can be presented with spatial maps. After ICA has been applied, a data matrix of size $n$ by $p$ is produced, where $n$ is the number of spatial maps and $p$ is the number of voxels of each spatial map. The $n$ spatial maps come from different participants, and $n$ is much smaller than $p$ in fMRI research. Clustering the spatial maps is mostly done using the $n$ by $n$ similarity matrix of the $n$ by $p$ data matrix \cite{CALHOUN2009, ESPOSITO2005}. Surprisingly, it usually works well, although such a similarity matrix can inherently explain only a certain amount of the total variance contained in the high-dimensional $n$ by $p$ data matrix \cite{ESPOSITO2005}.
New mathematical approaches for functional brain data analysis should take into account the characteristics of the data analyzed. As stated, spatial maps have high dimensionality $p$. In machine learning, dimensionality reduction is usually performed on such datasets before clustering. In the small-$n$-large-$p$ clustering problem, the conventional dimensionality reduction methods, for example, principal component analysis (PCA) \cite{JOLLIFFE2002}, might not be suitable for the non-linear properties of the data. In this research, we apply a recently developed non-linear method called diffusion map \cite{COIFMAN2006a,NADLER2006} for dimensionality reduction. The probabilistic background of the diffusion distance metric will give an alternative angle to this dataset by facilitating the clustering task and providing visualization. This paper explores the possibility of using the diffusion map approach for fMRI ICA component clustering.
\section{\MakeUppercase{Methodology}}
\label{sec:method}
This paper considers a dimensionality reduction approach to clustering of high-dimensional data. The clustering procedure flows as follows:
\begin{enumerate}
\item Data normalization with logarithm
\item Neighborhood estimation
\item Dimensionality reduction with diffusion map
\item Spectral clustering
\end{enumerate}
Data normalization should be done if the features are on differing scales. This ensures that the distances between the data points are meaningful. Neighborhood estimation for diffusion map creates the neighborhood where connections between data points are considered. Dimensionality reduction creates a new set of fewer features that still retain most information. Spectral clustering groups similar points together.
We assume that our dataset consists of vectors of real numbers: $X = \left\{ x_1, x_2, \dots , x_n \right\}, x_i \in \mathbb{R}^p$. In practice the dataset is a data matrix of size $n \times p$, whose rows represent the samples and columns the features. In this study each row vector is a spatial map and column vector contains the corresponding voxels in different spatial maps.
\subsection{Diffusion map}
Diffusion map is a dimensionality reduction method that embeds the high-dimensional data to a low-dimensional space. It is part of the manifold learning method family and can be characterized with its use of diffusion distance as the preserved metric \cite{COIFMAN2006a}.
The initial step of the diffusion map algorithm calculates the affinity matrix $W$, whose elements are kernel weights of the pairwise data vector distances. Here a Gaussian kernel with the Euclidean distance metric is used \cite{COIFMAN2006a, NADLER2008}. For $\epsilon$ selection, see below. The affinity matrix is defined as
\begin{equation*}
W_{ij} = \exp \left( -\frac{|| x_i - x_j ||^2}{\epsilon} \right),
\label{KERNEL}
\end{equation*}
\noindent where $x_i$ is the $p$-dimensional data point. The neighborhood size parameter $\epsilon$ is determined by finding the linear region in the sum of all weights in $W$, while trying different values of $\epsilon$ \cite{COIFMAN2008,SINGER2009}. The sum is
\begin{equation*}
L = \sum_{i=1}^{n} \sum_{j=1}^{n} W_{i,j}.
\end{equation*}
From the affinity matrix $W$ the row sum diagonal matrix $D_{ii} = \sum_{j=1}^{n} W_{ij}, i \in 1 \ldots n$ is calculated. The $W$ matrix is then normalized as $P = D^{-1} W$. This matrix represents the transition probabilities between the data points, which are the samples for clustering and classification. The conjugate matrix $\tilde{P} = D^{\frac{1}{2}} P D^{-\frac{1}{2}}$ is created in order to find the eigenvalues of $P$. In practice, substituting $P$, we get
\begin{equation*}
\tilde{P} = D^{-\frac{1}{2}} W D^{-\frac{1}{2}}.
\label{NGL}
\end{equation*}
This so-called normalized graph Laplacian \cite{CHUNG1997} preserves the eigenvalues \cite{NADLER2008}. Singular value decomposition (SVD) $\tilde{P} = U \Lambda U^*$ yields the eigenvalues $\Lambda = \mathrm{diag}([\lambda_1, \lambda_2, \dots, \lambda_n])$ and eigenvectors in matrix $U = [ u_1, u_2, \dots, u_n ]$. The eigenvalues of $P$ and $\tilde{P}$ stay the same. It is now possible to find the eigenvectors of $P$ with $V = D^{-\frac{1}{2}} U$ \cite{NADLER2008}.
The low-dimensional coordinates in the embedded space $\Psi$ are created using $\Lambda$ and $V$:
\begin{equation*}
\Psi = V \Lambda.
\label{MAP_COORDINATES}
\end{equation*}
Now, for each $p$-dimensional point $x_i$, there is a corresponding $d$-dimensional coordinate, where $d \ll p$. The number of selected dimensions depends on how fast the eigenvalues decay. The coordinates for a single point can be expressed as
\begin{equation}
\Psi_d : x_i \to \left[ \lambda_2 v_2(x_i), \lambda_3 v_3(x_i), \dots, \lambda_{d+1} v_{d+1}(x_i) \right].
\label{DM}
\end{equation}
The diffusion map now embeds the data points $x_i$ while preserving the diffusion distance to a certain bound given that enough eigenvalues are taken into account \cite{COIFMAN2006a}.
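For reference, a compact numpy sketch of the procedure above is given below (function names are ours; following the text, the SVD of the symmetric matrix $\tilde{P}$ is used and the trivial first eigenpair is discarded):
\begin{verbatim}
import numpy as np

def pairwise_sq(X):
    """Pairwise squared Euclidean distances of the rows of X (n x p)."""
    G = X @ X.T
    g = np.diag(G)
    return np.maximum(g[:, None] + g[None, :] - 2.0 * G, 0.0)

def weight_sum(X, eps_grid):
    """L(eps): total affinity, used to locate the linear region."""
    sq = pairwise_sq(X)
    return np.array([np.exp(-sq / e).sum() for e in eps_grid])

def diffusion_map(X, eps, d=2):
    """Embed the rows of X into d diffusion coordinates."""
    W = np.exp(-pairwise_sq(X) / eps)             # Gaussian affinities
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    P_tilde = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    U, lam, _ = np.linalg.svd(P_tilde)            # eigenpairs of P~
    V = d_inv_sqrt[:, None] * U                   # eigenvectors of P
    return V[:, 1:d + 1] * lam[1:d + 1]           # drop trivial pair
\end{verbatim}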
\subsection{Spectral clustering}
Spectral clustering is a method to group samples into clusters by benefitting from the results of spectral methods that reveal the manifold, such as the diffusion map. Spectrum here is understood in the mathematical sense of spectrum of an operator on the matrix $P$. The main idea is that the dimensionality reduction has already simplified the clustering problem so that the clustering itself in the low-dimensional space is an easy task. This leaves the actual clustering for any clustering method that can work with real numbers \cite{NG2001, KANNAN2004, LUXBURG2007}.
The first few dimensions from the diffusion map represent the data up to a relative precision, and thus contain most of the distance differences in the data \cite{COIFMAN2006a}. Therefore, some of the first dimensions are used to represent the data. A threshold at $0$ in the embedded space divides the space between the possible clusters, which means that a linear classifier can be used. With this linear threshold, the second eigenvector separates the data into two clusters in the low-dimensional space. This eigenvector solves the normalized cut problem, which means that the weights between clusters are small while the internal connections between members of the same cluster are strong. Clustering in this manner thus happens through the similarity of transition probabilities between clusters \cite{NG2001, KANNAN2004, SHI2000, MEILA2000}.
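Using the embedding sketch above, the two-cluster split then reduces to a sign test; the variable names and the choice of \verb|chosen_eps| (from the $L(\epsilon)$ sweep) are our own:
\begin{verbatim}
# first nontrivial diffusion coordinate; threshold at 0
# implements the normalized-cut split described above
psi = diffusion_map(X, eps=chosen_eps, d=2)
labels = (psi[:, 0] > 0).astype(int)
\end{verbatim}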
\section{\MakeUppercase{Results}}
\label{sec:results}
The data comes from experiments where participants listened to music. The data analysis was performed on a collection of spatial maps of brain activity. After dimensionality reduction and spectral clustering, the results are presented and compared to more traditional methods.
\subsection{Data description}
In this research the fMRI data are based on the data sets used by Alluri et al.\ \cite{ALLURI2012}. Eleven musicians listened to a 512-second modern tango music piece during the experiment. In the free-listening experiment the expectation was to find relevant brain activity significantly correlating with the music stimulus. The stimuli were represented by musical features used in music information retrieval (MIR) \cite{ALLURI2012}.
After preprocessing, PCA and ICA were performed on each dataset of each participant, and 46 ICA components (i.e., spatial maps) were extracted for each dataset \cite{PUOLIVALI2013, TSATSHIVILI2013}. Then, the temporal courses of the spatial maps were correlated with one musical feature, Brightness \cite{ALLURI2012}. Whenever the correlation coefficient was significant (statistical $p$-value $<0.05$), the corresponding spatial map was selected for further analysis. Altogether, $n=23$ spatial maps were selected from 11 participants. The number of voxels for each spatial map was $p=209{,}633$. So, the 23 by 209,633 data matrix was used for the clustering to find the common spatial map across the 11 participants.
\begin{figure}[tb]
\centering
\includegraphics[width=8.5cm]{ssm_p1f2_epsilon}
\caption{Selecting $\epsilon$ for diffusion map. The red line shows the selected value.}
\label{fig:epsilon}
\end{figure}
\subsection{Data analysis}
The data matrix was analyzed using the methodology explained in Section \ref{sec:method}. The dimensionality of the dataset was reduced and then the spectral clustering was carried out. The weight matrix sum for $\epsilon$ selection is shown in Figure \ref{fig:epsilon}; the used value is in the middle region, highlighted with a straight vertical line. Clustering was performed with only one dimension in the low-dimensional space. To compare the results with more traditional clustering methods, the high-dimensional data was clustered with agglomerative hierarchical clustering \cite{XU2005R} with Euclidean distances using the similarity matrix \cite{ESPOSITO2005} and with the $k$-means algorithm \cite{XU2005R}. The clustering results for two clusters were identical across all the methods.
\begin{figure}[tb]
\centering
\includegraphics[width=8.5cm]{ssm_p1f2_clustering}
\caption{Diffusion map clustering results.}
\label{fig:dm_clustering}
\end{figure}
Figure \ref{fig:dm_clustering} shows the resulting clustering from the diffusion map. The figure uses the first two eigenpairs for the low-dimensional presentation; for these two clusters even one dimension is enough. The spatial maps are numbered and the two clusters are marked with different symbols. The dividing spectral clustering line is at $0$ along the horizontal axis, so the points to the right of $0$ are in one cluster and those to the left in another. Two clusters, dense and sparse, are detected using this threshold. The dense cluster, marked with crosses, contains components that are considered to be similar according to this clustering. Traditional PCA and kernel PCA with a Gaussian kernel are compared to the diffusion map for spectral clustering \cite{MULLER2001,WANG2012}. In Figure~\ref{fig:pca}, the diffusion map with a proper $\epsilon$ creates firmer connections, which eases the clustering task. The effect of the diffusion distance metric is also seen.
\begin{figure}[htb]
\centering
\includegraphics[width=8.5cm]{ssm_p1f2_dr_comp}
\caption{Dimensionality reduction method comparison. The coordinates have been scaled.}
\label{fig:pca}
\end{figure}
In Figure \ref{fig:aggl} the dendrogram produced by the agglomerative clustering is shown. The clustering results are the same as with the dimensionality reduction approach. The separation is visible at the highest level and the structure corresponds to the distances seen in Figure \ref{fig:dm_clustering}. All the points in, e.g., the dense cluster in Figure \ref{fig:dm_clustering} are in the left cluster of Figure \ref{fig:aggl}. This comparison shows the evident separation between the two clusters and also validates the results from diffusion map methodology.
\begin{figure}[htb]
\centering
\includegraphics[width=8.5cm]{ssm_p1f2_aggl_dendro}
\caption{Dendrogram from the agglomerative clustering.}
\label{fig:aggl}
\end{figure}
Figure \ref{fig:brain3} illustrates the kind of spatial maps that are found in the dense cluster. Dark areas along the lateral sides are used to highlight those voxels whose values differed by more than three standard deviations from the mean. The numbers marking the slices are their Z-coordinates. The corresponding low-dimensional point is numbered 3 in Figure \ref{fig:dm_clustering}. It is now possible to inspect the clusters more closely with domain experts.
\begin{figure}[htb]
\centering
\includegraphics[width=8.5cm]{brain3}
\caption{Example spatial map in the dense cluster, this is data point number 3. Dark lateral areas mark more than three standard deviations from the mean, e.g.\ in slices 36 and 38.}
\label{fig:brain3}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=8.5cm]{ssm_p1f2_corr_matrix}
\caption{Absolute correlation values between the spatial maps.}
\label{fig:corr_matrix}
\end{figure}
Figure \ref{fig:corr_matrix} shows the correlation matrix of all the 23 spatial maps. This is a way to inspect the similarity of the brain activity. The correlation matrix is also the basis of analysis for the hierarchical clustering \cite{ESPOSITO2005}. In the figure it can be seen that there is some correlation between some of the spatial maps, but not so much between others.
Figures \ref{fig:corr_dense} and \ref{fig:corr_sparse} illustrate the internal structure of the clusters by showing the correlation matrices for the individual clusters. The members in the dense cluster have higher correlation among themselves than the members in the sparse cluster. This information is also seen in Figure \ref{fig:dm_clustering} where the diffusion distances inside the dense cluster are smaller.
\begin{figure}[htb]
\centering
\includegraphics[width=8cm]{ssm_p1f2_corr_dense_matrix}
\caption{Absolute correlation values between the spatial maps that belong to the dense cluster.}
\label{fig:corr_dense}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=8cm]{ssm_p1f2_corr_sparse_matrix}
\caption{Absolute correlation values between the spatial maps that belong to the sparse cluster.}
\label{fig:corr_sparse}
\end{figure}
\section{\MakeUppercase{Discussion}}
\label{sec:discussion}
In this paper we have proposed a theoretically sound non-linear analysis method for clustering ICA components of fMRI imaging. The clustering is based on diffusion map manifold learning, which reduces the dimensionality of the data and enables clustering algorithms to perform their task. This approach is more suitable for high-dimensional data than just applying clustering methods that are designed for low-dimensional data. The assumption of non-linear nature of brain activity also promotes the use of methods designed for such problems. Particularly, the advantage of diffusion map is in visualizing the distribution of all data samples ($n$ spatial maps with $p$ voxels in each) by using only two coordinates. As seen in the visualization, it becomes more straightforward to determine the compact cluster from the two-dimensional plot derived from the 209,633-dimensional feature space than from the similarity matrix.
The results show that the proposed methodology separates groups of similarly behaving spatial maps. Results from diffusion map spectral clustering are similar to hierarchical agglomerative clustering and $k$-means clustering. The small sample size and good separation of the clusters make the clustering problem rather simple to solve.
Moreover, the visualization obtained from diffusion map offers an interpretation for clustering.
The proposed methodology should be useful for analyzing the function of the brain and understanding which stimuli create similar spatial responses in which group of participants. The domain experts can gain more basis for the interpretation of brain activity when similar activities are already clustered using automated processes suitable for the task. Furthermore, visualization helps to identify the relationships of the clusters.
Diffusion map execution times grow quickly as the number of samples becomes very large. This can be overcome to a certain degree with out-of-sample extension. Big sample sizes are also a problem with traditional clustering methods. However, the diffusion map offers a non-linear approach and is suitable for high-dimensional data. Both properties hold for fMRI imaging data.
The analysis could be expanded to more musical features and to bigger datasets in order to further validate its usefulness in understanding the human brain during listening to music. The method is not restricted to a certain kind of stimulus, so it is usable with diverse fMRI experimental setups. Furthermore, in situations where traditional clustering of spatial maps fails, the proposed methodology might give more reasonable results.
\bibliographystyle{IEEEbib}
\input{dm_fmri_ica.bbl}
\end{document}
\section{Introduction}
Superconvergence of finite element solutions to partial differential equations has been studied intensively for many decades \cite{Douglas1973Superconvergence, Levine1985Superconvergent, Wahlbin1995Superconvergence, Lin2004Natural}. It is shown to be an important tool to develop high-performance finite elements. Various postprocessing techniques exist to recover finite element solutions or their derivatives in order to improve accuracy, such as the popular Zienkiewicz-Zhu (ZZ) method for gradient recovery \cite{zhu1990superconvergence, Zienkiewicz1992The, zienkiewicz1992superconvergent}.
In early works there appeared a dilemma: classic superconvergence theory usually applies to specially structured grids, such as strongly regular grids composed of equilateral triangles \cite{Huang_2008}, but mesh generation techniques can hardly satisfy this requirement. Thus there was a serious gap between the theory of superconvergence and mesh generation.
Fortunately, the gap is gradually closing with the development of superconvergence theory and mesh generation technologies. On the one hand, one notable example of this development is the work of Bank and Xu \cite{Bank2003Asymptotically, Bank2004Asymptotically}, who studied superconvergence on mildly structured grids where most pairs of elements form an approximate parallelogram. They also proved that the linear finite element solution is superclose to the linear interpolant of the exact solution. Based on mildly structured grids, Xu and Zhang \cite{Xu2003Analysis} established superconvergence estimates for three gradient recovery operators: weighted averaging, local ${L^2}$-projection, and local discrete least-squares fitting. Huang and Xu further investigated the superconvergence properties of the quadratic triangular element on mildly structured grids \cite{Huang_2008}. On the other hand, centroidal Voronoi tessellation (CVT)-based methods have been successfully applied to high-quality mesh generation \cite{du1999centroidal}, and the superconvergence property on CVT-based grids has only been verified numerically by Huang \cite{huang2008centroidal}. However, some pressing problems remain. The mesh condition for mildly structured grids is hypothetical in the work of Xu, and currently there are few mesh generation technologies that can theoretically meet it, even though mildly structured grids can be generated numerically by some grid generators. Meanwhile, due to the lack of a derived mesh condition on CVT-based grids, the superconvergence property on CVT-based grids has only been verified by numerical examples, without theoretical results.
In recent years, the so-called bubble placement method (BPM) has been systematically studied by Nie et al. \cite{nie2010node, liu2010node, qi2014acceleration}. The advantage of BPM is that it generates high-quality grids on many complexly bounded 2D and 3D domains and can easily be used in adaptive finite element methods and for anisotropic problems \cite{Cai2013Numerical, Zhang2014An, zhang2015adaptive, Zhou2016A, wang2016npbs}. In addition, due to the natural parallelism of BPM, computational efficiency has been greatly improved for solving large-scale problems \cite{Nie2014Parallel}. Yet, superconvergence on BPM-based grids has not been explored. The goal of this paper is to derive a mesh condition on BPM-based grids such that superconvergence results can be obtained both theoretically and numerically.
In this paper, we carefully investigate the superconvergence properties on BPM-based grids. Our work has two main steps. In the first step, a mesh condition where the actual length $l_e$ of any edge $e$ and the desired length $h$ differ only by a higher-order quantity of the parameter $h$:
\begin{equation}
|l_e-h|={\cal O}{(h^{1+\alpha})}, \alpha >0
\end{equation}
is derived from an established optimization model of BPM. The second major component of our analysis is two superconvergence results, one for linear finite elements
\begin{equation}
{\left\| {{u_h} - {u_I}} \right\|_{1,\Omega }} ={\cal O} (h^{1+\min (\alpha, 1/2 )}),
\end{equation}
and quadratic finite elements
\begin{equation}
{\left\| {{u_h} - {\Pi_Q u}} \right\|_{1,\Omega }} ={\cal O}(h^{2+\min (\alpha, 1/2 )}).
\end{equation}
on BPM-based grids, where $u_I$ and ${\Pi_Q u}$ are the piecewise linear and quadratic interpolants of $u$, respectively. These superconvergence results can be used to derive a posteriori error estimates of the gradient recovery operator for many popular methods, like the ZZ patch recovery and the polynomial preserving recovery \cite{Xu2003Analysis,Zhang2004Polynomial}.
The rest of this paper is organized as follows. Section 2 gives the derivation of the mesh condition and the superconvergence results on BPM-based grids. Some numerical experiments on elliptic boundary value problems over different computational domains are given in Section 3 and further discussed in Section 4. Conclusions and future work are given in Section 5.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figures/step.png}
\caption{The flowchart of BPM.}\label{step}
\end{figure}
\section{Methodology}
\subsection{Preliminaries}
Bubble placement method was originally inspired by the idea of bubble meshing\cite{shimada1998quadrilateral, Yamakawa2002Quad} and the principle of molecular dynamics. The computational domain is regarded as a force region with viscosity, and bubbles are distributed in the domain. Each bubble is driven by the interaction forces\cite{Shimada1998Automatic} from its adjacent bubbles
\begin{equation}
f\left( w \right) =
\begin{cases}
{k_0}\left( {1.25{w^3} - 2.375{w^2} + 1.125} \right) & 0 \le w \le 1.5\\
0 & 1.5 < w
\end{cases}
\end{equation}
and its center is taken as a node placed in the domain, where $w=\frac{l_{ij}}{\bar{l_{ij}}}$, $l_{ij}$ is the actual distance between bubble $i$ and bubble $j$, and $\bar{l_{ij}}$ is the assigned distance given by the user. The motion of each bubble satisfies Newton's second law of motion. BPM can be divided into three main steps: initialization, dynamic simulation, and bubble insertion and deletion operations. BPM is controlled by two nested loops, as schematically illustrated in Figure \ref{step}.
The inner loop (dynamic simulation) ensures a good bubble distribution when forces are balanced, and the outer loop (insertion and deletion operations) controls the bubble number by adding or deleting bubbles so that adjacent bubbles become as close to tangent as possible at the force-equilibrium state. Together they produce a closely-packed configuration of bubbles, so that a well-shaped and size-guaranteed Delaunay triangulation can be created by connecting the bubble centers.
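For concreteness, a minimal Python sketch of the inter-force formula (2.1) and one damped update step is given below; a uniform desired size, unit bubble mass, and explicit time stepping are our own simplifications, and the actual BPM implementation may differ:
\begin{verbatim}
import numpy as np

def force(w, k0=1.0):
    """Inter-bubble force of Eq. (2.1); w = l_ij / lbar_ij."""
    return np.where(w <= 1.5,
                    k0 * (1.25 * w**3 - 2.375 * w**2 + 1.125), 0.0)

def step(pos, vel, lbar, dt=0.1, c=0.5, m=1.0):
    """One damped Newtonian update of bubble positions (2D)."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[i] - pos[j]
            lij = np.linalg.norm(d)
            w = lij / lbar
            acc[i] += force(w) * d / (lij + 1e-12)  # along the chord
    acc = (acc - c * vel) / m                       # viscous damping
    vel += dt * acc
    pos += dt * vel
    return pos, vel
\end{verbatim}
Note that $f(1)=0$, so the force vanishes exactly when two bubbles are tangent, matching the description above.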
\subsubsection{Inner loop}
In the inner loop, the bubble motion is similar to that of a damped oscillator. In the initial state, there is potential energy between bubbles, which transforms into kinetic energy during the simulation. The motion of the bubbles also leads to energy loss, since the bubble system has viscous damping forces. The potential energy of the bubble system reaches its minimum at the force-equilibrium state, at which moment the resultant force on each interior bubble vanishes.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figures/bubble_distance.png}
\caption{The distance between bubble $i$ and bubble $j$. From left to right: overlapping bubbles with a repulsive force (the sign of this force is "+" by the inter-force formula), tangent bubbles with no force between them, and disjoint bubbles with an attracting force (the sign is "-"). }\label{distance}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figures/1D_force.png}
\caption{Force-equilibrium state in one dimension. For bubble $k$, there is a repulsive force ${F_{(k,k-1)}}$ from bubble $k-1$. When the system reaches an equilibrium state, the resultant external force on bubble $k$ is zero. Therefore, ${F_{(k,k+1)}}$ and ${F_{(k,k-1)}}$ must be equal in magnitude but opposite in direction, whence ${F_{(k,k+1)}}$ must also be repulsive. }\label{1D}
\end{figure}
All interior bubbles at the force-equilibrium state possess a specific characteristic: for any interior bubble, the forces exerted on it by its adjacent bubbles are the same in magnitude and sign. Let us take a 1D case to clarify. For any interior bubble $k$, if it overlaps with its left adjacent bubble $k-1$, there is a repulsive force ${F_{(k,k-1)}}$ between bubbles $k$ and $k-1$. At the force-equilibrium state, since the resultant external force on bubble $k$ vanishes, there must be a force of the same magnitude in the opposite direction. So the right adjacent bubble $k+1$ must overlap with bubble $k$, and ${F_{(k,k+1)}}$ equals ${F_{(k,k-1)}}$ in magnitude but is opposite in direction. By the inter-force formula (2.1), the sign of the inter-force is positive if two adjacent bubbles overlap, and negative otherwise (shown in Figure \ref{distance}). By analogy, the same holds for every interior bubble up to the terminal bubbles (terminal bubbles are fixed, but inter-bubble forces still act between terminal and interior bubbles). Figure \ref{1D} visually displays the above statement.
Now, let us introduce the bubble fusion degree ${C_{ij}} = {\textstyle{{{{\bar l}_{ij}} - {l_{ij}}} \over {{\bar {l_{ij}}}}}}=1-w$, which characterizes the relative overlapping/disjoint degree of bubbles $i$ and $j$. It is easy to derive the following relationship:
\begin{equation}
\left\{ \begin{array}{l}
{C_{ij}} > 0 \Rightarrow \overline {{l_{ij}}} > {l_{ij}},\text{Bubble $i$ overlaps with bubble $j$}. \\
{C_{ij}} = 0 \Rightarrow \overline {{l_{ij}}} = {l_{ij}},\text{Bubble $i$ is tangent to bubble $j$}.\\
{C_{ij}} < 0 \Rightarrow \overline {{l_{ij}}} < {l_{ij}},\text{Bubble $i$ is disjoint from bubble $j$}.
\end{array} \right.
\end{equation}
Note that if the inter-forces between adjacent bubbles are the same in magnitude and sign, the variable $w$ becomes a constant by the monotonicity of the inter-force formula (2.1) on the interval [0, 1.5]. Undoubtedly, the bubble fusion degree of any two adjacent bubbles is then a constant.
The 2D case can be viewed as a 1D case along any direction, and for all interior bubbles, in whatever direction is chosen, the bubble fusion degree of any two adjacent bubbles is a constant, as in one dimension. The bubble distribution (or the corresponding node distribution) with this characteristic is called the force-equilibrium distribution for convenience.
\begin{figure}
\centering
\subfloat[$T=0$]{
\includegraphics[width=0.28\textwidth]{figures/bubble1_1.png}
}
\subfloat[$T=100$]{
\includegraphics[width=0.28\textwidth]{figures/bubble1_3.png}
}
\subfloat[$T=200$]{
\includegraphics[width=0.28\textwidth]{figures/bubble1_6.png}
}
\subfloat[$T=0$]{
\includegraphics[width=0.28\textwidth]{figures/Cij_c0.png}
}
\subfloat[$T=100$]{
\includegraphics[width=0.28\textwidth]{figures/Cij_c100.png}
}
\subfloat[$T=200$]{
\includegraphics[width=0.28\textwidth]{figures/Cij_c200.png}
}
\caption{The bubble distributions and their corresponding $C_{ij}$ values at different time steps(now the total number of bubbles $N=606$). }\label{simulation_1}
\end{figure}
Choosing the assigned size function as $d\left( x,y \right)=h=0.1$, we execute the BPM algorithm on a unit circle region. As shown in Figure \ref{simulation_1}, the initial bubble distribution is chaotic and then gradually tends to the force-equilibrium distribution as the time step T increases. Meanwhile, the corresponding $C_{ij}$ values tend towards the constant 0.28, which validates our inference.
In a word, the bubble system reaches the force-equilibrium state through the inner loop, and the bubble fusion degree of any two adjacent bubbles is then a constant.
\subsubsection{Outer loop}
Let
\begin{equation}
{\epsilon}^N=\mathop {\max }\limits_{ {i,j} \in {\Gamma}_N}|C_{ij}|,
\end{equation}
where $N$ is the total number of bubbles in the current loop, and $\Gamma_N$ denotes the bubble set at the force-equilibrium state with $N$ bubbles. The outer loop aims to control the number of bubbles via the overlapping ratio \cite{liu2010node}. More specifically, it deletes bubbles whose overlapping ratio is too large and adds new bubbles near bubbles whose overlapping ratio is too small. If ${\epsilon}^N$ no longer decreases, the iteration terminates.
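A sketch of this control rule is given below; the thresholds and data layout are our own assumptions, and the actual overlapping-ratio criterion of \cite{liu2010node} may differ:
\begin{verbatim}
def adjust_bubbles(C, hi=0.3, lo=-0.3):
    """Outer-loop sketch driven by fusion degrees.

    C : dict mapping adjacent pairs (i, j) -> fusion degree C_ij.
    Returns bubbles to delete (too much overlap), pairs where a
    bubble should be inserted (too large a gap), and eps^N.
    """
    delete = {i for (i, j), c in C.items() if c > hi}
    insert = [(i, j) for (i, j), c in C.items() if c < lo]
    eps_N = max(abs(c) for c in C.values())   # current eps^N
    return delete, insert, eps_N
\end{verbatim}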
\begin{figure}
\centering
\subfloat[$N=606$]{
\includegraphics[width=0.3\textwidth]{figures/Nbubble1_6.png}
}
\subfloat[$N=488$]{
\includegraphics[width=0.3\textwidth]{figures/bubble2_6.png}
}
\subfloat[$N=472$]{
\includegraphics[width=0.3\textwidth]{figures/bubble_3_6.png}
}
\subfloat[${\epsilon}^N=0.317$]{
\includegraphics[width=0.3\textwidth]{figures/Cij_c677.png}
}
\subfloat[${\epsilon}^N=0.186$]{
\includegraphics[width=0.3\textwidth]{figures/Cij_c488.png}
}
\subfloat[${\epsilon}^N=0.112$]{
\includegraphics[width=0.3\textwidth]{figures/Cij_c472.png}
}
\caption{ The bubble distributions and their corresponding $C_{ij}$ values after deletion operations. }\label{simulation_2}
\end{figure}
For the same circle region as in Section 2.1.1, Figure \ref{simulation_2} shows the bubble distributions after several rounds of adding or deleting bubbles. It can be seen that the bubbles gradually tend to be tangent without large overlaps, meaning that the actual distance between two adjacent bubbles is extremely close to the assigned size function. From the changes in $C_{ij}$, we can clearly see that the values of $C_{ij}$ and ${\epsilon}^N$ both decrease, implying that the insertion and deletion operations are effective.
In summary, BPM can be described mathematically as follows: find a proper number of bubbles ${\bar N}$ such that
\begin{equation}
{\Gamma _{\bar N}} = \left\{ {{\Gamma _N}:\mathop {\min }\limits_N \left\{ {\mathop {\max }\limits_{i,j \in {\Gamma _N}} \left| {{C_{ij}}} \right|} \right\}} \right\},
\end{equation}
where $N$ is the total number of bubbles, $\Gamma_N$ denotes the bubble set at force-equilibrium state with the number $N$, and $\Gamma_{\bar N}$ is the final output.
\begin{remark}
The properties of inner loop and outer loop are also suitable for non-uniform case. For the size function
\begin{equation}
d\left( {x,y} \right) =
\begin{cases}
0.1 & \sqrt {{x^2} + {y^2}} < 2,\\
0.2 \times \left| {\sqrt {{x^2} + {y^2}} - 2} \right| + 0.1 & \sqrt {{x^2} + {y^2}} \ge 2.
\end{cases}
\end{equation}
We execute the BPM algorithm on the square region $[-3, 3] \times [-3, 3]$, and some numerical evidence is given in Figure \ref{simulation_11}.
\end{remark}
\begin{figure}[h]
\centering
\subfloat[$Nt=200$, $N=953$]{
\includegraphics[width=0.35\textwidth]{figures/non_0.png}
}
\subfloat[$Nt=200$, $N=862$]{
\includegraphics[width=0.35\textwidth]{figures/non_1.png}
}
\subfloat[$Nt=200$, $N=953$]{
\includegraphics[width=0.32\textwidth]{figures/Cij_s1.png}
}
\subfloat[$Nt=200$, $N=862$]{
\includegraphics[width=0.32\textwidth]{figures/Cij_s2.png}
}
\caption{The bubble distributions and their corresponding $C_{ij}$ values. }\label{simulation_11}
\end{figure}
\subsection{The mesh condition of BPM-based grids}
We are aware that $\epsilon ^{\bar N}$ is a very important value throughout the simulation. In fact, $\epsilon ^{\bar N}$ is equivalent to the maximal relative error of all side lengths, so it is very useful for studying the mesh condition of BPM-based grids. Actually, $\epsilon ^{\bar N}$ is related to the computational domain and the given size function. Next, $\epsilon ^{\bar N}$ will be discussed under different circumstances. For clarity of description, we mainly consider the uniform distribution (the size function is a constant $h$), so
\begin{equation*}
{\epsilon}^N=\mathop {\max }\limits_{ {i,j} \in {\Gamma}_N}\left| \dfrac{h-{l_{ij}}}{h}\right|.
\end{equation*}
We first call a subdivision 'ideal' if the prescribed region can be exactly covered by $\bar N$ equilateral triangles with side length $h$. For instance, an equilateral triangle region with side length $1$ can be divided into 25 equilateral triangles with side length $0.2$, so that $\epsilon ^{\bar N}=0$.
However, the more frequently encountered case is not an 'ideal subdivision', for example a square with side length $1$ and desired size $h=0.3$. In this case we have to look for a subdivision with $\bar N$ elements such that $\epsilon ^{\bar N}$ is optimal. Though we do not know the exact value at first, it is easy to estimate its rough range.
For simplicity, let us start from a 1D case. A domain with length $L$ is required to be uniformly divided into elements of size $h$. Let $N_e = \lfloor {\frac{L}{h}} \rfloor$ be the number of such elements, and let $l = L - {N_e} \cdot h$ be the remaining part after the uniform subdivision; then $l \in \left[ {0,h} \right)$ (if $l = 0$, we have an 'ideal subdivision'). The element $\delta$ with length $l$ has an error $e_{\delta}$, while the other elements are ideal. So the mesh error $e=\max \left\lbrace e_{\delta},0\right\rbrace = \left| {h - l} \right| = \left| {h - \left( {L - {N_e} \cdot h} \right)} \right| = \left| {\left( {{N_e} + 1} \right) \cdot h - L} \right| = O\left( h \right)$.
\begin{figure}
\centering
\subfloat[before error averagely distributed]{
\includegraphics[width=0.35\textwidth]{figures/1D_before.png}
}
\subfloat[after error averagely distributed]{
\includegraphics[width=0.35\textwidth]{figures/1D_after.png}
}
\caption{ (a) shows a bubble distribution without the force-equilibrium characteristic and (b) displays the bubble distribution at the force-equilibrium state. The distance between two adjacent bubbles is equivalent to the length of the corresponding element.}\label{1D_errordistribution}
\end{figure}
For BPM-based grids, the bubble fusion degree of any two adjacent bubbles is a constant at the force-equilibrium state, so the error $|l_{ij}-h|$ of each element is a constant, implying that the mesh error is averaged over all elements, as also illustrated in Figure \ref{1D_errordistribution}. Thus
\begin{equation*}
e = \left| {h - l_{ij}} \right| = \left| {\frac{{\left( {{N_e} + 1} \right) \cdot h - L}}{{{N_e} + 1}}} \right| \le \frac{h}{{{N_e} + 1}} \le \frac{h}{{N_e}} = \frac{h}{{\left\lfloor {\frac{L}{h}} \right\rfloor }} = O\left( {{h^2}} \right),
\end{equation*}
which presents a higher accuracy.
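As a concrete illustration (the numbers are our own): take $L=1$ and $h=0.3$, so $N_e=3$ and $l=0.1$. Without averaging, the boundary element carries the whole error $|h-l|=0.2={\cal O}(h)$, while the force-equilibrium distribution spreads it over $N_e+1=4$ equal elements of length $0.25$, giving $e=|0.3-0.25|=0.05 \le \frac{h}{N_e+1}=0.075$, in line with the ${\cal O}(h^2)$ bound above.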
Now consider a 2D domain. A plane domain with area $S$ is required to be uniformly divided into equilateral triangles with side length $h$. The area of an equilateral triangle is ${s_t} = \frac{{\sqrt 3 }}{4}{h^2} = O\left( {{h^2}} \right)$. However, ${N_e} = \lfloor {\frac{S}{{{s_t}}}} \rfloor$ is not a good estimate of the number of elements, since it ignores the influence of the remaining part near the boundary of the prescribed region. For instance, when dividing a unit circle into equilateral triangles with side length 0.3 starting from its center, there always remains a ring-like area near the boundary that cannot be exactly covered by equilateral triangles with side length 0.3. So $\lfloor {\frac{S}{{{s_t}}}} \rfloor$ overestimates, and it should be replaced by ${N_e} = \lfloor {\frac{S}{{{s_t}}}} \rfloor - n$, $n\in {Z^ + }$ with $n \ll {N_e}$. Let $s = S - {N_e} \cdot {s_t}$ be the area of the remaining part; then $0 \le s < \left( {n + 1} \right){s_t}$. Similar to the 1D case, we have:
\begin{equation*}
\mathop {\lim }\limits_{h \to 0} \frac{e}{h^2} = \mathop {\lim }\limits_{h \to 0} \frac{\left| \frac{S-{N_e} \cdot s_t}{N_e} \right|}{h^2} \le \mathop {\lim }\limits_{h \to 0} \frac{\frac{\left( {n + 1} \right){s_t}}{N_e}}{h^2} = \mathop {\lim }\limits_{h \to 0} \frac{\left( {n + 1} \right){s_t}}{\left( \left\lfloor \frac{S}{s_t} \right\rfloor - n \right) h^2} = 0.
\end{equation*}
Consequently, the error of each edge satisfies ${e_h} = \left| {h - l_e} \right|={\cal O}(h^{1+\alpha})$, $\alpha>0$, where $l_e$ is the actual length of the edge in BPM-based grids. The above results hold for all elements, so for the current bubble number $N$ we have ${\epsilon}^N={\cal O}(h^{\alpha})$. At this moment $\epsilon ^N$ is not necessarily equal to the $\epsilon ^{\bar N}$ we are looking for in our model, but the following inequality certainly holds:
\begin{equation*}
\epsilon ^{\bar N} \le {\epsilon ^N}.
\end{equation*}
With these, BPM generates well-spaced bubbles whose fusion degrees satisfy $|C_{ij}|={\cal O}(h^{\alpha})$, so that size-guaranteed grids, in which the actual length $l_e$ of each edge differs from the given size $h$ only by a higher-order quantity of $h$,
\begin{equation}
|h-l_e|={\cal O}(h^{1+\alpha})
\end{equation}
can be created by connecting the centers of the bubbles. Naturally, such grids are well shaped.
\begin{remark}
In this paper, $l_{ij}$ denotes the distance between bubbles $i$ and $j$, and $l_e$ denotes the length of edge $e$. They are essentially equivalent, merely written in different forms, as is clearly visible in Figure \ref{Notation}(a).
\end{remark}
\begin{remark}
For the non-uniform case, the error is no longer distributed evenly, but with different weights:
\begin{equation*}
{\lambda _e} = \frac{{{{\bar l}_e}}}{{\sum\limits_{k = 1}^{N_e} {{{\bar l}_k}} }}
\end{equation*}
where $\lambda_e$ is the weight for edge $e$, ${\bar {l_e}}$ is the desired length of edge $e$ (computed by the size function), ${N_e}$ means the total number of elements. In addition, we should assume that for any elements, the desired length of their three sides satisfy ${\bar {l_{e + 1}}} \simeq {\bar {l_e}} \simeq {\bar {l_{e - 1}}}$. This assumption is in accordance with the case of mesh refinement during adaptive iterations and many non-uniform triangulations. The rest will be treated as uniform case, and it will draw a conclusion that for any edge $e$, $|{\bar {l_e}}-{l_e}|={\cal O}({\bar {l_e}}^{1+{\alpha}}) ({\alpha}>0)$.
\end{remark}
\subsection{Superconvergence on BPM-based grids}
Let us consider an interior edge $e$ shared by two elements $\tau$ and $\tau'$, as shown in Figure \ref{Notation}(b). Let $l_e$ denote the length of edge $e$. For the element $\tau$, let ${l_{e-1}}$ and ${l_{e+1}}$ be the lengths of the two other edges; with respect to $\tau'$, ${l_{e' - 1}}$ and ${l_{e' + 1}}$ are the lengths of its two other edges.
\begin{figure}
\centering
\subfloat[]{
\includegraphics[width=0.4\textwidth]{figures/remark.png}
}
\subfloat[]{
\includegraphics[width=0.3\textwidth]{figures/Notation.png}
}
\caption{ Notations }\label{Notation}
\end{figure}
Properties. Suppose a triangulation ${{\cal T}_h}$ is generated by BPM. Then, from Eq. (2.6):
\begin{enumerate}
\item For element $\tau$, using the triangle inequality, we have
\begin{equation*}
\left| {{l_{e + 1}} - {l_{e - 1}}} \right| = {\cal O}({h^{1 + \alpha }})
\end{equation*}
\item For two elements $\tau$ and $\tau '$, using the triangle inequality, we have
\begin{equation*}
\left| {{l_{e + 1}} - {l_{e' + 1}}} \right| = {\cal O}({h^{1 + \alpha }})
\end{equation*}
\end{enumerate}
These properties are analogous to the definition of mildly structured grids \cite{Xu2003Analysis}, which requires that two adjacent triangles (sharing a common edge) form an ${\cal O}(h^{1+\alpha})$ $({\alpha}>0)$ approximate parallelogram, i.e., the lengths of any two opposite edges differ only by ${\cal O}(h^{1+\alpha})$. Consequently, BPM-based grids inherit all superconvergence estimates for mildly structured grids without any changes.
Let $\Omega \subset {{R}^2}$ be a bounded polygon with boundary $\partial \Omega $. Consider the problem: find $u \in V$ such that
\begin{equation}
a\left( {u,v} \right) =\int_\Omega {\nabla u\nabla v{\kern 1pt} dx} = \left( {f,v} \right),\forall v \in V,
\end{equation}
where $\left( { \cdot , \cdot } \right)$ denotes the inner product in the space ${L^2}\left( \Omega \right)$ and $V \subset {H^1}\left( \Omega \right)$; the precise choice of $V$ depends on the boundary conditions. It is known that $a\left( { \cdot , \cdot } \right)$ is a bilinear form which satisfies the following conditions:
\begin{enumerate}
\item (Continuity). There exists $C \ge 0$ such that
\begin{equation*}
\left| {a\left( {u,v} \right)} \right| \le C{\left\| u \right\|_{1,\Omega }}{\left\| v \right\|_{1,\Omega }},
\end{equation*}
for all $u,v \in V$.
\item (Coerciveness). There exists $M > 0$ such that
\begin{equation*}
a\left( {v,v} \right) \ge M\left\| v \right\|_{_{1,\Omega }}^2,\forall v \in V.
\end{equation*}
\end{enumerate}
Let ${V_h}^k =\{v_h:v_h \in {H^1}(\Omega),v_h|_{\tau} \in P_k(\tau)\}$, $k=1,2$, be the conforming finite element space associated with triangulation ${{\cal T}_h}$. Here $P_k$ denotes the set of polynomials with degree $\leq k$. The finite element solution ${u_h} \in {V_h}^k$ satisfies
\begin{equation}
a\left( {{u_h},v} \right) = \left( {f,v} \right),\forall v \in {V_h}^k.
\end{equation}
The following two lemmas, which give superconvergence results on linear and quadratic elements for the Poisson problem, are simple modifications of Lemma 2.1 in \cite{Xu2003Analysis} and Theorem 4.4 in \cite{Huang_2008}.
\begin{lemma}
For triangulation ${{\cal T}_h}$ generated by bubble-type mesh generation, for any ${v_h} \in {V_h}^k$
\begin{equation}
\left| {\int_\Omega {\nabla ( {u - {u_I}} ) \nabla {v_h}} } \right| \lesssim {h^{1+\min (\alpha, 1/2 )}}{\| {{v_h}} \|_{1,\Omega }}.
\end{equation}
where ${u_I}$ is the linear interpolation of $u$ and $k=1$.
\end{lemma}
\begin{lemma}
For triangulation ${{\cal T}_h}$ generated by bubble-type mesh generation, for any ${v_h} \in {V_h}^k$
\begin{equation}
\left| {\int_\Omega {\nabla ( {u - {\Pi_Q u}} ) \nabla {v_h}} } \right| \lesssim {h^{2+\min (\alpha, 1/2 )}}{\| {{v_h}} \|_{1,\Omega }}.
\end{equation}
where ${\Pi_Q u}$ is the quadratic interpolation of $u$ and $k=2$.
\end{lemma}
\begin{remark}
The arguments for these lemmas are the same as those for Lemma 2.1 in \cite{Xu2003Analysis} and Theorem 4.4 in \cite{Huang_2008}, and the required adjustments are trivial. We omit the details; see \cite{Xu2003Analysis,Huang_2008}.
\end{remark}
\begin{theorem}
Assume that the solution of (2.7) satisfies $u \in {H^3}\left( \Omega \right) \cap W_\infty ^2\left( \Omega \right)$, and ${u_h}$ is the solution of (2.8). Let ${u_I} \in {V_h}^1$ and ${\Pi_Q u} \in {V_h}^2$ be the linear and quadratic interpolation of $u$, respectively. For triangulation ${{\cal T}_h}$ derived from BPM-based grids, we have
\begin{equation}
{\left\| {{u_h} - {u_I}} \right\|_{1,\Omega }} ={\cal O} (h^{1+\min (\alpha, 1/2 )}),
\end{equation}
and
\begin{equation}
{\left\| {{u_h} - {\Pi_Q u}} \right\|_{1,\Omega }} ={\cal O}(h^{2+\min (\alpha, 1/2 )}).
\end{equation}
\end{theorem}
\begin{proof}
Taking $v_h={u_h}-{u_I}$ in Lemma 2.1, we have
\begin{equation*}
\begin{split}
{\left\| {{u_h} - {u_I}} \right\|_{1,\Omega }}^2
&\lesssim a(u_h-u_I, u_h-u_I)
=a(u-u_I, u_h-u_I)\\
&\le \left| {\int_\Omega {\nabla \left( {u - {u_I}} \right) \nabla \left( {u_h-u_I} \right)} } \right|\\
&\lesssim {h^{1+\min (\alpha, 1/2 )}}{\| {{u_h-u_I}} \|_{1,\Omega }}
\end{split}
\end{equation*}
The proof is completed by canceling ${\| u_h-u_I \|_{1,\Omega }}$ on both sides of the inequality. Similarly, taking $v_h={u_h}-{\Pi_Q u}$ in Lemma 2.2, (2.10) is easily obtained.
\end{proof}
\section{Numerical examples}
In this section, we report some numerical examples to support the theoretical estimates and verify the superconvergence property when solving the Poisson equation on BPM-based grids. The examples considered vary from an 'ideal subdivision', such as a unit equilateral triangle region, to 'non-ideal subdivisions'.
In order to measure the mesh shape quality simply and clearly, we define the mesh shape quality measure as the ratio between twice the radius of the largest inscribed circle and the radius of the smallest circumscribed circle \cite{Persson2004A}, which is very similar to the concept of `radius ratio':
\begin{equation*}
q{(a,b,c)} = \frac{{2{r_{in}}}}{{{r_{out}}}} = \frac{{{(b + c - a)(c + a - b)(a + b - c)}}}{{{abc}}},
\end{equation*}
where $a$, $b$, $c$ are the side lengths; an equilateral triangle has $q=1$. We also define the average mesh quality over the placement area:
\begin{equation*}
{Q_{avg}} = \frac{1}{M}\sum\limits_{m = 1}^M {{q_m}}
\end{equation*}
where $M$ represents the number of elements and ${q_m}$ is the mesh shape quality of the $m$th element. The closer ${Q_{avg}}$ is to 1, the more regular the grid.
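For concreteness, both quality measures can be evaluated directly from the side lengths of each triangle. The following Python sketch is a minimal illustration (the function names and the triangle list format are ours, not part of the mesh generator):
\begin{verbatim}
# Minimal sketch: mesh shape quality q(a,b,c) and average quality
# Q_avg; 'triangles' is an iterable of (a, b, c) side-length triples.
def q(a, b, c):
    # Twice the inradius over the circumradius; equals 1 for an
    # equilateral triangle.
    return (b + c - a) * (c + a - b) * (a + b - c) / (a * b * c)

def q_avg(triangles):
    qs = [q(a, b, c) for (a, b, c) in triangles]
    return sum(qs) / len(qs)

# Example: a single equilateral triangle gives Q_avg = 1.
print(q_avg([(1.0, 1.0, 1.0)]))  # -> 1.0
\end{verbatim}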
\subsection{Example 3.1: A unit equilateral triangle region}
\begin{figure}[h]
\centering
\subfloat[$h=0.2$]{
\includegraphics[width=0.3\textwidth]{figures/t_2.png}
}
\subfloat[$h=0.1$]{
\includegraphics[width=0.3\textwidth]{figures/t_1.png}
}
\subfloat[$h=0.05$]{
\includegraphics[width=0.3\textwidth]{figures/t_5.png}
}
\subfloat[$h=0.025$]{
\includegraphics[width=0.3\textwidth]{figures/t_25.png}
}
\caption{ BPM-based grids on the equilateral triangle region in different sizes.}\label{grids_t}
\end{figure}
This example is an equilateral triangle region with side length 1, on which we solve the Poisson equation with Dirichlet boundary conditions. The right-hand side $f$ and the boundary conditions are chosen such that the exact solution is $u = {\cos 2 \pi x \sin 2 \pi y}$. Starting from the initial size $h=0.2$ and halving $h$ successively, the BPM-based grids for the first four sizes are shown in Figure \ref{grids_t}. Clearly, near-perfect grids can be generated by our algorithm in this `ideal subdivision' case.
\begin{table}
\caption{ Superconvergence results for equilateral triangle region}
\centering
\small
\begin{tabular}{ccccccc}
\toprule
$h$ & ${\left\| {{u_h} - {u_I}} \right\|_{1,\Omega }}$ & order (k=1) & ${\left\| {{u_h} - {\Pi_Q u}} \right\|_{1,\Omega }}$ & order (k=2) & ${Q_{avg}}$\\
\midrule
0.2 & 1.81E-01 & & 8.93E-03 & & 0.9998 \\
0.1 & 4.30E-02 & 2.08 & 1.14E-03 & 2.97& 1.0000 \\
0.05 & 1.05E-02 & 2.05 & 1.41E-04 & 3.02& 1.0000 \\
0.025 & 2.63E-03 & 2.01 & 1.78E-05 & 2.96& 1.0000 \\
0.0125 & 6.55E-04 & 2.01 & 2.30E-06 & 2.95& 1.0000 \\
\bottomrule
\end{tabular}\label{tab2}
\end{table}
Some numerical results are given in Table \ref{tab2}, where ${u_I}$ and ${\Pi_Q u}$ are the linear and quadratic interpolants of $u$, respectively. From Table \ref{tab2}, we see quite clearly the superconvergence of ${\left\| {{u_h} - {u_I}} \right\|_{1,\Omega }}$ and ${\left\| { {{u_h} - {\Pi_Q u}}} \right\|_{1,\Omega }}$, with orders close to 2 and 3, respectively.
Note that when $h$ is smaller than 0.2, the corresponding $Q_{avg}$ reaches 1, which indicates that the superconvergence property is inseparable from the regularity of the grids.
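As a side note, the orders reported in the tables can be recovered from consecutive error values: since $h$ is halved from row to row, the observed order between errors $e_h$ and $e_{h/2}$ is $\log_2(e_h/e_{h/2})$. A minimal Python sketch over the error columns of Table \ref{tab2} follows; the computed values may differ from the tabulated orders in the last digit, since the table entries are rounded.
\begin{verbatim}
# Minimal sketch: observed convergence orders from the table above,
# assuming h is halved between consecutive rows.
import math

errs_k1 = [1.81e-1, 4.30e-2, 1.05e-2, 2.63e-3, 6.55e-4]
errs_k2 = [8.93e-3, 1.14e-3, 1.41e-4, 1.78e-5, 2.30e-6]

def orders(errs):
    return [math.log2(e0 / e1) for e0, e1 in zip(errs, errs[1:])]

print(orders(errs_k1))  # approx [2.07, 2.03, 2.00, 2.01]
print(orders(errs_k2))  # approx [2.97, 3.02, 2.99, 2.95]
\end{verbatim}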
\subsection{Example 3.2: A unit circle region centered at origin}
\begin{figure}
\centering
\subfloat[$h=0.2$]{
\includegraphics[width=0.3\textwidth]{figures/c_2.png}
}
\subfloat[$h=0.1$]{
\includegraphics[width=0.3\textwidth]{figures/c_1.png}
}
\subfloat[$h=0.05$]{
\includegraphics[width=0.3\textwidth]{figures/c_5.png}
}
\subfloat[$h=0.025$]{
\includegraphics[width=0.3\textwidth]{figures/c_25.png}
}
\caption{BPM-based grids on circle region in different sizes.}\label{grids_c}
\end{figure}
\begin{table}[h]
\caption{Results for circle region}
\centering
\small
\begin{tabular}{ccccccc}
\toprule
$h$ & ${\left\| {{u_h} - {u_I}} \right\|_{1,\Omega }}$ & order (k=1) & ${\left\| {{u_h} - {\Pi_Q u}} \right\|_{1,\Omega }}$ & order (k=2) & ${Q_{avg}}$\\
\midrule
0.2 & 1.09E-01 & & 1.93E-02 & & 0.9510 \\
0.1 & 3.93E-02 & 1.47 & 3.53E-03 & 2.45& 0.9635 \\
0.05 & 1.36E-02 & 1.53 & 6.16E-04 & 2.52& 0.9732 \\
0.025 & 4.82E-03 & 1.50 & 1.09E-04 & 2.50& 0.9702 \\
0.0125 & 1.69E-03 & 1.51 & 1.96E-05 & 2.47& 0.9753 \\
\bottomrule
\end{tabular}\label{tab_c}
\end{table}
For the unit circle region centered at the origin, the size values are taken as $0.2$, $0.1$, $0.05$, $0.025$ and $0.0125$. Figure \ref{grids_c} shows that the BPM-based grids are generally in good shape. Choosing the exact solution $u=\sin x \sin y$, the computed results are given in Table \ref{tab_c}. There is clearly a superconvergence phenomenon on BPM-based grids: the results indicate that ${\left\| { {{u_h} - {u_I}}} \right\|_{1,\Omega }}$ and ${\left\| { {{u_h} - {\Pi_Q u}}} \right\|_{1,\Omega }}$ behave roughly like ${\cal O}({{h^{1.50}}})$ and ${\cal O}({{h^{2.50}}})$, respectively.
Although the convergence orders are lower than in Example 3.1, these results are still consistent with the theoretical estimates (2.11) and (2.12).
For the set $\cal E$ of all edges, denote $h_{err} = \sum_{e \in {\cal E}} {\left| {{l_e} - h} \right|}/{\# {\cal E}}$, i.e., the mean value of all edges' errors. Figure \ref{error_cp}(a) shows the relationship between $h_{err}$, ${\left\| { {{u_h} - {u_I}}} \right\|_{1,\Omega }}$ and ${\left\| { {{u_h} - {\Pi_Q u}}} \right\|_{1,\Omega }}$. Graphically, ${\left\| { {{u_h} - {u_I}}} \right\|_{1,\Omega }}$ and $h_{err}$ have a similar tendency, and ${\left\| { {{u_h} - {\Pi_Q u}}} \right\|_{1,\Omega }}$ decreases more steeply than ${\left\| { {{u_h} - {u_I}}} \right\|_{1,\Omega }}$, which illustrates the validity of the superconvergence estimates in Theorem 2.1.
\begin{figure}
\centering
\subfloat[A unit circle region]{
\includegraphics[width=0.4\textwidth]{figures/error_c2.png}
}
\subfloat[A regular pentagon region]{
\includegraphics[width=0.4\textwidth]{figures/error_p2.png}
}
\caption{${\left\| {{u_h} - {u_I}} \right\|_{1,\Omega }}$ and $h_{err}$ versus the given size $h$ on BPM-based grids. }\label{error_cp}
\end{figure}
\subsection{Example 3.3: A regular pentagon region}
\begin{figure}[h]
\centering
\subfloat[$h=0.2$]{
\includegraphics[width=0.3\textwidth]{figures/p_2.png}
}
\subfloat[$h=0.05$]{
\includegraphics[width=0.3\textwidth]{figures/p_5.png}
}
\caption{ BPM-based grids on regular pentagon region in different sizes.}\label{grids_p}
\end{figure}
\begin{table}
\caption{Results for regular pentagon region }
\centering
\small
\begin{tabular}{cccccc}
\toprule
$h$ & ${\left\| {{u_h} - {u_I}} \right\|_{1,\Omega }}$ & order (k=1) & ${\left\| {{u_h} - {\Pi_Q u}} \right\|_{1,\Omega }}$ & order (k=2) & ${Q_{avg}}$\\
\midrule
0.2 & 9.52E-02 & & 9.37E-03 & & 0.9582 \\
0.1 & 3.34E-02 & 1.51 & 1.70E-03 & 2.46& 0.9651 \\
0.05 & 1.09E-02 & 1.62 & 2.77E-04 & 2.62& 0.9608 \\
0.025 & 3.74E-03 & 1.54 & 4.86E-05 & 2.51& 0.9670 \\
0.0125 & 1.25E-03 & 1.58 & 8.31E-06 & 2.55& 0.9711 \\
\bottomrule
\end{tabular}\label{tab_p}
\end{table}
Similarly, the mesh size decreases by half successively. To avoid needless duplication, only two typical grids are displayed in Figure \ref{grids_p}. Choosing the exact solution $u=e^{x+y}$, Table \ref{tab_p} again shows the errors, convergence orders and mesh quality. As can be seen, the superconvergence phenomena ${\left\| {{u_h} - {u_I}} \right\|_{1,\Omega }}={\cal O }\left( {{h^{1.55}}} \right)$ and ${\left\| { {{u_h} - {\Pi_Q u}}} \right\|_{1,\Omega }}= {\cal O}({{h^{2.50}}})$ persist on the regular pentagon region, verifying the superconvergence estimates as well.
Overall, these numerical experiments fully demonstrate the superconvergence property on BPM-based grids, which is perhaps the result of most practical significance.
\section{Discussion}
\subsection{The superconvergence of problems with singularities}
In the experiments above, we observed superconvergence on BPM-based grids only for convex domains, since the superconvergence estimate requires $u \in {H^3}\left( \Omega \right) \cap W_\infty ^2\left( \Omega \right)$, which rules out domains with a re-entrant corner. In practice, it is well known that the solution may have singularities at corners, so a natural question is whether superconvergence still appears for problems with singularities. Although this paper offers no supporting theory, some illumination can be found in the work of Wu and Zhang \cite{Wu2007Can}, who derived superconvergence estimates on domains with re-entrant corners for the Poisson equation on mildly structured grids. The estimate is:
\begin{equation}
\begin{cases}
{\left\| {{u_h} - {u_I}} \right\|_{1,\Omega }} \lesssim {N^{- \frac{1}{2}-\rho }},\\
{\left\| {{u_h} - {\Pi_Q u}} \right\|_{1,\Omega }} \lesssim {N^{-1-\rho }},
\end{cases}
\end{equation}
where $\rho$ is related to some mesh parameters. Additionally, for a 2D second-order elliptic equation, the optimal convergence rate is
\begin{equation}
\left\| {{u_h} - u} \right\|_{1,\Omega } \lesssim
\begin{cases}
N^{-\frac{1}{2}} \quad k=1,\\
N^{-1} \quad k=2,\\
\end{cases}
\end{equation}
where $k=1$ for the linear element and $k=2$ for the quadratic one. Here $N$ is the total number of degrees of freedom; since the mesh is not quasi-uniform, the convergence rate is measured in terms of $N$, as is customary in adaptive finite element methods. Next, we investigate the superconvergence on the L-shape region.
A typical grid is displayed in Figure \ref{grids_L}(a); in particular, the mesh near the re-entrant corner is in good shape, as illustrated in Figure \ref{grids_L}(b). The boundary conditions are chosen so that the true solution is $r^{2/3}\sin \frac{2}{3}\left( {\theta + \frac{\pi }{2}} \right)$ in polar coordinates. Figure \ref{error} demonstrates the relationship between the superconvergence results and the total number of degrees of freedom. Notice that ${\left\| {{u_h} - {u_I}} \right\|_{1,\Omega }}$ and ${\left\| {{u_h} - {\Pi_Q u}} \right\|_{1,\Omega }}$ are both superconvergent, which is consistent with estimate (4.1).
\begin{figure}[h]
\centering
\subfloat[L-shape region]{
\includegraphics[width=0.3\textwidth]{figures/L_5.png}
}
\subfloat[local amplification of re-entrant corner]{
\includegraphics[width=0.3\textwidth]{figures/local_L.png}
}
\caption{BPM-based grids on L-shape region with $h=0.05$.}\label{grids_L}
\end{figure}
\begin{figure}
\centering
\subfloat{
\includegraphics[width=0.5\textwidth]{figures/error_L2.png}
}
\caption{${\left\| {{u_h} - {u_I}} \right\|_{1,\Omega }}$ and ${\left\| {{u_h} - {\Pi_Q u}} \right\|_{1,\Omega }}$ versus the total number of degrees of freedom on L-shaped domain. Dotted lines give reference slopes.}\label{error}
\end{figure}
\subsection{The modification of the mesh condition}
\begin{table}
\caption{ Results for four types of regions}
\centering
\small
\begin{tabular}{ccccc}
\toprule
Domain & $mean$ & $var$ & $\mathop {\max }\limits_{e \in {\cal E}} \left| {{l_e} - h} \right|$ & $h_{err} = \sum_{e \in {\cal E}} {\left| {{l_e} - h} \right|} / {\# {\cal E}}$ \\
\midrule
Unit equilateral triangle & 0.1000 & 7.0481e-14 & 4.6798e-07 &1.6826e-07 \\
Unit circle & 0.0965 & 5.0513e-5 & 0.0184 &0.0093 \\
Regular pentagon & 0.0955 & 7.3520e-5 & 0.0213 &0.0087 \\
L-shape & 0.0946 & 2.0061e-4 & 0.0301 &0.0120 \\
\bottomrule
\end{tabular}\label{Tab1}
\begin{tablenotes}
\item[1]
Note: $mean$ denotes the mean value of all edges' actual lengths, and $var$ is their variance. Let $\cal E$ be the set of edges and ${l_e}$ the length of edge $e$; then $h_{err}$ represents the mean value of all edges' errors.
\end{tablenotes}
\end{table}
In fact, not all edges of BPM-based grids satisfy the mesh condition $|l_e-h|={\cal O}(h^{1+\alpha})$ $(\alpha>0)$; a few edges perform poorly in actual computations, especially in the `non-ideal subdivision' case. Nodes are placed in the previous four types of computing regions with $h=0.1$, and some useful statistics of these calculations are presented in Table \ref{Tab1}.
As shown, the mean values of the actual lengths are very close to 0.1. However, compared with the mean error $h_{err}$, the maximal error $\mathop {\max }\limits_{e \in {\cal E}} \left| {{l_e} - h} \right|$ is considerably worse except for the equilateral triangle region, implying that there are `bad' edges with larger errors in actual computations. In fact, owing to numerical errors the bubble system only reaches an approximate force-equilibrium state instead of the desired one, so the bubble fusion degree of a few bubbles is slightly larger than the constant analyzed in Section 2.1; these bubbles are marked by the red boxes in Figure \ref{bubble_distribution}. As a consequence, the errors are distributed unevenly: additional errors are assigned to a few elements, so that some edges behave badly, in the sense that their actual length differs from the given size $h$ by the same order as the parameter $h$ itself.
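The statistics in Table \ref{Tab1} can be reproduced from the edge set of a given triangulation; the following sketch is a minimal illustration (the array of edge lengths is a hypothetical input):
\begin{verbatim}
# Minimal sketch: edge-length statistics of the table above for a
# given triangulation; 'lengths' is a hypothetical array of lengths.
import numpy as np

def edge_stats(lengths, h):
    lengths = np.asarray(lengths, dtype=float)
    errs = np.abs(lengths - h)
    return {"mean": lengths.mean(),     # mean actual length
            "var": lengths.var(),       # variance of lengths
            "max_err": errs.max(),      # max |l_e - h|
            "h_err": errs.mean()}       # mean |l_e - h|

# e.g. edge_stats(lengths, h=0.1) on the unit-circle grid should
# roughly reproduce the second row of the table.
\end{verbatim}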
\begin{figure}[h]
\centering
\subfloat[A unit equilateral triangle]{
\includegraphics[width=0.3\textwidth]{figures/bubble_t.png}
}
\subfloat[A unit circle region]{
\includegraphics[width=0.3\textwidth]{figures/bubble_3_6.png}
}
\subfloat[regular pentagon region]{
\includegraphics[width=0.3\textwidth]{figures/bubble_p.png}
}
\subfloat[L-shape region]{
\includegraphics[width=0.3\textwidth]{figures/bubble_L.png}
}
\caption{ Bubbles distribution }\label{bubble_distribution}
\end{figure}
However, such bad edges are not too numerous, since our algorithm keeps the resultant external force on each bubble within a tolerance close to zero; this is validated numerically by the fact that all variance values in Table \ref{Tab1} are very small. In general, the edge lengths are basically in line with the size requirement, and there is no extreme situation in which the actual length differs greatly from the given size. Meanwhile, the maximal and average errors are both small, which is further strong evidence.
Drawing on the above discussion and results, the mesh condition of BPM-based grids can be modified as follows. Let $e$ be an edge of the triangulation ${{\cal T}_h}$ derived from BPM-based grids, with length ${l_e}$, and let ${\cal E}={{\cal E}_1} \oplus{{\cal E}_2}$ be the set of all edges of ${{\cal T}_h}$:
\begin{enumerate}
\item For each ${e\in{\cal E}_1}$
\begin{equation}
\left|{l_e} - h \right| = O\left( {{h^{1 + \alpha }}} \right),
\end{equation}
where $\alpha$ is a positive number.
\item The edges ${e\in{\cal E}_2}$ form the `bad' group with $\left| {l_e} - h \right| =O\left( h\right)$, but the two elements $\tau$ and $\tau '$ sharing such an edge $e$ satisfy:
\begin{equation}
\sum\limits_{e \in {{\cal E}_2}} {(\left| \tau \right|+\left| \tau ' \right|) = O\left( {{h^{2\sigma }}} \right)},
\end{equation}
or
\begin{equation}
\sharp {{\cal E}_2} \lesssim N^{\sigma},
\end{equation}
where $\sigma$ is a positive number and $N$ is the total number of edges.
\end{enumerate}
The second condition is motivated by the edges with poor performance. The maximal errors show that there are still some edges with larger errors, but the means and variances of the actual edge lengths indicate that most edges behave well, with rare or no extreme cases. The two formulations (4.4) and (4.5) are essentially equivalent; both express the fact that `bad' edges constitute a small percentage of all edges. Combining this with the numerical analysis, we add the `bad' edge group to the previous theoretical result (2.6), so that the description of BPM-based grids is more realistic.
Adding the `bad' edge group to our mesh condition has an effect on the theoretical superconvergence estimates, but it is negligible: the expressions of the estimates are only slightly modified,
\begin{equation}
{\left\| {{u_h} - {u_I}} \right\|_{1,\Omega }} ={\cal O} (h^{1+\min (\alpha,\sigma, 1/2 )}),
\end{equation}
and
\begin{equation}
{\left\| {{u_h} - {\Pi_Q u}} \right\|_{1,\Omega }} ={\cal O}(h^{2+\min (\alpha, \sigma, 1/2 )}).
\end{equation}
Note that a `bad' edge group has already been incorporated into mildly structured grids by Xu \cite{Xu2003Analysis}, so it creates no difficulties in the theoretical derivations. In particular, the superconvergence observed in the numerical experiments of Section 3 is not affected. Indeed, the `bad' edge group is introduced only to describe BPM-based grids in line with actual computations.
\section{Conclusion}
By analysing the properties of BPM, mesh conditions for BPM-based grids on any bounded domain are derived. It is worth noting that our mesh conditions can be combined with the various superconvergence estimates analyzed by many scholars, such as R. E. Bank, J. Xu and H. Wu. Consequently, superconvergence estimation on BPM-based grids is discussed as the second contribution of this paper. Notably, this is the first time that such mesh conditions are theoretically derived and successfully applied to the existing estimates. Furthermore, our conclusions can be applied to a posteriori error estimation and adaptive finite element methods for improving the precision of the finite element solution.
Although this initial study of the superconvergence phenomenon on BPM-based grids is made for a classical two-dimensional second-order elliptic model problem, we will consider concrete superconvergence post-processing as well as its theoretical estimation on BPM-based grids, and expect greater advantages when solving more complex systems of equations.
\section*{Acknowledgments}
This research was supported by the National Natural Science Foundation of China (No.11471262 and No.11501450) and the Fundamental Research Funds for the Central Universities (No.3102017zy038). Dr. Nan Qi would like to thank the Fundamental Research Funds of Shandong University.
\section*{Reference}
\bibliographystyle{elsarticle-num}
\section{Asymptotics}
\label{sect:A}
The following result allows one to control the asymptotic behaviour of the bi-brackets not only as $q\to1^-$ but also
as $q$ approaches radially a root of unity. This produces an explicit version of the asymptotics used in~\cite{Pup05}
for proving some linear and algebraic results in the case $l=1$.
\begin{lemma}
\label{lem1}
As $q=1-\eps\to1^-$,
$$
\frac1{(s-1)!}\,\frac{P_{s-1}(q^n)}{(1-q^n)^s}
=\frac1{n^s\eps^s}\bigl((1-\eps)F_{s-1}(\eps)+\hat\lambda_s\cdot\eps^s\bigr)-\hat\lambda_s+O(\eps)
$$
where the polynomials $F_k(\eps)\in\mathbb Q[\eps]$ of degree $\max\{0,k-1\}$ are generated by
\begin{align*}
\sum_{k=0}^\infty F_k(\eps)x^k
&=\frac1{1-(1-e^{-\eps x})/\eps}
\\
&= 1 + x + \biggl(-\frac12\eps+1\biggr)x^2 + \biggl(\frac16\eps^2-\eps+1\biggr)x^3
\\ &\qquad
+ \biggl(-\frac1{24}\eps^3+\frac7{12}\eps^2-\frac32\eps+1\biggr)x^4
\\ &\qquad
+ \biggl(\frac1{120}\eps^4-\frac14\eps^3+\frac54\eps^2-2\eps+1\biggr)x^5 + \dotsb
\end{align*}
and
$$
\sum_{s=0}^\infty\hat\lambda_sx^s=-\frac{xe^x}{1-e^x}=1+\frac12x+\sum_{k=1}^\infty\frac{B_{2k}}{(2k)!}x^{2k}
$$
is the generating function of Bernoulli numbers.
\end{lemma}
\begin{proof}
The proof is technical but straightforward.
\end{proof}
By moving the constant term $\hat\lambda_s$ to the right-hand side, we get
\begin{align*}
\frac12+\frac{P_0(q^n)}{1-q^n}
&=\frac1n\cdot\biggl(\eps^{-1}-\frac12\biggr)+O(\eps),
\\
\frac1{12}+\frac{P_1(q^n)}{(1-q^n)^2}
&=\frac1{n^2}\cdot\biggl(\eps^{-2}-\eps^{-1}+\frac1{12}\biggr)+O(\eps),
\\
\frac{P_2(q^n)}{(1-q^n)^3}
&=\frac1{n^3}\cdot\biggl(\eps^{-3}-\frac32\eps^{-2}+\frac12\eps^{-1}\biggr)+O(\eps),
\\
-\frac1{720}+\frac{P_3(q^n)}{(1-q^n)^4}
&=\frac1{n^4}\cdot\biggl(\eps^{-4}-2\eps^{-3}+\frac76\eps^{-2}-\frac16\eps^{-1}-\frac1{720}\biggr)+O(\eps),
\end{align*}
and so on.
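As a quick check of the lemma in the simplest case $s=1$ (where $P_0(x)=x$; this is consistent with the first expansion displayed above), write $q=1-\eps$ and expand directly:
$$
\frac{q^n}{1-q^n}=\frac1{1-(1-\eps)^n}-1
=\frac1{n\eps}\biggl(1-\frac{(n-1)\eps}2+O(\eps^2)\biggr)^{-1}-1
=\frac1{n\eps}+\frac{n-1}{2n}-1+O(\eps),
$$
which gives $\frac12+q^n/(1-q^n)=\frac1n\bigl(\eps^{-1}-\frac12\bigr)+O(\eps)$, in agreement with the first formula in the list.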
\begin{proposition}
\label{prop2}
Assume that $s_1>r_1+1$ and $s_j\ge r_j+1$ for $j=2,\dots,l$.
Then
$$
\biggl[\begin{matrix} s_1,\dots,s_l \\ r_1,\dots,r_l \end{matrix}\biggr]
\sim\frac{\zeta(s_1-r_1,s_2-r_2,\dots,s_l-r_l)}{r_1!\,r_2!\dotsb r_l!}\,\frac1{(1-q)^{s_1+s_2+\dots+s_l}}
\qquad\text{as}\quad q\to1^-,
$$
where $\zeta(s_1,\dots,s_l)$ denotes the standard MZV.
\end{proposition}
Another way to tackle the asymptotic behaviour of the (bi-)brackets is based on the Mellin transform
$$
\varphi(t)\mapsto\wt\varphi(s)=\int_0^\infty\varphi(t)t^{s-1}\d t
$$
which maps
$$
q^{n_1d_1+\dots+n_ld_l}\big|_{q=e^{-t}}\mapsto\frac{\Gamma(s)}{(n_1d_1+\dots+n_ld_l)^s};
$$
see \cite{FS09,Zag06}. Note that the bijective correspondence between the bi-brackets and the zeta functions
$$
\frac{\Gamma(s)}{r_1!\,(s_1-1)!\dotsb r_l!\,(s_l-1)!}\sum_{\substack{n_1>\dots>n_l>0\\d_1,\dots,d_l>0}}
\frac{n_1^{r_1}d_1^{s_1-1}\dotsb n_l^{r_l}d_l^{s_l-1}}{(n_1d_1+\dots+n_ld_l)^s}
$$
can be potentially used for determining the linear relations of the former. A simple illustration is the
linear independence of the depth~1 bi-brackets.
\begin{theorem}
\label{th1}
The bi-brackets $\bigl[\begin{smallmatrix}s_1\\r_1\end{smallmatrix}\bigr]$,
where $0\le r_1<s_1\le n$, $s_1+r_1\le n$, are linearly independent over $\mathbb Q$.
Therefore, the dimension $d_n^{\BD}$ of the $\mathbb Q$-space spanned by all bi-brackets of weight at most~$n$ is
bounded from below by $\lfloor(n+1)^2/4\rfloor\ge n(n+2)/4$.
\end{theorem}
\begin{proof}
Indeed, the functions
\begin{gather*}
\frac{\Gamma(s)}{r_1!\,(s_1-1)!}\sum_{n_1,d_1>0}\frac{n_1^{r_1}d_1^{s_1-1}}{(n_1d_1)^s}
=\Gamma(s)\frac{\zeta(s-s_1+1)\zeta(s-r_1)}{(s_1-1)!\,r_1!},
\\
\text{where}\quad 0\le r_1<s_1\le n, \quad s_1+r_1\le n,
\end{gather*}
are linearly independent over~$\mathbb Q$ (because of their disjoint sets of poles at $s=s_1$ and $s=r_1+1$, respectively); thus the corresponding
bi-brackets $\bigl[\begin{smallmatrix}s_1\\r_1\end{smallmatrix}\bigr]$ are $\mathbb Q$-linearly independent as well.
\end{proof}
A similar (though more involved) analysis can be applied
to describe the Mellin transform of the depth~2 bi-brackets; note that it is more easily done for another $q$-model
we introduce further in Section~\ref{sect:D}.
\section{The stuffle product}
\label{sect:T}
Consider the alphabet $Z=\{z_{s,r}:s,r=1,2,\dots\}$ on the double-indexed letters $z_{s,r}$ of the pre-defined weight $s+r-1$.
On $\mathbb QZ$ define the (commutative) product
\begin{align}
z_{s_1,r_1}\diamond z_{s_2,r_2}
&:=\binom{r_1+r_2-2}{r_1-1}\biggl(z_{s_1+s_2,r_1+r_2-1}
\nonumber\\ &\qquad
+\sum_{j=1}^{s_1}(-1)^{s_2-1}\binom{s_1+s_2-j-1}{s_1-j}\lambda_{s_1+s_2-j}z_{j,r_1+r_2-1}
\nonumber\\ &\qquad
+\sum_{j=1}^{s_2}(-1)^{s_1-1}\binom{s_1+s_2-j-1}{s_2-j}\lambda_{s_1+s_2-j}z_{j,r_1+r_2-1}\biggr),
\label{diam}
\end{align}
where
$$
\sum_{s=0}^\infty\lambda_sx^s=-\frac{x}{1-e^x}=1+\sum_{s=1}^\infty\frac{B_s}{s!}x^s
$$
is the generating function of Bernoulli numbers.
Note that $\hat\lambda_s=\lambda_s$ for $s\ge2$, while $\hat\lambda_1=\frac12=-\lambda_1$ in the notation of Section~\ref{sect:A}.
As explained in \cite{BK13} (after the proof of Proposition~2.9),
the product $\diamond$ is also associative. With the help of \eqref{diam} define the stuffle product on the $\mathbb Q$-algebra $\mathbb Q\langle Z\rangle$
recursively by $1\tuffle w=w\tuffle 1:=w$ and
\begin{equation}
aw\tuffle bv
:=a(w\tuffle bv)+b(aw\tuffle v)+(a\diamond b)(w\tuffle v),
\label{tuff}
\end{equation}
for arbitrary $w,v\in\mathbb Q\langle Z\rangle$ and $a,b\in Z$.
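To illustrate the definitions, take $a=b=z_{1,1}$, so that $s_1=s_2=r_1=r_2=1$. Since $\lambda_1=B_1=-\frac12$, the product \eqref{diam} reduces to
$$
z_{1,1}\diamond z_{1,1}=z_{2,1}-z_{1,1},
$$
hence \eqref{tuff} gives $z_{1,1}\tuffle z_{1,1}=2z_{1,1}z_{1,1}+z_{2,1}-z_{1,1}$; under the evaluation map of Proposition~\ref{prop1} below this encodes the relation $[1]\cdot[1]=2[1,1]+[2]-[1]$ for the brackets.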
\begin{proposition}
\label{prop1}
The evaluation map
\begin{equation}
[\,\cdot\,]\colon z_{s_1,r_1}\dots z_{s_l,r_l}\mapsto\biggl[\begin{matrix} s_1,\dots,s_l \\ r_1-1,\dots,r_l-1 \end{matrix}\biggr]
\label{evamap}
\end{equation}
extended to $\mathbb Q\langle Z\rangle$ by linearity satisfies
$[w\tuffle v]=[w]\cdot[v]$, so that it is a homomorphism of the $\mathbb Q$-algebra $(\mathbb Q\langle Z\rangle,\tuffle)$ onto
$(\BD,\,\cdot\,)$, the latter hence being a $\mathbb Q$-algebra as well.
\end{proposition}
\begin{proof}
The proof follows the lines of the proof of \cite[Proposition 2.10]{BK13} based on the identity
\begin{align*}
&
\frac{n^{r_1-1}P_{s_1-1}(q^n)}{(s_1-1)!\,(r_1-1)!\,(1-q^n)^{s_1}}
\cdot\frac{n^{r_2-1}P_{s_2-1}(q^n)}{(s_2-1)!\,(r_2-1)!\,(1-q^n)^{s_2}}
\\ &\quad
=\binom{r_1+r_2-2}{r_1-1}\frac{n^{r_1+r_2-2}}{(r_1+r_2-2)!}
\biggl(\frac{P_{s_1+s_2-1}(q^n)}{(s_1+s_2-1)!\,(1-q^n)^{s_1+s_2}}
\\ &\quad\qquad
+\sum_{j=1}^{s_1}(-1)^{s_2-1}\binom{s_1+s_2-j-1}{s_1-j}\lambda_{s_1+s_2-j}\frac{P_{j-1}(q^n)}{(j-1)!\,(1-q^n)^j}
\\ &\quad\qquad
+\sum_{j=1}^{s_2}(-1)^{s_1-1}\binom{s_1+s_2-j-1}{s_2-j}\lambda_{s_1+s_2-j}\frac{P_{j-1}(q^n)}{(j-1)!\,(1-q^n)^j}\biggr).
\qedhere
\end{align*}
\end{proof}
Modulo the highest weight, the commutative product \eqref{diam} on $Z$ assumes the form
\begin{equation*}
z_{s_1,r_1}\diamond z_{s_2,r_2}
\equiv\binom{r_1+r_2-2}{r_1-1}z_{s_1+s_2,r_1+r_2-1},
\end{equation*}
so that the stuffle product~\eqref{tuff} reads
\begin{align}
z_{s_1,r_1}w\tuffle z_{s_2,r_2}v
&\equiv z_{s_1,r_1}(w\tuffle z_{s_2,r_2}v)+z_{s_2,r_2}(z_{s_1,r_1}w\tuffle v)
\nonumber\\ &\qquad
+\binom{r_1+r_2-2}{r_1-1}z_{s_1+s_2,r_1+r_2-1}(w\tuffle v)
\label{tuff1}
\end{align}
for arbitrary $w,v\in\mathbb Q\langle Z\rangle$ and $z_{s_1,r_1},z_{s_2,r_2}\in Z$.
If we set $z_s:=z_{s,1}$ and further restrict the product to the subalgebra $\mathbb Q\langle Z'\rangle$, where
$Z'=\{z_s:s=1,2,\dots\}$, then Proposition~\ref{prop2} results in the following statement.
\begin{theorem}[\cite{BK13}]
\label{th:T}
For admissible words $w=z_{s_1}\dots z_{s_l}$ and $v=z_{s_1'}\dotsb z_{s_m'}$ of weight $|w|=s_1+\dots+s_l$ and $|v|=s_1'+\dots+s_m'$, respectively,
$$
[w\tuffle v]\sim(1-q)^{-|w|-|v|}\zeta(w*v)
\qquad\text{as}\quad q\to1^-,
$$
where $*$ denotes the standard stuffle \textup(harmonic\textup) product of MZVs on $\mathbb Q\langle Z'\rangle$.
\end{theorem}
Since $[w]\sim(1-q)^{-|w|}\zeta(w)$, $[v]\sim(1-q)^{-|v|}\zeta(v)$ as $q\to1^-$ and $[w\tuffle v]=[w]\cdot[v]$,
Theorem~\ref{th:T} asserts that the stuffle product \eqref{tuff} of the algebra $\MD$ reduces to the stuffle product of the algebra of MZVs
in the limit as $q\to1^-$. This fact has been already established in~\cite{BK13}.
\section{The duality}
\label{sect:D}
As an alternative extension of the mono-brackets \eqref{MB} we introduce the \emph{multiple $q$-zeta brackets}
\begin{align}
&
\bzeta\biggl[\begin{matrix} s_1,\dots,s_l \\ r_1,\dots,r_l \end{matrix}\biggr]
=\bzeta_q\biggl[\begin{matrix} s_1,\dots,s_l \\ r_1,\dots,r_l \end{matrix}\biggr]
\nonumber\\ &\qquad
:=c\sum_{\substack{m_1,\dots,m_l>0\\d_1,\dots,d_l>0}}
m_1^{r_1-1}d_1^{s_1-1}\dotsb m_l^{r_l-1}d_l^{s_l-1}q^{(m_1+\dots+m_l)d_1+(m_2+\dots+m_l)d_2+\dots+m_ld_l}
\nonumber\\ &\qquad\phantom:
=c\sum_{m_1,\dots,m_l>0}
\frac{m_1^{r_1-1}P_{s_1-1}(q^{m_1+\dots+m_l})m_2^{r_2-1}P_{s_2-1}(q^{m_2+\dots+m_l})\dotsb m_l^{r_l-1}P_{s_l-1}(q^{m_l})}
{(1-q^{m_1+\dots+m_l})^{s_1}(1-q^{m_2+\dots+m_l})^{s_2}\dotsb(1-q^{m_l})^{s_l}}
\label{TB}
\end{align}
where
$$
c=\frac1{(r_1-1)!\,(s_1-1)!\dotsb(r_l-1)!\,(s_l-1)!}.
$$
Then
$$
\biggl[\begin{matrix} s_1 \\ r_1-1 \end{matrix}\biggr]
=\bzeta\biggl[\begin{matrix} s_1 \\ r_1 \end{matrix}\biggr]
\qquad\text{and}\qquad
[s_1,\dots,s_l]
=\biggl[\begin{matrix} s_1,\dots,s_l \\ 0,\dots,0 \end{matrix}\biggr]
=\bzeta\biggl[\begin{matrix} s_1,\dots,s_l \\ 1,\dots,1 \end{matrix}\biggr].
$$
By applying iteratively the binomial theorem in the forms
$$
\frac{(m+n)^{r_1-1}}{(r_1-1)!}\,\frac{n^{r_2-1}}{(r_2-1)!}
=\sum_{j=1}^{r_1+r_2-1}\binom{j-1}{r_2-1}\frac{m^{r_1+r_2-j-1}}{(r_1+r_2-j-1)!}\,\frac{n^{j-1}}{(j-1)!}
$$
and
$$
\frac{(n-m)^{r-1}}{(r-1)!}
=\sum_{i=1}^r(-1)^{r+i}\frac{n^{i-1}}{(i-1)!}\,\frac{m^{r-i}}{(r-i)!}
$$
we see that the $\mathbb Q$-algebras spanned by either \eqref{BB} or \eqref{TB} coincide.
More precisely, the following formulae link the two versions of brackets.
\begin{proposition}
\label{prop2a}
We have
\begin{align*}
\biggl[\begin{array}{cccc} s_1, & s_2, & \dots, & s_l \\ r_1-1, & r_2-1, & \dots, & r_l-1 \end{array}\biggr]
&=\sum_{j_2=1}^{r_1+r_2-1}\binom{j_2-1}{r_2-1}\sum_{j_3=1}^{j_2+r_3-1}\binom{j_3-1}{r_3-1}\dotsb\sum_{j_l=1}^{j_{l-1}+r_l-1}\binom{j_l-1}{r_l-1}
\\ &\qquad\times
\bzeta\biggl[\begin{array}{ccccc} s_1, & s_2, & \dots, & s_{l-1}, & s_l \\ r_1+r_2-j_2, & j_2+r_3-j_3, & \dots, & j_{l-1}+r_l-j_l, & j_l \end{array}\biggr]
\end{align*}
and
\begin{align*}
&
\bzeta\biggl[\begin{matrix} s_1,\dots,s_l \\ r_1,\dots,r_l \end{matrix}\biggr]
=\sum_{i_1=1}^{r_1}\sum_{i_2=1}^{r_2}\dotsb\sum_{i_{l-1}=1}^{r_{l-1}}
(-1)^{r_1+\dots+r_{l-1}-i_1-\dotsb-i_{l-1}}
\\ &\qquad\times
\binom{r_1-i_1+i_2-1}{r_1-i_1}\dotsb
\binom{r_{l-2}-i_{l-2}+i_{l-1}-1}{r_{l-2}-i_{l-2}}\binom{r_{l-1}-i_{l-1}+r_l-1}{r_{l-1}-i_{l-1}}
\\ &\qquad\times
\biggl[\begin{array}{ccccc} s_1, & s_2, & \dots, & s_{l-1}, & s_l \\ i_1-1, & r_1-i_1+i_2-1, & \dots, & r_{l-2}-i_{l-2}+i_{l-1}-1, & r_{l-1}-i_{l-1}+r_l-1 \end{array}\biggr].
\end{align*}
\end{proposition}
Proposition~\ref{prop2a} allows us to construct an isomorphism $\varphi$ of the two $\mathbb Q$-algebras $\mathbb Q\langle Z\rangle$ with two evaluation maps
$[\,\cdot\,]$ and $\bzeta[\,\cdot\,]$,
$$
\bzeta[z_{s_1,r_1}\dots z_{s_l,r_l}]=\bzeta\biggl[\begin{matrix} s_1,\dots,s_l \\ r_1,\dots,r_l \end{matrix}\biggr],
$$
such that
$$
[w]=\bzeta[\varphi w] \qquad\text{and}\qquad
\bzeta[w]=[\varphi^{-1}w].
$$
Note however that the isomorphism breaks the simplicity of defining the stuffle product $\tuffle$ from Section~\ref{sect:T}.
Another algebraic setup can be used for the $\mathbb Q$-algebra $\mathbb Q\langle Z\rangle$ with evaluation $\bzeta$.
We can recast it as the $\mathbb Q$-subalgebra $\fH^0:=\mathbb Q+x\fH y$ of the $\mathbb Q$-algebra $\fH:=\mathbb Q\langle x,y\rangle$
by setting $\bzeta[1]=1$ and
$$
\bzeta[x^{s_1}y^{r_1}\dots x^{s_l}y^{r_l}]=\bzeta\biggl[\begin{matrix} s_1,\dots,s_l \\ r_1,\dots,r_l \end{matrix}\biggr].
$$
The depth (or length) is defined as the number of appearances of the subword $xy$, while the weight is the number of letters minus the length.
\begin{proposition}[Duality]
\label{prop3}
$$
\bzeta\biggl[\begin{matrix} s_1,s_2,\dots,s_l \\ r_1,r_2,\dots,r_l \end{matrix}\biggr]
=\bzeta\biggl[\begin{matrix} r_l,r_{l-1},\dots,r_1 \\ s_l,s_{l-1},\dots,s_1 \end{matrix}\biggr].
$$
\end{proposition}
\begin{proof}
This follows from the rearrangement of the summation indices:
$$
\sum_{i=1}^ld_i\sum_{j=i}^lm_j
=\sum_{i=1}^ld_i'\sum_{j=i}^lm_j'
$$
where $d_i'=m_{l+1-i}$ and $m_j'=d_{l+1-j}$.
\end{proof}
Denote by $\tau$ the anti-automorphism of the algebra $\fH$, interchanging
$x$ and $y$; for example, $\tau(x^2yxy)=xyxy^2$. Clearly, $\tau$ is an involution
preserving both the weight and depth, and it is also an automorphism of the subalgebra $\fH^0$.
The duality can be then stated as
\begin{equation}
\bzeta[\tau w]=\bzeta[w]
\qquad\text{for any}\quad w\in\fH^0.
\label{eq:dual}
\end{equation}
We also extend $\tau$ to $\mathbb Q\langle Z\rangle$ by linearity.
The duality in Proposition~\ref{prop3} is exactly the partition duality given earlier by Bachmann for the model~\eqref{BB}.
\section{The dual stuffle product}
\label{sect:S}
We can now introduce the product which is dual to the stuffle one.
Namely, it is the duality composed with the stuffle product and, again, with the duality:
\begin{equation}
w\stuffle v:=\varphi^{-1}\tau(\tau\varphi w\tuffle\tau\varphi v)
\qquad\text{for}\quad w,v\in\mathbb Q\langle Z\rangle.
\label{shuff}
\end{equation}
It follows then from Propositions \ref{prop1} and \ref{prop3} that
\begin{proposition}
\label{prop4}
The evaluation map \eqref{evamap} on $\mathbb Q\langle Z\rangle$ satisfies
$[w\stuffle v]=[w]\cdot[v]$, so that it is also a homomorphism of the $\mathbb Q$-algebra $(\mathbb Q\langle Z\rangle,\stuffle)$ onto
$(\BD,\,\cdot\,)$.
\end{proposition}
Note that \eqref{tuff1} is also equivalent to the expansion from the right \cite[Theorem~9]{Zud03}:
\begin{align}
wz_{s_1,r_1}\tuffle vz_{s_2,r_2}
&\equiv(w\tuffle vz_{s_2,r_2})z_{s_1,r_1}+(wz_{s_1,r_1}\tuffle v)z_{s_2,r_2}
\nonumber\\ &\qquad
+\binom{r_1+r_2-2}{r_1-1}(w\tuffle v)z_{s_1+s_2,r_1+r_2-1}.
\label{tuff2}
\end{align}
The next statement addresses the structure of the dual stuffle product \eqref{shuff}
for the words over the sub-alphabet $Z'=\{z_s=z_{s,1}:s=1,2,\dots\}\subset Z$. Note that
the words from $\mathbb Q\langle Z'\rangle$ can be also presented as the words from $\mathbb Q\langle x,xy\rangle$
necessarily ending with $xy$.
\begin{proposition}
\label{prop4a}
Modulo the highest weight and depth,
\begin{equation}
aw\stuffle bv
\equiv a(w\stuffle bv)+b(aw\stuffle v)
\label{shuff1}
\end{equation}
for arbitrary words $w,v\in\mathbb Q+\mathbb Q\langle x,xy\rangle xy$ and $a,b\in\{x,xy\}$.
\end{proposition}
\begin{proof}
First note that restricting \eqref{tuff2} further modulo the highest depth implies
\begin{align*}
wz_{s_1,r_1}\tuffle vz_{s_2,r_2}
&\equiv(w\tuffle vz_{s_2,r_2})z_{s_1,r_1}+(wz_{s_1,r_1}\tuffle v)z_{s_2,r_2},
\\ \intertext{and that we also have}
wz_{s_1,r_1+1}\tuffle vz_{s_2,r_2}
&\equiv(wz_{s_1,r_1}\tuffle vz_{s_2,r_2})y+(wz_{s_1,r_1+1}\tuffle v)z_{s_2,r_2},
\\
wz_{s_1,r_1+1}\tuffle vz_{s_2,r_2+1}
&\equiv(wz_{s_1,r_1}\tuffle vz_{s_2,r_2+1})y+(wz_{s_1,r_1+1}\tuffle vz_{s_2,r_2})y.
\end{align*}
The relations already show that
\begin{equation}
wa'\tuffle vb'
\equiv(w\tuffle vb')a'+(wa'\tuffle v)b'
\label{tuff3}
\end{equation}
for arbitrary words $w,v\in\mathbb Q+\mathbb Q\langle Z\rangle$ and $a',b'\in Z\cup\{y\}$,
where
$$
z_{s_1,r_1}\dots z_{s_{l-1},r_{l-1}}z_{s_l,r_l}y=z_{s_1,r_1}\dots z_{s_{l-1},r_{l-1}}z_{s_l,r_l+1}.
$$
Secondly note that the isomorphism $\varphi$ of Proposition~\ref{prop2a} acts trivially on the words from $\mathbb Q\langle Z'\rangle$.
Therefore, applying $\tau\varphi$ to the both sides of~\eqref{shuff} and extracting the homogeneous part of the result
corresponding to the highest weight and depth we arrive at
\begin{equation*}
\tau(w\stuffle v)\equiv\tau w\tuffle\tau v
\qquad\text{for all}\quad w,v\in\mathbb Q\langle Z'\rangle.
\end{equation*}
Denoting
$$
\ol a=\tau a=\begin{cases}
y & \text{if $a=x$}, \\
xy & \text{if $a=xy$},
\end{cases}
$$
and using \eqref{tuff3} we find out that
\begin{align*}
\tau(aw\stuffle bv)
&\equiv\tau(aw)\tuffle\tau(bv)
\equiv(\tau w)\ol a\tuffle(\tau v)\ol b
\\
&\equiv(\tau w\tuffle(\tau v)\ol b)\ol a+((\tau w)\ol a\tuffle\tau v)\ol b
\\
&\equiv(\tau w\tuffle\tau(bv))\ol a+(\tau(aw)\tuffle\tau v)\ol b
\equiv(\tau(w\stuffle bv))\ol a+(\tau(aw\stuffle v))\ol b
\\
&\equiv\tau(a(w\stuffle bv)+b(aw\stuffle v)),
\end{align*}
which implies the desired result.
\end{proof}
\begin{theorem}
\label{th:S}
For admissible words $w=z_{s_1}\dots z_{s_l}$ and $v=z_{s_1'}\dotsb z_{s_m'}$ of weight $|w|=s_1+\dots+s_l$ and $|v|=s_1'+\dots+s_m'$, respectively,
$$
[w\stuffle v]\sim(1-q)^{-|w|-|v|}\zeta(w\shuffle v)
\qquad\text{as}\quad q\to1^-,
$$
where $\shuffle$ denotes the standard shuffle product of MZVs on $\mathbb Q\langle Z'\rangle$.
\end{theorem}
\begin{proof}
Because both $\varphi$ and $\tau$ respect the weight,
Proposition~\ref{prop4a} shows that the only terms that can potentially interfere with the asymptotic behaviour as $q\to1^-$
correspond to the same weight but lower depth. However, according to \eqref{shuff} and \eqref{tuff2}, the `shorter' terms
do not belong to $\mathbb Q\langle Z'\rangle$, that is, they are linear combinations of the monomials
$z_{q_1,r_1}\dots z_{q_n,r_n}$ with $r_1+\dots+r_n=l+m>n$, hence $r_j\ge2$ for at least one $j$.
The latter circumstance and Proposition~\ref{prop2} then imply
\begin{equation*}
\lim_{q\to1^-}(1-q)^{|w|+|v|}[z_{q_1,r_1}\dots z_{q_n,r_n}]=0.
\qedhere
\end{equation*}
\end{proof}
Theorem~\ref{th:S} asserts that the dual stuffle product \eqref{shuff} restricted from $\BD$ to the subalgebra $\MD$
reduces to the shuffle product of the algebra of MZVs in the limit as $q\to1^-$. This result is implicitly stated in~\cite{Ba14}.
More is true: using \eqref{tuff1} and Proposition~\ref{prop4a} we obtain
\begin{theorem}
\label{th:TS}
For two words $w=z_{s_1}\dots z_{s_l}$ and $v=z_{s_1'}\dotsb z_{s_m'}$, not necessarily admissible,
$$
[w\tuffle v-w\stuffle v]\sim(1-q)^{-|w|-|v|}\zeta(w*v-w\shuffle v)
\qquad\text{as}\quad q\to1^-,
$$
whenever the MZV on the right-hand side makes sense.
\end{theorem}
In other words, the $q$-zeta model of bi-brackets provides us with a (far reaching) regularisation of the MZVs:
the former includes the extended double shuffle relations as the limiting $q\to1^-$ case.
\begin{conjecture}[{Bachmann \cite{Ba14}}]
\label{conj1}
The resulting double stuffle (that is, stuffle and dual stuffle)
relations exhaust all the relations between the bi-brackets.
Equivalently (and simpler), the stuffle relations and the duality exhaust all the relations between the bi-brackets.
\end{conjecture}
We would like to point out that the duality $\tau$ from Section~\ref{sect:D} also exists for the algebra of MZVs \cite[Section~6]{Zud03}.
However the two dualities are not at all related: the limiting $q\to1^-$ process squeezes the appearances of $x$ before $y$ in
the words $x^{s_1}yx^{s_2}y\dots x^{s_l}y$, so that they become $x^{s_1-1}yx^{s_2-1}y\dots x^{s_l-1}y$. Furthermore, the duality of MZVs
respects the shuffle product: the dual shuffle product coincides with the shuffle product itself. On the other hand,
the dual stuffle product of MZVs is very different from the stuffle (and shuffle) products. It may be an interesting
problem to understand the double stuffle relations of the algebra of MZVs.
\section{Reduction to mono-brackets}
\label{sect:R}
In this final section we present some observations towards another conjecture of Bachmann about the coincidence of the $\mathbb Q$-algebras of bi- and mono-brackets.
\begin{conjecture}[Bachmann]
\label{conj2}
$\MD=\BD$.
\end{conjecture}
Based on the representation of the elements from $\BD$ as the polynomials from $\mathbb Q\langle x,y\rangle$
(see also the last paragraph of Section~\ref{sect:S}), we can loosely interpret this conjecture for the algebra of MZVs
as follows: all MZVs lie in the $\mathbb Q$-span of
$$
\zeta(s_1,s_2,\dots,s_l)=\zeta(x^{s_1-1}yx^{s_2-1}y\dots x^{s_l-1}y)
$$
with all $s_j$ to be at least~2 (so that there is no appearance of $y^r$ with $r\ge2$). The latter statement
is already known to be true: Brown~\cite{Bro12} proves that one can span the $\mathbb Q$-algebra of MZVs by the set with all $s_j\in\{2,3\}$.
\medskip
In what follows we analyse the relations for the model \eqref{TB}, because it makes simpler keeping track of the duality relation.
We point out from the very beginning that the linear relations given below are all experimentally found
(with the check of 500 terms in the corresponding $q$-expansions) but we believe that it is possible to establish
them rigorously using the double stuffle relations given above.
The first presence of the $q$-zeta brackets that are not reduced to ones from $\MD$ by the duality relation happens in weight~3.
It is $\bz{2\\2}$ and we find out that
$$
\bz{2\\2}
=\tfrac12\bz{2\\1}+\bz{3\\1}-\bz{2,1\\1,1}.
$$
There are in total 34 $q$-zeta brackets of weight up to~4,
\begin{gather*}
\bz{}^*, \; \bz{1\\1}^*, \; \bz{2\\1}=\bz{1\\2}, \; \bz{2\\2}^*, \; \bz{3\\1}=\bz{1\\3}, \; \bz{3\\2}=\bz{2\\3}, \; \bz{4\\1}=\bz{1\\4},
\\
\bz{1,1\\1,1}^*, \; \bz{2,1\\1,1}=\bz{1,1\\1,2}, \; \bz{1,2\\1,1}=\bz{1,1\\2,1}, \;
\bz{2,1\\2,1}=\bz{1,2\\1,2}, \; \bz{2,1\\1,2}^*, \; \bz{1,2\\2,1}^*,
\\
\bz{2,2\\1,1}=\bz{1,1\\2,2}, \; \bz{3,1\\1,1}=\bz{1,1\\1,3}, \; \bz{1,3\\1,1}=\bz{1,1\\3,1},
\\
\bz{1,1,1\\1,1,1}^*, \; \bz{2,1,1\\1,1,1}=\bz{1,1,1\\1,1,2}, \;
\bz{1,2,1\\1,1,1}=\bz{1,1,1\\1,2,1}, \; \bz{1,1,2\\1,1,1}=\bz{1,1,1\\2,1,1}, \;
\bz{1,1,1,1\\1,1,1,1}^*,
\end{gather*}
where the asterisk marks the self-dual ones. Only 21 of those listed are not dual-equivalent, and only five of the latter are not reduced to
the $q$-zeta brackets from $\MD$; besides the already mentioned $\bz{2\\2}$ these are $\bz{3\\2}$, $\bz{2,1\\2,1}$, $\bz{2,1\\1,2}$ and $\bz{1,2\\2,1}$. We find out that
\begin{align*}
\bz{3\\2}
&=\tfrac14\bz{2\\1}+\tfrac32\bz{4\\1}-2\bz{2,2\\1,1},
\displaybreak[2]\\
\bz{2,1\\2,1}
&=\bz{2,1\\1,1}+\tfrac12\bz{1,2\\1,1}-\bz{2,2\\1,1}+\bz{1,3\\1,1}-\bz{2,1,1\\1,1,1}-\bz{1,2,1\\1,1,1},
\displaybreak[2]\\
\bz{2,1\\1,2}
&=-\tfrac12\bz{2,1\\1,1}-\tfrac12\bz{1,2\\1,1}+2\bz{2,2\\1,1}+\bz{3,1\\1,1}-\bz{1,3\\1,1}+\bz{1,2,1\\1,1,1},
\displaybreak[2]\\
\bz{1,2\\2,1}
&=-\bz{2,1\\1,1}+2\bz{2,2\\1,1}+\bz{2,1,1\\1,1,1},
\end{align*}
and there is one more relation in this weight between the $q$-zeta brackets from $\MD$:
$$
\tfrac13\bz{2\\1}-\bz{3\\1}+\bz{4\\1}-2\bz{2,2\\1,1}+2\bz{3,1\\1,1}=0.
$$
The computation implies that the dimension $d_4^{\BD}$ of the $\mathbb Q$-space spanned by all multiple $q$-zeta brackets of weight not more than~4 is equal
to the dimension $d_4^{\MD}$ of the $\mathbb Q$-space spanned by all such brackets from $\MD$ and that both are equal to~15.
A similar analysis demonstrates that
$$
d_5^{\BD}=d_5^{\MD}=28 \quad\text{and}\quad d_6^{\BD}=d_6^{\MD}=51,
$$
and it seems less realistic to compute and verify that $d_n^{\BD}=d_n^{\MD}$ for $n\ge7$ though Conjecture~\ref{conj2}
and \cite[Conjecture (5.4)]{BK13} support
\begin{align*}
\sum_{n=0}^\infty d_n^{\MD}x^n
&\overset?=\frac{1-x^2+x^4}{(1-x)^2(1-2x^2-2x^3)}
\\
&=1+2x+4x^2+8x^3+15x^4+28x^5+51x^6+92x^7
\\ &\quad
+165x^8+294x^9+523x^{10}+O(x^{11}).
\end{align*}
We can compare this with the count $c_n^{\MD}$ and $c_n^{\BD}$ of all mono- and bi-brackets of weight $\le n$,
$$
\sum_{n=0}^\infty c_n^{\MD}x^n=\frac1{1-2x}
\quad\text{and}\quad
\sum_{n=0}^\infty c_n^{\BD}x^n=\frac{1-x}{1-3x+x^2}=\sum_{n=0}^\infty F_{2n}x^n,
$$
where $F_n$ denotes the Fibonacci sequence.
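The count $c_n^{\BD}$ can be verified by direct enumeration: there are exactly $w$ letters $z_{s,r}$ of weight $s+r-1=w$, and bi-brackets of weight at most~$n$ correspond to words whose letter weights sum to at most~$n$. A minimal Python sketch follows (the Fibonacci indexing above assumes the convention $F_0=F_1=1$):
\begin{verbatim}
# Minimal sketch: count bi-brackets of weight <= n; the cumulative
# counts match the generating function (1-x)/(1-3x+x^2).
def bibracket_counts(nmax):
    # a[m] = number of bi-brackets of weight exactly m; there are
    # w letters z_{s,r} with s+r-1 = w.
    a = [1] + [0] * nmax
    for m in range(1, nmax + 1):
        a[m] = sum(w * a[m - w] for w in range(1, m + 1))
    out, c = [], 0
    for m in range(nmax + 1):
        c += a[m]
        out.append(c)
    return out

print(bibracket_counts(5))  # -> [1, 2, 5, 13, 34, 89]
\end{verbatim}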
In addition, we would like to point out one more expectation for the algebra of (both mono- and bi-) brackets, which is not shared
by other $q$-models of MZVs: all linear (hence algebraic) relations between them seem to be over $\mathbb Q$, not over $\mathbb C(q)$.
\begin{conjecture}
\label{conj3}
A collection of \textup(bi-\textup)brackets is linearly dependent over $\mathbb C(q)$ if and only if
it is linearly dependent over~$\mathbb Q$.
\end{conjecture}
\begin{acknowledgements}
I have greatly benefited from discussing this work with Henrik Bachmann, Kurusch Ebrahimi-Fard, Herbert Gangl and Ulf K\"uhn\,---\,it is my pleasure
to thank them for numerous clarifications, explanations and hints. I thank the three anonymous referees of the journal
for pointing out some typos in the preliminary version and helping to improve the exposition.
I would also like to acknowledge that a part of this research was undertaken in ICMAT\,---\,Institute of Mathematical Sciences
(Universidad Aut\'onoma de Madrid, Spain)
during the Research Trimester on \emph{Multiple Zeta Values, Multiple Polylogarithms, and Quantum Field Theory}
(September--December 2014).
\end{acknowledgements}
\section{Introduction}
Gravitational Microlensing has been demonstrated to be a powerful
observational tool to study populations of stellar or planetary mass
objects which emit little detectable radiation. To date the major
emphasis of gravitational microlensing survey teams has been the
determination of the composition of the Galactic dark matter
(Alcock, {\it et al.} 1996a, 1997b, Ansari, {\it et al.} 1996,), but microlensing
observations toward the Galactic bulge have yielded a surprisingly
high microlensing rate (Alcock, {\it et al.} 1997a, Bennett, {\it et al.} 1995,
Udalski, {\it et al.} 1994a). This has important implications for the structure
of the Galaxy, but it also yields a relatively large sample of
microlensing events that can be used for other studies.
One of the most exciting possibilities is to use microlensing as a tool to
search for planets orbiting the lensing stars (Mao \& Paczy{\'n}ski 1991, Gould \&
Loeb 1992). Microlensing is unique among ground based planetary search
techniques for its ability to detect low mass planets (Elachi
1995; Bennett \& Rhie 1996); its sensitivity extends down to an Earth mass.
The microlensing lightcurve deviations caused by planets are generally
quite brief and they will affect only a fraction of all microlensing
events even if every lens star hosts its own planetary system. For these
reasons, microlensing planet searches require real-time event detection
(Alcock, {\it et al.} 1996b, Udalski, {\it et al.} 1994b)
and frequent microlensing event follow-up observations in order
to have high sensitivity to planetary lensing events. It
is still possible to detect planetary signals with microlensing survey
observations, and in this paper we present two events from the MACHO
Project Galactic bulge data which are likely to have been caused by
lenses with masses close to $M_{\rm Jup}$ (Jupiter's mass).
\section{Events}
\begin{figure}
\plottwo{fig_94.ps}{fig_closeup.ps}
\caption{The dual-color lightcurve of event 94-BLG-4 during the 1994
Galactic bulge season and a close-up of the lightcurve showing the binary
lens fit.\label{fig-a} }
\end{figure}
Figure 1 shows the lightcurve of event 94-BLG-4 from the 1994 bulge season.
This star is a clump giant star with $R = 16.7$ and $V-R = 1.1$ which has
maintained constant brightness during the 1993 through 1996 bulge seasons
with the exception of the short period of brightening shown in Figure 1.
This lightcurve shows a unique brightening which is achromatic but
also asymmetric, and it is also well explained by the binary microlensing
fit shown. The parameters for the binary fit are shown. ${\hat t}$ is the
Einstein diameter crossing time; $t_0$ is the time of closest alignment
between the source and lens center of mass; $a$ is the separation
of the lens components in units of the Einstein radius; $\theta$
is the angle between
the motion of the source relative to the line separating the lenses; and
$u_{\rm min}$ is the transverse distance of closest approach between the source and
lens center of mass. The $\chi^2=430.8$ for the fit shown with 648 data
points and 8 parameters. If we add two other parameters to allow for
a blended source, the $\chi^2$ is not improved significantly, and we find
that the amount of unlensed light superimposed upon the lensed source
is limited to less than $3\%$ (as expected for a clump giant source).
For comparison, $\chi^2=2835$ for a 5 parameter single lens fit, and if
we arbitrarily remove both the blue and red measurements at day 882.5,
we obtain $\chi^2=491.9$ for the single lens fit. Thus, while we have
undersampled the deviation from the best single lens fit, the deviation is
not completely confined to the pair of data points obtained in the observation
at day 882.5. Formally, the binary fit constrains both the event timescale
and the lens mass ratio quite accurately-to better than $3\%$. The value
${\hat t} = 10.7\,$days indicates a lens mass of $0.04\, {\rm M_\odot} $ with a $1-\sigma$
uncertainty of a factor of 3, but the overall ${\hat t}$ distribution suggests
that the mass is not much less than $0.1 {\rm M_\odot} $. This indicates that the
mass of the secondary lens is likely to have $m\sim 5M_{\rm Jup}$ with
a factor of 3 uncertainty.
Thus, the most likely explanation of this event is that the
lens is an M-dwarf system with
a giant planet at a projected separation of (very) roughly 1AU.
\begin{figure}
\plottwo{fig_95b3.ps}{fig_95b3c.ps}
\caption{The dual-color lightcurve of event 95-BLG-3 during the 1995
Galactic bulge season and a close-up of the lightcurve showing the
lens fit.\label{fig-b} }
\end{figure}
Figure 2 shows the lightcurve of the shortest event ever seen by the MACHO
collaboration with ${\hat t}=2.4\,$days. This event was detected with our
alert system, but it also passes the cuts used in our analysis of the '93
bulge data set. If we apply the standard relationship
between ${\hat t}$ and lens mass we find a most likely lens mass of about
$2M_{\rm Jup}$, but perhaps we should not use the ``most likely''
mass formula for our shortest event. Couldn't this event be part of
the tail of the event timescale distribution caused by more massive
lenses? The timescale distribution of events from two bulge seasons is
shown in Figure 3. If we assume that the mass distribution of lenses has
a lower cutoff (of order $0.1 {\rm M_\odot} $), then it follows that
the distribution of detected events will scale as ${\hat t}^3$ for small
${\hat t}$. (We have used the fact that our event detection efficiency
scales as ${\hat t}$ for small ${\hat t}$.) This implies that the number of
events with ${\hat t} < {\hat t}_c$ should scale as ${\hat t}_c^4$.
We can now compare this prediction to the timescale distribution
shown in Figure 3. The '93 data set has 1 event with ${\hat t} < 10\,$days
and 12 events with ${\hat t} < 20\,$days while the '93$+$'95 data set has
5 events with ${\hat t} < 10\,$days and 24 events with ${\hat t} < 20\,$days.
Scaling from these numbers with the ${\hat t}_c^4$ scaling law implies that
we should expect to detect between 0.003 and 0.01 events per year with
${\hat t} < 2.5\,$days, so it is unlikely for us to have detected such an
event as a part of the short timescale tail of stellar mass lenses.
Thus, we can treat this event as a part of a separate population and
the mass estimate of $2M_{\rm Jup}$ is a reasonable one. If it is
a planet, then it would have to either be in a distant orbit with a
projected separation of $>5$ or $10\,$AU, or it could be a planet that
has been removed from the planetary system it was born in.
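The quoted range follows from simple arithmetic; the following Python sketch (a minimal illustration using the event counts given above) reproduces it:
\begin{verbatim}
# Minimal sketch: expected yearly rate of events with t_hat < 2.5
# days under the t_hat_c^4 scaling law, from the counts above.
def rate(n_events, t_cut, n_years, t_target=2.5):
    return n_events * (t_target / t_cut) ** 4 / n_years

print(rate(1, 10.0, 1))   # '93, t_hat < 10 d      -> ~0.004 / yr
print(rate(12, 20.0, 1))  # '93, t_hat < 20 d      -> ~0.003 / yr
print(rate(5, 10.0, 2))   # '93+'95, t_hat < 10 d  -> ~0.010 / yr
print(rate(24, 20.0, 2))  # '93+'95, t_hat < 20 d  -> ~0.003 / yr
\end{verbatim}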
\begin{figure}
\plotfiddle{that_dist2.ps}{2.0truein}{0}{28}{28}{-100}{-50}
\caption{The ${\hat t}$ distributions of the 1993 MACHO bulge events and
the combined 1993 and 1995 bulge events (bold) are shown.}
\end{figure}
\section{Conclusions}
Unfortunately, we cannot treat either of these two events as definitive
detections of planetary mass objects. For the first event, 94-BLG-4,
we would require additional observations to fully characterize the
binary lightcurve and to definitively establish that our fit is the
correct one. For event 95-BLG-3, additional observations taken less than
24 hours after the event peak could have determined if the finite size of
the source star was resolved, which would have established this event
as a {\it bona fide} planetary mass lensing event. Had this event occurred
in 1996, we would have recognized this event in time to request
follow-up observations, but in early 1995
our alert system was operating with a time lag of about 30 hours.
At present, the time lag for MACHO alerts is typically less than 6 hours.
When similar events occur in the future, we can look forward to prompt
alert announcements and to higher quality data sets from the ever
expanding microlensing follow-up teams such as GMAN and PLANET
(Pratt, {\it et al.} 1995, Albrow, {\it et al.} 1995)
\acknowledgments
Work performed at LLNL is supported by the DOE under contract W7405-ENG-48.
Work performed by the CfPA personnel is supported in part by the
Office of Science and Technology Centers of
NSF under cooperative agreement AST-8809616.
Work performed at MSSSO is supported by the Bilateral Science
and Technology Program of the Australian Department of Industry, Technology
and Regional Development.
\section{Introduction}
In recent years, the rapid development of joint vision-language research has been supported by a flurry of benchmark datasets~\cite{liu2019clevr,antol2015vqa,goyal2017making,johnson2017clevr,hu2017modeling,kazemzadeh2014referitgame,REFCOCOG,hudson2019gqa,perez2018film} and methods~\cite{anderson2018bottom,hudson2018compositional,hudson2019learning}. The latest research trends~\cite{perez2018film,hudson2018compositional,santoro2017simple,mascharka2018transparency,mao2019neuro} have gone beyond a simple understanding of multi-modal information~\cite{morency2011towards,zadeh2016mosi,poria2015deep}, and focus on more advanced cognitive tasks, such as visual reasoning~\cite{zhang2020text,johnson2017clevr,liu2019clevr,hudson2019gqa}, visual question answering~\cite{antol2015vqa,goyal2017making,johnson2017clevr,hudson2019gqa,krishna2017visual}, and referring expression comprehension~\cite{plummer2015flickr30k,liu2019clevr,hu2017modeling,kazemzadeh2014referitgame,REFCOCOG}.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{fig0.pdf}
\caption{Comparison between the conventional modular network and our unified network (a-b), and the illustration of the proposed LaConv module (c).
LaConv can unify visual recognition and multi-modal interaction into one processing step. Based on it, we further build a unified and end-to-end network called LaConvNet. }
\label{fig0}
\vspace{-3mm}
\end{figure}
To accomplish these tasks, most existing vision-and-language (VL) systems adopt a modular structure. As shown in Fig.~\ref{fig0} (a), a typical VL system often uses a visual backbone, \emph{e.g.}, ResNet~\cite{he2016deep} or FasterRCNN~\cite{ren2015faster}, to extract the features of the input image, based on which another inference module is deployed to model the cross-modal interactions.
This long-popular design paradigm has achieved great success in various VL tasks~\cite{antol2015vqa,goyal2017making,johnson2017clevr,hu2017modeling}, but has also been criticized for its excessive parameters and high computational overhead~\cite{kim2021vilt}.
In contrast to this modular structure, we are committed to establishing a unified alternative by exploring language-guided visual recognition.
As shown in Fig.~\ref{fig0}-b, we aim at embedding the language information into the process of visual recognition and then directly outputting the language-dependent visual features.
This motivation is inspired by the human cognitive mechanism towards multi-modal tasks.
Neuroscience research~\cite{stein2008multisensory,mcgurk1976hearing,shams2002visual,bonath2007neural} shows that the processing mechanism of \textit{primary visual cortex cells} can be affected by other modalities, \emph{e.g.}, text or sound, and then produce multimodal sensory responses. This means that low-level visual recognition can also be driven by natural language signals.
For example, after receiving a natural language instruction, people will perform the visual recognition related to the instruction, \emph{e.g.}, paying attention to relevant regions and analyzing relevant information like colors or textures.
To achieve this target, we first propose a novel \emph{Language-Guided Dynamic Convolution} (LaConv), whose structure is depicted in Fig.~\ref{fig0}.
The property of ``\textit{language-guided visual recognition}'' is mainly reflected in the fact that LaConv can perform differentiated feature extraction on the same image according to different natural language instructions. This property is attributed to its dynamic convolution filters, which are predicted from natural language information.
Through this novel multi-modal convolution, LaConv can complete visual recognition and multi-modal reasoning in one processing step.
In addition to serving as a plug-in module for existing VL systems, we also exploit LaConv as a stand-alone block for building a unified multi-modal network. Specifically, we build the first fully language-guided convolution network with LaConv blocks, termed \emph{LaConvNet}.
As shown in Fig.\ref{fig0}, LaConvNet processes the input image from the pixel level, and embeds the natural language information into the complete process of visual feature learning. The output visual features can be directly used for multi-modal prediction.
Compared to modular VL systems, LaConvNet can dispense with large backbone networks, playing the roles of feature extraction and multi-modal inference at the same time.
To validate our approach, we conduct extensive experiments on four benchmark datasets, \emph{i.e.}, CLEVR~\cite{johnson2017clevr}, CLEVR-Humans~\cite{johnson2017inferring}, RefCOCO~\cite{kazemzadeh2014referitgame} and RefCOCO+~\cite{kazemzadeh2014referitgame}, of two VL tasks, \emph{i.e.}, Visual Question Answering (VQA)~\cite{johnson2017clevr,johnson2017inferring} and Referring Expression Comprehension (REC)~\cite{kazemzadeh2014referitgame}. The experimental results not only show the superior performance gains of LaConv as a multi-modal module over a set of state-of-the-art (SOTA) methods, but also confirm three advantages of LaConvNet as a unified multi-modal network:
\begin{itemize}
\item Without the large visual backbone, LaConvNet is much more compact than most existing VL systems.
\item LaConvNet can be easily generalized to most VL tasks with a few modifications.
\item When training from scratch, LaConvNet shows much better learning ability than existing modular methods.
\end{itemize}
In conclusion, the proposed LaConvNet is compact, efficient and highly generalizable, making it a viable alternative for most vision-and-language tasks.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{cond.pdf}
\caption{Illustration of the parameter generation based on natural language information. The input text features are first transformed into a language condition matrix by multiplying the affinity matrix between the text and the image features.
Then, the convolution filters are predicted based on this condition matrix.
A novel \emph{pixel packing} operation is applied to alleviate the \emph{low-rank degeneration} in low-level visual features. }
\label{condgen}
\vspace{-4mm}
\end{figure}
\section{Language-Guided Dynamic Convolution}
In this section, we give the definition of the proposed {LaConv}, whose structure is illustrated in Fig.\ref{fig0}.
We begin with the introduction of language-dependent parameter generation, and then describe the dynamic convolution.
\subsection{Language-conditioned Parameter Generation}\label{sec:lcpg}
As shown in Fig.\ref{condgen}, to achieve language-guided visual recognition, LaConv generates dynamic convolution filters based on natural language information.
These filters not only depend on the text features, but also on the space and the content of images.
Generally, the length of the text features is not consistent with the resolution of the image features, and the two modalities are not spatially aligned.
To this end, we first transform the text features into a condition matrix, which has the same shape as the image ones.
Each feature vector in this condition matrix is also semantically related to the corresponding image region.
Concretely, given the image features, $\textbf{I} \in \mathbb{R}^{(h\times w) \times d}$, and the text features, $\textbf{Q} \in \mathbb{R}^{l\times d}$, the language condition matrix $\textbf{C} \in \mathbb{R}^{(h\times w) \times d}$ is obtained by:
\begin{equation}\label{eq:condition}
\begin{aligned}
&\textbf{C} = \sigma(\textbf{A} (\textbf{Q}\textbf{W}_A)\textbf{W}_C),
\end{aligned}
\end{equation}
Here, $\sigma$ is the ReLU function, while $l$ is the length of text, $h\times w$ denotes the resolution of the image features, and $d$ is the feature dimension.
$\mathbf{A}\in \mathbb{R}^{\left(h\times w\right)\times l}$ is the affinity matrix between $\mathbf{I}$ and $\mathbf{Q}$, and its entries denote the coefficients between the features of the two modalities. Here, we resort to \emph{scaled dot-product attention}~\cite{vaswani2017attention} to compute the multi-modal coefficients:
\begin{equation}
\begin{aligned}
\textbf{A}=\text{Softmax}\Big(\frac{(\textbf{I}\textbf{W}_I)\,(\textbf{Q}\textbf{W}_Q)^T}{\sqrt{d}}\Big).
\end{aligned}
\label{att_func}
\end{equation}
To improve the module capacity, we also extend Eq.~\ref{eq:condition} into a \emph{multi-head} version~\cite{vaswani2017attention}.
With Eq.\ref{eq:condition} and Eq.\ref{att_func}, the obtained condition matrix not only has the same shape as the visual features, but also spatially relates to the image regions.
Therefore, through the condition matrix, we can effectively embed language information into the dynamic generations of parameters for each visual region and each multi-modal example.
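For illustration, a minimal single-head PyTorch sketch of Eq.~\ref{eq:condition} and Eq.~\ref{att_func} could look as follows; the projection matrices $\textbf{W}_I$, $\textbf{W}_Q$, $\textbf{W}_A$, $\textbf{W}_C$ are assumed here to be learnable $d\times d$ parameters, and the multi-head extension and pixel packing are omitted:
\begin{verbatim}
import torch

def language_condition(I, Q, W_I, W_Q, W_A, W_C):
    # I: (h*w, d) image features; Q: (l, d) text features
    d = I.shape[-1]
    # affinity matrix A (Eq. 2): scaled dot-product attention
    A = torch.softmax((I @ W_I) @ (Q @ W_Q).T / d**0.5, dim=-1)  # (h*w, l)
    # language condition matrix C (Eq. 1)
    return torch.relu(A @ (Q @ W_A) @ W_C)                       # (h*w, d)
\end{verbatim}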
\textbf{Pixel packing.} Since the generation of the condition matrix is based on the scaled dot-product between two types of features, it can easily lead to the issue of \emph{low-rank degeneration}~\cite{bhojanapalli2020low} when the number of image features is much larger than their feature dimension, \emph{e.g.}, for low-level image features.
To this end, we propose a novel operation called \textit{pixel packing} to alleviate this problem.
As shown in Fig.~\ref{condgen}, given the image features $\textbf{I}\in \mathbb{R}^{\left(h\times w\right) \times d}$, we first pack them into $m$ new visual tokens, where $m=(h\times w)/s^2$ and each token is the concatenation of $s\times s$ local features.
The number of image features is thus reduced by a factor of $s^2$. Correspondingly, the resolutions of the condition matrix $\textbf{C}$ and the affinity matrix $\textbf{A}$ in Eq.~\ref{eq:condition} also become $(h\times w)/s^2$.
Before the parameter prediction, we reshape the condition matrix back to the size of $(h\times w)$, which ensures that the predicted filters can adapt to the input image features.
Overall, pixel packing can alleviate the issue of low-rank degeneration by reducing the feature resolution, while maintaining the multi-modal interactions in Eq.~\ref{eq:condition}.
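As a concrete reference, here is a sketch under the assumption that packing amounts to a simple space-to-depth reshape (the actual implementation may differ):
\begin{verbatim}
import torch

def pixel_pack(I, h, w, s):
    # I: (h*w, d) -> (h*w/s^2, d*s^2), concatenating each s x s patch
    d = I.shape[-1]
    x = I.view(h // s, s, w // s, s, d).permute(0, 2, 1, 3, 4)
    return x.reshape((h // s) * (w // s), s * s * d)
\end{verbatim}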
\textbf{Convolution filters.\label{sec_conv}} Based on the condition matrix $\mathbf{C}$, we further predict the parameters of convolution filters $\textbf{W}_{conv}\in \mathbb{R}^{(h\times w) \times (k \times k \times g)}$, defined as:
\begin{equation}
\begin{aligned}
\textbf{W}_{conv} &= \mathbf{C}\textbf{W}_1+b_1.
\end{aligned}
\label{eq3}
\end{equation}
Here, $k\times k$ denotes the filter size and $g$ is the number of groups for dynamic convolution.
Although the parameter generation produces a large number of dynamic weights, \emph{i.e.,} $h\times w\times(k \times k \times g)$ overall, the trainable parameters are only $k^2gd+3d^2$.
Compared to a conventional convolution layer, whose parameter size is $(kd)^2$ and thus quadratic in the filter size, our parameter generation is still very lightweight.
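As a rough sanity check with illustrative values (these particular numbers are our own choice, not configurations taken from the experiments):
\begin{verbatim}
d, k, g, h, w = 256, 7, 8, 14, 14
dynamic   = h * w * (k * k * g)       # 76,832 weights generated per example
trainable = k**2 * g * d + 3 * d**2   # 296,960 learnable parameters
conv      = (k * d)**2                # 3,211,264 for a conventional k x k conv
\end{verbatim}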
\subsection{Language-guided Dynamic Convolution }
Based on the generated filters, we further perform dynamic convolutions to achieve language-guided visual feature learning.
The dynamic convolution used in LaConv is principally based on \emph{depth-wise convolution}~\cite{krizhevsky2012imagenet}, also known as \emph{group convolution}. Specifically, depth-wise convolution divides the image features into several groups by channel, and its convolutions are conducted independently on each group. The convolution filters are shared across all spatial regions.
The main differences of LaConv lie in two aspects. First, each image position $(i,j)$ has its own filters, which allows modeling the spatial information in the text features, \emph{e.g.}, ``left person''.
When filters are shared, such information is hard to recognize.
Second, based on the predicted filters described in Sec.~\ref{sec_conv}, LaConv can extract differentiated visual features of the same image based on different texts,
and the output features are highly related to the text.
Concretely, given the grouped image features $\textbf{I}^l \in \mathbb{R}^{h\times w \times \frac{d}{g}}$ and the predicted filters $\textbf{W} \in \mathbb{R}^{h\times w \times k \times k \times g}$, the convolution of LaConv is defined as:
\begin{equation}
\begin{aligned}
&\textbf{O}_{i,j}^l= \sum_{\Delta i=1}^{k}\sum_{\Delta j=1}^{k} (\textbf{I}^l_{R(i,j)})_{ \Delta i,\Delta j} \odot \textbf{W}_{ i,j, \Delta i,\Delta j,l}, \\
&\textbf{O}=\text{concat}(\textbf{O}^1,\textbf{O}^2,...,\textbf{O}^g).
\end{aligned}
\label{dyconv}
\end{equation}
Here, $g$ is the number of groups, $R(i,j)$ denotes the patch of $k\times k$ vectors centered on $\textbf{I}^l_{i,j}$, $\textbf{O} \in \mathbb{R}^{h\times w \times d}$ is the output, and $l$ denotes the group index.
Notably, {LaConv} does not incur additional computation compared to \textit{depth-wise convolution}.
We also implement {LaConv} with CUDA kernel, and add it to existing DL libraries, \emph{e.g.,} PyTorch~\cite{paszke2019pytorch}, to accelerate the computation.
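For reference, a naive (non-CUDA) sketch of Eq.~\ref{dyconv} based on \texttt{torch.nn.functional.unfold} is given below; the grouping and filter-flattening conventions are one consistent choice made for illustration, and the padding assumes odd $k$:
\begin{verbatim}
import torch
import torch.nn.functional as F

def laconv(I, W, h, w, k, g):
    # I: (h*w, d) image features; W: (h*w, k*k*g) predicted filters
    d = I.shape[-1]
    x = I.view(1, h, w, d).permute(0, 3, 1, 2)        # (1, d, h, w)
    patches = F.unfold(x, k, padding=k // 2)          # (1, d*k*k, h*w)
    patches = patches.view(g, d // g, k * k, h * w)   # split channel groups
    filt = W.view(h * w, k * k, g).permute(2, 1, 0)   # (g, k*k, h*w)
    out = (patches * filt.unsqueeze(1)).sum(dim=2)    # (g, d/g, h*w)
    return out.reshape(d, h * w).T                    # (h*w, d)
\end{verbatim}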
\begin{table}[t]
\centering
\begin{tabular}{cc|c|c|c}
\toprule
\multicolumn{1}{c|}{Output Size} & s & LaConvNet-10 & LaConvNet-15 & LaConvNet-19 \\ \hline
\multicolumn{1}{c|}{$224 \times 224$} & - & 16-d linear & 16-d linear & 16-d linear \\ \hline
\multicolumn{1}{c|}{$112 \times 112$} & - & 2x2, stride 2 maxpool & 2x2, stride 2 maxpool & 2x2, stride 2 maxpool \\ \hline
\multicolumn{1}{c|}{$112 \times 112$} & 8 & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{1pt} $\text{3x3, 16-d LaConv} \times 2$ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{1pt} $\text{3x3, 16-d LaConv} \times 3$ \end{tabular} & \begin{tabular}[c]{@{}c@{}} \specialrule{0em}{1pt}{1pt} $\text{3x3, 16-d LaConv} \times 3$ \end{tabular} \\[3pt] \hline
\multicolumn{1}{c|}{$56 \times 56$} & - & 2x2, stride 2 maxpool & 2x2, stride 2 maxpool & 2x2, stride 2 maxpool \\ \hline
\multicolumn{1}{c|}{$56 \times 56$} & 4 & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{1pt} $ \text{7x7, 64-d LaConv} \times 1$ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{1pt} $ \text{7x7, 64-d LaConv} \times 2$ \end{tabular} & \begin{tabular}[c]{@{}c@{}} \specialrule{0em}{1pt}{1pt} $ \text{7x7, 64-d LaConv} \times 3$ \end{tabular} \\[3pt] \hline
\multicolumn{1}{c|}{$28 \times 28$} & - & 2x2, stride 2 maxpool & 2x2, stride 2 maxpool & 2x2, stride 2 maxpool \\ \hline
\multicolumn{1}{c|}{$28 \times 28$} & 2 & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{1pt} $\text{7x7, 128-d LaConv} \times 2$ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{1pt} $\text{7x7, 128-d LaConv} \times 3$ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{1pt} $\text{7x7, 128-d LaConv} \times 4$ \end{tabular} \\[3pt] \hline
\multicolumn{1}{c|}{$14 \times 14$} & - & 2x2, stride 2 maxpool & 2x2, stride 2 maxpool & 2x2, stride 2 maxpool \\ \hline
\multicolumn{1}{c|}{$14 \times 14$} & 1 & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{1pt} $ \text{7x7, 256-d LaConv} \times 4$\end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{1pt} $ \text{7x7, 256-d LaConv} \times 5$\end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{1pt} $ \text{7x7, 256-d LaConv} \times 6$\end{tabular} \\[3pt] \hline
\multicolumn{1}{c|}{$7 \times 7$} & - & 2x2, stride 2 maxpool & 2x2, stride 2 maxpool & 2x2, stride 2 maxpool \\ \hline
\multicolumn{1}{c|}{$7 \times 7$} & 1 & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{1pt} $ \text{7x7, 512-d } \text{LaConv} \times 1$ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{1pt} $ \text{7x7, 512-d } \text{LaConv} \times 2$ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{1pt} $ \text{7x7, 512-d } \text{LaConv} \times 3$ \end{tabular} \\[3pt] \hline
\multicolumn{5}{c}{Classifier} \\ \bottomrule
\end{tabular}
\vspace{1mm}
\caption{ Network architecture of {LaConvNet}. ``X-d'' denotes the channel dimension of the transformations. ``s'' denotes the packing size of the LaConv block. Similar to ResNet, {LaConvNet} includes 5 stages and each stage contains several {LaConv} blocks. }
\label{net}
\vspace{-6mm}
\end{table}
\section{LaConvNet\label{net_intro}}
Based on LaConv, we further propose a unified and end-to-end network, termed \emph{LaConvNet}. Its network configurations are given in Tab.\ref{net}. LaConvNet processes images directly at the pixel level, completely abandoning traditional convolution backbones like ResNet~\cite{he2016deep} or Mask R-CNN~\cite{he2017mask}.
This property is the main difference of LaConvNet from most existing VL systems, which also makes LaConvNet much more compact.
Besides, we propose three sizes of LaConvNet, namely \textit{LaConvNet-10}, \textit{LaConvNet-15} and \textit{LaConvNet-19}, where the suffix indicates the number of LaConv layers. For each stage of LaConvNet, we set a proper \textit{packing size} $s$ to keep the expressive power of the parameter generation. For the language encoder of LaConvNet, we use a 1-layer LSTM and 3 self-attention layers~\cite{vaswani2017attention}.
Notably, the design of {LaConvNet} still has a large space for exploration, such as the choice of depth, width, filter size \emph{etc.}
In this paper, we only aim to provide an effective baseline network for the quick validation of our argument.
The detailed information of {LaConvNet} is given in the appendix.
\section{Experiments}
To validate LaConvNet and the LaConv module, we conduct extensive experiments on four benchmark datasets of VQA and REC, namely CLEVR~\cite{johnson2017clevr}, CLEVR-Humans~\cite{johnson2017inferring}, RefCOCO~\cite{kazemzadeh2014referitgame} and RefCOCO+~\cite{kazemzadeh2014referitgame}, and compare them with a set of SOTA methods~\cite{hudson2018compositional,shrestha2019answer,kim2018bilinear,Luo_2020_CVPR,yang2020improving}.
\subsection{Datasets and Metrics \label{datasets}}
\textbf{CLEVR}~\cite{johnson2017clevr} is a synthetic VQA dataset introduced by Johnson \textit{et al.}~\cite{johnson2017clevr}, which aims to diagnose various reasoning skills, \emph{e.g.,} relations and counting. It contains 999,968 image-question pairs, among which 699,989, 149,991 and 149,998 are for training, validation and test, respectively. We use the classification accuracy as the metric.
\textbf{CLEVR-Humans}~\cite{johnson2017inferring} replaces the auto-generated questions in CLEVR with human-annotated ones, which can test the generalization ability of VQA models. It has 17,817, 7,202 and 7,145 examples for training, validation and test, respectively. We use the classification accuracy as the metric.
\textbf{RefCOCO \& RefCOCO+}~\cite{kazemzadeh2014referitgame} are two real-world datasets for referring expression comprehension, which are collected via an interactive game interface. RefCOCO and RefCOCO+ are split into \textit{train}, \textit{val}, \textit{test A} and \textit{test B}. The referents of \textit{test A} are about people, while the ones of \emph{test B} are objects. Both datasets have 142,000 expressions for 50,000 bounding boxes of 20,000 images from MS-COCO~\cite{lin2014microsoft}. The expressions of RefCOCO are mainly about absolute locations, while RefCOCO+ contains more expressions about relative relations and attributes. Following previous works~\cite{yang2019fast,Luo_2020_CVPR}, a prediction is regarded as correct when its Intersection-over-Union (IoU) with the ground truth is larger than 0.5.
\subsection{Experimental Settings \label{detail}}
For CLEVR and CLEVR-Humans, the number of training epochs is 23, and 3 epochs are for warm-up.
For RefCOCO and RefCOCO+, the total training epochs are set to 35, 3 of which are for warm-up.
We use Glove~\cite{pennington2014glove} word embedding with a dimension of 300 to represent each input word.
The language encoder is built with an LSTM network and 3 self-attention layers~\cite{vaswani2017attention}, the dimensions of which are 512. The expanding ratio of FFN in LaConv is set to 4. All models are trained by \emph{Adam} optimizer~\cite{kingma2014adam}.
The basic learning rate is set to 1e-4, which is decayed by a factor of 0.2 at the last third and the last epochs. By default, we use ResNet-34 as the visual backbone for SOTA models trained from scratch. The task-specific heads of LaConvNet are borrowed from previous works on VQA~\cite{yu2019deep} and REC~\cite{Luo_2020_CVPR}, respectively. More details can be found in our supplementary materials.
\begin{table}[t]
\centering
\begin{tabular}{lccccccc}
\hline
\multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{\#Params}} &
\multirow{1}{*}{\textbf{Overall}} & \multirow{2}{*}{\textbf{Count}} & \textbf{Cmp.} & \multirow{2}{*}{\textbf{Exist}} & \textbf{Query.} & \textbf{Cmp.} \\
& & \textbf{Accuracy} & & \textbf{Num.} & & \textbf{Attr.} & \textbf{Attr.} \\ \hline
BUTD~\cite{anderson2018bottom} & 37M & 50.6 & 44.2& 68.7 & 64.3 & 44.5 & 53.7 \\
Film~\cite{perez2018film} & - & 97.6 & 94.5 & 93.8 & 99.2 & 99.2 & 99.0 \\
RN~\cite{santoro2017simple} & - & 95.5 & 90.1 & 93.6 & 97.8 & 97.1 & 97.1 \\
BAN~\cite{kim2018bilinear} & 119M &92.2&88.3&94.9&96.4&91.2&94.7 \\
RAMEN~\cite{shrestha2019answer} & 51M & 87.8&82.1&83.3&93.9&90.6&87.0 \\
MACNet~\cite{hudson2018compositional} & 27M & 98.5& 96.7 & 97.6 &99.3&99.3&98.4 \\ \hline
LaConvNet-10 & 14M & 98.3 & 96.4 & 97.5 & 99.2 & 99.3 & 98.3 \\
LaConvNet-15 & 20M & 98.7 & 97.4 & 96.4 & 99.4 & 99.5 & 99.1 \\
LaConvNet-19 & 26M &\textbf{ 99.1} & \textbf{ 97.9 } & \textbf{99.4 } & \textbf{99.5 } & \textbf{99.6 } & \textbf{99.3}\\ \hline
\end{tabular}
\vspace{1mm}
\caption{Comparisons between LaConvNet and SOTAs on CLEVR. All models are trained from scratch. \#Params denotes the parameter size\protect\footnotemark[1].}
\label{clevr_s}
\vspace{-5mm}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{lccccccc}
\toprule
\multicolumn{1}{c}{\textbf{Model}} & \textbf{\#Params} & \multicolumn{3}{c}{\textbf{RefCOCO}} & \multicolumn{3}{c}{\textbf{RefCOCO+}} \\ \cline{3-8}
& & val & testA & testB & val & testA & testB \\ \hline
MCN-single~\cite{Luo_2020_CVPR} & 24M & 56.4 & 62.3 & 48.1 & 39.4 & 44.7 & 29.4 \\
One-Stage-BERT~\cite{yang2019fast} & 33M & 58.5 & 63.7 & 51.3 & 44.9 & 51.1 & 36.2 \\
ReSC~\cite{yang2020improving} & 31M &58.9&63.8&50.4&44.7&51.3&36.6\\ \hline
LaConvNet-10 & 12M & 60.6 & 65.1 & 52.5 & 49.1 & 55.1&39.2 \\
LaConvNet-15 & 18M &60.9& 65.2& 53.6&
49.2& 55.3& 39.5
\\
LaConvNet-19 & 24M & \textbf{62.5}& \textbf{67.9}& \textbf{55.9}
&\textbf{49.4}& \textbf{55.7}& \textbf{40.5}
\\ \bottomrule
\end{tabular}
\vspace{1mm}
\caption{Comparisons between LaConvNet and SOTAs on RefCOCO and RefCOCO+. All methods are trained from scratch. \#Params denotes the parameter size\protect\footnotemark[1].}
\vspace{-6mm}
\label{refcoco}
\end{table}
\begin{table}[t]
\center
\subtable[Model ablations.]{
\tablestyle{4.1pt}{1.2}\begin{tabular}{cx{22}c}
\toprule
LaConvNet-10 & Clevr & RefCOCO \\ \hline
\multicolumn{1}{l}{+ Base} & 25.3 & 30.2\\
\multicolumn{1}{l}{+ Dynamic filters} & {97.7} & {57.3}\\
\multicolumn{1}{l}{+ Pixel packing} & 98.3 & 60.6\\ \bottomrule\\ \\
\end{tabular}}\hspace{15mm}\vspace{-1.5mm}
\subtable[Parameters and computations.]{
\tablestyle{4.88pt}{1.2}\begin{tabular}{cx{22}x{22}}
\toprule
Network & Params & Madds \\ \hline
ResNet34~\cite{he2016deep} & 21.3M & 3.68G \\
ResNet101~\cite{he2016deep} & 42.5M & 7.85G \\ \hline
LaConvNet-10 & 10.0M & 1.86G \\
LaConvNet-15 & 15.9M & 2.82G \\
LaConvNet-19 & 21.8M & 3.70G \\ \bottomrule
\end{tabular}}
\caption{(a). Ablation study of LaConvNet on Clevr \textit{val} set and RefCOCO \textit{val} set. (b). Parameters and computational costs of LaConvNet and two ResNets. MAdds denotes \textit{multiplication-addition}~\cite{howard2017mobilenets}, which indicates the computation cost. All results are reported with the image resolution of $224 \times 224$ and the text length of $15$. Parameters and computations of the language encoder are excluded. }\vspace{-0.5mm}
\label{tab1}
\vspace{-3mm}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{lcccc}
\toprule
\multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{Prog.}} & \multirow{1}{*}{\textbf{Overall}} & \textbf{Humans} & \textbf{Humans} \\
& & \textbf{Accuracy} & \textbf{Before FT} & \textbf{After FT} \\ \hline
IEP~\cite{johnson2017inferring} & 700k & 96.9 & - & - \\
DDRprog~\cite{suarez2018ddrprog} & 700k & 98.3 & - & - \\
Tbdnet~\cite{mascharka2018transparency} & 700k & 98.7 & \textbf{-} & \textbf{-} \\ \hline
MACNet~\cite{hudson2018compositional} & 0 & \underline{ 98.9} & \underline{ 57.4} & \underline{ 81.5} \\
NS-CL~\cite{mao2019neuro} & 0 & \underline{ 98.9} & - & - \\ \hline
LaConv*+ResNet101 & 0 & \textbf{99.1} & \textbf{58.9} & \textbf{82.4} \\ \bottomrule
\end{tabular}
\vspace{1mm}
\caption{Performance comparison between LaConv and SOTA methods on CLEVR and CLEVR-Humans. \textit{Prog.} denotes the number of extra program ground truths used during training. \textit{LaConv* }is a structure with 6 LaConv layers. The visual backbones of all models are pre-trained on the ImageNet.}
\label{clevr}
\vspace{-6mm}
\end{table}
\begin{figure}[t]
\centering
\subfigure[The accuracy-parameter curve.]{
\includegraphics[width=0.4\linewidth]{1.pdf}}\hspace{12mm}
%
\subfigure[Training cost of LaConvNet and SOTAs.]{
\includegraphics[width=0.4\linewidth]{2.pdf}}
\caption{ (a) Performance comparison between LaConvNet and three SOTA modular networks under different parameter sizes on RefCOCO~\cite{kazemzadeh2014referitgame} dataset.
(b) Comparisons of training costs between LaConvNet-10 and three SOTA modular networks. The backbone of SOTAs is ResNet101.
}
\label{curve}
\end{figure}
\subsection{Experimental Results}
\footnotetext[1]{The parameters of the language encoder are not included.}
\subsubsection{Comparison with SOTA Methods}
We compare {LaConvNet} to a set of the SOTA methods~\cite{Luo_2020_CVPR,yang2019fast,yang2020improving,johnson2017inferring,perez2018film,anderson2018bottom,kim2018bilinear,shrestha2019answer,hudson2018compositional} on three benchmark datasets, of which results are given in Tab.~\ref{clevr_s} and Tab.~\ref{refcoco}. For fair comparisons, all methods are trained on the corresponding dataset from scratch.
CLEVR is a dataset for examining the visual reasoning ability of VL systems, and its scenes and objects are relatively simple.
From Tab.~\ref{clevr_s}, we can see that LaConvNets have obvious advantages over SOTA methods in both overall accuracy and parameter size.
These results also indicate that LaConvNet can not only extract language-related visual features, but also achieve efficient multi-modal reasoning. The other observation is that even though the images of CLEVR are very simple, these SOTA modular networks are still affected by the lack of backbone pre-training. In contrast, the impact of pre-training on LaConvNet is small, as shown in Tab.~\ref{clevr}, suggesting that its unified structure is already capable of learning simple visual knowledge without additional data.
On two real-image datasets, \emph{i.e.}, RefCOCO and RefCOCO+, the advantages of LaConvNets become more prominent.
Under the same or fewer parameters, LaConvNet-19 has obvious performance gains over SOTAs ranging from +6.4\% to +37.8\%.
This huge performance gap also reflects the merits of LaConvNet's unified structure over the modular networks in visual feature learning, especially in RefCOCO+, which focuses on the recognition of low-level information.
However, we also note that our method still needs an effective pre-training scheme to further improve its performance on these real-image datasets. This is mainly because real-world images contain many complex scenes and diverse objects, and a limited number of multi-modal samples cannot provide sufficient learning information.
The pre-training of LaConvNet will be left in our future work.
\subsubsection{Ablation Study}
We first examine the effects of dynamic parameter generation and pixel packing of LaConv, of which results are given in Tab.~\ref{tab1} (a). Here, ``Base'' denotes the single-modal group convolution network, which has a very limited ability to perform multi-modal prediction.
As shown in Tab.~\ref{tab1} (a), with the language-dependent parameter generation, the performance of LaConvNet has been greatly improved, indicating its ability to simultaneously process visual information and perform multi-modal reasoning. Another observation is that pixel packing can improve the expressive power of LaConvNet, \emph{e.g.}, +3.3\% on RefCOCO, suggesting its effectiveness in dealing with \emph{low-rank degeneration}.
We also validate LaConv as a plug-in module to the existing modular structure, of which results are given in Tab.~\ref{clevr}. Specifically, we construct a modular network with 6 LaConv layers and a ResNet101 backbone.
From this table we can see that as a multi-modal inference module, LaConv still outperforms SOTA methods with different design principles, such as the symbol-based (TBDNet~\cite{mascharka2018transparency}), relation-based (RN~\cite{santoro2017simple}) and attention-based (BAN~\cite{kim2018bilinear} and BUTD~\cite{anderson2018bottom}) models.
The latest symbol-based method, \emph{i.e.,} TBDNet~\cite{mascharka2018transparency}, uses all program ground truths during training, yet is still inferior to LaConv. Compared to the SOTA method MACNet~\cite{hudson2018compositional}, LaConv is still slightly better, \emph{i.e.,} +0.2\%.
When transferred to the CLEVR-Humans dataset, the performance gains of LaConv become more obvious, which well confirms its generalization ability.
\subsubsection{Model Efficiency}
The efficiency of {LaConvNet} is described in Tab.\ref{tab1} (b) and Fig.~\ref{curve}. In Tab.\ref{tab1} (b), we compare LaConvNet with alternative vision backbones in terms of both parameter size and computation overhead. As a language-conditioned convolution network, {LaConvNet} is lightweight and efficient. For example, LaConvNet-10 has almost 4 times fewer parameters and computations than ResNet101. Considering that modular networks require extra modules for language-vision fusion, the compactness of the unified LaConvNet is even more distinct. In Fig.~\ref{curve}, we compare LaConvNet with SOTA modular networks under different parameter sizes\footnotemark[2]\footnotetext[2]{We test SOTA models with different backbones, including ResNet18, ResNet34 and ResNet101.}. We find that LaConvNet achieves a better trade-off between performance and parameters than the existing modular models. In terms of training efficiency, LaConvNet saves 2/3 of the training cost against the SOTA REC model, \emph{i.e.,} ReSC~\cite{yang2020improving}. These experiments well support the efficiency of the unified architecture.
\begin{figure}[t]
\centering
\includegraphics[width=0.98\columnwidth]{vis2.pdf}
\caption{Visualizations of the attention maps in language-conditioned parameter generation. We visualize phrases of a question and their corresponding visual attention. The phrase and the image index of the same colors correspond to each other. }
\label{vis1}
\vspace{-2mm}
\end{figure}
\subsubsection{Qualitative Analysis}
In this section, we give detailed visualizations to answer two key questions of LaConv and LaConvNet, \emph{i.e.,} ``\textit{is the parameter generation reliable and interpretable?}'' and ``\textit{what convolutions are learned from the natural language instructions?} ''
\textbf{Is the condition generator reliable and interpretable?}
{LaConv} is more interpretable than the traditional convolution due to the language-dependent parameter generation.
To support this argument, we visualize the affinity matrix $\textbf{A}$ of its parameter generations.
As shown in Fig.~\ref{vis1}, each phrase of a question attends to its corresponding region.
For instance, in the first example of Fig.\ref{vis1}, the logical phrase of ``the same as'' highlights two \textit{same} cubes.
Analogically, the spatial phrase of ``the right of'' in the second example is also related to the \textit{right} object.
In addition, other referring phrases, \emph{e.g.,} ``the large shiny object'' and ``the tiny object'', are also visualized in the attention maps.
Based on these observations, we believe that the generated convolution filters of a position can accurately execute the corresponding language instructions, which makes {LaConv} more reliable and interpretable.
\textbf{What convolutions are learned from language instructions?}
Unlike the conventional convolutions that are weight-sharing for each spatial position, {LaConv} depends on both image position and text content.
To examine its dynamics, we visualize the filters in each stage of {LaConvNet} in Fig.\ref{vis2}.
For a better understanding, we select four positions of the example image, and visualize their filters.
From Fig.\ref{vis2}, the first observation is that the filters for different groups vary greatly (along the vertical axis), which suggests that each group of convolutions is responsible for different recognition patterns.
The second observation is that the filters at the initial stage are relatively static, \emph{i.e.,} filters of the same group are similar for different positions (along the horizontal axis).
Such a finding suggests that these convolutions focus on learning low-level visual representations, and they are less affected by natural language information.
However, we notice that as the recognition progresses, the filters of the same group vary drastically, presenting different intensities and shapes, \emph{e.g.,} in Stages 3-5.
This observation may suggest that the language instructions start to dynamically guide the visual recognition, so that different positions present distinct patterns. Conclusively, these visualizations indicate that the language-guided visual recognition of LaConvNet is a continuous process, and the impact of natural language information is reflected in its weights.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{vis4.pdf}
\caption{Visualizations of convolution kernels in the {LaConvNet-10}.
Vertical axis denotes each group of convolution kernels. Horizontal axis represents the positions of convolutions that are marked in the image.
The colors denote the magnitude of values in filters, and the more red color indicates a larger value.
For each stage, we select the last layer for visualization. }
\label{vis2}
\vspace{-2mm}
\end{figure}
\section{Related Work}
In the latest developments of the joint vision-and-language (VL) study, how to equip a VL system with human-like reasoning ability has become the research focus of various VL tasks, such as \emph{visual question answering}~\cite{antol2015vqa,goyal2017making,johnson2017clevr,hudson2019gqa,krishna2017visual} and \textit{referring expression comprehension}~\cite{plummer2015flickr30k,liu2019clevr,hu2017modeling,kazemzadeh2014referitgame,REFCOCOG}.
One research direction is to design an inference network that can obtain relevant visual information from the extracted visual features, such as attention-based networks~\cite{hudson2018compositional,yang2016stacked}, relational networks~\cite{santoro2017simple} or modulated networks~\cite{perez2018film,santoro2017simple}.
Differing from these methods, our goal is to use language cues to guide image feature learning in convolutions, thereby integrating the multi-modal reasoning into the visual recognition process.
There are several works~\cite{perez2018film,nguyen2020revisiting,de2017modulating,gao2018question} with similar design principles to ours.
Among them, one representative method is FiLM proposed by Perez \emph{et al.}~\cite{perez2018film}, which learns normalization weights by language features and applies them after convolutions.
The other is the hybrid convolution proposed by Gao \emph{et al.}~\cite{gao2018question}.
In these methods, the conventional convolution filters are fused with language features and executed in a static convolution, \emph{i.e.,} filters are shared across different images under the same question.
In contrast, {LaConv} generates dynamic language conditions based on both language and visual information, and directly performs language-driven convolutions.
The proposed LaConvNet can perform language-guided visual recognition from pixel level, and completely abandon the traditional convolution backbones.
\section{Conclusion}
In this paper, we explore the language-guided visual recognition to achieve a unified reasoning structure for vision-and-language tasks.
We first propose a novel language-guided dynamic convolution module called LaConv. With the parameters generated by natural language information, LaConv can complete visual feature learning and multi-modal inference in one forward step.
Based on {LaConv}, we build a fully language-driven network, termed {LaConvNet}.
{LaConvNet} gets rid of CNN blocks entirely, and directly performs visual reasoning from raw pixels. Extensive experiments are conducted on four benchmarks of two VL tasks, and the experimental results greatly confirm the efficiency and the compactness of the proposed unified network.
\bibliographystyle{ieee_fullname}
\section{Introduction}
Quantum spin liquids (QSLs) \cite{anderson_resonating_1973,savary_quantum_2017} are realized in certain spin systems where the interplay of frustration and quantum fluctuations suppresses long range order.
These exotic phases of matter cannot be understood in terms of spontaneous symmetry breaking,
but are instead characterized by their long range entanglement and emergent fractionalized excitations.
The lack of local order parameters makes it difficult to experimentally detect QSLs by their static properties--except for showing the absence of conventional order.
Instead it appears more promising to study dynamical properties of QSLs (e.g., the dynamical spin structure factor)
which encode characteristic fingerprints of topological order\cite{qi_dynamics_2009,han_fractionalized_2012,dodds_quantum_2013,punk_topological_2014,morampudi_statistics_2017}.
On the theory side, significant insight into the physics of QSLs comes from the study of exactly solvable models.
A prominent example is the Kitaev model on the honeycomb lattice\cite{kitaev_anyons_2006}, which exhibits a QSL phase
featuring fractionalization of spin-$1/2$ degrees of freedom into fluxes and Majorana excitations.
The Kitaev interaction, a strongly anisotropic Ising exchange, appears to be realized approximately in compounds with strong spin-orbit interaction\cite{jackeli_mott_2009,witczak-krempa_correlated_2014,nussinov_compass_2015,rau_spin-orbit_2016,winter_models_2017},
such as the iridates Na$_2$IrO$_3$, Li$_2$IrO$_3$~\cite{chaloupka_zigzag_2013}, and $\alpha$-RuCl$_3$~\cite{plumb_-rucl3:_2014,sears_magnetic_2015,banerjee_proximate_2016}.
It may also be realized in metal-organic frameworks~\cite{yamada_designing_2017}.
In such materials, additional interactions are important and typically lead to long-range magnetic order,
nonetheless signatures of being in the proximity to the Kitaev QSL are discussed\cite{yadav_kitaev_2016,banerjee_proximate_2016,banerjee_neutron_2017}.
Recent attention has shifted to applying a magnetic field\cite{yadav_kitaev_2016,janssen_honeycomb-lattice_2016,zheng_gapless_2017,janssen_magnetization_2017,jansa_observation_2017,sears_phase_2017,winter_probing_2018,gohlke_quantum_2018},
in particular experiments on the Kitaev compound $\alpha$-RuCl$_3$ (with an in-plane magnetic field) reveal
a single transition into a quantum paramagnetic phase with a spin-excitation gap\cite{johnson_monoclinic_2015,ponomaryov_unconventional_2017,wolter_field-induced_2017,wang_magnetic_2017,banerjee_excitations_2018,lampen-kelley_anisotropic_2018,hentrich_unusual_2018}.
In this article, we consider the Kitaev model in a magnetic field along $[111]$,
such that the field couples to the spins in a symmetry-equivalent way and the field does not prefer any bond in particular.
While the magnetic field breaks integrability, Kitaev identified, within perturbation theory, two three-spin exchange terms
that break time-reversal symmetry and open a gap in the spectrum.
One of the terms retains integrability and, upon adding it to the Kitaev model,
leads to a topologically ordered phase hosting non-abelian anyons\cite{kitaev_anyons_2006}.
However, numerical simulations~\cite{jiang_possible_2011}
reveal that the same topological phase occurs for small magnetic fields and ferromagnetic Kitaev coupling.
The topological phase turns out to be more stable, by one order of magnitude in the critical field strength,
if an antiferromagnetic coupling is considered\cite{zhu_robust_2017}.
Remarkably, an additional regime, possibly gapless, between the low-field topological and the high-field polarized phase appears to exist\cite{zhu_robust_2017}.
In this work, we employ large scale infinite density matrix renormalisation group (iDMRG) methods\cite{white_density_1992,mcculloch_infinite_2008,phien_infinite_2012,kjall_phase_2013} to investigate the ground state phase diagram
of the Kitaev model in a magnetic field along $[111]$ and simulate its dynamics using a matrix-product operator (MPO) based time-evolution\cite{zaletel_time-evolving_2015}.
The topologically ordered phase is characterized by its finite topological entanglement entropy (TEE)\cite{levin_detecting_2006,kitaev_topological_2006}.
By subtracting contributions of the Majorana fermions and the $\mathbb Z_2$-gauge field
from the numerically obtained entanglement entropy of a bipartition,
we extract a remainder which is identical to the TEE in the integrable case.
In doing so, we obtain a clear signature of non-abelian anyonic quasiparticles in the topological phase.
In a magnetic field, this remainder is still consistent with the existence of non-abelian anyons.
Furthermore, within the topological phase the correlation length decreases with magnetic field in a way
that is consistent with a cubic opening of the gap as found for the three-spin exchange\cite{kitaev_anyons_2006}.
However, the dynamical spin-structure factor in presence of a field behaves very differently
compared to what is known for the three-spin exchange\cite{knolle_dynamics_2015}.
The magnetic field causes the flux degrees of freedom to become mobile.
As a consequence the low-energy spectrum contains more structure and
the gap in the dynamical spin-structure factor is reduced.
Approaching the intermediate regime from the polarized phase,
the magnon modes reduce in frequency and simultaneously flatten.
This resembles the phenomenology within linear spin wave theory (LSWT)\cite{janssen_honeycomb-lattice_2016,mcclarty_topological_2018},
but the transition is significantly renormalised to lower fields.
Close to the transition,
a broad continuum exists that, within our reachable resolution in frequency,
reaches down to zero frequency and merges with the single magnon branches.
At the transition, the spectrum appears to be (nearly) gapless in the entire reciprocal space.
We do not observe an opening of a gap in the intermediate regime.
The remainder of this paper is structured as follows:
In Sec. \ref{scn:mod} we introduce the model consisting of Kitaev term,
Zeeman coupling to a magnetic field along $[111]$,
and three-spin exchange.
In Sec. \ref{scn:gs}, the ground state phase diagram is discussed for both signs of the Kitaev coupling.
We then focus on the antiferromagnetic Kitaev coupling in Sec. \ref{scn:dsf}
and study its dynamical signatures within the low-field topological as well as the high-field polarized phases.
We conclude with a summary and discussion in Sec. \ref{scn:dis}.
\section{Model} \label{scn:mod}
The Hamiltonian describing the Kitaev model in a magnetic field along [111] direction reads
\begin{equation}
H = \sum_{\langle i,j \rangle_\gamma} K_{\gamma} S_i^\gamma S_j^\gamma
- h \sum_i \left( S^x_i + S^y_i + S^z_i \right) ~,
\label{eqn:H_KH111}
\end{equation}
where the first term is the pure Kitaev model exhibiting strongly anisotropic spin exchange coupling\cite{kitaev_anyons_2006}.
Neighboring spins couple depending on the direction of their bond $\gamma$ with $S^x S^x$, $S^y S^y$ or $S^z S^z$, cf. Fig. \ref{fig:H_and_K3}(a).
The second term is the Zeeman-coupling of the spins to a magnetic field applied in $[111]$ direction.
\begin{figure}[tb]
\includegraphics{Kitaevhexagon.pdf}
\includegraphics{KitaevK3hexagon.pdf}
\caption{
(a) Bonds labeled with x, y, and z and an exemplary $S^y_i S^y_j$ Kitaev exchange (orange),
(b) a single three-spin term $S^x_i S^z_j S^y_k$ of the three-spin interaction in $H_{K_3}$.}
\label{fig:H_and_K3}
\end{figure}
In the zero field limit, the Kitaev model exhibits a quantum spin liquid ground state with fractionalized excitations\cite{kitaev_anyons_2006}.
Depending on $K_\gamma$, the spectrum of the fermions is either \emph{gapped} (\emph{A-phase}) or \emph{gapless} (\emph{B-phase}).
Let the $K_\gamma$ be sorted as $K_\alpha \ge K_\beta \ge K_\gamma$, then the gapless B-phase occurs if $|K_\alpha| \le |K_\beta| + |K_\gamma|$
and the A-phase if $|K_\alpha| > |K_\beta| + |K_\gamma|$.
In the remainder, we consider the isotropic case $K_\gamma = K = \pm 1$.
Flux degrees of freedom are defined by the plaquette operator $W_p = \prod_{i \in \mathcal P} \sigma^{\gamma(i)}_i$,
where $\gamma(i) = \{x,y,z\}$ equals the bond, that is not part of the loop $\mathcal P$ around the plaquette.
The $W_p$ commute with the Hamiltonian (in the $h=0$ limit) and have eigenvalues $\pm 1$.
Thus, the $W_p$'s are quantum numbers separating the full Hilbert space into subspaces,
for each of which a free fermion problem remains to be solved.
The ground state lies in the flux-free sector, that is $\forall i: W_{p,i} = +1$.
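As a quick numerical check, the following self-contained sketch on a single hexagon verifies that $W_p$ squares to the identity and commutes with all Kitaev bond terms; the site ordering and bond labels below are one consistent choice fixed by us for illustration, and Pauli rather than spin-$1/2$ operators are used, which only changes the commutator by an overall factor:
\begin{verbatim}
import numpy as np
from functools import reduce

P = {'x': np.array([[0, 1], [1, 0]], dtype=complex),
     'y': np.array([[0, -1j], [1j, 0]]),
     'z': np.array([[1, 0], [0, -1]], dtype=complex)}

def op(paulis, n=6):
    # tensor product of Paulis acting on the given sites
    mats = [P[paulis[i]] if i in paulis else np.eye(2) for i in range(n)]
    return reduce(np.kron, mats)

# bonds around one hexagon: (i, j, label), each label appearing twice
bonds = [(0, 1, 'x'), (1, 2, 'y'), (2, 3, 'z'),
         (3, 4, 'x'), (4, 5, 'y'), (5, 0, 'z')]
H = sum(op({i: c, j: c}) for i, j, c in bonds)   # Kitaev terms, K = 1
# W_p: at each site, the label of the bond pointing out of the plaquette
Wp = op({0: 'y', 1: 'z', 2: 'x', 3: 'y', 4: 'z', 5: 'x'})

assert np.allclose(H @ Wp, Wp @ H)        # [H, W_p] = 0
assert np.allclose(Wp @ Wp, np.eye(64))   # eigenvalues are +/-1
\end{verbatim}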
For later use, we comment on placing the Kitaev model on a cylinder.
A second flux operator of a non-contractable loop $\mathcal C$ going around the cylinder can be defined:
$W_l = \prod_{i\in \mathcal C} \sigma^{\gamma(i)}_i$.
Similarly to $W_p$, $W_l$ commutes with the Hamiltonian, has eigenvalues $\pm 1$, and separates the full Hilbert space in two subspaces.
With respect to the free fermions, $W_l = -1$ (flux-free) corresponds to periodic
and $+1$ to antiperiodic boundary conditions along the circumference of the cylinder.
The ground states of the two sectors are separated in energy by $\Delta E$,
which depends on the circumference $L_\text{circ}$ and vanishes in the limit $L_\text{circ} \rightarrow \infty$.
Applying a magnetic field $h$ along $[111]$, as in Eq. (\ref{eqn:H_KH111}), breaks time-reversal symmetry
and opens a gap in the fermionic spectrum.
The lowest order terms breaking time-reversal and not changing the flux configuration
are the three-spin exchanges $S_i^x S_j^y S_k^z$.
Two such terms exist\cite{kitaev_anyons_2006}.
The one illustrated in Fig.~\ref{fig:H_and_K3}(b) plus symmetric variants
results in a quadratic Hamiltonian for the Majorana fermions
and thus preserves the integrability of the original model.
The corresponding Hamiltonian reads
\begin{equation}
H_{K_3} = \sum_{\langle i,j \rangle_\gamma} K_{\gamma} S_i^\gamma S_j^\gamma
+ K_3 \sum_{\langle \langle i,j,k \rangle \rangle} S_i^x S_j^y S_k^z ~,
\label{eqn:H_K3}
\end{equation}
where $\langle \langle . \rangle \rangle$ denotes an ordered tuple $(i,j,k)$ of neighboring sites such that the $S^x$, $S^y$, and $S^z$ at the outer two sites coincide with the label of the bond connecting to the central site.
The flux operators $W_p$ and $W_l$ still commute with $H_{K_3}$ and separate the Hilbert space.
The remaining fermionic Hamiltonian is quadratic with the corresponding bands having non-zero Chern number $\pm 1$ and
yielding composite excitations with anyonic exchange statistics\cite{kitaev_anyons_2006}.
\section{Ground State Phases} \label{scn:gs}
\begin{figure}
\centering
\includegraphics[]{Honeycomb_rhombic_3-crop.pdf}
\caption{
Geometries used for iDMRG and their corresponding accessible momenta (orange lines) in reciprocal space with respect to the first Brillouin zone (inner hexagon).
The second Brillouin zone is shown partially.
The roman numbers label links across the boundary.
(a) \emph{rhombic} geometry with three unit cells, $L_\text{circ} = 6$ sites, along the circumference
and (b) its corresponding reciprocal space.
(c,d) \emph{rhombic-2} geometry with five unit cells circumference, $L_\text{circ} = 10$ sites.
}
\label{fig:lat_BC}
\end{figure}
The ground state is obtained using the \emph{matrix product state} (MPS) based \emph{infinite density matrix renormalisation group} (iDMRG) method~\cite{white_density_1992,mcculloch_infinite_2008,phien_infinite_2012,kjall_phase_2013}.
Being a standard technique for one-dimensional systems,
it has been used in two dimensions by wrapping the lattice on a cylinder
and mapping the cylinder to a chain with longer range interactions.
We employ a \emph{rhombic-2} geometry with a circumference of $L_\text{circ} = 10$ sites
and a \emph{rhombic} geometry with $L_\text{circ}=6$ as illustrated in Fig.~\ref{fig:lat_BC}.
Both geometries capture the $K-$points in reciprocal space and hence are gapless for pure Kitaev-coupling ($h = 0$).
A main advantage of the \emph{rhombic-2} geometry is its translational invariance of the chain winding around the cylinder.
While the mapping to a cylinder for the \emph{rhombic} geometry
requires an iDMRG unit cell of at least $L_\text{circ}$ sites,
a single fundamental unit cell with two sites is sufficient to simulate an infinite cylinder
using the \emph{rhombic-2} geometry.
Different iDMRG cells have been used to test for possible breaking of translational symmetry
and corresponding results will be presented when of relevance.
We use bond dimensions of up to $\chi = 1600$ for the computation of the phase diagram.
\begin{figure}
\includegraphics[width=\linewidth]{Kitaev_H111_phasediagram_panel-crop.pdf}
\caption{
Several observables of the Kitaev model with antiferromagnetic coupling, $K > 0$, in a magnetic field along $[111]$.
From top to bottom:
Second derivative $-d^2 E/d h^2$ of the energy with respect to the field $h$,
entanglement entropy $S_E$ of a bipartition of the cylinder divided by the number $L_y$ of bonds cut,
correlation length $\xi$,
average of plaquette fluxes $W_p$,
and flux $W_l$ of a non-contractible loop around the cylinder.
At least three phases are observed: topological phase for $h < h_{c1,AF} \approx 0.22$,
intermediate possibly gapless $h_{c1,AF} < h < h_{c2,AF}\approx0.36$, and a subsequent field-polarised phase.
Solid blue lines are for the $W_l=-1$ sector, dashed blue lines for $W_l=+1$,
and its intensity encodes the bond dimension $\chi$ used, where dark blue refers to a large $\chi$.
Thin dashed black lines depict the phase transitions obtained from the peaks in $-d^2 E/d h^2$.
}
\label{fig:pd}
\end{figure}
We confirm the existence of two phases and a single transition for ferromagnetic Kitaev coupling\cite{jiang_possible_2011, zhu_robust_2017} (FMK, $K < 0$),
and of at least three phases for antiferromagnetic Kitaev coupling\cite{zhu_robust_2017} (AFK, $K>0$).
For both, FMK and AFK, we find a topological phase at low field and a field-polarized phase at high field.
Only for AFK, we identify an intermediate, seemingly gapless, phase.
\subsection{Topological Phase} \label{scn:gs_topo}
For small $h$, the system forms a non-abelian topological phase\cite{kitaev_anyons_2006}.
Its stability upon applying $h$ vastly differs depending on the sign of the Kitaev interaction.
Employing a \emph{rhombic-2} geometry with $L_\text{circ}=10$, we find, in case of AFK,
that this phase ranges up to $h_{c1,AF} \approx 0.22$, whereas for FMK it ranges only up to $h_{c,FM} \approx 0.014$.
Both values are based on the peaks in the second derivative $-d^2E/dh^2$ of the energy with respect to the magnetic field.
However, subtle features are present for AFK at slightly lower $h \approx 0.2$,
which become less pronounced with larger bond dimension $\chi$.
In comparison to values reported earlier\cite{jiang_possible_2011, zhu_robust_2017} we find a nearly $30\%$ lower value for the FMK transition $h_{c,FM}$.
This is due to the fact that for rather small circumferences,
the ground state energy within the topological phase is strongly sensitive to the boundary condition
as has already been noted in Ref.~[\onlinecite{kitaev_anyons_2006}].
The {\emph{rhombic-2}} geometry we employ has the same twisted boundary condition
as the $(L \bm n_1, L \bm n_2 + \bm n_1)$ geometry employed in [\onlinecite{kitaev_anyons_2006}],
which is shown to converge better in energy when increasing $L$ or $L_\text{circ}$, respectively.
The transition field $h_{c,FM}$ may still decrease slightly upon further increasing $L_\text{circ}$
and approaching the two-dimensional limit $L_\text{circ} \rightarrow \infty$.
For small $h$, the total magnetisation, $|\langle \bm S \rangle|$ (not shown here), grows proportionally with $h$.
The two sectors found on the cylindrical geometry and determined by $W_l = +1$ or $W_l = -1$ are distinguished
by their behaviour of the entanglement entropy $S_E$ and the correlation length $\xi$.
The $W_l = +1$ sector is characterized by finite $\xi$ and $S_E$
due to being gapped by imposing antiperiodic boundary conditions on the Majorana fermions\cite{gohlke_dynamics_2017}.
In contrast, the $W_l = -1$ sector has divergent $\xi$ and $S_E$ when $h=0$, where it is gapless.
In the latter, encoding the wave function as MPS with a finite $\chi$ induces an effective gap that limits $\xi$ and $S_E$.
In fact, the growth of $\xi$ and $S_E$ with increasing $\chi$ is connected via
$S_{E,\chi} = c/6 \log \xi_\chi + \text{const}$~\cite{calabrese_entanglement_2004,tagliacozzo_scaling_2008},
where $c$ is the universal \emph{central charge}.
This has been named finite entanglement scaling and allows to confirm $c=1$ (for $h=0$, $W_l=-1$)
as has been checked previously on a different cylinder geometry\cite{gohlke_dynamics_2017}.
As a side remark, the notion of a central charge is applicable due to using a cylinder geometry
and effectively mapping the model in question to a one-dimensional system.
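In practice, this scaling relation can be checked with a simple linear fit of $S_{E,\chi}$ against $\log \xi_\chi$; the numbers below are purely illustrative, not data from the present simulations:
\begin{verbatim}
import numpy as np

xi  = np.array([4.1, 7.9, 15.6, 30.8])    # correlation length at growing chi
S_E = np.array([1.02, 1.13, 1.24, 1.36])  # entanglement entropy at same chi

slope, const = np.polyfit(np.log(xi), S_E, 1)
print("central charge c ~", 6 * slope)    # ~1 at the gapless Kitaev point
\end{verbatim}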
In a magnetic field, $\langle W_p \rangle$ as well as the cylinder flux $W_l$ begin to slowly deviate from $\pm 1$
until they vanish close to the transition.
The plaquette fluxes $W_p$, as defined in the integrable limit, are not conserved anymore for finite $h$
as the application of a single $S^\gamma_i$ creates a flux each on the two plaquettes adjacent to bond $\gamma$ at site $i$.
However, an adiabatically connected operator $\tilde W_l$ of $W_l$ is expected to exist,
such that $\tilde W_l \approx \pm1$~\cite{hastings_quasiadiabatic_2005}.
Such a dressed Wilson loop $\tilde W_l$ separates the two sectors found on the cylinder
for any $h$ within the topological phase.
\begin{figure}
\includegraphics[width=\linewidth]{KH111_ksi_loglog_h_and_K3.pdf}
\caption{
Comparison of correlation length $\xi$ between
(a) the rescaled magnetic field $h \rightarrow 32 h^3$ and
(b) the three-spin exchange $K_3$.
The solid red line is a guide-to-the-eye corresponding to a $1/K_3$ or $1/(32h^3)$ scaling.
Within $0.05 < K_3, 32h^3 < 0.2$, that is where numerical convergence is achieved,
the behaviour of $\xi$ is consistent with a predicted opening of the gap as $\Delta E \propto h^3$ or as $\Delta E \propto K_3$, respectively.
}
\label{fig:top_hp3_K3}
\end{figure}
Numerical convergence, that is $\xi$ and $S_E$ become $\chi$-independent, is achieved for $0.1 < h < 0.18$.
In that range $\xi$ reflects the physical excitation gap\cite{hastings_locality_2004} via $\Delta E \propto 1/\xi$.
Figure \ref{fig:top_hp3_K3}, where the x-axis has been rescaled $h \to 32h^3$,
enables a direct comparison with the three-spin exchange $K_3$ in $H_{K_3}$.
Both exhibit a very similar decrease of $\xi$ with a $\xi \propto 1/x$ scaling, where $x$ is either $32h^3$ or $K_3$.
$\xi$ reaches a plateau at $x=0.2$ with a low $\xi \approx 1$.
If $h$ is applied, a small $\chi$-dependent dip and the phase transition into the intermediate regime follows,
whereas for $K_3$ the plateau ranges up to $K_3 = 1$, from where $\xi$ increases again%
\footnote{For large $K_3 \gg 1$, the flux gap reduces and vanishes.
The ground state is then not in a flux-free sector anymore.}.
%
The entanglement entropy $S_E$ reaches, in the case of a magnetic field,
a plateau already at $32h^3 \approx 0.06$ ($h \approx 0.12$)
beyond which it raises again until the transition field $h_{c1,AF}$ is reached.
At all fields the entanglement remains larger than for the corresponding $K_3$.
A more detailed discussion about the entanglement in the context of topological excitations and topological entanglement entropy follows below.
The $W_l=+1$ sector has $\chi$-independent $\xi$ and $S_E$ up to $h \approx 0.18$.
Before the transition ($0.18 < h < h_{c1,AF}$), both sectors exhibit a $\chi$-dependence,
which suggests a closing of the gap at the transition
and hence indicates that the transition might be continuous.
\begin{figure}[tb]
\includegraphics[width=\linewidth]{Kitaev_H111_Sbip_top_panel.pdf}
\caption{
Remainder $\Delta S_E$ of the entanglement entropy of a bipartition of the cylinder after subtracting
a fermionic and a gauge field contribution following Eq. (\ref{eqn:DS_E}).
The magnetic field has been rescaled, $h\rightarrow 32h^3$, based on the behaviour of the correlation length in Fig. \ref{fig:top_hp3_K3}.
The vertical dashed lines signal the transitions in a magnetic field.
The horizontal lines correspond to $\log (\mathcal D/d_a)$ as discussed in the main text.
}
\label{fig:topSbip}
\end{figure}
We now focus on the characterization of the topological order occurring at low magnetic fields $h$ or when non-zero $K_3$ is considered.
First, let us recall some facts about topologically ordered systems on an infinite cylinder\cite{cincio_characterizing_2013,zaletel_topological_2013}.
%
Generally, topological order leads to a ground state degeneracy
with a number of degenerate states being equal to the number of emergent quasiparticle species.
These ground states are conveniently represented as minimally entangled states (MES)\cite{zhang_quasiparticle_2012,zaletel_topological_2013}, say $|\psi_0^i\rangle$,
where $i$ denotes the particular quasiparticle.
By utilizing iDMRG, such MES are selected naturally,
and the obtained MPS corresponds to one of the quasiparticles\cite{jiang_identifying_2012,cincio_characterizing_2013}.
Upon cutting a cylinder into two semi-infinite halves, the entanglement entropy grows proportional with the circumference $L_\text{circ}$ as\cite{kitaev_topological_2006}
\begin{equation}
S_E = \alpha L_\text{circ} - \gamma_i~,
\label{eqn:S_E_area_law}
\end{equation}
where $\gamma_i$ denotes the topological entanglement entropy (TEE)\cite{levin_detecting_2006,kitaev_topological_2006}.
A non-zero TEE $\gamma_i = \log(\mathcal D/d_i)$ reveals topological order and is connected to the total quantum dimension $\mathcal D$,
which itself is a sum of the quantum dimension $d_i$ of each quasiparticle
\begin{equation}
\mathcal D = \sqrt{\sum_i d_i^2}~.
\label{eqn:totalQD}
\end{equation}
The quantum dimension is related to the fusion vector space,
which is spanned by all the different ways anyons can fuse to yield a trivial total charge\cite{kitaev_topological_2006,nayak_non-abelian_2008}.
The quantum dimension of abelian anyons is $d_i=1$, whereas for non-abelian anyons $d_i$ is generally larger than one.
The gapped phase of the Kitaev model upon applying $K_3$ is known to exhibit topological order hosting non-abelian Ising anyons\cite{kitaev_anyons_2006}.
The following quasiparticles exist: $\mathbb{1}$ (vacuum), $\epsilon$ (fermion), and $\sigma$ (vortex),
of which $\sigma$ has a quantum dimension $d_\sigma=\sqrt 2$ and the other two $d_\mathbb{1} = d_\epsilon =1$.
From (\ref{eqn:totalQD}) follows a total quantum dimension of $\mathcal D = 2$.
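Numerically, $\gamma_i$ can be extracted as (minus) the intercept of a linear fit of $S_E$ against $L_\text{circ}$; a sketch with synthetic numbers chosen by us to be consistent with $\gamma = \log 2$:
\begin{verbatim}
import numpy as np

L   = np.array([6.0, 8.0, 10.0, 12.0])   # circumferences
S_E = 0.41 * L - np.log(2.0)             # synthetic area-law data, Eq. (5)

alpha, intercept = np.polyfit(L, S_E, 1)
print("TEE gamma =", -intercept)         # ~log 2 = 0.693
# total quantum dimension of the Ising anyons {1, eps, sigma}, Eq. (6)
D = np.sqrt(1**2 + 1**2 + np.sqrt(2.0)**2)
print("D =", D)                          # 2.0
\end{verbatim}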
The Kitaev model has two separate contributions\cite{yao_entanglement_2010} to the entanglement entropy
\begin{equation}
S_E = S_G + S_F~.
\label{eqn:S_G_+_S_F}
\end{equation}
The first contribution, $S_G$, originates from the static $\mathbb Z_2$-gauge field and is stated to be\cite{yao_entanglement_2010,dora_gauge_2018}
\begin{equation}
S_G = \left(\frac{N_y}{2}-1\right) \log 2~,
\label{eqn:S_G}
\end{equation}
where $N_y$ is the number of unit cells along the circumference
and equals the number of bonds cut by the bipartition, thus $N_y=L_\text{circ}/2$.
The second contribution, $S_F$, describes the entanglement of the matter fermions\cite{yao_entanglement_2010}.
By comparison with Eq.~(\ref{eqn:S_E_area_law}), the constant part in~(\ref{eqn:S_G}) resembles the TEE $\gamma_i = \log 2$.
We turn to our iDMRG results now,
where the entanglement entropy is readily available from the MPS representation of the ground state wave function.
As will become clear later, we consider the following quantity
\begin{equation}
\Delta S_E = S_E - S_F - \frac{N_y}{2} \log 2 ~\approx \gamma_i ~,
\label{eqn:DS_E}
\end{equation}
where $S_E$ is the entanglement entropy extracted numerically using iDMRG.
$S_F$ can be computed exactly via the eigenvectors of the fermion hopping matrix
if $H_{K_3}$ is considered\cite{yao_entanglement_2010,meichanetzidis_anatomy_2016}.
We compute $S_F$ on a torus with one dimension equaling $L_\text{circ}$ and the second dimension being much larger.
Note that a bipartition of a torus leaves two cuts of length $L_\text{circ}$,
whereas on the cylinder there is only one such cut.
Hence, only half of the $S_F$ obtained on the torus enters Eq.~(\ref{eqn:DS_E}).
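For completeness, $S_F$ for free fermions follows from the standard formula based on the eigenvalues $\zeta_m$ of the correlation matrix restricted to the subsystem; a sketch, assuming the restricted correlation matrix has already been assembled from the eigenvectors of the hopping matrix:
\begin{verbatim}
import numpy as np

def fermionic_entropy(C_A, eps=1e-12):
    # C_A[i, j] = <c_i^dag c_j> restricted to subsystem A
    zeta = np.linalg.eigvalsh(C_A)
    zeta = np.clip(zeta, eps, 1 - eps)   # regularize log(0)
    return float(-np.sum(zeta * np.log(zeta)
                         + (1 - zeta) * np.log(1 - zeta)))
\end{verbatim}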
In the exactly solvable case of $H_{K_3}$, $\Delta S_E$ reproduces the TEE,
such that $\Delta S_{E,K_3} = \gamma_i$ for all $K_3$,
except when iDMRG is not converged with respect to $\chi$.
From Fig.~\ref{fig:topSbip}, we recover the following TEE
\begin{equation}
\gamma_i =
\begin{cases}
\log 2 \quad & (W_l=+1), \\
\log \frac{2}{\sqrt 2} \quad & (W_l=-1),
\end{cases}
\label{eqn:iDMRG_gamma}
\end{equation}
which depends on the sector $W_l=\pm1$.
In the gapless limit of the $W_l=-1$ sector ($K_3 = 0$),
$S_F$ is divergent.
Thus, at small $K_3$ the MPS improves with increasing $\chi$, similarly to the behaviour of $\xi$ discussed before.
Nonetheless, from (\ref{eqn:iDMRG_gamma}) a total quantum dimension of $\mathcal D = 2$ can simply be read off.
The $W_l=-1$ sector contains a non-abelian anyon, a vortex $\sigma$, with quantum dimension $d_i=\sqrt 2$.
The ground state of the $W_l=+1$ sector is doubly degenerate with $d_i = 1$ for both states.
Thus, the expected degeneracy is recovered.
Upon applying the magnetic field, the integrability of $H_{K_3}$ in Eq.~(\ref{eqn:H_K3}) is lost
and the fermionic contribution $S_F$ cannot be computed exactly.
Based on the fact that we observe a similar opening of the gap in the fermionic spectrum for $K_3$ and $h$
when the magnetic field is rescaled as $h \rightarrow 32h^3$,
we assume that $S_F$ as a function of the rescaled magnetic field $S_F(32h^3)$
is similar to $S_F(K_3)$ as a function of $K_3$.
This assumption is at least justified in the limit of small $h$.
Figure~\ref{fig:topSbip}(a) displays $\Delta S_E$ in a magnetic field,
where it approaches the same values of $\gamma_i$ for small $h$.
At elevated fields, $\Delta S_E$ begins to deviate from $\gamma = \log 2$ or $\gamma_\sigma = \log \sqrt 2$.
$\Delta S_E$ increases monotonically until the transition into the intermediate phase is reached.
In a magnetic field, the separability of fluxes and fermions is lost
and generically additional entanglement between fluxes and fermions is created.
Such entanglement generates an additional contribution $S_{F \otimes G}$ to the entanglement entropy,
which is not accounted for in Eqs.~(\ref{eqn:S_G_+_S_F}) and (\ref{eqn:DS_E}).
As this deviation occurs continuously, we argue
that the topological phase in a low magnetic field is adiabatically connected to the topological phase of $H_{K_3}$ at non-zero $K_3$.
As a remark, the difference of $\Delta S_E$ between the $W_l=\pm 1$ sectors is not constant.
This is due to the correlation length of the fermions being enhanced in the $-1$ sector,
particularly near the gapless limit ($h=0$), where it diverges.
Thus, the fermions may build up entanglement with the fluxes in an increased area near the cut resulting in an enhanced $S_{F\otimes G}$.
We conclude that we find numerical evidence for a total quantum dimension $\mathcal D = 2$
with non-abelian anyons having quantum dimension $d_i = \sqrt 2$ in the exactly solvable model using the three-spin term.
The results using the magnetic field, breaking integrability of the original model, are still consistent with the results above. However, a significant contribution to the entanglement entropy arises at increased magnetic fields.
\subsection{Intermediate Regime}
\label{sec:itmd}
Only for AFK, an intermediate region exists ranging from $h_{c1,AF} < h < h_{c2,AF}$,
where $h_{c1,AF}\approx0.22$ (for \emph{rhombic-2}, $L_\text{circ}=10$) marks the transition from the topological phase
and $h_{c2,AF}\approx0.36$ the transition into the field-polarised phase.
The ground state within the intermediate regime requires comparably large bond dimensions $\chi~\approx~1000$.
Using smaller $\chi$, the ground state is very sensitive to the cylindrical geometry
as well as the size of the iDMRG cell.
However, based on the $1/\chi$-extrapolation of the ground state energy, that is presented in Appendix~\ref{app:fs_int},
we find evidence for a translationally invariant ground state.
In particular, when using a larger iDMRG cell,
we observe a restoration of translational symmetry upon reaching a sufficiently large $\chi$.
This motivates the use of the \emph{rhombic-2} geometry with an iDMRG cell equivalent to a single fundamental unit cell,
which on the one hand suppresses ground states with enlarged unit cells due to broken translational symmetry,
but on the other hand saves computational resources better spent in reaching larger $\chi$.
Returning to its physical properties, the intermediate region exhibits a behaviour typical for a gapless phase.
Both correlation length $\xi$ and entanglement entropy $S_E$ are not converged with respect to $\chi$,
where $\xi$ increases slowly with $\chi$, while $S_E$ increases somewhat faster than in the gapless Kitaev limit.
As we are studying effectively a one-dimensional system due to the cylindrical geometry,
the finite-$\chi$ scaling\cite{tagliacozzo_scaling_2008} extracting a central charge may be applicable\cite{geraedts_half-filled_2016}.
In that context, the behaviour of $S_E$ and $\xi$ indicates a larger central charge $c$
than found in the B-phase of the bare Kitaev model.
%
However, the finite-$\chi$ scaling, see also Appendix \ref{app:fs_int}, does not reveal a conclusive $c$.
Furthermore, the behaviour of $\xi$ for larger $\chi \ge 800$ suggests a separation of the intermediate region into three phases,
of which the middle one grows in extent with larger $\chi$.
Given the large entanglement and the sensitivity to boundary conditions,
our iDMRG results can only be suggestive for the nature of the ground state in the two-dimensional limit.
The flux expectation values $W_p$ and $W_l$ approach zero continuously.
Interestingly, the coexistence of both sectors found in the topological phase, $W_l|_{h=0} = \pm1$,
persists beyond the transition $h_{c1,AF}$.
The peak in $-d^2E/dh^2$ signaling this transition is independent of the particular sector.
\subsection{Polarized Phase}
A transition to the large-$h$ field-polarized phase occurs at $h_{c2,AF}\approx0.36$ (AFK),
or $h_{c,FM}\approx0.014$ (FMK), respectively.
The polarized phase is gapped,
which is signaled by the DMRG simulations by a finite correlation length $\xi$ and finite entanglement entropy $S_E$.
The entanglement $S_E$ decreases with increasing field $h$ and vanishes once the magnetic moments approach saturation,
where the ground state is a simple product state.
At the transition, both FMK and AFK
exhibit a longitudinal magnetic moment of $\approx 55\%$ of saturation along the $[111]$ direction
without any transverse component.
The longitudinal moment grows with $h$ reaching $90\%$ saturation near $h\approx0.6$ (AFK) and $h\approx0.2$ (FMK).
Large magnetic moments motivate perturbative methods like spin-wave theory\cite{mcclarty_topological_2018}.
In comparison to linear spin wave theory (LSWT)\cite{janssen_honeycomb-lattice_2016},
the transition gets renormalized significantly from $h_{LSW,AF}=1/\sqrt 3\approx 0.58$ down to $h_{c2,AF}$.
For FMK, LSWT predicts a transition at exactly zero\cite{janssen_honeycomb-lattice_2016},
whereas in iDMRG it occurs at small, non-zero field.
\section{Dynamical Spin-Structure Factor} \label{scn:dsf}
\begin{figure*}
\includegraphics[width=\linewidth]{KH111_DSF_panel_wide_3x2-crop.pdf}
\caption{
Dynamical spin-structure factor $\mathcal S^{xx}(k,\omega)$ along $\Gamma$--$M$--$\Gamma'$ at:
(a) $h=0.0$, (b) $h=0.1$, (c) $h=0.2$,
(d) $h=0.0, K_3=-0.25$, (e) $h=0.1, K_3=-0.25$, (f) $h=0.175, K_3=-0.25$
within the topological phase.
(a)-(f) have a logarithmic color scale ranging over two decades.
(g,h) $\mathcal S^{xx}(k,\omega)$ at high-symmetry points $\Gamma$, $K$, $M$, and $\Gamma'$
for different $h$ and $K_3$.
A vertical offset is used for better visibility.
In all plots, $\mathcal S^{xx}(k,\omega)$ is normalized as given in the main text.
}
\label{fig:dsf}
\end{figure*}
\begin{figure*}[tb]
\includegraphics[width=\linewidth]{KH111_DSF_pol_panel_wide-crop.pdf}
\caption{
Dynamical spin-structure factor $\mathcal S(k,\omega)$ in the field-polarized phase
along $\Gamma$--$M$--$\Gamma'$ at:
(a) $h=0.58$, (b) $h=0.5$, (c) $h=0.425$, and (d) $h=0.375$.
(e) $\mathcal S(k,\omega)$ at high-symmetry points $K$, $M$, and $\Gamma'$
for different $h$.
A vertical offset is used for better visibility.
In all plots, $\mathcal S(k,\omega)$ is normalized as given in the main text.
}
\label{fig:dsf_pol}
\end{figure*}
The \emph{dynamical spin-structure factor} $\mathcal S(\bm{k},\omega)$ contains information about the excitation spectrum
and is experimentally accessible via scattering experiments, in particular inelastic neutron scattering.
We consider $\mathcal S(\bm k,\omega) = \sum_{\gamma=\{x,y,z\}} \mathcal S^{\gamma\gamma} (\bm k,\omega)$
with $\mathcal S^{\gamma\gamma} (\bm k,\omega)$ being the spatio-temporal Fourier transform of the dynamical correlations
\begin{equation}
\mathcal S^{\gamma\gamma} (\bm k,\omega) = N \int dt ~ e^{i \omega t} \sum_{a,b} e^{i (\bm r_b - \bm r_a) \cdot \bm k} ~ C^{\gamma\gamma}_{ab}(t) ~,
\end{equation}
where $\gamma = \{x,y,z\}$ is the spin component,
$\bm r_a$ and $\bm r_b$ are the spatial positions of the spins,
and diagonal elements $\mathcal S^{xx}$, $\mathcal S^{yy}$, and $\mathcal S^{zz}$ are considered.
$N$ is defined by normalizing $\mathcal S^{\gamma\gamma} (\bm k,\omega)$ as
$\int d\omega \int d\bm k ~ \mathcal S^{\gamma\gamma} (\bm k,\omega) = \int d\bm k$.
$C^{\gamma\gamma}_{ab} (t)$ denotes the dynamical spin-spin correlation
\begin{align}
C_{ab}^{\gamma\gamma}(t) &= \langle \psi_0 | S_a^\gamma (t) S_b^\gamma (0) | \psi_0 \rangle \nonumber \\
&= \langle \psi_0 | U(-t) S_a^\gamma U(t) S_b^\gamma | \psi_0 \rangle \nonumber \\
&= \langle \psi_0 | S_a^\gamma U(t) S_b^\gamma | \psi_0 \rangle~, \label{eqn:Cij_t}
\end{align}
where the unitary time-evolution operator $U(t) = e^{-i(H-E_0)t}$ is modified
by subtracting the ground state energy $E_0$.
Thus, the time-evolution $U(-t)$ acts trivially on the ground state $\langle \psi_0| U(-t) = \langle \psi_0 |$.
Following Ref.~[\onlinecite{zaletel_time-evolving_2015}], we express $U(t)$
as a \emph{matrix product operator} (MPO) with discretized time steps.
Equation (\ref{eqn:Cij_t}) provides the numerical protocol we employ:
(i) Obtain the ground state wave function $|\psi_0 \rangle$ using iDMRG
and enlarge the iDMRG cell along the cylindrical axis to make room for the excitation to spread spatially,
(ii) apply the spin operator $S_b^\gamma$ at site $b$,
(iii) time-evolve the MPS by $U(t)$,
(iv) apply $S_a^\gamma$ at site $a$,
and (v) compute the overlap.
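Schematically, this protocol reads as follows, where \texttt{ground\_state}, \texttt{apply\_onsite},
\texttt{evolve\_mpo}, and \texttt{overlap} are hypothetical placeholders for the corresponding routines
of the MPS library in use:
\begin{verbatim}
import numpy as np

def dynamical_correlations(H_mpo, sites, gamma, times):
    # (i) iDMRG ground state and its energy
    psi0, E0 = ground_state(H_mpo)
    # (ii) apply S_b^gamma at the reference site b = sites[0]
    phi = apply_onsite(psi0, "S" + gamma, sites[0])
    C = np.zeros((len(sites), len(times)), dtype=complex)
    for n, t in enumerate(times):
        # (iii) time-evolve with U(t) = exp(-i (H - E0) t) as an MPO
        phi_t = evolve_mpo(phi, H_mpo, E0, t)
        for m, a in enumerate(sites):
            # (iv) apply S_a^gamma and (v) take the overlap with psi0
            C[m, n] = overlap(apply_onsite(psi0, "S" + gamma, a), phi_t)
    return C
\end{verbatim}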
On the technical side,
we first compute the spatial Fourier transform of $C_{ab}^{\gamma\gamma}(t)$,
extend the time-signal using linear predictive coding~\cite{white_spectral_2008},
and multiply with a Gaussian to suppress ringing due to the finite-time window.
The extension of the time-signal allows for much wider finite-time windows
keeping a significant part of the simulated real-time dynamics.
All spectra shown in the remainder have a broadening of $\sigma_\omega=0.018$
due to multiplying the real-time data with a Gaussian of width $\sigma_t=55.8$.
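The two widths are a plain Fourier-pair statement: a Gaussian of width $\sigma_t$ in time yields a
frequency broadening $\sigma_\omega = 1/\sigma_t$, consistent with the numbers quoted above
($1/55.8 \approx 0.018$). A minimal sketch of this post-processing step
(the linear-prediction extension of the signal is omitted):
\begin{verbatim}
import numpy as np

def spectrum(C_t, dt, sigma_t):
    # C_t: correlation signal sampled at t = n*dt, n >= 0
    t = dt * np.arange(len(C_t))
    windowed = C_t * np.exp(-0.5 * (t / sigma_t) ** 2)
    S = np.fft.fftshift(np.fft.fft(windowed)) * dt
    w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(C_t), d=dt))
    return w, S
\end{verbatim}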
The real-time data is obtained for times up to $T=120$ on cylinders with \emph{rhombic} geometry and $L_\text{circ}=6$.
In the following, we discuss $\mathcal S(\bm k, \omega)$ within the topological phase and the polarized phase.
Simulating the dynamics within the intermediate regime is left for future work
as the necessary bond dimension for encoding the ground state is too large to achieve appreciably long times in the time-evolution.
\subsection{Topological Phase}
Near $h=0$, see Fig. \ref{fig:dsf}(a),
the numerically obtained $\mathcal S(\bm k, \omega)$ exhibits the features of the analytic solution\cite{knolle_dynamics_2014,knolle_dynamics_2015}
with some adjustments due to the cylindrical geometry~\cite{gohlke_dynamics_2017}.
Firstly, this involves a low-energy peak at $\omega\approx0.03$,
whose spectral weight is shifted towards $\Gamma'$
due to the antiferromagnetic nearest-neighbor spin-spin correlation
caused by the antiferromagnetic Kitaev exchange.
When using a cylindrical geometry, an additional $\delta$-peak with finite spectral weight occurs at the two-flux energy.
This $\delta$-peak, together with the finite-time evolution and subsequent broadening in frequency space,
hides the two-flux gap.
Nevertheless, the $\delta$-peak position coincides with the two-flux gap%
\footnote{The two-flux gap is shifted towards smaller frequencies for narrow cylinders.},
$\Delta_{2} \approx 0.03$.
Secondly, a broad continuum exists that is cut off near $\omega\approx1.5$.
Increasing $h$ to $0.1$ and $0.2$, cf. Fig.~\ref{fig:dsf}(b) and (c), only leads to minor changes of the spectrum.
Most notably, the low-energy peak develops a shoulder towards slightly elevated energies,
and the cut-off at $\omega\approx1.5$ is blurred out.
Both features are more prominent in the line plots, Fig.~\ref{fig:dsf}(g).
Any changes to the low-energy spectrum near or even below the two-flux gap are hidden in the energy resolution
caused by the finite time-evolution.
In order to get a qualitative view on how the magnetic field affects the spectrum,
we investigate the effect of both, $K_3$ and $h$.
For $K_3=-0.25$ and $h=0.0$, Fig. \ref{fig:dsf}(d), the low-energy peak gets elevated to $\omega \approx 0.2$.
This peak originates from a single fermion bound to a pair of fluxes\cite{knolle_dynamics_2015}
and its shift is caused by $K_3$ increasing the two-flux gap.
The fermion continuum starts at $\omega\approx0.4$,
and the upper cut-off of the continuum remains near $\omega\approx1.5$.
Both edges are sharp.
%
Note that $K_3 = -0.25$ has a similar correlation length as $h=0.2$,
as discussed above in relation to Fig.~\ref{fig:top_hp3_K3}.
Yet, the corresponding spectra, Fig.~\ref{fig:dsf}(c) and (d), are qualitatively different,
in that for $h=0.2$ the spectral weight is shifted significantly towards zero with no observable gap.
Upon increasing $h$ to $0.1$, the low-energy peak splits into at least three peaks, two of which develop a dispersion.
Due to the field, the fluxes acquire a finite hopping amplitude and become mobile.
The fluxes are hence no longer required to lie on neighboring plaquettes, but instead may separate.
Hence, the mode describing a fermion bound to the two-flux pair generally attains more structure\cite{theveniaut_bound_2017}.
Moreover, interaction between fluxes may induce further dispersion\cite{lahtinen_topological_2010,lahtinen_interacting_2011}.
At further elevated fields, cf. Fig.~\ref{fig:dsf}(f) at $h=0.175$,
somewhat before the phase transition into the intermediate regime%
\footnote{In analogy to the findings in [\onlinecite{jiang_possible_2011}], the additional $K_3$ term shifts the critical field. For $K_3=-0.25$ the transition occurs at $h_{c1,AF}=0.19$.},
the splitting increases, with much of the spectral weight shifting to the peak that is lowest in energy.
The spectral gap reduces significantly with $h$ and has its minimum at the $\Gamma$ and $\Gamma'$ high-symmetry points.
\subsection{Polarized Phase}
From LSWT it is known that the magnons are topological.
Their bands carry a $\pm1$ Chern number and chiral edge modes have been observed on a slab geometry\cite{mcclarty_topological_2018,joshi_topological_2018}.
But LSWT is only applicable for fields above the classical critical field $h_{\text{clas}}=1/\sqrt 3\approx0.58$.
Here, we focus on the bulk excitation spectrum at fields between the numerically observed, $h_{c2,AF}\approx0.36$, and the classical critical field.
Results for larger fields are presented in Ref. [\onlinecite{mcclarty_topological_2018}] using the same method.
Beginning our discussion at the classical critical field $h=0.58$
shown in Fig.~\ref{fig:dsf_pol}(a), we observe two magnon-bands
with a minimum of $\omega\approx0.15$ at the high-symmetry points $\Gamma$ and $\Gamma'$.
The two-magnon continuum has some overlap with the upper magnon band.
With lowering the field, the magnon bands move down in energy
and flatten in the sense that their bandwidth decreases.
At $h=0.5$, cf. Fig.~\ref{fig:dsf_pol}(b), the continuum already overlaps with major parts of the upper magnon band.
This opens decay channels, limiting its lifetime, and consequently broadening the mode.
Approaching the transition, cf. Fig.~\ref{fig:dsf_pol}(d) at $h=0.375$ and (c) at $h=0.425$,
$\mathcal S(\bm k,\omega)$ shows a very broad continuum ranging down to almost zero energy,
where also most of the spectral weight is observed.
The upper magnon band is completely obscured by the multi-magnon continuum
and much of the spectral weight is distributed over a wide range of energies.
The lower edge of the spectrum flattens towards the transition,
which is even more evident in the line plots shown in Fig.~\ref{fig:dsf_pol}(e).
%
In particular at $h=0.375$ the low-energy peaks shift down to almost zero energy simultaneously
at the $K$, $M$, and $\Gamma'$ high-symmetry points,
with most of the spectral weight still appearing above the $\Gamma'$-point.
This reproduces to some extent the phenomenology of LSWT,
namely that the lower magnon band flattens completely while decreasing to zero energy\cite{janssen_honeycomb-lattice_2016,mcclarty_topological_2018},
yet it occurs at lower fields than in LSWT.
On the other hand, a clear remnant of the single magnon branch cannot be observed close to the transition
as it overlaps and merges with the multi-magnon continuum.
It may be possible that the single magnon branch is still dispersive, even though with a significantly reduced bandwidth.
A feature in the spectrum not mentioned so far
emerges around $\omega\approx1$ at magnetic fields near the transition.
Initially this high-energy feature is very broad in energy,
but sharpens and moves to higher energy upon increasing the field.
At $h=0.5$ ($h=0.58$) it appears around $\omega\approx1.25$ ($\omega\approx1.5$) and exhibits a slight dispersion.
At even larger fields, beyond what is presented here,
the high-energy feature moves up in energy with a linear dependence on the field
and twice the slope compared to the single-magnon excitations.
Furthermore, the high-energy feature is situated at the upper edge of the two magnon continuum.
Its intensity first increases, but starts to decrease at higher fields.
It would be interesting to investigate
whether this may be due to the appearance
of an anti-bound state\cite{seibold_theory_2008} of two magnons
experiencing a repulsive interaction on account of the antiferromagnetic Kitaev
exchange interaction between two adjacent flipped spins.
\section{Conclusion}
\label{scn:dis}
We confirm the vastly different phenomenology between ferromagnetic and antiferromagnetic Kitaev interaction,
if a magnetic field along $[111]$ direction is applied.
In case of ferromagnetic Kitaev coupling, only a single magnetic transition is observed,
separating a low-$h$ topological phase from the large-$h$ field-polarized phase.
For antiferromagnetic Kitaev coupling, in contrast, the topological phase is more stable
and an intermediate regime exists that is possibly gapless.
The topological order of the low-$h$ phase and its non-abelian anyonic excitations are verified
by extracting the topological entanglement entropy.
Going beyond Ref.~[\onlinecite{jiang_possible_2011}], we find that
the topological order obtained with a finite three-spin term or with a weak magnetic field
is the same also for antiferromagnetic Kitaev coupling.
Upon applying the magnetic field,
the spectral gap in the dynamical spin-structure factor remains within the frequency resolution
and the overall spectrum exhibits only minor changes.
However, the dynamical spin-structure factor is remarkably different when applying the three-spin term
lifting the spectral gap, both due to the flux gap increasing and the fermions gapping out.
When a combination of magnetic field and three-spin term is applied,
we observe a drastic reduction of the spectral gap with increased field
and more structure in the low-energy peak corresponding to a bound state of a flux pair and a fermion.
This additional structure is due to the fluxes becoming mobile and the flux-pair may separate
providing a richer energy manifold for that bound state.
Upon approaching the intermediate regime, the spectral gap reduces with a minimum at the $\Gamma'$ high-symmetry point.
We can conclude, that even though the energy gap opens in a similar manner
when either the magnetic field or three-spin term is varied,
the dynamical spin-structure factor exhibits a remarkably different low-energy structure.
Thus, additional terms in perturbation theory, other than the three-spin term preserving integrability,
are relevant to describe the dynamical spin-structure factor in the topological phase.
When approaching the intermediate region from high-fields,
we observe a strong reduction in frequency with a simultaneous flattening of the lower magnon band.
A broad continuum develops, that ranges down to the lowest frequencies and merges with the single magnon branch.
It remains an open question,
whether this flattening could be attributed to the collapse of the lower magnon branch, as observed within LSWT,
or rather to multi-magnon excitations obscuring any dispersion of the very same magnon branch.
Nonetheless, the flat gap closing as such is interesting in various aspects
as it may indicate exotic spin states like a quantum spin liquid.
\section{Acknowledgements}
We are grateful to Bal\'azs D\'ora, Lucas Janssen, Paul McClarty, Karlo Penc, Jeffrey G. Rau and Ruben Verresen for stimulating discussions.
%
This work was supported in part by DFG via SFB 1143. FP acknowledges the support of the DFG Research Unit FOR 1807 through grants no. PO 1370/2-1, TRR80, and the Nanosystems Initiative Munich (NIM) by the German Excellence Initiative.
Deep learning, i.e., the use of deep neural networks for regression and classification, has been very successful in many different contexts in science and engineering \cite{DLnat}. These include image analysis, natural language understanding, game intelligence and protein folding. As deep neural networks are universal function approximators, it is natural to employ them as ansatz spaces for solutions of ordinary and partial differential equations, paving the way for their successful use in scientific computing. A very incomplete list of examples where deep learning is used for the numerical solution of differential equations includes the solution of high-dimensional linear and semi-linear parabolic partial differential equations \cite{HEJ1,E1} and references therein, and many-query problems such as those arising in uncertainty quantification (UQ), PDE constrained optimization and (Bayesian) inverse problems. Such problems can be recast as parametric partial differential equations and the use of deep neural networks in their solution is explored for elliptic and parabolic PDEs in \cite{OSZ2019,Kuty}, for transport PDEs \cite{PP1} and for hyperbolic and related PDEs \cite{DRM1,LMR1,LMRP1, LMM}, and as operator learning frameworks in \cite{DeepOnets,LMK1,Stu1,Stu2} and references therein. All the aforementioned methods are of the \emph{supervised learning} type \cite{DLbook}, i.e., the underlying deep neural networks have to be trained on \emph{data}, either available from measurements or generated by numerical simulations.
However, there are several interesting problems for PDEs where generating training data might be very expensive. A different strategy might be relevant for such problems, namely the so-called \emph{Physics informed neural networks} (PINNs) which collocate the PDE residual on \emph{training points} of the approximating deep neural network, thus obviating the need for generating training data. Proposed originally in \cite{Lag2,Lag1}, PINNs have been revived and developed in significantly greater detail recently in the pioneering contributions of Karniadakis and collaborators. PINNs have been successfully applied to simulate a variety of forward and inverse problems for PDEs, see \cite{KAR8, jag1, jag2, KAR9,KAR5,KAR6,KAR7,KAR1,KAR2,KAR4,shukla,MM3} and references therein.
In a recent paper \cite{MM1}, the authors obtain rigorous estimates on the error due to PINNs for the forward problem for a variety of linear and non-linear PDEs, see \cite{MM2} for similar results on inverse problems and \cite{DAR1} for a different perspective on error estimates for PINNs. Following \cite{MM1}, one can expect that PINNs could be efficient at approximating solutions of nonlinear PDEs as long as classical solutions to such PDEs exist and are \emph{stable} in a suitable sense. So far, PINNs have only been proposed and tested for a very small fraction of PDEs. It is quite natural to examine whether they can be efficient at approximating other types of PDEs and, in particular, whether one can derive rigorous error estimates for PINNs when the considerations of \cite{MM1} apply to these PDEs.
In this paper, we investigate the utility of PINNs for approximating a large class of PDEs which arise in physics, i.e., non-linear dispersive equations that model different aspects of shallow water waves \cite{Lannes}. These include the famous Korteweg--de Vries (KdV) equation and its high-order extension, the so-called Kawahara equation, the well-known Camassa-Holm type equations and the Benjamin-Ono equations. All these PDEs have several common features, namely
\begin{itemize}
\item They model dispersive effects in shallow-water waves.
\item The interesting dynamics of these equations results from a balance between non-linearity and dispersion.
\item They are completely integrable and contain interesting structures such as interacting solitons in their solutions.
\item Classical solutions and their stability have been extensively investigated for these equations.
\item Standard numerical methods, such as finite-difference \cite{Hol1, Car, Nav, Hol2, Hol3, koley1} and finite-element \cite{koley2, koley4} for approximating these equations can be very expensive computationally. In particular, it can be very costly to obtain low errors due to the high-order (or non-local) derivatives in these equations leading to either very small time-steps for explicit methods or expensive non-linear (or linear) solvers for implicit methods.
\end{itemize}
Given these considerations, it is very appealing to investigate if PINNs can be successfully applied for efficiently approximating these nonlinear dispersive PDEs. To this end, we adapt the PINNs algorithm to this context in this paper and prove error estimates for PINNs, leveraging the stability of underlying classical solutions into error bounds. Moreover, we perform several numerical experiments for the KdV, Kawahara, generalized Camassa-Holm and Benjamin-Ono equations to ascertain that PINNs can indeed approximate dispersive equations to high-accuracy, at low computational cost.
The rest of the paper is organized as follows: in section \ref{sec:2}, we briefly recall the PINNs algorithm for PDEs, and apply it to the KdV-Kawahara PDE in section \ref{sec:3}, generalized Camassa-Holm equations in section \ref{sec:4} and the Benjamin-Ono equations in section \ref{sec:5}.
\section{Physics Informed Neural Networks}
\label{sec:2}
In this section, we follow the recent paper \cite{MM1} and briefly recapitulate the essentials of PINNs for an abstract PDE.
\subsection{The underlying abstract PDE}
\label{sec:21}
Let $X,Y$ be separable Banach spaces with norms $\| \cdot \|_{X}$ and $\|\cdot\|_{Y}$, respectively. For definiteness, we set $Y = L^p(\mathbb{D};\mathbb{R}^m)$ and $X= W^{s,q}(\mathbb{D};\mathbb{R}^m)$, for $m \geq 1$, $1 \leq p,q < \infty$ and $s \geq 0$, with $\mathbb{D} \subset \mathbb{R}^{\bar{d}}$, for some $\bar{d} \geq 1$. In the following, we only consider space-time domains with $\mathbb{D} = (0,T) \times D \subset \mathbb{R}$, resulting in $\bar{d} = 2$. Let $X^{\ast} \subset X$ and $Y^{\ast} \subset Y$ be closed subspaces with norms $\|\cdot \|_{X^{\ast}}$ and $\|\cdot\|_{Y^{\ast}}$, respectively.
We start by considering the following abstract formulation of our underlying PDE:
\begin{equation}
\label{eq:pde}
\EuScript{D}(\bu) = \mathbf{f}.
\end{equation}
Here, the \emph{differential operator} is a mapping, $\EuScript{D}: X^{\ast} \mapsto Y^{\ast}$ and the \emph{input} $\mathbf{f} \in Y^{\ast}$, such that
\begin{equation}
\label{eq:assm1}
\begin{aligned}
&(H1): \quad \|\EuScript{D}(\bu)\|_{Y^{\ast}} < +\infty, \quad \forall~ \bu \in X^{\ast}, ~{\rm with}~\|\bu\|_{X^{\ast}} < +\infty. \\
&(H2):\quad \|\mathbf{f}\|_{Y^{\ast}} < +\infty.
\end{aligned}
\end{equation}
Moreover, we assume that for all $\mathbf{f} \in Y^{\ast}$, there exists a unique $\bu \in X^{\ast}$ such that \eqref{eq:pde} holds.
\subsection{Quadrature rules}
\label{sec:22}
In the following section, we need to consider approximating integrals of functions. Hence, we need an abstract formulation for quadrature. To this end, we consider a mapping $g: \mathbb{D} \mapsto \mathbb{R}^m$, such that $g \in Z^{\ast} \subset Y^{\ast}$. We are interested in approximating the integral,
$$
\overline{g}:= \int\limits_{\mathbb{D}} g(y) dy,
$$
with $dy$ denoting the $\bar{d}$-dimensional Lebesgue measure. In order to approximate the above integral by a quadrature rule, we need the quadrature points $y_{i} \in \mathbb{D}$ for $1 \leq i \leq N$, for some $N \in \mathbb{N}$ as well as weights $w_i$, with $w_i \in \mathbb{R}_+$. Then a quadrature is defined by,
\begin{equation}
\label{eq:quad}
\overline{g}_N := \sum\limits_{i=1}^N w_i g(y_i),
\end{equation}
for weights $w_i$ and quadrature points $y_i$. We further assume that the quadrature error is bounded as,
\begin{equation}
\label{eq:assm3}
\left|\overline{g} - \overline{g}_N\right| \leq C_{quad}
\left(\|g\|_{Z^{\ast}},\bar{d} \right) N^{-\alpha},
\end{equation}
for some $\alpha > 0$.
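As a concrete one-dimensional example, the composite midpoint rule on $\mathbb{D} = (0,1)$ uses uniform weights $w_i = 1/N$ and satisfies \eqref{eq:assm3} with $\alpha = 2$ for smooth integrands; a minimal sketch:
\begin{verbatim}
import numpy as np

def midpoint(g, N):
    # composite midpoint rule on (0,1): w_i = 1/N
    y = (np.arange(N) + 0.5) / N
    return np.mean(g(y))

exact = np.e - 1.0
for N in [10, 100, 1000]:
    # error decays like N**(-2) for the smooth integrand exp(y)
    print(N, abs(midpoint(np.exp, N) - exact))
\end{verbatim}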
\subsection{PINNs}
\label{sec:23}
\subsubsection{Neural Networks.}
As PINNs are neural networks, we start with a very concise description of them. Given an input $y \in \mathbb{D}$, a feedforward neural network (also termed a multi-layer perceptron), shown in figure \ref{fig:1}, transforms it to an output through layers of units (neurons), which are composed of either affine-linear maps between units (in successive layers) or scalar non-linear activation functions within units, resulting in the representation,
\begin{equation}
\label{eq:ann1}
\bu_{\theta}(y) = C_K \circ\sigma \circ C_{K-1}\ldots \ldots \ldots \circ\sigma \circ C_2 \circ \sigma \circ C_1(y).
\end{equation}
Here, $\circ$ refers to the composition of functions and $\sigma$ is a scalar (non-linear) activation function. Examples for the activation function $\sigma$ in \eqref{eq:ann1} include the sigmoid function, the hyperbolic tangent function and the \emph{ReLU} function.
For any $1 \leq k \leq K$, we define
\begin{equation}
\label{eq:C}
C_k z_k = W_k z_k + b_k, \quad {\rm for} ~ W_k \in \mathbb{R}^{d_{k+1} \times d_k}, z_k \in \mathbb{R}^{d_k}, b_k \in \mathbb{R}^{d_{k+1}}.
\end{equation}
For consistency of notation, we set $d_1 = \bar{d}$ and $d_K = m$.
Our neural network \eqref{eq:ann1} consists of an input layer, an output layer and $(K-1)$ hidden layers for some $1 < K \in \mathbb{N}$. The $k$-th hidden layer (with $d_k$ neurons) is given an input vector $z_k \in \mathbb{R}^{d_k}$ and transforms it first by an affine linear map $C_k$ \eqref{eq:C} and then by a nonlinear (component wise) activation $\sigma$. A straightforward addition shows that our network contains $\left(\bar{d} + m + \sum\limits_{k=2}^{K-1} d_k\right)$ neurons.
We also denote,
\begin{equation}
\label{eq:theta}
\theta = \{W_k, b_k\},~ \theta_W = \{ W_k \}\quad \forall~ 1 \leq k \leq K,
\end{equation}
to be the concatenated set of (tunable) weights for our network. It is straightforward to check that $\theta \in \Theta \subset \mathbb{R}^M$ with
\begin{equation}
\label{eq:ns}
M = \sum\limits_{k=1}^{K-1} (d_k +1) d_{k+1}.
\end{equation}
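In the PyTorch framework used for the experiments in this paper, the representation \eqref{eq:ann1} with $\sigma = \tanh$ can be realized as follows (a minimal sketch; layer sizes are illustrative):
\begin{verbatim}
import torch.nn as nn

class FeedforwardNet(nn.Module):
    # u_theta: affine maps C_k with component-wise tanh
    # activations in between
    def __init__(self, d_in=2, d_out=1, width=20, n_hidden=4):
        super().__init__()
        layers = [nn.Linear(d_in, width), nn.Tanh()]
        for _ in range(n_hidden - 1):
            layers += [nn.Linear(width, width), nn.Tanh()]
        layers += [nn.Linear(width, d_out)]
        self.net = nn.Sequential(*layers)

    def forward(self, y):
        return self.net(y)
\end{verbatim}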
\begin{figure}[htbp]
\centering
\includegraphics[width=8cm]{ANN.png}
\caption{An illustration of a (fully connected) deep neural network. The red neurons represent the inputs to the network and the blue neurons denote the output layer. They are
connected by hidden layers with yellow neurons. Each hidden unit (neuron) is connected by affine linear maps between units in different layers and then with nonlinear (scalar) activation functions within units.}
\label{fig:1}
\end{figure}
\subsubsection{Training PINNs: Loss functions and optimization}
The neural network $\bu_{\theta}$ \eqref{eq:ann1} depends on the tuning parameter $\theta \in \Theta$ of weights and biases. Within the standard paradigm of deep learning \cite{DLbook}, one \emph{trains} the network by finding tuning parameters $\theta$ such that the loss (error, mismatch, regret) between the neural network and the underlying target is minimized. Here, our target is the solution $\bu \in X^{\ast}$ of the abstract PDE \eqref{eq:pde} and we wish to find the tuning parameters $\theta$ such that the resulting neural network $\bu_{\theta}$ approximates $\bu$.
Following standard practice of machine learning, one obtains training data $\bu(y)$, for all $y \in \mathcal{S}$, with training set $\mathcal{S} \subset \mathbb{D}$ and then minimizes a loss function of the form $\sum\limits_{\mathcal{S}} \|\bu(y) - \bu_{\theta}(y)\|_{X}$ to find the neural network approximation for $\bu$. However, obtaining this training data requires possibly expensive numerical simulations of the underlying PDE \eqref{eq:pde}. In order to circumvent this issue, the authors of \cite{Lag1} suggest a different strategy. An abstract paraphrasing of this strategy runs as follows: we assume that for every $\theta \in \Theta$, the neural network $\bu_{\theta} \in X^{\ast}$ and $\|\bu_{\theta} \|_{X^{\ast}} < +\infty$. We define the following \emph{residual}:
\begin{equation}
\label{eq:res1}
\EuScript{R}_{\theta} = \EuScript{R}(\bu_\theta):= \EuScript{D}\left(\bu_{\theta}\right) - \mathbf{f}.
\end{equation}
By assumptions (H1),(H2) (cf. \eqref{eq:assm1}), we see that $\EuScript{R}_{\theta} \in Y^{\ast}$ and $\|\EuScript{R}_{\theta}\|_{Y^{\ast}} < +\infty$ for all $\theta \in \Theta$. Note that $\EuScript{R}(\bu) = \EuScript{D}(\bu) - \mathbf{f} \equiv 0$, for the solution $\bu$ of the PDE \eqref{eq:pde}. Hence, the term \emph{residual} is justified for \eqref{eq:res1}.
The strategy of PINNs, following \cite{Lag1}, is to minimize the \emph{residual} \eqref{eq:res1} over the admissible set of tuning parameters $\theta \in \Theta$, i.e.,
\begin{equation}
\label{eq:pinn1}
{\rm Find}~\theta^{\ast} \in \Theta:\quad \theta^{\ast} = {\rm arg}\min\limits_{\theta \in \Theta} \|\EuScript{R}_{\theta}\|_{Y}.
\end{equation}
Realizing that $Y = L^p(\mathbb{D})$ for some $1 \leq p < \infty$, we can equivalently minimize,
\begin{equation}
\label{eq:pinn2}
{\rm Find}~\theta^{\ast} \in \Theta:\quad \theta^{\ast} = {\rm arg}\min\limits_{\theta \in \Theta} \|\EuScript{R}_{\theta}\|^p_{L^p(\mathbb{D})} = {\rm arg}\min\limits_{\theta \in \Theta} \int\limits_{\mathbb{D}} |\EuScript{R}_{\theta}(y)|^p dy.
\end{equation}
As it will not be possible to evaluate the integral in \eqref{eq:pinn2} exactly, we need to approximate it numerically by a quadrature rule. To this end, we use the quadrature rules \eqref{eq:quad} discussed earlier and select the \emph{training set} $\mathcal{S} = \{y_n\}$ with $y_n \in \mathbb{D}$ for all $1 \leq n \leq N$ as the quadrature points for the quadrature rule \eqref{eq:quad} and consider the following \emph{loss function}:
\begin{equation}
\label{eq:lf1}
J(\theta):= \sum\limits_{n=1}^N w_n |\EuScript{R}_{\theta}(y_n)|^p = \sum\limits_{n=1}^N w_n \left| \EuScript{D}(\bu_{\theta}(y_n)) - \mathbf{f}(y_n) \right|^p.
\end{equation}
It is common in machine learning \cite{DLbook} to regularize the minimization problem for the loss function i.e we seek to find,
\begin{equation}
\label{eq:lf2}
\theta^{\ast} = {\rm arg}\min\limits_{\theta \in \Theta} \left(J(\theta) + \lambda_{reg} J_{reg}(\theta) \right).
\end{equation}
Here, $J_{reg}:\Theta \to \mathbb{R}$ is a \emph{regularization} (penalization) term. A popular choice is to set $J_{reg}(\theta) = \|\theta_W\|^q_q$ for either $q=1$ (to induce sparsity) or $q=2$. The parameter $0 \leq \lambda_{reg} \ll 1$ balances the regularization term with the actual loss $J$ \eqref{eq:lf1}.
The proposed algorithm for computing this PINN is given below,
\begin{algorithm}
\label{alg:PINN} {\bf Finding a physics informed neural network to approximate the solution of the abstract PDE \eqref{eq:pde}}.
\begin{itemize}
\item [{\bf Inputs}:] Underlying domain $\mathbb{D}$, differential operator $\EuScript{D}$ and input source term $\mathbf{f}$ for the PDE \eqref{eq:pde}, quadrature points and weights for the quadrature rule \eqref{eq:quad}, non-convex gradient based optimization algorithms.
\item [{\bf Goal}:] Find PINN $\bu^{\ast}= \bu_{\theta^{\ast}}$ for approximating the PDE \eqref{eq:pde}.
\item [{\bf Step $1$}:] Choose the training set $\mathcal{S} = \{y_n\}$ for $y_n \in \mathbb{D}$, for all $1 \leq n \leq N$ such that $\{y_n\}$ are quadrature points for the underlying quadrature rule \eqref{eq:quad}.
\item [{\bf Step $2$}:] For an initial value of the weight vector $\overline{\theta} \in \Theta$, evaluate the neural network $\bu_{\overline{\theta}}$ \eqref{eq:ann1}, the PDE residual \eqref{eq:res1}, the loss function \eqref{eq:lf2} and its gradients to initialize the underlying optimization
algorithm.
\item [{\bf Step $3$}:] Run the optimization algorithm till an approximate local minimum $\theta^{\ast}$ of \eqref{eq:lf2} is reached. The map $\bu^{\ast} = \bu_{\theta^{\ast}}$ is the desired PINN for approximating the solution $\bu$ of the PDE \eqref{eq:pde}.
\end{itemize}
\end{algorithm}
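Steps $2$ and $3$ can be realized, for instance, with the LBFGS optimizer of PyTorch. In the sketch below, \texttt{loss\_fn} is a hypothetical routine assembling the loss \eqref{eq:lf2} from the residuals on the training set, and \texttt{FeedforwardNet} is the network class sketched above:
\begin{verbatim}
import torch

model = FeedforwardNet()
optimizer = torch.optim.LBFGS(model.parameters(), max_iter=2000,
                              line_search_fn="strong_wolfe")

def closure():
    # one evaluation of J(theta) + lambda_reg * J_reg(theta)
    optimizer.zero_grad()
    loss = loss_fn(model)
    loss.backward()
    return loss

optimizer.step(closure)  # Step 3: run the optimizer
\end{verbatim}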
\section{Korteweg--de Vries \& Kawahara equations}
\label{sec:3}
We will apply the PINNs algorithm \ref{alg:PINN} to several examples of non-linear dispersive PDEs. We start with the well-known KdV-Kawahara equations.
\subsection{The underlying PDEs}
The general form of the KdV-Kawahara equation is given by,
\begin{equation}
\label{eq:heat}
\begin{aligned}
u_t + u u_x + \alpha u_{xxx} - \beta u_{xxxxx}&= 0, \quad \forall ~ x\in (0,1), \, t \in (0,T), \\
u(x,0) &= \bar{u}(x), \quad \forall ~ x \in (0,1), \\
u(0,t) &= h_1(t), \quad \forall ~ t \in (0,T), \\
u(1,t) &= h_2(t), \quad \forall ~ t \in (0,T), \\
u_x(0,t) &=h_3(t), \quad \forall ~ t \in (0,T), \\
u_x(1,t) &=h_4(t), \quad \forall ~ t \in (0,T), \\
u_{xx}(1,t) &= h_5(t), \quad \forall ~ t \in (0,T).
\end{aligned}
\end{equation}
Here $\alpha, \beta $ are non-negative real constants.
Note that if $\beta =0$, then the above equation is called the Korteweg--de Vries (KdV) equation, and if $\beta \neq 0$, then it is called the Kawahara equation. It is well known that the KdV equation plays a pivotal role in the modeling of shallow water waves; in particular, one-dimensional waves of small but finite amplitude in dispersive systems can be described by the KdV equation. However, under certain circumstances, the coefficient of the third-order derivative in the KdV equation may become very small or even zero \cite{hunter}. In such a scenario, one has to take into account the higher-order effect of dispersion in order to balance the nonlinear effect, which leads to the Kawahara equation.
For the sake of simplicity it will be assumed that $\alpha=\beta=1$ in the upcoming analysis, since their values are not relevant in the present setting, while emphasizing that the subsequent analysis also holds for the case $\beta=0$ (KdV equation). Regarding the existence and stability of solutions to \eqref{eq:heat}, we closely follow the work by Faminskii $\&$ Larkin \cite{andrei}, and recall the following result.
\begin{theorem}
\label{002}
For any integer $k\ge 0$, $n \in \mathbb{N}$, $l=1$ or $2$, define the spaces
\begin{align*}
\mathcal{X}_k((0,1)\times (0,T))& := \Big\{ u: \partial^n_t u \in C([0,T]; H^{5(k-n)}(0,1)) \cap L^2((0,T); H^{5(k-n)+1}(0,1))\Big\}, \\
\mathcal{B}^l_k(0,T)& := \displaystyle \prod_{j=0}^{l} H^{k + (2-j)/5} (0,T).
\end{align*}
Let $\bar{u} \in H^{5k}(0,1)$, boundary data $(h_1, h_3) \in \mathcal{B}^1_k(0,T)$, and $(h_2, h_4, h_5) \in \mathcal{B}^2_k(0,T)$ satisfy the natural compatibility conditions. Then there exists a unique solution $u \in \mathcal{X}_k$, and the flow map is Lipschitz continuous on any ball in the corresponding norm.
\end{theorem}
By choosing appropriate values of $k$ (for our purpose, we take $k=2$) in the above theorem, we readily infer the existence of classical solutions of the Kawahara equations \eqref{eq:heat} by the embedding of Sobolev spaces in the $C^{\ell}$ spaces.
\subsection{PINNs for the KdV-Kawahara Equations \eqref{eq:heat}}
We apply algorithm \ref{alg:PINN} to approximate the solutions of \eqref{eq:heat}. To this end, we need the following steps,
\subsubsection{Training Set.}
\label{sec:train}
Let us define the space-time domain $\Omega_T = (0,1) \times (0,T)$, and divide the training set $\mathcal{S} = \mathcal{S}_{int} \cup \mathcal{S}_{sb} \cup \mathcal{S}_{tb}$ of the abstract PINNs algorithm \ref{alg:PINN} into the following three subsets,
\begin{itemize}
\item [(a)] Interior training points $\mathcal{S}_{int}=\{y_n\}$ for $1 \leq n \leq N_{int}$, with each $y_n = (x_n,t_n) \in \Omega_T$. We use low-discrepancy Sobol points as training points (a generation sketch is given below).
\item [(b)] Spatial boundary training points $\mathcal{S}_{sb} = (0,t_n) \cup (1,t_n)$ for $1 \leq n \leq N_{sb}$, and the points $t_n$ chosen as low-discrepancy Sobol points.
\item [(c)] Temporal boundary training points $\mathcal{S}_{tb} = \{x_n\}$, with $1 \leq n \leq N_{tb}$ and each $x_n \in (0,1)$, chosen as low-discrepancy Sobol points.
\end{itemize}
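Such a training set can be generated, for instance, with the quasi-Monte Carlo module of SciPy; a minimal sketch with $T$ rescaled to $1$ and the sample sizes later used for the KdV single-soliton experiment (Table \ref{tab:kdv}):
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

S_int = qmc.Sobol(d=2, scramble=True, seed=0).random(2048)  # (x, t)
t_sb = qmc.Sobol(d=1, scramble=True, seed=1).random(512).ravel()
S_sb = np.concatenate(
    [np.stack([np.zeros_like(t_sb), t_sb], axis=1),  # x = 0 boundary
     np.stack([np.ones_like(t_sb), t_sb], axis=1)])  # x = 1 boundary
x_tb = qmc.Sobol(d=1, scramble=True, seed=2).random(512).ravel()  # t = 0
\end{verbatim}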
\subsubsection{Residuals}
To define residuals for the neural network $u_{\theta} \in C^5([0,T]\times [0,1])$, defined by \eqref{eq:ann1}, with $\theta \in \Theta$ as the set of tuning parameters, we use the hyperbolic tangent $\tanh$ activation function, i.e., $\sigma = \tanh$. With this setting, we define the following residuals
\begin{itemize}
\item [(a)] Interior Residual given by,
\begin{equation}
\label{eq:hres1}
\EuScript{R}_{int,\theta}(x,t):= \partial_t u_{\theta}(x,t) + u_{\theta} (u_{\theta})_x(x,t) + (u_{\theta})_{xxx}(x,t) - (u_{\theta})_{xxxxx}(x,t).
\end{equation}
Note that the above residual is well-defined and $\EuScript{R}_{int,\theta} \in C([0,T]\times [0,1])$ for every $\theta \in \Theta$; a sketch of its evaluation via automatic differentiation is given after this list.
\item [(b)] Spatial boundary Residual given by,
\begin{equation}
\begin{aligned}
\label{eq:hres2}
\EuScript{R}_{sb1,\theta}(0,t) & := u_{\theta}(0,t) -h_1(t), \quad \forall t \in (0,T), \\
\EuScript{R}_{sb2,\theta}(1,t) & := u_{\theta}(1,t) -h_2(t), \quad \forall t \in (0,T), \\
\EuScript{R}_{sb3,\theta}(0,t) & := (u_{\theta})_x(0,t) -h_3(t), \quad \forall t \in (0,T),\\
\EuScript{R}_{sb4,\theta}(1,t) & := (u_{\theta})_x(1,t) -h_4(t), \quad \forall t \in (0,T),\\
\EuScript{R}_{sb5,\theta}(1,t) & := (u_{\theta})_{xx}(1,t) -h_5(t), \quad \forall t \in (0,T).
\end{aligned}
\end{equation}
Given the fact that the neural network and boundary data are smooth, above residuals are well-defined.
\item [(c)] Temporal boundary Residual given by,
\begin{equation}
\label{eq:hres3}
\EuScript{R}_{tb,\theta}(x):= u_{\theta}(x,0) - \bar{u}(x), \quad \forall x \in (0,1).
\end{equation}
Again the above quantity is well-defined and $\EuScript{R}_{tb,\theta} \in C^5((0,1))$, as both the initial data and the neural network are smooth.
\end{itemize}
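The interior residual \eqref{eq:hres1} is evaluated by repeated automatic differentiation of the network output; a minimal PyTorch sketch, assuming inputs ordered as $(x,t)$:
\begin{verbatim}
import torch

def interior_residual(model, xt):
    # xt: N x 2 tensor of interior training points (x, t)
    xt = xt.clone().requires_grad_(True)
    u = model(xt)
    du = torch.autograd.grad(u, xt, torch.ones_like(u),
                             create_graph=True)[0]
    u_x, u_t = du[:, 0:1], du[:, 1:2]
    def dx(f):  # one further x-derivative
        return torch.autograd.grad(f, xt, torch.ones_like(f),
                                   create_graph=True)[0][:, 0:1]
    u_xxx = dx(dx(u_x))
    u_xxxxx = dx(dx(u_xxx))
    return u_t + u * u_x + u_xxx - u_xxxxx
\end{verbatim}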
\subsubsection{Loss function}
We set the following loss function
\begin{equation}
\label{eq:hlf}
J(\theta):= \sum\limits_{n=1}^{N_{tb}} w^{tb}_n|\EuScript{R}_{tb,\theta}(x_n)|^2 + \sum\limits_{n=1}^{N_{sb}} \sum\limits_{i=1}^{5} w^{sb}_n|\EuScript{R}_{sbi,\theta}(t_n)|^2 + \lambda \sum\limits_{n=1}^{N_{int}} w^{int}_n|\EuScript{R}_{int,\theta}(x_n,t_n)|^2 .
\end{equation}
Here the residuals are defined by \eqref{eq:hres3}, \eqref{eq:hres2}, \eqref{eq:hres1}, $w^{tb}_n$ are the $N_{tb}$ quadrature weights corresponding to the temporal boundary training points $\mathcal{S}_{tb}$, $w^{sb}_n$ are the $N_{sb}$ quadrature weights corresponding to the spatial boundary training points $\mathcal{S}_{sb}$ and $w^{int}_n$ are the $N_{int}$ quadrature weights corresponding to the interior training points $\mathcal{S}_{int}$. Furthermore, $\lambda$ is a hyperparameter for balancing the residuals, on account of the PDE and the initial and boundary data, respectively.
\subsection{Estimate on the generalization error}
We are interested in estimating the following generalization error for the PINN $u^* =u_{\theta^*}$ with loss function \eqref{eq:hlf}, for approximating the solution of \eqref{eq:heat}:
\begin{equation}
\label{eq:hegen}
\EuScript{E}_{G}:= \left(\int\limits_0^T \int\limits_0^1 |u(x,t) - u^{\ast}(x,t)|^2 dx dt \right)^{\frac{1}{2}}.
\end{equation}
We are going to estimate the generalization error in terms of the \emph{training error} that we define as,
\begin{equation}
\label{eq:hetrain}
\EuScript{E}^2_{T}:= \underbrace{\sum\limits_{n=1}^{N_{tb}} w^{tb}_n|\EuScript{R}_{tb,\theta^{\ast}}(x_n)|^2}_{(\EuScript{E}_T^{tb})^2} + \underbrace{\sum\limits_{n=1}^{N_{sb}} \sum\limits_{i=1}^{5} w^{sb}_n|\EuScript{R}_{sbi,\theta^{\ast}}(t_n)|^2}_{(\EuScript{E}_T^{sb})^2} + \lambda\underbrace{\sum\limits_{n=1}^{N_{int}} w^{int}_n|\EuScript{R}_{int,\theta^{\ast}}(x_n,t_n)|^2}_{(\EuScript{E}_T^{int})^2}.
\end{equation}
Note that the training error can be readily computed \emph{a posteriori} from the loss function \eqref{eq:hlf}.
We also need the following assumptions on the quadrature error. For any function $g \in C^k(\Omega)$, the quadrature rule corresponding to quadrature weights $w^{tb}_n$ at points $x_n \in \mathcal{S}_{tb}$, with $1 \leq n \leq N_{tb}$, satisfies
\begin{equation}
\label{eq:hquad1}
\left| \int\limits_{\Omega} g(x) dx - \sum\limits_{n=1}^{N_{tb}} w^{tb}_n g(x_n)\right| \leq C^{tb}_{quad}(\|g\|_{C^k}) N_{tb}^{-\alpha_{tb}}.
\end{equation}
For any function $g \in C^k(\partial\Omega \times [0,T])$, the quadrature rule corresponding to quadrature weights $w^{sb}_n$ at points $(x_n,t_n) \in \mathcal{S}_{sb}$, with $1 \leq n \leq N_{sb}$, satisfies
\begin{equation}
\label{eq:hquad2}
\left| \int\limits_0^T \int\limits_{\partial\Omega} g(x,t) ds(x) dt - \sum\limits_{n=1}^{N_{sb}} w^{sb}_n g(x_n,t_n)\right| \leq C^{sb}_{quad}(\|g\|_{C^k}) N_{sb}^{-\alpha_{sb}}.
\end{equation}
Finally, for any function $g \in C^\ell(\Omega \times [0,T])$, the quadrature rule corresponding to quadrature weights $w^{int}_n$ at points $(x_n,t_n) \in \mathcal{S}_{int}$, with $1 \leq n \leq N_{int}$, satisfies
\begin{equation}
\label{eq:hquad3}
\left| \int\limits_0^T \int\limits_{\Omega} g(x,t) dx dt - \sum\limits_{n=1}^{N_{int}} w^{int}_n g(x_n,t_n)\right| \leq C^{int}_{quad}(\|g\|_{C^\ell}) N_{int}^{-\alpha_{int}}.
\end{equation}
In the above, $\alpha_{int},\alpha_{sb},\alpha_{tb} > 0$ and in principle, different order quadrature rules can be used. We estimate the generalization error for the PINN in the following,
\begin{theorem}
\label{thm:heat}
Let $u \in C^5([0,1] \times [0,T])$ be the unique classical solution of the Korteweg--de Vries--Kawahara equation \eqref{eq:heat}. Let $u^{\ast} = u_{\theta^{\ast}}$ be a PINN generated by algorithm \ref{alg:PINN}, corresponding to loss function \eqref{eq:lf2}, \eqref{eq:hlf}. Then the generalization error \eqref{eq:hegen} can be estimated as,
\begin{equation}
\label{result_01}
\begin{aligned}
\EuScript{E}_G &\leq C_1\big(\EuScript{E}_T^{tb} + \EuScript{E}_T^{int} + C_2(\EuScript{E}_T^{sb}) + C_3(\EuScript{E}_T^{sb})^{1/2} \\
&+ (C_{quad}^{tb})^{1/2} N_{tb}^{-\alpha_{tb} / 2} + (C_{quad}^{int})^{1/2} N_{int}^{-\alpha_{int} / 2} + C_2(C_{quad}^{sb})^{1/2} N_{sb}^{-\alpha_{sb} / 2} + C_3(C_{quad}^{sb})^{1/4} N_{sb}^{-\alpha_{sb} / 4}\big),
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
C_1 &= \sqrt{T + 2C_4T^2e^{2C_4T}}, \quad
C_2 = \sqrt{\Vert u \Vert_{C_t^0C_x^0} + 1}, \\
C_3 &= \sqrt{10(\Vert u^* \Vert_{C_t^0C^4_x} + \Vert u \Vert_{C_t^0C^4_x})T^{1/2}}, \quad
C_4 = \Vert u^* \Vert_{C_t^0C_x^1} + \frac{1}{2}\Vert u \Vert_{C_t^0C_x^1} + \frac{1}{2},
\end{aligned}
\end{equation}
and $C_{quad}^{tb} = C_{quad}^{tb}(\|\EuScript{R}_{tb,\theta^{\ast}}\|_{C^5})$, $C_{quad}^{sb} = C_{quad}^{sb}(\sum\limits_{i=1}^{5}\|\EuScript{R}_{sbi,\theta^{\ast}}\|_{C^{3}})$ and $C_{quad}^{int} = C_{quad}^{int}(\|\EuScript{R}_{int,\theta^{\ast}}\|_{C^{0}})$ are the constants defined by the quadrature error \eqref{eq:hquad1}, \eqref{eq:hquad2}, \eqref{eq:hquad3}, respectively.
\end{theorem}
\begin{proof}
It is easy to see that the error $\hat{u} := u^{\ast} - u$ satisfies the following equations,
\begin{equation}
\label{eq:herr}
\begin{aligned}
\hat{u}_t + \hat{u}_{xxx} - \hat{u}_{xxxxx}+ u^{\ast} u^{\ast}_x - uu_x&= \EuScript{R}_{int}, \quad x\in (0,1)~ t \in (0,T), \\
\hat{u}(x,0) &= \EuScript{R}_{tb}(x), \quad x \in (0,1), \\
\hat{u}(0,t) &= \EuScript{R}_{sb1}(0,t), \quad t \in (0,T), \\
\hat{u}(1,t) &= \EuScript{R}_{sb2}(1,t), \quad t \in (0,T),\\
\hat{u}_x(0,t) &= \EuScript{R}_{sb3}(0,t), \quad t \in (0,T), \\
\hat{u}_x(1,t) &= \EuScript{R}_{sb4}(1,t), \quad t \in (0,T), \\
\hat{u}_{xx}(1,t) &= \EuScript{R}_{sb5}(1,t), \quad t \in (0,T).
\end{aligned}
\end{equation}
Here, we have denoted $\EuScript{R}_{int} = \EuScript{R}_{int,\theta^{\ast}}$ for notational convenience and analogously for the residuals $\EuScript{R}_{tb},\EuScript{R}_{sb}.$
Note that
$$
u^{\ast} u^{\ast}_x - uu_x = \hat{u} \hat{u}_x + u \hat{u}_x + \hat{u} u_x; \,\, \hat{u} \hat{u}_{xxx} = ( \hat{u} \hat{u}_{xx})_x - \frac12 ( \hat{u}^2_x)_x, \,\, \hat{u} \hat{u}_{xxxxx} = ( \hat{u} \hat{u}_{xxxx})_x - ( \hat{u}_x \hat{u}_{xxx})_x+ \frac12 ( \hat{u}^2_{xx})_x.
$$
Multiplying both sides of the PDE \eqref{eq:herr} with $\hat{u}$, integrating over the domain and afterwards by parts yields,
\begin{equation}
\begin{split}
\frac{1}{2}\frac{d}{dt}\int_0^1 \hat{u}^2\,dx &= - \int_0^1 \hat{u}\hat{u}_{xxx}\,dx + \int_0^1 \hat{u}\hat{u}_{xxxxx}\,dx - \int_0^1\hat{u}(\hat{u}\hat{u}_x + u\hat{u}_x + u_x\hat{u})\,dx + \int_0^1 \hat{u}\EuScript{R}_{int}\,dx \\
&\leq -\left. \hat{u}_{xx}\hat{u} \right\vert_0^1 + \frac{1}{2} \left. (\hat{u}_x)^2 \right\vert_1 + \left. \hat{u}_{xxxx}\hat{u} \right\vert_0^1 - \left. \hat{u}_{xxx}\hat{u}_x \right\vert_0^1 + \frac{1}{2} \left. (\hat{u}_{xx})^2 \right\vert_1 \\
&- \int_0^1 \hat{u}^2\hat{u}_x\,dx - (\left. \frac{1}{2}\hat{u}^2u \right\vert_0^1 - \frac{1}{2}\int_0^1 \hat{u}^2u_x\,dx) - \int_0^1 \hat{u}^2u_x\,dx + \int_0^1 \hat{u}\EuScript{R}_{int}\,dx \\
&\leq \Vert \hat{u} \Vert_{C^4_x}(|\EuScript{R}_{sb1}| + |\EuScript{R}_{sb2}| + |\EuScript{R}_{sb3}| + |\EuScript{R}_{sb4}|) + \frac{1}{2} (\EuScript{R}_{sb3}^2 + \EuScript{R}_{sb5}^2) \\
&+ (\Vert u^* \Vert_{C_x^1} + \frac{1}{2}\Vert u \Vert_{C_x^1}) \int_0^1 \hat{u}^2\,dx + \frac{1}{2}\Vert u \Vert_{C_x^0}(\EuScript{R}_{sb1}^2 + \EuScript{R}_{sb2}^2) + \frac{1}{2}\int_0^1 \EuScript{R}_{int}^2 \,dx + \frac{1}{2}\int_0^1 \hat{u}^2 \,dx \\
&\leq (\Vert u^* \Vert_{C_t^0C^4_x} + \Vert u \Vert_{C_t^0C^4_x})(|\EuScript{R}_{sb1}| + |\EuScript{R}_{sb2}| + |\EuScript{R}_{sb3}| + |\EuScript{R}_{sb4}|) \\
&+ \frac{1}{2} (\EuScript{R}_{sb3}^2 + \EuScript{R}_{sb5}^2) + \frac{1}{2}\Vert u \Vert_{C_t^0C_x^0}(\EuScript{R}_{sb1}^2 + \EuScript{R}_{sb2}^2) + \frac{1}{2}\int_0^1 \EuScript{R}_{int}^2 \,dx \\
&+ (\Vert u^* \Vert_{C_t^0C_x^1} + \frac{1}{2}\Vert u \Vert_{C_t^0C_x^1} + \frac{1}{2}) \int_0^1 \hat{u}^2\,dx \\
&=: C_1(\sum\limits_{i=1}^5 |\EuScript{R}_{sbi}|) + C_2(\sum\limits_{i=1}^5 \EuScript{R}_{sbi}^2) + \frac{1}{2}\int_0^1 \EuScript{R}_{int}^2 \,dx + C_3\int_0^1 \hat{u}^2\,dx.
\end{split}
\end{equation}
Then integrating the above inequality over $[0,\bar{T}]$ for any $\bar{T} \leq T$, using Cauchy-Schwarz and Gronwall's inequalities, we obtain
\begin{equation}
\begin{aligned}
&\int_0^1 \hat{u}(x, \bar{T})^2\,dx \\
&\leq \int_0^1 \EuScript{R}_{tb}^2\,dx + 2C_1T^{1/2}\sum\limits_{i=1}^5(\int_0^T\EuScript{R}_{sbi}^2\,dt)^{1/2} + 2C_2\sum\limits_{i=1}^5(\int_0^T\EuScript{R}_{sbi}^2\,dt)
+ \int_0^T\int_0^1 \EuScript{R}_{int}^2 \,dxdt + 2C_3\int_0^{\bar{T}}\int_0^1 \hat{u}^2\,dxdt \\
&\leq (1 + 2C_3Te^{2C_3T}) \big(\int_0^1 \EuScript{R}_{tb}^2\,dx + 10C_1T^{1/2}(\sum\limits_{i=1}^5\int_0^T \EuScript{R}_{sbi}^2\,dt)^{1/2}
+ 2C_2\sum\limits_{i=1}^5(\int_0^T\EuScript{R}_{sbi}^2\,dt) + \int_0^T\int_0^1 \EuScript{R}_{int}^2 \,dxdt \big) .
\end{aligned}
\label{eq:kawa_hat_eq3}
\end{equation}
A further integration of \eqref{eq:kawa_hat_eq3} with respect to $\bar{T}$ results in
\begin{equation}
\begin{aligned}
\EuScript{E}_G^2 = \int_0^T\int_0^1 \hat{u}(x, \bar{T})^2\,dxd\bar{T}
&\leq (T + 2C_3T^2e^{2C_3T}) \big(\int_0^1 \EuScript{R}_{tb}^2\,dx + 10C_1T^{1/2}(\sum\limits_{i=1}^5\int_0^T \EuScript{R}_{sbi}^2\,dt)^{1/2} \\
&\quad + 2C_2\sum\limits_{i=1}^5(\int_0^T\EuScript{R}_{sbi}^2\,dt) + \int_0^T\int_0^1 \EuScript{R}_{int}^2 \,dxdt \big),
\end{aligned}
\label{eq:kawa_hat_eq4}
\end{equation}
with
\begin{equation}
\begin{aligned}
C_1 = \Vert u \Vert_{C_t^0C^4_x} + \Vert u^* \Vert_{C_t^0C^4_x}, \quad
C_2 = \frac{1}{2}\Vert u \Vert_{C_t^0C_x^0} + \frac{1}{2}, \quad
C_3 = \Vert u^* \Vert_{C_t^0C_x^1} + \frac{1}{2}\Vert u \Vert_{C_t^0C_x^1} + \frac{1}{2}.
\end{aligned}
\end{equation}
Eventually, applying the estimates \eqref{eq:hquad1}, \eqref{eq:hquad2}, \eqref{eq:hquad3} on the quadrature error, and definition of training errors \eqref{eq:hetrain}, yields the desired inequality \eqref{result_01}.
\end{proof}
\subsection{Numerical experiments}
\subsubsection{Implementation}
The PINNs algorithm \ref{alg:PINN} has been implemented within the PyTorch framework \cite{Pas} and the code can be downloaded from \textbf{https://github.com/baigm11/DispersivePinns}. As is well documented \cite{KAR1, KAR2, MM1}, the coding and implementation of PINNs is extremely simple. Only a few lines of Python code suffice for this purpose. All the numerical experiments were performed on a single GeForce GTX1080 GPU.
The PINNs algorithm has the following hyperparameters: the number of hidden layers $K-1$, the common width $d$ of the hidden layers (i.e., $d_k = d$ in \eqref{eq:ann1}), the activation function $\sigma$, the parameter $\lambda$ in the loss function \eqref{eq:hlf}, the regularization parameter $\lambda_{reg}$ in the cumulative loss function \eqref{eq:lf2} and the specific gradient descent algorithm for approximating the optimization problem \eqref{eq:lf2}. We use the hyperbolic tangent $\tanh$ activation function, thus ensuring that all the smoothness hypotheses on the resulting neural networks, as required in all bounds on generalization error below, are satisfied. Moreover, we use the second-order LBFGS method \cite{Fle} as the optimizer. We follow the ensemble training procedure of \cite{LMR1} in order to choose the remaining hyperparameters. To this end, we consider a range of values, shown in Table \ref{tab:1}, for the number of hidden layers, the width of each hidden layer, the parameter $\lambda$ and the regularization parameter $\lambda_{reg}$. For each configuration in the ensemble, the resulting model is retrained (in parallel) $n_\theta$ times with different random starting values of the trainable weights in the optimization algorithm and the one yielding the smallest value of the training loss is selected.
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.1}
\footnotesize{
\begin{tabular}{c c c c c c c }
\toprule
\bfseries &\bfseries $K-1$ & \bfseries $d$ &\bfseries $q$ & \bfseries $\lambda_{reg}$ &\bfseries $\lambda$ &\bfseries $n_\theta$ \\
\midrule
\midrule
KdV Equation & 4, 8 & 20, 24, 28 &2& 0& 0.1, 1, 10 & 5\\
\midrule
Kawahara Equation & 4, 8, 12 & 20, 24, 28, 32 &2& 0& 0.1, 1, 10 & 5\\
\midrule
CH Equation & 4, 8 & 20, 24, 28 &2& 0& 0.1, 1, 10 & 5\\
\midrule
BO Equation, Single Soliton & 4, 8, 12 & 20, 24, 28, 32 &2& 0& 0.1, 1, 10 & 5\\
\midrule
BO Equations, Double Soliton & 4, 8 & 20, 24, 28 &2& 0& 0.1, 1, 10 & 5\\
\bottomrule
\end{tabular}
\caption{Hyperparameter configurations employed in the ensemble training of PINNs.}
\label{tab:1}
}
\end{table}
\subsubsection{KdV equation}
We set $\beta = 0$ in \eqref{eq:heat} to recover the KdV equation and consider the well-known numerical benchmarks of single and double soliton solutions, with exact solution formulas for both cases.
For the single soliton, the exact solution is given by,
\begin{equation}
u(x, t) = 9\big(1 - \tanh^2(\sqrt{3/2}(x - 3t))\big),
\end{equation}
representing a single bump moving to the right with speed $3$ and initial peak at $x = 0$.
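For reference, this exact solution reads, in code (a direct transcription of the formula above, used only to evaluate errors on a space-time grid):
\begin{verbatim}
import numpy as np

def kdv_single_soliton(x, t):
    # direct transcription of the exact solution quoted above
    return 9.0 * (1.0 - np.tanh(np.sqrt(1.5) * (x - 3.0 * t)) ** 2)
\end{verbatim}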
\begin{figure}[h!]
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{{Images/kdv_single.png}}
\caption{Single soliton}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering\
\includegraphics[width=1\linewidth]{{Images/kdv_double.png}}
\caption{Double soliton}
\end{subfigure}
\caption{The exact and PINN solutions of single and double soliton test case of KdV equation.}
\label{fig:kdv}
\end{figure}
The ensemble training for the PINNs in this case resulted in the selection of hyperparameters, reported in Table \ref{tab:kdv}. We plot the exact solution and the approximate solution, computed with the PINNs algorithm \ref{alg:PINN} in figure \ref{fig:kdv} (left). As seen from this figure, PINNs provide a very accurate approximation for the single soliton. This is further verified in the extremely low generalization errors reported in Table \ref{tab:kdv}, showcasing the ability of PINNs to accurately approximate single solitons for the KdV equation.
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.1}
\footnotesize{
\begin{tabular}{ c c c c c c c c c c}
\toprule
\bfseries &\bfseries $N_{int}$ & \bfseries $N_{sb}$& \bfseries $N_{tb}$ &\bfseries $K-1$ & \bfseries $d$ &\bfseries $\lambda$ &\bfseries $\EuScript{E}_T$&\bfseries $\EuScript{E}_G^r$ \\
\midrule
\midrule
Single Soliton &2048 & 512 & 512 &4&20& 0.1 &0.000236& 0.00338\% \\
\midrule
Double Soliton &4096 & 1024 & 1024 &4&32& 1 &0.000713& 0.059\% \\
\bottomrule
\end{tabular}
\caption{Best performing \textit{Neural Network} configurations for the single soliton and double soliton problem. Low-discrepancy Sobol points are used for every reported numerical example.}
\label{tab:kdv}
}
\end{table}
For the double soliton, the exact solution is given by,
\begin{equation}
u(x, t) = 6(b - a)\frac{b\textrm{csch}^2(\sqrt{b/2}(x - 2bt)) + a\textrm{sech}^2(\sqrt{a/2}(x - 2at))}{\big(\sqrt{a}\tan(\sqrt{a/2}(x - 2at)) - \sqrt{b}\tanh(\sqrt{b/2}(x - 2bt))\big)^2},
\label{eq:kdv_d}
\end{equation}
for any real numbers $a$ and $b$, where we have used $a = 0.5$ and $b = 1$ in the numerical experiment. Equation \eqref{eq:kdv_d} represents two solitary waves that “collide” at $t = 0$ and separate for $t > 0$. For large $|t|$, $u(\cdot, t)$ is close to a sum of two solitary waves at different locations.
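The two-soliton formula \eqref{eq:kdv_d} can likewise be evaluated pointwise; the following sketch is again only illustrative, and the evaluation points must avoid the removable singularities of the formula (where the hyperbolic cosecant blows up).
\begin{verbatim}
import numpy as np

def kdv_double_soliton(x, t, a=0.5, b=1.0):
    # Direct evaluation of the two-soliton formula with a = 0.5, b = 1.
    num = (b / np.sinh(np.sqrt(b / 2.0) * (x - 2.0 * b * t)) ** 2
           + a / np.cosh(np.sqrt(a / 2.0) * (x - 2.0 * a * t)) ** 2)
    den = (np.sqrt(a) * np.tan(np.sqrt(a / 2.0) * (x - 2.0 * a * t))
           - np.sqrt(b) * np.tanh(np.sqrt(b / 2.0) * (x - 2.0 * b * t))) ** 2
    return 6.0 * (b - a) * num / den
\end{verbatim}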
The ensemble training for the PINNs in this case resulted in the selection of the hyperparameters reported in Table \ref{tab:kdv} (bottom row). We plot the exact solution and the approximate solution, computed with the PINNs algorithm \ref{alg:PINN}, in figure \ref{fig:kdv} (right). As seen from this figure, PINNs provide a very accurate approximation for the double soliton, which is further verified by the extremely low generalization errors reported in Table \ref{tab:kdv}. Thus, PINNs are able to approximate KdV solitons to very high accuracy.
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.1}
\footnotesize{
\begin{tabular}{ c c c c }
\toprule
\bfseries max\_iters &\bfseries training time$/s$ &\bfseries $\eps_T$ &\bfseries $\eps_G^r$ \\
\midrule
\midrule
100 & 4 & 6.75e-02 & 1.84e-01 \\
\midrule
500 & 21 & 2.41e-03 & 1.65e-03 \\
\midrule
1000 & 44 & 7.34e-04 & 4.92e-04 \\
\midrule
2000 & 61 & 2.36e-04 & 3.38e-05 \\
\bottomrule
\end{tabular}
\caption{Results of different training iterations for single soliton case of KdV equation.}
\label{tab:kdv_single_cp}
}
\end{table}
Lastly, it is natural to investigate the computational cost of the PINNs in approximating the KdV solutions. The computational cost is dominated by the training, i.e., the number of LBFGS iterations that are needed to minimize the training error. We report the training times (in seconds) for different numbers of iterations max\_iters for the single soliton test case in Table \ref{tab:kdv_single_cp} and for the double soliton test case in Table \ref{tab:kdv_double_cp}. From Table \ref{tab:kdv_single_cp}, we observe that the PINN for approximating the single soliton is very fast to train, with a relative error of $1\%$ already reached with less than $500$ LBFGS iterations and a training time of approximately $20$ seconds. On the other hand, the PINN for the double soliton takes longer to train and attains an error of less than $1\%$ only with $2000$ iterations and a training time of less than $3$ minutes. This is not surprising, as the double soliton has a significantly more complicated structure. Nevertheless, the overall cost is still very low, given the high accuracy.
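The timings above correspond to a standard LBFGS training loop. A minimal PyTorch sketch, assuming that \texttt{model} (the PINN) and \texttt{loss\_fn} (an implementation of the training loss) are already defined, reads as follows; this is an illustration, not the authors' exact code.
\begin{verbatim}
import time
import torch

optimizer = torch.optim.LBFGS(model.parameters(), max_iter=2000,
                              tolerance_grad=1e-12, tolerance_change=1e-12)

def closure():
    # LBFGS requires a closure that re-evaluates the loss and its gradient.
    optimizer.zero_grad()
    loss = loss_fn(model)
    loss.backward()
    return loss

start = time.time()
optimizer.step(closure)   # performs up to max_iter LBFGS iterations
print("training time: %.1f s" % (time.time() - start))
\end{verbatim}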
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.1}
\footnotesize{
\begin{tabular}{ c c c c }
\toprule
\bfseries max\_iters &\bfseries training time$/s$ &\bfseries $\eps_T$ &\bfseries $\eps_G^r$ \\
\midrule
\midrule
100 & 9 & 1.21e-01 & 4.82e-01 \\
\midrule
500 & 48 & 2.60e-02 & 1.30e-01 \\
\midrule
1000 & 95 & 7.00e-03 & 4.32e-02 \\
\midrule
2000 & 159 & 2.54e-03 & 1.11e-02 \\
\midrule
5000 & 436 & 7.89e-04 & 6.50e-04 \\
\midrule
10000 & 499 & 7.13e-04 & 5.88e-04 \\
\bottomrule
\end{tabular}
\caption{Results of different training iterations for double soliton case of KdV equation.}
\label{tab:kdv_double_cp}
}
\end{table}
\subsubsection{Kawahara equation}
Following \cite{Car, koley1, koley2}, we consider a Kawahara-type equation which differs from the Kawahara equation \eqref{eq:heat} in a first-order term $u_x$,
\begin{equation}
u_t + u_x + uu_x + u_{xxx} -u_{xxxxx} = 0.
\label{eq:kawa_mod}
\end{equation}
This first-order term $u_x$ is a linear perturbation, and we can easily derive a similar a posteriori bound on the generalization error as for \eqref{eq:heat}. As no exact solution formulas for the double soliton test case are known for the Kawahara equation \eqref{eq:kawa_mod}, we focus on the single soliton case, with exact solution given by \begin{equation}
\label{eq:kdvs}
u(x, t) = \frac{105}{169}\textrm{sech}^4\Big(\frac{1}{2\sqrt{13}}(x - \frac{205}{169}t - x_0)\Big).
\end{equation}
This represents a single bump moving to the right with speed $\frac{205}{169}$ and initial peak at $x = x_0$. The ensemble training selected PINNs with the hyperparameters given in Table \ref{tab:Kawa}. The resulting PINN approximation, together with the exact solution, is plotted in figure \ref{fig:Kawa} and shows that the trained PINN approximates the exact solution with very high accuracy. This is further verified by the extremely low generalization error of $0.1\%$, reported in Table \ref{tab:Kawa}.
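As before, the exact solution \eqref{eq:kdvs} is simple to evaluate for generating test data; a short illustrative Python sketch:
\begin{verbatim}
import numpy as np

def kawahara_single_soliton(x, t, x0=0.0):
    # Exact solution above: amplitude 105/169, speed 205/169.
    z = (x - 205.0 / 169.0 * t - x0) / (2.0 * np.sqrt(13.0))
    return (105.0 / 169.0) / np.cosh(z) ** 4
\end{verbatim}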
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.1}
\footnotesize{
\begin{tabular}{ c c c c c c c c c c}
\toprule
\bfseries &\bfseries $N_{int}$ & \bfseries $N_{sb}$& \bfseries $N_{tb}$ &\bfseries $K-1$ & \bfseries $d$ &\bfseries $\lambda$ &\bfseries $\EuScript{E}_T$&\bfseries $\EuScript{E}_G^r$ \\
\midrule
\midrule
Single Soliton &2048 & 512 & 512 &4&24& 10 &0.000321& 0.101\% \\
\bottomrule
\end{tabular}
\caption{Best performing \textit{Neural Network} configurations for the single soliton test case for the Kawahara equations \eqref{eq:kawa_mod}. Low-discrepancy Sobol points are used for every reported numerical example.}
\label{tab:Kawa}
}
\end{table}
In Table \ref{tab:kawa_single_cp}, we present training times (in seconds) for the PINNs algorithm for the Kawahara equation \eqref{eq:kawa_mod}. We observe from this table that an error of less than $1\%$ is achieved in approximately $6-7$ minutes. Given the fact that the Kawahara equation requires the evaluation of $5$-th order derivatives, it is expected that each training iteration is significantly more expensive than that of the KdV equation. Table \ref{tab:kawa_single_cp} shows that this is indeed the case and partly explains the higher computational cost for the PINN to approximate the Kawahara equation. Nevertheless, the total cost is still considerably smaller than those reported for the finite difference schemes in \cite{koley1,koley2}.
\begin{figure}[h!]
\centering
\includegraphics[width=8cm]{{Images/Kawa_single.png}}
\caption{The exact and PINN solution of single soliton test case of Kawahara equation \eqref{eq:kawa_mod}.}
\label{fig:Kawa}
\end{figure}
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.1}
\footnotesize{
\begin{tabular}{ c c c c }
\toprule
\bfseries max\_iters &\bfseries training time$/s$ &\bfseries $\eps_T$ &\bfseries $\eps_G^r$ \\
\midrule
\midrule
100 & 25 & 8.89e-02 & 9.70e-01 \\
\midrule
500 & 127 & 4.76e-02 & 7.86e-01 \\
\midrule
1000 & 249 & 8.40e-03 & 1.89e-01 \\
\midrule
2000 & 466 & 1.06e-03 & 5.88e-03 \\
\midrule
5000 & 964 & 3.21e-04 & 1.01e-03 \\
\bottomrule
\end{tabular}
\caption{Results of different training iterations for single soliton case of Kawahara equation.}
\label{tab:kawa_single_cp}
}
\end{table}
\section{Camassa-Holm equation}
\label{sec:4}
\subsection{The underlying PDE}
In this section, we consider the following initial-boundary value problem for the one-dimensional Camassa-Holm equation on a compact interval
\begin{equation}
\label{eq:vscl}
\begin{aligned}
u_t - u_{txx}+ 3uu_x + 2 \kappa u_x&= 2 u_x u_{xx} + u u_{xxx}, \quad \forall x\in (0,1),~t \in [0,T], \\
u(x,0) &= u_0(x), \quad \forall x \in (0,1), \\
u(0,t) &= u_{xx}(0,t) = u(1,t) = u_{xx}(1,t)=0, \quad \forall t \in [0,T].
\end{aligned}
\end{equation}
Here, $\kappa$ is a real constant. This equation models the unidirectional propagation
of shallow water waves over a flat bottom, with $u$ representing the fluid velocity.
A key feature of the above equation is that it is completely integrable for all values of $\kappa$. A special case of \eqref{eq:vscl}, corresponding to $\kappa=0$, plays an important role in the modeling of nonlinear dispersive waves in hyperelastic rods \cite{camassa}. Regarding the existence of solutions, we report the following result which is a slight modification of the results of \cite{kwek},
\begin{theorem}
\label{thm:CH}
Let $\mathcal{X}:= \lbrace u \in H^4(0,1): u(0)=u_{xx}(0)=u(1)=u_{xx}(1)=0 \rbrace$. Then for every $u_0 \in \mathcal{X}$, the problem \eqref{eq:vscl} has
a unique solution
$$
u \in C([0,T); \mathcal{X}) \cap C^1([0,T); H^1_0(0,1)),
$$
for some $T>0$. In addition, $u_{xx} \in C^1([0,T); H^1_0(0,1))$, and $u$ depends continuously on $u_0$ in the $H^4$-norm.
\end{theorem}
\subsection{PINNs}
To specify the PINNs algorithm \ref{alg:PINN} in this case, we start by choosing the training set, exactly as in section \ref{sec:train}. The following residuals are chosen,
\begin{itemize}
\item Interior Residual given by,
\begin{equation}
\begin{aligned}
\label{eq:hres11}
\EuScript{R}_{int,\theta}(x,t)& := \partial_t u_{\theta}(x,t) - \partial_{txx} u_{\theta}(x,t) +3 u_{\theta}(x,t) (u_{\theta})_x(x,t) + 2 \kappa (u_{\theta})_x(x,t) \\
& \qquad - 2 (u_{\theta})_{x}(x,t)(u_{\theta})_{xx}(x,t) - (u_{\theta})(x,t) (u_{\theta})_{xxx}(x,t).
\end{aligned}
\end{equation}
Note that the residual is well defined and $\EuScript{R}_{int,\theta} \in C([0,T]\times [0,1])$ for every $\theta \in \Theta$.
\item Spatial boundary Residual given by,
\begin{equation}
\begin{aligned}
\label{eq:hres21}
\EuScript{R}_{sb1,\theta}(0,t) & := u_{\theta}(0,t), \quad \forall t \in (0,T), \\
\EuScript{R}_{sb2,\theta}(1,t) & := u_{\theta}(1,t), \quad \forall t \in (0,T),\\
\EuScript{R}_{sb3,\theta}(0,t) & := (u_{\theta})_{xx}(0,t), \quad \forall t \in (0,T),\\
\EuScript{R}_{sb4,\theta}(1,t) & := (u_{\theta})_{xx}(1,t), \quad \forall t \in (0,T).
\end{aligned}
\end{equation}
Given the fact that the neural network is smooth, this residual is well defined.
\item Temporal boundary Residual given by,
\begin{equation}
\label{eq:hres31}
\EuScript{R}_{tb,\theta}(x):= \left[\bigg(u_\theta(x, 0) - u_0(x)\bigg)^2 + \bigg((u_\theta)_x(x, 0) - (u_0)_x(x)\bigg)^2\right]^{1/2}, \quad \forall x \in (0,1).
\end{equation}
Again this quantity is well-defined and $\EuScript{R}_{tb,\theta} \in C^2((0,1))$ as both the initial data and the neural network are smooth.
\end{itemize}
These lead to the following loss function for training the PINN for approximating the Camassa-Holm equation \eqref{eq:vscl},
\begin{equation}
\label{eq:blf}
J(\theta):= \sum\limits_{n=1}^{N_{tb}} w^{tb}_n|\EuScript{R}_{tb,\theta}(x_n)|^2 + \sum\limits_{n=1}^{N_{sb}} \sum\limits_{i=1}^{4} w^{sb}_n|\EuScript{R}_{sbi,\theta}(t_n)|^2 + \lambda \sum\limits_{n=1}^{N_{int}} w^{int}_n|\EuScript{R}_{int,\theta}(x_n,t_n)|^2 .
\end{equation}
Here $w^{tb}_n$ are the $N_{tb}$ quadrature weights corresponding to the temporal boundary training points $\mathcal{S}_{tb}$, $w^{sb}_n$ are the $N_{sb}$ quadrature weights corresponding to the spatial boundary training points $\mathcal{S}_{sb}$ and $w^{int}_n$ are the $N_{int}$ quadrature weights corresponding to the interior training points $\mathcal{S}_{int}$. Furthermore, $\lambda$ is a hyperparameter for balancing the residuals, on account of the PDE and the initial and boundary data, respectively.
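In an implementation, the mixed third-order derivative $\partial_{txx}u_\theta$ and the other derivatives in the interior residual \eqref{eq:hres11} are formed by repeated automatic differentiation. The following PyTorch sketch is a minimal illustration of how \eqref{eq:hres11} can be assembled; it assumes \texttt{u\_net} is a callable mapping tensors $(x,t)$ to $u_\theta(x,t)$, and it is not the authors' implementation.
\begin{verbatim}
import torch

def ch_interior_residual(u_net, x, t, kappa):
    # Assemble the Camassa-Holm interior residual by repeated autograd.
    x.requires_grad_(True); t.requires_grad_(True)
    u = u_net(x, t)
    g = lambda f, v: torch.autograd.grad(f, v, torch.ones_like(f),
                                         create_graph=True)[0]
    u_t, u_x = g(u, t), g(u, x)
    u_xx = g(u_x, x)
    u_xxx = g(u_xx, x)
    u_txx = g(g(u_t, x), x)
    return (u_t - u_txx + 3.0 * u * u_x + 2.0 * kappa * u_x
            - 2.0 * u_x * u_xx - u * u_xxx)
\end{verbatim}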
\subsection{Bounds on the Generalization Error.}
As in the case of the KdV-Kawahara equation, we will leverage the stability of classical solutions of the Camassa-Holm equation \eqref{eq:vscl} in order to bound the PINN generalization error,
\begin{equation}
\label{eq:begen}
\EuScript{E}_{G}:= \left(\int\limits_0^T \int\limits_0^1 |u(x,t) - u^{\ast}(x,t)|^2 dx dt \right)^{\frac{1}{2}},
\end{equation}
in terms of the \emph{training error},
\begin{equation}
\label{eq:betrain}
\begin{aligned}
\EuScript{E}^2_{T}&:=
\lambda\underbrace{\sum\limits_{n=1}^{N_{int}} w^{int}_n|\EuScript{R}_{int,\theta^{\ast}}(x_n,t_n)|^2}_{(\EuScript{E}_T^{int})^2} +\underbrace{\sum\limits_{n=1}^{N_{tb}} w^{tb}_n|\EuScript{R}_{tb,\theta^{\ast}}(x_n)|^2}_{(\EuScript{E}_T^{tb})^2} + \underbrace{\sum\limits_{n=1}^{N_{sb}} \sum\limits_{i=1}^{4} w^{sb}_n|\EuScript{R}_{sbi,\theta^{\ast}}(t_n)|^2}_{(\EuScript{E}_T^{sb})^2},
\end{aligned}
\end{equation}
readily computed from the training loss \eqref{eq:blf} \emph{a posteriori}. We have the following estimate,
\begin{theorem}
\label{thm:burg}
Let $\kappa > 0$ and let $u \in C^3((0,T) \times (0,1))$ be the unique classical solution of the Camassa-Holm equation \eqref{eq:vscl}. Let $u^{\ast} = u_{\theta^{\ast}}$ be the PINN, generated by algorithm \ref{alg:PINN}, with loss function \eqref{eq:blf}. Then, the generalization error \eqref{eq:begen} is bounded by,
\begin{equation}
\label{001}
\begin{aligned}
\epsilon_G &\leq C_1\big(\epsilon_T^{tb} + \epsilon_T^{int} + C_2(\epsilon_T^{sb}) + C_3(\epsilon_T^{sb})^{1/2} \\
&+ (C_{quad}^{tb})^{1/2} N_{tb}^{-\alpha_{tb} / 2} + (C_{quad}^{int})^{1/2} N_{int}^{-\alpha_{int} / 2} + C_2(C_{quad}^{sb})^{1/2} N_{sb}^{-\alpha_{sb} / 2} + C_3(C_{quad}^{sb})^{1/4} N_{sb}^{-\alpha_{sb} / 4}\big),
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
C_1 &= \sqrt{T + 2C_4T^2e^{2C_4T}}, \\
C_2 &= \sqrt{2(|\kappa| + \Vert u^* \Vert_{C_t^0C_x^2} + \Vert u \Vert_{C_t^0C_x^2})}, \\
C_3 &= 2T^{1/4}\sqrt{2\Vert u^* \Vert_{C_t^1C_x^1} + 2\Vert u \Vert_{C_t^1C_x^1} + 2\Vert u \Vert_{C_t^0C_x^1}(\Vert u^* \Vert_{C_t^0C_x^1}+ \Vert u \Vert_{C_t^0C_x^1})}, \\
C_4 &= \frac{1}{2} + 3\Vert u^* \Vert_{C_t^0C_x^1} + \frac{3}{2}\Vert u \Vert_{C_t^0C_x^3},
\end{aligned}
\end{equation}
and $C^{tb}_{quad} = C^{tb}_{quad}\left(\|\EuScript{R}_{tb,\theta^{\ast}}\|_{C^2}\right)$, $C^{int}_{quad} = C^{int}_{quad}\left(\|\EuScript{R}_{int,\theta^{\ast}}\|_{C^0}\right)$, and $C^{sb}_{quad} = C^{sb}_{quad}\left(\|\EuScript{R}_{sb,\theta^{\ast}}\|_{C^{1}}\right)$ are the constants that appear in the bounds on the quadrature errors \eqref{eq:hquad1}-\eqref{eq:hquad3}.
\end{theorem}
\begin{proof}
Let $\hat{u} = u^{\ast}-u$ be the error with the PINN. From the PDE \eqref{eq:vscl} and the definition of the interior residual \eqref{eq:hres11}, we have,
\begin{align}
\label{eq:CH_hat_eq}
\hat{u}_t - \hat{u}_{txx} + 2 \kappa \hat{u}_x + 3 (u^{\ast}u^{\ast}_x -uu_x)= 2 u^{\ast}_{x}u^{\ast}_{xx} - 2u_xu_{xx} + u^{\ast}u^{\ast}_{xxx} -u u_{xxx} + \EuScript{R}_{int}.
\end{align}
Observe also that
\begin{equation}
\begin{aligned}
\label{eq:CH_id}
u^{\ast} u^{\ast}_x - uu_x = \hat{u} \hat{u}_x + u \hat{u}_x + \hat{u} u_x; & \quad u^{\ast}_x u^{\ast}_{xx} - u_xu_{xx} = \hat{u}_x \hat{u}_{xx} + u_x \hat{u}_{xx} + \hat{u}_x u_{xx}, \\
u^{\ast} u^{\ast}_{xxx} - uu_{xxx} &= \hat{u} \hat{u}_{xxx} + u \hat{u}_{xxx} + \hat{u} u_{xxx}.
\end{aligned}
\end{equation}
Multiplying both sides of \eqref{eq:CH_hat_eq} with $\hat{u}$, integrating by part and using the identities \eqref{eq:CH_id} we arrive at,
\begin{equation}
\frac{1}{2}\frac{d}{dt}\int_0^1 (\hat{u}^2 + (\hat{u}_x)^2) \,dx + \left.\kappa\hat{u}^2\right\vert_0^1 - \left. \hat{u}\hat{u}_{tx} \right\vert_0^1 = -3\mathcal{A} + 2\mathcal{B} + \mathcal{C} + \int_0^1 \hat{u}\EuScript{R}_{int}\,dx,
\label{eq:CH_hat_eq1}
\end{equation}
where
\begin{equation}
\begin{aligned}
\mathcal{A} &:= \int_0^1 \hat{u} (\hat{u}\hat{u}_x + u\hat{u}_x + \hat{u}u_x)\,dx
= \int_0^1 (\hat{u}_x + u_x) \hat{u}^2\,dx + \int_0^1 \hat{u}u\hat{u}_x\,dx \\
&= \int_0^1 (\hat{u}_x + u_x) \hat{u}^2\,dx - \frac{1}{2}\int_0^1 u_x\hat{u}^2\,dx + \left.\frac{1}{2}u\hat{u}^2\right\vert_0^1
= \int_0^1 (\hat{u}_x + \frac{1}{2}u_x) \hat{u}^2\,dx
= \int_0^1 (u^*_x - \frac{1}{2}u_x) \hat{u}^2\,dx.
\end{aligned}
\label{eq:CH_A}
\end{equation}
We estimate $\mathcal{B}$ as follows:
\begin{equation}
\begin{aligned}
\mathcal{B} &:= \int_0^1 \hat{u} (\hat{u}_x\hat{u}_{xx} + u_x\hat{u}_{xx} + \hat{u}_xu_{xx})\,dx
= -\frac{1}{2}\int_0^1 \hat{u}_{xxx}\hat{u}^2\,dx + \left.\frac{1}{2}\hat{u}^2\hat{u}_{xx}\right\vert_0^1 \\
&- \int_0^1 u_x\hat{u}_x^2\,dx + \frac{1}{2}\int_0^1 u_{xxx}\hat{u}^2\,dx - \left.\frac{1}{2}u_{xx}\hat{u}^2\right\vert_0^1 + \left.\hat{u}u_x\hat{u}_x\right\vert_0^1
-\frac{1}{2}\int_0^1 u_{xxx}\hat{u}^2\,dx + \left.\frac{1}{2}\hat{u}^2u_{xx}\right\vert_0^1 \\
&= -\frac{1}{2}\int_0^1\hat{u}_{xxx}\hat{u}^2\,dx - \int_0^1 u_x\hat{u}_x^2\,dx + \left.\frac{1}{2}\hat{u}^2\hat{u}_{xx}\right\vert_0^1 + \left.\hat{u}u_x\hat{u}_x\right\vert_0^1
\end{aligned}
\label{eq:CH_B}
\end{equation}
On the other hand, $\mathcal{C}$ is given by
\begin{equation}
\begin{aligned}
\mathcal{C} &:= \int_0^1 \hat{u} (\hat{u}\hat{u}_{xxx} + u\hat{u}_{xxx} + \hat{u}u_{xxx})\,dx
= \int_0^1 (u_{xxx} + \hat{u}_{xxx})\hat{u}^2\,dx + \int_0^1 \hat{u}u\hat{u}_{xxx}\,dx \\
&= \int_0^1 (u_{xxx} + \hat{u}_{xxx})\hat{u}^2\,dx +\frac{3}{2}\int_0^1 u_x\hat{u}_x^2\,dx - \frac{1}{2}\int_0^1 u_{xxx}\hat{u}^2\,dx - \left.\hat{u}u_x\hat{u}_x\right\vert_0^1 \\
&= \int_0^1 (u^*_{xxx} - \frac{1}{2}u_{xxx})\hat{u}^2\,dx +\frac{3}{2}\int_0^1 u_x\hat{u}_x^2\,dx - \left.\hat{u}u_x\hat{u}_x\right\vert_0^1. \\
\end{aligned}
\label{eq:CH_C}
\end{equation}
The boundary term in \eqref{eq:CH_hat_eq1} can be bounded as
\begin{equation}
\Big\vert \left.\hat{u}\hat{u}_{tx}\right\vert_0^1 \Big\vert \leq (\Vert u^* \Vert_{C_t^1C_x^1} + \Vert u \Vert_{C_t^1C_x^1})(|\EuScript{R}_{sb1}| + | \EuScript{R}_{sb2} |).
\label{eq:CH_bt}
\end{equation}
From \eqref{eq:CH_hat_eq1}-\eqref{eq:CH_bt}, we get
\begin{equation}
\begin{aligned}
\frac{1}{2}\frac{d}{dt}\int_0^1 (\hat{u}^2 + (\hat{u}_x)^2) \,dx &= -\left.\kappa\hat{u}^2\right\vert_0^1 + \left.\hat{u}\hat{u}_{tx} \right\vert_0^1 - 3\mathcal{A} + 2\mathcal{B} + \mathcal{C} + \int_0^1 \hat{u}\EuScript{R}_{int}\,dx \\
&= \int_0^1 (-3u_x^* + \frac{3}{2}u_x + \frac{1}{2}u_{xxx})\hat{u}^2 \,dx - \frac{1}{2}\int_0^1u_x\hat{u}_x^2 \,dx + \int_0^1 \hat{u}\EuScript{R}_{int}\,dx \\
&- \left.\kappa\hat{u}^2\right\vert_0^1 + \left.\hat{u}\hat{u}_{tx} \right\vert_0^1 + \left.\hat{u}^2\hat{u}_{xx}\right\vert_0^1 + \left.\hat{u}u_x\hat{u}_x\right\vert_0^1 \\
&\leq (\frac{1}{2} + 3\Vert u^* \Vert_{C_t^0C_x^1} + \frac{3}{2}\Vert u \Vert_{C_t^0C_x^3})\int_0^1 \hat{u}^2 \,dx + \frac{1}{2}\Vert u \Vert_{C_t^0C_x^1}\int_0^1 \hat{u}_x^2 \,dx \\
&+ (|\kappa| + \Vert u^* \Vert_{C_t^0C_x^2} + \Vert u \Vert_{C_t^0C_x^2})(\EuScript{R}_{sb1}^2 + \EuScript{R}_{sb2}^2) + \frac{1}{2}\int_0^1 \EuScript{R}_{int}^2 \,dx \\
&+ \big(\Vert u^* \Vert_{C_t^1C_x^1} + \Vert u \Vert_{C_t^1C_x^1} + \Vert u \Vert_{C_t^0C_x^1}(\Vert u^* \Vert_{C_t^0C_x^1}+ \Vert u \Vert_{C_t^0C_x^1})\big)(|\EuScript{R}_{sb1}| + |\EuScript{R}_{sb2}|) \\
&=: C_1\sum\limits_{i=1}^4|\EuScript{R}_{sbi}| + C_2\sum\limits_{i=1}^4\EuScript{R}_{sb, i}^2 + \frac{1}{2}\int_0^1 \EuScript{R}_{int}^2 \,dx + C_3\int_0^1 (\hat{u}^2 + \hat{u}_x^2) \,dx.
\end{aligned}
\label{eq:CH_hat_eq2}
\end{equation}
Then integrating the above inequality over $[0,\bar{T}]$ for any $\bar{T} \leq T$, we obtain
\begin{equation}
\begin{aligned}
&\int_0^1 (\hat{u}^2 + \hat{u}_x^2)(x, \bar{T}) \,dx
\leq \int_0^1 \EuScript{R}_{tb}^2\,dx \\
&+ 2C_1T^{1/2}\sum\limits_{i=1}^4(\int_0^T\EuScript{R}_{sbi}^2\,dt)^{1/2} + 2C_2\sum\limits_{i=1}^4(\int_0^T\EuScript{R}_{sbi}^2\,dt)
+ \int_0^T\int_0^1 \EuScript{R}_{int}^2 \,dxdt + 2C_3\int_0^T\int_0^1 (\hat{u}^2 + \hat{u}_x^2)\,dxdt \\
&\leq (1 + 2C_3Te^{2C_3T}) \big(\int_0^1 \EuScript{R}_{tb}^2\,dx + 8C_1T^{1/2}(\sum\limits_{i=1}^4\int_0^T\EuScript{R}_{sbi}^2\,dt)^{1/2}
+ 2C_2\sum\limits_{i=1}^4(\int_0^T\EuScript{R}_{sbi}^2\,dt) + \int_0^T\int_0^1 \EuScript{R}_{int}^2 \,dxdt \big).
\end{aligned}
\label{eq:CH_hat_eq3}
\end{equation}
We can now exploit Cauchy-Schwarz and Gronwall's inequalities and integrate \eqref{eq:CH_hat_eq3} over $[0, T]$ with respect to $\bar{T}$ in order to obtain
\begin{equation}
\begin{aligned}
&\epsilon_G^2 := \int_0^T\int_0^1 \hat{u}(x, \bar{T})^2\,dxd\bar{T}
\leq \int_0^T\int_0^1 (\hat{u}^2 + \hat{u}_x^2)(x, \bar{T}) \,dxd\bar{T} \\
&\leq (T + 2C_3T^2e^{2C_3T}) \big(\int_0^1 \EuScript{R}_{tb}^2\,dx + 8C_1T^{1/2}(\sum\limits_{i=1}^4\int_0^T\EuScript{R}_{sbi}^2\,dt)^{1/2}
+ 2C_2\sum\limits_{i=1}^4(\int_0^T\EuScript{R}_{sbi}^2\,dt) + \int_0^T\int_0^1 \EuScript{R}_{int}^2 \,dxdt \big),
\end{aligned}
\end{equation}
with
\begin{equation}
\begin{aligned}
C_1 &= \Vert u^* \Vert_{C_t^1C_x^1} + \Vert u \Vert_{C_t^1C_x^1} + \Vert u \Vert_{C_t^0C_x^1}(\Vert u^* \Vert_{C_t^0C_x^1}+ \Vert u \Vert_{C_t^0C_x^1}), \\
C_2 &= |\kappa| + \Vert u^* \Vert_{C_t^0C_x^2} + \Vert u \Vert_{C_t^0C_x^2}, \quad
C_3 = \frac{1}{2} + 3\Vert u^* \Vert_{C_t^0C_x^1} + \frac{3}{2}\Vert u \Vert_{C_t^0C_x^3}.
\end{aligned}
\end{equation}
The statement of the theorem then follows by using the estimates \eqref{eq:hquad1}, \eqref{eq:hquad2}, \eqref{eq:hquad3}.
\end{proof}
\subsection{Numerical Experiments}
We set $\kappa = k^2$ in the Camassa-Holm equation \eqref{eq:vscl} and follow \cite{peakon_lim1, peakon_lim2, peakon_lim3} to consider the following exact solution for the \emph{single soliton},
\begin{equation}
\begin{aligned}
&u(\theta) = \frac{2kcp^2}{(1+k^2p^2) + (1-k^2p^2)\cosh\theta}, \\
&\Theta = p(x-\tilde{c}t + x_0), \\
&\Theta = \frac{\theta}{k} + p\ln\frac{(1+kp) + (1-kp)e^{\theta}}{(1-kp) + (1+kp)e^{\theta}},
\end{aligned}
\end{equation}
where $\tilde{c} = \frac{c}{k} = \frac{2k^2}{1-k^2p^2}$ and $p$ is an additional parameter. To obtain the exact solution, we need to compute the inverse of $\Theta(\theta)$. $\Theta(\theta)$ is invertible if and only if $0 < kp < 1$ which is an additional constraint when choosing $p$. Note from the above formula that the soliton moves to the right with speed $\tilde{c}$, while preserving its shape during the evolution.
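Since no closed form for the inverse is available, the inversion of $\Theta(\theta)$ can be carried out numerically with a bracketing root finder. The sketch below is illustrative; the bracket $[-50, 50]$ is a hypothetical choice that must be wide enough for the points of interest.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def theta_from_Theta(Theta_val, k, p):
    # Invert the monotone map theta -> Theta(theta); valid for 0 < kp < 1.
    def Theta(th):
        return th / k + p * np.log(
            ((1 + k * p) + (1 - k * p) * np.exp(th))
            / ((1 - k * p) + (1 + k * p) * np.exp(th)))
    return brentq(lambda th: Theta(th) - Theta_val, -50.0, 50.0)
\end{verbatim}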
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.1}
\footnotesize{
\begin{tabular}{ c c c c c c c c c c}
\toprule
\bfseries &\bfseries $N_{int}$ & \bfseries $N_{sb}$& \bfseries $N_{tb}$ &\bfseries $K-1$ & \bfseries $d$ &\bfseries $\lambda$ &\bfseries $\EuScript{E}_T$&\bfseries $\EuScript{E}_G^r$ \\
\midrule
\midrule
Single Soliton &16384 & 4096 & 4096 &4&20& 1 &3.70e-06& 0.00191\% \\
\midrule
Double Soliton &16384 & 4096 & 4096 &8&24& 0.1 &0.00127& 0.186\% \\
\bottomrule
\end{tabular}
\caption{Best performing \textit{Neural Network} configurations for the single soliton and double soliton problem. Low-discrepancy Sobol points are used for every reported numerical example.}
\label{tab:CH}
}
\end{table}
We apply the PINNs algorithm \ref{alg:PINN} to approximate the single soliton, with parameters $k = 0.6, p = 1$. The hyperparameters, corresponding to the smallest training error during ensemble training, are reported in Table \ref{tab:CH}. In figure \ref{fig:CH} (left), we plot the exact soliton and its PINN approximation, at the initial time and at a later time, and observe from the figure that the trained PINN can approximate the soliton to very high accuracy. This is further validated by the extremely low generalization error, reported in Table \ref{tab:CH}. Moreover, from Table \ref{tab:CH_single_cp}, we observe that the PINN approximating this single soliton trains very fast: an error of less than $1\%$ already results from less than $500$ iterations, corresponding to approximately $2$ minutes of training time.
\begin{figure}[h!]
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{{Images/CH_single.png}}
\caption{Single soliton, $k=0.6, p=1$}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{{Images/CH_double.png}}
\caption{Double soliton, $k = 0.6, p_1 = 1.5, p_2 = 1$}
\end{subfigure}
\caption{The exact and PINN solutions of single and double soliton test case of generalized CH equation.}
\label{fig:CH}
\end{figure}
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.1}
\footnotesize{
\begin{tabular}{ c c c c }
\toprule
\bfseries max\_iters &\bfseries training time$/s$ &\bfseries $\eps_T$ &\bfseries $\eps_G^r$ \\
\midrule
\midrule
100 & 36 & 8.08e-03 & 2.81e-01 \\
\midrule
500 & 161 & 4.71e-04 & 5.93e-03 \\
\midrule
1000 & 284 & 1.61e-04 & 1.14e-03 \\
\midrule
2000 & 560 & 4.96e-05 & 3.75e-04 \\
\midrule
5000 & 1457 & 9.77e-06 & 9.31e-05 \\
\midrule
10000 & 1667 & 2.83e-06 & 1.94e-05 \\
\bottomrule
\end{tabular}
\caption{Results of different training iterations for single soliton case of CH equation.}
\label{tab:CH_single_cp}
}
\end{table}
Next, we again follow \cite{peakon_lim2} to consider additional parameters $p_{1,2}$ and define
\begin{equation}
c_i = \frac{2k^3}{1-k^2p_i^2}, \quad w_i = -p_ic_i, \quad i = 1, 2
\end{equation}
and
\begin{equation}
A_{12} = \frac{(p_1-p_2)^2}{(p_1+p_2)^2}.
\end{equation}
For $i = 1, 2$, we further define
\begin{equation}
\begin{aligned}
a_i = 1 + kp_i, \quad b_i = 1 - kp_i,
\end{aligned}
\end{equation}
and as before, we define $\theta_i$ w.r.t. $y$ as
\begin{equation}
\theta_i = p_i(y-c_it+\alpha_i), \quad i = 1, 2
\end{equation}
and
\begin{equation}
\begin{aligned}
v_{12} &= \frac{4k^3(p_1-p_2)^2}{(1 - k^2p_1^2)(1 - k^2p_2^2)}, \quad b_{12} = \frac{8k^6(p_1-p_2)^2(1-k^4p_1^2p_2^2)}{(1-k^2p_1^2)^2(1-k^2p_2^2)^2}.
\end{aligned}
\end{equation}
Then, the exact double soliton solution w.r.t. $y$ is given by
\begin{equation}
u(y, t) = k^2 + \frac{2}{k}\frac{w_1^2e^{\theta_1} + w_2^2e^{\theta_2} + b_{12}e^{\theta_1+\theta_2} + A_{12}(w_1^2e^{\theta_1+2\theta_2} + w_2^2e^{2\theta_1+\theta_2})}{rf^2},
\end{equation}
where
\begin{equation}
\begin{aligned}
f(y, t) &= 1 + e^{\theta_1} + e^{\theta_2} + A_{12}e^{\theta_1 + \theta_2} \\
r(y, t) &= k + \frac{2}{f^2}(c_1p_1^2e^{\theta_1} + c_2p_2^2e^{\theta_2} + v_{12}e^{\theta_1+\theta_2} + A_{12}(c_1p_1^2e^{\theta_1 + 2\theta_2} + c_2p_2^2e^{2\theta_1 + \theta_2})).
\end{aligned}
\end{equation}
Finally we have the following relation between $x$ and $y$
\begin{equation}
x(y, t) = \frac{y}{k} + \ln\frac{a_1a_2 + b_1a_2e^{\theta_1} + b_2a_1e^{\theta_2} + b_1b_2A_{12}e^{\theta_1 + \theta_2}}{b_1b_2 + a_1b_2e^{\theta_1} + a_2b_1e^{\theta_2} + a_1a_2A_{12}e^{\theta_1 + \theta_2}} + k^2t + \alpha,
\end{equation}
where $\alpha$ is the phase parameter. To obtain the exact solution, we need to compute the inverse of $x(y, t)$ w.r.t. $y$ at the training points. $x(y, t)$ is invertible w.r.t. $y$ if and only if $0 < kp_i < 1, i = 1, 2$ which, again, is an additional constraint when choosing $p_1, p_2$.
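The same root-finding pattern as for the single soliton can be used here; in the illustrative sketch below, \texttt{x\_of\_y} denotes a (user-supplied) implementation of the map $x(y,t)$ above, and the bracket is hypothetical.
\begin{verbatim}
from scipy.optimize import brentq

def y_from_x(x_target, t, x_of_y, y_min=-100.0, y_max=100.0):
    # Invert the map y -> x(y, t) at one training point; invertibility
    # requires 0 < k p_i < 1, as noted above.
    return brentq(lambda y: x_of_y(y, t) - x_target, y_min, y_max)
\end{verbatim}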
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.1}
\footnotesize{
\begin{tabular}{ c c c c }
\toprule
\bfseries max\_iters &\bfseries training time$/s$ &\bfseries $\eps_T$ &\bfseries $\eps_G^r$ \\
\midrule
\midrule
100 & 83 & 3.63e-02 & 7.19e-01 \\
\midrule
500 & 386 & 8.37e-03 & 1.68e-01 \\
\midrule
1000 & 762 & 5.52e-03 & 6.99e-02 \\
\midrule
2000 & 1508 & 3.10e-03 & 3.17e-02 \\
\midrule
5000 & 4083 & 8.71e-04 & 5.29e-03 \\
\midrule
10000 & 5747 & 4.09e-04 & 1.84e-03 \\
\bottomrule
\end{tabular}
\caption{Results of different training iterations for double soliton case of CH equation.}
\label{tab:CH_double_cp}
}
\end{table}
We set $k=0.6, p_1=1.5, p_2=1$ in the above formula and apply the PINNs algorithm to compute the double soliton for the Camassa-Holm equation. The hyperparameters, corresponding to the smallest training error during ensemble training, are reported in Table \ref{tab:CH}. In figure \ref{fig:CH} (right), we plot the exact soliton and its PINN approximation, at the initial time and at a later time, and observe from the figure that the trained PINN can approximate the soliton to high accuracy. This is further validated by the very low generalization error, reported in Table \ref{tab:CH}. In particular, the ability of the PINN to resolve not just the sharp waves but also the dynamic wave interaction is noteworthy. The error as a function of the training iterations (and hence the computational cost) is shown in Table \ref{tab:CH_double_cp}, and we observe that significantly more training time is necessary to resolve the double soliton than the single soliton. For instance, one needs approximately $25$ minutes of training time for obtaining an error of $3\%$. This difference in the convergence of the training iterations between the single soliton and double soliton cases is nicely explained by the observations in Figure \ref{fig:CH_double_cp}, where we plot the PINN solutions at a sequence of training iterations. We observe that for the single soliton, the sharp peak is very quickly approximated during training. Similarly, the sharp peak corresponding to the faster soliton is very quickly approximated in the double soliton case. On the other hand, the complicated wave pattern, with a crest and a trough in the wake of the fast soliton, takes several more training iterations to resolve.
\begin{figure}
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{Images/cp_fig1/CH_single_lim1_cp_f.png}
\caption{Single soliton}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{Images/cp_fig1/CH_double_lim1_cp_f.png}
\caption{Double soliton}
\end{subfigure}
\caption{PINN solutions at different training iterations, at the final time.}
\label{fig:CH_double_cp}
\end{figure}
\begin{remark}
Taking the limit $\kappa \rightarrow 0$ in the formulas for the single soliton and the double soliton for the Camassa-Holm equation, results in the well-known single peakon and double peakon solutions of the Camassa-Holm equations \cite{Hol2}. However, peakons have a singularity in their derivatives and are at most in $H^1$. Thus, the stability result as well as the bound on the generalization error no longer hold, as they require $C^3$ regularity for the solutions. Consequently, we cannot expect to compute peakons with the current version of the PINNs algorithm.
\end{remark}
\section{Benjamin-Ono Equation}
\label{sec:5}
\subsection{The underlying PDE}
As a final example of nonlinear dispersive PDEs, we consider the following Benjamin-Ono (BO) equation
\begin{equation}
\label{eq:ie1}
\begin{aligned}
u_t + uu_x + Hu_{xx} & =0, \quad x \in \mathbb{R}, \quad t>0, \\
u(x,0)&=u_0(x), \quad x \in \mathbb{R}, \\
u(x,t) &=u(x+1,t), \quad x \in \mathbb{R}, \quad t>0,
\end{aligned}
\end{equation}
with $H$ denoting the \emph{Hilbert transform}
defined by the principal value integral
\begin{equation*}
H u(x) := \mathrm{P.V.} \, \frac{1}{\pi} \int_{\mathbb{R}} \frac{u(x-y)}{y} \,dy.
\end{equation*}
The BO equation was first deduced by Benjamin \cite{benjamin} and Ono \cite{ono} as an approximate model for long-crested unidirectional waves at the interface of a two-layer system of incompressible inviscid fluids, one being infinitely deep. Later, it was shown to be a completely integrable system. In the periodic setting, Molinet \cite{molinet3a} proved well-posedness in $H^s (\mathbb{T})$ for $s \ge0$. We recall the following well-posedness result for the classical solutions of the BO equation,
\begin{theorem}
\label{0011}
For any $s>5/3$, let $u_0 \in H^{s}(0,1)$. Then there exists a global smooth solution to \eqref{eq:ie1} such that
$$
u \in C(0,T; H^{s}(0,1)), \quad u_t \in C^1(0,T; H^{s-2}(0,1)).
$$
\end{theorem}
Note that the above result was also used by Kenig, Ponce and Vega \cite{kenig} to prove uniqueness properties of the BO equation. Moreover, the above result ensures that the solutions satisfy the equation \eqref{eq:ie1} pointwise for sufficiently smooth initial data.
\subsection{PINNs}
To specify the PINNs algorithm for the BO equation \eqref{eq:ie1}, we start by choosing the training set as in section \ref{sec:train}. We define the residual $\EuScript{R}$ in algorithm \ref{alg:PINN}, consisting of the following parts,
\begin{itemize}
\item \emph{Interior residual} given by,
\begin{equation}
\label{eq:ires11}
\EuScript{R}_{int,\theta}(x,t):= (u_{\theta})_t(x,t) + u_{\theta}(x,t) (u_{\theta})_{x}(x,t) + H (u_{\theta})_{xx}(x,t), \quad (x,t) \in (0,1) \times (0,T),
\end{equation}
\item \emph{Spatial boundary Residual} given by,
\begin{equation}
\label{eq:ires31}
\EuScript{R}_{sb,\theta}(x,t):= u_{\theta}(x,t)- u_{\theta}(x+1,t), \quad \forall x \in \mathbb{R}, ~ t \in (0,T].
\end{equation}
\item \emph{Temporal boundary Residual} given by,
\begin{equation}
\label{eq:ires41}
\EuScript{R}_{tb,\theta}(x):= u_{\theta}(x,0) - u_0(x), \quad \forall x \in (0,1).
\end{equation}
\end{itemize}
Next, we consider the following loss function for training PINNs to approximate the BO equation \eqref{eq:ie1},
\begin{equation}
\label{eq:ilf1}
J(\theta):= \sum\limits_{n=1}^{N_{tb}} w^{tb}_n|\EuScript{R}_{tb,\theta}(x_n)|^2 + \sum\limits_{n=1}^{N_{sb}} w^{sb}_n|\EuScript{R}_{sb,\theta}(x_n,t_n)|^2 + \lambda \sum\limits_{n=1}^{N_{int}} w^{int}_n|\EuScript{R}_{int,\theta}(x_n,t_n)|^2.
\end{equation}
Here the residuals are defined by \eqref{eq:ires11}-\eqref{eq:ires41}. $w^{tb}_n$ are the $N_{tb}$ quadrature weights corresponding to the temporal boundary training points $\mathcal{S}_{tb}$, $w^{sb}_n$ are the $N_{sb}$ quadrature weights corresponding to the spatial boundary training points $\mathcal{S}_{sb}$ and $w^{int}_n$ are the $N_{int}$ quadrature weights corresponding to the interior training points $\mathcal{S}_{int}$. Furthermore, $\lambda$ is a hyperparameter for balancing the residuals, on account of the PDE and the initial and boundary data, respectively.
\subsection{Estimate on the generalization error.}
We denote the PINN, obtained by the algorithm \ref{alg:PINN}, for approximating the BO equation, as $u^{\ast}= u_{\theta^{\ast}}$, with $\theta^{\ast}$ being an (approximate, local) minimum of the loss function \eqref{eq:lf2},\eqref{eq:ilf1}. We consider the following generalization error,
\begin{equation}
\label{eq:iegen1}
\EuScript{E}_{G}:= \left(\int\limits_0^T \int\limits_0^1 |u(x,t) - u^{\ast}(x,t)|^2 dx dt \right)^{\frac{1}{2}}.
\end{equation}
We will bound the generalization error in terms of the following \emph{training errors},
\begin{equation}
\label{eq:ietrain1}
\EuScript{E}_T^2:= \underbrace{\sum\limits_{n=1}^{N_{tb}} w^{tb}_n|\EuScript{R}_{tb,\theta^{\ast}}(x_n)|^2}_{\left(\EuScript{E}_T^{tb}\right)^2} + \underbrace{\sum\limits_{n=1}^{N_{sb}} w^{sb}_n|\EuScript{R}_{sb,\theta^{\ast}}(x_n,t_n)|^2}_{\left(\EuScript{E}_T^{sb}\right)^2} +\lambda\underbrace{\sum\limits_{n=1}^{N_{int}} w^{int}_n|\EuScript{R}_{int,\theta^{\ast}}(x_n,t_n)|^2}_{\left(\EuScript{E}_T^{int}\right)^2}.
\end{equation}
As in the previous sections, the training errors can be readily computed \emph{a posteriori} from the loss function \eqref{eq:ilf1}.
We have the following bound on the generalization error in terms of the training error,
\begin{theorem}
\label{thm:euler1}
Let $u \in C^3([0,1] \times [0,T])$ be the unique classical solution of Benjamin-Ono equation \eqref{eq:ie1}. Let $u^{\ast} = u_{\theta^{\ast}}$ be the PINN, generated by algorithm \ref{alg:PINN}, with loss function \eqref{eq:ilf1}. Then, the generalization error \eqref{eq:iegen1} is bounded by,
\begin{equation}
\label{0001}
\begin{aligned}
\epsilon_G &\leq C_1\big(\epsilon_T^{tb} + \epsilon_T^{int} + C_2(\epsilon_T^{sb})^{1/2} \\
&+ (C_{quad}^{tb})^{1/2} N_{tb}^{-\alpha_{tb} / 2} + (C_{quad}^{int})^{1/2} N_{int}^{-\alpha_{int} / 2} + C_2(C_{quad}^{sb})^{1/4} N_{sb}^{-\alpha_{sb} / 4}\big),
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
C_1 &= \sqrt{T + 2C_3T^2e^{2C_3T}}, \\
C_2 &= T^{1/4}\sqrt{2(\Vert u^* \Vert_{C_t^0C_x^2} + \Vert u \Vert_{C_t^0C_x^2}) + 2\Vert u \Vert_{C_t^0C_x^0}(\Vert u \Vert_{C_t^0C_x^0} + \Vert u^* \Vert_{C_t^0C_x^0})}, \\
C_3 &= \frac{1}{2} + \Vert u^* \Vert_{C_t^0C_x^1} + \frac{1}{2}\Vert u \Vert_{C_t^0C_x^1}, \\
\end{aligned}
\end{equation}
and $C^{tb}_{quad} = C^{tb}_{quad}\left(\|\EuScript{R}_{tb,\theta^{\ast}}\|_{C^3}\right)$, $C^{int}_{quad} = C^{int}_{quad}\left(\|\EuScript{R}_{int,\theta^{\ast}}\|_{C^{1}}\right)$, and $C^{sb}_{quad} = C^{sb}_{quad}\left(\|\EuScript{R}_{sb,\theta^{\ast}}\|_{C^3}\right)$ are the constants associated with the quadrature errors \eqref{eq:hquad1}-\eqref{eq:hquad3}.
\end{theorem}
\begin{proof}
We will drop the explicit dependence of all quantities on the parameters $\theta^{\ast}$ for notational convenience. We denote the difference between the underlying solution $u$ of \eqref{eq:ie1} and the PINN $u^{\ast}$ as $\hat{u} = u^{\ast} - u$. Using the PDE \eqref{eq:ie1} and the definitions of the residuals \eqref{eq:ires11}-\eqref{eq:ires41}, a straightforward calculation yields the following PDE for $\hat{u}$,
\begin{equation}
\label{eq:iehat1}
\begin{aligned}
\hat{u}_t + H\hat{u}_{xx} +u^{\ast} u^{\ast}_x- u u_x&= \EuScript{R}_{int}, \quad \mbox{a.e.}\,(x,t) \in (0,1) \times (0,T), \\
\hat{u}(0,t) - \hat{u}(1,t) &= \EuScript{R}_{sb}, \quad t \in (0,T), \\
\hat{u}(x,0) &= \EuScript{R}_{tb}, \quad x \in (0,1).
\end{aligned}
\end{equation}
We take the inner product of the equation in \eqref{eq:iehat1} with $\hat{u}$, and integrate by parts to handle the term coming from the Hilbert transform,
\begin{align*}
\int_0^1 \hat{u} H(\hat{u}_{xx}) \,dx = - \int_0^1 \hat{u}_x H(\hat{u}_x)\,dx + H(\hat{u}_x)(1) \hat{u}(1) - H(\hat{u}_x)(0) \hat{u}(0) = H(\hat{u}_x)(1) \hat{u}(1) - H(\hat{u}_x)(0) \hat{u}(0).
\end{align*}
The boundary terms on the other hand can be bounded as follows,
\begin{align*}
\Big[H(\hat{u}_x)(1) \hat{u}(1) - H(\hat{u}_x)(0) \hat{u}(0) \Big]
&= \Big[ \big(H(\hat{u}_x)(1) - H(\hat{u}_x)(0) \big) \hat{u}(1) + H(\hat{u}_x)(0) (\hat{u}(1) -\hat{u}(0))\Big] \\
&= H(\hat{u}_x)(0) (\hat{u}(1) -\hat{u}(0))
\le \| H(\hat{u}_x)(0) \|_{C^0_t} |\EuScript{R}_{sb}|
\le (\| u \|_{C^0_tC^2_x} + \| u^* \|_{C^0_tC^2_x}) |\EuScript{R}_{sb}|.
\end{align*}
In the second line, we have exploited the periodicity of $u$ and $u^*$. For the remaining terms, we can follow the arguments given before and get
\begin{equation}
\begin{aligned}
\frac{1}{2}\frac{d}{dt}\int_0^1 \hat{u}^2\,dx
&= - \int_0^1 \hat{u}H\hat{u}_{xx}\,dx - \int_0^1\hat{u}(\hat{u}\hat{u}_x + u\hat{u}_x + u_x\hat{u})\,dx + \int_0^1 \hat{u}\EuScript{R}_{int}\,dx \\
&\leq (\Vert u^* \Vert_{C_x^2} + \Vert u \Vert_{C_x^2})|\EuScript{R}_{sb}|
- \int_0^1 (u^*_x - \frac{1}{2}u_x)\hat{u}^2\,dx - \left.\frac{1}{2}u\hat{u}^2\right\vert_0^1
+ \int_0^1 \hat{u}\EuScript{R}_{int}\,dx \\
&\leq (\Vert u^* \Vert_{C_x^2} + \Vert u \Vert_{C_x^2})|\EuScript{R}_{sb} | \\
&+ (\Vert u^* \Vert_{C_x^1} + \frac{1}{2}\Vert u \Vert_{C_x^1}) \int_0^1 \hat{u}^2\,dx + \Vert u \Vert_{C_x^0}(\Vert u \Vert_{C_x^0} + \Vert u^* \Vert_{C_x^0})|\EuScript{R}_{sb}| \\
&+ \frac{1}{2}\int_0^1 \EuScript{R}_{int}^2 \,dx + \frac{1}{2}\int_0^1 \hat{u}^2 \,dx \\
&\leq \big(\Vert u^* \Vert_{C_t^0C_x^2} + \Vert u \Vert_{C_t^0C_x^2} + \Vert u \Vert_{C_t^0C_x^0}(\Vert u \Vert_{C_t^0C_x^0} + \Vert u^* \Vert_{C_t^0C_x^0})\big)|\EuScript{R}_{sb}| \\
&+ \frac{1}{2}\int_0^1 \EuScript{R}_{int}^2 \,dx + (\frac{1}{2} + \Vert u^* \Vert_{C_t^0C_x^1} + \frac{1}{2}\Vert u \Vert_{C_t^0C_x^1}) \int_0^1 \hat{u}^2\,dx \\
&=: C_1|\EuScript{R}_{sb}| + \frac{1}{2}\int_0^1 \EuScript{R}_{int}^2 \,dx + C_2\int_0^1 \hat{u}^2\,dx.
\end{aligned}
\end{equation}
Then integrating the above inequality over $[0,\bar{T}]$ for any $\bar{T} \leq T$ and using Cauchy-Schwarz and Gronwall's inequalities we obtain
\begin{equation}
\begin{aligned}
\int_0^1 \hat{u}(x, \bar{T})^2\,dx
&\leq \int_0^1 \EuScript{R}_{tb}^2\,dx + 2C_1T^{1/2}(\int_0^T\EuScript{R}_{sb}^2\,dt)^{1/2} + \int_0^T\int_0^1 \EuScript{R}_{int}^2 \,dxdt + 2C_2\int_0^{\bar{T}}\int_0^1 \hat{u}^2\,dxdt \\
&\leq (1 + 2C_2Te^{2C_2T}) \big(\int_0^1 \EuScript{R}_{tb}^2\,dx + 2C_1T^{1/2}(\int_0^T\EuScript{R}_{sb}^2\,dt)^{1/2} + \int_0^T\int_0^1 \EuScript{R}_{int}^2 \,dxdt \big).
\end{aligned}
\label{eq:BO_hat_eq3}
\end{equation}
Finally, we integrate \eqref{eq:BO_hat_eq3} over $\bar{T} \in [0, T]$ and arrive at
\begin{equation}
\begin{aligned}
\epsilon_G^2 &:= \int_0^T\int_0^1 \hat{u}(x, \bar{T})^2\,dxd\bar{T} \\
&\leq (T + 2C_2T^2e^{2C_2T}) \big(\int_0^1 \EuScript{R}_{tb}^2\,dx + 2C_1T^{1/2}(\int_0^T\EuScript{R}_{sb}^2\,dt)^{1/2} + \int_0^T\int_0^1 \EuScript{R}_{int}^2 \,dxdt \big),
\end{aligned}
\end{equation}
with
\begin{equation}
\begin{aligned}
C_1 = \Vert u^* \Vert_{C_t^0C_x^2} + \Vert u \Vert_{C_t^0C_x^2} + \Vert u \Vert_{C_t^0C_x^0}(\Vert u \Vert_{C_t^0C_x^0} + \Vert u^* \Vert_{C_t^0C_x^0}), \quad
C_2 = \frac{1}{2} + \Vert u^* \Vert_{C_t^0C_x^1} + \frac{1}{2}\Vert u \Vert_{C_t^0C_x^1}.
\end{aligned}
\end{equation}
The proof of the theorem is completed by applying the estimates \eqref{eq:hquad1}, \eqref{eq:hquad2}, \eqref{eq:hquad3}.
\end{proof}
\subsection{Evaluation of the singular integral}
Note that in the PINNs algorithm for approximating the BO equation, as well as in the derivation of the above error bound, we have assumed that the Hilbert transform in \eqref{eq:ie1} can be evaluated exactly. In practice, this is not possible and we need to approximate the Hilbert transform. To this end, we focus on the periodic case. The periodic Hilbert transform is defined by
\begin{equation}
H_{per}u(x) = \textrm{p.v.} \frac{1}{2L} \int_{-L}^L \cot(\frac{\pi}{2L}y)u(x - y)dy.
\end{equation}
To compute the above \emph{non-local} term, we use a Cartesian grid $\{ x_i \}_{i = -N}^N$ and additionally require $x_0 = 0$. We can then discretize the singular integral term as
\begin{equation}
\label{eq:ehr}
\begin{aligned}
H_{per}u_{xx}(x) &= \textrm{p.v.} \frac{1}{2L} \int_{-L}^L \cot(\frac{\pi}{2L}y)u_{xx}(x - y)dy \\
&\approx \frac{1}{2N} \sum\limits_{j=-N, j \neq 0}^N \cot(\frac{\pi}{2L}x_j)u_{xx}(x - x_j).
\end{aligned}
\end{equation}
We exclude index $j = 0$ in order to be consistent with the definition of principal value because $x_0 = 0$ is a singularity of $\cot(\frac{\pi}{2L}x_j)$.
More importantly, what we need to compute is the term $\left.H_{per}u_{xx}(x)\right\vert_{x_i}$, which can be represented as a discrete periodic convolution of $\cot(\frac{\pi}{2L}x_j)$ and $\left.u_{xx}(x)\right\vert_{x_j}$
\begin{equation}
\label{eq:ehr1}
\begin{aligned}
H_{per}u_{xx}(x_i) &\approx \frac{1}{2N} \sum\limits_{j=-N, j \neq 0}^N \cot(\frac{\pi}{2L}x_j)u_{xx}(x_i - x_j) \\
&= \frac{1}{2N} \sum\limits_{j=-N, j \neq 0}^N \cot(\frac{\pi}{2L}x_j)u_{xx}(x_{i - j}).
\end{aligned}
\end{equation}
This implies that to compute $H_{per}u_{xx}(x_i), -N \leq i \leq N$, we only need to compute $u_{xx}(x_i), -N \leq i \leq N$. Moreover, the discrete periodic convolution \eqref{eq:ehr1} can be accelerated by a fast Fourier transform (FFT) to obtain a complexity of $O(N\log(N))$.
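A minimal sketch of this FFT-accelerated convolution is given below (our illustration, not the authors' code). Both the input and output arrays are assumed to be stored in FFT ordering, i.e. values at $x_0,\dots,x_{N-1},x_{-N},\dots,x_{-1}$.
\begin{verbatim}
import numpy as np

def periodic_hilbert_uxx(uxx, N):
    # uxx: values of u_xx at the 2N grid points x_j = j*L/N (FFT order).
    # Returns the discretization of the singular integral via a circular
    # convolution computed with the FFT: O(N log N) instead of O(N^2).
    j = np.concatenate([np.arange(0, N), np.arange(-N, 0)])
    kernel = np.zeros(2 * N)
    kernel[1:] = 1.0 / np.tan(np.pi * j[1:] / (2.0 * N))   # cot, j != 0
    conv = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(uxx)))
    return conv / (2.0 * N)
\end{verbatim}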
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.1}
\footnotesize{
\begin{tabular}{ c c c c c c c c c c c}
\toprule
\bfseries &\bfseries $N_{int}$ & \bfseries $N_{sb}$& \bfseries $N_{tb}$ &\bfseries $K-1$ & \bfseries $d$ &\bfseries $\lambda$ &\bfseries $\EuScript{E}_T$&\bfseries $\EuScript{E}_G^r$ &\bfseries $\Delta$ \\
\midrule
\midrule
Single Soliton &32768 & 8192 & 8192 & 12& 24 & 1 &0.000296& 0.773\% & 4 \\
\midrule
Double Soliton &65536 & 16384 & 16384 & 4 & 20 & 10 &0.00616 & 0.657\% & 30 \\
\bottomrule
\end{tabular}
\caption{Best performing \textit{Neural Network} configurations for the periodic single soliton and real-line double soliton problem. Low-discrepancy Sobol points are used for all boundary points; Cartesian grids are used for all interior points.}
\label{tab:BO}
}
\end{table}
\subsection{Numerical experiments}
In addition to the previous hyperparameters, an additional one, $\Delta = \frac{\Delta t}{\Delta x}$, i.e., the ratio of the time and space steps on the space-time Cartesian grid, also needs to be set for the BO equation, and we select it through ensemble training. We start with the periodic single soliton test case with the exact solution,
\begin{equation}
u(x, t) = \frac{2c\delta^2}{1 - \sqrt{1-\delta^2}\cos(c\delta(x-ct-x_0))}, \quad \delta = \frac{\pi}{cL},
\end{equation}
where $L$ is the half-period. This represents a single bump moving periodically to the right with speed $c$, with initial peak at $x = x_0$. In our experiments, we choose $L = 15$, $c = 0.25$ and $x_0 = 0$. The hyperparameters selected by the ensemble training procedure are presented in Table \ref{tab:BO}. In figure \ref{fig:BO} (left), we plot the exact single soliton and its PINN approximation and observe that the PINN approximates the exact solution very well. This is further verified from Table \ref{tab:BO}, where we report an error of less than $1\%$.
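For reference, the periodic soliton can be evaluated directly; an illustrative Python sketch with the parameter values above:
\begin{verbatim}
import numpy as np

def bo_periodic_soliton(x, t, c=0.25, L=15.0, x0=0.0):
    # Periodic single soliton of the BO equation with speed c.
    delta = np.pi / (c * L)
    return (2.0 * c * delta ** 2
            / (1.0 - np.sqrt(1.0 - delta ** 2)
               * np.cos(c * delta * (x - c * t - x0))))
\end{verbatim}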
\begin{figure}[h!]
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{{Images/BO_single.png}}
\caption{Single soliton}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{{Images/BO_double.png}}
\caption{Double soliton}
\end{subfigure}
\caption{The exact and PINN solutions of single and double soliton of BO equation.}
\label{fig:BO}
\end{figure}
Next, we consider the double soliton case. Here, the exact solution formula for the periodic double soliton is very complicated to evaluate. Hence, we consider the so-called \emph{long wave limit} obtained by taking $L \rightarrow +\infty$, i.e., the interacting solitons on the real line, given by
\begin{equation}
u(x, t) = \frac{4c_1c_2(c_1\lambda_1^2 + c_2\lambda_2^2 + (c_1+c_2)^3c_1^{-1}c_2^{-1}(c_1-c_2)^{-2})}{(c_1c_2\lambda_1\lambda_2 - (c_1+c_2)^2(c_1-c_2)^{-2})^2 + (c_1\lambda_1 + c_2\lambda_2)^2},
\end{equation}
where
\begin{equation}
\begin{aligned}
\lambda_1 = x - c_1t, \quad \lambda_2 = x - c_2t
\end{aligned}
\end{equation}
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.1}
\footnotesize{
\begin{tabular}{ c c c c }
\toprule
\bfseries max\_iters &\bfseries training time$/s$ &\bfseries $\eps_T$ &\bfseries $\eps_G^r$ \\
\midrule
\midrule
100 & 87 & 1.36e-02 & 4.11e-01 \\
\midrule
500 & 430 & 3.83e-03 & 2.36e-01 \\
\midrule
1000 & 888 & 3.30e-03 & 2.34e-01 \\
\midrule
2000 & 1667 & 1.61e-03 & 6.13e-02 \\
\midrule
5000 & 3492 & 4.56e-04 & 8.22e-03 \\
\midrule
10000 & 6107 & 2.96e-04 & 7.73e-03 \\
\bottomrule
\end{tabular}
\caption{Results of different training iterations for single soliton case of BO equation.}
\label{tab:BO_single_cp}
}
\end{table}
This solution represents two waves that “collide” at $t = 0$ and separate for $t > 0$. For large $|t|$, $u(\cdot, t)$ is close to a sum of two single solitons at different locations. We choose $c_1 = 2 $ and $ c_2 = 1$ in our experiments. Given the impossibility of computing over the whole real line, we restrict ourselves to the computational domain $[-L, L]$. We first extend the PINN by zero to the extended computational domain $[-5L, 5L]$ and then use a similar discretization as in \eqref{eq:ehr}, to compute the discrete periodic convolution of $\frac{1}{\pi x_j}$ and $\left.u_{xx}(x)\right\vert_{x_j}$ and finally restrict the result of discrete periodic convolution onto domain $[-L, L]$.
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.1}
\footnotesize{
\begin{tabular}{ c c c c }
\toprule
\bfseries max\_iters &\bfseries training time$/s$ &\bfseries $\eps_T$ &\bfseries $\eps_G^r$ \\
\midrule
\midrule
100 & 74 & 2.98e-01 & 4.69e-01 \\
\midrule
500 & 325 & 3.07e-02 & 2.96e-02 \\
\midrule
1000 & 703 & 1.13e-02 & 3.92e-03 \\
\midrule
2000 & 1280 & 7.19e-03 & 6.98e-03 \\
\midrule
5000 & 1715 & 6.16e-03 & 6.57e-03 \\
\midrule
10000 & 1937 & 6.16e-03 & 6.57e-03 \\
\bottomrule
\end{tabular}
\caption{Results of different training iterations for double soliton case of BO equation.}
\label{tab:BO_double_cp}
}
\end{table}
The resulting PINN approximation together with the exact double soliton is plotted in Figure \ref{fig:BO} (right). We observe a very accurate approximation of the BO double-soliton interaction by the PINN and this is also confirmed by a very low error of less than $1\%$, reported in Table \ref{tab:BO}.
The training times for the periodic single soliton are shown in Table \ref{tab:BO_single_cp}, and we see that the training is significantly slower in this case, when compared to the other test cases, with a relative error of approximately $6\%$ reached in approximately $25$ minutes. On the other hand, the PINN approximating the real-line double soliton is significantly faster to train. From the training times reported in Table \ref{tab:BO_double_cp}, we see that an error of about $3\%$ is already achieved after merely $5$ minutes of training. Given the non-local as well as dispersive nature of the underlying solutions, attaining such low errors in a short time is noteworthy.
\begin{figure}
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{Images/cp_fig1/BO_single_enh4_1_cp_ff.png}
\caption{Periodic single soliton}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{Images/cp_fig1/BO_double_enh30_cp_f.png}
\caption{Real-line double soliton}
\end{subfigure}
\caption{PINN solutions at different training iterations, at the final time.}
\label{fig:BO_double_cp}
\end{figure}
Why is it significantly harder to train the periodic single soliton, when compared to the real-line double soliton? To investigate this question, we plot the PINN approximations for both test cases in Figure \ref{fig:BO_double_cp} for different training iterations. As observed in this figure, both the boundary values and the peak of the periodic single soliton take quite a few LBFGS iterations to converge, which explains the relatively high computational cost. On the other hand, the real-line double soliton is approximated very fast, as its two sharp peaks are resolved within very few LBFGS iterations.
\section{Discussion}
Nonlinear dispersive PDEs such as the KdV-Kawahara equation, the Camassa-Holm equation and the Benjamin-Ono equation arise in the modeling of shallow-water waves. In addition to being completely integrable, these PDEs contain interesting solutions such as multiple colliding solitons, which result from a subtle balance between the effects of non-linearity and dispersion. Given the fact that these PDEs are nonlinear and contain either high-order or non-local partial derivatives, standard numerical methods such as finite difference and finite element methods can be very expensive for computing accurate solutions.
In this paper, we have proposed a novel machine learning algorithm for approximating the solutions of the afore-mentioned dispersive PDEs. Our algorithm is based on recently proposed physics informed neural networks (PINNs), in which the PDE residual, together with initial and boundary data mismatches, is minimized by a gradient descent algorithm to yield a neural network that can approximate classical solutions of the underlying PDE. We prove rigorous bounds on the error of the PINNs and present several numerical experiments to demonstrate that PINNs can efficiently approximate the solutions of non-linear dispersive equations such as KdV-Kawahara, Camassa-Holm and Benjamin-Ono. We observe from the numerical experiments that PINNs can yield very low errors with low to moderate computational cost, even for complicated problems such as multi-soliton interactions, making them significantly more efficient than traditional numerical methods for these nonlinear PDEs. Moreover, PINNs are very easy to code and parallelize using standard machine learning frameworks such as PyTorch and Tensorflow.
This impressive performance of PINNs is in spite of the fact that the basis of the PINNs algorithm is an \emph{automatic differentiation by backpropagation} routine, by which one evaluates the derivatives used in computing the PDE residual. Given that one has to repeatedly use automatic differentiation for evaluating the high-order derivatives for dispersive PDEs, for instance 3rd-order derivatives for the KdV and Camassa-Holm equations and even a 5th-order derivative for the Kawahara equation, it is surprising that the automatic differentiation routine is both stable and very accurate, resulting in very low PINN errors. This paper further demonstrates the robustness of backpropagation.
In future works, we will investigate the use of PINNs to compute solutions of other dispersive PDEs, particularly in several space dimensions. Moreover, it is clear from the error estimates that PINNs can only approximate classical solutions of dispersive PDEs efficiently. On the other hand, singular solutions such as peakons for the Camassa-Holm equation cannot be efficiently approximated by PINNs. Rather, weak formulations of PINNs will be better suited for this purpose and we plan to investigate such an extension in the future.
\bibliographystyle{abbrv}
\chapter*{Acknowledgements}
To do justice to the people who have, in one way
or another, made this work what it is, feels like it
would take more than the rest of the thesis.
I can only say to everyone who has contributed
that I apologise for the entirely inadequate
acknowledgements to follow. I realise that
the sheer number of names below will suggest
to the reader that each has contributed only a little.
What can I say? All have made a significant
contribution; I have no real choice.
\subsection*{}
I could not ask for a pair of finer minds to share
this journey with than my supervisors,
Robert Dale and Mike Johnson, whose brilliance
still amazes me on a regular basis. Together,
they are my Plato and my Aristotle, and I doubt
I will ever shed their influences on my thinking.
I thank Mike for his infinite understanding
and Robert for his boundless energy.
Without the vision and determination of
Vance Gledhill, the unique environment
at the Microsoft Institute, and all the work
that has emerged from it, would never have existed.
He deserves the success it currently enjoys,
and my heartfelt thanks.
I want particularly to thank Mark Johnson for
his assistance both in nurturing the development
of my own ideas, and in generously contributing
his own. Thanks also to Wayne Wobcke for
discussions and input at various times.
Special thanks are deserved by Ted Briscoe,
Gregory Grefenstette, Karen Jensen and Richard Sproat,
all of whom don't realise how much I have valued
their encouragements and ideas.
I owe a great debt of gratitude to Philip Resnik,
not only for his technical contributions, but also
for his faith, passion and friendship; I only hope
that one day I can do them justice.
More than anyone else, the other Microsoft Institute
fellows have become part of the fabric of this thesis.
I am grateful to everyone here. Especial thanks
must go to Richard Buckland and Mark Dras; both
are blessed with genius, as well as just being really
friendly guys. Particular contributions have also
been made by Sarah Boyd, Maria Milosavljevic, Steven Sommer,
Wilco ter~Stal, Jonathon Tidswell and Adrian Tulloch.
I wish also to thank the following people
for their friendship, which has all helped:
Ken Barker, Alan Blair, Tanya Bowden, Christophe Chastang,
Phil Harrison, Rosie Jones, Patrick Juola,
Elisabeth Maier, Michael Mitchell, Nick Nicholas,
Peter Wallis, Susan Williams and Danny Yee.
Without the financial support generously given
by the Microsoft Institute Fellowship Program and the
Australian Government Postgraduate Award Scheme,
this research would not have happened.
\subsection*{}
{\samepage
Every single one of the following has personally made a
significant difference to the work presented here.
I am sorry I cannot specifically thank you all.
\begin{tabular}{llll}
& & & \\
{\small John Bateman} &{\small George Heidorn} &{\small Malti Patel} &{\small Andrew Taylor} \\
{\small Ezra Black} &{\small Andrew Hunt} &{\small Pavlos Peppas} &{\small Lucy Vanderwende} \\
{\small Rebecca Bruce} &{\small Christian Jacquemin} &{\small Pam Peters} &{\small Wolfgang Wahlster} \\
{\small Ted Dunning} &{\small Geof Jones} &{\small David Powers} &{\small Bonnie Lyn Webber} \\
{\small Dominique Estival} &{\small Kevin Knight} &{\small James Pustejovsky} &{\small Yorick Wilks} \\
{\small Tim Finin} &{\small John Lafferty} &{\small Ross Quinlan} &{\small Dekai Wu} \\
{\small Norman Foo} &{\small Alon Lavie} &{\small Carolyn Penstein Ros\'{e}} &{\small Collin Yallop} \\
{\small Louise Guthrie} &{\small Chris Manning} &{\small Jeff Siskind} &{\small David Yarowsky} \\
{\small Marti Hearst} &{\small Jenny Norris} &{\small Mark Steedman} &{\small Kobayasi Yoshiyuki} \\
\end{tabular}
}
\subsection*{}
And now some very special people:
There is nothing I can ever do to repay the
unerring support and care provided by my father,
my stepmother and my grandmother, without
which I would be lost.
Finally, the love and friendship I have shared
over the past few years with Andrew Campbell,
Christine Cherry and Lesley Johnston goes beyond
all words. Each has saved me from despair more times
than I can count. They are the earth on which I stand,
the air which I breathe and the sunlight that banishes
my darkness.
\subsection*{}
\subsection*{}
\subsection*{}
\vspace{2in}
\subsection*{Addendum to acknowledgements for first reprint}
This thesis has been accepted without modification by Macquarie University
in fulfillment of the requirements for the Degree of Doctor of Philosophy.
Since submission I have received insightful comments from my examiners
which have prompted me to make some small changes.
I would therefore also like to thank them: Eugene Charniak, Mitch Marcus
and Chris Wallace.
\chapter*{Preface}
The research represented in this thesis was carried out
at the Microsoft Institute. All work reported here
is the original work of the author, with the following
two exceptions.
\begin{enumerate}
\item The reasoning given in section~\ref{sec:dr_beginning}
regarding empty and non-empty bins
(pages~\pageref{pg:dr_beginning_MJstart}--\pageref{pg:dr_beginning_MJfinish})
was developed by Mark Johnson of Brown University. The author's
contribution was to extend the results to even values
of $n$ (the initial work only considered odd $n$) and
complete the proof for equation~\ref{eq:dr_beginning_emptybound}.
This work has been published as Lauer~(1995a) with Mark Johnson's
permission.
\item An original version of the probabilistic model given in
section~\ref{sec:cy_model} was jointly developed by the author
and Mark Dras, and has been published in Lauer and Dras~(1994).
\end{enumerate}
Some parts of this thesis include revised versions
of published papers. I would like to thank
the Association for Computational Linguistics
for (automatically) granting permission to reuse
material from Lauer~(1995b) (this material is primarily contained
in sections~\ref{sec:cy_model}, \ref{sec:cy_results}
and~\ref{sec:cy_comparisons}). Similarly, the
Pacific Association for Computational Linguistics
has (automatically) granted permission to reuse
material from Lauer~(1995a) (this material appears
in sections~\ref{sec:dr_need} through~\ref{sec:dr_beginning}).
Finally, kind permission has been given to reuse material from
Lauer~(1995c) (``Conserving Fuel in Statistical Language Learning:
Predicting Data Requirements'' in Proceedings of the Eighth Australian Joint
Conference on Artificial Intelligence, pp.~443--450.
Copyright by World Scientific Publishing Co. Pte, Singapore, 1995)
which appears in sections~\ref{sec:dr_beginning}
through~\ref{sec:dr_simulations}.
\chapter{Half-binomial Result}
\label{appendix:halfbinomial}
In section~\ref{sec:dr_global}, the
mathematics hinges on an expression for the
part of a binomial distribution that lies
above the half-way point of the distribution.
This appendix contains some notes leading to a mathematical
result which I have been referring to as the
{\em half-binomial result}. This result provides
a simpler expression for this probability.
We are interested in that part of a binomial distribution
which lies above the half-way point of the distribution.
Let $p$ be the chance of success on each trial and $n$ be
the number of trials. Since the distribution is discrete,
when $n$ is even some of the distribution lies exactly
on the half-way point (that is, \halfn). In this case,
we would like to weight the contribution from this part
by $\frac{1}{2}$.
Thus, the probability we are interested in is:
\begin{equation}
\halfbin(p,n) = {\sum_{i=\halfnincup}^{n} {n \choose i} p^i (1-p)^{n-i}}
+
\frac{1}{2} \even{n} {n \choose \halfn} p^{\halfn} (1-p)^{\halfn}
\end{equation}
Here, and below, $\even{n}$ is $1$ when $n$ is even and
$0$ otherwise. $\odd{n}$ is the reverse.
By expansion, we can arrive at a recurrence as follows:
First consider the probability for $n-1$.
\begin{equation}
\halfbin(p,n-1) = {\sum_{i=\halfnup}^{n-1} {{n-1} \choose i} p^i (1-p)^{n-i-1}}
+ \frac{1}{2}
\odd{n} {{n-1} \choose \halfndec} p^{\halfndec} (1-p)^{\halfndec}
\end{equation}
We want to find an expression for $\halfbin(p, n)$ in terms of
$\halfbin(p, n-1)$. Proceed by splitting the first choice function
in $\halfbin(p, n)$ into two halves using Pascal's relation.
\begin{eqnarray*}
\halfbin(p,n) & = & {\sum_{i=\halfnincup}^{n}
\left( {{n-1} \choose {i-1}} + {{n-1} \choose {i}} \right)
p^i (1-p)^{n-i}}
+
\frac{1}{2} \even{n} {n \choose \halfn} p^{\halfn} (1-p)^{\halfn} \\
& = & {\sum_{i=\halfnincup}^{n} {{n-1} \choose {i-1}} p^i (1-p)^{n-i}} \\
& & + {\sum_{i=\halfnincup}^{n} {{n-1} \choose {i}} p^i (1-p)^{n-i}} \\
& & +
\frac{1}{2} \even{n} {n \choose \halfn} p^{\halfn} (1-p)^{\halfn} \\
& = & p {\sum_{j=\halfndecup}^{n-1}
{{n-1} \choose {j}} p^j (1-p)^{n-j-1}} \\
& & + (1-p) {\sum_{i=\halfnincup}^{n}
{{n-1} \choose {i}} p^i (1-p)^{n-i-1}} \\
& & +
\frac{1}{2} \even{n} {n \choose \halfn} p^{\halfn} (1-p)^{\halfn} \\
& = & p \left( \odd{n}
{{n-1} \choose \halfndec} p^{\halfndec} (1-p)^{n-1-\halfndec} +
\sum_{j=\halfnup}^{n-1} {{n-1} \choose {j}} p^j (1-p)^{n-j-1} \right) \\
& & + (1-p) \left( {\sum_{i=\halfnup}^{n-1}
{{n-1} \choose {i}} p^i (1-p)^{n-i-1}}
- \even{n} {{n-1} \choose \halfn}
p^{\halfn} (1-p)^{\halfn-1} \right) \\
& & +
\frac{1}{2} \even{n} {n \choose \halfn} p^{\halfn} (1-p)^{\halfn} \\
& = & p \left( \halfbin(p, n-1) + \odd{n} \frac{1}{2}
{{n-1} \choose \halfndec} p^{\halfndec} (1-p)^{\halfndec} \right) \\
& & + (1-p) \left( \begin{array}{rl} \halfbin(p, n-1)
& - \odd{n} \frac{1}{2}
{{n-1} \choose \halfndec} p^{\halfndec} (1-p)^{\halfndec} \\
& - \even{n}
{{n-1} \choose \halfn} p^{\halfn} (1-p)^{\halfn-1}
\end{array} \right) \\
& & +
\frac{1}{2} \even{n} {n \choose \halfn} p^{\halfn} (1-p)^{\halfn} \\
& = & \halfbin(p, n-1) \\
& & + \odd{n} \left(
\frac{p}{2} {{n-1} \choose \halfndec} p^{\halfndec} (1-p)^{\halfndec}
- \frac{1-p}{2} {{n-1} \choose \halfndec}
p^{\halfndec} (1-p)^{\halfndec} \right) \\
& & + \even{n} \left(
\frac{1}{2} {n \choose \halfn} p^{\halfn} (1-p)^{\halfn}
- (1-p) {{n-1} \choose \halfn} p^{\halfn} (1-p)^{\halfn-1} \right)
\label{eq:recurrence}
\end{eqnarray*}
Since $\frac{1}{2}{n \choose \halfn} = {{n-1} \choose \halfn}$, the term
for even $n$ is zero. Therefore, we are left with the simple recurrence:
\begin{equation}
\halfbin(p, n) = \halfbin(p, n-1) + \odd{n} (p-\frac{1}{2})
{{n-1} \choose \halfndec} p^{\halfndec} (1-p)^{\halfndec}
\end{equation}
Since each increase in $n$ only adds a new term, we can re-express
this as a sum, which is the {\em half-binomial result}. It holds for
all $n \ge 0$.
\begin{equation}
\halfbin(p, n) = \frac{1}{2} + \sum_{i=0}^{\halfnup-1} (p-\frac{1}{2})
{2i \choose i} p^{i} (1-p)^{i}
\end{equation}
Notice that this sum (unlike the expression we began with)
does not contain terms that are dependent on $n$. Only the
upper limit of the summation depends on $n$. The corollary
below follows from the fact that the term representing
even $n$ in the recurrence above is zero.
\begin{corollary}
For all even $n \ge 2$, $\halfbin(p, n) = \halfbin(p, n-1)$.
\end{corollary}
Observing that the terms of the sum are always positive when $p > \frac{1}{2}$
leads to the following corollary.
\begin{corollary}
For all $n \ge 1$ and $p > \frac{1}{2}$,
$\halfbin(p, n) \ge \halfbin(p, n-1)$.
\end{corollary}
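As an informal check on the algebra above, the following short Python
fragment (not part of the original derivation) evaluates both the defining
expression for $\halfbin(p,n)$ and the half-binomial result, and confirms
numerically that they agree.
\begin{verbatim}
from math import comb

def halfbin_direct(p, n):
    # The defining expression: mass strictly above the half-way point,
    # plus half the mass exactly at n/2 when n is even.
    total = sum(comb(n, i) * p**i * (1 - p)**(n - i)
                for i in range(n // 2 + 1, n + 1))
    if n % 2 == 0:
        total += 0.5 * comb(n, n // 2) * (p * (1 - p))**(n // 2)
    return total

def halfbin_closed(p, n):
    # The half-binomial result: the summand is independent of n;
    # only the number of terms, ceil(n/2), depends on n.
    return 0.5 + sum((p - 0.5) * comb(2 * i, i) * (p * (1 - p))**i
                     for i in range((n + 1) // 2))

for n in range(13):
    assert abs(halfbin_direct(0.7, n) - halfbin_closed(0.7, n)) < 1e-12
\end{verbatim}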
\chapter{Instructions to Human Judges}
\label{appendix:humaninstructions}
\section*{Instructions}
Welcome to my compound noun parsing experiment.
The purpose is to establish how well people can parse compound nouns
without using context. For this, I have extracted 215 three-word compounds,
which I will present to you for analysis.
Three-word compounds have two possible syntactic readings.
In each example, you will be provided with both graphical and
bracketed alternatives. Here are two examples:
\begin{description}
\item[Left-Branching:] ((romance novel) cover)
Since we are not referring here to a \scare{novel cover}
for \scare{romance}, but rather the \scare{cover} of a \scare{romance novel},
this example is left-branching.
[accompanied by a box diagram showing romance novel linked to cover]
\item[Right-Branching:] (toy (coffee grinder))
Since we are not referring here to a \scare{grinder} for \scare{toy coffee},
but rather a \scare{coffee grinder} that is a \scare{toy},
this example is right-branching.
[accompanied by a box diagram showing coffee grinder linked to toy]
\end{description}
Your task is to choose which of these alternatives is
most probably the correct one in the (unknown) context
from which the compound was extracted.
Since you do not know the context, you must make your best guess.
Try to imagine possible meanings and contexts for each compound
and choose the analysis you think is most likely.
You will only be allowed to choose exactly one.
Choose your answer for each compound carefully.
You will not be able to go back once you have clicked
on the OK button. Feel free to mark the check boxes
to the left of the OK button if they are appropriate.
You can also add any comments you like about each example
in the comment box. Click on the HELP button to review
these instructions at any time.
Some points to note:
\begin{itemize}
\item There aren't necessarily equal numbers of
left- and right-branching compounds.
\item The compounds are presented in random order.
\item The compounds come from encyclopedia entries.
\end{itemize}
The entire task should take approximately 50--60 minutes.
Please take your time. Think for as long as you wish:
we want you to answer as well as you can.
\section*{Answer Dialogue}
After the instructions above had been presented to the
subject, a dialogue was displayed for each test noun compound.
The dialogue had the following elements:
\begin{itemize}
\item the test compound shown in large font.
\item two alternative analyses shown graphically.
\item two alternative analyses shown by bracketing.
\item radio buttons for left- and right-branching.
\item display with fraction of test completed.
\item HELP and OK buttons.
\item a free text comment box.
\item three check boxes, labelled:
\begin{itemize}
\item Contains a word you've never seen.
\item Neither answer is appropriate.
\item Both would fit equally well.
\end{itemize}
\end{itemize}
\chapter{Test Set for Noun Compound Parsing}
\label{appendix:cytest}
The following list of noun compounds forms the test set
used for the experiments on noun compound parsing.
It was produced from Grolier's using the method described
in section~\ref{sec:cy_method}. Each was then assigned
one of the following analyses: left-branching~(L),
right-branching~(R), indeterminate~(I) and extraction
error~(E). This answer is shown in the last
column.
For comparison, the second column gives the prediction made
by the best statistical model when using tagged training
data (see section~\ref{sec:cy_comparisons} and in particular
figure~\ref{fig:cy_tagged_tag_accuracy}). This
model achieves 80.7\% accuracy.
\begin{center}
\begin{tabular}{|l|c|c|} \hline
Noun triple & Best model & Correct Answer \\
\hline
minority business development & L & L \\
satellite data systems & R & R \\
disaster relief assistance & L & L \\
county extension agents & R & R \\
world food production & R & R \\
granary storage baskets & L & R \\
customs enforcement vehicles & L & L \\
airport security improvements & L & L \\
mountain summit areas & L & L \\
law enforcement agencies & L & L \\
college commencement poem & R & R \\
health education institutions & L & L \\
country music theme & L & L \\
sea transportation hub & L & L \\
principle organ systems & R & E \\
quality movie animation & R & E \\
army ant behavior & L & L \\
fossil ant species & L & I \\
missile defense systems & R & L \\
world petroleum production & R & R \\
representative percussion instruments & L & E \\
Arab independence movements & L & L \\
speech recognition system & L & L \\
\hline
\end{tabular}
\begin{tabular}{|l|c|c|} \hline
Noun triple & Best model & Correct Answer \\
\hline
production passenger vehicles & R & R \\
revenue ton miles & L & R \\
combustion chemistry technology & L & L \\
science fiction novels & L & L \\
missile guidance system & R & L \\
sea bass species & L & L \\
radiation energy conversion & L & L \\
tobacco mosaic disease & R & I \\
science fiction novels & L & L \\
energy distribution properties & R & L \\
college football player & L & I \\
science fiction writer & L & L \\
science fiction themes & L & L \\
science fiction writer & L & L \\
breeder technology development & L & L \\
landmark majority opinions & R & R \\
television news personality & R & I \\
community college system & L & L \\
town council members & L & L \\
war crimes prosecutor & L & L \\
health insurance laws & L & L \\
science fiction satire & L & L \\
death penalty statutes & L & L \\
Calvinist peasant family & R & R \\
exhibition ballroom dancers & L & R \\
music hall performer & R & L \\
volume alkali production & R & R \\
decades child specialists & R & E \\
child development specialists & L & L \\
fact problem behavior & R & E \\
skyscraper office buildings & R & R \\
sea warfare doctrine & L & L \\
reservoir evaporation losses & L & R \\
communications satellite system & L & L \\
data storage device & L & L \\
error correction data & L & L \\
automobile ignition systems & R & R \\
computer ad data & R & E \\
performance improvement method & L & L \\
computer music studio & L & L \\
privacy protection agency & R & L \\
law enforcement interception & L & L \\
computer education enthusiasts & L & L \\
citizen conservation organizations & R & R \\
development assistance efforts & L & L \\
energy conservation law & L & L \\
\hline
\end{tabular}
\begin{tabular}{|l|c|c|} \hline
Noun triple & Best model & Correct Answer \\
\hline
citizens twenty-one years & L & E \\
engine lubrication system & R & I \\
Monday night football & L & L \\
law enforcement agents & L & L \\
law enforcement agencies & L & L \\
law enforcement officials & L & L \\
intelligence reconnaissance sources & R & R \\
coalition war cabinet & R & R \\
cell plasma membrane & R & I \\
science fiction writer & L & L \\
data management effort & L & L \\
speech communication skills & R & I \\
hair cell destruction & L & L \\
bird pox viruses & L & I \\
college extension personnel & R & R \\
tobacco mosaic virus & R & I \\
currency brokerage office & R & L \\
countries education systems & R & E \\
student personality traits & R & R \\
hydrogen energy system & R & L \\
fiber optics systems & L & L \\
health enforcement agencies & R & L \\
minority business enterprises & L & L \\
law enforcement agency & L & L \\
college basketball players & L & I \\
law enforcement organizations & L & L \\
law enforcement agencies & L & L \\
studio office buildings & R & R \\
speech transmission systems & L & L \\
century manuscript illumination & L & E \\
century manuscript illumination & L & E \\
quality assurance department & L & L \\
infantry infiltration tactics & R & R \\
law enforcement officials & L & L \\
law enforcement officials & L & L \\
tobacco mosaic virus & R & I \\
tobacco mosaic virus & R & I \\
army ordnance depot & R & I \\
highway transportation systems & R & I \\
science fiction writer & L & L \\
sperm cell production & L & L \\
world grape production & R & R \\
world food production & R & R \\
Romans concert hall & R & E \\
Sunday afternoon football & L & L \\
country music singer & L & L \\
\hline
\end{tabular}
\begin{tabular}{|l|c|c|} \hline
Noun triple & Best model & Correct Answer \\
\hline
tenor sax players & L & L \\
health maintenance organizations & R & L \\
health maintenance organizations & R & L \\
hospital payment system & L & L \\
countries health insurance & L & E \\
tenor saxophone player & L & L \\
television drama series & L & R \\
television suspense series & R & R \\
lymph node enlargement & L & L \\
law enforcement activities & L & L \\
law enforcement standards & L & L \\
cell plasma membrane & R & I \\
repertory theater movement & L & L \\
hospital dissection theaters & L & R \\
government construction standards & R & R \\
daylight hours hummingbirds & L & E \\
kidney artery disease & R & L \\
origins quota system & L & L \\
valleys irrigation systems & R & E \\
food storage facilities & L & L \\
government ethics law & L & R \\
period infantry armies & L & E \\
century infantry tactics & R & E \\
chicken pox infection & L & L \\
information storage technology & L & L \\
data storage systems & L & L \\
debt repayment problems & L & L \\
county borough corporations & L & R \\
world petroleum crisis & L & R \\
luxury furniture industry & L & L \\
gridiron street pattern & R & R \\
Buddhist hell scenes & R & R \\
ethics committee investigation & L & L \\
sea bass family & L & L \\
computer industry entrepreneur & L & L \\
day management responsibilities & L & R \\
granite office tower & R & R \\
satellite news distribution & L & R \\
health maintenance organization & R & L \\
college basketball facilities & L & I \\
bladder outlet obstruction & L & L \\
war crimes trials & L & L \\
advice newspaper columns & R & R \\
television mystery series & R & R \\
missile defense weapons & R & L \\
music hall comedian & R & L \\
\hline
\end{tabular}
\begin{tabular}{|l|c|c|} \hline
Noun triple & Best model & Correct Answer \\
\hline
workers compensation law & L & L \\
canon law system & L & L \\
student democracy movement & R & R \\
world petroleum producers & L & R \\
life insurance policy & L & L \\
life insurance policy & L & L \\
coalition war cabinet & R & R \\
teacher education college & L & L \\
college basketball commentator & L & L \\
management information systems & L & R \\
weapons delivery systems & L & L \\
emergency medicine specialist & L & L \\
health maintenance organizations & R & L \\
tobacco mosaic virus & R & I \\
pagan Arab tribes & R & I \\
civilian undergraduate colleges & L & R \\
law enforcement officials & L & L \\
system drainage area & L & R \\
quantum interference device & R & L \\
monsoon regions rainfall & L & E \\
fertility mystery cult & R & L \\
city government activities & L & L \\
century concert music & R & R \\
music chorale melodies & L & R \\
union representation elections & R & R \\
government ephemeris offices & R & R \\
war crimes tribunals & L & L \\
sperm storage vessel & L & L \\
nucleus evaporation products & R & R \\
fission energy production & L & L \\
weapon delivery systems & L & L \\
weapons production facilities & R & L \\
weapon delivery systems & L & L \\
food energy calories & L & L \\
assistant majority leader & R & R \\
university opera workshops & R & R \\
war crimes trials & L & L \\
law enforcement resources & L & L \\
energy storage element & L & L \\
law enforcement agencies & L & L \\
law enforcement agencies & L & L \\
barn owl family & L & L \\
computer memory units & L & I \\
Buddhist temple precincts & L & R \\
family cigar business & R & R \\
world transportation artery & L & R \\
\hline
\end{tabular}
\begin{tabular}{|l|c|c|} \hline
Noun triple & Best model & Correct Answer \\
\hline
community barn raisings & R & R \\
deputy music director & R & R \\
twenty-one member nations & L & E \\
war crimes indictments & L & L \\
night warfare capabilities & L & L \\
gasoline storage tanks & L & L \\
cylinder phonograph system & L & R \\
television news photography & L & L \\
students laboratory instruction & R & E \\
world umbrella organization & R & R \\
years planetarium projectors & L & E \\
venom delivery system & L & L \\
polarizer prism system & R & R \\
navigation guidance system & R & L \\
law enforcement agencies & L & L \\
science fiction writer & L & L \\
engine combustion temperatures & L & R \\
alpha particle source & L & L \\
century population growth & L & E \\
government poverty statistics & R & R \\
combustion turbine generators & L & R \\
combustion turbine unit & L & R \\
years parent involvement & L & E \\
landslide election victories & R & R \\
government news agency & R & R \\
computer output device & R & R \\
computer hardware technology & L & L \\
alpha particle bombardment & L & L \\
child guidance movement & R & L \\
granite valley temple & R & R \\
family newspaper business & R & R \\
computer radar systems & R & R \\
government frequency allocations & R & R \\
aperture synthesis systems & L & L \\
world news roundup & L & L \\
uranium disintegration series & R & R \\
room temperature radon & L & E \\
laser radar systems & L & L \\
music industry designation & L & L \\
army ammunition depot & R & I \\
college football player & L & I \\
imitation rococo interiors & L & L \\
war college instructor & L & L \\
decades city leaders & R & E \\
chicken sarcoma virus & R & I \\
deputy assistant secretary & L & I \\
\hline
\end{tabular}
\begin{tabular}{|l|c|c|} \hline
Noun triple & Best model & Correct Answer \\
\hline
peasant redemption payments & L & R \\
city government elections & L & L \\
army tank commander & R & I \\
rider ropes cattle & R & E \\
college basketball player & L & I \\
college economics textbook & L & I \\
luxury apartment buildings & L & L \\
assistant division commander & R & R \\
science curriculum development & L & L \\
assistant majority leader & R & R \\
city sewerage systems & L & I \\
missile guidance systems & R & L \\
household laundry products & R & R \\
community welfare resources & L & R \\
detection investigation committee & L & L \\
centuries song accompaniments & L & E \\
student achievement measurements & L & L \\
Buddhist stupa mountain & R & I \\
radar ocean surveillance & R & R \\
communications satellite organization & L & L \\
missile defense systems & R & L \\
missile defense system & R & L \\
government policy decisions & R & I \\
war crimes tribunal & L & L \\
college football player & L & I \\
protein digestion products & L & L \\
coalition civilian government & R & R \\
world property ownership & L & E \\
swine flu virus & L & L \\
swine flu virus & L & L \\
millennium Arab traders & L & E \\
tenor saxophone player & L & L \\
tenor saxophone player & L & L \\
news bureau chiefs & L & L \\
law enforcement agencies & L & L \\
law enforcement officials & L & L \\
civilian population losses & L & I \\
precision navigation systems & L & I \\
river valley communities & L & L \\
college student governments & L & L \\
development policy decisions & L & L \\
computer graphics systems & L & I \\
computer graphics system & L & I \\
tobacco mosaic virus & R & I \\
river January temperatures & L & E \\
pagan fertility goddess & R & R \\
\hline
\end{tabular}
\begin{tabular}{|l|c|c|} \hline
Noun triple & Best model & Correct Answer \\
\hline
country bumpkin nephew & L & L \\
country music revivals & L & L \\
bile pigment metabolism & L & L \\
law enforcement agencies & L & L \\
world wool production & R & R \\
vapor density methods & L & L \\
science fiction writer & L & L \\
security council action & L & L \\
Amazon frontier region & R & R \\
\hline
\end{tabular}
\end{center}
\chapter{Test Set for Noun Compound Paraphrasing}
\label{appendix:cetest}
The following list of noun compounds forms the test set
used for the experiments on noun compound paraphrasing.
A random sample of 400 was selected from the 24,251 noun
pairs extracted from Grolier's using the method described
in section~\ref{sec:ce_method}. These were then assigned
one of the following paraphrases:
\lingform{of}~(O), \lingform{for}~(R), \lingform{in}~(I),
\lingform{on}~(N), \lingform{at}~(A), \lingform{from}~(F),
\lingform{with}~(W), \lingform{about}~(T) and
non-prepositional~(X). This answer is shown in the last
column. The second column contains a code describing the
type for non-prepositional noun pairs: extraction error~(E),
verbal-nexus~(V) and copula~(B).
For comparison, the third column gives the prediction made
by the lexically parametrised statistical model using
maximum likelihood estimates (see section~\ref{sec:ce_results}
and in particular table~\ref{tb:ce_results_mle8}). This
model achieves 40\% accuracy.
\vspace{1cm}
\begin{center}
\begin{tabular}{|l|l|c|c|} \hline
Noun pair & Type & Lexical model & Correct answer \\
\hline
fusion devices & & R & W \\
computation skills & & R & A \\
pigment accumulation & V-subj & O & X \\
metallurgy industry & & I & R \\
pest species & B & F & X \\
world food & E & I & X \\
fossil fauna & B & F & X \\
civilian population & B & O & X \\
passengers hostage & E & O & X \\
government agencies & & N & O \\
sea mammals & & A & F \\
Arab seafarers & B & O & X \\
health problems & & W & O \\
deputy governor & B & O & X \\
city legislature & & I & O \\
championship bout & & I & R \\
\hline
\end{tabular}
\begin{tabular}{|l|l|c|c|} \hline
Noun pair & Type & Lexical model & Correct answer \\
\hline
magic password & E & I & X \\
carbon atoms & B-chem & W & X \\
relations agency & & R & R \\
sea lanes & & F & I \\
oxygen atoms & B-chem & W & X \\
child welfare & & O & O \\
concert music & & R & A \\
property owners & V-obj & W & X \\
disease organisms & & F & O \\
laser technology & & F & W \\
newspaper subscriptions & & R & R \\
activity spectrum & & O & O \\
antibiotic regimen & & O & O \\
baccalaureate curriculum & & O & R \\
transportation system & & R & R \\
Arab origin & B & O & X \\
Arab world & & W & O \\
concert appearances & & I & I \\
sea animals & & A & O \\
welfare agencies & & N & R \\
computer catalog & & O & N \\
hydrogen atoms & B & W & X \\
submarine mountain & E & N & X \\
hydrogen atoms & B & W & X \\
alpha particle & B-chem & O & X \\
television director & V-obj & R & X \\
anatomy professor & & O & O \\
vehicle industry & & I & R \\
machinery operations & & F & W \\
warfare equipment & & R & R \\
country estate & & I & I \\
assistant secretary & B & O & X \\
security pacts & & O & O \\
river valleys & & N & W \\
quadrant elevation & & I & I \\
banana industry & & O & R \\
jute products & & O & O \\
government patronage & & O & F \\
dairy barn & & O & R \\
battery technology & & R & R \\
football player & V-obj & N & X \\
cattle rustler & V-obj & I & X \\
trial lawyers & & R & R \\
drama critic & V-obj & R & X \\
Arab conquest & V-subj & O & X \\
warrior prince & B & O & X \\
\hline
\end{tabular}
\begin{tabular}{|l|l|c|c|} \hline
Noun pair & Type & Lexical model & Correct answer \\
\hline
cancer production & V-obj & O & X \\
protein molecules & B & F & X \\
Sunday restrictions & & N & N \\
theater history & & A & O \\
life imprisonment & & R & R \\
family members & & O & O \\
storage batteries & & R & R \\
plutonium theft & & O & O \\
union leader & & F & O \\
priority areas & & O & O \\
Buddhist laity & B & F & X \\
subsistence cultivation & & O & R \\
language family & & F & O \\
business investment & & I & I \\
business education & & I & T \\
business education & & I & T \\
climate pattern & & N & O \\
football player & V-obj & N & X \\
lieutenant governors & B & O & X \\
sector investment & V-subj & I & X \\
fossil assemblage & V-obj & W & X \\
typewriter mechanisms & & R & R \\
recreation area & & R & R \\
cattle industry & & R & R \\
cattle population & & O & O \\
ceramics products & & F & O \\
phonograph pickups & & O & R \\
monastery buildings & & A & A \\
apartment dwellers & & W & I \\
reaction mixture & & O & R \\
oxygen atoms & B-chem & W & X \\
peasant rebellion & V-subj & O & X \\
insect pests & B & N & X \\
music director & V-obj & O & X \\
city management & V-obj & O & X \\
law systems & & O & O \\
business applications & & O & I \\
mountain valleys & & N & I \\
community education & & I & T \\
logic unit & & N & R \\
computer novices & & I & W \\
computer memory & & R & R \\
application areas & & R & O \\
information sources & & O & O \\
property law & & W & T \\
wilderness areas & & F & O \\
\hline
\end{tabular}
\begin{tabular}{|l|l|c|c|} \hline
Noun pair & Type & Lexical model & Correct answer \\
\hline
convenience foods & & R & W \\
business holdings & & I & O \\
corrosion resistance & V-obj & F & X \\
world economies & & I & O \\
trio sonatas & & R & R \\
dairy cattle & & F & R \\
opera performances & V-obj & A & X \\
fiction writer & V-obj & R & X \\
mountain glaciers & & F & N \\
theater director & V-obj & A & X \\
pigment granules & & I & O \\
road competitions & & R & N \\
temperature distribution & V-obj & A & X \\
government buildings & & N & R \\
ballet genres & & I & O \\
prison poems & & I & T \\
vase paintings & & N & N \\
musk deer & & O & W \\
patron goddesses & B & W & X \\
government intervention & V-subj & I & X \\
cleavage division & B & O & X \\
food resources & B & R & X \\
cell membrane & & O & O \\
extinction theory & & T & T \\
bird droppings & & O & F \\
monkey pox & & O & I \\
hull maintenance & V-obj & O & X \\
memory system & & I & R \\
pottery vessels & & N & O \\
population density & & O & O \\
business sector & & O & O \\
kidney disease & & F & I \\
Arab unity & V-subj & W & X \\
family business & & W & O \\
decomposition reactions & B & N & X \\
quantum theory & & O & T \\
storage capacity & & R & R \\
period classifications & & O & I \\
world soul & & F & O \\
reaction products & V-subj & F & X \\
town halls & & R & R \\
life sciences & & I & T \\
plasma membrane & & F & R \\
food shortages & & O & O \\
photography movement & & R & I \\
terrorist activities & & O & O \\
\hline
\end{tabular}
\begin{tabular}{|l|l|c|c|} \hline
Noun pair & Type & Lexical model & Correct answer \\
\hline
mystery novels & & N & T \\
customs administrations & V-obj & W & X \\
antenna rods & B & O & X \\
shorthand device & & I & R \\
car odor & & O & O \\
country music & & F & I \\
food industry & & R & R \\
bile duct & & I & R \\
world wars & & I & O \\
rococo spirit & & O & O \\
trio sonata & & R & R \\
world community & & I & O \\
government war & E & W & X \\
coalition cabinet & & F & I \\
music theory & & O & T \\
Jesuit origin & & O & I \\
hardware business & & W & I \\
coronation portal & & A & T \\
savanna areas & & O & O \\
frontier problems & & N & N \\
city dwellers & & I & I \\
family tradition & & T & O \\
gestation period & & O & O \\
city population & & O & I \\
magic beings & E & W & X \\
recreation areas & & R & R \\
handicraft products & V-subj & R & X \\
unit cell & B & W & X \\
crime novelist & & W & T \\
sea monster & & W & F \\
treaty relationships & & W & O \\
business operations & V-obj & N & X \\
lieutenant governor & B & O & X \\
dominion status & & O & O \\
cancer cells & & F & W \\
child custody & & N & O \\
government intervention & V-subj & I & X \\
hotel management & V-obj & A & X \\
excavation skills & & W & R \\
life scientists & & T & T \\
impulse transmission & V-obj & O & X \\
species determination & V-obj & T & X \\
absorption hygrometers & & O & W \\
petroleum wealth & & F & O \\
transportation equipment & & R & R \\
family sagas & & T & T \\
\hline
\end{tabular}
\begin{tabular}{|l|l|c|c|} \hline
Noun pair & Type & Lexical model & Correct answer \\
\hline
mountain barrier & B & I & X \\
elites resentment & E & O & X \\
consonant systems & & R & O \\
language literature & & T & I \\
population density & & O & O \\
war captives & & A & F \\
worker satisfaction & & F & O \\
population explosion & & O & O \\
rationalist thinkers & B & O & X \\
hardware technology & & I & O \\
insurance industry & & R & R \\
intelligence community & & W & R \\
catalog illustrations & & R & I \\
transmission system & & R & R \\
war crimes & & A & I \\
production facilities & & R & R \\
uplands temperatures & E & N & X \\
theater orchestra & & A & I \\
violin concerto & & R & R \\
impeachment trial & & R & R \\
population density & & O & O \\
mountain king & E & F & X \\
policy options & & N & T \\
meat products & & F & W \\
mountain country & & F & W \\
union security & V-obj & R & X \\
business economics & & F & R \\
drainage basins & & W & O \\
coalition government & & I & I \\
luxury hotels & & R & W \\
population growth & V-subj & O & X \\
heath family & & O & O \\
customs union & & W & T \\
kerosene lamps & & O & W \\
emergency detention & & R & I \\
faculty members & & O & O \\
lieutenant governor & B & O & X \\
war secretary & & O & R \\
symphony orchestras & & F & R \\
poultry pests & & I & R \\
century Americans & E & R & X \\
war god & & O & O \\
genre painter & V-obj & O & X \\
guild members & & O & O \\
petroleum industry & & F & R \\
temperature variations & V-obj & A & X \\
\hline
\end{tabular}
\begin{tabular}{|l|l|c|c|} \hline
Noun pair & Type & Lexical model & Correct answer \\
\hline
drainage patterns & & N & O \\
minority businesses & & T & O \\
cotton cultivation & V-obj & O & X \\
majority leader & & F & O \\
money policy & & O & T \\
policy makers & V-obj & I & X \\
ancestor spirits & & O & O \\
satellite system & & O & W \\
poultry products & & R & F \\
census population & & F & N \\
petroleum products & & F & F \\
opposition coalition & & W & I \\
government policy & & O & O \\
deputy director & B & O & X \\
altitude variations & V-subj & A & X \\
cattle town & & N & R \\
dairy products & V-subj & F & X \\
fission products & V-subj & O & X \\
weapons policy & & N & N \\
protein source & & O & O \\
ocean basins & & I & O \\
choice species & & O & O \\
backwoods protagonist & & I & F \\
vibration ratio & & O & O \\
university education & & A & I \\
car driver & V-obj & I & X \\
antelope species & & O & O \\
carbon atom & B-chem & W & X \\
valve systems & & O & O \\
opposition leaders & V-obj & I & X \\
university teachers & & A & I \\
expansion turbine & & R & W \\
pagan teachings & E & O & X \\
temple portico & & N & O \\
food preparation & V-obj & R & X \\
education journals & & I & T \\
petroleum transportation & V-obj & N & X \\
chemistry laboratories & & R & R \\
petroleum products & & F & F \\
Buddhist philosophy & & F & O \\
population expansion & V-subj & O & X \\
university cabinets & & F & O \\
vortex atom & B & O & X \\
cupboard doors & & O & O \\
separation negatives & & O & F \\
election laws & & I & T \\
\hline
\end{tabular}
\begin{tabular}{|l|l|c|c|} \hline
Noun pair & Type & Lexical model & Correct answer \\
\hline
anarchist conspirators & B & O & X \\
coalition government & & I & I \\
laboratory quantities & & I & R \\
government action & V-subj & N & X \\
construction quality & & O & O \\
parent education & V-obj & F & X \\
television era & & R & W \\
letterpress composition & V-obj & O & X \\
strength properties & & O & O \\
protein synthesis & V-obj & O & X \\
frustration tolerance & V-obj & R & X \\
incubation period & & R & O \\
January temperatures & & I & I \\
frequency output & V-obj & A & X \\
aperture synthesis & V-obj & O & X \\
ratings systems & & O & O \\
railway union & & O & R \\
education movement & & R & R \\
television newscaster & & N & N \\
warbler family & & O & O \\
equivalence principle & & O & O \\
rotation period & & O & O \\
world population & & O & O \\
household refrigeration & & I & I \\
business administration & V-obj & F & X \\
office buildings & & F & R \\
affairs television & E & I & X \\
boyar duma & E-foreign & O & X \\
symphony orchestra & & F & R \\
country estate & & I & I \\
satellite system & & O & O \\
density measurements & V-obj & N & X \\
sea lions & & I & F \\
Passover festival & & O & R \\
childhood sexuality & & I & I \\
television writer & & N & R \\
sea urchins & & O & F \\
horror tale & & A & T \\
shellfish crustaceans & B & O & X \\
cargo carrier & V-obj & W & X \\
shrub competition & V-subj & F & X \\
hair follicles & & I & R \\
communications systems & & R & R \\
family connection & & W & O \\
soul music & & R & T \\
food products & & F & F \\
\hline
\end{tabular}
\begin{tabular}{|l|l|c|c|} \hline
Noun pair & Type & Lexical model & Correct answer \\
\hline
communications satellite & & R & R \\
management procedures & & R & R \\
baseball player & V-obj & I & X \\
fiber optics & & W & W \\
construction industry & & R & R \\
county town & & I & R \\
estimation methods & & I & R \\
percentage composition & & O & I \\
altitude reconnaissance & & F & A \\
trolley cars & B & O & X \\
student movement & V-subj & R & X \\
suffrage committee & & R & R \\
suspension system & & W & R \\
health standards & & R & O \\
world championships & & I & O \\
gestation period & & O & O \\
arts museum & & I & R \\
tea room & & R & R \\
lab periods & & R & I \\
communications industries & & R & R \\
carrier system & & N & W \\
television production & & N & R \\
prohibition law & & R & O \\
tenor trombone & B & O & X \\
pagan origins & E & O & X \\
Sanskrit texts & & F & I \\
area basis & & R & O \\
arts colleges & & I & R \\
terrorist activities & & O & O \\
tumor production & V-obj & O & X \\
industry revenues & & F & I \\
construction materials & & R & R \\
government officials & & F & R \\
cotton production & V-obj & R & X \\
news events & B & I & X \\
world war & & I & O \\
commonwealth status & & I & I \\
government intervention & V-subj & I & X \\
food production & V-obj & O & X \\
January temperature & & I & I \\
street scenes & & F & N \\
food industry & & R & R \\
eaves troughs & & O & N \\
lava fountains & & O & O \\
treatment systems & & R & R \\
puppet government & B & O & X \\
\hline
\end{tabular}
\begin{tabular}{|l|l|c|c|} \hline
Noun pair & Type & Lexical model & Correct answer \\
\hline
frontier community & & N & N \\
temperature differences & V-subj & A & X \\
frontier life & & N & A \\
marriage customs & & O & T \\
pertussis bacteria & B & O & X \\
sophomore year & B & O & X \\
crossroads village & & O & A \\
fiction representative & E & O & X \\
puppet regimes & B & O & X \\
settlement patterns & & R & O \\
luxury goods & B & R & X \\
theater orchestra & & A & R \\
automobile factory & & R & R \\
television series & & N & R \\
room temperature & & O & O \\
laboratory applications & & I & I \\
\hline
\end{tabular}
\end{center}
\section{Theoretical Contributions}
The theoretical component of this thesis comprises two main elements.
First, it proposes an architectural theory of statistical natural
language processing that identifies a new class of \sll\ designs.
Second, it describes work on a mathematical theory of training data
requirements for \sll\ systems that constitutes a powerful tool
for selecting such designs.
The following are brief outlines of the two theories.
\begin{description}
\item[Meaning distributions:] Existing statistical language learning models
are defined in terms of lexical and syntactic representations of language.
Probability distributions generally capture only grammatical knowledge.
The architectural theory proposed in this thesis
advocates statistical models defined in
terms of semantic representations of language.
Rather than representing grammatical knowledge probabilistically,
it views grammatical knowledge as a form of constraint.
Syntactic structures inherit
their probability distributions from semantic forms through these constraints.
The aim of this theory is to suggest new designs. Thus,
evaluation of the theory
must come from using it to build \sll\ systems and testing their
performance. The value of the theory lies in pointing out a promising
direction for exploring the design space.
\item[Data requirements:] The amount of text used to train a statistical
language learning system is crucial to its performance. Since there is no
well-known theory that can predict the amount of training data necessary,
the prevalent methodology in \sll\ research is to get as much text as you can
and see if your chosen model works.
However, practical considerations
of data availability have a strong impact on model design.
Informed navigation of the design space rests on being able to
predict data requirements.
In this thesis, a framework for
building a predictive theory is developed and several results are given that
represent the first steps toward a general theory of data requirements.
\end{description}
Both of these theories have been investigated through experiments
on statistical noun compound analysis.
\section{Experiments on Noun Compounds}
The experimental component of this work concerns noun compounds in
English. Noun compounds are common constructions
exemplified by the last three words in example~\ref{eg:intro_cn}.
\begin{examples}
\item This time, let's avoid buying those {\em styrofoam dinner plates}.
\label{eg:intro_cn}
\end{examples}
Because noun compounds are frequent, highly ambiguous and
require a great deal of knowledge to analyse,
understanding them represents an ideal problem through which
\sll\ designs can be explored.
Understanding noun compounds requires performance of at least two tasks:
\begin{description}
\item[Parsing:] First, the syntactic structure can be ambiguous.
Is \lingform{styrofoam dinner} a constituent (in the grammatical sense)
or is \lingform{dinner plates} one?
To choose between these two analyses, a parser must
incorporate knowledge about styrofoam, about dinner and about plates. This
might be encoded in the corresponding lexical entries or might be
represented as independent semantic knowledge. Regardless of how it is
stored and used, this knowledge must come from somewhere and statistical
language learning might be able to provide it.
\item[Semantic Analysis:] Second, the semantic content of noun compounds
is ambiguous. In what relation does \lingform{dinner}
stand to \lingform{plates}? In order to
understand the meaning of \lingform{dinner plates} one must be capable of
identifying the implicit relationship. Are the plates used for dinner, or
are they formed from it (as in \lingform{dinner scraps})? Once again, the
necessary knowledge about plates and about dinners must come from
somewhere.
\end{description}
An approach to parsing noun compounds using the meaning distributions
theory is reported in this thesis.
It uses the text of an encyclopedia to acquire conceptual associations
statistically and uses these to parse noun
compounds. The results show that the method exhibits a high degree of
accuracy within the limitations imposed by the formulation of the task. The
probabilistic model upon which the method is based illustrates an
application of the meaning distributions theory.
In a series of experiments designed to make empirical
comparisons, this model outperforms alternative models
based on previously proposed algorithms.
This is evidence that the new design territory identified by the
meaning distributions theory is fertile.
Experiments on a statistical method for paraphrasing noun compounds
are also reported in this thesis. This method uses
statistics about prepositional phrases to select a paraphrase
for a given noun compound
--- a strategy suggested by the meaning distributions theory.
This is the first statistical model of this problem.
The resulting performance is significantly better than the baseline,
but is difficult to evaluate both because comparable evaluation
of other approaches is rarely performed and because no applicable
theory of data requirements is available.
This lends further support to the position that a predictive theory
of data requirements is urgently needed.
\section{An Overview of the Thesis}
In chapter~\ref{ch:background}, I will review the relevant literature in two
areas, statistical language learning and noun compounds. Readers who are
familiar with the former should find no difficulty in skipping that half.
Chapters~\ref{ch:md} and~\ref{ch:dr} are
devoted to the two theoretical contributions, the
meaning distributions theory and the work on data requirements.
The experimental work is contained in chapter~\ref{ch:experimental}, which
is divided into two parts, one for each of noun compound parsing and
noun compound paraphrasing. Many of the resources used for the
paraphrasing experiment are described in the part on noun compound
parsing since the two programs share these. Finally,
chapter~\ref{ch:conclusion} summarises the conclusions of the thesis and
outlines possible extensions for future work.
\chapter{Background}
\label{ch:background}
Both statistical language learning and noun compounds
have attracted a great deal of research. This chapter
will review the work that is relevant to this thesis
on each of these two topics.
Accordingly, the chapter is divided into two halves,
the first for statistical language learning
and the second for noun compounds.
\section{Introducing Statistical Language Learning}
\label{part:sn}
In the first half of this chapter I will review prior relevant work in
statistical \nlp. My chief aim is to make it accessible, and it is therefore
fairly introductory. Readers familiar with the area can easily skip to the
summary given in section~\ref{sec:sn_review} and continue from there to the
second half of the chapter.
In section~\ref{sec:sn_motivations}, I will introduce the general
approach of statistical \nlp, in which statistics are used to
extract information from large amounts of text. This information
is intended to replace hand-coded rules, with the advantage
that it is automatically acquired.
The information generally takes the
form of parameter values within a probabilistic model. Such a model
assigns probabilities to the choices facing it in terms of
these parameters. In addition to the advantage of automatic
knowledge acquisition, a probabilistic representation also
enhances robustness.
Section~\ref{sec:sn_taggers} illustrates the approach
with Markov model taggers. These have been shown to perform
the tagging task with a high degree of accuracy. A model for parsing,
probabilistic context free grammars, is discussed in
section~\ref{sec:sn_grammars}. However, the performance of
such parsers is disappointing.
An important distinction in statistical \nlp\ is that between
supervised and unsupervised learning.
In section~\ref{sec:sn_supervised}, I will discuss both of these
and cover several important techniques that have been developed for
the latter. Without unsupervised techniques, the labour involved
in marking up training data threatens to undermine the
advantage of automatic acquisition.
The final three sections discuss approaches to enhancing probabilistic
parsers through sensitivity to words (section~\ref{sec:sn_lexical}),
specialising the model to handle particular grammatical constructs
(section~\ref{sec:sn_specialised}) and grouping words together
to reduce data requirements (section~\ref{sec:sn_conceptual}).
Each of these approaches bears strong relevance to the work
reported in this thesis.
\subsection{Motivations and Advantages of SLL}
\label{sec:sn_motivations}
It is now more than twenty years since Winograd~(1972) showed how to
build a sophisticated natural language understanding system for a limited
(and admittedly artificial) domain. Yet the field is still in pursuit of
technology that will perform similar tasks on unconstrained text. Scaling up
has proven to be extremely challenging, the requirement for ever more
knowledge being seemingly unbounded. Even if we accept for the moment
that broad coverage \nlp\ is out of reach and focus instead on restricted
domains, there is substantial effort involved in manually encoding the
requisite knowledge. Knowledge acquisition is an important process in \nlp,
one that is critical to the success of any non-trivial system.
One currently popular, but still only promising, answer is to automatically
learn from existing texts. For quite some time, linguists have been building
\newterm{corpora} (large text collections in appropriate electronic format)
often annotated with linguistic analyses, and some of these are
freely available. In addition,
there are vast volumes of electronically stored text. These resources
represent a knowledge source that, if tapped appropriately, promises barely
imaginable detail and coverage and is growing continuously. The challenge
is to collect that knowledge without having to first solve all the problems
of natural language understanding in order to get it.
The science of statistics is concerned with deriving general conclusions
about the behaviour of a system from a large amount of data, each
element of which is uncertain. Viewing language as a system and
making a few simplistic
assumptions, we can convert text into exactly this kind of data, uncertainty
arising from the simplifications. Thus, statistical methods become powerful
learning tools; applied correctly, they can extract general conclusions about
the behaviour of language. If the conclusions drawn can be arranged to
coincide with the knowledge required for \nlp\ tasks, then knowledge
acquisition can be entirely automated. The vision of feeding text in one side
and collecting knowledge at the other is a highly attractive conception of
knowledge acquisition in \nlp.
Statistical language learning techniques have rapidly risen
to occupy a substantial fraction of the research in \nlp\ over the past five
years. Starting out as a method for automatically assigning
part of speech tags to
corpora, \sll\ has spread to address a wide variety of tasks, including general
parsing (Jelinek~\etal,~1992), learning verb subcategorisations (Brent,~1993)
and thesaurus categories (Grefenstette,~1994), sense disambiguation
(Yarowsky,~1992), relative clause (Fisher and Riloff,~1992) and
prepositional phrase attachment (Hindle and Rooth,~1993) and even
anaphora resolution (Dagan and Itai,~1990). The rise of statistical methods
can also be seen as contributing to a shift in emphasis toward measuring
development costs and evaluating performance quantitatively, as embodied
by competitive meetings such as the message understanding conferences
(Chinchor~\etal,~1993).
\subsubsection*{Some foundations of statistics}
Statistics is founded on probability theory. It is impossible to give an
adequate introduction to statistics and probability theory here. An
introduction to general statistical theory can be found in Walpole and
Myers~(1993) and to categorical statistics in Agresti~(1990). For an
introduction to probabilistic representation and inference within
artificial intelligence see Pearl~(1988).
It will, however, be useful to briefly
describe some basic elements here, so that the reader unfamiliar with the
topic will be equipped with some understanding of the terminology. In
doing so I will skate over many important distinctions. Set theory will be
assumed.
A \newterm{random sample} is a set of observations, where the outcome of
each observation is unaffected by earlier or later observations in the sample.
The outcome of any given observation is called a \newterm{random
variable}, usually denoted by a capital letter such as $X$. For example, a
geologist might observe the concentration of a mineral in the soil of several
fields. Since it is reasonable to assume that the outcome in one field is not
changed by the outcome of the others, this constitutes a random sample. The
\newterm{population} from which this sample is drawn is the set of all fields
which the geologist might have picked to measure.
A \newterm{probabilistic model} is a mathematical construction that models
the behaviour of the system (in this case mineral deposits). Probabilistic
models involve one or more random variables, each of which has an event
space, the set of all possible distinct outcomes of an observation (in this case
the event space is real-valued, ranging from zero to 1 million parts per
million). In most probabilistic models of language, the event space is
discrete (say the set of all words), but can be infinite (the set of all
sentences).
For discrete event spaces, the \newterm{probability} of an outcome in the
event space is a real number between 0 and 1 (inclusive) that represents
how often we can expect that outcome to
occur.\footnote{For continuous event
spaces, we need to define probability density and probability then follows
for subsets of the event space.} It is formally written as $\Pr(X=a)$
where $a$ is the outcome. If we
imagine taking larger and larger numbers of random samples, the proportion
of observations with outcome $a$ converges in the limit to the probability of
$a$. It is easy to see that probability is defined for any subset of the
event space, simply by summing the probabilities of each event in the subset:
$\Pr(X \in \{a, b\}) = \Pr(X=a) + \Pr(X=b)$.
A \newterm{distribution} is an assignment of probabilities to each possible
outcome in the event space, where the sum of all the probabilities is 1. The
constraint that probabilities must sum to 1 is necessary because, considering
the entire event space as a subset of itself, the proportion of outcomes that
are in the event space converges to 1 (in fact, it is always 1).
In general, probabilistic models give a parameterised class of distributions
such that supplying values for the parameters fixes a particular distribution.
For example, a binomial model uses an event space with two elements (say
$a$ and $b$) and contains one parameter, $p$. Given $p$, it assigns
probability $p$ to outcome $a$ and $1-p$ to outcome $b$: that is,
$\Pr(X=a) = p = 1-\Pr(X=b)$.
A statistical learning algorithm uses a probabilistic model in order to make
predictions. To do this it first has to learn what the correct parameter values
should be. Thus, knowledge acquisition is performed by estimating values
for the parameters of a model. The process of acquiring these values from a
random sample of training data is called \newterm{parameter estimation}.
A common and simple estimation method is called the \newterm{maximum
likelihood estimator} (\acronym{mle}). This method chooses the parameter
values that give the observed sample the highest probability. That is, out of
the class of distributions given by the model, find the one that assigns the
highest probability to the observations. For binomial models, this sets $p$
to be the number of observations with outcome $a$ divided by the total
number of observations (the sample size).
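To make this concrete, the following minimal Python fragment (using an
invented sample, not data from any corpus discussed in this thesis)
computes the maximum likelihood estimate for a binomial model.
\begin{verbatim}
# A random sample over the two-element event space {a, b}.
sample = ['a', 'b', 'a', 'a', 'b', 'a', 'a', 'b', 'a', 'a']

# The MLE chooses the parameter value that assigns this sample the
# highest probability: the relative frequency of outcome a.
p = sample.count('a') / len(sample)
print(p)  # 0.7, so Pr(X=a) = 0.7 and Pr(X=b) = 0.3
\end{verbatim}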
More complex event spaces can be composed from simpler ones. A
\newterm{joint distribution} is a distribution over an event space formed by
the cartesian cross-product of other event spaces. For example, consider the
set of all words and the set of all parts of speech. Selecting a word at
random from a large corpus is an observation that defines a random variable
over the words, $W$. If we also observe the part of speech of that word,
this observation defines a random variable, $T$, over the parts of speech.
The cross-product of these two event spaces is a set of pairs $(w, t)$,
representing the possible outcomes. The distribution over this event space,
written $\Pr(W, T)$, is referred to as a joint distribution of words and
parts of speech.
It is useful to define the notion of \newterm{conditional probability}, written
$\Pr(X=a | Y=c)$. Intuitively, this represents how often we can expect $a$
to occur if we know $c$ occurs at the same time. For example,
\lingform{green} is usually an adjective, so the probability
$\Pr(T=\tag{adj} | W=\mbox{\lingform{green}})$ will be high. Without
the information that the word is \lingform{green} the probability of an
adjective $\Pr(T=\tag{adj})$ is much lower.
Finally, we say that two random variables $X$ and $Y$ are
\newterm{independent} when the probability of the joint event $(x, y)$,
that is $\Pr(X=x, Y=y)$ is equal to the product of the
two individual probabilities $\Pr(X=x)$ and $\Pr(Y=y)$.
This is mathematically equivalent to the conditional
and unconditional probabilities always being equal, that is,
$\Pr(Y=y | X=x) = \Pr(Y=y)$ for all $x$ and $y$.
Intuitively, the value of the measurement represented by $X$
has no effect on the value of that represented by $Y$.
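These definitions can be illustrated with a few lines of Python. The
counts below are invented for the example; in practice they would be
collected from a tagged corpus.
\begin{verbatim}
from collections import Counter

# Hypothetical (word, tag) tokens standing in for a tagged corpus.
tokens = [('green', 'adj'), ('green', 'adj'), ('green', 'noun'),
          ('light', 'noun'), ('light', 'adj'), ('runs', 'verb'),
          ('runs', 'noun'), ('the', 'det'), ('the', 'det'), ('a', 'det')]
N = len(tokens)

joint = Counter(tokens)                  # counts for Pr(W, T)
words = Counter(w for w, t in tokens)    # counts for Pr(W)
tags = Counter(t for w, t in tokens)     # counts for Pr(T)

# Pr(T=adj | W=green) = Pr(W=green, T=adj) / Pr(W=green)
print(joint[('green', 'adj')] / words['green'])  # 0.666..., high
print(tags['adj'] / N)                           # 0.3 = Pr(T=adj), lower

# W and T would be independent if Pr(W=w, T=t) = Pr(W=w) Pr(T=t) for
# every pair; here 2/10 != (3/10)*(3/10), so the word identity carries
# information about the tag.
\end{verbatim}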
\subsubsection*{Applying statistics to learning language}
A thorough and accessible review of probability theory in computational
linguistics can be found in Magerman~(1992) and is recommended for
anyone interested in statistical \nlp\ methods.
The general architecture for statistical language learning includes the
following essential elements.
\begin{enumerate}
\item A means of construing grammatical (or other analytical) relations as
events.
\item A parameterised model of language production in terms of such events.
\item A statistical technique for estimating the model parameters from
corpora.
\item A method for evaluating the likelihood of an event given the model
and its parameters.
\end{enumerate}
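A toy instantiation of these four elements, in Python, is sketched below.
It is not a model used anywhere in this thesis; it merely shows the four
elements working together on an invented six-word vocabulary.
\begin{verbatim}
from collections import Counter

corpus = "the dog barks the dog sleeps the cat sleeps".split()

# 1. Construe analytical relations as events: adjacent word pairs.
events = list(zip(corpus, corpus[1:]))

# 2. A parameterised model of production: Pr(next | word), with one
#    parameter per observed pair.
pair_counts = Counter(events)
word_counts = Counter(corpus[:-1])

# 3. Estimate the parameters from the corpus (maximum likelihood).
def prob(next_word, word):
    return pair_counts[(word, next_word)] / word_counts[word]

# 4. Evaluate the likelihood of an event given the model.
print(prob('dog', 'the'))   # 2/3
print(prob('cat', 'the'))   # 1/3
\end{verbatim}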
In sections~\ref{sec:sn_taggers} through~\ref{sec:sn_conceptual} below a
number of examples of probabilistic models will be discussed. Commonly
used examples of corpora include the Penn Treebank, Grolier's
encyclopedia, collections from the \publicationname{Associated
Press} newswire and the
\acronym{atis} speech corpus. The earliest corpora were built by linguists,
the most famous being the Brown corpus constructed in 1963 at Brown
University by Francis and Ku\v{c}era~(1982). This consists of slightly over
1 million words drawn from works written in 1961 and carefully selected to
represent a range of genres. As common sense suggests, the characteristics
of the corpus used require serious consideration and in
section~\ref{sec:md_register} I will return to this topic.
The size of a corpus is usually measured in words, or more properly
occurrences of words. The wordform \lingform{the} occurs many times in
any English corpus, and increases the size of the corpus each time it occurs.
The distinction between word occurrences and wordforms
is sufficiently important in \sll\ to warrant special
terminology. A \newterm{type} is a distinct value of a random variable; for
example the word \lingform{the} is a type when the random variable ranges
over words. A \newterm{token} is an instance of a type. On any particular
occasion, the momentary value of a random variable is called a token; for
example, each occurrence of the word \lingform{the} is a token.
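The distinction is easy to make concrete. In the following sketch
(assuming simple whitespace tokenisation), every occurrence of a word is a
token, while each distinct wordform is a type.
\begin{verbatim}
text = "the cat sat on the mat near the door"
tokens = text.split()   # every occurrence is a token
types = set(tokens)     # each distinct wordform is a type

print(len(tokens))      # 9 tokens
print(len(types))       # 7 types ('the' occurs three times)
\end{verbatim}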
This distinction becomes especially important when measuring
the accuracy of an \nlp\
system. Often test sets are designed to contain only distinct test cases. That
is, test cases do not occur multiple times in the test set. If a proposed
algorithm correctly handles 90\% of such a test set, then we expect it to
handle roughly nine out of every ten {\em types}. However, if some types
occur more often than others (that is, tokens of one type are more frequent
than tokens of another type), then the expected accuracy in practice could
differ greatly from 90\%. If the 10\% of types incorrectly handled represent
80\% of the tokens, then the practical expected accuracy is a mere 20\%. To
properly evaluate the expected performance of an algorithm, the test set
should contain multiple occurrences of test cases with distribution matching
that of {\em tokens}.
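The arithmetic behind this example is worth checking. Using the figures
in the text, only the tokens covered by correctly handled types are
handled correctly in practice.
\begin{verbatim}
# 90% of types are handled correctly, but the mishandled 10% of
# types account for 80% of the tokens (figures from the text).
tokens_covered_by_wrong_types = 0.8
token_accuracy = 1.0 - tokens_covered_by_wrong_types
print(token_accuracy)   # 0.2: a per-token accuracy of just 20%
\end{verbatim}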
\subsubsection*{Benefits of statistical language learning}
While I have positioned statistical learning methods as primarily motivated
by the need to automatically acquire knowledge, there are other
advantages.
\begin{description}
\item[Adaptability:] Language varies across time, place, situation and even
from individual to individual. For example, Lehman~(1992) shows that the
usefulness of a limited domain parser is significantly
increased if it adapts its
grammar to the usage of individual users. In principle, statistical \nlp\
systems automatically adapt to whatever type of language they are trained
on. If prepositional phrases attach to verbs very frequently in one domain,
then the system can learn and exploit this fact. When retrained
on text from a different domain where they usually attach nominally, the
system can change in response to the new environment. In contrast, any
hand-coded knowledge base must be explicitly modified to allow for the
new environment.
\item[Robustness:] Because of the explicit representation of uncertainty in
probabilistic representations, knowledge is rarely absolute. When
unforeseen circumstances arise, probabilistic information can be used to
make informed guesses, so that performance degrades gracefully. The idea
that linguistic restrictions should be regarded as
preferences rather than strict
rules was suggested by Wilks~(1975) and shown to provide measurable
performance improvements in message understanding by Grishman and
Sterling~(1989). Probabilistic models are a mathematically principled
representation for such preferences.
\item[Combination:] Weischedel~\etal~(1990) identify the
\newterm{combination problem}, that is the difficulty of integrating knowledge
from diverse sources into one decision procedure, as a key issue for \nlp.
Many different kinds of cues affect language interpretation. Finding
appropriate methods for making use of these cues is important, but a means
of combining the results from multiple methods is also necessary. Where
this problem has arisen in the past, many researchers have adopted
\foreign{ad hoc} weighting or scoring schemes
(see for example, McRoy,~1992). In
principle, probabilistic models provide for combination of information in a
mathematically well-founded manner. In fact, this is one of the central
arguments put forward for the use of Bayesian network representations
(Pearl,~1988:14--23).
\item[Efficiency:] The kinds of analysis pursued in \nlp\ are often formulated
in terms of searching a space that is constrained by relatively simple rules,
for example a grammar. With any moderately sized grammar, there is a
great deal of ambiguity and this results in rather unwieldy search
complexity. To build practical broad coverage natural language processors,
a means for controlling the search is needed. The classical artificial
intelligence solution is to employ a heuristic during the search to order
choices. The heuristic should rank highly those options which are most
likely to result in a successful analysis if they are chosen. This is precisely
what a probabilistic model provides: a set of empirically derived
preferences. Not only do statistical methods allow the correct answer to be
derived (because appropriate knowledge is acquired automatically), but they
also allow the analyser to arrive at the answer more quickly.
\end{description}
In summary, statistical methods are based on the mathematical foundation of
probability theory and promise to allow us to make use of the continually
growing resources of on-line texts to overcome the knowledge acquisition
problem.
There are two other approaches to knowledge acquisition which might be
viewed as alternatives to \sll. First, the \acronym{cyc} project was
established as a decade long project to build a large knowledge base of
common sense rules (Lenat~\etal,~1986). The project appears to have failed
to meet its goals at this stage, although a large amount of knowledge has
been encoded.
Second, a large number of research groups have built special purpose
systems for extracting knowledge from machine-readable dictionaries such
as the Longman Dictionary of Contemporary English (\acronym{ldoce};
Proctor,~1978). A survey is given in Lauer~(1992). Early work (such as
Chodorow~\etal,~1985) used simple patterns to extract hypernymic
relations, and was extended to extract other kinds of
lexical-semantic relationships (Markowitz~\etal,~1986; Janssen,~1990).
Sense ambiguity of words in dictionary definitions often caused errors,
leading to semi-automatic versions (Calzolari and Picci,~1989;
Copestake,~1990). Other research has applied these techniques directly to
\nlp\ tasks, such as prepositional phrase attachment (Jensen and
Binot,~1987) and sense-disambiguation (Braden-Harder,~1993). The latter
task has also been approached through both connectionist networks
constructed from dictionary definitions (Veronis~\etal,~1991) and statistical
word co-occurrence measures derived from \acronym{ldoce}
(Wilks~\etal,~1990; Guthrie~\etal,~1991; Bruce and Guthrie,~1992;
Cowie~\etal,~1992). It is interesting to note that all of the latter work
uses essentially statistical language learning techniques, using
an on-line dictionary as a corpus.
Viewed in this way, the dictionary is a tiny resource
relative to the vast amount of text typically used for \sll.
More recent work combines the information provided by dictionary
definitions with co-occurrence statistics from a corpus (Luk,~1995).
\subsection{An Example: Probabilistic Tagging}
\label{sec:sn_taggers}
In this section I will use Markov model taggers to illustrate the use of
probabilistic models. The part of speech tagging problem is to take a text
and assign to each word one of a fixed set of parts of speech such as singular
noun, adjective, determiner and so on.
In all taggers I am aware of, text is tagged
sentence by sentence on the assumption that parts of speech are never
affected by context beyond the sentence in which they are contained.
Markov model taggers were the earliest successful application of statistical
learning to \nlp, a working one being reported by De Rose~(1988). This
tagger trains on part of speech bigrams (pairs of adjacent part of speech
tags) and is therefore called a \newterm{bigram tagger}. The obvious
extension, trigram taggers (trained on part of speech triples) have found
widespread use within the research community and are generally agreed to
have effectively solved the part of speech tagging
problem.\footnote{Although significant problems remain with unknown
words. See for example, Weischedel~\etal~(1993).}
Markov model tagging involves the four-stage architecture outlined in
section~\ref{sec:sn_motivations}.
In what follows, I will consider each of
these for the bigram case. My purpose is to give an intuitive understanding
of how these models work and I will avoid mathematical detail as much as
possible. The reader desiring a deeper understanding should consult either
Charniak~(1993:Ch.~3) or Allen~(1995:Ch.~7).
\subsubsection*{Event space}
First, parts of speech must be formally construed as events. To do this we
imagine drawing a random sentence from a corpus. Suppose it contains $n$
words, $w_1 w_2\ldots w_n$, each with a part of speech, $t_1
t_2\ldots t_n$. Formally, this observation yields a joint event in the event
space $W^n \times T^n$, thus defining $2n$ random variables which we can
call $W_i$ and $T_i$ for $1 \leq i \leq n$. Given a joint distribution over
this composite event space, we can compute the conditional probability
$\Pr(t_1 t_2\ldots t_n | w_1 w_2\ldots w_n)$ for all possible tag sequences
and choose the most probable.\footnote{As is usual,
I have omitted the random variables from the probability expression here
because they can be inferred. Technically the expression should
be $\Pr(T_1=t_1, T_2=t_2\ldots | W_1=w_1,\ldots)$. Similar abuses of
notation will be made throughout this thesis.}
\subsubsection*{Probabilistic model}
Second, a probabilistic model is needed that provides the joint distribution
once certain parameters have been estimated. The bigram Markov model
does this by making certain assumptions about the way in which sentences
are generated. What the model supposes is equivalent to the following
imaginary generation process. First make $n$ choices from $T$ to create a
sequence of part of speech tags where the choice for the $i$th tag depends
only on the $(i-1)$th tag. Second, choose for each tag a word that can have
that tag, where the choice of word depends only on the tag. In this way
a sequence of tagged words is created. Each of these $2n$ choices is
probabilistic and the overall result is that an element of $W^n \times T^n$ is
probabilistically selected.
Since each element of $W^n \times T^n$ has
some probability of being selected in this way, this process defines the
required joint distribution. The theory of probability allows us to compute
this distribution if we know the distributions for the $2n$ choices.
Since the model assumes that the choices are independent of the
position within the sequence, we only require two distributions,
one for the tag choices and one for the word choices.
Formally, the distributions needed are written $\Pr(T_i | T_{i-1})$ and
$\Pr(W_i | T_i)$. The assumptions of the model state that words are only
dependent on their respective tags, and that tags are only dependent on the
previous tag.
\begin{eqnarray}
\lefteqn{\hspace{-1.5cm}
\Pr(W_i=w_i | T_1=t_1,\ldots T_n=t_n, W_1=w_1,\ldots W_{i-1}=w_{i-1},
W_{i+1}=w_{i+1},\ldots W_{n}=w_{n}) = } \hspace{3in} & & \nonumber \\
& & \Pr(W_i=w_i | T_i=t_i)
\label{eq:sn_taggers_lexgen}
\end{eqnarray}
\begin{eqnarray}
\lefteqn{\hspace{-1.3cm} \Pr(T_i=t_i | T_1=t_1,\ldots T_{i-1}=t_{i-1},
T_{i+1}=t_{i+1},\ldots T_n=t_n, W_1=w_1,\ldots W_{n}=w_{n}) = }
\hspace{3in} & & \nonumber \\
& & \Pr(T_i=t_i | T_{i-1}=t_{i-1})
\label{eq:sn_taggers_taggen}
\end{eqnarray}
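The imaginary generation process can be simulated directly. The sketch
below uses invented toy distributions (the numbers have no empirical
basis): a tag sequence is drawn with each tag depending only on its
predecessor, and then a word is drawn for each tag.
\begin{verbatim}
import random

# Toy parameters: Pr(T_i | T_{i-1}) and Pr(W_i | T_i); '<s>' marks
# the sentence start, and all numbers are invented.
tag_given_prev = {
    "<s>":  {"det": 0.7, "noun": 0.3},
    "det":  {"noun": 0.8, "adj": 0.2},
    "adj":  {"noun": 1.0},
    "noun": {"verb": 0.6, "noun": 0.4},
    "verb": {"det": 1.0},
}
word_given_tag = {
    "det":  {"the": 0.8, "a": 0.2},
    "adj":  {"green": 1.0},
    "noun": {"lad": 0.5, "fact": 0.5},
    "verb": {"eats": 1.0},
}

def draw(dist):
    # Sample one outcome from a {outcome: probability} dictionary.
    return random.choices(list(dist), weights=list(dist.values()))[0]

def generate(n):
    # Probabilistically select an element of W^n x T^n.
    tags, words, prev = [], [], "<s>"
    for _ in range(n):
        prev = draw(tag_given_prev[prev])         # depends on previous tag
        tags.append(prev)
        words.append(draw(word_given_tag[prev]))  # depends only on the tag
    return list(zip(words, tags))

print(generate(4))   # e.g. [('the', 'det'), ('lad', 'noun'), ...]
\end{verbatim}
Each of the $2n$ choices here inspects only the previous tag or the current
tag, which is precisely the pair of independence assumptions formalised
above.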
Importantly, it is easy to find examples where these assumptions are clearly
violated. In example~\ref{eg:sn_taggers_bad}\ref{eg:sn_taggers_bad_lex1},
the probability that the first word is \lingform{young} (given that all
tags and following word are known) is substantially
higher than it is in example~\ref{eg:sn_taggers_bad}\ref{eg:sn_taggers_bad_lex2},
contradicting equation~\ref{eq:sn_taggers_lexgen}. In
example~\ref{eg:sn_taggers_bad}\ref{eg:sn_taggers_bad_tag}, the
probability that the third tag (in this case the tag of \lingform{eat}) is plural
is substantially increased by the information that the first tag (of the word
\lingform{Bears}) is plural too, because the verb is required to agree in
number with its subject. This contradicts
equation~\ref{eq:sn_taggers_taggen}.
\begin{examples}
\item \label{eg:sn_taggers_bad}
\begin{subexamples}
\item young/\tag{adj} lad/\tag{noun-sing}
\label{eg:sn_taggers_bad_lex1}
\item young/\tag{adj} fact/\tag{noun-sing}
\label{eg:sn_taggers_bad_lex2}
\item Bears/\tag{noun-plur} often/\tag{adv} eat/\tag{verb-plur}
\label{eg:sn_taggers_bad_tag}
\end{subexamples}
\end{examples}
The hope, and one that works out in practice, is that these violations occur
sufficiently infrequently, and have sufficiently little effect on the tagging
decisions, that they do not undermine the accuracy of the model.
The purpose of the model is to specify the joint distribution over $W^n
\times T^n$ in terms of a small number of parameters. In this case, the
parameters are the probabilities comprising the two distributions $\Pr(T_i |
T_{i-1})$ and $\Pr(W_i | T_i)$. This brings us to the next stage, estimating
these parameters from a corpus.
\subsubsection*{Parameter estimation}
Third, applying a statistical method to data derived from a large text corpus
allows the parameters of the model to be estimated. For example, maximum
likelihood estimates can be derived for the probabilities of the distribution
$\Pr(T_i | T_{i-1})$ from a tagged corpus. This distribution contains one
probability for each pair of tags $(t_{i-1}, t_i)$, which we estimate by
counting the number of occurrences of tag bigrams, that is sequences
$t_{i-1} t_i$, in the corpus.
The maximum likelihood estimate under the Markov
model is given by equation~\ref{eq:sn_taggers_mle} in which bigram counts
are denoted $\countfn(t_{i-1}, t_i)$.
\begin{equation}
\Pr(T_i=t_i | T_{i-1}=t_{i-1}) \stackrel{\rm MLE}{=}
\frac{\countfn(t_{i-1}, t_i)}{\sum_{t \in T} \countfn(t_{i-1}, t)}
\label{eq:sn_taggers_mle}
\end{equation}
The other distribution, $\Pr(W_i | T_i)$, is estimated similarly. The process
of computing these estimates from a corpus is called training. Once trained,
the model contains detailed information about the way in which words can
be tagged and the typical sequencing of tags. As De Rose~(1988) points
out, this automatically acquired information can replace the costly
hand-coded knowledge used in earlier taggers such as \acronym{claws}
(Marshall,~1983).
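Since training amounts to counting, it is easily sketched. The fragment
below assumes a tiny tagged corpus represented as lists of (word, tag)
pairs; all figures are invented.
\begin{verbatim}
from collections import Counter

corpus = [
    [("the", "det"), ("lad", "noun"), ("eats", "verb")],
    [("a", "det"), ("green", "adj"), ("fact", "noun")],
]

bigram_counts = Counter()    # c(t_{i-1}, t_i)
prev_counts = Counter()      # the denominator, summed over next tags
word_tag_counts = Counter()  # c(w, t), for Pr(W_i | T_i)
tag_counts = Counter()

for sentence in corpus:
    prev = "<s>"             # dummy tag marking the sentence start
    for word, tag in sentence:
        bigram_counts[(prev, tag)] += 1
        prev_counts[prev] += 1
        word_tag_counts[(word, tag)] += 1
        tag_counts[tag] += 1
        prev = tag

def p_tag(tag, prev):
    # MLE of Pr(T_i = tag | T_{i-1} = prev), as in the equation above.
    return bigram_counts[(prev, tag)] / prev_counts[prev]

def p_word(word, tag):
    # MLE of Pr(W_i = word | T_i = tag).
    return word_tag_counts[(word, tag)] / tag_counts[tag]

print(p_tag("noun", "det"))   # 0.5: det precedes noun once, adj once
print(p_word("lad", "noun"))  # 0.5: the nouns seen are 'lad' and 'fact'
\end{verbatim}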
\subsubsection*{Analysis}
Finally, we are ready to apply the model to tagging fresh text.
As noted above, when presented with a sentence, say $w_1 w_2\ldots w_n$,
the tagger proceeds by computing the conditional probability
$\Pr(t_1 t_2\ldots t_n | w_1 w_2\ldots w_n)$ for
all possible tag sequences and choosing the most probable. Fortunately, it is
not necessary to explicitly construct every possible tag sequence, which
would be computationally intractable. The most probable sequence can be
found quite efficiently using the Viterbi algorithm (Allen,~1995:201--203).
This algorithm uses the probabilities of tag subsequences to prune the search
for the most probable tag sequence, and is a good example of how a
probabilistic representation can make processing more efficient.
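A sketch of the algorithm for the bigram case follows. This is the
textbook dynamic programme rather than any particular published
implementation, and it assumes the two estimated distributions are
available as functions (in practice smoothed, so that unseen events do not
receive zero probability).
\begin{verbatim}
def viterbi(words, tags, p_tag, p_word):
    # p_tag(t, prev) estimates Pr(T_i=t | T_{i-1}=prev);
    # p_word(w, t) estimates Pr(W_i=w | T_i=t).
    # best[i][t]: probability of the best tag sequence for
    # words[0..i] ending in tag t; back[i][t] is its predecessor.
    best = [{t: p_tag(t, "<s>") * p_word(words[0], t) for t in tags}]
    back = [{}]
    for i in range(1, len(words)):
        best.append({})
        back.append({})
        for t in tags:
            # Extend the best path ending in whichever previous
            # tag maximises the product of probabilities.
            prev = max(tags, key=lambda p: best[i-1][p] * p_tag(t, p))
            best[i][t] = best[i-1][prev] * p_tag(t, prev) \
                         * p_word(words[i], t)
            back[i][t] = prev
    # Recover the sequence by following the back pointers.
    tag = max(tags, key=lambda t: best[-1][t])
    sequence = [tag]
    for i in range(len(words) - 1, 0, -1):
        sequence.append(back[i][sequence[-1]])
    return list(reversed(sequence))
\end{verbatim}
With suitably smoothed estimates, calling this function on the words of a
sentence and the full tag set returns the most probable tag sequence
without enumerating every sequence explicitly.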
To give an idea of how successful this approach to tagging is,
Weischedel~\etal~(1993) report that their trigram tagger, trained on the 4
million word Penn Treebank, achieves 97\% correctly tagged words. Since
human annotators disagree on about 3\% of tags, this rivals human tagging
accuracy. This degree of success should be qualified though by noting that a
system which gives each word its most common tag regardless of context
yields 91\% accuracy (Charniak,~1993:49).
Because training data can be unreliable, it is often beneficial to combine the
predictions of more than one probabilistic model, using a process called
\newterm{smoothing}. This is especially important when data is sparse,
leading to statistical uncertainty in parameter estimates. For example, even
though the trigram tagging model is more sensitive to context than the
bigram model described above, it has many more parameters, so it requires
more training data to acquire accurate parameter settings. When data is
limited, smoothing a bigram and a trigram model together can
result in greater performance than using either on their own.
A frequently used smoothing method is called \newterm{deleted
interpolation}, where the probabilities predicted by two or more
models are combined together by a weighted summation (Jelinek,~1990:456).
The weightings given to each model, called the \newterm{interpolation
coefficients}, are optimised empirically.
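For the bigram and trigram tagging models above, the interpolated estimate
takes roughly the following form (a sketch only; formulations vary between
systems), where the interpolation coefficients $\lambda_2$ and $\lambda_3$
sum to one.
\begin{equation}
\Pr(t_i \,|\, t_{i-1}, t_{i-2}) \approx
\lambda_3 \Pr\nolimits_{\rm tri}(t_i \,|\, t_{i-1}, t_{i-2}) +
\lambda_2 \Pr\nolimits_{\rm bi}(t_i \,|\, t_{i-1})
\end{equation}
A trigram probability of zero (an unseen trigram) is thus smoothed towards
the more reliably estimated bigram probability.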
There are many variations on smoothing but a
survey of them would take us too far afield. For now it
suffices to introduce the basic idea.
The promising start made by probabilistic taggers has inspired a great deal
of work in applying statistical language learning to problems other
than part of speech tagging.
A more challenging application of the \sll\ approach is probabilistic parsing,
which I will now introduce.
\subsection{The Challenge of Probabilistic Grammars}
\label{sec:sn_grammars}
Large scale grammars often generate many possible readings for a sentence
and submitting all of these for semantic processing is computationally
infeasible. Instead, given a part of speech tagged sentence and a grammar,
we would like to find the single most likely parse for the sentence according
to the grammar. To do this, we need a model of the distribution of parse
trees, called a \newterm{probabilistic grammar}. It is then a matter of
computing the probability of each possible parse for the sentence and
selecting the parse with highest probability. Parsers for probabilistic
grammars also benefit from the guidance of the probabilistic model and can
be expected to perform better as a result.
The simplest approach is to associate probabilities with each application of a
rewrite rule in a context free grammar. This approach,
\newterm{probabilistic context free grammar}, has been studied in some
detail (Jelinek~\etal,~1992) and is a good example of how statistical models
can be structured in more sophisticated ways than simple sequences.
A context free grammar gives various possible rewrite rules for each
non-terminal symbol, representing the choice available to a generator. A
{\em probabilistic} context free grammar construes
each application of a rewrite rule as an event.
It incorporates one distribution for each non-terminal,
representing the probability of each possible rewrite rule with that
non-terminal on the left-hand side. For example, suppose that the possible
rewrite rules with \nonterm{np} on the left-hand side are those in
example~\ref{eg:sn_grammars}.
\begin{examples}
\item \label{eg:sn_grammars}
\begin{subexamples}
\item (0.4) \nonterm{np} $\rightarrow$ \nonterm{det} \nonterm{n}
\item (0.2) \nonterm{np} $\rightarrow$
\nonterm{det} \nonterm{ap} \nonterm{n}
\item (0.1) \nonterm{np} $\rightarrow$ \nonterm{n} \nonterm{pp}
\item (0.3) \nonterm{np} $\rightarrow$ \nonterm{n}
\end{subexamples}
\end{examples}
The probabilities given at the left of each rule represent the chance of
observing that an \nonterm{np} randomly selected from all parses is
rewritten by that rule. Notice that the probabilities add to 1 since these
rules are all of the allowed rewrite rules for an \nonterm{np}.
Now each interior node of a parse tree represents the application of one rule,
whose left-hand side is the node's label. The probabilistic context free
grammar model assumes that the probability of applying a rule depends only
on the non-terminal being rewritten. The choice at each node is made
independently of the choices at the other nodes. Probability theory then
states that the probability of the parse tree as a whole is the product of the
probability of each choice.
\begin{equation}
\Pr(T) = \prod_{n \in \mbox{\it interior}(T)} \Pr(\mbox{\it rule}(n))
\end{equation}
Thus, we have a distribution over the set of all possible parse trees.
The parameters of the model are the rule probabilities, so there is one
parameter for each grammar rule. A simple way to estimate these
parameters is to take a parsed corpus and use maximum likelihood to
estimate the probability of each rule. For example, to find the probability of
\nonterm{np} $\rightarrow$ \nonterm{det} \nonterm{n}, count the number
of times that this rule is used in parses in the corpus and divide by the
number of times an \nonterm{np} occurs.
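Both the scoring and the estimation steps are simple to sketch. The
fragment below assumes parses are represented as nested tuples with the
non-terminal first; the trees and counts are invented for illustration.
\begin{verbatim}
from collections import Counter

# A parse as a nested tuple: (label, child, ...); leaves are strings.
tree = ("np", ("det", "the"), ("n", "lad"))

def rules(tree):
    # Yield the rewrite rule applied at each interior node.
    if isinstance(tree, tuple):
        label, children = tree[0], tree[1:]
        yield (label, tuple(c[0] if isinstance(c, tuple) else c
                            for c in children))
        for c in children:
            yield from rules(c)

# MLE from a toy parsed corpus: for each rule, count its uses and
# divide by the number of occurrences of its left-hand side.
corpus = [tree, ("np", ("n", "lad"))]
rule_counts = Counter(r for t in corpus for r in rules(t))
lhs_counts = Counter(lhs for (lhs, rhs) in rule_counts.elements())

def p_rule(rule):
    return rule_counts[rule] / lhs_counts[rule[0]]

def p_tree(tree):
    # The probability of a parse is the product over interior nodes.
    p = 1.0
    for r in rules(tree):
        p *= p_rule(r)
    return p

print(p_tree(tree))   # 0.5 in this toy corpus
\end{verbatim}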
Once these probabilities have been estimated, they can be used not only to
rank the possible parses, but also to guide parsers in their search for the best
analysis. The Inside algorithm (Jelinek~\etal,~1992) provides an efficient
way to compute the probability that a given non-terminal spans a given word
string. This can be used to guide a chart parser (see~Allen,~1995:213--215,
for an example). This approach forces the parser to use a bottom-up
method; however Corazza~\etal~(1994) show how to generalise the
technique to guide an island-driven parser.
There are other probabilistic models of parsing, each varying in the way in
which grammatical structure is construed as events. For example,
Seneff~(1992) builds augmented transition networks (a grammatical
representation described in Woods,~1986) from a context free grammar and
takes the transitions within these networks to be events. Whenever more
than one transition is available in the network, a probability distribution is
estimated, and these distributions are used to define the probability of
different possible parse trees.
Briscoe and Carroll~(1993) explore a similar idea with an
\acronym{lr}-parser. Here \acronym{lr}-parse actions are the events
represented probabilistically by the model. The \acronym{lr}-parse table is
first constructed from a generalised phrase structure grammar (Pollard and
Sag,~1987) and then the probabilities of different parse actions are
estimated. Both this and Seneff's~(1992) model have the advantage of being
integrated with efficient parsing systems making guidance of parsing direct
and simple. Training is performed by running the parser over an already
parsed corpus and counting the appropriate parse events.
A somewhat more radical approach is to associate probabilities with
arbitrary parse subtrees. In the data-oriented parsing (\acronym{dop})
approach, the grammatical events are compositions where one subtree is
inserted into another (Bod,~1993). The probability of a parse is derived
from the probabilities of subtrees. However, unlike probabilistic
context free grammars, the decomposition is not required to
continue all the way down to the level of individual parts of speech.
Large subtrees that appear in the training corpus are
associated with additional probability that is not derived from their parts.
Every subtree occurring in the corpus, including entire parsed sentences, is
associated with a probability. The probabilistic model assigns probabilities
to parses according to the likelihood of the subtrees that they contain, but it
favours shorter sequences of compositions. This encourages larger subtrees
to be composed, so that common phrases can be treated as atomic
events, rather than being decomposed.
One problem with this approach is that there is no efficient
algorithm for computing the probability of a parse. To overcome this,
Bod~(1993) uses Monte Carlo simulations to choose the most likely parse;
however, this strategy could not be used to guide a parser because it is too
computationally expensive.
In all these approaches, probability is associated with constituents, that is,
contiguous word sequences that form phrases and clauses. This is because
they are based upon grammatical formalisms which emphasise constituency.
An alternative grammatical framework, dependency grammar
(see for example, Mel'cuk,~1988),
emphasises grammatical relationships that can span
non-contiguous word groupings. Milward~(1992) argues that this
representation facilitates incremental construction of semantic forms. The
distinction between constituent-based and dependency-based grammar will
become important later in this thesis. Therefore, it is worth mentioning two
proposals for probabilistic grammars based on dependency relations rather
than constituents.\footnote{Carroll and Charniak~(1992) call their
model a probabilistic dependency grammar, but the model is actually
based on constituency. I will discuss this work in
section~\ref{sec:sn_supervised}.}
\begin{itemize}
\item Link grammar is based on typed dependency relations. A probabilistic
model for this grammar formalism has been developed by
Lafferty~\etal~(1992).
\item Charniak~(1993:130--134) proposes a variant of the probabilistic
context free model that conditions on the head of constituents, thus capturing
dependency relations within a constituent-based formalism.
\end{itemize}
Both of these proposals involve the probabilities of grammatical structures
being conditioned on words (as opposed to merely parts of speech).
Unfortunately, no implementation of either has been reported
at this time.
\subsection{Supervised and Unsupervised Training}
\label{sec:sn_supervised}
All the training methods considered so far use a corpus that is annotated with
answers. Taggers use tagged training data and parsers use parsed training
data. This is called \newterm{supervised} learning. As one might imagine,
annotating a corpus with answers involves substantial effort. Since one key
motivation for statistical language learning is to avoid the effort of manually
coding knowledge, reliance on supervised learning runs the risk of costing as
much as it saves.
While some medium-sized parsed corpora are available, the effort involved
in constructing them is enormous and relies on semi-automatic methods to
be feasible (see for example, Marcus~\etal,~1993). If \sll\ is to tap into the
enormous volumes of on-line text, unsupervised techniques must be
developed. In this section I will consider some of the methods used to train
probabilistic models using unannotated data.
\subsubsection*{Expectation maximisation}
The most common unsupervised algorithm for Markov model taggers is
called the \newterm{forward-backward} method, a member of the class
called \newterm{expectation maximisation} algorithms. These algorithms
iterate through a sequence of trained models, using earlier models in the
sequence to modulate the training of later models. To initiate the process, a
simple probabilistic model is chosen that assigns some non-zero probability
to each possible tag sequence for a sentence.
Now each sentence of the untagged training data can be associated with a set
of possible tag sequences, each with a probability according to the initial
model. One can view this as a kind of tagged data where each word has
several tags, each with a probability weighting, a kind of weighted
superposition of tags. From this data, a new model can be built using counts
of these superimposed tags. The counts are weighted by the probabilities of
the tags. For example, suppose that the initial model gives all allowed tags
for a word equal probability. The sentence in
example~\ref{eg:sn_supervised_init} might then have the probabilistic
superposition of tags shown.
\begin{examples}
\item
All/\{$\tag{noun}_{0.33}$, $\tag{det}_{0.33}$, $\tag{adv}_{0.33}$\}
bears/\{$\tag{verb}_{0.5}$, $\tag{noun}_{0.5}$\}
eat/\{$\tag{verb}_{1.0}$\} \label{eg:sn_supervised_init}
\end{examples}
The data used to construct the new model will then include the tag triple
(\tag{det}, \tag{noun}, \tag{verb}) with weighted count 0.17, as well as the
five other triples. These counts are combined with those from all the other
sentences in the training corpus, and used to estimate the parameters of the
tagging model. For example, this might lead to a model that assigns the
probabilities given in example~\ref{eg:sn_supervised_next}.
\begin{examples}
\item All/\{$\tag{noun}_{0.1}$, $\tag{det}_{0.6}$, $\tag{adv}_{0.3}$\}
bears/\{$\tag{verb}_{0.3}$, $\tag{noun}_{0.7}$\}
eat/\{$\tag{verb}_{1.0}$\} \label{eg:sn_supervised_next}
\end{examples}
Now this new model can be used to provide weighted counts for another
iteration, leading to a third model and so on. The process stops when the
probabilities assigned by the model converge to stable values.
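One iteration of this weighted-count idea can be sketched as follows. The
simplification here treats each word's tag distribution independently (the
actual forward-backward algorithm derives the weights from whole-sequence
probabilities), but it shows how fractional counts arise.
\begin{verbatim}
from collections import Counter
from itertools import product

# Each word carries a weighted superposition of tags, as in the
# example above.
sentence = [
    ("All",   {"noun": 0.33, "det": 0.33, "adv": 0.33}),
    ("bears", {"verb": 0.5, "noun": 0.5}),
    ("eat",   {"verb": 1.0}),
]

# Accumulate weighted counts over all possible tag sequences; the
# weight of a sequence is the product of its tags' probabilities.
triple_counts = Counter()
tag_options = [list(tags.items()) for _, tags in sentence]
for assignment in product(*tag_options):
    weight = 1.0
    for _, p in assignment:
        weight *= p
    triple_counts[tuple(t for t, _ in assignment)] += weight

print(triple_counts[("det", "noun", "verb")])  # about 0.165 (the 0.17 above)
\end{verbatim}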
The choice of initial model does have a significant effect on the performance
of the final model. Merialdo~(1994) uses the uniform initial model (as in
example~\ref{eg:sn_supervised_init}) and reports that tagging accuracy
converges to 86.6\% after 10 iterations. When the initial model is provided
by maximum likelihood estimates from 100 sentences of supervised data, the
accuracy converges to 92.6\%, so there is an undesirable sensitivity to the
initial model chosen.
Also, unsupervised training with the forward-backward algorithm falls short
of the performance achieved by supervised training, at least with the amount
of training data Merialdo~(1994) uses. He reports that
a smoothed trigram model using
maximum likelihood estimates achieved 97.0\% accuracy on the same test
set and training data.
For probabilistic parsing, there is an analogous algorithm called the
\newterm{inside-outside} algorithm (Jelinek~\etal,~1992). The method
begins with an initial probabilistic grammar which is used to assign each
sentence of the unparsed training corpus a weighted set of possible parses.
These are used to generate weighted counts of rule applications and then a
new model is estimated from these counts. Again, the process is iterated
until the probabilities converge.
Fujisaki~\etal~(1991) report promising results with 72 out of 84 sentences
parsed correctly (apparently using the inside-outside algorithm, though this is
not directly stated). However, the test sentences also appeared in the
training corpus.
Furthermore, other researchers have not been able to achieve similar
results even with supervised methods.
\begin{citedquote}{Allen,~1995:212}
Unfortunately, a parser built using [a supervised probabilistic context free
grammar] turns out not to work as well as you might expect, although it does
help. Some researchers have found that these techniques identify the correct
parse about 50 percent of the time. It doesn't do better because the
independence assumptions that need to be made are too radical.
\end{citedquote}
Therefore, there is still a wide margin for improvement in
statistical parsing models.
\subsubsection*{Grammar acquisition}
The inside-outside algorithm assumes that a context free grammar is
available. Several researchers have suggested that a grammar could be
automatically acquired by, for example, beginning with a grammar
containing all possible rewrite rules and eliminating those that the
inside-outside algorithm assigns very low probability to. An obvious
problem is the huge number of possible rules. A principled way of limiting
the possible rules that is inexpensive in terms of labour is required.
Carroll and Charniak~(1992) explore the possibility of limiting the allowed
rules to those which could be expressed by a dependency grammar. That is,
there is exactly one non-terminal symbol for each terminal symbol and all
rules rewriting a non-terminal symbol must include the corresponding
terminal symbol on the right-hand side. In their experiments, a probabilistic
grammar of this kind is used to generate an artificial corpus. The goal is to
use the inside-outside algorithm to automatically reconstruct the original
grammar. They conclude that further constraints (they give a set based on
grammatical dominance relations) are necessary for learning to converge to
the original grammar. Though they refer to their grammar as a probabilistic
dependency grammar, this is slightly misleading because the grammatical
events are constituent rewrites, not dependency relations. The probabilistic
model is identical to that for a probabilistic context free grammar.
Briscoe and Waegner~(1992) take a similar tack, but allow two
non-terminals for each terminal using the framework of
Jackendoff's~(1977) \={X} theory. They
also incorporate number and gender features, and enforce some general
agreement constraints using a unification scheme. Again, an artificially
generated corpus is used; the results are similarly mixed. It appears that
something more is needed.
\subsubsection*{Noise elimination}
Apart from the iterative approach offered by expectation maximisation
algorithms, there are other unsupervised learning techniques. For example,
Brent~(1993) uses a combination of simple patterns and statistical
hypothesis testing to acquire verb subcategorisation frames. Patterns based
only on closed class words and morphemes are used to identify words that
appear to be verbs in each of 6 syntactic configurations (simple transitive,
infinitive, etc.) with high certainty. Because the patterns are simple, they
generate false positives, but most of the time they are correct. By matching
a binomial distribution to the resulting data, an estimate of the error rate of
each pattern is derived. This is then used to test the hypothesis that the
observed data might have been caused purely by errors. When this
hypothesis can be rejected, the appropriate verb frame is added to the
lexicon for that verb.
Finally, there is an altogether different technique that is of significance to
this thesis. Yarowsky~(1992) uses an on-line thesaurus to exploit
known word equivalences. His automatic sense disambiguation
mechanism aims to use word co-occurrences to select an appropriate sense.
For each target sense, a profile is collected of all the words that occur within
50 words (on either side) of that sense in the training corpus. When
presented with an ambiguous word, it matches the profiles of the possible
senses and chooses the sense with the greatest information value. For
example, the animal sense of \lingform{crane} has strong association (a
large information score) with \lingform{species}, \lingform{nest} and so on,
while the machinery sense of \lingform{crane} has strong association with
\lingform{machine}, \lingform{engine} and so on. Upon encountering the
word \lingform{crane}, Yarowsky's algorithm computes the total
information score of all surrounding words for the animal sense and for the
machinery sense, and chooses whichever sense scores higher.
The problem is that without a sense tagged corpus, it is not possible to
collect the profiles for each sense of an ambiguous word. The training
algorithm will not be able to distinguish between occurrences of the animal
sense and those of the machinery sense in the training corpus. Yarowsky's
solution is to treat every occurrence of \lingform{crane} as being both the
machinery and the animal sense, but then add together the profiles of
synonymous words. All the profiles of words in the Roget's thesaurus
category \concept{tools/machinery} are combined to form a profile for the
machinery sense of \lingform{crane} (and for the appropriate sense of other
words in that category). Now all of the word co-occurrences with the animal
sense of \lingform{crane} will be wrongly added into the profile for the
machinery sense. However, this noise will be relatively small since there are
many words in the \concept{tools/machinery} category. While the other
words in that category may also be sense ambiguous and therefore contribute
further noise, these different sources of noise will be scattered throughout
the profile. On the other hand, the correct co-occurrences (the signal)
should be similar for all words in the same category, and therefore the signal
should become concentrated and outweigh the noise.
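The disambiguation step itself reduces to summing scores over the context.
A sketch with invented profiles follows; the real system derives these
scores from co-occurrence statistics and the thesaurus categories, as
described above.
\begin{verbatim}
# Invented information scores for two senses of 'crane'.
profiles = {
    "animal":    {"species": 2.1, "nest": 1.8, "water": 0.9},
    "machinery": {"machine": 2.4, "engine": 1.7, "site": 1.1},
}

def choose_sense(context_words):
    # Pick the sense whose profile best matches the context.
    def score(sense):
        return sum(profiles[sense].get(w, 0.0) for w in context_words)
    return max(profiles, key=score)

print(choose_sense(["the", "nest", "of", "a", "species", "of", "crane"]))
# 'animal'
\end{verbatim}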
Yarowsky applies this method to some 5000 occurrences of 12 polysemous
words using Grolier's encyclopedia as a corpus.\footnote{This corpus
contains 10 million words.} The algorithm predicts the correct sense for
92\% of these, which is very successful considering that the only linguistic
knowledge used is the synonym groupings provided by Roget's thesaurus. If
similar performance could be squeezed from unsupervised learning methods
on other natural language tasks then the need for
manually coded knowledge would
be enormously reduced.\footnote{Unsupervised training for sense
disambiguation has attracted quite a bit of attention. Hearst~(1991) uses a
small amount of manually sense tagged data to train an initial model and
uses this to provide more training data, in a similar vein to
Merialdo's~(1994) tagger. Sch\"{u}tze~(1992) uses singular value
decomposition to discriminate senses on the basis of word
co-occurrences. Biber~(1993b) uses statistical factor analysis to discover
word senses from collocations.}
It is now time to take stock. This chapter began by introducing
statistical language learning as an approach to \nlp
based on probabilistic models. These models
assign probabilities to linguistic analyses in terms of their parameters,
which in turn can be estimated from a corpus. In this way, \sll\ systems
automatically acquire the knowledge necessary to their objectives.
Probabilistic taggers have been used to illustrate this approach.
These taggers use models that assign probabilities to
part of speech sequences in terms of tag $n$-grams, and achieve
high accuracy rates. However, extension of these ideas to
probabilistic context free grammars has been less successful.
We have also seen a range of techniques designed to
allow unsupervised learning so that training corpora need not
be manually annotated.
These are all core topics in statistical language learning.
In the next three sections,
I will describe research that is more specifically relevant to the
work reported in this thesis.
Each of these sections covers an approach to enhancing probabilistic
parsing: lexical sensitivity, specialised parsers and class-based
models.
\subsection{Using Lexical Statistics}
\label{sec:sn_lexical}
In the taggers and probabilistic grammars discussed so far, the only
sensitivity to words is through parts of speech.\footnote{As noted above, the
proposals of Charniak~(1993) and Lafferty~\etal~(1992) do involve
additional lexical sensitivity.} Once the part of speech for a word has been
identified, the probabilistic models do not distinguish between different
words with the same part of speech. Yet different words with the same part
of speech can behave in radically different ways. For example, in the
tagging task, the plural noun \lingform{people} is more likely to be followed
by a modal auxiliary than the plural noun \lingform{rocks}, as suggested
by example~\ref{eg:sn_lexical_modals}.
\begin{examples}
\item \label{eg:sn_lexical_modals}
\begin{subexamples}
\item People can be more efficient if they want to be.
\item People might buy more cars.
\item People should reuse plastic bags.
\item People will have to adapt.
\item Rocks can be difficult to move.
\end{subexamples}
\end{examples}
Grammatical structure is also lexically sensitive. For example, probabilistic
context free grammars are incapable of distinguishing the difference in
structure between the two noun compounds in
example~\ref{eg:sn_lexical_cns}, because both involve exactly two
applications of the phrase structure rule
\mbox{\={N} $\rightarrow$ \={N} \={N}}.
\begin{examples}
\item \label{eg:sn_lexical_cns}
\begin{subexamples}
\item{[}breakfast [currant cake ] {]}
\item{[}[cereal bowl ] manufacturer {]}
\end{subexamples}
\end{examples}
Other probabilistic grammars (for example, Briscoe and Carroll,~1993:30)
can distinguish the two structures, but always prefer one structure over the
other regardless of the words involved. This is because they ultimately deal
with parts of speech rather than with words. Their language models
treat the occurrence of all words used in the same part of speech
identically. More sensitive language models are necessary to analyse noun
compounds, since both structures illustrated by
example~\ref{eg:sn_lexical_cns} are frequently used.
Furthermore, lexical sensitivity is a widespread property of language, not
limited to esoteric parts of the grammar. In fact, it lies at the root of the
entire lexicalist movement in formal linguistics. Therefore, I shall now turn
to lexicalised probabilistic language models, those that account for the
different effects of different words used in the same part of speech.
One class of such models is based on the notion of word association.
Here, the language model contains one parameter for every pair of
words. Each of these parameters is referred to as the degree of
association between the two words. The underlying assumption is
that most of the time only highly associated words will occur in the
grammatical relations of interest, and that therefore a parser should
pursue an analysis that relates associated words before one that does
not. Naturally the question of determining which words are
associated (that is, estimating the parameters) is crucial. Different
methods for measuring association will correspond to different
underlying assumptions about the likelihood of certain grammatical
relations.
An example of this is the \newterm{distituents} model described in
Marcus~(1991). In this account, parsing is driven by the search for
constituent boundaries. It is assumed that words within a single
constituent are generally more likely to be adjacent throughout the
corpus than those on either side of a major boundary. Association is
therefore measured by word adjacency within the training corpus, and
parsing proceeds by marking boundaries within a sentence wherever
the association between adjacent words is lowest. The association
measure used is mutual information which, intuitively speaking,
captures the degree to which one word can be predicted given the
other.
\begin{equation}
{\cal MI}(X, Y) \stackrel{\rm def}{=}
\log{
\frac{\countfn(X, Y)}
{\countfn(X) \times \countfn(Y)}
}
\end{equation}
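Given adjacency counts, these association scores are directly computable.
The sketch below uses invented counts; because the counts are not
normalised, only comparisons between scores are meaningful, which is all
that boundary marking requires.
\begin{verbatim}
import math
from collections import Counter

# Invented adjacency and word counts for illustration.
pair_counts = Counter({("read", "a"): 30, ("a", "book"): 50})
word_counts = Counter({"read": 100, "a": 5000, "book": 120})

def mutual_information(x, y):
    # log c(x, y) / (c(x) * c(y)), as in the definition above.
    return math.log(pair_counts[(x, y)]
                    / (word_counts[x] * word_counts[y]))

# A constituent boundary would be marked where association is lowest.
print(mutual_information("read", "a"))
print(mutual_information("a", "book"))
\end{verbatim}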
Church and Hanks~(1989) also use mutual information. But rather than
simply counting adjacent words in the corpus, they preprocess
the corpus with a parser and count words in certain grammatical
relations. For example, given the phrase \lingform{read a book},
Marcus~(1991) would associate \lingform{read} with \lingform{a}
and \lingform{a} with \lingform{book}. Church and Hanks~(1989)
would first parse to reveal the verb-object relation, and then associate
\lingform{read} with \lingform{book}. Because their model is sensitive
to the grammatical structure of text, it is capable of making more
sophisticated predictions. The underlying assumptions (for example, that
the subject of a verb is most likely to be a word which is used frequently as
the subject of that verb in the corpus) are more realistic than those of
Marcus~(1991). However, they require a parsed training corpus.
Also, Church and Hanks do not aim to improve parsing
performance, instead proposing the system as a tool for lexicographers.
What is required is a lexically sensitive probabilistic model that incorporates
some syntactic structure. Church~\etal~(1991a) take a similar position.
\begin{citedquote}{Church~\etal,~1991a:110}
Syntactic constraints, by themselves, though are probably not very
important. \dots On the other hand, collocational factors (word
associations) dominate syntactic ones so much that you can easily measure
the influence of word frequency and word association norms on lexical
retrieval without careful controls on syntax. \dots We believe that syntax
will ultimately be a very important source of constraint, but in a more
indirect way. As we have been suggesting, the real constraints will come
from word frequencies and collocational constraints, but these questions will
probably need to be broken out by syntactic context.
\end{citedquote}
The theory proposed in chapter~\ref{ch:md} of this thesis concurs
with this suggestion. According to this theory,
processing is driven primarily by lexical-semantics,
mediated by syntax.
\subsection{Specialising Probabilistic Parsers}
\label{sec:sn_specialised}
The probabilistic parsing models covered so far are intended to represent
knowledge about the entire grammar. More recently, some work has looked
at modelling specific grammatical constructs, focusing on those parts of the
language that pose the most difficulty for traditional \nlp.
A lexically sensitive approach of this kind, and one to which this
research owes much, is that of Hindle and Rooth~(1993). Their
system focuses on prepositional phrase attachment ambiguity, a
problem that has long been regarded as one of the more difficult in \nlp.
In particular, they considered the choice presented to a parser when a
transitive verb construction is followed by a prepositional phrase.
Three examples based on theirs appear in
example~\ref{eg:sn_specialised_pps}.
\begin{examples}
\item \label{eg:sn_specialised_pps}
\begin{subexamples}
\item Germany sent tanks into Poland.
\label{eg:sn_specialised_pps_verbal}
\item They ignored doors into the kitchen.
\label{eg:sn_specialised_pps_nominal}
\item They mined the roads along the coast.
\label{eg:sn_specialised_pps_indeterm}
\end{subexamples}
\end{examples}
The parser must choose whether \lingform{into Poland} qualifies the
verb \lingform{send} or the noun \lingform{tanks}. These two possibilities
are illustrated by
examples~\ref{eg:sn_specialised_pps}\ref{eg:sn_specialised_pps_verbal}
and~\ref{eg:sn_specialised_pps}\ref{eg:sn_specialised_pps_nominal}
respectively.
Hindle and Rooth's~(1993) approach defines an association between
prepositions and their attachment points (in the example, \lingform{into} is
associated with \lingform{sent} and \lingform{doors}, but not
\lingform{tanks} or \lingform{ignored}).
Given an ambiguous prepositional phrase, their system compares two
association values: the preposition-verb association and the
preposition-object association. To measure the association values, they use
a corpus of 13 million words of newswire text.
For such a large training corpus, an unsupervised learning method is
required. Their system exploits a small amount of grammatical knowledge
to find unambiguous prepositional phrase attachments and uses these to
acquire initial estimates, which are then used to choose an attachment for
ambiguous examples. These latter examples form further training data so
that the association values can be cyclically refined. The end result is a set
of associations which select the correct prepositional phrase attachment in
almost 80\% of the test cases.
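The attachment decision itself is a simple comparison once the
associations have been estimated. A sketch follows, with invented scores
standing in for Hindle and Rooth's estimates.
\begin{verbatim}
# Invented association scores between a preposition and its two
# candidate attachment sites.
association = {
    ("send", "into"):  1.9,   # preposition-verb association
    ("tanks", "into"): 0.2,   # preposition-object association
}

def attach(verb, obj, prep):
    # Attach to whichever site is more strongly associated with
    # the preposition.
    verbal = association.get((verb, prep), 0.0)
    nominal = association.get((obj, prep), 0.0)
    return "verbal" if verbal >= nominal else "nominal"

print(attach("send", "tanks", "into"))   # 'verbal'
\end{verbatim}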
In performing this study, Hindle and Rooth encounter two interesting
phenomena which are also relevant to the work described here. First,
they observe that some examples exhibit what they call \newterm{semantic
indeterminacy}. This arises when the meanings corresponding to each
attachment are the same. For instance, in
example~\ref{eg:sn_specialised_pps}\ref{eg:sn_specialised_pps_indeterm}
above, it makes no difference in most contexts whether we speak
of the roads along the coast being mined or of the roads being mined
along the coast. It would seem that the syntactic representation
makes distinctions which are not necessary to understanding. A
similar effect arises in compound nouns, as in \lingform{city sewerage
systems}. Hindle and Rooth discard such examples from their test set, a
policy also followed in the experimental work in
chapter~\ref{ch:experimental}.
Second, Hindle and Rooth point out that many prepositional phrase
attachments are ambiguous even for human readers. The
oft-cited example of seeing a man with a telescope illustrates a
case where both analyses are easily conceivable.\footnote{If I see a man
with a telescope, am I looking through the telescope or is he carrying it?} In
such cases, more context is necessary to select the correct attachment.
Since Hindle and Rooth's system only pays attention to three words (the
verb, the noun and the preposition), it cannot always make the right
choice. If it chooses to attach \lingform{with} verbally (to
\lingform{seeing}) then there will be cases when the same three words
appear, but \lingform{with} attaches nominally. Because of the limited
context used, it is not possible for the system to achieve 100\% accuracy.
Hindle and Rooth argue that because it is not possible for humans to
achieve 100\% accuracy based only on the verb, object and preposition,
it is difficult to judge how well the system mimics human performance.
To provide a performance goal, they conducted an experiment in which
human subjects were asked to select attachments using only the verb, object
and preposition. The humans achieved roughly 85\% accuracy (compared to
the 80\% accuracy of the system), suggesting that the algorithm
might still be improved a little without introducing further context.
Also relevant to the current work is the nature of the training process.
Their unsupervised learning method allows the technique to be easily and
simply applied to new training data, and even to new domains, without
annotation. However, such a powerful advantage is not free. As mentioned,
the learning proceeds on the basis of unambiguous examples of attachment.
In particular, verbal attachment is unambiguous when the object noun is a
pronoun, as in
example~\ref{eg:sn_specialised_sure}\ref{eg:sn_specialised_sure_verb}.
Nominal attachment is unambiguous when the prepositional phrase
precedes the verb, as in
example~\ref{eg:sn_specialised_sure}\ref{eg:sn_specialised_sure_noun}.
\begin{examples}
\item \label{eg:sn_specialised_sure}
\begin{subexamples}
\item He hooked it with his four-iron.
\label{eg:sn_specialised_sure_verb}
\item Every man with a donkey was dead.
\label{eg:sn_specialised_sure_noun}
\end{subexamples}
\end{examples}
The price paid by utilising unambiguous examples is that one further
assumption must be made: attachment properties of prepositional
phrases are assumed to be independent of the existence of ambiguity.
Thus, Hindle and Rooth assume that prepositional phrases qualifying
transitive verbs occur in the same distributions when the verb's object
is a pronoun as when not. This is an important assumption which
is not made explicit. The noun compound models in
chapter~\ref{ch:experimental} are closely related to Hindle and Rooth's
prepositional phrase attachment model, as are the
training methods. Therefore, analogous assumptions will be made in
those models.
\subsection{Using Class-based Models and Conceptual
Association} \label{sec:sn_conceptual}
The approaches presented so far either consider the properties of individual
words or those of entire grammatical classes of words, the parts of speech.
In the latter case, as argued above, the models are not sufficiently sensitive
to variation in the usages of words.
However, in the former case we would expect
the number of parameters of the probabilistic model to be many more
than necessary. For example, the words \lingform{automobile} and
\lingform{car} have very similar properties, so that learning these properties
independently is wasteful of resources and fails to capture
generalisations that humans would readily use.
This increase in the number of parameters
takes on particular importance because most systems that are based on
corpus statistics are plagued by data sparseness. A large proportion of the
word tokens in text are relatively infrequent words. Since statistical
confidence rests on sample size, rare words require huge amounts of text to
be processed before an \sll\ system can learn about them. In fact, one could
fairly say that progress in the field of statistical \nlp\ is shaped by the
availability of data and estimation techniques that conserve it. While a
range of techniques are used to combat data sparseness, including deleted
interpolation and other forms of smoothing, any principled means of
reducing the number of parameters in a model is highly valuable.
This has led a number of researchers to advocate class-based modelling
where words are allocated to classes but where the classes are finer than the
part of speech divisions and motivated by different distinctions. Words
within a class are assumed to behave in a similar manner and statistical
inference proceeds in terms of word classes instead of individual words in
much the same way as parts of speech are treated by Markov model taggers
and probabilistic context free grammars.\footnote{An interesting variation is
similarity-based modelling where, rather than having rigid class boundaries,
the probabilistic model uses a similarity metric to incorporate information
derived from the behaviour of similar words (Dagan~\etal,~1993).} For
instance, Brown~\etal~(1992b) derive word classes by a clustering
algorithm designed to optimise their probabilistic language model.
While purely statistical approaches have been shown to derive part of
speech groupings (for example, Finch and Chater,~1992), many of the
classes derived by Brown~\etal~(1992b) are unintuitive and do not appear to
be useful for much more than tuning their language
model.\footnote{Statistical word clustering is one of the oldest goals of
statistical \nlp\ (starting at least as early as Hirschman~\etal,~1975) and has
many other applications, even automatic thesaurus generation
(Grefenstette,~1992).}
Another approach is to manually assign each word in the lexicon a set of
semantic markers relevant to the text content and take these to delineate the
word classes. Velardi~\etal~(1991) use this in a limited domain to address
various \nlp\ tasks including prepositional phrase attachment (more recent
work by the same group is reported in Basili~\etal,~1993). By assigning one
or more of 12 semantic tags to each open class word in their lexicon, useful
parameter estimates are acquired from a relatively small corpus of just half a
million words.
However, the restriction to a limited domain means that the technique is not
directly applicable to unconstrained text. Also, one must be careful that the
development effort involved remains small relative to the cost of building a
traditional \nlp\ system for the domain.
An approach that is not restricted to a limited domain is that developed by
Resnik and Hearst~(1993), who use the semantic classes provided by an
on-line thesaurus. The intent of their prepositional phrase attachment system
is to extend Hindle and Rooth~(1993) to allow sensitivity to the object
of the preposition. Thus an attachment choice was made on the basis of
the verb, its object, the preposition and the object of the preposition.
This permits a distinction between the two attachments shown in
example~\ref{eg:sn_conceptual_ppobj}, where the attachment decision
critically depends on the final word.
\begin{examples}
\item \label{eg:sn_conceptual_ppobj}
\begin{subexamples}
\item eating fish with bones
\item eating fish with chopsticks
\end{subexamples}
\end{examples}
Since the new model provides sensitivity to more factors than Hindle and
Rooth's model does, it requires significantly more training data.
To reduce this requirement, Resnik and Hearst define \newterm{conceptual
association}, by analogy with Hindle and Rooth's lexical association.
Associations are recorded between classes of words (as defined in the
thesaurus) rather than individual words, on the assumption that word
properties are uniform within a thesaurus category.
One interesting viewpoint is to consider such categories as representing
mental concepts. Under this view, Resnik and Hearst have moved from a
string-based representation to one based on psychological constructs.
Associations are no longer distributions of surface textual elements; they are
distributions of mental concepts. An important consequence is that different
senses of a word are separately represented by the probabilistic model.
When a word is polysemous, it appears in several thesaurus categories and
the behaviour of each sense can be represented differently.
Yarowsky's~(1992) sense disambiguation model works on this principle.
The thesaurus used by Resnik and Hearst is WordNet (described by
Miller,~1990), an on-line structured
network containing more than 100,000 words, which provides a hierarchical
taxonomy of small groups of synonyms, called \newterm{synsets}.
Resnik and Hearst's system creates one class for each synset. This class
contains not only the words within the synset, but also all words in synsets
below it in the hierarchy.\footnote{This choice of classes
leads to widely varying class
sizes. A different method for creating word classes from WordNet which
results in roughly uniform classes is described by Hearst and
Sch\"{u}tze~(1993).}
The system then collects associations from the corpus for each such
class. Under this grouping scheme each word appears in every
class which is a hypernym of its synonym set. For example,
\lingform{penny} is a member of many classes some of which we might call
\concept{coins}, \concept{cash}, \concept{money}, \concept{possessions},
\concept{objects} and \concept{entities}. Therefore, the properties of
\lingform{penny} are represented by properties of a sequence of classes
ranging from the most specific to the most general.
This creates a problem of deciding which class to use for making the
attachment decision when faced with an ambiguous prepositional phrase
involving the word. Resnik and Hearst's solution was to perform a paired
$t$-test between the two alternatives across the entire sequence of possible
classes. They gave no intuitive motivation for this choice; however, the
approach seemed to work, producing a small improvement over the
technique of Hindle and Rooth.
\subsection{Brief Review}
\label{sec:sn_review}
The first half of this chapter has served to introduce statistical
language learning. We have looked at probabilistic taggers
and grammars, to illustrate the general approach, and considered
three research directions that have been pursued to improve
probabilistic parsing. These systems have demonstrated
that statistics computed from a corpus can provide useful knowledge
for natural language processing tasks.
However, each of these systems represents just one possibility
out of an enormous range of potential designs. As yet, we
have only just begun to explore this potential. If we could map
out the world of possible statistical models of language,
all the designs yet proposed would comprise, at most, a small
corner of that map.
While models for tagging have yielded good results,
probabilistic context free grammars fall short of solving
the parsing problem. Better designs still need to be explored.
Sections~\ref{sec:sn_lexical} through~\ref{sec:sn_conceptual}
reviewed research on three design aspects of parsing models:
lexical sensitivity, specialisation of the model to
particular syntactic forms and use of conceptual representations.
Each of these three research directions represents a new area
on the map.
In chapter~\ref{ch:md}, I will propose a theory of statistical
natural language understanding that extends these
research directions by identifying a new class of designs.
In doing so, the theory points the way to an entirely new part
of the map, an unexplored area that might yield the designs we seek.
Importantly, it is an empirical question whether the new designs
will prove useful. To test the theory, we need to build systems
and measure their performance. Only then can we fill in
the details of the map.
To begin evaluation of the theory, I have applied it
to the task of analysing noun compounds. This work
will be described in chapter~\ref{ch:experimental}.
The second half of this chapter will therefore review
existing work on noun compounds.
\section{Introducing Noun Compounds}
\label{part:cn}
If parsing is taken to be the first step in taming the
natural language understanding task, then broad coverage
\nlp\ remains a jungle inhabited by wild beasts.
For instance, parsing noun compounds appears to
require detailed world knowledge that is unavailable
outside a limited domain (see Sparck Jones,~1983,
for a detailed argument).
Yet, far from being an obscure, endangered species,
the noun compound is flourishing in modern language.
It has already made five appearances in this paragraph
and at least one diachronic study shows a veritable population
explosion (Leonard,~1984).
Both the challenges posed by noun compounds and their abundance
have attracted a substantial body of work in linguistics
and computational linguistics. The second half of this chapter
reviews this work, with an eye to providing the necessary background
for the experimental work described in this thesis.
In section~\ref{sec:cn_motivations}, I will argue that noun compounds
are an important subject of study because they both occur frequently
and present significant difficulties for current \nlp\ techniques.
Section~\ref{sec:cn_nature} settles on a precise definition
of noun compounds and describes some important aspects, including
their appearance in many languages, their productivity and their
functions.
Noun compounds are syntactically ambiguous when they contain more
than two words. In section~\ref{sec:cn_grammar}, I will discuss
various theories of noun compound syntax and explain how this
ambiguity arises. While there is agreement about the syntax
of noun compounds, no consensus has been reached regarding
their semantics except at the broadest level of
classification. A variety of semantic theories will be described
in section~\ref{sec:cn_meaning}. In section~\ref{sec:cn_accommodation},
I will cover the notion of semantic granularity, which is vital
to some of the theories of noun compound semantics.
One important property of noun compounds is their context dependence;
this will be discussed in section~\ref{sec:cn_context}.
Most computer systems designed to handle noun compounds
ignore this aspect and aim to give the best out-of-context analysis.
Algorithms have been proposed or implemented
for many tasks, including phonological stress assignment and
direct machine translation. A review of the computational tasks
that have been addressed forms section~\ref{sec:cn_computational}.
Finally, sections~\ref{sec:cn_knowledge} and~\ref{sec:cn_statistical}
cover the existing work on parsing and semantic analysis of noun
compounds using knowledge-based methods and statistical methods,
respectively.
\subsection{Motivations for Noun Compound Research}
\label{sec:cn_motivations}
Why should we be interested in noun compounds? In this section
I will suggest that not only is noun compound understanding a
vital component of any sophisticated natural language understanding
system, but that noun compounds also constitute a promising area
for research that exposes some of the weaknesses of current
natural language technology.
There has been much emphasis in recent \nlp\ research on measuring
performance, especially within statistical circles. In such an environment
the first question to ask in regard to any proposed technique is: how
often does the need for the technique arise?
The perceived value of a method is in
direct proportion to its breadth of immediate applicability. From this
(somewhat mercenary) point of view, noun compounds represent a
significant haul. They are indisputably common in most texts, and in certain
genres (for example, technical descriptions) they dot the landscape
like Australian sheep.
Leonard~(1984) reports the results of a diachronic study of noun compound
use in fictional prose. It shows a steady increase in their frequency over the
last two centuries. Table~\ref{tb:cn_motivations_leonard} is reproduced
from her work.
Assuming 15 words per sentence, \publicationname{Down
There on a Visit} has an average of one compound in every five sentences.
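(At 15 words per sentence, 17,500 words comprise roughly
$17{,}500/15 \approx 1170$ sentences, and $229/1170 \approx 0.2$ compounds
per sentence.)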
Yet fiction genres appear to have relatively few noun
compounds. Warren's~(1978:235) study of the Brown corpus places
fictional text as the least compound-rich text type, apart from \scare{belles
lettres}.
\begin{table*}[h]
\centering
\begin{tabular}{|l|l|l|r|r|} \hline
Date & Author & Work & Tokens & Types \\
\hline
1759 & Johnson, Samuel &
\publicationname{Rasselas} & 10 & 8 \\
c.1813 & Austen, Jane &
\publicationname{Pride and Prejudice} & 39 & 34 \\
1840 & Dickens, Charles &
\publicationname{The Old Curiosity Shop} & 89 & 75 \\
1875 & Meredith, George &
\publicationname{The Ordeal of Richard Feverel} & 147 & 112 \\
1962 & Isherwood, Christopher &
\publicationname{Down There on a Visit} & 229 & 198 \\
\hline
\end{tabular}
\caption{Frequencies of noun compounds per 17,500 words of fictional
prose compiled by Leonard~(1984)}
\label{tb:cn_motivations_leonard}
\end{table*}
The ratio of types to tokens here is also noteworthy: in the Isherwood
sample, for instance, 229 tokens yield 198 distinct types. Noun compounds, at
least in these texts, are productive. Each one encountered is typically a fresh
construction, which must be interpreted anew. Whatever the mechanism for
dealing with noun compounds, it must be dynamic if it is to be broadly
useful. It will not be practical to merely add noun compounds statically to
the lexicon.
Noun compounds are even more evident in press and technical materials. In
regard to the former text type, McDonald~(1982:125) identifies 252 noun
compounds (types) in around 500 sentences of
\publicationname{Newsweek} and \publicationname{New York Times}
articles. A previously unseen noun compound must be dealt with on
average in every second sentence.
For the latter text type, ter~Stal~(1994a:9) has made a study of compounds in
a corpus consisting of 293 technical abstracts from
\publicationname{Engineered Materials Abstracts}. His list contains some
3514 noun compounds (types), yielding an average of 12 per abstract.
Taking a broader definition of noun compound, an even higher density has
been reported by Beardon and Turner~(1993:2--3) in
\publicationname{Computer Graphics} abstracts. They found that their
sample of six abstracts had \shortquote{an average of 27\% of all words
participating in potential complex nominals}.\footnote{The word
\lingform{potential} reflects the fact that one of the goals of the study is to define complex
nominals. Many kinds of noun pre-modifiers are included; however, they do
not count predicating adjectives.} While this is a very small sample, and
clearly text of extraordinary density, it does vividly depict the quandary
faced by any \nlp\ system ill-equipped to deal with noun compounds.
These figures demonstrate that the likelihood of a natural language
processing system encountering a noun compound is high. But do noun
compounds play an important role in comprehension? If not, perhaps they
could just be skipped. As we might expect, the content words that are added
to form a compound are instrumental to correct interpretation; they have no
other reason to exist (unlike closed class words, which on occasion function
merely to make a sentence grammatical). Compounding serves a number of
important linguistic functions, chiefly in refining otherwise under-specified
references. For example, both words in
example~\ref{eg:cn_motivations_lb} below are polysemous --- compounding
serves to select an appropriate sense for each. Virtually all practical
applications of \nlp\ will be degraded by an inability to make use of these
functions.
\begin{examples}
\item lift button \label{eg:cn_motivations_lb}
\end{examples}
In section~\ref{sec:cn_grammar} below, I will discuss the syntactic
character of noun compounds. One of their striking aspects is that on
the one hand, their grammar is extremely simple, while on the
other hand they exhibit explosive syntactic ambiguity. The latter property
means that syntax-first \nlp\ techniques are not well-suited to handling noun
compounds; the former makes noun compounds a good area for gaining
insight into the weaknesses of this traditional approach. Their grammatical
simplicity prevents complex syntactic issues from clouding the conclusions.
Furthermore, noun compounds pose such large difficulties for \nlp\ that they
have been the canonical example used to point out the great distance
remaining between state-of-the-art \nlp\ technology and \shortquote{more
powerful and comprehensive [language-processing] programs}~(Sparck
Jones,~1983:363). The influence of pragmatic issues and seemingly
arbitrary amounts of world knowledge mean that any serious approach to
noun compounds must address the key issues of knowledge representation,
knowledge acquisition and robustness. Since these have wider significance
to language processing in general, progress on the problems of noun
compounds necessarily has broad impact on other aspects of \nlp.
Independent of the wider implications for \nlp, there are important direct
applications of noun compound processing techniques. I will mention two
promising areas.
\begin{description}
\item[Machine translation:] Use of compounds varies from one language to
another, so that automatic translation systems cannot rely on lexical
substitution to translate compounds. For example, French makes limited use
of compounding, while in German it is heavily productive. It is therefore
commonly necessary to render a German compound into an alternative
French syntactic construction, such as a prepositional phrase. In
section~\ref{sec:cn_nature} I will cite some work specifically on this task.
\item[Caption/abstract retrieval:] Compounding is commonly used to
compress information into a smaller amount of text and therefore pervades
captions, abstracts and signs.\footnote{A small exercise for the reader: on
your way home today take note of any signs you see. Count the proportion
of words used that are part of a noun compound. On Australian roads the
rate is close to 100\%, chiefly because of the high frequency of one sign:
\lingform{road work}.} Recent advances in multi-media technology have
resulted in large image collections appearing with no means of indexing
retrieval except via captions (see for example, Rowe,~1994). Sophisticated
retrieval must therefore process caption text and inevitably the noun
compounds found there.
\end{description}
So, to summarise, the following properties of noun compounds make them
an important subject for research:
\begin{itemize}
\item They are frequent.
\item They have linguistic functions that need to be represented.
\item They have a simple grammar, yet exhibit high parsing complexity.
\item They demand the application of large amounts of knowledge.
\item Apart from more general implications for \nlp, noun compound
processing technology also has worthwhile direct applications.
\end{itemize}
For these reasons, chapter~\ref{ch:experimental} will be concerned
with experiments on applying statistical \nlp\ techniques
to the problem of noun compound processing.
\subsection{The Nature of Noun Compounds}
\label{sec:cn_nature}
The species of linguistic construct that is referred to as the noun compound
has received regular attention from linguists. There are many and varied
subspecies, a broad range of habitats and a multitude of nomenclatures. In
this section, I will review a number of aspects of noun compounds and work
towards a definition for the purposes of this work. The definition I will
adopt follows that of Downing~(1977). Finally, I will briefly mention some
of the functions performed by compounding.
\subsubsection*{Varieties and habitat}
There are almost as many names for noun compounds and their relatives as
there are linguistic studies of them. I will survey a selection of related
constructions when considering definitions below. However, it is worth
listing the names here for the record:
\begin{itemize}
\item noun compounds, compound nouns, compounds;
\item nominal compounds, compound nominals, complex nominals;
\item noun premodifiers;
\item nominalisations;
\item noun sequences, noun-noun compounds, noun + noun compounds.
\end{itemize}
While the definitions vary, all these terms describe very similar classes of
constructions. All appear where nouns do, all involve multiple open class
morphemes and, in each case, members of the class exist that contain no
closed class morphemes.
Noun compounds are common in English, but are by no means limited to
one language. Relatives can be found in virtually all widely spoken
languages. Work specifically addressing noun compounds investigates their
use in many languages, including:
\begin{description}
\item[Chinese:] Pun and Lum~(1989) describe an algorithm for analysing
Chinese noun compounds and related constructions using case grammar.
\item[Japanese:] Fujisaki~\etal~(1991) report some experiments on applying
a probabilistic grammar to parsing Japanese noun compounds.
\item[French:] Bourigault~(1993) uses a corpus-based technique to parse
various constructions, including noun compounds, in French.
\item[German:] Rackow~\etal~(1992) use a monolingual corpus to find
appropriate English translations of German noun compounds.
\item[Italian and Modern Greek:] Di Sciullo and Ralli~(1994) contrast the
ordering constraints of noun compounds in English, Italian and Modern
Greek.
\end{description}
In some languages (for example, Dutch and German) noun compounds are
orthographically single words and a joining morpheme may appear between
two lexemes. In these languages, and those without orthographic word
markings like Chinese, there is an additional ambiguity introduced by the
need to segment the noun compound into individual lexemes.
Sproat~\etal~(1994) address this problem for Chinese.
The present work addresses only English noun compounds. While each
language presents its own special difficulties, the qualitative differences in
noun compound behaviour between languages are relatively few. There are
no apparent reasons to suggest that the techniques developed here could not
be adapted to other languages.
\subsubsection*{Lexicalisation}
One of the more confusing aspects of noun compounds is the fact that they
exhibit both morphological and syntactic behaviour, cutting across the usual
linguistic divisions. Though it is more common in certain other languages,
English noun compounds can be rendered orthographically as single words,
either by hyphenation, or unmarked concatenation. There are often several
forms of the same noun compound in use.
\begin{examples}
\item
\begin{subexamples}
\item bed spring
\item bed-spring
\item bedspring
\end{subexamples}
\end{examples}
Noun compounds seem to live in the border zone between word formation
and syntax, exhibiting adaptations to both. Some researchers (for example,
Sproat,~1992:37) distinguish
between syntactic and morphological compounds, but this leads to difficulty
in distinguishing the two because the boundary is not clear-cut.
The morphological character of noun compounding is related to another
well-known aspect called \newterm{lexicalisation}. As is argued in many
accounts (for example, Levi,~1978:10) noun compounds can acquire
idiomatic meanings which are not derivable from the component nouns. This
appears to result from the fact that compounds can acquire the status of
independent words. \publicationname{The Shorter Oxford English
Dictionary} (Onions,~1973) has not only subentries under \lingform{foot}
for \lingform{-bridge}, \lingform{-hill}, and \lingform{-work}, but also full
entries for the words \lingform{football}, \lingform{footman} and
\lingform{footpath}. The non-compositional meanings expressed by these
words are not dissimilar in character to the noun compound in
example~\ref{eg:cn_nature_lexicalised}.
\begin{examples}
\item \label{eg:cn_nature_lexicalised}
foot fault
\end{examples}
Not only is this example commonly used as a fixed word combination, but
its meaning would change if semantically similar words were substituted:
compare \lingform{toe fault} and \lingform{leg fault}. Therefore, such
idiosyncratic meanings must be encoded on an individual basis.
By contrast, other examples exhibit highly productive combinations. The
noun compounds in example~\ref{eg:cn_nature_novel}
have corresponding siblings \lingform{leg brace} and \lingform{toe size}.
\begin{examples}
\item \label{eg:cn_nature_novel}
\begin{subexamples}
\item foot brace
\item foot size
\end{subexamples}
\end{examples}
In such cases, the multitude of possible combinations would make any
attempt to exhaustively encode meanings for each compound hopeless.
Consider the following examples.
\begin{examples}
\item
\begin{subexamples}
\item nose ring
\item neck ring
\item ankle ring
\item belly button ring
\end{subexamples}
\item
\begin{subexamples}
\item mongoose attack
\item mongoose mating
\item mongoose habitats
\item mongoose escape
\end{subexamples}
\end{examples}
I will adopt the term \newterm{novel compounds} to refer to the highly
productive kind, following Downing~(1977). Since these constitute a
substantial portion of noun compounds, any sophisticated \nlp\ system must
incorporate a dynamic mechanism to deal with these cases.
\subsubsection*{Definition}
We still lack a precise definition of noun compounds. As mentioned, there
are a multitude of possible definitions, each with its own arguments. Four are
sufficiently popular to be mentioned.
\begin{description}
\item[Noun premodifiers:] Quirk~\etal~(1985:1321--1346) adopt a very open
definition. Their grammar permits virtually any constituent to appear before
a noun to form a pre-modified noun. Their definition thus includes
\lingform{out-in-the-wilds cottage} and similar constructions. The difficulty
with this definition
lies in distinguishing compounding from adjectival modification, which I
will have more to say about shortly.
\item[Compounds:] Chomsky and Halle~(1991:16,91--93) take a
phonological definition in which words preceding a noun form a compound
if they receive the primary stress. Thus \lingform{blackboard} is a
compound, while \lingform{black board} is not. The problem with this tack
is that pronunciations vary amongst speakers, so that what is a compound for
one person may not be for another.
\item[Complex Nominals:] Levi~(1978:1) chooses to include certain
adjectives along with nouns as possible compounding elements and so calls
her noun compounds \newterm{complex nominals}. The adjectives included are
non-predicating adjectives (ones that supposedly cannot be ascribed via a
copula construction). An example then is \lingform{electrical engineer},
because \lingform{That engineer is electrical} is ungrammatical. Since
adjectives are difficult to divide into predicating and non-predicating, this
definition causes computational difficulties.
\item[Noun + Noun Compounds:] Downing~(1977) defines noun
compounds as any sequence of nouns that itself functions as a noun. While
this is more restrictive than any of the previous three, it is relatively
unambiguous. Leonard~(1984) also takes this path, using the term
\newterm{noun sequences}.
\end{description}
In this thesis, I will follow Downing~(1977) in restricting attention to noun
sequences. However, there are some special noun sequences to note.
While nouns marked for the genitive, as in
example~\ref{eg:cn_nature_genitive}, are frequent premodifiers, they have
explicitly marked semantics, and it has been argued (Chomsky,~1970) that
their syntactic behaviour differs from that of other noun compound modifiers.
\begin{examples}
\item \label{eg:cn_nature_genitive}
\begin{subexamples}
\item dog's breakfast
\item senate's directives
\item Noam's objections
\end{subexamples}
\end{examples}
The last of these raises another special case, that of names. While proper
nouns form compounds with equal readiness to common nouns, as in
example~\ref{eg:cn_nature_proper}, names have an arbitrariness that
is clearly of a different nature; consider the names in
example~\ref{eg:cn_nature_names}.
\begin{examples}
\item \label{eg:cn_nature_proper}
\begin{subexamples}
\item Boston music
\item August snowfalls
\item Christian God
\end{subexamples}
\item \label{eg:cn_nature_names}
\begin{subexamples}
\item Charles River
\item Pitt Street
\item Lake Tahoe
\end{subexamples}
\end{examples}
A similar behaviour occurs with artificial naming conventions such as those
used to refer to chemical substances, as in
example~\ref{eg:cn_nature_chemicals}.
\begin{examples}
\item \label{eg:cn_nature_chemicals}
\begin{subexamples}
\item calcium chloride
\item porphyrin quinone phenanthrene
\end{subexamples}
\end{examples}
I will exclude all these special cases (proper nouns are not a special case
unless the noun compound is a name).
\begin{definition}[Noun Compound]
A {\em noun compound\/} is any consecutive sequence of nouns
at least two words
in length that functions as a noun, but which contains no genitive markers
and is not a name.
\end{definition}
This definition does not include orthographically joined morphemes such as
\lingform{firetruck} and \lingform{steamboat}. I will assume that such
words are joined because they are sufficiently common as to warrant
inclusion in the lexicon, and thus do not require dynamic processing.
A few examples to illustrate the definition are shown in
example~\ref{eg:cn_nature_definition}.
\begin{examples}
\item \label{eg:cn_nature_definition}
\begin{subexamples}
\item stone fish
\item party animal
\item ball park
\item outback adventure tour
\item emergency bus fuel
\item anchor nose ring
\item killer whale attack
\end{subexamples}
\end{examples}
Note also that the definition allows for gerunds, so that the following
examples are all valid noun compounds.
\begin{examples}
\item \label{eg:cn_nature_gerunds}
\begin{subexamples}
\item laughing children
\item horse riding
\item leading soprano
\end{subexamples}
\end{examples}
Since I have no special need to reserve other terms, I will refer to noun
compounds throughout this thesis variously as noun compounds, compound
nouns, or simply compounds, interchangeably.
The working definition given above is adequate to support the
experimental work and corresponding argumentation, but should not
be regarded as limiting the scope of the problem in any
fundamentally important way. At this stage, there is no evidence to
suggest that the results are not applicable to, for example, adjectival
modifiers of the kind that Levi~(1978) allows.
There remains a small difficulty with this definition, caused by part of
speech ambiguity, when a word is classed both as noun and adjective. It is
somewhat surprising that few researchers adopting definitions similar to the
one used here have remarked upon this problem. Apart from
Warren~(1978:65), who devotes two sentences to the topic, I have not seen
it mentioned. There are many nouns and adjectives that are orthographically
identical. Consider the nominal forms in example~\ref{eg:cn_nature_adjs}.
In each case, the modifier can be used both as a noun and as an adjective.
\begin{examples}
\item \label{eg:cn_nature_adjs}
\begin{subexamples}
\item plastic ball \label{eg:cn_nature_adjs_pb}
\item American law
\item light pole
\end{subexamples}
\end{examples}
Fortunately, the different part of speech assignments usually lead to distinct
interpretations. In
example~\ref{eg:cn_nature_adjs}\ref{eg:cn_nature_adjs_pb}, either the ball
is made of the substance plastic (whether hard, soft, rigid or otherwise ---
compare \lingform{acrylic ball}) or the ball is particularly flexible
(regardless of its material composition --- compare \lingform{malleable
ball}). In the former case, the substance plastic is clearly represented by a
noun, while in the latter case, the characteristic of being plastic is
represented by an adjective.\footnote{Although this is not the only way to
draw the distinction; the choice of part of speech also has syntactic
implications. Compare the use of an adjectival modifier in
\lingform{recycled plastic ball} to the adverbial modifier in
\lingform{highly plastic ball}.}
The other two examples behave similarly. Laws
might be drafted in America (adjectival reading) or be
about Americans (compounded reading). Poles might support lights (compounded
reading) or not be heavy (adjectival reading).
Without context, it is not possible to be sure of either reading.
Therefore, it is difficult to automatically distinguish
between these two types of premodification.
The definition also does not distinguish between novel and lexicalised
compounds. The goal of developing noun compound processing techniques
is primarily motivated by novel compounds. The idiosyncrasy of
lexicalised compounds requires them to be handled on an individual basis.
However, since currently available lexical resources are insufficient to easily
exclude lexicalised compounds, it is simpler to include both kinds. This
tactic removes the task of establishing a precise distinction between these
two kinds of compounds, a task that involves some theoretical
complications. For example, many noun sequences are in
common use and yet have not acquired non-compositional meanings.
\begin{examples}
\item
\begin{subexamples}
\item kitchen table
\item dog food
\item wine glass
\item field trip
\end{subexamples}
\end{examples}
The experiments I have conducted to date have not required this
issue to be addressed, and it can in principle be solved by
providing lexical entries for all lexicalised compounds.
\subsubsection*{Functions}
Before turning to the syntactic properties of compounds, let us consider the
linguistic functions they perform. Since compounds act as nouns, their
primary effect is to indicate things. Modifiers assist in this function by
reducing the set of possibilities. However, there seems to be an alternative
function: providing additional information about the thing already identified.
This is analogous to relative clauses, which have restrictive and
non-restrictive versions. Bolinger~(1967) makes a similar observation
regarding adjectival modifiers, dividing them into reference-modifying and
referent-modifying groups. I call compounds sharing the former property
\newterm{indicative}, and those sharing the latter \newterm{informative}.
Quirk~\etal~(1985:1322) also draw this distinction. While most compound
noun modifiers are indicative, example~\ref{eg:cn_nature_informative}
shows some that are informative.
\begin{examples}
\item \label{eg:cn_nature_informative}
\begin{subexamples}
\item The {\em idiot doctor} gave him the wrong tablets.
\item You might already have won a {\em dream holiday}.
\end{subexamples}
\end{examples}
Marsh~(1984) observes that compounding is a common text
compression strategy in navy messages. When space is limited, use of noun
compounds serves to convey more content with fewer
words.\footnote{Although see Dras~(1995) for work on the use of
nominalisations (and support verbs) in making texts wordier.}
Fewer words overall entail fewer
closed class words, with the result that linguistic cues to sentence structure
become sparser: \shortquote{\dots the function words that so often direct
a parsing procedure and reduce the choice of possible constructions are
frequently absent \dots and \dots structural ambiguity becomes a serious
problem}~(Marsh,~1984:505). Her solution involves reliance on
semantics rather than syntax.
\shortquote{By utilizing the semantic patterns that are derived
from a sublanguage analysis, it becomes possible to properly bracket
complex noun phrases}~(Marsh,~1984:507).
The same theme also appears in Sparck~Jones~(1983), although she is more
pessimistic about how useful these semantic patterns might be.
\begin{citedquote}{Sparck~Jones,~1983:377}
Summarising a series of experiments, Marslen-Wilson and Tyler argue that
human beings show abilities to recognise words correctly, even before they
have been completely uttered, which necessarily imply not only that
extensive top-down work is being done on the speech signal as it arrives, but
that this work involves inference, including pragmatic inference.
This argument would appear to imply that compound noun interpretation can
be carried out in an essentially predictive, i.e. strictly expectation-driven,
manner, relying only on long-term memory and local context. This is very
strong, perhaps too strong to accept.
\end{citedquote}
The extent of the local context referred to here is not clear.
However, emphasis is certainly being placed on the role of semantic
expectations in compound noun processing. This is a topic to which I shall
return in chapter~\ref{ch:md}.
\subsection{The Grammar of Noun Compounds}
\label{sec:cn_grammar}
Compound nouns form a small corner of the grammar of English.
Syntactically speaking, they follow quite simple rules. Traditional phrase
structure grammar, dealing with sequences of categories, provides little to
assist the analysis. Quirk~\etal~(1985:1321--1346) specify that any noun
can be premodified by one or more other nouns. A single application of this
rule is sufficient to generate arbitrarily long noun sequences, all without
structural ambiguity. The ambiguity inherent in noun compounds arises
from the fact that premodifying nouns can themselves be premodified by
multiple applications of the rule.
These properties can be economically captured by a grammar rule of the
form shown in example~\ref{eg:cn_grammar_cnrule}, because the constituent
formed by compounding is identical to the two compounded elements and
can participate in further compounding, either as left or right member.
\begin{examples}
\item \={N} $\rightarrow$ \={N} \={N} \label{eg:cn_grammar_cnrule}
\end{examples}
Multiple premodifiers of one noun are then generated by right recursion.
For the purposes of this work I shall assume that this rule (or some further
restriction of it) is sufficient to cover all noun compounds.
Applied repeatedly, this rule generates binary tree structures, where
each noun in the sequence lies at a leaf of the tree. In fact, it allows every
possible binary tree with $n$ leaves to be a possible parse of a compound of
length $n$. Examples exist which cannot be analysed this way.
For instance, some words appear to require more than one modifier,
as is the case in example~\ref{eg:cn_grammar_2place}.
However, these appear to be infrequent.
\begin{examples}
\item US Soviet relations \label{eg:cn_grammar_2place}
\end{examples}
According to the rule, all compounds longer than two words
are syntactically ambiguous. Take for instance the two compounds in
example~\ref{eg:cn_grammar_ambig}.
\begin{examples}
\item \label{eg:cn_grammar_ambig}
\begin{subexamples}
\item {[}china [tea cup \nn] \nn{]}
\item {[}[hydrogen ion \nn] exchange \nn{]}
\end{subexamples}
\end{examples}
Both consist of a sequence of three nouns, yet the structures differ.
Importantly, different structures lead to different interpretations as in
example~\ref{eg:cn_grammar_both}, in which the company may either
institute the policy or be the tax payer.
\begin{examples}
\item company tax policy \label{eg:cn_grammar_both}
\end{examples}
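The competing bracketings can be enumerated mechanically from the rule in
example~\ref{eg:cn_grammar_cnrule}. A minimal sketch using NLTK's chart
parser (the toy grammar below is illustrative only):
\begin{verbatim}
import nltk

# The rule Nbar -> Nbar Nbar, with the lexicon restricted to the
# one example compound for illustration.
grammar = nltk.CFG.fromstring("""
  NBAR -> NBAR NBAR | N
  N -> 'company' | 'tax' | 'policy'
""")
for tree in nltk.ChartParser(grammar).parse(
        ['company', 'tax', 'policy']):
    print(tree)   # prints both binary bracketings
\end{verbatim}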
Yet, at the same time as being critical to interpretation, such structural
ambiguities multiply rapidly. It is easy to show that the number of possible
parses for a compound grows exponentially with its length. To see this,
build up the noun sequence from right to left. Each new noun after the
second can modify (at least) the whole sequence so far, or the
leftmost noun so far. Therefore, each noun (at least) doubles the
ambiguity of the sequence. ter~Stal~(1994a) reports extracting compounds
up to 8 words in length. The burgeoning of possibilities makes
it vital to find efficient methods of disambiguating compounds.
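More precisely, the number of binary trees over $n$ leaves is the
$(n-1)$th Catalan number, $\frac{1}{n}\binom{2n-2}{n-1}$, so an eight-noun
compound of the kind ter~Stal extracted already admits 429 bracketings. A
short computation (illustrative only):
\begin{verbatim}
from math import comb

def bracketings(n):
    """Binary parse trees for an n-noun compound: Catalan(n-1)."""
    return comb(2 * n - 2, n - 1) // n

for n in range(2, 9):
    print(n, bracketings(n))   # 1, 2, 5, 14, 42, 132, 429
\end{verbatim}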
\subsubsection*{Nominalisation}
Beyond the rewrite rule given in example~\ref{eg:cn_grammar_cnrule}
above, syntactic theories of noun compounds identify
two general types of compounding.
The first type is formed by transformations of clausal
structures of one kind or another. The idea behind this is that the syntactic
behaviour of these compounds can be derived from the syntactic behaviour
of verbal structures. The most usual manifestations of this have a rightmost
noun related morphologically to a verb. Thus, the verb \lingform{collect} is
nominalised to form the noun \lingform{collection} and complements of the
verb become modifiers of the noun.
Example~\ref{eg:cn_grammar_nominalisation} shows such a compound and
the verb phrase from which it is derived.
\begin{examples}
\item \label{eg:cn_grammar_nominalisation}
\begin{subexamples}
\item stamp collection
\item to collect stamps
\end{subexamples}
\end{examples}
Once again, there are numerous nomenclatures. Levi~(1978) calls these
compounds \newterm{nominalisations}, in line with Fraser~(1970) and
Chomsky~(1970). Warren~(1978) calls them \newterm{verbal-nexus
compounds}, while Leonard~(1984) uses the term \newterm{sentence compounds}
and Meyer~(1993) calls them \newterm{sortal nouns}.
Finin~(1980) extends this class of
compounds to include \newterm{role
nominals}, where nouns can be associated with verbs to which they are not
morphologically related. For instance, the noun \lingform{food} is
considered to derive from the verb \lingform{eat} and therefore inherits a
subject (the eater) so that the compound \lingform{dog food} can be
explained by the same syntactic rules as those applying to
explicitly marked nominalisations.
While all these different names are justified by various theoretical
differences, at a coarse level they all capture essentially the same process,
whereby clausal syntax is inherited by nouns derived from verbs.
Furthermore, the syntactic constraints resulting from these theories do not do
anything substantial to reduce the structural ambiguity. There is, for
instance, no requirement for syntactic agreement between the two nouns,
even when the verb phrase from which they are derived does require
agreement. Also, Cumming~(1991) argues that nominalisations behave like
clauses in some ways but not others, and that therefore two separate
syntactic representations are required.
There is one interesting property arising from all these transformational
accounts though. They are often based around distinctions that have a
semantic rather than syntactic flavour. For example, Lees~(1970) bases his
treatment of compounds on what he calls \scare{very deep grammatical
structure}, a representation close to semantics. Likewise, Fraser~(1970)
defines three types of nominalisation, factive, action and substantive, based
upon semantic distinctions. This suggests that semantic elements play an
important role in controlling the structure of noun compounds.
\subsubsection*{Non-verbal-nexus compounds}
The second general type of noun compound comprises those compounds
which are not nominalisations. Levi~(1978) calls these
\newterm{deleted predicate nominals}, Warren~(1978) calls them
\newterm{non-verbal-nexus compounds} and
Meyer~(1993) calls them \newterm{relational nouns}.
Finin~(1980) claims that the
extended definition of role nominal allows this class to be eliminated.
However, this seems optimistic in the light of Warren's~(1978) study, which
is restricted just to non-verbal-nexus compounds.
As far as the syntax of
non-nominalisations goes, there is little more than the structural rule given
in example~\ref{eg:cn_grammar_cnrule} above.
Probably the only concrete constraint is the morphological rule
described by Quirk~\etal~(1985:1333--5), which dictates that modifying
nouns are singular in number. While this rule even applies to nouns that
normally do not have singular forms
(example~\ref{eg:cn_grammar_morph_strong}), it still has many
exceptions. Example~\ref{eg:cn_grammar_morph_weak} shows two
common ones.
\begin{examples}
\item \label{eg:cn_grammar_morph_strong}
\begin{subexamples}
\item trouser leg
\item scissor factory
\end{subexamples}
\item \label{eg:cn_grammar_morph_weak}
\begin{subexamples}
\item arms race
\item tropical gardens expert
\end{subexamples}
\end{examples}
All these theories of compound noun syntax assume that
compounding results in a form that is a suitable element for further
compounding. The general rule, \mbox{\={N} $\rightarrow$ \={N}
\={N}}, thus captures the gross syntactic behaviour of compounds and is
recursively applicable without limit. However, there is one theory that is an
exception. While other theories are fully recursive, Marcus~(1980) holds
that processing limits restrict the possible structures,
so that the general rule
given above cannot, in fact, be reapplied without limit. In particular, Marcus~(1980) prohibits
structures in which three or more nouns all premodify another noun. Put
another way, the rewrite rule cannot be reapplied to its right member more
than twice. Examples of the structures disallowed are shown in
example~\ref{eg:cn_grammar_marcus}, taken from Finin~(1980:47).
\begin{examples}
\item \label{eg:cn_grammar_marcus}
\begin{subexamples}
\item {[}Aluminum [automobile [water pumps \nn] \nn] \nn{]}
\item {[}plastic [toy [fire truck \nn] \nn] \nn{]}
\item {[}[back yard \nn] [brick [dog house \nn] \nn] \nn{]}
\end{subexamples}
\end{examples}
For the purposes of this thesis, I will assume that Marcus's~(1980) theory is
false, as is evidenced by these examples.
However, I do not consider the implications of his theory to have
significant impact on the results presented here in any case.
Regardless of the grammatical stance taken, the syntax of compounds that
are longer than two nouns is underconstrained by the grammar, resulting in
syntactic ambiguity. In fact, even with Marcus's constraint, the degree of
ambiguity is exponential in the length of the compound. Furthermore,
analysis of these ambiguities appears to depend primarily on semantic
expectations. The lack of syntactic and morphological constraints
immediately forces us to confront the use of semantics during the syntactic
analysis, suggesting an interleaved approach. The question becomes: how do
we efficiently bring the appropriate semantic constraints to bear, and how
do we acquire these constraints in the first place?
\subsection{The Meaning of Noun Compounds}
\label{sec:cn_meaning}
Since the purpose of compounding is to identify things by way of relating
them to others, a semantic analysis of a compound involves identifying the
particular relationship involved. For instance, in
example~\ref{eg:cn_meaning_implicit}\ref{eg:cn_meaning_implicit_time}
the time of the activity is being specified. In
example~\ref{eg:cn_meaning_implicit}\ref{eg:cn_meaning_implicit_cause}
a causal relationship is denoted, and we take
example~\ref{eg:cn_meaning_implicit}\ref{eg:cn_meaning_implicit_made}
to mean a pillar made of cement.
\begin{examples}
\item \label{eg:cn_meaning_implicit}
\begin{subexamples}
\item morning prayers \label{eg:cn_meaning_implicit_time}
\item drug deaths \label{eg:cn_meaning_implicit_cause}
\item cement pillar \label{eg:cn_meaning_implicit_made}
\end{subexamples}
\end{examples}
But the relationship in each case is left implicit, so that if automatic
interpretation is desired an additional problem of identifying this
relationship arises. In this section, I will review linguistic
theories that have been proposed regarding the semantics of noun
compounds.
First, though, I will describe one aspect that is universally recognised.
There are two roles to be played in
compounding: one noun denotes the thing to be identified, exactly as if it
were not part of a compound, while the other noun denotes a related
thing.\footnote{Warren~(1978:19) observes that this is arguable in the case
of so called \newterm{dvandva} compounds, for example
\lingform{poet-painter}.} The former is called the \newterm{head}, the
latter the \newterm{modifier}. The class of things denoted by the compound
as a whole is generally a subset of the class denoted by the head. The
modifier determines which subset. For instance, in
example~\ref{eg:cn_meaning_implicit}\ref{eg:cn_meaning_implicit_time},
the things denoted are prayers. The word \lingform{morning} denotes a related
thing (the early part of the day) which identifies a subset of the possible
prayers.
The semantic head is also the syntactic
head: the agreement features of the whole compound are inherited from the
head. In English the head of a compound is almost always the rightmost
noun. This is true of all the examples seen so far. Exceptions, such as those
in example~\ref{eg:cn_meaning_leftheads}, are usually lexicalised and often
borrowed from other languages.
\begin{examples}
\item \label{eg:cn_meaning_leftheads}
\begin{subexamples}
\item attorney general
\item fettuccine arrabbiata
\end{subexamples}
\end{examples}
\subsubsection*{Range of semantic theories}
Apart from right-headedness, the semantic properties of compounds have
been hotly debated in linguistics, with numerous contradictory views being
proposed. At the optimistic end of the scale, it has been suggested that there
exists a small set of semantic relationships that compounds may imply.
Levi~(1978) propounds a theory along these lines, which I will describe
below. If these theories are correct, they provide a firm basis for
computational interpretation, since they specify a concrete list of
possibilities.
In contrast, the most pessimistic views claim that the implicit relationship
between head and modifier is entirely unconstrained. For example,
Downing~(1977) performed a series of psychological experiments to
support her argument that the semantics of compounds cannot be exhausted
by any finite listing of relationships.
Her results suggest that the number
of relationships is very large. However, she does observe that certain types
of relationship are more common than others (including purpose, part-whole
and place).\footnote{Another advocate of the view that implicit semantic
relationships are unconstrained is Nishikawa~(1988), although the
arguments he puts forth also apply to the whole of language: all utterances
can be used to mean anything because of shared socio-cultural knowledge.
If his argument is accurate, the entire \nlp\ enterprise is impossible.}
If Downing's~(1977) hypothesis is true in a strong sense, then noun
compound interpretation requires complete pragmatic context-dependent
inference and therefore any attempt to handle noun compounds will involve
detailed manual knowledge coding. However, noun compound semantics do
form certain patterns. There is certainly hope that adopting a
more restrictive theory that relies on these patterns will prove useful,
even if there are cases that it is
unable to handle. This is exactly the situation in which statistical language
learning earns its living.
Between the two ends of this scale, a range of noun compound
semantics theories exist which I will survey in a moment. There are
two methodologies used to develop such theories. The first, which I will call
\newterm{example-based}, is to collect a set of example compounds, most
usually without regard to the source,
with the aim of arriving at as large a list
of examples as possible. The list that results contains compounds, but no
contexts, so that the theory is based on the \scare{out of context
interpretation} of the examples. Levi~(1978) supplies her list of around 200
examples in an appendix. Vanderwende~(1993) also follows this
methodology, using 59 compounds (but does not supply the list). Others
base their theory on examples, but do not appear to have a fixed list (for
example, Finin's,~1980, thesis contains on the order of 100 scattered
examples).
As I will argue in section~\ref{sec:cn_context}, the interpretation of
compounds can readily change in different contexts. We have already seen an
instance where this is possible because of syntactic ambiguity in
example~\ref{eg:cn_grammar_both} of section~\ref{sec:cn_grammar}.
Therefore, example-based theories of compound noun semantics base their
conclusions (at least ostensibly) on only context free interpretations.
The second methodology, which I will call \newterm{corpus-based},
acquires noun compounds from a corpus. In these studies, every compound
from a corpus is used, thus requiring the theory to be comprehensive, at least
for the text types that the corpus represents. This approach also permits the
research to measure the relative frequencies of different semantic relations,
which ensures that emphasis is given to semantic behaviours that are more
likely to be encountered. Furthermore, it results in each occurrence of a
compound having a known context, within which it can be
interpreted. There is no longer any need to assume that compound meaning
is well-defined outside of any context. The corpus-based methodology
focuses on tokens, the example-based one on types. Warren~(1978) and
Leonard~(1984), both described below, use a corpus-based methodology.
Three linguistic theories of semantics and one \nlp\ theory are sufficiently
prominent to be worth describing. In order of the degree to which they
constrain the possible semantics, they are:
\begin{itemize}
\item Levi's~(1978) recoverably deleteable predicates and nominalisations;
\item Leonard's~(1984) typology of compounds in fictional prose;
\item Warren's~(1978) analysis of Brown corpus compounds; and
\item Finin's~(1980) role-filling theory, incorporating role nominals.
\end{itemize}
I will describe each of these in turn.
\subsubsection*{Levi's RDPs}
Levi~(1978) pursues the idea that compound nouns express only a small set
of semantic relations within the generative semantics theory. She proposes
nine \newterm{recoverably deleteable predicates}, which are expressed
either as prepositional phrases or as relative clauses involving a small set of
verbs. These are:
\begin{itemize}
\item \semrel{in}, \semrel{for}, \semrel{from} and \semrel{about}; and
\item \semrel{cause}, \semrel{have}, \semrel{make}, \semrel{use} and
\semrel{be}.
\end{itemize}
Thus, \lingform{electricity station} implicitly involves
a predicate which can be expressed as \lingform{station that makes
electricity}. She claims that, apart from nominalisations, these predicates
are the only relationships possible. To the extent that this is true, it provides
a practical basis for analysing compound noun meanings: we need only
choose which predicate has been deleted in order to perform interpretations
(or at least reduce the task of detailed interpretation to something simpler).
In addition, Levi identifies a separate class of compounds, the
nominalisations. These are produced through a somewhat different
process, being derived from verbal constructions. The head noun is
derived from a verb by morphological modification and carries its
arguments along with it. The semantics are defined in terms of the following
four verb roles:
\begin{itemize}
\item \semrel{act}, \semrel{product}, \semrel{agent} and \semrel{patient}.
\end{itemize}
For example, \lingform{truck driver} expresses an \semrel{agent} role
because it is derived from the clause \lingform{agent drives a truck} and
\lingform{student discontinuations} expresses an \semrel{act} role because
it is derived from the clause \lingform{students act to discontinue}.
Thus Levi claims that all noun compounds express one of thirteen
predicates.
\subsubsection*{Leonard's typology}
While Leonard~(1984:6) gives a shorter typology of noun compounds,
it is derived only from fictional prose. The types are
defined chiefly with reference to possible paraphrases. An example
of each type follows.
\begin{itemize}
\item Sentence: \lingform{hire car}
\item Locative Sentence: \lingform{breakfast-room}
\item Locative: \lingform{country free-school}
\item Annex: \lingform{mountain top}
\item Equative: \lingform{baby crocodile}
\item Material: \lingform{stone lion}
\item Additive: \lingform{Lampton-Lufford report}
\item Reduplicative: \lingform{plop-plop}
\end{itemize}
Each of the eight types is characterised by
an associated paraphrase. For example, the Annex type compound
\lingform{blood pressure} can be paraphrased as \lingform{pressure of
a blood/bloods}.
The typology is based on a test set of 1944 example compounds
taken from a 305 thousand word corpus of fictional prose. Here, as in the
other three theories reviewed in this section, number and definiteness are not
captured by the semantic representation so that
alternatives appear in the paraphrase.\footnote{Chambers~(1994) observes that
both number and definiteness are sometimes determined
by the semantic relationship, which might be used to fill this gap.}
Leonard~(1984:70) also describes an implemented algorithm
for analysing compounds into these eight groups.
It is based on a hand-crafted lexicon (specific to the test set).
Each word is marked with one or more of 59 markers, which are then used
by a sequence of rules to choose the correct type. The type is then used to
produce a paraphrase, which is evaluated for felicity by hand.
\shortquote{At a generous count, 76\% of the interpretations are possible
ones in English}~(Leonard,~1984:v).
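The architecture is easy to picture: a marked lexicon feeds an ordered rule
list, and the first matching rule fixes the type. The following sketch is
purely schematic; the markers, rules and lexicon are invented here, whereas
Leonard's system used 59 markers and a much longer rule sequence.
\begin{verbatim}
# Schematic marker-and-rules classifier; all data is invented
# for illustration.
LEXICON = {
    'mountain': {'place'},    'top':  {'part'},
    'stone':    {'material'}, 'lion': {'animate'},
}

RULES = [                      # applied in order; first match wins
    (lambda m, h: 'part' in h,     'Annex'),
    (lambda m, h: 'material' in m, 'Material'),
]

def compound_type(modifier, head):
    m, h = LEXICON[modifier], LEXICON[head]
    for test, ctype in RULES:
        if test(m, h):
            return ctype
    return 'Sentence'          # fall-through default

print(compound_type('mountain', 'top'))  # Annex
print(compound_type('stone', 'lion'))    # Material
\end{verbatim}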
\subsubsection*{Warren's taxonomy}
Warren's~(1978) theory is far less constraining.
She has made a comprehensive study of compound nouns in 360
thousand words of the Brown corpus. She manually extracted every
non-verbal-nexus noun compound from the sample, which included text
from eight of the registers represented (recall that the Brown corpus contains
a carefully selected range of genres). This yielded 4557 different
compounds (all her statistics are given as counts of types, rather than tokens;
see Warren,~1978:54). She then developed a taxonomy of implicit semantic
relations, with four hierarchic levels of abstraction: major semantic classes,
minor semantic classes, main groups and subgroups. Each compound was
assigned to a subgroup (or main group where subgrouping was not
distinguished within the group). Her major semantic classes were:
\begin{itemize}
\item \semrel{constitute}: divided into Source-Result, Result-Source and
Copula classes;
\item \semrel{possession}: divided into Part-Whole, Whole-Part and
Size-Whole classes;
\item \semrel{location}: divided into Goal-\semrel{obj},
Place-\semrel{obj}, Time-\semrel{obj} and Origin-\semrel{obj} classes;
\item \semrel{purpose}: divided directly into main groups;
\item \semrel{activity-actor}: divided directly into subgroups; and
\item \semrel{resemblance}: consisting only of the Comparant-Compared
class.
\end{itemize}
The distribution of compounds across the taxonomy is quite informative
(although it is given in terms of types rather than tokens, so it reflects the
productivity of the different semantic relations as much as it does the
frequency). The counts of each group are tabulated in each section, with
counts by semantic class tabulated in the summary (Warren,~1978:229).
The distribution strongly favours some classes, with the most common being
Whole-Part (23\%). Only six of the classes cover more than 5\% of the
distribution (Whole-Part, Source-Result, Purpose, Place-\semrel{obj},
Part-Whole and Origin-\semrel{obj}). Copula compounds occupy 5\% of
the distribution, and the major semantic class Resemblance, just 1.8\%. If
the major semantic classes can be considered coarse semantic relationships,
then the distribution is evidence that a theory such as Levi's has
some explanatory power, in spite of readily discoverable exceptions.
\subsubsection*{Finin's role nominals}
While these three linguistic theories of compound noun semantics treat
nominalisation as one of two distinct processes (Warren,~1978, is careful to
exclude verbal-nexus compounds; Leonard's,~1984, program uses a special set
of secondary, lexical-valued features to analyse Sentence and Locative
Sentence types), another view holds that most compounds are implicit
nominalisations, even if the head noun is not morphologically derived from a
verb.
Finin~(1980) adopts this perspective, claiming, for example, that
\lingform{recipe book} is an implicit nominalisation of a verbal construction
involving \lingform{write}, rather than a deletion of the predicate expressed
by \semrel{about}. He calls such constructions role nominals. A
semantic analysis of a compound consists of first identifying the event
denoted by the implicit verb and then assigning the objects denoted by other
nouns to roles of the event. In this example, \lingform{recipe} denotes an
object that fills the topic role of a writing event that is implied by
\lingform{book}. Under this view, the set of possible semantic relations is
arbitrarily large, determined by the range of possible implicit
verbs and their roles.
One problem with this approach is that we are forced to posit large numbers
of lexical relations without direct linguistic justification (although qualia
theory might be adapted to this end, Pustejovsky~\etal,~1993). For
example, we must suppose that every physical object implies a composition
event for which something has filled the material role. Thus,
\lingform{fuselage} must imply a verbal construction involving
\lingform{made}, which has a composition role fillable by
\lingform{aluminium}, in order to analyse \lingform{aluminium
fuselage}.\footnote{Finin~(1980) and Isabelle~(1984) both explore
sublanguages about aircraft maintenance.} This difficulty is also observed
by Isabelle~(1984), who adopts the role nominal theory. \shortquote{For
those role nominals where there is no morphological evidence of relatedness
with the underlying verb, one is forced to rely mostly on
intuitions.}~(Isabelle,~1984:511).
\subsubsection*{Common elements}
All of these theories have strengths and weaknesses, so rather than
commit to any one, I will simply mention four properties that these
and almost all others share.
\begin{description}
\item[Permanence:] Most theories observe that semantic relations between
nouns in a compound involve inherent or typical properties of the objects
denoted. For example, a \lingform{sea mammal} never denotes a mammal
that happens momentarily to have gone for a dip. In theories that constrain
the set of possible relations, this causes all the relations to be interpreted
with this in mind.
\item[Negative relations:] Even the most pessimistic theories agree that
modifiers never denote objects that are specifically {\em not} in a certain
relationship to the head. For example, even if all the paintings in a
room depict tubas except for one, \lingform{tuba painting} will
never be used to denote that one painting.
\item[Nominalisations versus predicates:] Apart from Finin's~(1980) role
nominals, two different types of semantic relation, corresponding to the two
different syntactic types identified in section~\ref{sec:cn_grammar}, are
always suggested. Also, most theories include a copula predicate.
\item[Appeal to paraphrase:] In almost all theories the most explicit
diagnostic for determining the semantics of a compound is through
comparison of possible paraphrases. For example, the most concrete
method of checking for a Purpose relation involves considering a paraphrase
employing the preposition \lingform{for}. I will return to this point in more
detail in section~\ref{sec:ce_problem}.
\end{description}
\subsection{Accommodation and Semantic
Granularity}
\label{sec:cn_accommodation}
The hypothesis that compounds can express only a small set of semantic
relationships rests on the view that semantic representations are limited in
their detail. Not every detail of the relationship will be explicit in the
representation. For instance, to say that a causal relationship is implicit in
the compound \lingform{earthquake casualties} is a summary of finer
semantic details. Nothing is said as to whether the causer is the primary
influence, or merely a contributor. Nor is it specified whether the effect was
intentional or otherwise. We accept that a causal relationship exists, even
though there are many and varied shades of causation. Regardless of which
theory is adopted, meaning analyses are necessarily limited in resolution.
As Ryder~(1994:91) observes, listeners adapt the general meanings of the
nouns involved in order to better fit one another and the context. She calls
this process \newterm{accommodation}. In fact, accommodation is a
fundamental part of all language. There are arbitrarily fine shades of
meaning to be captured by a discrete symbolic language, resulting in a loose
fit between words and interpretations. A person reading words must
accommodate the out-of-context meanings of words to find an in-context
interpretation.
This is apparent in the divisions created by word senses in a dictionary.
As Kilgarriff~(1992) argues, each word has a potentially infinite variety
of usages, dependent on the context in which it is used.
The lexicographer chooses to divide this space
into discrete senses because it is impossible to give an exhaustive listing of
the slight nuances conveyed by the word in each different context.
Importantly, in dividing the space, the lexicographer has some choice. In
large, detailed dictionaries, there will be many fine-grained senses, each
carefully distinguished from one another. In smaller dictionaries, the senses
must be coarse-grained, making fewer distinctions. Thus, descriptions of
word meaning have \newterm{granularity}, which may range from coarse to
fine, depending on the degree of detail distinguished.
Likewise, a semantic analysis of compound nouns is necessarily given at a
certain level of granularity. A highly sophisticated analysis might give great
detail about the relationship between the two objects, whereas a simpler
analysis might summarise finer points under one heading. In fact,
Levi~(1978:85) devotes some discussion to justifying her particular choice
of grain size (the nine recoverably deletable predicates) and concludes that
her analysis \shortquote{incorporates neither the maximal nor the minimal
degree of generalisation possible, but rather an optimal degree}.
The evaluation of a representation, including the appropriateness of the
granularity, naturally depends on the goal. It should be judged by its
usefulness in performing the intended task. Therefore, it is perfectly
legitimate to claim that \lingform{hydrogen bomb}, \lingform{machine
translation}, \lingform{smoke signal} and \lingform{steam iron} are
expressing an Instrument relation, if this is a useful classification for the
purpose at hand.
This issue is fundamental to natural language interpretation and raises some
questions about the distinction between syntax and semantics. The syntactic
analysis of compound nouns is isomorphic to a semantic analysis in which
just one general relationship, that expressed by something like
\semrel{is-a-modifier-of}, is modelled. This isomorphism is one that I
will exploit in the experimental work on parsing in
chapter~\ref{ch:experimental}. However, for now I turn to a more
practical problem posed by compound nouns: dependence on context.
\subsection{Context Dependence}
\label{sec:cn_context}
The meaning of a compound noun, and even its syntactic structure, can vary
under different conditions. Information that contributes to the analysis can
come from other words in the same sentence, surrounding sentences,
non-linguistic knowledge or even pragmatic considerations. It is not
generally sufficient to analyse a particular noun compound (token) based
only on the noun compound itself (type).
For a start, word meanings vary depending on context. If the most common
meaning of a word is not the one being used, this can affect the preferred
reading of a compound. In example~\ref{eg:cn_context_senses} we expect
a right-branching analysis (a fee to cover drinks exacted by a nightclub upon
entrance).
\begin{examples}
\item club cover charge \label{eg:cn_context_senses}
\end{examples}
However, if the context reveals that the speaker is in a golfing shop
purchasing accessories, then the senses of both \lingform{club} and
\lingform{cover} will be different and in this case a left-branching
structure is more likely (the price of a plastic jacket used to
protect golf sticks).
Even if the word senses are known, some context sensitivity remains. In
example~\ref{eg:cn_context_syntax} context is needed to determine
whether the system referred to is a system for defense which involves
missiles (right-branching) or a system for defense against missiles
(left-branching).
\begin{examples}
\item missile defense system \label{eg:cn_context_syntax}
\end{examples}
In both these examples, the semantics is dramatically affected by context of
various kinds. Information determining which reading is intended could be
derived from a range of sources including pragmatic inference, discourse
structure and topic. Other examples are discussed in Sparck Jones~(1983).
While context dependence is an obvious and universal feature of
language,
it is perhaps of particular
significance in noun compounding. Because one of the functions of noun
compounds is to compress text, it is plausible that a greater reliance is
placed on context than in other types of syntactic construction. If there is
less information on the page, there must be more information coming from
elsewhere. More detail must be left implicit, and thus context dependent.
Meyer~(1993) places great emphasis on context, dividing his thesis into two
halves, dealing with compounds in isolation and compounds in context,
respectively. He gives a formulation of compound noun meanings within a
formal semantics framework where compounds are treated as anaphors.
When interpreted in context, the semantics of a compound are defined by
reference to a discourse representation. According to his theory,
\shortquote{concepts denoted by novel \acronym{nn}-compounds must be
linked to conceptual nets denoted by preceding
discourse}~(Meyer,~1993:169). Therefore, context must have a significant
influence. In fact, the limitation of sources of contextual effect to {\em
preceding text} is too restrictive. Example~\ref{eg:cn_context_senses}
might appear before a price on either a small tag in a golf shop or a sign
beside the doorway of a night spot. In both cases there would be no
preceding text and yet the concepts denoted would differ.
Almost all computational approaches to noun compounds have made the
practical compromise of assuming that context dependence is either
negligible or of little impact. However, at least two works incorporate a
mechanism that, in principle, allows for analyses to depend on the
denotation of preceding text, or even earlier texts.
McDonald~(1982:20) utilises a scoring function to control application of a
range of heuristics (his work will be described in more detail in
section~\ref{sec:cn_knowledge} below). One of these, called the
\newterm{cognate heuristic}, gives preference to conceptual structures that
have already been stored in memory.
When there is more than one of the possible
structures stored in memory, the \newterm{instances heuristic} then applies,
favouring the one appearing most often. This mechanism is used to
manually supply the meanings of lexicalised compounds to the system, but,
in principle, earlier processing could
lead to cognates in memory and therefore to context dependent effects.
A similar strategy constitutes the entire noun compound interpretation
algorithm described by Lehnert~(1988). This algorithm is used by the
\progname{researcher} system, built by Lebowitz~(1983), which
incorporates an episodic representation of memory. As text is processed,
interpretations are added to an inheritance hierarchy, which is then searched
during later interpretations and used in a form of case-based reasoning.
Again, frequency is used as a criterion when multiple alternatives exist. The
result demonstrates powerful context modelling effects. However, complete
reliance on episodic knowledge is probably weaker than complete reliance
on context independent semantic knowledge. A method for combining the
two (such as McDonald's,~1982) seems more promising.
In any case, there are strong arguments for requiring context dependence in
compound noun analysis. In section~\ref{sec:cy_human} I will report a
study on the extent of context dependence in noun compound syntax.
However, except for quantifying its effects, I shall henceforth follow most
others in attempting to ignore it. Such insensitivity is a weakness of the
present work, although at this stage it appears to be a necessary one.
\subsection{Computational Tasks}
\label{sec:cn_computational}
Researchers in computational linguistics have addressed at least five
computational tasks to do with noun compounds,
all of which could benefit from statistical \nlp\
techniques. In this section I will review existing work on:
\begin{itemize}
\item identification of compounds from amongst other text;
\item syntactic analysis of structurally ambiguous compounds;
\item assignment of implicit semantic relations;
\item prediction of prosodic features of compounds; and
\item directly translating compounds in one language to phrases in another.
\end{itemize}
Of these, parsing and semantic analysis have received most attention
and therefore will be reviewed in more detail in
sections~\ref{sec:cn_knowledge}
and~\ref{sec:cn_statistical}.\footnote{Another
interesting task is the generation of
compounds that are synonymous with given query terms for information
retrieval. Since compound nouns are frequent in abstracts, especially in
technical domains, Norris~(1993) predicts that accurately generating
compounds from synonymous expressions will have an impact on
retrieval performance.}
\subsubsection*{Identification}
Nouns are commonly ambiguous for part of speech. Thus parsers may easily
mistake noun compounds for other constituents.
For instance, example~\ref{eg:cn_computational_ident} could be
taken as a verb phrase rather than a noun compound because
\lingform{ditch} may be a verb. This analysis could lead to
interpretation of the compound as an instruction to discard
the digging machine.
\begin{examples}
\item ditch digging machine \label{eg:cn_computational_ident}
\end{examples}
Arens~\etal~(1987) describe a system for
analysing digital system specifications in which
this problem arises frequently. Their
solution uses hand-coded rules called \newterm{pattern-concept pairs},
which encode semantic expectations in the domain. These expectations are
then applied by a heuristic that also checks number agreement constraints. If
the system encounters an ambiguity that it cannot resolve, it requests the
user's assistance. No coverage or accuracy figures are reported (the lexicon
contains 25 verbs and 100 nouns).
Using hand-coded rules may be adequate for limited domains, but is expensive in
development effort and is difficult to scale up to broad coverage
language processing. Statistical methods offer a cheaper means of acquiring
domain specific expectations and may even be applicable to
unconstrained text, if a sufficiently large corpus can be obtained for training.
While exploration of this problem is not pursued in this thesis, it is possible
that the same statistics used in the experimental work on parsing in
chapter~\ref{ch:experimental} could be used in solving this problem.
Note that the related problem of identifying terms or collocations has
received a great deal of attention, but is tangential to the main line of the
present work. A good starting point is the work of Smadja~(1993).
\subsubsection*{Parsing}
As we have seen, compounds longer than two words are syntactically
ambiguous and, as demonstrated in section~\ref{sec:cn_grammar}, the parse
ambiguity of compounds grows exponentially with their length.
Since ter~Stal~(1994a) reports finding compounds of up to eight words in
length in a relatively small corpus, the level of ambiguity could be crippling
to any uninformed parser.
Existing work on parsing noun compounds falls into two classes:
knowledge-based and statistical methods. Since
sections~\ref{sec:cn_knowledge} and~\ref{sec:cn_statistical} below
give details of these, I will delay review of such work until then.
It is however noteworthy that
most of the early knowledge-based approaches to noun compounds (for
example, Finin,~1980) perform parsing in combination with semantic
interpretation. ter~Stal~(1994b) suggests that treating parsing as an
independent aim may be inappropriate because subsequent semantic
processing applies the same knowledge. Nonetheless, syntactic analyses have
independent value beyond being a stepping stone to semantics. For
example, they are useful for predicting prosody in text-to-speech systems,
even when no semantic analysis is carried out.
It is therefore useful to distinguish
parsing and semantic analysis.
The first half of chapter~\ref{ch:experimental} of this thesis
will be devoted to work on applying statistical methods to parsing
compound nouns.
\subsubsection*{Semantic analysis}
As outlined above in section~\ref{sec:cn_meaning}, compounds can express
a variety of relationships,
with selection of the appropriate relation being left
up to the reader. Computationally, the task of selecting the most likely
implicit relationship is highly knowledge intensive. As noted above,
Leonard~(1984) has built a system for assigning a semantic analysis
to compounds in fictional prose
based on lexical features and a set of rules.
The system requires detailed entries in
the lexicon to achieve about 76\% accuracy on the development set.
Other relevant work employs knowledge-based methods and will be
reviewed in section~\ref{sec:cn_knowledge}.
Knowledge acquisition is the central barrier to solving this task in
unconstrained text. The second half of chapter~\ref{ch:experimental}
of this thesis will
be devoted to work on applying statistical methods to compound noun
semantic analysis. While Johnston~\etal~(1994) propose a method for
acquiring appropriate lexical-semantic information from corpora, the results
reported later in this thesis represent the first empirical study
of which I am aware.
\subsubsection*{Prosody}
Work in text-to-speech systems has investigated the
prosodic features of compounds. The phonological stress of noun
compounds varies with their syntactic structure, among other things. In
order to synthesise realistic speech, a text-to-speech system must be capable
of placing the correct emphasis on elements of a compound.
Example~\ref{eg:cn_computational_prosody} shows compounds with their
typical stress patterns; intonationally prominent words are in bold face.
\begin{examples}
\item \label{eg:cn_computational_prosody}
\begin{subexamples}
\item \stress{panic} attack
\item \stress{living} room \stress{table}
\item \acronym{risc} \stress{instruction} set
\end{subexamples}
\end{examples}
Sproat and Liberman~(1987) describe a rule-based method for assigning
stress to compounds for text-to-speech purposes. More recently,
Sproat~(1994) has applied statistical methods to the problem and given a
quantitative evaluation of the performance on two word compounds. A test
set of 940 compounds (types) was analysed with 84\% accuracy (assigning
leftmost accent yields 70\%). When combined with the previous rule-based
method by employing statistics only when no rules apply, a different test set
of 1138 compounds (types) was analysed and achieved an average
agreement with human judges of 85\%. The rules were used in only 15\% of
cases, showing the limitations of the manual coding approach. Inter-judge
agreement rates were 91\%.
There is a close relationship between syntactic structure and accent contours
in compounds. However, the relationship is not straightforward. As
Sproat's~(1994) work shows, even syntactically unambiguous compounds
have varying stress. The implied semantic relationship can be a factor, as
shown by the two readings in
example~\ref{eg:cn_computational_semprosody}. The first is typical when
a Purpose relationship is implied, the second in the case of a Made-Of
relation.
\begin{examples}
\item \label{eg:cn_computational_semprosody}
\begin{subexamples}
\item \stress{chocolate} money
\item chocolate \stress{money}
\end{subexamples}
\end{examples}
In this work I will not address the problem of stress assignment, but
strong parallels exist between work on stress assignment and
the syntactic and semantic analyses pursued in chapter~\ref{ch:experimental}.
\subsubsection*{Direct translation}
While one of the important motivations for performing syntactic and
semantic analyses of compound nouns is to allow sophisticated machine
translation of compounds, some research simply sets the task to be direct
translation from one language to another, bypassing the need for language
independent syntactic and semantic representations.
Rackow~\etal~(1992) describe an innovative noun compound translation
algorithm that employs statistics from an English corpus to select the
appropriate translation of a German compound. First, segmentation rules
are used to break down the German compound into morphemes, which are
then translated into sets of possible English words. Generation rules are
then used to formulate a set of possible translations, including English
adjectival premodifiers, prepositional phrases and noun compounds.
The correct translation is selected by counting the occurrences of each of the
possible translations in the English corpus and choosing the one appearing
most frequently. The selection of lexical items and syntactic construction is
performed solely by the corpus statistics, allowing the lexical transfer and
generation rules to overgenerate heavily. A strong advantage of this
approach is that only a monolingual (target language) corpus is required.
Rackow~\etal~(1992) give no quantitative evaluation.
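The selection step at the heart of this approach is easy to sketch. The
following Python fragment chooses among overgenerated candidate
translations by raw frequency in a target language token list; the
candidate phrases and the tiny corpus are invented for illustration.
\begin{verbatim}
# Sketch of frequency-based selection among overgenerated candidate
# translations, in the spirit of Rackow et al. (1992). The candidate
# phrases and the tiny corpus below are invented for illustration.

def count_occurrences(phrase, tokens):
    """Count occurrences of a word sequence in a token list."""
    words = phrase.split()
    n = len(words)
    return sum(tokens[i:i + n] == words for i in range(len(tokens) - n + 1))

def select_translation(candidates, tokens):
    """Return the candidate appearing most often in the target corpus."""
    return max(candidates, key=lambda c: count_occurrences(c, tokens))

corpus = ("the aluminium fuselage was inspected and "
          "the aluminium fuselage passed").split()
candidates = ["aluminium fuselage",            # noun compound
              "fuselage of aluminium",         # prepositional paraphrase
              "aluminous fuselage"]            # adjectival premodifier
print(select_translation(candidates, corpus))  # -> aluminium fuselage
\end{verbatim}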
Along similar lines, Jones and Alexa~(1994) describe a method for aligning
English word sequences with German compounds in a parallel corpus based
on the alignment model of Brown~\etal~(1993). Statistical associations are
derived between German compounds and word sequences commonly found
in the corresponding English sentences, with the aim of avoiding the need
for manual development of symbolic translation rules. Unfortunately, the
corpus is very small (2543 English words) and only qualitative evaluation is
reported. Also, a sentence aligned parallel corpus is required.
Finally, Maas~(1994) reports on \progname{mpro}, a transfer-based
approach to translating compounds from German to French. Much of the
work is done by phrasal transfer, with whole compounds appearing in the
lexicon. For other compounds, a simple rule is used that expands the
German compound modifier into a French prepositional phrase using
\lingform{de} (\scare{of}). No quantitative evaluation is reported.
In this thesis, direct translation will not be addressed. There is no reason,
though, why the probabilistic models and training methods given in
chapter~\ref{ch:experimental} for the syntax and semantics of noun
compounds could not be applied to translation. In particular, the
probabilistic models could replace the frequency counting method used by
Rackow~\etal~(1992), either to overcome data sparseness or to make use of
a source language corpus rather than a target language one.
\subsection{Knowledge-based and Dictionary-based
Approaches} \label{sec:cn_knowledge}
Most of the prior work proposes highly knowledge intensive algorithms for
analysing compound nouns. Early work was concerned with making use of
the knowledge representation schemes developed in the late 1970s, and
worked primarily by means of slot filling. The key idea of these methods is
that the concept denoted by one noun contains slots and that the concept
denoted by the other noun should fill one of these slots. Syntactic and
semantic analysis is performed by deciding which concept should occupy
which slot. The algorithms are therefore concerned with evaluating the
appropriateness of concepts as possible fillers of slots.
The first such system was \progname{jets} (Finin,~1980),
a program for analysing
compounds in queries given to an aircraft maintenance database (called
\progname{planes}). It employs a set of rules to interpret compounds by
slot-filling.
For example, the noun \lingform{flight} is associated with a frame called
\scare{to-fly}, which has a slot called \scare{time}. This slot has associated
with it various facets, including specifications of the preferred,
default and typical fillers, and also what properties are
required of the filler. Given example~\ref{eg:cn_knowledge_finin}, the
relevant slot filling rule matches the frame for \lingform{January} against
the various slots of \scare{to-fly}.
\begin{examples}
\item January flight \label{eg:cn_knowledge_finin}
\end{examples}
A scoring system evaluates each possibility. For example, if the modifier
matches the default for a slot, this contributes 8 points to the score. Once
scores are computed for each possible slot-filling rule and frame-slot
combination, the system selects the highest scoring analysis.
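To make this style of computation concrete, the following Python sketch
gives a minimal slot-filling interpreter of the same general kind. The
frames, facets and concept hierarchy are invented stand-ins, as is every
point value except the 8 points for a default match quoted above; the
sketch illustrates the technique rather than reconstructing Finin's rules.
\begin{verbatim}
# Minimal sketch of slot-filling interpretation in the style of Finin's
# (1980) jets. Frames, facets and the concept hierarchy are hypothetical;
# only the 8 points for a default match comes from the text.

FRAMES = {'flight': {'event': 'to-fly',
                     'slots': {'time': {'default': 'january',
                                        'typical': {'month'},
                                        'required': {'time-unit'}}}}}

ISA = {'january': {'month', 'time-unit'}}      # toy concept hierarchy

def score_filler(slot, concept):
    """Score a concept as a filler for a slot (point values illustrative)."""
    features = ISA.get(concept, set()) | {concept}
    required = slot.get('required', set())
    if required and not features & required:
        return None                            # violates a hard requirement
    score = 0
    if concept == slot.get('default'):
        score += 8                             # default match: 8 points
    if features & slot.get('typical', set()):
        score += 4                             # typical filler: invented value
    return score

def interpret(modifier, head):
    """Choose the best (slot, score) analysis for a two word compound."""
    slots = FRAMES[head]['slots']
    scored = {name: score_filler(slot, modifier)
              for name, slot in slots.items()}
    scored = {k: v for k, v in scored.items() if v is not None}
    return max(scored.items(), key=lambda kv: kv[1]) if scored else None

print(interpret('january', 'flight'))          # -> ('time', 12)
\end{verbatim}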
The system contains close to 200 concept frames and six interpretation rules.
No quantitative evaluation is given, although it is clear that substantial effort
must be expended to encode the necessary knowledge for even a very
restricted domain.
A similar approach, but in this case aimed at unrestricted text, is described in
McDonald~(1982). Here, the knowledge used is expressed using semantic
networks in which each concept has associated slots with selectional
restrictions. By a process of matching slots, a structural analysis is produced
that also supplies the implicit semantic relationships.
The system has a vocabulary of just under 200 words and works for around
25 noun compounds. Yet McDonald claims that most of the over 600
examples given in the appendix could be handled if only the hand coded
semantic knowledge were available.
\begin{citedquote}{McDonald,~1982:125, his bold}
Once this list of more than six hundred compounds was created, each
compound was examined to see how well the model presented above would
process it. The program itself was {\bf not} run on all these examples.
Instead the processing for this set of compounds was done by hand using the
algorithms described above. There are two reasons \dots [First, it would
be] necessary to add a large amount of knowledge to the data base. All this
knowledge would have to be added to the data base by hand \dots [Second,
the] amount of memory available to store the knowledge is very limited.
\end{citedquote}
In fact, even given the questions raised by this method of evaluation, the
results are disappointing.
\begin{citedquote}{McDonald,~1982:158--9}
The program as it currently stands can [given appropriate knowledge]
process about 60\% of the compounds encountered in real-world
text \dots Another approximately 30\% of the compounds can be processed
correctly if some reasonable assumptions are made and if a couple of new
patterns are added to the system.
\end{citedquote}
It seems likely that encoding the knowledge McDonald requires for anything
broader than a narrow domain is a highly difficult (if not impossible) task,
especially since each new piece of knowledge would potentially introduce
errors in the earlier correct analyses.
It is therefore far preferable to turn to
an automatic method of acquiring and applying the required knowledge.
This conclusion is supported by an information retrieval study by Gay and
Croft~(1990). They build a slot-filling noun compound interpretation
program and evaluate its impact on retrieval performance in a limited
domain. Their conclusion is that the cost of building such a system cannot
be justified, primarily because a simple statistical strategy, using a three
word window, achieves comparable performance.
\begin{citedquote}{Gay and Croft,~1990:36}
To obtain these performance improvements, which are almost certain to be
extremely small, a prohibitive effort involving construction of the
knowledge base and processing document text would be required.
\end{citedquote}
More recently, with the emergence of unification-based \nlp, there have been
some proposals for noun compound interpretation algorithms based on
unification. In these proposals, feature structures replace the earlier
slot-filling rules,
with unification being used to constrain the possible fillers
(see, for example, the proposal of ter~Stal and van~der~Vet,~1994). While
the underlying knowledge representation is more advanced, these proposals
can only work for a limited domain because of the requirement for detailed
hand-coding.
Wu~(1992) proposes a novel combination of statistical and
unification-based methods, in which a probabilistic model of
feature structures is developed.
Probabilities are assigned to uninstantiated feature structures
and the maximum entropy principle is used to define the probabilities of fully
instantiated ones. An efficient algorithm for approximating the probabilities
of different possible analyses is given. However, analysis of noun
compounds still requires manual construction of feature structure
representations, and estimation of the probabilities of the uninstantiated
feature structures poses an additional problem. No evaluation of the
algorithm's performance is given.
Another recent proposal, described in detail by Hobbs~\etal~(1993),
holds that the meaning of a compound can be arrived at by abductive
inference. The referents of nouns within the compound are denoted by
logical literals and an explanation for their relationship is sought through
abduction. While Hobbs~\etal\ claim that the computational problems
involved in using general inference can be solved by means of weighted
abduction (see Charniak,~1986, and Goldman and Charniak,~1990), the
problem of furnishing the enormous amount of world knowledge required
remains.
All these proposals rely on hand crafted symbolic knowledge bases. As a
result, any system implementing these proposals must have a very limited
vocabulary. Even though such limits are acceptable within a narrow
domain, the development effort required for each system makes these
approaches very expensive. None of the knowledge-based systems built by
\nlp\ researchers have been subjected to quantitative evaluation. This is to
be contrasted with Leonard's~(1984) computer program, where a careful
corpus-based performance test was conducted. While adding knowledge to
noun compound processors appears tempting, careful attention must be paid
to how this knowledge can be provided.
One possible approach to knowledge acquisition is the use of dictionaries as
discussed in section~\ref{sec:sn_motivations}. Vanderwende~(1993) describes
a design of this kind for the task of assigning a semantic relationship to
two word compounds. It uses a pattern-based method to acquire
knowledge from dictionary definitions.\footnote{See
section~\ref{sec:sn_motivations} for background on pattern-based methods
of dictionary processing.} Using a set of 25 weighted analysis
heuristics that use the knowledge from these definitions, a score is given to
each of 11 possible relationships. For example, one of the definitions for
\lingform{sanctuary} is \scare{an area for birds \dots }, which results in
\lingform{sanctuary} having an \semrel{is-for} attribute
with value \lingform{birds}.
This is used by one of the heuristics to give a score of 0.9 to a Purpose
relation in the compound \lingform{bird sanctuary}. The scores are adjusted
manually.
\begin{citedquote}{Vanderwende,~1993:172}
The specific values for the likelihood factors result from experimentation,
but are not ad hoc; their effects converge, in a set of heuristics, to give
consistently satisfactory results.
\end{citedquote}
The heuristics are based on a development set of 59 compounds,
of which 49 are correctly analysed.
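In outline, the scheme can be pictured as the following Python sketch, in
which heuristics assign likelihood factors to candidate relations and the
highest scoring relation wins. Only the 0.9 Purpose score for
\lingform{bird sanctuary} comes from Vanderwende's description; the
heuristics and lexical attributes are hypothetical stand-ins.
\begin{verbatim}
# Sketch of scoring candidate relations with weighted heuristics, in the
# style of Vanderwende (1993). Heuristics and lexical attributes are
# invented; only the 0.9 Purpose example comes from the text.

LEXICON = {'sanctuary': {'is-for': 'bird'}}    # from 'an area for birds ...'

def purpose_heuristic(modifier, head):
    # A head whose definition says it is-for the modifier strongly
    # suggests a Purpose relation (likelihood factor 0.9).
    return 0.9 if LEXICON.get(head, {}).get('is-for') == modifier else 0.0

def location_heuristic(modifier, head):
    return 0.0                                 # placeholder heuristic

HEURISTICS = {'Purpose': [purpose_heuristic],
              'Location': [location_heuristic]}

def analyse(modifier, head):
    """Score each candidate relation; return relations, best first."""
    scores = {rel: max(h(modifier, head) for h in hs)
              for rel, hs in HEURISTICS.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(analyse('bird', 'sanctuary'))  # -> [('Purpose', 0.9), ('Location', 0.0)]
\end{verbatim}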
A corpus-based evaluation is reported in Vanderwende~(1994). The
algorithm tested is an extended version of that in Vanderwende~(1993),
having 13 possible semantic relations and 34 heuristics. Some search is
involved in applying the heuristics, since the extended algorithm
incorporates a general inheritance mechanism via hypernymic and
meronymic relations extracted from the dictionary. The extended
development set contains 100 compounds, of which 79 are correctly
analysed.
Of a test set of 97 compounds taken from the Brown corpus, 51 were
analysed correctly, giving a 52\% accuracy rate. This approaches
McDonald's theoretical accuracy of 60\%, but requires much less
development effort. Vanderwende's~(1994) accuracy figure is the only
empirical performance result for this task to date, Leonard's~(1984) figure of
76\% being for the development set. In section~\ref{sec:ce_results} I will
report the empirical accuracy of a statistical method for semantic analysis of
two word compounds.
\subsection{Statistical Approaches}
\label{sec:cn_statistical}
Given the importance of compound nouns and the recent interest in
statistical methods it is not surprising that several proposals
for applying statistical methods to noun compounds have been put
forward in the last couple of years. Statistics have been employed both in
the assignment of accent to compounds for text-to-speech and in the parsing
of compounds. Work on the former task has already been described in
section~\ref{sec:cn_computational}. In this section I will review the work
on statistical noun compound parsing, both because it has received more
attention and because it is of more relevance to this thesis.
All of the algorithms I will review in this section are variants of one
proposed by Marcus~(1980:253). Therein, the procedure is stated in terms
of calls to an oracle which can determine if a noun compound is acceptable.
It is reproduced here for reference:
\begin{citedquote}{Marcus,~1980:253}
Given three nouns $n_1$, $n_2$ and $n_3$:
\begin{itemize}
\item If either [$n_1$ $n_2$] or [$n_2$ $n_3$] is not
semantically acceptable then build the alternative structure;
\item otherwise, if [$n_2$ $n_3$] is semantically
preferable to [$n_1$ $n_2$] then build [$n_2$ $n_3$];
\item otherwise, build [$n_1$ $n_2$].
\end{itemize}
\end{citedquote}
Only more recently has it been suggested that corpus statistics might provide
the oracle, and this idea is the basis of the algorithms described below.
Since the algorithm evaluates the acceptability of only adjacent pairs of
nouns, I will call any analysis procedure which follows this general outline
an \newterm{adjacency algorithm}. Note that this strategy is designed to
ensure that the deepest constituent is as acceptable as possible.
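Rendered as code, the procedure looks like the following Python sketch,
with corpus counts standing in for Marcus's oracle. Treating a zero count
as unacceptable, and defaulting to a left-branching analysis, are
assumptions made here for illustration; the variants reviewed below differ
precisely in how they define the acceptability function.
\begin{verbatim}
# Sketch of the adjacency algorithm of Marcus (1980) with a corpus-count
# oracle. The zero-count test and the LEFT default are assumptions.

def acceptability(pair, counts):
    """Oracle: corpus count of a noun pair, read as an acceptability score."""
    return counts.get(pair, 0)

def adjacency_parse(n1, n2, n3, counts):
    """Bracket n1 n2 n3 as LEFT [[n1 n2] n3] or RIGHT [n1 [n2 n3]]."""
    left = acceptability((n1, n2), counts)
    right = acceptability((n2, n3), counts)
    if left == 0 and right > 0:
        return 'RIGHT'                          # [n1 n2] unacceptable
    if right == 0 and left > 0:
        return 'LEFT'                           # [n2 n3] unacceptable
    return 'RIGHT' if right > left else 'LEFT'  # prefer the better inner pair

counts = {('compiler', 'disk'): 3}              # toy corpus statistics
print(adjacency_parse('backup', 'compiler', 'disk', counts))  # -> RIGHT
\end{verbatim}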
The simplest of these algorithms
is reported in Pustejovsky~\etal~(1993). Given a three
word compound, a search is conducted elsewhere in the corpus for each of
the two possible subcomponents. Whichever is found is then chosen as the
more closely bracketed pair. For example, when \lingform{backup compiler
disk} is encountered, the analysis will be:
\begin{examples}
\item \label{eg:cn_statistical_adj}
\begin{subexamples}
\item{[}backup [compiler disk \nn] \nn{]}
when \lingform{compiler disk} appears elsewhere
\item{[}[backup compiler \nn] disk \nn{]}
when \lingform{backup compiler} appears elsewhere
\end{subexamples}
\end{examples}
Since this is proposed merely as a rough heuristic, it is not stated what the
outcome is to be if neither or both subcomponents appear, nor is there any
evaluation of the algorithm.
Bourigault~(1993) proposes the same algorithm for parsing all noun
phrases, not just noun compounds, in French. In this case, if neither or both
competing substructures appear, the algorithm refuses to answer. While the
accuracy for noun compounds is not reported, the overall correctness is
70\%, with no answer being given in a further 27\% of cases.
The proposal of Liberman and Sproat~(1992) is more sophisticated and
allows for the frequency of the words in the compound. Their proposal
involves comparing the mutual information between the two pairs of
adjacent words and bracketing together whichever pair exhibits the higher value.
There is no evaluation of the method other than a demonstration that four
examples work correctly.
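Their proposal amounts to the following sketch, in which each adjacent
pair is scored by the usual pointwise mutual information,
$\log \frac{P(n_i,n_j)}{P(n_i)P(n_j)}$, estimated from corpus counts. The
counts below are invented for illustration.
\begin{verbatim}
# Sketch of mutual information bracketing in the spirit of Liberman and
# Sproat (1992), using pointwise mutual information estimated from
# invented corpus counts.

import math

unigram = {'missile': 200, 'defense': 300, 'system': 500}
bigram = {('missile', 'defense'): 40, ('defense', 'system'): 15}
N = 1000000                                    # assumed corpus size

def pmi(w1, w2):
    """Pointwise mutual information of an adjacent noun pair."""
    p_pair = bigram.get((w1, w2), 0) / N
    p_w1, p_w2 = unigram[w1] / N, unigram[w2] / N
    return math.log(p_pair / (p_w1 * p_w2)) if p_pair > 0 else float('-inf')

def bracket(n1, n2, n3):
    """Bracket together the adjacent pair with the higher association."""
    return 'LEFT' if pmi(n1, n2) > pmi(n2, n3) else 'RIGHT'

print(bracket('missile', 'defense', 'system'))  # -> LEFT
\end{verbatim}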
The most complex proposal to be made appears in Resnik~(1993:126),
and once again is based on the adjacency algorithm.
The \newterm{selectional association} between a
predicate and a word is defined based on the contribution of the word to the
conditional entropy of the predicate. The association between each pair of
words in the compound is then computed by taking the maximum selectional
association from all possible ways of regarding the pair as predicate and
argument. Whilst this association metric is complicated, the decision
procedure still follows the outline devised by Marcus~(1980) above.
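For reference, the measure is usually written as follows: the selectional
preference strength of a predicate $p$ is
$S(p) = \sum_{c} P(c|p) \log \frac{P(c|p)}{P(c)}$, and the selectional
association of $p$ with a class $c$ is that class's share of it,
$A(p,c) = P(c|p) \log \frac{P(c|p)}{P(c)} / S(p)$. The Python sketch below
computes these quantities from toy probability tables; the classes and
numbers are invented.
\begin{verbatim}
# Sketch of Resnik's (1993) selectional association, computed from toy
# probability tables. Real estimates come from a parsed corpus and a
# concept taxonomy; the classes and numbers here are invented.

import math

prior = {'metal': 0.01, 'other': 0.99}                 # P(c)
conditional = {('made-of', 'metal'): 0.30,             # P(c | p)
               ('made-of', 'other'): 0.70}

def strength(p):
    """Selectional preference strength S(p), a KL divergence."""
    return sum(q * math.log(q / prior[c])
               for (pred, c), q in conditional.items() if pred == p)

def association(p, c):
    """Selectional association A(p, c): the class's share of S(p)."""
    q = conditional[(p, c)]
    return q * math.log(q / prior[c]) / strength(p)

print(round(association('made-of', 'metal'), 3))       # -> 1.312
\end{verbatim}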
Resnik~(1993:128) used unambiguous noun compounds from the parsed
\publicationname{Wall Street Journal} corpus to estimate the association
values and analysed a test set of around 160 compounds. The accuracy of
the basic algorithm was 66\%, but by adjusting a threshold on the association
value and selecting a default (bracket the first two nouns together)
when the value fell below this, the accuracy was raised to 73\%.
This outperforms the baseline strategy of
always choosing the default, which achieves 64\% accuracy.
All four existing statistical methods for parsing noun compounds use
adjacency algorithms. Even being generous, the best performance
achieved is 73\% on three word compounds.
In the experimental work on parsing noun compounds presented in the first
half of chapter~\ref{ch:experimental}, I will propose a model that does
not follow the adjacency algorithm and compare this empirically
to an equivalent adjacency algorithm.
The new algorithm will emphasise dependency relations rather
than constituents.
It is noteworthy that Bod's~(1993) data oriented parsing scheme (see
section~\ref{sec:sn_grammars}), if extended to include lexical items in
subtrees, would enact a variant of the adjacency algorithm when presented
with three word noun compounds that have not appeared in the training
corpus. If $n_1 \, n_2 \, n_3$ did not appear in the training corpus,
and [$n_1 \, n_2$] or [$n_2 \, n_3$] did,
then the algorithm would choose to bracket
together whichever pair appeared more frequently.
Once again, the statistical parsing model is based on constituency,
just as the adjacency algorithm dictates.
Finally, I should mention one more recent study aimed at parsing
noun compounds by Abe~\etal~(1995). This work does not give
a specific algorithm for analysing noun compounds; instead,
their program seeks to learn the acceptability of noun pairs as compounds,
leaving the use of this information up to the parser.
Given a set of two word compounds, it learns a binary relation.
That is, for each pair of nouns it learns whether or not that pair is a
possible noun compound (this is in contrast to the measures given by
Resnik,~1993, and Liberman and Sproat,~1992, which yield a graded
score). It is not possible to conclude how useful their results are in parsing
noun compounds.
\subsection{Brief Review}
The second half of this chapter has reviewed the existing work on noun
compounds. We have seen that they are both common and ambiguous, and
thus pose an important challenge to \nlp. Many compounds are
productive, requiring a dynamic mechanism to process them, and doing so
involves substantial knowledge, especially semantic expectations.
These characteristics make noun compounds an ideal medium through
which to explore statistical language learning designs.
The existing computational work on syntactic and semantic analysis
of noun compounds has been either knowledge-based,
and thus limited by the availability
of suitable hand-coded knowledge, or statistical. In the latter case
only parsing has been attempted and then with limited success.
All existing statistical parsers have been based on one scheme,
the adjacency algorithm.
In chapter~\ref{ch:experimental}, I will report a series of experiments
on both parsing and semantic analysis of compound nouns. The parsing
model differs from the adjacency algorithm in being based on
dependency relations and is motivated by the theory of statistical
\nlp\ to be proposed in chapter~\ref{ch:md}. The semantic
analysis model is also motivated by this theory and represents
the first application of statistical language learning techniques
to this problem.
\chapter{Meaning Distributions}
\label{ch:md}
\section{Introduction}
In this chapter I will give a theory of
automatic natural language interpretation. The goal of the theory is to
provide an architecture for interpretation tasks within which several
independent components sit. Naturally the overall performance of
systems designed according to this theory will depend heavily on these
components. Nonetheless, the theory does not specify these components
in any more detail than is necessary to precisely describe the
architecture.
The components are
\begin{itemize}
\item a semantic representation,
\item a probabilistic model of this representation, and
\item a lexico-grammatical analyser.
\end{itemize}
While I will discuss certain requirements placed by the theory upon these
components (in particular the functions they are to perform) the theory
will not make any commitments to particular internal architectures for
these components. In applying the theory many additional decisions
must be made regarding these components, an exercise which will be
carried out in the experimental work in chapter~\ref{ch:experimental}.
In the discussion below I will consider the unit of interpretation to be the
sentence (or, interchangeably, the utterance). However, the theory is
intended to apply generally to many linguistic levels: clauses, phrases or
even individual words. If a suitable semantic representation of
paragraphs were available, perhaps even units larger than sentences
could be addressed.\footnote{Zadrozny and Jensen~(1991) give one
semantic representation for paragraphs.} The smaller the unit, the
greater will be the influence of context on interpretation, but the basic
architecture remains applicable.
In chapter~\ref{ch:experimental}, the experimental work takes the
unit of interpretation to be the noun compound.
Finally, the rationale behind the theory is to suggest appropriate
designs for real natural language processing systems. I am not making
the claim that systems built according to the theory {\em must\/} outperform
alternative designs. Rather the theory represents a synthesis of a wide
range of ideas, including some derived from culturally determined
viewpoints, and as such only serves to point out one promising way forward.
The ultimate worth of such
theories should only be judged empirically: by building various
alternative systems and comparing their performance. In
chapter~\ref{ch:experimental} I will make a range of comparisons
between a compound noun parser based on the theory and other such parsers.
Keeping this qualification in mind,
the inspiration for the theory derives from a conventional
metaphor that views human communication as the transfer of pieces
of knowledge, as will be discussed
in section~\ref{sec:md_meaning}.
According to this metaphor,
knowledge of meanings (semantic expectations) must play
a central role in guiding interpretation.
However, the idea that such knowledge might be
responsible for control of the interpretation process has been
pursued at great length in the past and is commonly deemed
to have failed.
Section~\ref{sec:md_revising} discusses
why the present theory differs from this earlier work and provides
a rough sketch of the theory.
In section~\ref{sec:md_priors}, a probabilistic representation
of semantic expectations will be proposed. The section will also
discuss the functions required of such a representation by the
theory and argue that a representation of this kind is both
currently lacking and ultimately necessary for statistical \nlp.
Semantic expectations are derived from a wide variety of
sources. While the theory treats all sources of expectations
uniformly, an implementation must choose which sources it will
model. Section~\ref{sec:md_context} considers several possible
types of contextual information that provide semantic
expectations, ranging from local discourse effects to world
knowledge.
The core of the theory will be presented
in section~\ref{sec:md_linking}. The theory assumes that
a lexico-syntactic module is available that supplies
a bidirectional many-to-many mapping between utterances
and semantic forms. Thus, for any utterance, there is a set
of allowed meanings. The theory claims that the most probable
allowed meaning is the best interpretation, regardless
of the particular syntactic configuration. That is,
syntactic forms inherit likelihoods directly from semantic
ones and do not have independent probabilities of their own.
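In symbols, if $G$ maps an utterance $u$ onto its set of allowed meanings,
the decision rule is simply $\hat{m} = \arg\max_{m \in G(u)} \Pr(m)$. The
following Python skeleton fixes the shape of this architecture; the
analyser and the probabilistic model are invented stubs standing in for
the components described above.
\begin{verbatim}
# Skeleton of the interpretation architecture: a lexico-grammatical
# analyser supplies the allowed meanings of an utterance, and the most
# probable allowed meaning is returned. Both components are stubs.

def allowed_meanings(utterance):
    """Stub lexico-grammatical analyser: utterance -> allowed meanings."""
    return {('CAUSE', 'earthquake', 'casualties'),      # invented
            ('PURPOSE', 'earthquake', 'casualties')}    # candidates

def meaning_probability(meaning):
    """Stub probabilistic model of the semantic representation."""
    return {'CAUSE': 0.7, 'PURPOSE': 0.1}.get(meaning[0], 0.0)

def interpret(utterance):
    """The decision rule: the most probable allowed meaning."""
    return max(allowed_meanings(utterance), key=meaning_probability)

print(interpret('earthquake casualties'))
# -> ('CAUSE', 'earthquake', 'casualties')
\end{verbatim}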
Section~\ref{sec:md_linking} also discusses the possible
advantages of adopting this theory and some of the implications
of it. Finally, section~\ref{sec:md_register} covers
some important issues regarding the selection
of training corpus register, in light of the theory.
\section{Meaning As Pilot}
\label{sec:md_meaning}
Automatic natural language processing systems necessarily
rely on a control strategy that is ultimately responsible
for deciding which knowledge resources
get brought to bear on the task.
One often reads that traditional \nlp\ consists of a pipeline, beginning
with morphological analysis, passing through syntactic analysis and
thence semantic interpretation, and finally undergoing discourse and
pragmatic processing. This is an architecture that comes from taking
linguistic levels literally.
Still, there is no question
that it has proven useful (it can be found in many implemented systems,
even as far back as Green~\etal,~1961). It is probably not coincidental
that the first two modules of the pipeline are concerned with morphology
and grammar, certainly better understood aspects of language than
semantics and pragmatics. The placing of syntax prior to semantics can
be seen as an attempt to gain leverage from the work on grammar in
linguistics.
Simplifying greatly, traditional \nlp\
holds grammar to be the guide of the language understanding process
and grammatical knowledge is given precedence over semantic
knowledge.
However, grammar rules alone are not sufficient to determine a unique
syntactic analysis of a text,
at least not with our current conception of grammar.
In fact, syntax rules substantially underconstrain syntactic structure.
The most obvious symptom of
this is the staggering number of parses admitted by broad coverage
grammars. Briscoe and Carroll~(1993:42) report that dictionary
definitions~30 words in length \shortquote{often have many hundreds or
even thousands of parses} under the \progname{alvey natural language
tools} grammar. While this might be attributed to a failure of the
particular grammar, the field still lacks a broad-coverage parsing system
which can fully disambiguate syntactic structure. If there was any
uncertainty in the first place, this fact gives strong evidence that
substantial ambiguity remains after the application of syntactic
knowledge, and that a control strategy that pursues syntax to its limits
prior to any semantic processing is, at best, inefficient.
A number of researchers (for example, Mellish,~1985, Hirst,~1987,
Haddock,~1987) have attempted to address this problem by
interleaving syntactic and semantic processing, with promising
results. Interleaved systems give control alternately to
one component and then the other.
While this affords a greater degree of integration, syntax
is still considered to be an autonomous component.
An alternative control strategy is inspired by the conception that the
purpose of language is the communication of meaning and that therefore
the structure of language should primarily follow the contours of
meaning. This idea has a certain intuitive elegance and has been the
foundation of a substantial body of research which can be lumped
together under the title \newterm{semantically driven nlp}. Under this
approach, primacy is given to semantic knowledge. It is knowledge of
the meaning of the language that shapes the interpretation of it, with
grammar playing a minor part, if any at all.
The early champions of this approach were Schank~(1975) and his students.
The theory of \newterm{conceptual dependency}, and the ensuing
processing systems based on this theory,
aimed to represent meaning independently of
language by composing a small set of semantic primitives. By means of
this representation, expectations about the meanings of words could be
recorded. These expectations played the role of pilot, guiding the
interpreter in arriving at a representation of the meaning of the sentence.
The control strategy was:
\begin{citedquote}{Schank,~1975:13}
[The parser] attempts to recognise each word, and if it can, puts it into its
semantic model. Then if the meaning is known, a good idea of what
other meanings are going to be around it can be established because
predictions can be made on the basis of the meaning of that word.
\end{citedquote}
Another early semantically driven \nlp\ theory was Wilks's~(1975)
\newterm{preference semantics}. His system relied on semantic templates
defined in terms of~70 semantic primitives. Analysis in this system was
driven by matching these templates to maximise \newterm{semantic
density}, another representation of expectation about meaning.
Despite the large volume of work in semantically driven \nlp, the notion
of a primarily semantic control strategy is no longer popular. For a
variety of reasons, which I will come to in a moment, the entire paradigm
is now regarded as an error of judgement. Yet, the reason for discussing
these approaches here is the fact that the strategy I will propose shares
several key elements with them. I claim that some aspects of these
approaches are worthwhile, or even necessary, and it is only the
historical inclusion of other faulty elements under the same banner that
has led to their failure (both perceived and real).
In particular, this work shares two important bases with the older
semantically driven \nlp\ approaches.
\begin{itemize}
\item Meaning is represented by simple combinations
of a set of primitive elements that are relatively language independent,
symbolic and finite.
\item Primary control over the analysis process is driven by expectations
about the possible meanings of the text being processed.
\end{itemize}
The first of these is hardly controversial in \nlp; the crux of the
matter lies with what we consider to be simple. Meaning
representations formed from the combination of symbolic and language
independent elements appear in almost every \nlp\ system. However, it
might be claimed that a certain amount of complexity in semantic
representation is necessary before that representation can prove
worthwhile (or indeed even language independent). The strategy that I
will propose requires the meaning representation to be simple for certain
mathematical reasons which will become clear in succeeding sections.
Thus, this work shares with earlier semantically driven \nlp\ approaches,
a reliance on the simplicity of the meaning representation, a property not
required by other modern approaches.
The second point requires somewhat more justification than the first.
For this reason, I will now briefly review the motivations for
salvaging the idea that semantic expectations should play a central role.
These stem from culturally determined intuitions about human language
processing and therefore constitute no more than plausible suggestions.
In what follows, I do not intend to
suggest that the proposed control strategy bears any resemblance to
psychological reality. The ultimate test of the proposal lies in the
performance of \nlp\ systems that employ it.
\subsection*{Communication as transfer}
Most work in \nlp\ is based on a metaphor
in which communication is regarded as the transfer of knowledge.
Beliefs, propositions, ideas and dispositions are regarded as
objects to be copied from the writer to the reader. The metaphor is both
entrenched and extensive. Most natural language understanding systems
make use of some form of representation of these objects, whether they are
algebraic (like logical forms), graphical (like semantic networks), or
topological (like nodes of a neural network). None of this is surprising;
the metaphor is a well-accepted convention of our culture (Lakoff and
Johnson,~1980:10, use it as their example of an entrenched
conventional metaphor, calling it the \newterm{conduit} metaphor).
One exception is the work on
statistical machine translation of Brown~\etal~(1993), in which
any representation of meaning is avoided. The
omission of a first class representation of such objects is exactly what I
wish to argue against, and for this reason, I will explore the metaphor a
little more closely in this section. Again, I do not intend to make any
claims about how humans communicate. Rather, I want to accept certain
assumptions about language (assumptions which are almost universally
adopted by \nlp\ systems anyway) and from them deduce some simple
principles for machine emulation of the interpretation process.
According to the metaphor, communication assumes two sides, each
regarded as a container in the sense that they may hold items of
knowledge. These items of knowledge are representations. They may be
{\em about\/} states, objects, actions and so on, that appear in the world,
or they may be {\em about\/} other items of knowledge, but they are
obviously not the same as elements of the world. So there are two
distinct sets: senses (the items of knowledge) and referents (the elements
of the world which knowledge is about).\footnote{The metaphor seems
to assume that senses can exist independently of language; that is,
intelligent beings can reason without necessarily being able to
communicate.}
In the simplest, declarative case, the source (writer/speaker) possesses some
item of knowledge that the recipient (reader/listener) does not, and the
objective is to create a copy of this for the recipient. Since items of
knowledge are not tangible, a somewhat indirect procedure is used to
perform the transfer. This is the point where language becomes
necessary. Language provides a third set of elements, a
system of signs, allowing the source to indicate the item of
knowledge being communicated. Just as senses are not identical with their
referents, neither are signs identical with their senses.
Rather, each linguistic element indicates some aspect
of the intended sense and the combination of these indications constrains
the interpretation; that is, the surface linguistic forms narrow down the
semantic {\em space}.
So it is part of the metaphor that individual meanings (senses)
can be thought of
as distinct points within a space. Now to make this feasible, we need to
distinguish that which is {\em meant\/} from that which is merely
logically {\em entailed} or pragmatically {\em connoted}.
Each communication conveys a single meaning, but from this
meaning the recipient may make further implications. These
implications may or may not have been intended by the source. But,
regardless of whether the implications were intended, the communication
is only perceived to have one distinct meaning in the truth-functional
sense; its meaning is just the propositional content. Despite the fact that
this perspective ignores a host of pragmatic effects, I will adopt this
simplification throughout, under the assumption that it will make little
difference to the performance of \nlp\ systems, though this remains an
empirical question.
Now, returning to the metaphor, by composing a series of linguistic
elements, the source overlays more and more constraints on the possible
meanings, narrowing the space with each. If communication is
successful, the accumulation of constraints eventually describes fully the
item of knowledge being transferred. If the recipient successfully
interprets each linguistic element, and composes the corresponding
constraints correctly, then the resulting composition will indicate the
intended meaning. A copy of the item of knowledge has therefore been
successfully made.
Naturally, the space of possible meanings is extremely large and
complex. Yet a few short syllables are usually sufficient to indicate an
arbitrary point in it. This, according to the metaphor, can only be
explained by the effects of context. The source and recipient already
share a great deal of knowledge about the likely intended meanings.
The context has already selected a relatively tiny portion of the space,
allowing relatively few constraints to fully indicate the information desired.
While the metaphor has little to say about how context might be
represented, it attributes enormous power to it as a communicative force.
Out of all the infinite variety of possible human meanings, somehow the
context eliminates all but a few salient meanings. It is then only
necessary for the source to distinguish which of these few is the desired
one.
The Gricean maxim of Quantity (Grice,~1975:45) states that utterances
should be concise. If we accept this principle as well as
the metaphor of communication as transfer, then it is a principle of
language that all utterances should only distinguish between the
relatively few {\em salient\/} meanings. Any additional linguistic signs
which serve to rule out meanings already disallowed by context are
redundant, and contradict the principle of concision. Therefore, the
language produced by the source must distinguish between all and only
those possible meanings allowed by the context. Summarising, the language
production task begins with the intended meaning {\em plus\/} a small set
of contextually relevant semantic possibilities. The goal is then to use
appropriate linguistic knowledge to indicate the intended meaning,
distinguishing it only from the other relevant possibilities.\footnote{This
view is explicitly adopted by Dale~(1989) as the basis of
his generation algorithm for referring expressions.}
Correspondingly, the language comprehension task requires the use of
the same linguistic knowledge to infer which of these possibilities is the
intended one. From this standpoint, knowledge of
the semantic possibilities is crucial to language understanding. It is vital
that the recipient be able to identify the small set of contextually relevant
semantic possibilities that the source is distinguishing amongst.
Therefore, adopting the conventional metaphor of natural language
understanding, and assuming the Gricean maxim of concision, leads us to
the conclusion that knowledge of semantics must play a central role in
comprehension. It should be the resource that is applied first, with
grammatical knowledge serving as the final step in refining meaning.
Yet this was the principal claim of the early
semantically driven \nlp\ work. It
is therefore natural to ask why, after so many years, it is still worth
pursuing this idea.
In the next section, I will argue
that the failure of these earlier works can be attributed to weaknesses of
other ideas held concurrently by early semantically driven \nlp\
proponents and that therefore it is still worth taking up the semantically
driven banner.
\section{Revising Semantically-driven NLP}
\label{sec:md_revising}
I claim that two design flaws were responsible for the failure of early
semantically driven \nlp\ systems to scale up. They are:
\begin{itemize}
\item dependence on a small set of semantic primitives; and
\item rejection of the need for syntactic knowledge.
\end{itemize}
In addition, I claim that two points of the philosophical stance taken by
semantically driven \nlp\ researchers were responsible for warranted
skepticism of their research. They are:
\begin{itemize}
\item the claim that semantically driven systems are models of human
language behaviour; and
\item the claim that the {\em referents\/} of utterances hold the key to
guiding language understanding (as opposed to their senses).
\end{itemize}
To avoid criticism on the basis of these last two points,
it is only necessary to avoid making these two claims.
Regarding the first claim, I have already
stated that the evaluation of the theory presented here is purely on the
grounds of empirical performance, with no pretense to psychological
validity. As for the second, the theory below carefully distinguishes
senses from referents, and relies on knowledge of the former for
controlling analysis. I will not discuss further why these claims warrant
scepticism, focusing instead on the two design flaws.
The semantically driven parsers developed by Schank and his students
were based on the conceptual dependency knowledge representation
(Schank,~1975:Ch.~3). At the heart of this representation were~11
primitive event types which could be linked through case roles with symbols
representing objects in the world, and with one another through causal
and instrument links. Similarly, Wilks's~(1975) preference semantics
parser was based on~70 semantic primitives. It was assumed that all
meanings could be captured by combinations of these primitives. This is
a highly questionable assumption.
To start with, if there were such primitives, it seems reasonable
that our languages would have special words to indicate them,
words which children would be likely to acquire very early.
Concepts such as \concept{sign} (a communication
event) and \concept{mbuild} (a mental construction) are certainly not
like this. Also, we would expect the construction of human language
dictionaries to be much simpler than it is in reality. Lexicographers
would only need to define the words for the semantic primitives in a
special section and then restrict themselves to using those words
everywhere else. In fact, the Longmans Dictionary of Contemporary
English (Proctor,~1978) is constructed by attempting to use as few
defining words as possible. Even if only stems are counted, the defining
vocabulary exceeds~2,000 words and many of these words have several
distinct senses (each sense presumably corresponding to a separate
semantic primitive).
So, even if the notion of semantic primitives is in principle workable, the
versions of it adopted by the early semantically driven \nlp\ researchers
were far too spartan. They could not even approximate the richness of
meaning expressed by natural language. It is therefore not surprising that
their parsing systems could not handle a broad-coverage input and came
to be regarded as toy systems.
What I am arguing against here is the limitation to a {\em small\/} set of
semantic primitives. I do not believe that meaning cannot be represented
by the combination of a finite set of semantic elements; in fact if there is
to be any symbolic natural language understanding, then such
representation must be possible. Rather, it seems clear that the number
of such elements needs to be large, of the order of thousands at least.
It is even conceivable that the set of such primitives differs from person
to person. A method of representing this is provided by the notion of
\newterm{episodic memory} developed in Lebowitz~(1983), where new
concepts are built in response to new texts. Lehnert~(1988) applies this
idea to compound nouns, placing emphasis on the dependence of
compound meaning on previously seen semantic structures. The models
of compound nouns presented later in this thesis also use previously seen
semantic structures to infer interpretations for compound nouns. While
they are based on semantic primitives of a kind, the number of primitives
is far larger than those used in early semantically driven \nlp; there are
over a thousand of them for representing noun meanings alone.
The second design flaw of early semantically driven \nlp\ was the
complete rejection of syntactic representation:
\begin{citedquote}{Schank,~1975:12}
We have never been convinced of the need for grammars at all.
\end{citedquote}
\begin{citedquote}{Wilks,~1975:265}
Preference Semantics contains no conventional grammar for analysis or
generation: its task is performed by strong semantics.
\end{citedquote}
As these quotes
show, the enthusiasm for adopting semantic knowledge as the main
resource for natural language comprehension led to the
position that syntax could be dispensed with. Yet Lytinen~(1987)
showed that semantically driven systems contained syntactic
information disguised as semantics. At the same time as arguing that
syntactic knowledge was unnecessary, the advocates of semantically
driven \nlp\ were forced to make many
grammatical distinctions to get their parsers to work. Because their
systems lacked a separate syntactic representation,
syntactic distinctions became intertwined
with the semantic representation. Lytinen's solution was to have both a
semantic and syntactic representation working in tandem, with neither
having priority.
Another well-known and vocal opponent of semantically driven \nlp\ was
Ritchie~(1978). He argued that
\begin{citedquote}{Ritchie,~1978:273}
\dots notions of semantically-based grammar are not workable in the
long term. This is not an argument in favour of traditional syntax, but
rather an attempt to prevent an inadequate system (autonomous syntax)
being replaced with an equally unrealistic extreme (autocratic
semantics).
\end{citedquote}
Again, the central criticism lay with the rejection of grammatical
knowledge.\footnote{Ritchie~(1983) carried the argument further and also took
issue with the two philosophical points mentioned above: cognitive
plausibility and the use of {\em referential} semantics.}
Since early semantically driven \nlp\ systems suffered from these
flaws, it is reasonable to attribute their current
poor status to just these sources. It is not necessary to conclude that the
notion of a semantic representation being the primary controlling influence is
itself flawed; early systems could just as easily have failed for other
reasons. It is therefore reasonable to pursue further semantically driven
systems, provided that
the other reasons for failure are avoided in some fashion.
In the theory I will develop below, syntax plays a crucial role;
this is a very different story from that
adopted by early semantically driven \nlp\ researchers.
\subsection*{A new semantically driven approach to NLP}
Following the ambitious claims of the early work, the~1980s saw a
return to the syntactically driven paradigm for \nlp. By this time, the
need for semantic knowledge was widely recognised and many efforts
were focussed on mechanisms for applying semantic knowledge (for
example the interleaved approaches mentioned above, like Hirst,~1987).
But the notion of a purely
semantically driven natural language system had fallen into disrepute.
More recently there has been a resurgence of research that places
emphasis on semantic expectations in just the same way as early work.
Hahn~(1989) puts forward detailed arguments in support of semantically
driven processing for text summarisation, even going so far as to
dispense once more with syntax.
\begin{citedquote}{Hahn,~1989:357; his emphasis}
Parsing natural language is achieved [by] mapping natural language
utterances {\em directly into semantic representation structures\/} without
considering a logically separated, intermediate level of syntactic
representations.
\end{citedquote}
However, his main points centre around the efficiency of only analysing
those parts of a text that are necessary for the understanding task at hand.
This doesn't preclude a separate syntactic representation; it only requires
that primary control of the analysis is driven by semantics.
Mauldin~(1991) uses a semantically driven parser based on an extended
form of conceptual dependency to \shortquote{push [information]
retrieval performance beyond the keyword barrier}~(Mauldin,~1991:6).
The resulting system shows promise, but is limited by the availability of
scripts and frames, which must be hand-coded. Perhaps then these
knowledge resources could be acquired by statistical methods.
Hobbs~\etal~(1993) propose to integrate syntactic and semantic
processing by reformulating comprehension in terms of abduction,
claiming that almost every remaining problem in natural language
interpretation can be solved by use of weighted abductive inference
(including noun compound interpretation, see section~\ref{sec:cn_knowledge}).
The abductions proposed are essentially the application of knowledge of
semantic expectations, although the issue of how these might be acquired
in the first place is not addressed.
Even Wilks's preference semantics parser has been reincarnated in a
new-look broad-coverage form called \progname{premo} (Slator and
Wilks,~1991), which interleaves semantic expectations with syntactic
knowledge.
In all this work, the controlling influence of semantic expectations is
recognised, but no longer is there a reliance on a small set of semantic
primitives, nor is grammatical knowledge denied its proper place
(excepting Hahn,~1989).
In the remainder of
this section, I will roughly sketch a revised version of semantically
driven \nlp. The details of this new version will be the topic of the rest
of this chapter.
The approach I am proposing I will call \newterm{meaning
distributions}. According to this approach, the process of language
interpretation begins with knowledge of the possible intended meanings
in the form of semantic expectations. We have at our disposal complete
information about which meanings are more or less expected across the
entire meaning space.
We know, for example, that a sentence describing the diet
of animals is to be expected in an encyclopedia and that one
describing the diet of shoes isn't.
Given two arbitrary propositions, we can choose which of the two is
more {\em expected}.
It isn't difficult to see that this knowledge is highly context dependent.
Intended meanings will vary from situation to situation, and so too
should the semantic expectations. For example, we expect statements to
be relevant to the topic at hand, whatever it might be, so as topics shift so
must expectations. As it stands the meaning distributions approach
admits that such shifts exist and that they need to be modelled, but does
not suggest how this might be achieved. However, I will have more to
say about these issues when I consider context dependence in
section~\ref{sec:md_context}.
At this point, I have already assumed a great deal: a complete ordering of
all possible meanings on the basis of how expected they are. In fact,
without having examined the text we wish to interpret, we are already
quite some way towards an interpretation. The semantic expectations
allow us to delimit the likely interpretations to a small set of contextually
relevant meanings.
It is here that lexical and syntactic knowledge become important. The
meaning distributions approach
assumes that sufficient lexical and syntactic machinery is in
place to allow input text to be mapped onto its possible meanings. That
is, there is a mapping available between, on the one hand, grammatical
constructions of words and, on the other, the space of meanings. It does
not assume that this mapping is one-to-one; otherwise we would already
have solved the interpretation task. But it does assume that it is, in
principle, fully computable and, for mathematical reasons which will be
made clear below, that the mapping is fairly simple.
Thus, given a linguistic form, it should be possible to apply
lexical-semantic correspondences and grammatico-semantic relations to yield a
subset of the space of possible meanings: the set of meanings that could
ever lead to utterance of the linguistic form. In practice it will be
important to avoid explicitly constructing this subset, but conceptually a
particular subset exists and can be computed in principle.
The central claim of the meaning distributions approach is this:
whichever meaning within the subset of meanings allowed by the
lexico-grammatical mapping is the most expected, is the intended meaning.
Thus, the task of interpreting an utterance becomes one of finding the
single most expected meaning out of those allowed by the
lexico-grammatical knowledge.
\begin{examples}
\item Moles feed on insects. \label{eg:md_revising_moles}
\end{examples}
To illustrate, consider the sentence in example~\ref{eg:md_revising_moles}.
Suppose that we have a very simple lexical-semantic mapping as follows.
\begin{itemize}
\item \lingform{moles} corresponds to two concepts:
\concept{mammal\_mole} and \concept{chemical\_mole}.
\item \lingform{insects} is unambiguously the concept \concept{insect}.
\item \lingform{feed} can be mapped to the predicate \predicate{eat/2}
or the concept \concept{food}.
\end{itemize}
Any serious grammar would find several parses (or partial parses at any
rate) of the sentence, but for simplicity assume that only one syntactic
reading is allowed, so that only two possible meanings are permitted by
lexical and syntactic knowledge. These are:
\begin{itemize}
\item \predicate{eat}(\concept{mammal\_mole}, \concept{insect}), and
\item \predicate{eat}(\concept{chemical\_mole}, \concept{insect}).
\end{itemize}
Normally, in a large scale system, the set of meanings allowed by the
lexico-grammatical mapping would be much larger than in this
illustrative example.
Now semantic expectations will serve to select the first of these
meanings for two reasons. First, in any reasonable occurrence of this
sentence, the topic of discussion will be animals and probably moles, so
that there is a high semantic expectation for the appearance of the
concept \concept{mammal\_mole}. Second, even ignoring context,
there is a general expectation that the agent of the predicate
\predicate{eat/2} will be an animal or person, making this reading preferable
to the one in which the agent is a chemical unit.
So, in summary, the meaning distributions approach starts with semantic
expectations which place a preference ordering on the space of possible
meanings. A mapping provided by the lexicon and grammar then
creates a set of constraints on the intended meaning derived
from the utterance. Finally, selecting the
most highly expected meaning within these constraints yields the desired
interpretation. In practice, the existence of the expectations is intended
to make computation of the mapping efficient, avoiding the need to
construct unnecessary pieces of the map.
In this sketch, I have referred often to the notion of a semantic
expectation. I have not given any notion of what form these expectations
might take, save that they provide an ordering on the space of
meanings. In the next section I will take up this task by casting semantic
expectations in the form of statistical distributions.
\section{Meaning Expectations as Prior
Probabilities} \label{sec:md_priors}
\subsection*{Probabilistic expectations}
The representation for semantic expectations in the meaning distributions
approach is a probabilistic distribution. Formally, we say that there is a
random variable $M$ ranging over the space of possible meanings, $\cal
M$, whose distribution models the probability of all possible intended
meanings of all possible utterances. It is important to emphasise that the
distribution is context-sensitive. Given a certain context, a distribution is
computed that gives the likelihood of the source having each possible
meaning in mind.
To take an example, consider the context in which a speaker
has just been asked the question \scare{How are you?}
The utterance made by the speaker in response to this question
can be intended to mean a variety of things, and each of these possible
meanings has some probability. For example, it is highly likely that the
response will convey the fact that the speaker is quite normal healthwise.
Responses conveying great happiness or depression are also likely, while less
likely are those explaining complex problems and it is only remotely
possible that the response will be irrelevant to the question. In principle,
the distribution of $M$, while potentially very complex, always exists
whenever the speaker is about to make an utterance.
It is vital to distinguish here between the probability distribution of {\em
utterances\/} and that of {\em intended meanings}. There might be any
number of different utterances which could be used to convey a certain
meaning, or even several possible interpretations of one utterance.
Meaning distributions are distributions over the space of {\em
meanings}, $\cal M$; they are {\em not\/} distributions over possible
utterances. In the example above, the proposition that the speaker is quite
normal healthwise could be conveyed by any of:
\begin{examples}
\item
\begin{subexamples}
\item I'm fine.
\item Well, thank you.
\item Not bad.
\item Pretty good.
\end{subexamples}
\end{examples}
or any of many other variations.
While the probability of each of these utterances is perfectly well-defined,
such probabilities are not {\em semantic\/} expectations. Meaning
distributions represent semantic expectations and measure the likelihoods
of different intended meanings, not different utterances. Every one of
the above responses has essentially the same meaning.
It is also vital to distinguish between the probability distribution of
intended {\em senses\/} and that of intended {\em referents}.
The world contains many objects and, were it possible to randomly
sample the world, different kinds of objects would appear with
different probabilities. However, this distribution is not the same as the
distribution of intended meanings. For example, although unique gifts
are extremely rare, department store catalogues refer to them frequently.
Similarly, the distribution of
commands issued to a database management system is different from the
distribution of information stored in it. A newspaper report of a death is
most likely to refer to a murder, even though far more people die from
illnesses. Meaning
distributions {\em depend\/} on referent distributions, but the latter are
only one factor influencing the former.
The idea that semantic expectations can be represented by probability
distributions is closely related to information theory (developed by
Shannon and Weaver,~1949). According to information theory, the
amount of information contained in a message is measured by the change
in entropy between the prior and posterior probability
distributions.\footnote{The entropy of a discrete probability distribution
over $X$ is given by $-\sum_{x \in X} \Pr(x) \log \Pr(x)$.} In
the case of successful communication of a single proposition (one point
in the meaning space), the posterior probability distribution has
zero entropy; the recipient is certain of the intended
meaning. Therefore, the amount of information conveyed by the
message is simply the entropy of the prior probability
distribution.
Information theory says that
if we wish to minimise the length of the message needed to convey
this information then we
must know the probability distribution of possible communications. If
the recipient has strong semantic expectations, the entropy of the prior
distribution will be close to zero, and therefore the size of the message
can be small and the utterance short. If there are only weak semantic
expectations, the utterance will need to be longer.
This is in keeping with the position taken above on language
comprehension: to adhere to the Gricean maxim of concision requires
both the source and recipient to have detailed knowledge of the prior
probability distribution, that is, semantic expectations.
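To make this concrete, here is a small worked example with invented
numbers. Suppose the context leaves four salient meanings. If all four
are equally expected, the entropy of the prior distribution is
$-4 \times \frac{1}{4} \log_2 \frac{1}{4} = 2$~bits, so the utterance
must supply two bits of distinguishing information. If instead the
expectations are strongly skewed, say with probabilities $0.97$,
$0.01$, $0.01$ and $0.01$, the entropy is
\begin{displaymath}
-(0.97 \log_2 0.97 + 3 \times 0.01 \log_2 0.01) \approx 0.24 \mbox{ bits}
\end{displaymath}
and a far shorter utterance suffices. The figures are hypothetical, but
the direction of the effect is exactly the economy described above.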
So, to recap, the approach advocated here represents semantic
expectations as probability distributions over the space of possible
meanings, $\cal M$, which we call meaning distributions. These
distributions are necessarily highly context sensitive, since semantic
expectations must vary with context. In what follows I will suggest that
these distributions, in combination with lexical and grammatical
constraints, may be used to build natural language understanding
systems. So far, though, I haven't said anything about how these
meaning distributions might be constructed. Certainly the meaning
space $\cal M$ is extremely complex, and the computation of the
probability distribution of the random variable $M$ for even the most
simple of contexts might seem like an even more complicated problem
than the natural language understanding task itself. Therefore I will now
turn to the topic of defining models for such distributions.
\subsection*{Conceptual models}
Meanings have structure, a fact I have ignored up to this point.
Furthermore, we don't generally have an explicit construction
of the set of all possible meanings.
Any \nlp\ system that did would have to be extremely simple.
Since the meaning distributions approach involves assuming access to a
probability distribution over this set, it is necessary to justify how this
might be possible, even in principle. The question of how it might be
possible in practice I will delay until section~\ref{sec:md_linking}.
The problem of representing meaning has been the subject of decades of
research and even a brief overview is beyond the scope of this thesis.
Instead, I will assume the properties listed below of the underlying
knowledge representation. These properties do not impose any
constraints of importance, being exhibited by most popular knowledge
representation schemes, and by first-order predicate calculus. It is true
that the particular choice of knowledge representation will have a
significant effect on the resulting probabilistic model (but then choosing
the correct knowledge representation has long been recognised as crucial
throughout artificial intelligence). My aim here is to justify how, in
principle, a meaning distribution could be intensionally constructed.
The knowledge representation scheme must:
\begin{enumerate}
\item be symbolic;
\item generate a recursively enumerable set of expressions;
\item build complex expressions from simpler ones by well-defined
methods of composition; and
\item have canonical forms.
\end{enumerate}
These are precisely the properties possessed by a context free grammar
which allow the construction of probabilistic context free grammars.
As we have
seen, such grammars assign a probability to every possible derivation
generated by the grammar by assuming that the probability of a
derivation is the product of the probabilities of the steps in the
derivation. That is, each rule application is assumed conditionally
independent of other rule applications. This assumption is simply
untrue: for example, noun phrases in subject position are likely to have
very different structures to those in object position. It is simply
impossible for a probabilistic context free grammar to assign
probabilities to parse trees that are accurate reflections of the probability
of observing those parse trees. But as long as the probability
assignments are sufficiently accurate to distinguish highly unlikely
structures from plausible ones, such grammars are useful.
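As a toy illustration (the grammar fragment and the rule probabilities
here are invented), suppose $\Pr(\mbox{S} \rightarrow \mbox{NP VP}) = 1$,
$\Pr(\mbox{NP} \rightarrow \mbox{N}) = 0.4$ and
$\Pr(\mbox{VP} \rightarrow \mbox{V NP}) = 0.3$. The derivation of the
tree [S [NP N] [VP V [NP N]]] is then assigned probability
\begin{displaymath}
1 \times 0.4 \times 0.3 \times 0.4 = 0.048
\end{displaymath}
with the second factor of $0.4$ applying to the object noun phrase just
as it does to the subject, regardless of position. This is the
independence assumption at issue: adequate for separating implausible
structures from plausible ones, but blind to positional differences.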
By the same reasoning, given a knowledge representation scheme with
the properties above, it is possible to construct a probabilistic conceptual
model which assigns a computable probability to each element of $\cal
M$. Just as in the context free grammar case, to do this it is necessary to
give a method of calculating the probability of a complex expression in
terms of the probability of the simpler expressions that compose it. This
will involve making assumptions about the relationships between
distributions of different meaning elements, assumptions which may in
reality be false. Nonetheless, if the structure of the knowledge
representation scheme broadly reflects the relationships exhibited by
meanings, then the probabilities assigned by the conceptual model
should be useful in as much as they distinguish highly unlikely meanings
from plausible ones.
In the experimental work in chapter~\ref{ch:experimental},
a complete example of a
probabilistic conceptual model will be given for the semantics of
compound nouns. The form of such a model is heavily dependent on the
particular knowledge representation scheme on which it is based. Since
it is difficult to explore the construction of conceptual probabilistic
models further without making a commitment to a particular knowledge
representation scheme, I will only give a rough example here. The
example uses predicate calculus, but it should be clear that a similar
model could be given for any scheme meeting the conditions above.
We wish to assign a probability to each possible expression in the
meaning representation. For this example, we will suppose that
meanings are generated top-down. That is, a predicate is selected and
then the arguments of the predicate are chosen, conditionally dependent
on the predicate. Since arguments may themselves be predicates, the
process may recurse. While I have used formal notation in what follows,
I must emphasise that this is only a rough sketch. There are many
problems to be addressed when building a probabilistic
conceptual model (consider quantification for instance) and it is not my
intent to build a rigorous model here.
Let $P$ be the set of predicates, $A$ the set of atoms and
$\mbox{arity}(p)$ be the number of arguments of a predicate $p \in P$.
Call the disjoint union of predicates and atoms, $U$. The model then
has the following sets of parameters:
\begin{enumerate}
\item one parameter for each element of $U$ giving its probability
in isolation, $\Pr(x \mid \bot)$; and
\item one parameter for each triple $\langle p, i, x \rangle$
where $p \in P$, $1 \le i \le \mbox{arity}(p)$ and $x \in U$,
giving the probability that $x$ is the $i$th argument of $p$,
$\Pr(p(\ldots, X_i=x, \ldots) \mid p)$.
\end{enumerate}
For this to be a probabilistic model, we also require the following
constraints:
\begin{equation}
\sum_{x \in U} \Pr(x \mid \bot) = 1
\end{equation}
\begin{equation}
(\forall p \in P) (\forall i:1\le i\le \mbox{arity}(p)) \sum_{x \in U}
\Pr(p(\ldots, X_i=x, \ldots) \mid p) = 1
\end{equation}
Now by assuming conditional independence between arguments of a
predicate and making several similar assumptions, we can arrive at a
formula for computing the probability of an arbitrary expression.
\begin{equation}
\Pr(\lambda) = \left\{ \begin{array}{ll}
\Pr(\lambda \mid \bot) & \mbox{if $\lambda$ is an atom} \\
\Pr(\rho \mid \bot) \prod_{p(\ldots, X_i=x, \ldots) \in \lambda}
\Pr(p(\ldots, X_i=x, \ldots) \mid p) & \mbox{if
$\lambda$ has top predicate, $\rho$}
\end{array} \right.
\label{eq:pred_calc_model}
\end{equation}
In order to use the model in practice, it is also necessary to have a
method for acquiring estimates of these parameters. A general approach
to this problem will be suggested below in section~\ref{sec:md_linking}.
To illustrate the model, consider the example involving
\predicate{eat}(\concept{mammal\_mole}, \concept{insect}) above.
It is simple to apply equation~\ref{eq:pred_calc_model}.
\begin{eqnarray*}
\lefteqn{
\Pr(\mbox{\predicate{eat}(\concept{mammal\_mole}, \concept{insect})}) =
} \\
& & \Pr(\mbox{\predicate{eat/2}} \mid \bot) \times \\
& & \Pr(\mbox{\predicate{eat}(\concept{mammal\_mole}, $\ldots$)}
\mid \mbox{\predicate{eat/2}}) \times \\
& & \Pr(\mbox{\predicate{eat}($\ldots$, \concept{insect})}
\mid \mbox{\predicate{eat/2}})
\end{eqnarray*}
Intuitively, we would expect all three probabilities on the right-hand side
to be relatively high. Moles are likely consumers, insects are good
eating for many animals and, in the context of encyclopedia articles
(from which this example is drawn), we expect propositions describing
diets. Therefore, the model correctly assigns a high probability to this
meaning.
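For concreteness, the following is a minimal procedural sketch of how
equation~\ref{eq:pred_calc_model} might be computed. It is mine, not
part of any implemented system described in this thesis: the expression
encoding and all parameter values are hypothetical, and real parameters
would need to be estimated as discussed in section~\ref{sec:md_linking}.
\begin{verbatim}
# A minimal sketch of the probabilistic conceptual model above.
# An atom is a string; a predication is a pair (predicate, argument
# list), where arguments may themselves be predications.  All the
# parameter values below are invented for illustration.
prior = {"eat/2": 0.001, "mammal_mole": 0.0004, "insect": 0.002}
arg   = {("eat/2", 1, "mammal_mole"): 0.01,   # Pr(slot 1 filler | eat/2)
         ("eat/2", 2, "insect"):      0.05}   # Pr(slot 2 filler | eat/2)

def head(e):
    """The symbol looked up for an argument filler."""
    return e[0] if isinstance(e, tuple) else e

def slot_factors(e):
    """Product of Pr(p(..., X_i = x, ...) | p) over all slots in e."""
    pred, args = e
    f = 1.0
    for i, a in enumerate(args, 1):
        f *= arg[(pred, i, head(a))]
        if isinstance(a, tuple):        # a nested predication: recurse
            f *= slot_factors(a)
    return f

def prob(e):
    """Pr(e): the prior of the top symbol times its slot factors."""
    if not isinstance(e, tuple):
        return prior[e]                 # an atom in isolation
    return prior[e[0]] * slot_factors(e)

print(prob(("eat/2", ["mammal_mole", "insect"])))  # 0.001*0.01*0.05
\end{verbatim}
Under these invented parameters, the meaning involving
\concept{chemical\_mole} would simply receive a much smaller slot
parameter for the first argument of \predicate{eat/2}, and hence a much
smaller probability overall.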
This example was intended to show that in principle it is possible to
construct probabilistic versions of any knowledge representation scheme
which has the properties listed above. In fact, some work already exists
on building probabilistic conceptual models for story understanding.
For example, Charniak and Goldman~(1989)
describe a probabilistic quantifier-free
first order semantic representation that places logical formulae at the
nodes of a Bayesian network.\footnote{Charniak and Goldman's model
is not a pure conceptual model in the sense required by the meaning
distributions theory since it also includes lexical and syntactic nodes. It
therefore merges linguistic and world knowledge into one representation,
a move explicitly rejected by the meaning distributions theory. Also,
while they are careful to point out the distinction between senses and
referents, their probabilistic conceptual model makes the simplifying
assumption that these two distributions are the same. The most important
differences between Charniak and Goldman's work and this thesis are that
probabilities in this work are estimated from large corpora (rather than being
chosen by hand) and the resulting performance is evaluated on a statistically
significant test set.} Algorithms
operating over these networks furnish the probabilities assigned to each
formula. Goldman and Charniak~(1990) describe a method to
dynamically construct such a probabilistic conceptual model so as to
avoid the impracticality of constructing the entire meaning distribution.
So not only is the notion of a probabilistic conceptual model possible in
principle, but it has already appeared in the context of at least one natural
language task and some work has been done toward making such models
efficiently computable.
\subsection*{Theoretical advantages}
In the remainder of this section I would like to consider what makes the use of
meaning distributions and conceptual models different from similar work
in statistical language learning and why we might expect the meaning
distributions approach to be better. Though I take the position that
different approaches to statistical language learning must be evaluated on
experimental evidence, it is worthwhile justifying the theory in
theoretical terms because of the insights this yields into the experimental
data.
Statistical language learning has its roots in speech recognition where it
turns out that dependencies between elements of the speech stream are
virtually all local. Under these conditions, simple Markov models work
well. There is no need to construct more complicated representations
than sequences. Similar conditions held in the earliest successes of
statistical language learning, part of speech tagging (De~Rose,~1988),
where sequences of tags formed the primary representation. Most later
advancements of this work retained the sequence representation
(Weischedel~\etal,~1993; Merialdo,~1994) and sequences of words were
the foundation of many other systems, including the distituents of
Marcus~(1991) and the collocations of Smadja~(1993).
Despite successes with sequence-based models, the use of more
structured representations is
inevitably necessary for language understanding, a fact that is reflected
in a wide range of research. Many probabilistic models based on
structured representations have been constructed and used as the basis of
experimental work. Probabilistic context free grammars stand as the
obvious example (Jelinek~\etal,~1992). Others include the
subject-verb-object statistics collected by
Church and Hanks~(1989), the sense
disambiguation method of Hearst~(1991), the prepositional phrase
attachment information gathered by Hindle and Rooth~(1993), and
Grefenstette's~(1994) thesaurus construction statistics. However, all of
these use only grammatical relationships to structure their statistical
models.\footnote{In information retrieval there is also work on
representing grammatical relations using Bayesian networks (Bruza and
Ijdens,~1994).}
The linguistic events captured by all these models are relatively
superficial ones in the language process as a whole: syntactic
configurations and word and tag sequences. This may be likened to the
earliest machine translation efforts of the~1950s. Translations
performed word-by-word with surface syntactic rewriting were quickly
found to be too primitive to model the complex structures
of language because they were driven only by superficial elements.
In the same way, the use of superficial elements as the basis of statistical
language learning is a restriction that deserves some consideration.
It has been easier to use corpora to compute statistics about word
sequences and syntactic configurations than about anything deeper, but
if statistical \nlp\ is to go beyond its beginnings, the models must be
extended to incorporate deeper notions.
Importantly, the inaccessibility of deeper linguistic elements does not
preclude them from being included in statistical models. The states in a
hidden Markov model, for instance, need not be observable in the corpus.
In the same way, semantic elements that are not directly available via
annotations of the corpus can be included in more sophisticated
statistical models of language interpretation.
So I have argued that, in principle, since language has deeper structural
elements than those that current statistical models can capture, current
models should be extended. But to give this argument any weight I also
need to show that these deeper structural elements play a significant role
in language understanding. Let us therefore consider two such elements:
sense distinctions and predicate-argument relations.
Polysemy is probably the most obvious challenge posed by natural
language. It emerges in some form in all natural language applications,
even simple spelling checkers. Individual words express multiple senses,
each differing in their linguistic behaviour. This variation in behaviour
is not limited only to the meaning of the word. Syntactic, morphological
and phonological properties of a word can vary with its sense. In
practice, no matter what \nlp\ task is attempted, word senses are going to
make a difference.
Statistical models that do not individuate word senses cannot
(directly) represent these differences.
In such models, information about a word is a
heterogeneous conglomerate of information about each of its senses,
typically combined by arithmetic addition.
The resulting average gives only an
approximate picture of the behaviour of any one sense of the word.
Since word senses usually vary in their frequency, this picture will
typically be dominated by the behaviour of the most frequent sense.
No matter how
much data is gathered and analysed, the model will never have accurate
information about any of the less frequent senses. This is rather like
trying to build an \nlp\ system without ever putting more than one sense
of a word into the lexicon. One would imagine that such a system would
reach a certain limit and then become unworkable. A similar fate is in
store for statistical language models which do not incorporate a notion of
word senses.
Similar arguments hold for predicate-argument structure. Selectional
restrictions (placed by a predicate on its arguments) have long been
recognised as crucial to many \nlp\ tasks (see for example, Wilks,~1975,
Hirst,~1987, and Fass,~1991). These relationships will be reflected in
even simple statistical data concerning word
associations.\footnote{Although note that probabilistic context free
grammars specifically eliminate this information from the model and thus
cannot capture selectional associations.} But if the model does not
explicitly include predicate-argument relations, this information must be
distributed. A given underlying predicate-argument
structure can be expressed on the surface in several ways. If the model is
based only on superficial textual elements, each of these surface
expressions will be treated distinctly. The same selectional restriction
will be represented by several different pieces of information.
This multiplicity is detrimental to the success of
statistical language learners since it requires the system to learn
more than is necessary.
So, on theoretical grounds, the step to explicitly semantic statistical
models is a very promising one (the view adopted in Lauer,~1993).
Very recently, a few others have expressed the same view.
Miller~\etal~(1994) describe a conceptual model based on frames for
the limited domain of air travel queries. Unfortunately, their system
requires training data to be annotated with semantic representations
which limits the applicability of the technique outside narrow domains.
Alshawi and Carter~(1994) use the same air travel corpus to experiment
with statistical metrics over a shallow semantic representation called
quasi logical form (\acronym{qlf}), but again training is supervised.
Alshawi~(1994) gives a mathematical treatment of the incorporation of
semantics into speech translation models, although the conceptual model
is based only on lexical relations and does not explicitly represent word
senses. Eizirik~\etal~(1993) propose a Bayesian network representation
which incorporates both syntactic and semantic representations into one
network. However, parameters of the model arise from the {\em
structure\/} of the network; there is no parameter estimation from training
data, and furthermore, detailed hand-coding is required to build the
semantic portion of the network. Finally, a proposal for incorporating
semantics into Data-Oriented Parsing (see section~\ref{sec:sn_grammars}) is
also made by van~der~Berg~\etal~(1994), but it too assumes supervised
learning.
Apart from the discussion of context that I promised earlier, the
remainder of
this chapter is devoted to fleshing
out the details of the meaning distributions approach.
\section{A View of Context Dependence}
\label{sec:md_context}
The meaning distributions approach places strong emphasis on
expectations, that is, on knowledge about possible meanings of a
sentence that is available before considering the sentence itself. I have
avoided using the term {\em context} until now because
I intend to interpret the notion of context very broadly.
I define context to be the complete set of {\em non-linguistic}
facts that might be relevant to interpreting an arbitrary
sentence in a given circumstance.\footnote{There is some looseness
in this definition derived from what is taken to be linguistic.
For example, in systemic functional grammar, much
of what I call context is taken to be linguistic (Halliday,~1985:xiii-xiv).}
This includes information conveyed by the cotext
(the text or dialogue accompanying the sentence),
information about the goals and intentions of the writer or
speaker, encyclopedic world knowledge and more besides.
The meaning distributions theory holds that context, in the
broad sense, deserves a more central place in the language understanding
architecture.
The importance of context to language interpretation raises the
question of how the majority of \nlp\ systems thus far built have escaped
the need to represent context explicitly. Many treat sentences in
isolation, ignoring any cotext that might have surrounded the sentence
and therefore discarding discourse and topic cues. It is therefore
unsurprising that ambiguities abound.\footnote{This is not to say that a
practicable theory of the effects of discourse and topic is available,
merely that, lacking one, ambiguities are to be expected. There is a large
research effort toward such a theory, including some statistical work
(Reithinger and Maier,~1995).} Other elements of context are usually
treated as fixed by limiting the kinds of communicative goals or by
operating in a limited domain. Both of these tactics sidestep the need to
capture the effects of context, but result in limitations on the usefulness
of the resulting systems. But if \nlp\ technology is to become more
flexible and scale up, models of language
must pay more attention to context of various kinds.
While methods for deriving expectations from context lie beyond the
scope of the meaning distributions theory, it is instructive to look at the
kinds of context that might be used for this purpose in order to illustrate
how the theory might be developed in this area.
\subsection*{Sources of context}
I will briefly consider five areas of context, these being:
\begin{enumerate}
\item topic,
\item register,
\item author models,
\item discourse structure, and
\item world knowledge.
\end{enumerate}
Topic is an
important contributor to understanding and a powerful source of
expectations. A text will generally focus on a specific topic and even when
topic shifts occur the new topic can be expected to be related to the
previous one. The information retrieval community has devoted much of
its effort to identifying topic, although here again the models are
couched in terms of surface linguistic elements, typically stemmed
content words (though see Mauldin,~1991, who advocates the use
of a conceptual dependency representation for information retrieval).
But while merely identifying topic is useful for information retrieval,
little work has been done on deriving semantic expectations from
knowledge of the topic, a task which the meaning distributions theory
regards as necessary for language understanding.
To illustrate how a representation of topic might be used to guide
language understanding, consider Yarowsky's~(1992) sense disambiguation
system. By collecting all the words in a~101-word window around
the target word, he has built a representation of topic. The co-occurrence
statistics relate the topic to expectations about the senses of words.
His system then chooses the most expected
sense based on this representation of topic, exactly in the way that
the meaning distributions theory suggests.
A more general implementation of the meaning distributions theory
could incorporate a system like this as one
component of the conceptual model.
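Schematically, and with the caveat that this is my own simplification
rather than Yarowsky's implementation, the selection can be pictured as
follows; the co-occurrence weights relating topic words to senses are
invented.
\begin{verbatim}
# A schematic simplification of window-based sense selection.
# The weights relating topic words to senses are hypothetical.
weights = {"animal": {"mammal_mole": 2.1, "chemical_mole": 0.1},
           "burrow": {"mammal_mole": 1.8, "chemical_mole": 0.0},
           "gram":   {"mammal_mole": 0.0, "chemical_mole": 2.4}}

def choose_sense(window, senses):
    score = {s: 0.0 for s in senses}
    for word in window:                  # words around the target
        for s, w in weights.get(word, {}).items():
            score[s] += w
    return max(score, key=score.get)     # the most expected sense

print(choose_sense(["animal", "burrow"],
                   ["mammal_mole", "chemical_mole"]))  # mammal_mole
\end{verbatim}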
The second area of context is register. Biber~(1993a) explores many
linguistic variations across registers, demonstrating enormous differences
amongst them. In terms of semantic expectations, register is clearly
important. In newspaper articles, descriptions of the details of recent
events are to be expected; in advertisements on the same page, emotive
suggestions of the usefulness of products will most likely appear.
Biber's work goes some of the way towards identifying register, but once
again we lack methods for deriving semantic expectations from this
knowledge. Ideally, statistics in the conceptual model should be
conditioned on register, though this is impractical at this stage. In
section~\ref{sec:md_register}, I will outline a more practical position on
the use of register in statistical language learning.
Another contextual factor is knowledge of the writer or speaker. This is
less important in analysing written language than in recognising
speech, so while speaker dependence is of fundamental importance
in the speech recognition community, it has not been seen as a problem in \nlp.
However, consider an interactive technical support dialogue system: the
queries entered by novice users will be significantly different from those
of advanced users. Therefore our semantic expectations should differ in
each case. In order to make \nlp\ systems sensitive to these differences,
techniques from the user modelling community will be needed (for an
overview see Kass and Finin,~1988). It is worth noting that author
modelling shares many elements with register dependence and the
position I take below on register could also be applied to author
modelling.
Discourse structure is an aspect of context that has received quite a bit of
{\em attention\/} (theoretical treatments include Grosz and Sidner,~1986,
and Mann and Thompson,~1987),
although in
practice corpus-based work mainly only addresses the task of identifying
discourse segments (Hearst,~1994; Reynar,~1994; Litman and
Passonneau,~1995). One work where discourse information is applied in
a statistical model is Hirschberg and Litman~(1993), although this deals
only with prosodic predictions.
Interestingly, Charniak and Goldman~(1989) discovered that their
probabilistic conceptual model for story understanding needs to be
augmented to allow for inter-sentential coherence. The augmentation
takes the form of special conditioning variables representing
spatio-temporal locality.
The effect is that the model assigns higher probability
to logical formulae that share terms with other formulae from the same
story. This is a promising first step toward modelling text cohesion
probabilistically, although as they point out, it is insensitive to discourse
state.
Semantic expectations do vary with discourse state, chiefly in connection
with topic. This fact is exploited by Morris and Hirst~(1991). In their
system, thesaurus categories provide a conceptual representation which
is used to identify coherence relations in a text and thus extract the
discourse structure. The fact that topic varies with discourse structure
and the fact that semantic expectations depend on topic, together, allow
this scheme to work. The meaning distributions theory suggests that this
dependence be utilised in the other direction: discourse structure is used
to derive semantic expectations.
A relatively crude way to do this would involve building a statistical
model to identify discourse structures from cues and then using this
information to mediate the statistical model of topic outlined above.
Whenever a discourse boundary was encountered, the contributions
made by topic to the expected meaning would be attenuated. A more
refined system could use a statistical model of rhetorical structure to
make predictions about the meaning of a sentence. For instance, in a
circumstance where a justification link (from the preceding rhetorical
segment) was likely, a meaning conveying a causal relationship should
be assigned high probability. In any event, the use of discourse structure
in assisting natural language interpretation is a very promising line of
research and fits directly into the meaning distributions theory.
Finally, under the definition of context that I have adopted, there is world
knowledge, both of generic and domain specific kinds. This knowledge
is the primary concern of the semantic components of all \nlp\ systems
and its necessity has spurred many of the efforts in knowledge acquisition
described in section~\ref{sec:sn_motivations}. I count it as a form of
context because it is clearly dynamic. That is, within the course of
interpreting a text, world knowledge can be augmented, revised or even
temporarily suspended, as in the case of counterfactuals (or science
fiction stories for that matter). I see no motivation for distinguishing
between world knowledge that is fixed and that which is not, since one
can always construct a text in which an arbitrary item of world
knowledge is revoked.
Of course, in the practical construction of a probabilistic conceptual model,
it is likely to be necessary to assume that a great deal of world knowledge is
fixed. For example, ontological assumptions of the meaning
representation scheme will fix some facts, and taking taxonomic
classifications to be unchangeable is likely to confer necessary
computational efficiencies. However, the meaning distributions theory
leaves this up to the conceptual model; it views world knowledge as just
another source of semantic expectations.
An example of a statistical model designed to capture domain knowledge
is that of Fisher and Riloff~(1992). Their system attempts to identify the
antecedents of relative pronouns using verb-complement selectional
restrictions that are specific to the domain. These selectional restrictions
are extracted statistically from a corpus of terrorist activity reports and
include such niceties as priests being one of the most likely things to be
killed.
Of all the sources of semantic expectations, world knowledge is the most
critical to natural language understanding for two reasons. First, it is by
far the most voluminous resource, thus representing a large portion of the
development effort if automatic acquisition is not undertaken. Second,
the expectations resulting from it are especially powerful instruments for
disambiguation. Therefore, a representation of world knowledge should
be a central part of any probabilistic conceptual model.
However I have endeavoured to show that at least four other kinds of
semantic expectations are also important and that these can also
be accommodated by the theory.
\subsection*{The ideal of context}
While there are many kinds of contextual information, the meaning
distributions theory treats them all homogeneously. They all contribute
to the collection of semantic expectations that are brought to bear by the
reader: the prior meaning distribution. One way to view this is to regard
context as a summary of the reader's experience. Therefore, not only
does the context include the local short-term history provided by the text
in which a sentence appears (the previous sentence, paragraph or even
chapter), but it extends beyond this to events experienced by the reader
long before: books read at an earlier time, statements made by others and
inferences made from them, and observations of real world events.
Potentially, all the information that the reader has ever been exposed to
is relevant to the interpretation of an arbitrary sentence.
From this perspective it follows that context, in the broad sense used
here, varies from person to person. We all bring different experiences to
the task of interpreting a sentence, and in principle these differences can
affect our interpretation. Nonetheless, the intuition that most people will
share the same interpretation of a given sentence
is still justified by two other
intuitions. First, within a single culture most people share a great deal
of experience: the world presents itself to everyone in much the same
way. Second, language is designed to rely heavily on just those
experiences that are most likely to be shared: speakers and writers avoid
phrasings that require specialised knowledge unless they know that their
intended audience shares that knowledge.
It also follows from this perspective that context is dynamic. The
information conveyed by each sentence can potentially contribute to the
semantic expectations for the succeeding one. In fact, amongst prior
experiences of the reader, the preceding sentence is one of the most
likely sources of useful semantic expectations.\footnote{Note, however,
that there is only one preceding sentence and a vast array of earlier
experiences, so that the expected utility of the preceding sentence is
somewhat less than that of all earlier experiences.} Therefore, in an
ideal system, probabilities assigned by the conceptual model would be
constantly updated to reflect the text being read. This would require a
detailed theory of the interactions of text and cotext meaning, including
cohesion, rhetorical structure, attitude and many other supra-sentential
linguistic processes, something that is almost certainly a long way off.
Still, in the meantime, we can incorporate those elements for which we
do have adequate theories into the probabilistic conceptual model. In the
experimental work of chapter~\ref{ch:experimental} the conceptual
models developed ignore almost all elements of context, capturing
primarily world knowledge. There remain many further areas
for exploration within the framework provided by the meaning
distributions theory.
\section{Linking Syntax and Semantics}
\label{sec:md_linking}
I have argued for the possibility and the necessity of
using probability distributions to represent semantic expectations; let's
now consider how the meaning distributions theory proposes that such
expectations should be utilised. It starts by assuming that we have at our
disposal a conceptual model that assigns probabilities to all the meanings
in $\cal M$, and aims to furnish a method for interpreting natural
language. Between these two endpoints there will need to be machinery
for dealing with language, including substantial lexical and grammatical
information.
The crux of the meaning distributions theory is a mapping between
lexico-syntax and semantics. Intuitively, there are things to say and ways
to say them, and interpretation requires at least some information about
the correspondence between the two. The theory represents this
correspondence as a mapping between, on the one hand,
superficial forms incorporating word sequences and, on the other,
terms of the meaning representation language.
For example, common nouns are typically used to
indicate concepts, a fact that would be reflected by mapping these nouns
onto generic symbols. Verb complementation is typically used to
indicate the arguments of a predicate and, similarly, the mapping will
reflect this too.
This is nothing new. It has always been assumed that semantic
interpretation rules could operate on the output of parsers to produce
meanings; otherwise, why use parsers at all? What is different here is
that the meaning distributions theory treats all stages of linguistic
analysis uniformly. Semantic interpretation rules, grammatical rules and
even morphology are simply considered to define a correspondence
(mathematically speaking, a relation) between word strings and
meanings.
Just as the theory does not rely on one particular meaning representation
(leaving this up to the conceptual model), it also avoids commitment to
the means by which the mapping is specified. Traditionally, this
specification is provided by the combination of a lexicon, a grammar and
a collection of semantic interpretation rules. Given the enormous
research efforts which have been devoted to these components, it would
seem sensible in practice to use one of the multitude of existing grammar
formalisms and corresponding parsers. But the theory does not require
this. The only requirement is that computation of the correspondence
between word strings and meanings be relatively simple. I will return to
the reasons for this at the end of this section.
What is important here is that the theory does not attempt to provide this
correspondence, nor even a formalism for specifying it. Instead, it
merely assumes that a mapping exists, relying on existing \nlp\
techniques to provide it. The theory certainly does not reject the
necessity of grammar in natural language interpretation. However, it
does assert that the grammar can afford to be far simpler if automatically
acquired probabilistic representations of semantic expectations are
utilised along with it.
\subsection*{Syntax as constraint}
Let the set of word strings (utterances) be denoted by $\cal W$ and
the set of syntax trees over these strings be denoted by $\cal S$.
The theory regards the mapping between syntactic forms and meanings
as a source of constraints {\em in both directions}.
On one hand, meanings are expressed by syntax trees, which
contain word strings. Each meaning can potentially be
expressed by many word strings,
as shown in figure~\ref{fig:md_linking_m2w} (synonymy in the broader
sense of the term).
\begin{figure}
\centering
\input{figures/md_m2w.tex}
\caption{Meanings generate many word strings}
\label{fig:md_linking_m2w}
\end{figure}
On the other hand, word strings admit parses which can be interpreted to
yield meanings. Each word string can potentially express many
meanings, as shown in figure~\ref{fig:md_linking_w2m} (polysemy, in
the corresponding sense of the term). It is this latter ambiguity that poses
such great difficulties for natural language understanding.\footnote{I
should emphasise here that the concept of mapping word strings onto
parse trees and thence onto meaning representations is not intended to be
a contribution of the theory, being hardly novel.}
\begin{figure}
\centering
\input{figures/md_w2m.tex}
\caption{A word string can express many meanings}
\label{fig:md_linking_w2m}
\end{figure}
So lexico-syntax (henceforth I shall abuse terminology and say simply
syntax) can be seen as a bidirectional many-to-many mapping between
the space of meanings expressible by the meaning representation scheme
and the set of word strings. The theory takes the position that this
mapping is a hard constraint. There are no soft edges in syntax: either a
word string can express a meaning (in at least one context), or it never
can.
Thus, syntax is viewed as delineating what is possible in any context. It
follows that syntax is context-independent and that therefore all
variations in language use across different contexts must arise from
variations in meaning distributions. Like many other assumptions
commonly made in statistical models, it is easy to find convincing
counter-examples to this implication. However, the ultimate usefulness
of the theory only depends on the performance of implementations of it.
The approximation we make by accepting a false assumption may be
accurate enough to justify the gains it provides.
I have emphasised the bidirectional nature of the mapping because it is
necessary for the statistical model. To
apply the theory to the process of interpretation, it is not only necessary
to be able to compute the set of possible meanings allowed by a word
string, but it is also necessary to compute the set of word strings that
can express a given meaning. This is a necessary (if unusual) consequence of
the mathematical formulation; however, assuming that the mapping is
computable in both directions has the advantage of potentially making
unsupervised training possible, as we shall soon see.
\subsection*{Arriving at an interpretation}
Up until now I have been avoiding introducing formal mathematical
notation, but to make the theory precise we will need a few definitions.
First, the goal is to derive intended meanings from the word strings
used to express them. Formally, we wish to define a function $\tau_C:
{\cal W} \rightarrow {\cal M}$ denoting the best interpretation for each
word string in the context, $C$.
Given a meaning (an element of $\cal M$),
syntax provides a set of word strings, each of which can
express that meaning in some context.
Let $\theta: {\cal M} \rightarrow~2^{\cal W}$ denote the map from
meanings to sets of word strings representing this.
Figure~\ref{fig:md_linking_image} depicts the following definition.
\begin{figure}
\centering
\input{figures/md_image.tex}
\caption{The image of a meaning is a set of word strings}
\label{fig:md_linking_image}
\end{figure}
\begin{definition}[Image]
The {\em image\/} of a meaning, $m \in {\cal M}$, is the set of word
strings that can express $m$, that is, $\theta(m)$.
\end{definition}
Each $m \in {\cal M}$ has, in a given context, a probability associated
with it by the probabilistic conceptual model. Let $\Pr_C(m)$ denote this.
In general, $m$ can be expressed in more than one way (that is,
$|\theta(m)|>1$).
We are interested in the probability of $m$ being expressed in
these different ways. The theory takes the stance that
syntax is purely a constraint.
Therefore the syntactic mapping does not distinguish
between the different elements of the image of $m$. Either a word string
is able to express $m$ (that is, it is in the image of $m$) or it is
unable to do so (it is not in the image).
Each element of the image is equally likely to be
used to express $m$.
Let $\Pr_C(m,w)$ be the probability of a sentence in
context $C$ expressing the meaning $m$ using the word string $w \in {\cal W}$.
According to the theory, this probability is the same for all word
strings in the image of a given $m$. This assumption is captured
by equation~\ref{eq:md_linking_contrib}.
\begin{equation}
\Pr_C(m,w) = \left\{
\begin{array}{ll}
\frac{\Pr_C(m)}{\mid \theta(m) \mid} & \mbox{if $w \in \theta(m)$} \\
0 & \mbox{otherwise}
\end{array}
\right.
\label{eq:md_linking_contrib}
\end{equation}
We also need corresponding definitions in the other direction so that we
can do interpretation from word strings. Let $\phi: {\cal W}
\rightarrow~2^{\cal M}$ denote the map from word strings to meanings
defined by syntax. Given a word string, it provides a set of meanings,
each of which could be expressed by that word string in some context.
Figure~\ref{fig:md_linking_source} depicts the following definition.
\begin{figure}
\centering
\input{figures/md_sourc.tex}
\caption{The source of a word string is a set of meanings}
\label{fig:md_linking_source}
\end{figure}
\begin{definition}[Source]
The {\em source\/} of a word string, $w \in {\cal W}$, is the set of
meanings that could be expressed by $w$, that is, $\phi(w)$.
\end{definition}
To translate probabilities in the meaning space into probabilities of word
strings, the probability of each meaning must be divided up amongst
each of the word strings lying in its image. Since, in general, word
strings lie in the image of multiple meanings, each word string will
receive multiple contributions to its probability. The size of each
contribution will depend not only on the probability of the corresponding
meaning, but also on the number of other word strings lying in the image
of that meaning. Equation~\ref{eq:md_linking_total} gives the
resulting probability distribution over word strings.
\begin{equation}
\Pr_C(w) = \sum_{m \in \phi(w)} \Pr_C(m,w) \label{eq:md_linking_total}
\end{equation}
We can now precisely state the central tenet of the meaning distributions
theory.
\begin{hypothesis}[Interpretation theory]
The best interpretation for a given sentence is that meaning which makes the
greatest contribution to the probability of the sentence in the context.
That is, $\tau_C(w) = \mbox{argmax}_{m \in \phi(w)} \Pr_C(m,w)$.
\end{hypothesis}
This is shown diagrammatically in figure~\ref{fig:md_linking_best}.
Syntax constrains the set of possible meanings of $w$
($\phi(w) = \{C, D, E\}$), as shown at the right hand side.
Once the source of a sentence has been identified (the filled columns of
the figure), the highest probability meaning within the source is the best
interpretation. This is indicated by an arrow.
\begin{figure}
\centering
\input{figures/md_best.tex}
\caption{The interpretation theory selects the most probable allowed
meaning}
\label{fig:md_linking_best}
\end{figure}
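To make the selection procedure concrete, here is a minimal
computational sketch of equations~\ref{eq:md_linking_contrib}
and~\ref{eq:md_linking_total} together with the interpretation
hypothesis. It is written in Python purely for illustration: the
dictionaries stand in for $\theta$, $\phi$ and $\Pr_C$, and all names
and toy data are invented rather than being part of the theory.
\begin{verbatim}
# Illustrative sketch only: theta maps meanings to their images,
# phi maps word strings to their sources, pr_c is Pr_C over meanings.

def contribution(m, w, theta, pr_c):
    """Pr_C(m, w): probability of meaning m being expressed as w."""
    image = theta[m]
    return pr_c[m] / len(image) if w in image else 0.0

def word_string_prob(w, phi, theta, pr_c):
    """Pr_C(w): total probability of the word string w."""
    return sum(contribution(m, w, theta, pr_c) for m in phi[w])

def interpret(w, phi, theta, pr_c):
    """tau_C(w): the allowed meaning with the largest contribution."""
    return max(phi[w], key=lambda m: contribution(m, w, theta, pr_c))

# Toy data: two meanings can both be expressed by the string "bank".
theta = {"river-edge": {"bank", "riverbank"}, "money-house": {"bank"}}
phi = {"bank": {"river-edge", "money-house"},
       "riverbank": {"river-edge"}}
pr_c = {"river-edge": 0.3, "money-house": 0.6}

print(interpret("bank", phi, theta, pr_c))         # money-house
print(word_string_prob("bank", phi, theta, pr_c))  # 0.15 + 0.6 = 0.75
\end{verbatim}
Note how the contribution of \verb|river-edge| to \verb|"bank"| is
diluted by the size of its image, exactly as
equation~\ref{eq:md_linking_contrib} dictates.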
Apart from constraining the possible interpretations, syntax
does not contribute to the selection. This is done by the
probabilistic conceptual model.
Syntactic knowledge cannot, according to the theory, specify that one
syntactic structure is more likely than another; lexical knowledge cannot
specify that one word is twice as likely as another. The probability of a
parse tree comes directly from the probability of its possible meanings,
unmodulated by syntactic knowledge in any way other than as constraint.
Another way to say this is that there are no independently represented
probability distributions over $\cal S$ and $\cal W$. The distributions
of syntactic structures and word strings are derived solely from those of
meanings using only the structural information from the
context-independent syntactic constraints. Much work has focussed on
representing the distributions on $\cal W$ (for example, Marcus,~1991)
and on $\cal S$ (for example, Jelinek~\etal,~1992), but assumed that the
distributions on $\cal M$ weren't necessary. The meaning distributions
theory takes the opposite tack: model the distribution on $\cal M$ and
assume that independent modelling of distributions on $\cal S$ and $\cal
W$ is unnecessary.
Naturally, if it is discovered that some specific element of lexico-syntax
violates the assumptions of the theory in such a way as to significantly
degrade performance, then it would be reasonable to extend the theory to
incorporate the relevant distributions of that element. However, until
such elements are shown to be problematic, it is reasonable to proceed
assuming that they are not.
Before I turn to some more practical aspects, let's look at one more
formulation of the theory. We can use the notion of conditional
probability to represent the relationship between meanings and word
strings. Thus, $\Pr_C(m \mid w)$ represents the
probability that $m$ is the intended meaning conveyed by the word
string $w$ in a context $C$. To pick the most likely
allowed interpretation of the string, it is necessary to find whichever $m$
maximises this probability. Using Bayes' theorem, this can be
reformulated as in equation~\ref{eq:md_linking_bayes}.
\begin{equation}
\Pr_C(m \mid w) =
\frac{\Pr(w \mid m) \Pr_C(m)}{\Pr_C(w)}
\label{eq:md_linking_bayes}
\end{equation}
Notice that the first factor of the numerator is independent of the context,
$C$, by virtue of the assumption that syntax is context-independent.
Also, the denominator is constant for a given word string, so to maximise
the left-hand side, it is only necessary to maximise the numerator of the
right-hand side.
Comparing equation~\ref{eq:md_linking_bayes} with
equation~\ref{eq:md_linking_contrib} above we see that the
interpretation theory is equivalent to the following assumption.
\begin{equation}
\Pr(w \mid m) = \left\{
\begin{array}{ll}
\frac{1}{\mid \theta(m) \mid} & \mbox{if $w \in \theta(m)$} \\
0 & \mbox{otherwise}
\end{array}
\right.
\end{equation}
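Spelling the comparison out: for $w \in \theta(m)$, substituting this
assumption into the numerator of equation~\ref{eq:md_linking_bayes}
gives
\begin{displaymath}
\Pr(w \mid m) \Pr_C(m) = \frac{\Pr_C(m)}{\mid \theta(m) \mid}
= \Pr_C(m,w),
\end{displaymath}
while both sides are zero for $w \notin \theta(m)$. Maximising
$\Pr_C(m \mid w)$ over $m \in \phi(w)$ is therefore exactly maximising
$\Pr_C(m,w)$, which is what the interpretation theory prescribes.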
Once again, it is easy to find counter-examples to this assumption.
However, it is just as easy to find counter-examples to the Markov
assumption in tag sequences, and yet hidden Markov models perform
tagging very accurately. Only empirical investigation can determine
whether this assumption justifies the time and space savings it yields.
\subsection*{Some practical implications}
A key premise of the theory is the claim that, in constraining the
interpretation, the burden placed on lexico-grammar may be made
significantly lighter by the use of semantic expectations. The grammar
can widely underconstrain the set of possible interpretations, and
semantic expectations will still recover the intended meaning. This
accords with the intuition that semantics requires distinctions that syntax
does not make. The practical consequence of this for \nlp\ systems is
that less effort need be expended on grammar development. A relatively
simple over-generating grammar will suffice.
This can be compared to the position taken on generation in Knight and
Hatzivassiloglou~(1995). Their system is provided with a set of
generation rules mapping semantic elements into word sequences. These
rules generate an enormous number of sentences for any given content,
many of them ungrammatical and most undesirable. The resulting
sentences are evaluated by a word bigram model and the most probable
one is chosen to be generated. The development effort required to
construct the generation rules was greatly reduced because grammatical
agreement and other syntactic complexities were unnecessary. These
phenomena were recovered by the statistical model. The presence of a
statistical filter at the {\em output\/} side of the system resulted in
relatively weak lexico-grammatical knowledge being sufficient.
The meaning distributions theory can be seen as the mirror image of
Knight and Hatzivassiloglou's system. Instead of over-generating word
strings and then filtering these statistically in order to perform
generation, it over-generates meanings and filters these statistically in
order to perform interpretation. These two proposals share further
properties, one of which I will mention in a moment.
I have claimed earlier that the meaning distributions theory not only
allows the syntactic component to be simpler, but that it {\em requires\/}
the syntactic mapping to be relatively simple in a certain mathematical
sense. While in principle, the only restriction on the mapping is that it be
computable, two practical considerations limit its complexity.
\begin{enumerate}
\item In order to control the computational complexity of parsing, it is
necessary to translate probabilities on $\cal M$ into probabilities on
$\cal S$.
\item To allow unsupervised training of the conceptual model, elements
of the meaning representation must map to relatively simple word
patterns.
\end{enumerate}
The first of these is caused by the sheer impracticality of first
generating all possible parses of a sentence and then selecting the most
semantically acceptable one. Even using efficient parsing techniques and
packed parse forest representations, the processing times to compute the
syntactic structure alone are very large. In any case, the original
motivation for the meaning distributions theory was to give semantic
information greater control over the interpretation process.
Therefore, given a syntactic mapping, it is necessary to translate the
probabilities given by the meaning distribution into probabilities relevant
to computing the syntactic mapping. For example, if this mapping is
provided by a grammar and semantic interpretation rules (as in a
traditional \nlp\ system), a means of calculating the likelihood of
grammatical constituents from the meaning distribution is needed. These
probabilities can then be used to control parsing, just as the Inside
algorithm is used to guide a Viterbi parser for context free grammars
(see Jelinek~\etal,~1992).
An analogous procedure is utilised by Knight and Hatzivassiloglou
(1995). Bigram probabilities are propagated back to yield probabilities
for the generation rules, and these are compiled into the generation
process. Thus their system avoids the impossible task of generating the
millions of possible word strings for a given content. A further example
of such probability translation is provided by Lafferty~\etal~(1992) for
Link Grammar, a syntactic formalism that is somewhat closer to semantics
than probabilistic context free grammars.
Since the meaning distributions theory does not commit to a particular
syntactic formalism, optimisation procedures of this kind are beyond the
scope of the theory; their development must be left for future work.
However, the important point here is that the syntactic mapping must be
sufficiently direct to support a mathematical translation such as that
provided by the Inside algorithm. It is in this sense that the syntactic
mapping must be relatively simple.
The second consideration lies with the need to train the conceptual
model. If the meaning space is non-trivial, then a large amount of
training data will be required. Since it is impractical to annotate such
large amounts of text with meaning representations, the availability of
unsupervised training methods is crucial to the success of the theory. A
key difference between this work and that of Miller~\etal~(1994) is their
reliance on supervised training data.
The bidirectionality of the syntactic mapping is useful here. In order to
train the conceptual model it is necessary to estimate the probability of
the different units that make up the meaning representation. To
gather evidence about these units, we can use the syntactic component to
map them into simple word patterns and count occurrences of these
patterns in the corpus. These counts represent a sample of the space
defined by the conceptual model.
However, the observations made in this way are unreliable. Any simple
word pattern will be semantically ambiguous to some degree, so that
what we take to be evidence about one meaning element may actually be
evidence for another. Even if a boot-strapping scheme (such as the
cyclic refinement used by Hindle and Rooth,~1993) is pursued, the data
used for parameter estimation will always be noisy.
As Yarowsky's~(1992) training algorithm demonstrates, a sufficiently
simple, bidirectional, ambiguous mapping can be used to overcome noisy
data if it has appropriate properties. Evidence about an unseen event (in
his case the word sense) can be extracted from multiple, ambiguous sets
of corresponding seen events (in his case the co-occurrence data for each
word in a thesaurus category). Even though no single set of observations
is unambiguous, the combination of the sets gives a clear indication of
the properties of the unseen events.
For this to work in the case of meaning distributions, the syntactic
mapping must have certain properties. To start with, each meaning must
have a large {\em image}, so that evidence is combined from many
different observed word strings. Further, the {\em source\/} of each word
string in the image should differ, so that noise is scattered thinly over
different meanings. It is an empirical question as to whether this is the
case for suitable syntactic mappings. What is clear is that the syntactic
mapping must be relatively simple. The units which make up the
conceptual model must be easily mapped to corresponding word patterns
which can be counted.
So for two reasons the syntactic mapping must be relatively simple. An
important consequence of this is that the meaning representation scheme
used in the conceptual model must be relatively shallow. While this
might be viewed as a disadvantage, in the next section I will discuss the
recent increase in the use of shallow semantic representations, an
increase that suggests we might be better off with them anyway.
\subsection*{Shallow semantic representations}
Knowledge representation has been the subject of voluminous research
(see for example, Brachman and Levesque,~1985) and is seen as an end
in itself for much of this research, being applicable to many other
tasks than natural language processing.
Much of the emphasis in knowledge representation research is
on formal properties such as expressiveness, soundness and
completeness (see for example, Sowa,~1984). The computational
complexity of inferencing also receives attention and is somewhat closer
to the practical task of engineering \nlp\ systems, but generally the
problems being addressed are motivated by examples invented for the
purpose and are, therefore, a long way from the details of real text.
Still, in order to take advantage of the work in knowledge representation,
many \nlp\ researchers have opted for a knowledge representation
scheme so produced (for example, McDonald,~1982, uses
\progname{netl}; Hirst,~1987, uses \progname{frail};
Hobbs~\etal,~1993, use first-order predicate calculus). There is a cost,
however. Since these schemes were developed without particular regard
to natural language understanding, they place emphasis on deep
distinctions. That is, they postulate many meaning differences that are
not expressed on the surface --- differences which \nlp\ systems based on
these schemes will have to recover. Metaphorically speaking, we might
say that there is a long distance between the natural languages of texts
and knowledge representation languages. This increases the difficulty of
the interpretation task, perhaps making it more difficult than is necessary
for the task at hand.
The alternative is to use more superficial representations of meaning,
representations that are more text-like, but sufficiently less ambiguous
than text to support useful \nlp\ tasks. The idea is to sacrifice
fully general logical inference (and many other knowledge representation
niceties) with the goal of making interpretation easier.
After all, for many tasks,
computing all the logical consequences of a sentence will rarely (if ever)
be necessary. This idea has been around since at least
Alshawi~(1983:23), for whom it is a key step in building a practical
system.
According to this point of view, choosing a meaning representation
which is both simpler and shallower than the full-blown schemes
advocated by the knowledge representation community makes
developing \nlp\ systems easier. This argument is closely related to the
one I have made already regarding grammatical knowledge: by relying on
statistically acquired semantic expectations, the grammatical component
can be simplified, resulting in lesser development costs. Similarly,
aiming at a shallower meaning representation again reduces the
development cost. And this is the \foreign{raison d'\^{e}tre} of
statistical language learning: to bypass laborious human effort.
The cost of taking this position is the risk of failing to make a distinction
in the meaning representation that turns out to be necessary for the task
at hand. By reducing the development effort, we've also reduced the
discriminatory power of the system. Therefore, without evidence to
suggest that the reduction in power is insignificant in comparison to the
reduction in development effort, we can't conclude that shallower
representations are better, only that they aren't necessarily worse.
However, a number of recent corpus-based works provide suggestive
evidence in favour of the shallower path by utilising semantic
representations that are quite superficial, even in comparison to
Alshawi's~(1983) scheme. Apart from the conceptual association
proposed by Resnik and Hearst~(1993), which I have already cast
as employing a shallow semantic representation in
section~\ref{sec:sn_conceptual}, I will consider four of these.
Basili~\etal~(1993) propose the use of selectional constraints associated
with syntactic relationships. In particular, they focus on prepositional
modifiers, for which conceptual relationships like
[\concept{go}]~$\rightarrow
$~(\semrel{instrument})~$\rightarrow
$~[\concept{vehicle}] are to be used in disambiguating
the prepositional attachment in example~\ref{eg:md_linking_basili}.
\begin{examples}
\item John goes to Boston by bus \label{eg:md_linking_basili}
\end{examples}
This is a fairly shallow semantic representation, if shown only by the fact
that their mapping from syntactic relations to conceptual relationships is
a direct and simple one. But the representation they use in the
experimental work is even shallower. There, triples consisting of (word,
preposition, semantic~tag) are used to represent conceptual relationships
and the number of semantic tags for their domain is only twelve. The
performance results suggest that shallow semantic representations can be
very useful.
Resnik~(1993) is similar, considering the selectional restrictions placed
on the direct object of verbs. The semantic representation consists of
pairs of the form (predicate, concept), where the concept is taken to be
an argument of the predicate~(Resnik,~1993:54).\footnote{The specific
argument (subject, direct object, etc.) varies with the application.}
This representation is again rendered shallower in practice, with the set of
predicates being taken to be simply the transitive verbs. The concepts are
provided by the synonym sets in the WordNet thesaurus. The result is a
very shallow form of predicate calculus, where WordNet categories
provide the complete set of logical atoms and predicates are one-to-one
with English transitive verbs.
Justeson and Katz~(1995) consider the sense discrimination of
adjectives using the nouns that they modify.
Their representation consists of pairs, (adjective,
noun~feature), where the noun features are semantic markers like
\marker{+concrete}. This representation captures only a narrow
slice of the meaning of nouns, but is sufficient to select the
correct adjectival sense with an accuracy of~97\% (at~50\% coverage).
Once again, shallow semantic representations prove useful for
building a statistical model.
As already mentioned, Alshawi and Carter~(1994:637) use
\acronym{qlf}s, which \shortquote{express
semantic content, but are derived compositionally from complete
syntactic analyses of a sentence and therefore mirror much syntactic
structure as well}. While their main purpose is to compare statistical
metrics, they do sketch a representation that comes close to being a
probabilistic conceptual model in the sense described earlier in this
chapter, based on quasi logical forms. These shallow semantic forms
play the part of the simplified meaning representations advocated in
Alshawi~(1983) and support several disambiguation tasks within their
spoken language translation system.
All four of these studies use shallow semantic representations to
facilitate statistical language learning. Furthermore, while the
promise of automatic language acquisition is an attractive enough lure by
itself, even the Core Language Engine, \shortquote{a primarily
rule-based natural language-processing system} (Alshawi and
Carter,~1994:636), uses quasi logical forms. This suggests that
shallower semantic representations ought to have a place in natural
language understanding, even if only as an intermediate representation,
and that therefore, the requirements placed upon the semantic
representation by the meaning distributions theory are not as limiting as
they might first appear.
\section{A Note on Register}
\label{sec:md_register}
Finally, before turning from the topic of meaning distributions to more
general reflections on statistical \nlp, I want to consider the notion of
register. The register of a text is a categorisation according to the
medium and situation in which it appears, and the importance of register
to linguistic phenomena did not escape the earliest corpus builders.
When Francis and Ku\v{c}era~(1982) built the Brown corpus in the
early sixties, they were very careful to compose it from a wide range of
sources. Since then a very large range of corpora of different kinds has
become available, some \scare{balanced} (for example, Brown and
\acronym{lob}), others pure (including AP Newswire and Grolier's encyclopedia)
and still others collected without particular regard to balancing sources
(the Penn Treebank is an instance, see Marcus~\etal,~1993).
Biber~(1993a) has made extensive study of the cross-register variation in
language. For example, he has shown that within two subsets of the
\acronym{lob} corpus, one collected from fictional works and the other from
expositions, there are marked differences in the distribution of
grammatical categories of many words. The word \lingform{given} is
used passively in~71\% of its uses within exposition, but only in~19\% of
its uses within fiction.
As Biber argues, this has \shortquote{important implications for
probabilistic taggers and parsing techniques, which depend on accurate
estimates of the relative likelihood of grammatical categories in
particular contexts}~(Biber,~1993a:222). He also argues that
\shortquote{analyses must be based on a diversified corpus representing
a wide range of registers in order to be appropriately generalised to the
language as a whole, as in \dots a general purpose tagging
program}~(Biber,~1993a:220). For linguistic and lexicographic purposes
it seems clear that so-called balanced corpora are required for
completeness; however, for the purposes of statistical language learning,
the moral is not so clear.
It certainly appears that cross-register variation is significant enough to
influence the performance of probabilistic taggers, so it is crucial that we
pay attention to it. However, it is not sufficient merely to use a balanced
corpus. Consider, for example, training a unigram tagger on a corpus
consisting of two parts, one fictional and the other from expositions. The
probability estimate for the passive tagging of the word \lingform{given}
will be based on training examples in both registers. If the word is
equally frequent in each half, then the expected estimate of this
probability is~0.45 (using the percentages above). This estimate is
wildly incorrect for both halves of the training corpus, and will likely
result in poor performance on this word. While an accuracy rate
of up to~76\% (on tagging the word \lingform{given}) is possible
if the unigram tagger is trained and then applied to each half
of the corpus separately, the best possible performance when
trained on the whole corpus is only~55\%.\footnote{These figures
are only upper bounds because they (falsely) assume that \lingform{given}
has only two possible tags.}
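To make the arithmetic explicit (still under the footnote's binary
passive/non-passive simplification): the pooled training data yields
\begin{displaymath}
\hat{p}(\mbox{passive}) = \frac{0.71 + 0.19}{2} = 0.45.
\end{displaymath}
Trained on each half separately, the tagger chooses the majority tag
within each register, scoring $(0.71 + 0.81)/2 = 0.76$ on average.
Trained on the whole corpus, it has $\hat{p} = 0.45 < 0.5$ and so
chooses the non-passive tag everywhere, scoring only
$(0.29 + 0.81)/2 = 0.55$.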
This example demonstrates that the notion of an average English may
well be a chimera and those who attempt to acquire it should
beware.\footnote{Even linguists should take some care. Clear~(1992)
argues a similar point for corpus-based studies in a lucid paper on the
abstract notion of the composition of language.} An examination of various
research work supports this view, with most of the highly successful
statistical language learning results having been achieved
using register-pure corpora. We must recognise that the parameters
of probabilistic models are generally dependent on the type of text
being modelled. If statistical models are to yield an accurate
picture of language, then separate distributions must be maintained
for different registers. In practice, however, this is going to
be difficult. We are already struggling to find sufficient data to train
probabilistic models; dividing the corpus further will only exacerbate the
problem.
In the short term, there is a better way to proceed: choose one particular
register, train using data only from that register, and accept that the
resulting systems are only (reliably) applicable to the same register. For
this, we need large register-pure corpora, which, luckily, are currently
available, at least for research. In the current work, encyclopedia entries
are used for both training and testing. In my view the use of a corpus of
uniform register is the correct method for applying probabilistic
modelling. Naturally, it is not possible to guarantee that the results will
be equally useful for processing a different register.
\section{Summary}
\label{sec:md_summary}
In this chapter I have proposed an architectural theory of statistical \nlp.
It defines a novel class of designs for statistical language learners
and thus expands the collection of proposed language models
for \sll\ systems.
The architecture advocated by the theory includes three components:
\begin{itemize}
\item a shallow semantic representation;
\item a conceptual probabilistic model; and
\item a lexico-syntactic module.
\end{itemize}
According to the theory, natural language analysis is driven
by semantic expectations that are provided by the conceptual
probabilistic model. A clearly separate syntactic representation
is maintained; however, lexico-grammar plays a passive role,
acting only as a constraint.
This architecture places more emphasis on context than
syntactically driven \nlp\ approaches because context
can be directly represented in the probabilistic conceptual model.
In principle, this provides for sensitivity to topic, register,
author and discourse structure, as well as world knowledge.
While the architecture requires that the semantic representation
be relatively shallow, this does not immediately appear detrimental
and has the side-effect of facilitating unsupervised learning.
Finally, the aim of the theory is to suggest new designs for
language processing systems. As such, the ultimate evaluation
of the theory must be empirical. This is the concern of the
work in chapter~\ref{ch:experimental}. I am not claiming that
the designs advocated by the theory are inherently superior.
Rather, such designs form promising ground for exploration.
The contribution of the theory is the identification
of a sizable new area within the world of possible statistical
models of language.
\chapter{Data Requirements}
\label{ch:dr}
\section{Introduction}
The theory presented in chapter~\ref{ch:md} offers a new
approach to the design of statistical language learners.
As such it offers one avenue in the space of possible designs.
However, this space is both large and complex. Furthermore,
the constraints governing the design of \sll\ systems
are still only poorly understood.
We need more informed methods for exploring the design space.
The subject of this chapter is the prediction of
training data requirements for \sll\ systems. Researchers
in the area have been using increasingly large training corpora
for their experiments and what was once regarded as a large
corpus is nowadays relatively small.
Yet despite this enormous increase in supply, data sparseness
remains a common phenomenon and
this raises the question of how much data is enough.
For example, Brown~\etal~(1992a) use 583 million words of training
data, yet it is still possible that training on a billion
words would yield significantly better results. What is needed
is a theory that can predict how much training data is
expected to yield a given performance.
Because training data is crucial to the performance of \sll\ systems,
availability of sufficient training data is a key design issue.
Different designs often require dramatically different amounts of
training data. Therefore predicting the data requirements of a given
model is an important part of informed design.
A predictive theory of these requirements would be an invaluable
navigational aid in the exploration of the \sll\ design space.
While the goal of predicting data requirements for an arbitrary
statistical language learning algorithm is still some way from being
realised, this chapter begins the development of just such a tool.
In section~\ref{sec:dr_probabilistic}, I advocate the use
of explicit probabilistic models because, not only do they support
reasoning about data requirements, but they have several other
independent motivations.
Section~\ref{sec:dr_sparseness} will explore the phenomenon
of data sparseness and consider the existing ways of dealing with it.
While techniques are available to alleviate data sparseness
in specific instances, they do not remove the need for a means
of predicting data requirements in advance. In section~\ref{sec:dr_need},
I will argue for the necessity of such a theory and indicate why
existing statistical theories do not give the desired information.
To provide some data points that the theory might be
expected to explain, section~\ref{sec:dr_tour} examines six
\sll\ experiments that have been reported in the literature,
paying particular attention to their training data volumes.
From this survey, it appears that significantly less training
data than might be expected from a simple rule of thumb is
sufficient to give successful results.
In order to move toward a general theory, I have defined a framework
for viewing \sll\ systems that can be used to reason about their
data requirements. The framework is defined in
section~\ref{sec:dr_framework} and forms the notational foundation
for the results in later sections. Section~\ref{sec:dr_beginning}
defines a simple statistical learning algorithm within this
framework and gives some results regarding the expected accuracy
of this algorithm as a function of the training set size.
To arrive at a bound on the expected accuracy of this
learning scheme in terms of the volume of training data, it is
necessary to make some simplifications. This is done in
section~\ref{sec:dr_global}, leading to a method of predicting
data requirements under certain limited conditions. Finally,
section~\ref{sec:dr_simulations} reports a series of simulations
to verify these results and to explore an alternate and more
realistic set of conditions.
\section{Reasons for Probabilistic Models}
\label{sec:dr_probabilistic}
It is fundamental to the approach that I will take that
statistical language learning systems be based on probabilistic models.
Therefore, before I turn to the theory, this section will consider some
independent reasons for wanting such a model.
The probabilistic taggers and grammars described earlier (see
sections~\ref{sec:sn_taggers} and~\ref{sec:sn_grammars}) are all based on
clearly defined probabilistic models.
Having at least a rudimentary probabilistic model should be regarded as a
necessity for any system that claims to be {\em statistical}, since
statistical methods make some assumptions about the probability
distributions from which samples are taken. Currently, however, all \nlp\
systems that derive information from a large corpus are referred to as
statistical. Of these, only a subset are based on an explicit probabilistic
model. That is, only some describe linguistic events of interest by
means of random variables with fixed ranges and
precise (if unknown) probability distributions,
where the evaluation of possible analyses follows the laws of
probability theory.
Others use statistical methods like the $t$-test (for example, Fisher and
Riloff,~1992 and Resnik and Hearst,~1993) without specifying an
underlying probabilistic model, and often invalidly.\footnote{Fisher and
Riloff rank candidates by their $t$-score (thus choosing the candidate least
likely under the (false) null model rather than the most likely candidate
under the true model); Resnik and Hearst apply the paired $t$-test to
obviously non-independent samples.} In reality, these systems rely on an
implicit model and, as Dunning~(1993) shows, these implicit models can be
extremely ill-fitted to the particular characteristics of language. In addition,
there are still further \nlp\ systems that are construed as statistical, but which
have little to do with statistics, being merely corpus-based (for example,
Bourigault,~1993). While many of these approaches succeed at their chosen
tasks, there are several disadvantages inherent in failing to build upon an
explicit probabilistic model.
Mathematics often makes dull reading and probabilistic models are no
exception. It is natural then to ask why a precise model is worthwhile, or
even useful. To demonstrate the advantages of such a model, let us consider
a corpus-based algorithm which is not based on one. As already mentioned
in section~\ref{sec:cn_statistical},
Pustejovsky~\etal~(1993) describe a heuristic
for bracketing compounds using a corpus. In chapter~\ref{ch:experimental}, I
will develop a probabilistic model of this problem in detail; however,
solving the problem of bracketing compounds is not their main aim and the
performance of the heuristic is apparently sufficient for their purposes.
Recall that it works as follows:
\begin{citedquote}{Pustejovsky~\etal,~1993:341}
\ldots to bracket each compound that includes more than two nouns, we
test whether possible subcomponents of the phrase exist on their own
(as complete noun compounds) elsewhere in the corpus.
\end{citedquote}
Thus, given the compound \lingform{database management system}, the
analysis is left-branching if the compound \lingform{database management}
appears in the corpus and right-branching if the compound
\lingform{management system} appears in the corpus.
Note that the former is most usually the correct analysis.
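The following is one way the quoted heuristic might be realised (a
Python sketch with invented data; the corpus is assumed to have been
reduced beforehand to a set of attested two-noun compounds):
\begin{verbatim}
# Sketch of the quoted bracketing heuristic; purely illustrative.

def bracket(w1, w2, w3, attested):
    """Bracket the compound (w1 w2 w3) by subcomponent occurrence."""
    left = (w1, w2) in attested    # e.g. "database management"
    right = (w2, w3) in attested   # e.g. "management system"
    if left and not right:
        return "left"        # ((w1 w2) w3)
    if right and not left:
        return "right"       # (w1 (w2 w3))
    return "undecided"       # no evidence, or conflicting evidence

attested = {("database", "management")}
print(bracket("database", "management", "system", attested))  # left
\end{verbatim}
The \verb|undecided| case, arising when neither or both subcomponents
are attested, already hints at the difficulties listed below.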
There are a number of notable disadvantages with this algorithm.
\begin{enumerate}
\item It assumes that sufficient subcomponents will appear elsewhere in the
corpus.
\item It assumes that no spurious subcomponents also appear.
\item It has no way to indicate the degree of confidence associated with an
analysis.
\item It does not take into account the frequency of words, so that it is biased
towards bracketing common words together.
\end{enumerate}
All these difficulties must be explicitly addressed by any system which is
based on a probabilistic model, either in the model itself, or in the parameter
estimation procedures. Also, in the example above, there appears to be no
obvious reason why \lingform{management system} should not appear
elsewhere in the corpus while \lingform{database management} does
appear. The development of a probabilistic model allows the intuitive
motivations for an algorithm to be made clear.
Of greater importance is the fact that a probabilistic model defines a precise
semantics for numerical values in the analysis procedure.
This supports rigorous reasoning about the results of the procedure.
Also, since the model forces us to make assumptions explicit, it allows
reasoning about the conditions necessary for the technique in question. At
least in principle, we know in which circumstances the method is applicable.
Further, if an extension to a probabilistic model is proposed,
the denotations provided
by the model allow us to infer under what conditions
such an extension will work.
All these properties
allow us to generalise in a principled way from the success of a
given statistical language learner. Rather than concluding only that
following a specific procedure will solve a particular problem under just
certain conditions, we can see (precisely) the principles by which the
procedure works, and thus reason about the application of the same
principles and the same knowledge resources under different circumstances.
If natural language processing techniques are to be applied in environments
beyond the research laboratory, it is crucial that the applicability of a
technique to a given task can be assessed accurately and easily. Making
assumptions explicit is an essential part of such assessment.
Yet another advantage is the natural provision within probability theory for
representing confidence in an outcome.
The existence of confidence measures facilitates the
combination of multiple knowledge sources. When relevant information
suggests conflicting analyses, taking confidence into account provides a
natural way to resolve the question: pay more attention to information that is
known with high confidence. Since one of the primary motivations for using
statistical methods is as a tool for solving the combination problem (as I
suggested in section~\ref{sec:sn_motivations}), this is an important advantage.
Finally, and this is the whole point of the present discussion, probabilistic
models form a foundation from which data requirements might be predicted.
Only if given precisely defined probability spaces and the mathematical
relationships between them, can we apply statistical reasoning to predict in
advance the training set sizes for which we can expect
the desired performance.
Even though probabilistic models are typically based on only approximately
correct assumptions, these models are sufficient to produce approximately
correct results.
Similarly, we can expect statistical reasoning
about sample sizes based on these
models to also be approximately correct. For the purpose of evaluating the
engineering viability of a design, even approximations as coarse as orders of
magnitude will be useful.
\section{Defining Data Sparseness and Statistical
Zeroes}
\label{sec:dr_sparseness}
The goal of this chapter
is the prediction
of data requirements. In the section following this one
I will be putting forward some
arguments justifying the need for predictions of this kind, but before that we
should consider in more detail just what the problem of data sparseness is.
Readers familiar with \sll\ or related machine learning topics could skip
this section;
it serves mainly to introduce the concept of data requirements at
a technical level.
\sll\ techniques are obviously dependent on their training data. Their
performance must vary with both the volume of training data and the quality
of it. To understand this relationship better,
a more technical examination is
necessary. How does training data drive learning? What does this mean for
relating the volume of training data to the success of learning? For this
discussion I will assume that training is supervised. This is a working
assumption of the theory that I will present later,
and is stated more formally in section~\ref{sec:dr_framework}.
Consider a system based on a probabilistic model, prior to any learning
taking place:
it contains a set of unknowns, called parameters, that need to be
estimated from training data. Perhaps the model supplies default values for
these unknowns, but if the system is going to learn from its training data, it
will need to use that data to establish and refine the values for these
unknowns.
We call the smallest unit of data from which the learning algorithm can
update the system's parameters a \newterm{training instance}. As more
training instances are examined, the learning algorithm has more information
and can make more accurate estimates for the parameters. In the limit, after
a sufficiently large number of training instances have been examined, the
algorithm will have accurate estimates of the unknown parameters and thus
be said to have learnt from the training data.
A crucial
element of statistical learning systems is the existence of {\em conflicting}
training data. Different training instances will result in adjustments of the
parameter estimates in different directions. Sometimes this will make them
more accurate, sometimes less; learning relies on there being more of the
former than of the latter in the training set.
Thus, learning is non-monotonic,
now moving forward, now backward, shakily progressing.
For almost any reasonable distribution of the training data,
the eventual convergence to correct parameter settings is guaranteed
by a result that is fundamental to statistics, the law of large
numbers (a counter-example is the Cauchy distribution, which has
no finite mean).
The more training instances are examined, the greater is the likelihood
that the parameter estimates are accurate.
Each training instance contributes to the estimation of the parameters,
but usually each training instance is only relevant to some of
the parameters, so not every parameter is updated with each
training instance. For example, if one of the model parameters
is of the form $\Pr(X=x|Y=y)$, then only training instances
for which the random variable $Y$ is equal to $y$, are relevant
to the estimation of this parameter.
It should be clear that the accuracy of the system's estimate of a
parameter depends on the number of training instances examined that are
{\em relevant} to that parameter. So now we can say
what data sparseness is. It is the condition of having too few relevant
training instances for a parameter estimate to be as accurate as required for
the purpose at hand.
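As a small illustration (a Python sketch with invented data), consider
the simplest estimator, maximum likelihood, which makes the role of
relevance explicit:
\begin{verbatim}
# Sketch: the MLE of Pr(X=x | Y=y) uses only those training
# instances in which Y = y. The data below is invented.

def estimate(x, y, training):
    """Maximum likelihood estimate of Pr(X=x | Y=y)."""
    relevant = [xi for (xi, yi) in training if yi == y]
    if not relevant:
        return None   # no relevant instances at all
    return relevant.count(x) / len(relevant)

training = [("a", "p"), ("a", "p"), ("b", "p"), ("a", "q")]
print(estimate("a", "p", training))  # 2/3: three relevant instances
print(estimate("a", "r", training))  # None: no instance has Y = "r"
\end{verbatim}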
The most obvious and dire case of data sparseness is when there are no
training instances relevant to a parameter. It is impossible to
say anything about the probability $\Pr(X=x|Y=y)$ if we have
never seen $Y$ take on the value $y$. In such cases, the model
can only provide defaults.
More common and equally
problematic is the case when none of the relevant training instances support
a non-zero parameter estimate. For example, if all the training instances
with $Y=y$ do not have $X=x$, then we have no evidence that $\Pr(X=x|Y=y)$
is non-zero. In other words, there is a linguistic event
which we want to estimate the probability of, but which does not occur in the
training set. This unseen event may actually be impossible; that is, the
corresponding parameter might be zero.
But since the training set is finite, it may simply be that the
event has low probability and we just haven't seen it yet. We call
this latter occurrence a \newterm{statistical zero}. It is impossible for the
learning algorithm to distinguish real zeroes from statistical zeroes.
However, it is possible to get arbitrarily high confidence that
the value of a parameter corresponding to an unseen event is below
a given threshold if the number of training instances can be
increased arbitrarily.
Statistical zeroes cause major technical difficulties.
For example, the mutual information between two events
which have never been observed to co-occur is negative infinity.
This means that if the probabilistic model adds
together unadjusted mutual information values, the appearance of statistical
zeroes will force it to conclude that all analyses are impossible.
No matter what other information the model incorporates,
it will never be able to
distinguish any analysis as better or worse than another in the presence of
statistical zeroes.
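This follows directly from the definition of the (pointwise) mutual
information used in this literature,
\begin{displaymath}
I(x;y) = \log \frac{\Pr(x,y)}{\Pr(x)\Pr(y)},
\end{displaymath}
since an estimate $\hat{\Pr}(x,y) = 0$ sends $I(x;y)$ to $-\infty$ and
drags any unadjusted sum containing it to $-\infty$ as well.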
Low counts other than zero are also unreliable. They reflect random
fluctuations as much as underlying relationships. One approach is to ignore
counts below a certain threshold. For example, Church~\etal~(1991b)
choose a cut-off of three, below which counts are ignored, \shortquote{to
alleviate the fact that mutual information values are misleading when the
frequency counts are small}~(Church~\etal,~1991b:145).
But having seen something twice does tell us more than having
never seen it, so this strategy wastes the precious data that we are trying to
conserve.
There are a multitude of methods to alleviate data sparseness.
Smoothing the observed counts is a common approach.
Recall from section~\ref{sec:sn_taggers} that a common method
is deleted interpolation, where the results of multiple probabilistic models
(usually a series of successively simpler ones) are added together
(the weight given to each model being optimised empirically).
If a probabilistic model is smoothed with
a constant function then the result is equivalent
to adding a constant to each count. When this constant is one, this is called
the Add-One method. Gale and Church~(1994) show that this can yield very
poor estimates for English word bigrams,
worse even than using unsmoothed counts.
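For concreteness, here is the Add-One estimate of a bigram parameter
$\Pr(w_2 \mid w_1)$ (a Python sketch with invented counts;
\verb|vocab_size| is the vocabulary size):
\begin{verbatim}
# Sketch of the Add-One estimate for Pr(w2 | w1); counts invented.

def add_one(bigram_count, context_count, vocab_size):
    return (bigram_count + 1) / (context_count + vocab_size)

V = 50000
print(add_one(0, 1000, V))    # unseen bigram: ~2e-5, no longer zero
print(add_one(200, 1000, V))  # seen bigram: ~0.0039 versus MLE of 0.2
\end{verbatim}
The second value illustrates the kind of distortion Gale and Church
observed: with a large vocabulary, well-attested events are heavily
discounted to pay for the unseen ones.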
Another approach, called the Good-Turing method, is to adjust the observed
counts to allow for the fact that unseen events might yet be seen if a larger
sample were examined (see Katz,~1987). Church and Gale~(1991) show
that this results in estimates that very accurately match empirically measured
values. Katz~(1987) also suggests a back-off method where a cruder model
is used to estimate parameters in the Good-Turing method when there is
insufficient relevant training data.
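For reference, the Good-Turing adjustment can be stated compactly. If
$n_r$ denotes the number of distinct events observed exactly $r$ times
in $N$ observations, the adjusted count for an event seen $r$ times is
\begin{displaymath}
r^{*} = (r+1)\frac{n_{r+1}}{n_{r}},
\end{displaymath}
so that, in particular, the total probability mass reserved for unseen
events is $n_1/N$.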
These smoothing methods are powerful instruments for tackling data
sparseness and thus reducing the effective data requirements of a given
probabilistic model. But no matter how much smoothing you do, accurate
estimates of the parameters of a model will always depend on having some
minimum amount of training data. Smoothing can reduce that minimum,
but it can never eliminate it.
Another approach to data sparseness is to change the model, moving to a
less data hungry one. To reason about this it is useful to refer to the number
of unknown parameters as the size of a model. It should be intuitively clear
that larger models generally require more data, though the exact relationship
is somewhat more complicated.
It can sometimes be advantageous
to reduce the size of a model by taking a coarser
view of the event space. For example, if a hidden Markov model tagger
distinguishes~80 different part of speech tags, then it includes~6400
transition probabilities, each specifying the probability of one tag following
another. But if these~80 tags can be mapped onto a smaller set of~10 tag
types, then the model could be modified to assume that the transition
probabilities only depend on the {\em type} of the tag.
This revised model only
contains~800 transition parameters and thus should require less data to train
accurately. The new model distinguishes less linguistic detail than the
original, but given the same amount of training data can derive more precise
parameter estimates. This may or may not be an advantage, depending on
how valid the assumption about tag types is.
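The parameter counts follow directly. On one plausible reading of the
revision, each transition is conditioned on the type of the preceding
tag rather than on the tag itself, giving
\begin{displaymath}
80 \times 80 = 6400 \mbox{ parameters } \Pr(t_i \mid t_{i-1})
\quad \mbox{versus} \quad
10 \times 80 = 800 \mbox{ parameters } \Pr(t_i \mid \mbox{type}(t_{i-1})).
\end{displaymath}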
This idea is closely related to the principle behind
the conceptual associations proposed by Resnik and Hearst~(1993).
In the conceptual association schemes proposed so far there have been fewer
concepts than words (exploiting synonymy), so that concepts are coarse
representations of words (in the strict sense of coarse used above). Using a
coarse view of the event space is really just another form of smoothing, one
in which data about events that are no longer distinguished is smoothed
together. However, smoothing has now become an integral part of model design.
This illustrates that there is an important relationship
between model design and training data requirements. In the next section I
will argue that a theory describing this relationship in a way that enables us
to predict data requirements is unavailable and highly desirable.
\section{The Need for a Theory of Data
Requirements} \label{sec:dr_need}
This section is dedicated to justifying
the search for a predictive theory of data requirements.
First I will outline the possible uses of such a theory in both scientific
and commercial endeavours and then I will briefly relate several existing
methods that might be expected to fulfil our needs, but which do not for one
reason or another.
\subsection*{Research prototype design}
All \sll\ systems need large electronically stored text corpora to train on.
For research prototypes, such as those appearing so frequently now in \nlp\
laboratories, the cost of these corpora is not currently the
limiting factor. While acquiring, storing and maintaining these resources
does cost money, it is primarily electronic availability that determines the
size of training texts. Generally a specific research project is begun with a
particular training corpus already chosen, and if not, then this choice is made
quite early. The amount of training data is more usually a design constraint
than an experimental variable.
One consequence of this is that the researcher must be careful to design the
experiment with this constraint in mind. In particular, the design of the
probabilistic model needs to be done with reference to the amount of
available training data. For example, Magerman~(1992) advises that
\shortquote{if there is not enough data to estimate a joint probability
distribution function for two events which are not independent, the best
alternative is to pick a different model}~(1992:14). He does not say exactly
how to determine if this is the case (primarily because it is beyond the scope
of his discussion). Without explicitly computing estimates
of the joint distribution
from the available training data, determining whether there is
sufficient data is a non-trivial task. Similarly, Merialdo~(1994) suggests
that \shortquote{the amount of training data needed to properly estimate the
model increases with the number of free parameters $\dots$ In the case of
little training data, adding reasonable constraints \dots reduces the
number of free parameters and should improve the quality of the
estimates}~(1994:164).
An important component of model design is choosing what linguistic
distinctions to incorporate. All models must be sensitive to linguistic
patterns that allow the right distinctions to be made, but they may vary in
their sensitivity (for example, by taking a more or less coarse
view of events). Linguistically simplistic systems will treat two
linguistically different inputs in the same way because their linguistic
knowledge fails to distinguish them, and this will lead to errors. But while
this suggests that it is desirable to incorporate as much linguistic
sophistication as possible, there is a problem with data requirements.
Consider the effect of introducing one additional random variable on which
all outputs are conditioned. The number of distinctions which the system
can make will be increased by this addition, but at the same time, the number
of parameters in the system will increase too. If every existing parameter is
to be further conditioned on the new random variable, then the size of the
model will be multiplied by the number of values available to the new
variable.
For example, Resnik and Hearst's~(1993) prepositional phrase attachment
system takes into account the head of the noun phrase
that is the object of the preposition, thus distinguishing
cases that Hindle and Rooth's~(1993) system did not. As a result, the number of
parameters required for lexical association is multiplied by the number of
nouns, an increase in model size of~3--4 orders of magnitude. Conceptual
association is then used to try to overcome the resulting data sparseness, but
it fails to reduce the size of the model back down to that of Hindle and
Rooth's.
If the training set is not hand annotated, the effect of the new random
variable is even greater than what is suggested by the increase in model size.
This is because introducing the new variable creates additional ambiguity in
the training set; the value of this new variable must be determined for each
training example by whatever mechanism is used to allow unsupervised
training. This effectively decreases the number of training instances
resulting in greater data requirements.
As argued in Lauer~(1995a), adding linguistic distinctions to the
probabilistic model presents a trade-off between language sensitivity and
data sparseness. It is a balance between poor modelling of the language and
insufficient data for accurate statistics. If researchers are to find a
satisfactory compromise in this trade-off, they will need a theory of data
requirements.
Furthermore, it is important that this theory be {\em predictive}. We want
to use the training data itself as little as possible in establishing the
plausibility of different models. If a series of models is constructed and each
applied, in a trial and error fashion, to the training data to establish whether
there is enough data for that model, then we risk overfitting the model to the
training data. Sound scientific methodology requires that the experimental
design, including the development of the probabilistic model, be done
independently of the training data. While we may have enough test data to
develop on one set and evaluate on another, frequently
we need all the training data we can get.
If the training data is used during the model development, it
becomes less valid to generalise from the resulting model. Researchers need
a predictive theory of data requirements.
\subsection*{Commercial applications}
While electronic texts are becoming increasingly available, they can be
expensive. Even unannotated text is limited and the cost of storage media
alone guarantees that it is never free. The cost of marked up text is
considerably more, and the price of any reasonably large amount of richly
annotated text will stand out on all but the largest budgets. Many corpora
are available only for research purposes, or at a reduced price for
such purposes, so whereas
researchers may pay little attention to financing their corpus,
these considerations
are paramount in a commercial endeavour.
Consider the decisions facing the \sll\ technology consumer, that is, the
architect of a planned commercial \nlp\ system. A range of \sll\ techniques
are available to her. For example, she might opt for a bigram tagging model
with an expected accuracy of~94\% or for a trigram model with~97\%. Why
wouldn't she halve the error rate by choosing the latter? Because these
accuracy figures represent the performance of these models in the limit of
unlimited training data.
The trigram
model requires more training data to reach its maximum performance than
the bigram model, probably a lot more, and every bit of data is going to cost
money.
The situation can get much more complicated than this example. In this
case, exactly the same type of training data is required for the two different
models. If one is building a trigram tagger, it is trivial to implement a
bigram tagger at the same time. So one strategy would be to build both and
see which one performs best on the data available (or better still use deleted
interpolation to smooth the two models together). But this relies on having a
fixed amount of data available, an unrealistic assumption. Instead, the
commercial system architect will be trying to minimise the cost of data and
still attain a certain performance level. She needs to estimate the
price-performance trade-offs at design time.
In more complicated cases, the different \sll\ techniques will require
different types of data. The option to build both models is not feasible
because this would require purchasing two complete sets of training data,
one of each type, not to mention the doubled model development costs.
Informed design choices require advance knowledge of the trade-off
between volumes of training data and resulting performance.
The need for a predictive theory of data requirements doesn't end with
system design either. Suppose the architect settles on a particular design and
hands over to the development team. They buy one batch of training data,
implement the model and come up with a system that performs at something
like the expected level. Can they improve the performance by purchasing
further batches of training data? Will the improvement be worth the cost?
A theory of data requirements could be used to make such predictions.
Without such a theory, the team must rely on extrapolating learning curves, a
tricky business in the presence of the noise that results from randomness
under data sparse conditions.
\subsection*{What is available already?}
Thus, for both research and commercial applications, a predictive theory of
data requirements would be extremely useful. In the remainder of this
section I will briefly explain why existing techniques are not sufficient and
thus why we require a new theory especially devoted to \sll\ data
requirements.
The field of statistics is concerned with establishing how large a sample
needs to be in order to have a given confidence in a measurement.
It is reasonable
to ask why standard statistical theory doesn't provide the answers we seek.
The reason is that language exhibits some peculiar characteristics which
make it different from the more traditional areas of application for statistics
such as the physical sciences.
First, the obvious elements of language are all discrete and unordered. There
is no way to calculate the mean of a set of words, nor even the median. This
type of data is called \newterm{categorical data} in statistics. It presents
some special properties that make standard statistical models inappropriate
and the statistical theory for categorical data is usually relegated to more
advanced texts specifically devoted to it (see for example, Agresti,~1990).
Within categorical statistics, it is usual to make assumptions about the
distributions of events (that is, statistics are parametric). The distributions
found in language often violate these assumptions quite dramatically (see for
example Church and Gale,~1995). Furthermore, the probability
distributions of words are typically severely non-uniform. Zipf's law states
that the probability of a word is inversely proportional to its frequency rank,
so a small number of frequent words carry much of the probability
distribution, while a vast host of rarer words are left to carry the remainder.
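To see how severely such a distribution concentrates probability mass,
consider the following sketch (the vocabulary size is an illustrative
assumption, not a measurement):
\begin{verbatim}
# Zipfian distribution: Pr(word with rank r) proportional to 1/r.
V = 100_000                          # assumed vocabulary size
weights = [1.0 / r for r in range(1, V + 1)]
total = sum(weights)
probs = [w / total for w in weights]

# Probability mass carried by the 100 most frequent word types.
print(sum(probs[:100]))              # roughly 0.43
\end{verbatim}
Under this assumption, one tenth of one percent of the vocabulary carries
over forty percent of the probability mass.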
These properties make traditional statistical reasoning difficult. For many
promising statistical methods, one often finds that language exhibits exactly
those conditions under which the method is inapplicable. While there may
be lesser-known statistical methods that are appropriate, we still await their
transfer from the statistical disciplines to computational linguistics.
If this thesis encourages or facilitates that transfer,
then it has met one of its goals.
Modern statistics does, however, provide a technique, called the
\newterm{bootstrap method}, which allows the calculation of error bounds on
any computable statistic (see Efron and Tibshirani,~1991, for an
introduction).
The method makes no distributional assumptions,
relying on computation power to generate large numbers of samples from
the training data. Unfortunately, the resulting variation estimates are valid
only for the available training data. The method cannot be used to predict
the effects of adding more data, nor used to predict requirements prior to
access to the training data.
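A minimal sketch of the procedure, assuming we want a variation estimate
for some computable statistic of a sample:
\begin{verbatim}
import random

def bootstrap_stderr(sample, statistic, resamples=1000):
    # Estimate the standard error of `statistic` on `sample` by
    # resampling with replacement: no distributional assumptions.
    estimates = []
    for _ in range(resamples):
        resample = [random.choice(sample) for _ in sample]
        estimates.append(statistic(resample))
    mean = sum(estimates) / len(estimates)
    var = sum((e - mean) ** 2 for e in estimates) / (len(estimates) - 1)
    return var ** 0.5

# The resulting error estimate describes only the sample we already have;
# it cannot predict the effect of collecting more data.
\end{verbatim}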
Somewhat closer to computational linguistics than theoretical statistics is the
field of machine learning; there are results in the area of probably
approximately correct learning (or \acronym{pac}-learning) that are
relevant. A typical example is the work by Abe and Warmuth~(1992).
Their work concerns the estimation of parameters in any
probabilistic finite state automaton.\footnote{The set of probabilistic finite
state automata contains all the models and training algorithms considered
later in this chapter and is likely to contain any proposed probabilistic
models of language.} Their result bounds the sample complexity of this
estimation task, that is the number of training instances required to achieve a
given accuracy with a given confidence.
The result makes few assumptions, making it suitable for application to
language models, and it doesn't even require that the language data obey the
assumptions of the probabilistic model.\footnote{A weakness of the results
appearing later in this chapter is that they do inherit the assumptions of the
probabilistic model; but if the probabilistic model is to be useful, it must be
approximately correct in any case.} It also allows arbitrary constraints on
the model, such as would be entailed by requiring each word to be tagged
only with those parts of speech listed for it in the lexicon.
The difficulties with this result are three-fold. First, Abe and Warmuth
define accuracy to be relative entropy between the actual distribution and
that defined by the trained probabilistic automaton. A theory of data
requirements for \sll\ needs to predict the accuracy of the language analysis
system after training. While a low relative entropy makes accurate
predictions likely, it is possible to have arbitrarily large
differences between
two relative entropies, but no resulting difference in the error rate of the
language analysis system.
Second, the bound allows for arbitrarily high confidence in the accuracy of
the trained model: it is guaranteed to hold in $(100-\alpha)$\% of
training runs, where $\alpha$ may be chosen arbitrarily small. In practice, we don't need to be
highly confident that a given performance will be attained; an approximate
estimate of what to expect is more valuable. One could say that Abe and
Warmuth give a worst-case bound, while an average-case bound is of more
interest.
The third problem, the crucial one, arises because of the first two. The
bound that they arrive at is extremely weak. It will never predict that fewer
than~64$t$ training instances are needed, where $t$ is the size of the model.
In fact, for reasonable choice of other parameters (maximum state sequence
length, desired accuracy and desired confidence level) the bound will be
very much higher (these arguments are derived from
Lemma 3.1 of Abe and Warmuth,~1992:219).
However, their main purpose in establishing the
bound is to show that the sample complexity is polynomial rather than
exponential, and for this purpose it is quite a strong bound. For the purpose
of predicting data requirements in real \sll\ systems it is far too weak.
Where statistics and machine learning research run out of steam, intuition
and heuristic take over. The final technique I will mention here is an
intuitive approximation with no firm theoretical basis. The reason for
including it is that it appears to be used as a rough guide in practice.
Weischedel~\etal~(1993) state
that one \shortquote{rule of thumb suggests that the training set needs to be
large enough to contain on average ten instances of each type of tag
sequence that occurs}~(1993:363). In other words, the rule
suggests an expected data requirement of~10$t$,
where $t$ is the size of the model. As we shall see in
section~\ref{sec:dr_tour}, this prediction does not appear to be borne out by
experiments. One reason for this is that model size isn't the only factor, as
Weischedel~\etal\ also point out.
So theories that are useful for predicting data requirements are not only
necessary for building \sll\ systems, but are currently unavailable. Despite
this need, very little attention has been paid to the problem.\footnote{See de
Haan~(1992) for an investigation of sample sizes for linguistic studies.} In
fact, training set sizes have generally received little investigation.
Weischedel~\etal\ state that they \shortquote{are not aware of any other
published studies documenting empirically the impact of training set size on
performance}~(1993:364).\footnote{A detailed study of the learning rate of
a hidden Markov model tagger has been given since then by Merialdo~(1994).}
In summary, training data will always be limited and thus reasoning about
training data requirements is an important part of system design. At the
moment, the field lacks a predictive theory of data requirements. In the
remainder of this chapter I will describe some initial results in the search
for such a theory.
\section{A Tour of Training Set Sizes}
\label{sec:dr_tour}
Statistical language learning systems are varied in both goals and
construction and virtually the only thing common to all is the use of a
training corpus. Any theory which captures the data requirements
characteristics of all of them will have to be quite sophisticated, and may
need to be composed of a number of heterogeneous parts.
In this section, I will analyse six \sll\ systems with particular
reference to their training set sizes.
These systems are typical subjects of the kind of theory required.
Each can be seen as a data point that needs to be explained by the theory.
They are: a sense disambiguator, two taggers, two prepositional
phrase attachment systems and a compound noun parser.
Putting them side-by-side should help make the domain of the theory more
concrete.
My analyses will involve some guesswork. Even in those systems that are
based on explicit probabilistic models, it is rare that sufficient information is
given to evaluate the volume of training data used. Most commonly the size
of the training corpus is given in words rather than in number of training
instances. In most cases I have managed to arrive at rough estimates of
important values with a little detective work, putting together facts from
various parts of the paper describing a system. I have not included any
probabilistic grammar systems because it is difficult to find published
reports that give enough detail to allow even this kind of guesswork.
For each system I will pay attention to the size of the probabilistic model
and the number of training instances. Often the models contain two or more
qualitatively different types of parameters. A bigram tagger is the canonical
example: it has transition probabilities and word emit probabilities.
Parameters of each type are estimated from different types of training
instances. Thus, the training instances fall into classes, effectively forming
multiple distinct training sets. For a bigram tagger, pairs of consecutive tags
are used to estimate the transition parameters while the word emit
parameters are estimated from pairs composed of a word and its tag.
Since each type of parameter is estimated independently from a distinct
training set, the degree of data sparseness affecting the different types of
parameters will vary. If there is a large amount of training data of one type,
estimates of the corresponding type of parameter will be very accurate. In
evaluating the degree of data sparseness it is a reasonable working
assumption to ignore parameter types with large training sets. In the
analyses below, those parameter types that I have judiciously discounted for
this reason will be noted. Also, where no probabilistic model is given I will
attempt to reconstruct a plausible one that approximates the statistics
computed by the system.
\subsection*{Six SLL systems}
Yarowsky's~(1992) sense disambiguation system (refer back to
section~\ref{sec:sn_supervised} for more detail) uses a~101 word window
of co-occurrences. The model maintains estimates of the relative likelihood
of words in each category of a thesaurus
appearing within~50 words
of every other word. It chooses a sense by adding together the logarithms of
probabilities $\Pr( \mbox{Cat} | w)$ for each word, $w$, in the window,
which is equivalent to assuming that the words in the window
are all independent random variables. To arrive at
the estimates, it relies on the following application of Bayes' rule.
\begin{equation}
\Pr( \mbox{Cat} | w) = \frac{\Pr(w | \mbox{Cat})
\Pr(\mbox{Cat})}{\Pr(w)}
\end{equation}
Since the probabilities $\Pr(\mbox{Cat})$ are assumed constant, no data is
required to estimate them. This leaves two types of parameters, those of the
form $\Pr(w)$ (word probabilities) and those of the form $\Pr(w |
\mbox{Cat})$ (co-occurrence probabilities). The former type cancels out
from the probabilistic model, but the smoothing process uses them (see
Gale~\etal,~1994, for details of the smoothing) so we still need to take
account of them in evaluating the degree of data sparseness.
I will guess that the vocabulary size is on the order of~100 thousand since it
is derived by taking all words from Grolier's encyclopedia.\footnote{Some
stemming is performed, so it is the number of stems in the vocabulary that
we want.} Thus there are about~100 thousand word parameters. Since
there are~1042 categories (the thesaurus used is the 1977 Roget's),
there are about~104 million co-occurrence parameters.
The training corpus is Grolier's encyclopedia, which contains on the order
of~10 million words. Each of these is used as a training instance for
estimating the word probabilities (so there are about~100 training instances
per parameter for these). Further, each of the nouns appearing in the corpus
provides~100 (ambiguous) training instances for estimating the
co-occurrence probabilities. Assuming that about every fifth word is a noun,
there are about~200 million training instances for these (and thus around~2
training instances per parameter). Since the latter type of parameter has far
less data, I will ignore the former type in the summary below. The average
accuracy reported is~92\%. More recent work has shown that similar sense
disambiguation tasks can be performed via supervised training algorithms
with an accuracy of~96\% using the same (or less) context (see column~5 of
the results in Yarowsky,~1995:194), so presumably human performance
would be at least as good as this given the same test conditions.
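The arithmetic behind these estimates can be made explicit; every quantity
below is one of the rough guesses stated above:
\begin{verbatim}
vocabulary = 100_000       # guessed number of stems from Grolier's
categories = 1042          # categories in the 1977 Roget's
cooccurrence_params = vocabulary * categories    # about 104 million

corpus_words = 10_000_000  # order-of-magnitude size of Grolier's
nouns = corpus_words // 5  # assuming every fifth word is a noun
window = 100               # co-occurrence instances contributed per noun
training_instances = nouns * window              # about 200 million

print(training_instances / cooccurrence_params)  # about 2 per parameter
\end{verbatim}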
Weischedel~\etal~(1993) describe \progname{post}. It is a trigram Markov
model part of speech tagger (refer back to section~\ref{sec:sn_taggers}) and
thus maintains two sets of parameters: word emit probabilities ($\Pr(w | t)$)
and transition probabilities ($\Pr(t_3 | t_1, t_2)$). To choose a tag for a
word, it considers all possible tag sequences for the sentence and computes
their probabilities by multiplying the estimates for each tag triple and
word-tag pair in the sequence.
It uses~47 part of speech tags, so the number of free transition parameters is
a little over~100 thousand
($47 \times 47 \times 46$).\footnote{The requirement
that all probabilities in
a given distribution must sum to~1 means there is one less free parameter
than there are probabilities to be estimated in each distribution.}
They only allow a
word to be tagged with a tag which has been observed to be possible for that
word. Thus, there are far fewer than~47 free word emit parameters per
word. Since \shortquote{the average number of parts of speech per word
[token] is approximately two}~(1993:361), we can guess that each word
type has an average of roughly two possible tags and that therefore there are
only two free parameters per word type. Given that the text is from a
restricted domain (\publicationname{Wall Street Journal}), I will guess that
the vocabulary is about~10 thousand words.
The training data is one million words from the University of Pennsylvania
Treebank. Each of these yields one training instance for each of the word
emit and transition parameter sets. Since there are far fewer of the former, I
will ignore them in the summary below. For the transition parameters, there
are about~10 training instances per free parameter, and the accuracy is
reported to be~97\%, which is approximately the accuracy of human taggers
using the whole context. A second data point for the theory is provided by
their study, since they also report the accuracy of the system when trained on
only~64 thousand words of text. This is only~0.6 training instances per
parameter with a reported accuracy of~96\%.
Merialdo's~(1994) tagger uses an almost identical model, but has a larger
tag set of~76 parts of speech. Thus, there are about~430 thousand transition
parameters to be estimated. Again, the words are restricted only to take tags
which they have been observed to take in training, and the number of tags
per word token is less than two. Thus, there are relatively few word emit
parameters and these can be ignored on the same grounds as above. The
training data is only ever measured in sentences, but since the first~2,000
sentences contain \shortquote{less than~50,000 words}~(Merialdo,~1994:162),
I will assume that the training set of~40 thousand sentences contains
about one million words. This leads to an estimate of~2.3 training
instances per parameter, which is sufficient to achieve an accuracy of~97\%.
Using only the first 2,000 sentences (and thus~0.1 training instances per
parameter) yielded~95\% accuracy.
Hindle and Rooth's~(1993) system to syntactically disambiguate
prepositional phrase attachments (refer back to
section~\ref{sec:sn_specialised}) is based on a probabilistic model of
associations between prepositions and the nouns or verbs that they attach to.
The probability of a nominal attachment is represented by parameters of the
form $\Pr(p | n)$, where $p$ is the preposition and $n$ the noun (noun
associations). Verbal attachment probabilities are computed from two sets of
parameters: $\Pr(p | v)$ (verb associations) and $\Pr(\nullsym | n)$ (null
associations), where $v$ is the verb and $n$ is the head noun of the verb's
object.
The training data is extracted from~13 million words of newswire text by a
partial parser. I will guess that there are about~50 thousand nouns,~10
thousand verbs and~100 prepositions in the vocabulary of this text.
Therefore, there are around~5 million noun associations,~1 million verb
associations and~50 thousand null associations. The parser extracts
around~1.9 million noun phrases which are not the object of a verb, so there
is a great quantity of data for estimating null associations and I will therefore
ignore these. It also extracts over~750 thousand noun-preposition pairs
(used to estimate the noun associations) including over~220 thousand
verb-object-preposition triples (used to estimate the verb associations). A
process of cyclic refinement is used to generate training data from
ambiguous examples.
Both these kinds of association have about the same number of training
instances per parameter, so for simplicity I will combine these: around~6
million parameters are estimated from around~1 million training instances
(0.2 training instances per parameter). The resulting accuracy is close
to~80\%, with human subjects achieving~85--88\% on the same information.
Resnik and Hearst~(1993) aim to enhance Hindle and Rooth's~(1993) work
by incorporating information about the head noun of the prepositional
phrase in question (refer to section~\ref{sec:sn_conceptual}). There is no
probabilistic model given; instead a set of frequency weighted mutual
information scores is submitted to a $t$-test to distinguish nominal from
verbal attachment. A reconstruction of the probabilistic model would
involve parameters of the form $\Pr(c_1, p, c_2)$ and $\Pr(v, p, c_2)$
(corresponding to the mutual information scores between these elements),
where $v$ is a verb, $p$ a preposition and $c$ a concept.
WordNet contains over~70 thousand concepts, but the $t$-test and
frequency weightings employed by Resnik and Hearst ensure that only those
concepts high up in the taxonomy will be used in making the attachment
decision. I will guess that one thousand concepts is a more reasonable figure
(this is the number of Roget's categories, for example) and leave the number
of prepositions at~100. Since the corpus is smaller I will assume
5 thousand verbs, which leads to
around~600 million parameters (100 million concept associations and
500 million verb associations). Their training
corpus \shortquote{is an order of magnitude smaller than the one used by
Hindle and Rooth}~(Resnik and Hearst,~1993:61), but is parsed, avoiding
the need for cyclic refinement. Assuming that the distribution of
prepositional phrases is similar in both corpora, the number of training
instances is therefore about~100 thousand. This is a ratio of
two training instances per 10 thousand parameters.
Even given the additional information about the head noun of the
prepositional phrase, the accuracy reported fails to improve on that of
Hindle and Rooth, being~78\%. It seems likely that insufficient training
data is the cause of this shortfall. Dras and Lauer~(1993) report that humans
achieve up to~92\% accuracy given the same information (that is, the verb,
head of the object, preposition and head of the prepositional noun phrase).
Finally, Lauer~(1994) describes a system for syntactically analysing
compound nouns (details of the probabilistic model and results of this
experiment are given below in the first part of
chapter~\ref{ch:experimental}). Two word compounds extracted from
Grolier's encyclopedia are used to estimate an association between every
pair of thesaurus categories and the results used to
select a bracketing for three word compounds. The associations measure the
probability of a word in one thesaurus category pre-modifying a word in
another category. Thus, the free parameters are of the form $\Pr(t_1
\rightarrow t_2 | t_2)$. Since there are~1043 categories (the
thesaurus used is the 1911 Roget's), the model
size is a little over one million free parameters.
The training set of two word
compounds consists of just over~24 thousand noun pairs, giving only~2.4
training instances per 100 parameters. Lauer and Dras~(1994) report the
accuracy to be~77\% (various related results appear in
chapter~\ref{ch:experimental} below). Section~\ref{sec:cy_human}
below reports an experiment in which
human subjects, given only the compounds, get an average of~81.5\%
accuracy.
\subsection*{Summary}
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}\hline
System & Training & $m$ & $t$ & $m:t$ &
Accuracy & Humans \protect\footnotemark \\
\hline
\hline
Weischedel~\etal\ tags & Supervised & 1M & 100k & 10 & 97\%
& ($\leq$ 97\%) \\
Merialdo tags & Supervised & 1M & 430k & 2.3 & 97\%
& ($\leq$ 97\%) \\
Yarowsky senses & Unsupervised & 200M & 104M & 2 & 92\%
& ($\geq$ 96\%) \\
Weischedel~\etal\ tags & Supervised & 64k & 100k & 0.6 & 96\%
& ($\leq$ 97\%) \\
Hindle \& Rooth PPs & Auto-supervised & 1M & 6M & 0.2 & 80\%
& 85--88\% \\
Merialdo tags & Supervised & 50k & 430k & 0.1 & 95\%
& ($\leq$ 97\%) \\
Lauer CNs & Auto-supervised & 24k & 1M & 0.02 & 77\%
& 82\% \\
Resnik \& Hearst PPs & Supervised & 100k & 600M & 0.0002
& 78\% & ($\leq$ 92\%) \\
\hline
\end{tabular}
\end{center}
\caption{Summary of a tour of training set sizes for statistical \nlp\ systems}
\label{tb:dr_tour_summary}
\end{table*}
\footnotetext{Brackets indicate measured on different data
and/or under different conditions.}
Table~\ref{tb:dr_tour_summary} shows a summary of the above systems.
The column labelled $m$ records the number of training instances, the
column labelled $t$ shows the size of the model (number of free parameters)
and the last two columns show the performance results of the system and
that of humans given the same information. Also listed is
the type of training used. While the problems addressed
by each of these systems are of differing difficulties, a correlation between
success and data sparseness (as measured by the ratio $m:t$) is evident.
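The ratio column can be recomputed directly from the $m$ and $t$ columns;
the following sketch simply repeats that arithmetic:
\begin{verbatim}
# (system, training instances m, model size t, reported accuracy %)
systems = [
    ("Weischedel et al. tags", 1_000_000,     100_000, 97),
    ("Merialdo tags",          1_000_000,     430_000, 97),
    ("Yarowsky senses",      200_000_000, 104_000_000, 92),
    ("Weischedel et al. tags",    64_000,     100_000, 96),
    ("Hindle & Rooth PPs",     1_000_000,   6_000_000, 80),
    ("Merialdo tags",             50_000,     430_000, 95),
    ("Lauer CNs",                 24_000,   1_000_000, 77),
    ("Resnik & Hearst PPs",      100_000, 600_000_000, 78),
]
for name, m, t, acc in systems:
    print(f"{name:25s} m:t = {m / t:8.4f}  accuracy = {acc}%")
\end{verbatim}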
It is also clear that far less than ten training instances per parameter are
necessary to achieve accurate performance. As Weischedel~\etal~(1993)
point out, there are many three tag sequences that never occur in their
training data. Thus, there are more training instances available for
estimating the probabilities of the three tag sequences that do appear. The
fact that there is no data available for the non-occurring sequences does not
affect performance because exactly these sequences do not occur in testing
either.
This characteristic is likely to be observed in a wide range of linguistic
phenomena because of the severely non-uniform distributions that occur in
language (as mentioned in the last section). This alone makes it clear that
the size of the model is not the only factor in determining data requirements.
In section~\ref{sec:dr_simulations} below I will consider the problem of
non-uniform distributions further.
Another observation is that all \sll\ systems have optimal performance rates
that are less than~100\%. Human judges cannot perform most of these tasks
with complete accuracy when given only the information available to the
system. Even if the probabilistic model is perfect, there is still
an upper limit to the performance which is independent of the volume of
training data. The test examples appear to exhibit dependence on context
that is unavailable and so systems limited to
the same contexts cannot perform with~100\% accuracy. Also, the
models make faulty assumptions, so their performance will be limited even
further, regardless of how accurate their parameter estimates are.
In the theory of data requirements to be developed below, the optimal error
rate will play an important role. System accuracy will be measured not in
absolute terms but rather relative to this optimal accuracy. The work
presented forms only the beginnings of a complete theory of the subject and
is too simplistic to explain all of the observations made during the tour
above. While its assumptions are limiting enough to prevent it being
directly applicable to even one of the above systems, it does expose issues
that are important to all of them.
\section{A Simple Framework for Statistical NLP}
\label{sec:dr_framework}
In the previous section I began with the observation that \sll\ systems are
widely varied in both goals and construction. But if a theory of data
requirements is going to provide insights into many of them, then it must
begin on common ground. In this section I will construct a framework for
viewing \sll\ systems based on their commonalities. Results appearing in
later sections will concern one simple statistical language learning model
within this framework. The level of mathematical detail given is necessary
to support those results, but, before the formal mathematics,
I will informally sketch the framework's main elements.
\subsection*{Informal sketch}
The first commonality amongst all \sll\ systems is that they are designed to
make choices on the basis of certain inputs.
The input
information must have some form of representation, typically a string of
words, and be of finite (if arbitrary) size. In general,
the input representation will not fully represent all possible
linguistic events. Instead, it will divide the space
of linguistic possibilities into a set of different cases.
The set of different cases distinguished by the input representation
will be called the \newterm{bins} in the formal framework defined
below.
Since language is highly context
dependent, it is unlikely that the input representation will
always provide sufficient information to determine the analysis uniquely.
For example many sentences given to a tagger will have different readings
in different contexts. If these readings involve different parts of speech
assignments, then the tag sequence will be dependent on more than just the
words in the sentence. Since the tagger is only given these words, it has
insufficient contextual information.
Another way to look at it is this: consider all of the sentences ever written.
By this I mean sentence tokens, so two occurrences of
the same sentence count as two distinct elements. Each element of
this set has, in principle, a unique analysis (parts of speech assignment,
parse, etc). By looking at only the words in the sentence, the analyser is
collapsing multiple distinct linguistic events into one representation.
It has a
coarse view of the linguistic possibilities. No matter how sophisticated we
make natural language understanding systems, there will always be
contextual factors that the system does not account for. Therefore there will
always be linguistically distinct events that are treated identically by the
system. The degree of coarseness is determined by the input representation.
Thus, the input representation plays an important role, defining the set of
test cases that can be distinguished.
When this set is larger, the system can be more
linguistically sensitive, but it also has more to learn.
All \sll\ systems have an output representation too. This representation
defines the set of choices available to the system. To be successful, there
must be a correlation between the input set and the output set. The inputs
might not fully determine the outputs, but they must partially determine
them. The goal of the system is to take test cases and predict analyses. The
system must compute a function from its input set to its output set, usually
by computing probabilities for each possible output. However,
regardless of how the function is computed, the accuracy of the system
depends only on the particular function produced. Once each element of the
input set is mapped to the analysis it should receive, the details of how the
mapping was performed do not bear on the accuracy. Therefore highly
accurate parameter estimates are only valuable to the extent that they
contribute to making correct predictions.
So far we have included an input set, an output set and a function from
inputs to outputs representing the learnt analysis procedure, with an
associated accuracy. We still need a training algorithm. The goal of such an
algorithm is the acquisition of the analysis function. Given some training
data, this algorithm will furnish parameter estimates,
that is, the information necessary to turn the
probabilistic model into an analysis function. If
given very little data, we expect the resulting analysis function to be
inaccurate. As more data is provided, the analysis function should improve.
If the training algorithm is well-behaved, then given unlimited data it should
eventually result in the analysis function converging to an optimal accuracy.
Our main interest here is the rate of this convergence and we will continue
that investigation once all these elements have been made formal.
\subsection*{Formal framework}
Formally, there is a set of linguistic events $\Omega$ from which every test
instance will be drawn, the universe of language.
Let $V$ be a
countable set of values that we would like to assign to any given linguistic
input. This defines the range of possible answers that the analyser can make.
Let $J:\Omega \rightarrow V$ be the random variable describing the
distribution of values that linguistic events take on. We also require a
countable set of inputs, $B$, to use in selecting a value for a given linguistic
event. I will refer to each element of $B$ as a {\em bin}. Let $I:\Omega
\rightarrow B$ be the random variable describing the distribution of bins
into which linguistic events fall.\footnote{I assume
without loss of generality
that $|B| > 0$, $|V| > 1$ and $(\forall b \in B) \Pr(I=b) > 0$
in all following results.}
The task of the system is to choose which value is the most likely given only
the available inputs. Therefore, it requires an analysis function taking inputs
and yielding values.
\begin{definition}[Analysis Function]
An {\em analysis function\/} is a function $A:B \rightarrow V$,
used by an \sll\
system to predict analyses on the basis of available information.
\end{definition}
The task of the learning algorithm is to acquire such a function by
computing statistics on the training set. Putting these components together,
we can define a statistical processor.
\begin{definition}[Statistical Processor]
A {\em statistical processor\/}
is a tuple $ \langle \Omega , B , V , A \rangle $,
where:
\begin{itemize}
\item $\Omega$ is the set of all possible linguistic events;
\item $B$ and $V$ are countable sets, the bins and values respectively; and
\item $A$ is the trained analysis function.
\end{itemize}
\end{definition}
Amongst all such statistical processors we are interested in
those using a probabilistic model to rank possible analyses.
\begin{definition}[Probabilistic Analyser]
A {\em probabilistic analyser\/}
is a statistical processor which computes a function
$p:B \times V \rightarrow [0,1]$ such that $\sum_{v \in V} p(b,v) = 1$ for
all $b \in B$ and then computes $A$ as:
\begin{equation}
A(b) = \argmax{v \in V} p(b,v)
\label{eq:dr_framework_pa}
\end{equation}
\end{definition}
The problem of acquiring $A$ is thus transformed into one of estimating the
function $p$ using the training corpus.
The intention is to view $p(b,v)$ as an
estimate of $\Pr(J=v | I=b)$. By trivial construction, every
analysis function corresponds to some function $p$ of a probabilistic
analyser, so for every statistical processor there is an
equivalent probabilistic
analyser. For probabilistic analysers to be well-defined, the operator
$\argmax{}$ must return a unique value. In the reasoning to follow, I will
assume that when more than one element is greater than or equal to all
others, $\argmax{}$ returns any one of these at random.
While the framework so far is sufficiently abstract to capture most \sll\
systems, there is one important limitation. Currently it does not permit the
system to refuse to answer. It requires~100\% coverage of the input
set.\footnote{This entails~100\% recall in information retrieval terms.
However, coverage and recall are generally not the same and in
particular~100\% recall does not entail~100\% coverage. This fact
sometimes escapes attention in computational linguistics.} One could, in
principle, augment the definition to include in $V$ a value representing
refusal to answer and then extend all the results below to predicting accuracy
and coverage. I leave this for future work.
\subsection*{Formal accuracy}
Once trained, a statistical processor can be put to use analysing test cases.
Test cases are drawn from $\Omega$ according to the distributions of the
random variables $I$ and $J$, so we can express the expected accuracy of an
analysis function $A$ in terms of these.
First we define correctness as whether or not the
prediction made by the function matches that observed for the test case.
\begin{definition}[Correctness]
The {\em correctness\/} of prediction made by a statistical processor using an
analysis function $A$ is a random variable $C_A$ defined by the equation:
\begin{equation}
C_A \stackrel{\rm def}{=} \delta(J, A(I))
\end{equation}
where $\delta(x, y)$ is~1 if $x = y$ and~0 otherwise.
\end{definition}
The expected accuracy is then the mathematical expectation of the
correctness.
\begin{definition}[Expected Accuracy]
The {\em expected accuracy\/} of an analysis function $A$ is given by
\begin{eqnarray}
\alpha(A) & \stackrel{\rm def}{=} & \mbox{E} [C_A] \nonumber \\
& = & \sum_{b \in B;v \in V} \Pr(I=b, J=v) \delta(v, A(b))
\nonumber \\
& = & \sum_{b \in B} \Pr(I=b, J=A(b)) \nonumber \\
& = & \sum_{b \in B} \Pr(I=b) \Pr(J=A(b) | I=b)
\label{eq:dr_framework_ea}
\end{eqnarray}
\end{definition}
This is the probability of the analyser being correct on a randomly selected
element of $\Omega$.
For any non-trivial statistical processor the input set used cannot
perfectly represent the entire linguistic event space and so
in general there exist values
$v_{1}, v_{2} \in V$, for which both $\Pr(J=v_{1}, I=b) > 0$ and
$\Pr(J=v_{2}, I=b) > 0$ for some $b \in B$. Suppose without loss of
generality that $A(b) = v_{1}$. The analyser will be in error with
probability at least $\Pr(J=v_{2}, I=b)$. This means, as we have seen
above, that typically a statistical language learner cannot achieve~100\%
accuracy, even with unlimited training data. This is the root of an interesting
problem in \sll\ because in practice, no matter how inaccurate a trained
statistical processor is, the inaccuracy may be due to the imperfect
representation of $\Omega$ by $B$.\footnote{Unless a more accurate
statistical processor based on the same input set already exists.} If this is the
case, acquiring more training data will have no effect.
Theoretically though, it is simple to define the maximum possible
performance for any statistical processor on unlimited data, given a
well-behaved training algorithm. I will call this the optimal accuracy.
\begin{definition}[Optimal Accuracy]
The {\em optimal accuracy\/}
of a statistical processor is dependent only on the bins $B$,
values $V$ and their distributions $I$ and $J$, and is defined by
\begin{equation}
\alphaopt \stackrel{\rm def}{=} \max_{A':B \rightarrow V} \alpha(A')
\label{eq:dr_framework_oa}
\end{equation}
\end{definition}
Given $B$ and $V$, $\alphaopt$ is the greatest possible accuracy rate we
can expect during testing. Any probabilistic analyser that achieves an
accuracy close to this is not going to benefit
significantly from further training data.
\begin{definition}[Optimal Analysis Function]
An {\em optimal analysis function\/} is any analysis function $A_{\opt}$ with
$\alpha(A_{\opt}) = \alphaopt$.
\end{definition}
By inspection, $A$ is an optimal analysis function if and only if the
following constraint holds:
\begin{equation}
(\forall b \in B) \Pr(J=A(b) | I=b) = \max_{v \in V} \Pr(J=v | I=b)
\label{eq:dr_framework_oaf}
\end{equation}
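These definitions are easy to evaluate on a toy joint distribution; the
two-bin, two-value example below is purely illustrative:
\begin{verbatim}
# Toy joint distribution Pr(I=b, J=v) over two bins and two values.
joint = {
    ("b1", "v1"): 0.40, ("b1", "v2"): 0.10,
    ("b2", "v1"): 0.15, ("b2", "v2"): 0.35,
}
bins = ["b1", "b2"]
values = ["v1", "v2"]

def accuracy(A):
    # Expected accuracy: the sum over bins of Pr(I=b, J=A(b)).
    return sum(joint[(b, A[b])] for b in bins)

# An optimal analysis function picks the most probable value in each bin.
A_opt = {b: max(values, key=lambda v: joint[(b, v)]) for b in bins}

print(accuracy({"b1": "v1", "b2": "v1"}))  # 0.55, a suboptimal analyser
print(accuracy(A_opt))                     # 0.75, the optimal accuracy
# No analyser over these bins can exceed 0.75, however it is trained.
\end{verbatim}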
\subsection*{Errors}
The shortfall between the expected accuracy and~100\% accuracy
(the optimal error rate) reflects the imperfection of $B$ and $V$
as representations of the
linguistic universe $\Omega$. However, the framework does not distinguish
whether this error rate arises because of representational limits of $B$ and
$V$ or because the random variables $I$ and $J$ do not accurately follow
the true distribution of linguistic phenomena. Poor modelling and noise
can be treated identically as far as the framework is concerned.
Consider the tagger example once more. Suppose that the words have been
incorrectly transcribed occasionally, so that the input to the tagger contains
words not actually present in the original linguistic utterance. This noise
will cause errors. Also, as we've seen above, extra-sentential context can
make an unlikely reading the preferred one, resulting in an optimally trained
tagger making an error. Both these kinds of errors are treated identically by
the framework and all the results below hold whether the optimal error rate
arises from noise or violated model assumptions. There is no need to extend
the framework to handle noisy conditions.
The framework does distinguish two error components:
\begin{itemize}
\item {\em optimal errors\/} caused by modelling limitations or by noise; and
\item {\em statistical errors\/} caused by insufficient training data.
\end{itemize}
The importance of distinguishing these two components should be obvious.
An illustration of them is provided by the errors that Brent's~(1993) verb
frame acquisition system makes.
\begin{citedquote}{Brent,~1993:256}
Three of the five \dots violate the model's assumptions \dots and
highlight the limitations of the model. The remaining [two] \dots would be
much rarer in more mundane sources of text \dots than in the diverse
Brown Corpus.
\end{citedquote}
Because of the great variety of language found in the Brown Corpus, the
amount of training data required to overcome statistical fluctuations is very
large. It is statistical errors that are the concern of a theory of data
requirements.
In the results below, the goal will be to relate the expected accuracy to the
volume of training data. In doing so, we will be evaluating the expected
accuracy {\em relative} to the optimal accuracy. It is only the error
component due to statistical fluctuation that will be predicted. This raises a
practical issue: How do we predict the optimal accuracy? Unless large
volumes of manually annotated data exist, measuring $\alphaopt$ for a
proposed statistical processor presents a difficult challenge.
Hindle and Rooth~(1993) have used human judges to measure the
context-dependence of prepositional phrase attachment. Judges were given
only the preposition and the preceding verb and noun, just the information
available to their statistical processor. The judges could only perform the
attachment correctly in around~87\% of cases. If we assume that the judges
incorrectly analysed the remaining~13\% of cases because these cases
depended on knowledge of the wider context, then any statistical learning
algorithm based on the same input information cannot do better than~87\%.
Of course, if there is insufficient training data the system may do
considerably worse because of statistical errors.
Unfortunately, this approach to estimating $\alphaopt$ is expensive to apply
and makes a number of questionable psychological assumptions. For
example, it assumes that humans can accurately reproduce parts of their
language analysis behaviour on command. It may also suffer when
representational aspects of the analysis task cannot be explained easily to
experimental subjects. A worthwhile goal for future research is to establish
a statistical method for estimating or bounding $\alphaopt$ using small
amounts of language data.
\subsection*{Training}
There is still one vital component missing from the formal framework and
that is training data. After all, this is the quantity that the theory is
designed to predict. I will assume hereafter that training is supervised, and
that training data and testing data come from the same
distributions. Note that if part of the optimal error rate is due to
noise, this assumption implies that training and
testing data are subject to the same noise.
\begin{assumption}[Training and Test Set Equivalence]
Test and training data are drawn from the same distributions.
\end{assumption}
Unsupervised learning algorithms are usually driven by assumptions that are
not made explicit and therefore represent a substantially more difficult
problem.
\begin{assumption}[Supervised Training]
All training data is annotated with values, allowing supervised learning.
\end{assumption}
Training proceeds on the basis of training instances extracted from a corpus,
each of which contributes to the statistical measures collected. Formally, I
will define a training set as a collection of pairs of bins and values.
\begin{definition}[Training Set]
A {\em training set\/} of $m$ instances
is an element of $(B \times V)^{m}$
where each pair $(b,v)$ is sampled according to the random variables $I$
and $J$ from $\Omega$.
\end{definition}
It follows from this definition that each training instance is independent of
all the others.
While this assumption is made almost universally by \sll\ models,
it is often violated. Work on adjusting models to allow for such
violations is reported in Church and Gale~(1995).
\begin{assumption}[Training Sample Independence]
Training instances are statistically independent of one another.
\end{assumption}
When a pair from a training set $c$ has bin $b$ as its first component, we say
that the training instance has {\em fallen into\/} bin $b$.
The corresponding value $v$ (the second component of the pair) is
also said to have fallen into bin $b$.
Recall that a probabilistic analyser includes a function $p:B \times V
\rightarrow [0,1]$. There are a variety of methods by which an appropriate
function $p$ can be estimated from the training set. Regardless of the
learning algorithm used, each possible training corpus, $c$, results in the
acquisition of some function, $\pfromc$, and consequently in an analysis
function $A_c$. Our aim is to explore the dependence of the expected
accuracy $\alpha(A_c)$ on the magnitude of $m$, the size of the
training set.
If the learning algorithm is well-behaved in the following sense, then the
expected accuracy of the \sll\ system is guaranteed to converge to the
optimal accuracy rate as larger amounts of training data are used. The proof
of this is straight-forward.
\begin{definition}[Well-behaved Learner]
A learning algorithm is defined to be {\em well-behaved\/} when
\begin{equation}
(\forall b \in B) (\forall v \in V)
\lim_{m \rightarrow \infty} p(b,v) = \Pr(J=v | I=b)
\end{equation}
where $p(b,v)$ is the function acquired by the learner after $m$ training
instances of an infinite training set have been examined.
\end{definition}
This completes the framework.
\section{Towards a Theory of Data
Requirements} \label{sec:dr_beginning}
While the framework provides a notational foundation for a theory of data
requirements, it cannot be used by itself to make predictions about data
requirements. The relationship between training data volumes and expected
accuracy depends heavily on both the structural characteristics of the
probabilistic model and the learning algorithm used to estimate the model
parameters. It would be a mammoth task to construct mathematical
representations of all the probabilistic models and training methods
commonly used in statistical language learning.
Instead, the remainder of this chapter focuses on a very simple case, in fact,
the simplest well-behaved one. To instantiate the framework we need to
provide both a probabilistic model and a training algorithm to estimate the
parameters of that model, thus yielding an appropriate function $\pfromc$
for any given training set $c$.
\subsection*{Choosing a model and training algorithm}
The probabilistic model I will examine is the trivial one, the complete joint
distribution over $B \times V$. This model has a set of parameters
$\hat{p}(b, v)$, one for each pair composed of a bin and a value,
representing an estimate of the probability $\Pr(J=v | I=b)$. As long as the
learning algorithm ensures that $\sum_{v \in V} \hat{p}(b,v) = 1$ for all $b
\in B$, we can let $p(b,v) = \hat{p}(b,v)$ and we have a probabilistic
analyser. Since this constraint removes one free parameter for each bin, the
size of the model is given by $t = |B| (|V|-1)$.
The training algorithm is the simplest well-behaved one given this model:
the maximum likelihood estimate assuming a multinomial distribution. Let
$\countfn_c(b, v)$ be the number of pairs in the training sequence $c$
whose first component is bin $b$ and second component is value $v$. Also,
let $\countfn_c(b) = \sum_{v' \in V} \countfn_c(b, v')$,
the number of training
instances falling into bin $b$. The training algorithm estimates the
parameters of the model using
\begin{equation}
\hat{p}(b,v) = \left\{
\begin{array}{cl}
\frac{1}{|V|}
& \mbox{if $(\forall v' \in V) \countfn_c(b, v') = 0$} \\
\frac{\countfn_c(b, v)}{\countfn_c(b)}
& \mbox{otherwise}
\end{array}
\right.
\label{eq:dr_beginning_mle}
\end{equation}
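A direct transcription of this learner might look as follows (a sketch
only; the uniform fall-back implements the first case of
equation~\ref{eq:dr_beginning_mle} and random tie-breaking implements the
interpretation of $\argmax{}$ given earlier):
\begin{verbatim}
import random
from collections import Counter

def train(corpus, values):
    # Maximum likelihood estimates from (bin, value) training pairs.
    counts = Counter(corpus)                    # count_c(b, v)
    totals = Counter(b for b, _ in corpus)      # count_c(b)
    def p(b, v):
        if totals[b] == 0:                      # empty bin: uniform
            return 1.0 / len(values)
        return counts[(b, v)] / totals[b]
    return p

def analyse(p, b, values):
    # Choose argmax over v of p(b, v), breaking ties at random.
    best = max(p(b, v) for v in values)
    return random.choice([v for v in values if p(b, v) == best])

corpus = [("b1", "v1"), ("b1", "v1"), ("b1", "v2")]
p = train(corpus, ["v1", "v2"])
print(analyse(p, "b1", ["v1", "v2"]))   # "v1", the observed mode
print(analyse(p, "b2", ["v1", "v2"]))   # random: "b2" is an empty bin
\end{verbatim}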
The fact that this model and training algorithm are well-behaved follows
from fundamental results in statistics, but no proof will be given here. Since
the functions $p$ and $\hat{p}$ are the same I will use only $p$ hereafter.
Maximum likelihood estimates are commonly used in \sll\ systems and other
estimation methods are sufficiently similar in nature that reasoning about
\acronym{mle} applies at least qualitatively to other methods
too.\footnote{To be well-behaved other methods must at least share the
convergence properties of \acronym{mle}.} The limitation to complete joint
distribution models is more restrictive since it results in two common \sll\
strategies falling outside the domain of reasoning.
First, by assuming the model is the complete joint distribution, we disallow
the use of linguistic constraints to assist the analysis. In practice, it is
common to clamp certain parameters to zero, thus avoiding the need to
estimate these and lowering data requirements. Weischedel~\etal~(1993)
and Merialdo~(1994) both assume that only those tags which have been
observed for a word are possible. This reduces the number of free
parameters by an order of magnitude. Yarowsky~(1992) only permits his
noun sense disambiguator to assign a thesaurus category if the noun appears
in that category. These strategies are not available in the case I will
investigate, although they are certainly possible within the framework given
above.
Second, by assuming a complete joint distribution, we prohibit the
probabilistic model from combining simpler events to form more complex
ones. This is crucial to many \sll\ systems, including Markov model taggers,
where tag $n$-grams are combined to form sentence length sequences. The
analysis chooses the sentence-length sequence which has maximum
probability, rather than choosing the tag for each word individually. In
practice it is usually impractical to estimate the complete joint distribution
between inputs and outputs. Instead, assumptions are used to cut down the
space, creating an intensional representation of the joint probability space.
This therefore represents the most important avenue for
extending the present work.
These two qualifications mean that the simple case I will consider is not
directly applicable to other probabilistic models and learning algorithms.
However, I do believe the results provide general insights into the
phenomenon of data sparseness as it appears in \sll, and that their
significance extends beyond the simple case on which they are based. They
raise important issues for those engaged in engineering these kinds of
systems and support order-of-magnitude predictions for planning purposes.
Putting these caveats aside then, we can begin investigation of the simple
case.
\subsection*{Optimal error rates}
Consider first the optimal accuracy. From
equation~\ref{eq:dr_framework_oaf}, an optimal analysis function $A_{\opt}$
must have $\Pr(J=A_{\opt}(b) | I=b) = \max_{v \in V} \Pr(J=v | I=b)$ for every
bin $b$. An optimal analysis function selects some optimal value
for each bin. Let $\vopt(b) = A_{\opt}(b)$ for some optimal analysis function
$A_{\opt}$ and let $q(b) = \max_{v \in V} \Pr(J=v | I=b)$, the probability
of the optimal value. Thus $\Pr(J=\vopt(b) | I=b) = q(b)$.
Using equation~\ref{eq:dr_framework_ea} the optimal error rate $r_{\opt}$
can be expressed in terms of $q(b)$.
\begin{eqnarray}
r_{\opt} & \stackrel{\rm def}{=} & 1-\alphaopt \nonumber \\
& = & 1-\alpha(A_{\opt}) \nonumber \\
& = & 1-\sum_{b \in B} \Pr(I=b) \Pr(J=\vopt(b) | I=b) \nonumber \\
& = & 1-\sum_{b \in B} \Pr(I=b) q(b) \nonumber \\
& = & \sum_{b \in B} \Pr(I=b) (1-q(b))
\label{eq:dr_beginning_ropt}
\end{eqnarray}
The contribution to the optimal error rate from each bin $b$ is $\Pr(I=b) (1-
q(b))$.
Consider now the analysis procedure used by the probabilistic analyser.
Given a test case falling into bin $b$ it uses
the estimates $p(b,v)$ to select a value. It follows from
equations~\ref{eq:dr_framework_pa} and~\ref{eq:dr_beginning_mle} that
the analyser will choose that value which has occurred with the given bin
most often in the training corpus. That is, there is a distribution of values
which have been observed to fall into each bin and the analyser chooses the
mode of that distribution. For this reason, I will call the
combination of the complete joint distribution model and the maximum
likelihood training algorithm a
\newterm{mode-based learner}. According to the definition of a
probabilistic analyser and the interpretation of $\argmax{}$ given there,
when the distribution of values falling into a bin
is multi-modal, the learner chooses any one of the modes at random.
It is useful to distinguish two cases, corresponding to the two
alternatives in equation~\ref{eq:dr_beginning_mle}. When the analyser is
presented with a test case in bin $b$,
either there is at least one value $v$ for
which the training set contains some occurrences of $(b, v)$, or there are no
training instances falling into bin $b$. The latter situation I will call an
\newterm{empty bin} (that is, $\countfn_c(b) = 0$). The former is then a
\newterm{non-empty bin}.
In the remainder of this section I will consider each of these situations in
turn. The mathematical reasoning from this point down to
equation~\ref{eq:dr_beginning_nonemptybound} was developed by Mark
Johnson of Brown University for odd values of $n$.
My contribution was to complete the proof of
equation~\ref{eq:dr_beginning_emptybound} and extend the reasoning
to even values of $n$. I would like to thank him again for
his kind permission to publish these results.
\subsection*{Empty bins}
\label{pg:dr_beginning_MJstart}
First, how do empty bins affect the expected accuracy? Let $p_{b}$ denote
$\Pr(I=b)$. Since the training instances are drawn independently of one
another according to the same distribution as test instances, the probability
of bin $b$ being empty after training on a corpus of $m$ training instances
is $(1-p_{b})^{m}$. Thus the probability, over all test inputs,
of there being no training data in the bin for that input is given by
\begin{equation}
\Pr(\mbox{\it empty}) = \sum_{b \in B} p_{b}(1-p_{b})^{m}
\end{equation}
Put another way, this is the probability of encountering an empty bin during
testing.
Clearly this will vary with the distribution $I$ over the bins. But since
$\sum_{b \in B} p_{b} = 1$, it is possible to show using partial derivatives
and boundary conditions that the maximum for $\Pr(\mbox{\it empty})$
occurs when $(\forall b \in B)$ $p_{b} = \frac{1}{|B|}$.
Therefore
\begin{equation}
\Pr(\mbox{\it empty}) \leq (1-\frac{1}{|B|})^{m} \leq e^{-m/{|B|}}
\label{eq:dr_beginning_emptybound}
\end{equation}
So for values of $m/{|B|}$ greater than~1, the probability that any
given test sample falls into a bin for which we received no training samples
is quite low. For example, when $m/{|B|}\geq 3$, empty bins are encountered
in fewer than~5\% of test inputs.
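A quick numerical check of this bound for a uniform bin distribution (the
worst case identified above; the number of bins is an arbitrary assumption):
\begin{verbatim}
import math

B = 1000                       # assumed number of bins
for ratio in (1, 2, 3, 5, 10):
    m = ratio * B              # number of training instances
    exact = (1 - 1 / B) ** m   # Pr(empty) for uniform bins
    bound = math.exp(-m / B)   # the e^{-m/|B|} bound
    print(ratio, round(exact, 4), round(bound, 4))
# At m/|B| = 3 this gives roughly 0.0497 <= 0.0498: under 5% of inputs.
\end{verbatim}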
On test cases which do fall into an empty bin,
equation~\ref{eq:dr_beginning_mle} and the definition of $\argmax{}$
dictate that any value in $V$ is selected at random. So the expected
accuracy on these test cases is $\frac{1}{|V|}$.
But even if we assume that all these test cases are analysed incorrectly, the
contribution of empty bins to the error rate is never larger
than $e^{-m/{|B|}}$. Recall from section~\ref{sec:dr_need} that
the rule of thumb suggested that $m/t$ should be~10, where $t$
is the size of the model.
If the shortfall between
expected accuracy and optimal accuracy is primarily due to unseen events,
then this rule is very generous for small $V$,
predicting larger data requirements than necessary. This point is
not the same as that made by Weischedel~\etal~(1993). Their argument is
that non-uniform bin distributions reduce the data requirements --- we will
return to that issue in section~\ref{sec:dr_simulations}. The result given
here holds even for uniform bin distributions.
\subsection*{Non-empty bins}
How then do non-empty bins affect the expected accuracy? Such bins
contain at least one training instance from the training set. Let the number
of these in a particular bin $b$ be denoted by $n = \countfn_c(b)$.
Since $r_{\opt}$ is the best possible error rate, it follows
from equation~\ref{eq:dr_beginning_ropt} that $q(b)$ must be close to~1
for most bins if the system is to work well. Since this is the
probability that a test instance falling into bin $b$ has value $\vopt(b)$, we
can expect this value to be one of the more frequent in the bin. If more than
half of the training instances falling into bin $b$ have the value $\vopt(b)$,
then this must be the most frequent value in the bin; that is, $\vopt(b)$ must
be the mode of the observed distribution of values in bin $b$.
Thus, if $\countfn_c(b, \vopt(b)) > \frac{n}{2}$, then $A_c(b)= \vopt(b) =
A_{\opt}(b)$ and the trained analysis function has optimal accuracy on test
cases falling into bin $b$. So by computing the probability of
$\countfn_c(b, \vopt(b)) > \frac{n}{2}$, we can obtain a lower bound for
the accuracy on bin $b$ relative to the optimal accuracy.
The probability that the trained analysis function
is optimal in bin $b$ is bounded as follows.
\begin{eqnarray}
\Pr(A_c(b) = \vopt(b) | I=b)
& \geq & \sum_{i= \lceil {\frac{n+1}{2}} \rceil}^{n}
\Pr(\countfn_c(b, \vopt(b)) = i) \nonumber \\
& = & \sum_{i= \lceil {\frac{n+1}{2}} \rceil}^{n}
{n \choose i} (1-q(b))^{n-i} q(b)^{i}
\label{eq:dr_beginning_trained}
\end{eqnarray}
When $n$ is even and $\countfn_c(b, \vopt(b)) = \frac{n}{2}$, only one
other value $v'$ can possibly have $\countfn_c(b, v') = \frac{n}{2}$. Thus,
the probabilistic analyser will set $A_c(b) = \vopt(b)$ with probability at
least one half. This fact will be used to make the bound tighter below.
Equation~\ref{eq:dr_beginning_trained} bounds the probability
that training in a given bin is optimal for that bin. Consider
now a test instance falling into this bin. Since training and testing
are independent we have the following equality.
\begin{equation}
(\forall v \in V)
(\Pr(A_c(b)=v, J=v | I=b) = \Pr(A_c(b)=v | I=b) \Pr(J=v | I=b))
\end{equation}
We can then proceed as follows to bound the probability of the
trained analysis function matching the test case.
\begin{eqnarray*}
\Pr(J=A_c(b) | I=b) & = & \sum_{v \in V} \Pr(A_c(b)=v, J=v | I=b) \\
& \geq & \Pr(A_c(b) = \vopt(b) | I=b) \Pr(J=\vopt(b) | I=b)
\end{eqnarray*}
So if all bins contain at least one training instance the overall expected
accuracy can be bounded as follows using equation~\ref{eq:dr_framework_ea}.
\begin{eqnarray}
\alpha(A_c) & = & \sum_{b \in B} \Pr(I=b) \Pr(J=A_c(b) | I=b) \nonumber \\
& \geq & \sum_{b \in B} \Pr(I=b) \Pr(A_c(b) = \vopt(b) | I=b) q(b)
\nonumber \\
& \geq & \sum_{b \in B} \Pr(I=b) q(b)
\sum_{i= \lceil {\frac{n+1}{2}} \rceil}^{n}
{n \choose i} (1-q(b))^{n-i} q(b)^{i}
\label{eq:dr_beginning_mode}
\end{eqnarray}
Therefore the contribution of a bin with at least one training instance to the
expected accuracy is bounded below by the expression contained
in the outer sum.
Switching from accuracy rates to expected error rates,
define a bounding function as follows, where $\even{n}$ is~1 when $n$
is even and~0 otherwise.
\begin{equation}
U_n(b) \stackrel{\rm def}{=} 1-q(b) \left( \begin{array}{c}
\frac{1}{2} \even{n} {n \choose {\halfn}} (1-q(b))^{\halfn} q(b)^{\halfn}
+
\sum_{i= \lceil {\frac{n+1}{2}} \rceil}^{n}
{n \choose i} (1-q(b))^{n-i} q(b)^{i}
\end{array} \right)
\end{equation}
Examination of equation~\ref{eq:dr_beginning_mode} along with the
argument above for the case of even $n$ shows that
this is an upper bound on the contribution to the error rate from bins with
$n$ training instances. So the bound on the overall expected accuracy rate
when all bins contain at least one training instance can be made slightly
tighter and re-expressed as follows. Note that $n$ varies with $b$.
\begin{eqnarray}
\alpha(A_c) & \geq & \sum_{b \in B} \Pr(I=b) (1-U_n(b)) \nonumber \\
& = & 1- \sum_{b \in B} \Pr(I=b) U_n(b)
\label{eq:dr_beginning_nonemptybound}
\end{eqnarray}
Comparing this to the optimal accuracy taken from
equation~\ref{eq:dr_beginning_ropt}
\begin{equation}
\alpha(A_{\opt}) = 1- r_{\opt} = 1 - \sum_{b \in B} \Pr(I=b) (1-q(b))
\end{equation}
we see that the error rate contributed by a bin is never more than
$\frac{U_n(b)}{1-q(b)}$ times the optimal error rate for that bin. This
provides quite tight bounds. For example, when $q(b) \geq 0.9$,
$\frac{U_3(b)}{1-q(b)} \leq 1.26$ and $\frac{U_5(b)}{1-q(b)} \leq 1.08$.
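These ratios are easily reproduced. The following Python sketch (again
purely illustrative, not part of the experimental software) computes
$U_n(b)$ directly from its definition and prints the ratios for
$n = 1, 3, 5$ at $q(b) = 0.9$.
\begin{verbatim}
from math import comb

def U(n, q):
    # Upper bound on the error contribution of a bin holding n
    # training instances, where q is Pr(J = vopt(b) | I = b).
    acc = sum(comb(n, i) * (1 - q) ** (n - i) * q ** i
              for i in range((n + 2) // 2, n + 1))
    if n % 2 == 0:   # tie-breaking term for even n
        acc += 0.5 * comb(n, n // 2) * (q * (1 - q)) ** (n // 2)
    return 1 - q * acc

q = 0.9
for n in (1, 3, 5):
    print(n, U(n, q) / (1 - q))   # prints 1.9, ~1.252 and ~1.077
\end{verbatim}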
\label{pg:dr_beginning_MJfinish}
Unfortunately these bounds require knowing how many training instances
fall into a bin. However,
if all bins contain at least one training instance,
then we can loosely bound the overall expected accuracy as follows.
The first step is a corollary of the
half-binomial result given in appendix~\ref{appendix:halfbinomial}.
\begin{eqnarray}
U_{n}(b) & \leq & U_{1}(b) \nonumber \\
& = & 1 - q(b)q(b) \nonumber \\
& = & (1+ q(b))(1 - q(b)) \nonumber \\
& \leq & 2 (1-q(b))
\label{eq:dr_beginning_twiceoptimal}
\end{eqnarray}
Thus in all bins which have training instances in the corpus, the expected
error rate for the bin never exceeds twice the optimal error rate for that bin
and this yields the desired overall bound.
\begin{equation}
\alpha(A_c) \geq 1-\sum_{b \in B} \Pr(I=b) 2(1-q(b)) = 1 - 2r_{\opt}
\end{equation}
This result is quite useful since we expect the optimal error rate to be quite
low. It is directly applicable if training data is collected in a way that
ensures at least one training instance per bin. If the optimal predictions
are~90\% accurate, then a mode-based learner will be at least~80\% accurate
after learning on just one instance per bin.
To summarise, the two main results so far are as follows.
\begin{itemize}
\item Empty bins occur fairly rarely --- less than~5\% of test instances will
fall into an empty bin when there is an average of~3 training instances per
bin.
\item Non-empty bins exhibit quickly converging error rates ---
their expected error rate is always less than double the optimal error rate
and with~5 training instances it is only~8\% over the optimal error rate.
\end{itemize}
So in general it appears that~3--5 instances per bin will be sufficient.
\section{Predicting the Global Expected Accuracy}
\label{sec:dr_global}
Unfortunately, we cannot normally guarantee that no bins will be empty,
since the corpus is typically a random sample. In order to combine
equations~\ref{eq:dr_beginning_emptybound}
and~\ref{eq:dr_beginning_twiceoptimal} to arrive at a bound for the overall
expected accuracy after training on a random sample, we need to make
further assumptions. Two possibilities are: assume $\Pr(I=b)$ is constant
across all bins or assume that $\Pr(J=\vopt(b) | I=b)$ is constant. Since the
former is a very poor assumption for reasons already given, I will opt for the
latter.
\begin{assumption}[Uniform Optimal Probability]
The probability of the most likely value in each bin is constant:
$\Pr(J=\vopt(b) | I=b) = p$.
\end{assumption}
Note that this does not require that the most likely value be the same value in
each bin. $\vopt(b)$ can vary with $b$ as long as its probability remains
constant. This assumption is not too drastic since if the analyser is to have
reasonable optimal performance, most $\Pr(J=\vopt(b) | I=b)$ must be close
to~1.
Now equation~\ref{eq:dr_beginning_ropt} can be simplified, since $q(b) =
\Pr(J=\vopt(b) | I=b) = p$, to obtain $r_{\opt} = 1-p$, where the
contribution to the optimal error rate from each non-empty bin $b$ is $(1-p)
\Pr(I=b)$. So equation~\ref{eq:dr_beginning_twiceoptimal} tells us that the
contribution to the expected accuracy rate from these bins is at least
$(1-2(1-p)) \Pr(I=b)$. It is also easy to show that this contribution must
always be
greater than $\frac{1}{|V|} \Pr(I=b)$. Since empty bins result in the
analyser making a random choice, the contribution to the expected accuracy
rate from an empty bin $b$ is $\frac{1}{|V|} \Pr(I=b)$. Using
equation~\ref{eq:dr_beginning_emptybound} we can combine these as
follows.
\begin{eqnarray}
\alpha(A_c) & = & \sum_{b \in B} \Pr(I=b) \Pr(J=A_c(b) | I=b) \nonumber \\
& \ge & (1-e^{-m/{|B|}}) (1-2(1-p))
+ \frac{1}{|V|}e^{-m/{|B|}} \nonumber \\
& = & (1-e^{-m/{|B|}}) (2p-1)
+ \frac{1}{|V|}e^{-m/{|B|}}
\label{eq:dr_global_oldbound}
\end{eqnarray}
This bound is relatively weak but at least the assumptions it relies on are
also quite weak. An example plot of this bound will appear in
figure~\ref{fig:dr_simulations_results} under the name \scare{Old Bound}.
\subsection*{An exact expression for the expected accuracy}
If we make some different assumptions, an exact expression for the global
expected accuracy is possible. First let's assume that $V$ is binary.
Without loss of generality, let $V = \{v_0, v_1\}$ with
$(\forall b \in B) \vopt(b) = v_0$.
Let $B = \{b_1, b_2, \ldots \}$ and define a set of random variables
over $V$, $V_i = (J|I=b_i)$.
That is, $(\forall v \in V) \Pr(V_i=v) = \Pr(J=v | I=b_i)$.
We can revoke the uniform optimal probability assumption by
introducing a set of probabilities $v_i = \Pr(J=v_0 | I=b_i)$.
It is useful to define the following quantity, about which some results
appear in appendix~\ref{appendix:halfbinomial}.
\begin{definition}[Half-binomial]
The {\em half-binomial\/} is given by the following expression
\begin{equation}
\halfbin(p, n) \stackrel{\rm def}{=}
\frac{1}{2} \even{n} {n \choose \halfn} p^{\halfn} (1-p)^{\halfn}
+
\sum_{i=\halfnincup}^{n} {n \choose i} p^i (1-p)^{n-i}
\end{equation}
which is that part of the binomial probability distribution on the upper side
of the midpoint.
\end{definition}
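Since the half-binomial recurs throughout the remainder of this section, a
direct transcription into Python may be useful; this is an illustrative
sketch, not the implementation mentioned later.
\begin{verbatim}
from math import comb

def halfbin(p, n):
    # The part of the binomial distribution on the upper side of
    # the midpoint, with the midpoint term halved when n is even.
    total = sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
                for i in range((n + 2) // 2, n + 1))
    if n % 2 == 0:
        total += 0.5 * comb(n, n // 2) * (p * (1 - p)) ** (n // 2)
    return total

assert abs(halfbin(0.3, 4) + halfbin(0.7, 4) - 1.0) < 1e-9
\end{verbatim}
The final assertion checks the identity $\halfbin(1-p,n) = 1-\halfbin(p,n)$,
which is used below.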
Define a further set of random variables $\yaftern_i$ on $V$, describing the
distribution of the mode observed in bin $b_i$ after $n$ training instances
have fallen into that bin. This is expressed by the following equation
where $c_i(n)$ is a training set containing $n$ instances falling into bin
$b_i$.
\begin{equation}
\Pr(\yaftern_i=v_0) \stackrel{\rm def}{=} \Pr(A_{c_i(n)}(b_i)=v_0)
\end{equation}
Application of the binomial distribution leads to
\begin{equation}
\Pr(\yaftern_i=v_0) = \halfbin(v_i,n)
\end{equation}
Let $g_i(n)$ be the accuracy on test cases in bin $b_i$ after $n$ training
instances have fallen into that bin.
Using the definition of correctness, we get
\begin{eqnarray}
g_i(n) & = & \Pr(\yaftern_i = V_i) \nonumber \\
& = & v_i \halfbin(v_i, n) + (1-v_i) \halfbin(1-v_i,n) \nonumber \\
& = & 1-v_i+(2v_i-1) \halfbin(v_i,n)
\end{eqnarray}
The last step uses the fact that $\halfbin(1-p,n) = 1-\halfbin(p,n)$,
which can easily be verified by expansion.
The training set is allocated to the bins according to the random
variable $I$ and so the number of training instances having fallen into bin
$b_i$ is distributed binomially.
That is, when $c$ contains $m$ training instances,
$\Pr(\countfn_c(b_i) = n) = {m \choose n} {p_i}^n (1-p_i)^{m-n}$
where $p_i = \Pr(I=b_i)$. So the expected accuracy for test cases in
bin $b_i$ can be written as
\begin{eqnarray}
\mbox{E}_n [ g_i(n) ] & = & \sum_{n=0}^{m}
\Pr(\countfn_c(b_i) = n) g_i(n) \nonumber \\
& = & \sum_{n=0}^{m}
{m \choose n} {p_i}^n (1-p_i)^{m-n}
(1-v_i+(2v_i-1) \halfbin(v_i,n)) \nonumber \\
& = & 1-v_i+(2v_i-1) \sum_{n=0}^{m}
{m \choose n} {p_i}^n (1-p_i)^{m-n} \halfbin(v_i,n)
\nonumber \\
& = & 1-v_i+(2v_i-1) G(m, p_i, v_i)
\end{eqnarray}
where
\begin{displaymath}
G(m,p_i,v_i) \stackrel{\rm def}{=} \sum_{n=0}^{m}
{m \choose n} {p_i}^n (1-p_i)^{m-n} \halfbin(v_i,n)
\end{displaymath}
Thus the global expected accuracy can be reformulated as
\begin{eqnarray}
\alpha(A_c) & = & \mbox{E}_i [ \mbox{E}_n [ g_i(n) ] ] \nonumber \\
& = & \sum_{i=1}^{|B|} p_i \mbox{E}_n [ g_i(n) ] \nonumber \\
& = & 1 - \sum_{i=1}^{|B|} p_i v_i
+ \sum_{i=1}^{|B|} (2v_i-1) p_i G(m, p_i, v_i)
\end{eqnarray}
This is an exact expression for the expected accuracy that does not
depend on assuming uniform bins nor uniform optimal probabilities.
However, it is dependent on complete knowledge of the bin and
value distributions.
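For concreteness, the expression can be transcribed directly into Python.
The sketch below assumes the \texttt{halfbin} function given earlier is in
scope; it is practical only for modest $m$, which is precisely the expense
addressed later in this section.
\begin{verbatim}
from math import comb

def G(m, p_i, v_i):
    # Direct transcription of G as defined above; halfbin is the
    # sketch accompanying its definition. Practical for small m only.
    return sum(comb(m, n) * p_i ** n * (1 - p_i) ** (m - n)
               * halfbin(v_i, n) for n in range(m + 1))

def expected_accuracy(m, bins):
    # bins is a list of (p_i, v_i) pairs with the p_i summing to 1.
    return (1 - sum(p * v for p, v in bins)
            + sum((2 * v - 1) * p * G(m, p, v) for p, v in bins))
\end{verbatim}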
We can simplify this further by reinstating the uniform optimal
probability assumption, that is $(\forall i) v_i = p$.
The expected accuracy then reduces to
\begin{equation}
\alpha(A_c) = 1 - p + (2p-1) \sum_{i=1}^{|B|} p_i G(m, p_i, p)
\end{equation}
We still need to know both $p$ and the $p_i$.
Also, the function $G$ is expensive to compute. The
remainder of this section will show how to arrive at a cheaply computable
expression for the expected accuracy when the $p_i$ are assumed uniform
and $p$ is known. In the following section I will report some simulations to
explore the effect of the uniform bins assumption.
If the bin distribution is uniform ($(\forall i) p_i = \frac{1}{|B|}$)
then we can simplify further as follows.
\begin{equation}
\alpha(A_c) = 1 - p + (2p-1) G(m, \frac{1}{|B|}, p)
\label{eq:dr_global_newbound}
\end{equation}
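In terms of the sketches given earlier, this expression is a one-line
Python function (illustrative only; as noted below, direct evaluation of
$G$ is only feasible for modest $m$).
\begin{verbatim}
def new_bound(m, B, p):
    # Expected accuracy under uniform bins and uniform optimal
    # probability; G is the sketch given with its definition above.
    return 1 - p + (2 * p - 1) * G(m, 1.0 / B, p)
\end{verbatim}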
The main computational difficulty with the
function $G$ is the appearance of ${m \choose n}$. Most corpus-based
language learners use large corpora, so we expect the number of training
instances, $m$, to be very large. So we need a more easily computable
version of $G$. The following argument leads to a fairly tight lower bound
to $G$ for suitably chosen values of $k_j$ (see below). For simplicity I
will ignore the extra term that appears in
$\halfbin(p,n)$ for even $n$. It is straightforward to prove that this term
carries through in the expected way.
\begin{eqnarray*}
G(m,r,p) & = & \sum_{n=0}^m \mbox{binomial}(n;m,r)
    \sum_{i=\halfnincup}^{n} \mbox{binomial}(i;n,p) \\
& = & \sum_{n=0}^m \sum_{i=\halfnincup}^{n}
    \mbox{binomial}(n;m,r) \mbox{binomial}(i;n,p) \\
& = & \sum_{j=0}^{\lceil \frac{m}{2} \rceil} \sum_{n=2j+1}^m
    \mbox{binomial}(n;m,r) \mbox{binomial}(n-j;n,p) \\
& = & \sum_{j=0}^{\lceil \frac{m}{2} \rceil} \sum_{n=2j+1}^m
    {m \choose n} r^n (1-r)^{m-n} {n \choose j} p^{n-j} (1-p)^j \\
& = & \sum_{j=0}^{\lceil \frac{m}{2} \rceil} (1-r)^m
    \left( \frac{1-p}{p} \right)^j
    \sum_{n=2j+1}^m \frac{m!}{(m-n)!} (1-r)^{-n}
    \frac{p^n}{n!} {n \choose j} r^n \\
& \ge & \sum_{j=0}^{\lceil \frac{m}{2} \rceil} (1-r)^m
    \left( \frac{1-p}{p} \right)^j
    \sum_{n=2j+1}^{k_j} \frac{m!}{(m-n)!} (1-r)^{-n}
    \frac{p^n}{n!} {n \choose j} r^n
\end{eqnarray*}
The second step rearranges the order of the two sums.
The final step introduces a
series of variables which limit the number of terms in the inner sum. The
inequality holds for all $k_j \le m$. Notice that the $k_j$ may vary for each
term of the outer sum. Since $n \le k_j \le m$ we can use the following
relation:
\begin{equation}
\frac{m!}{(m-n)!} \ge (m-k_j)^n
\label{eqn_factorial_approx}
\end{equation}
Letting $x_j \stackrel{\rm def}{=} r p \frac{(m-k_j)}{(1-r)}$
we can simplify as follows:
\begin{eqnarray*}
G(m,r,p) & \ge & \sum_{j=0}^{(m-1)/2} (1-r)^m
    \left( \frac{1-p}{p} \right)^j
    \sum_{n=2j+1}^{k_j} {n \choose j} \frac{m!}{(m-n)!}
    \frac{(1-r)^{-n} r^n p^n}{n!} \\
& \ge & \sum_{j=0}^{(m-1)/2} (1-r)^m
    \left( \frac{1-p}{p} \right)^j
    \sum_{n=2j+1}^{k_j} {n \choose j} (m-k_j)^n
    \frac{(1-r)^{-n} r^n p^n}{n!} \\
& = & \sum_{j=0}^{(m-1)/2} (1-r)^m
    \left( \frac{1-p}{p} \right)^j
    \sum_{n=2j+1}^{k_j} {n \choose j} \frac{x_j^n}{n!} \\
& \ge & \sum_{j=0}^{g} (1-r)^m \left( \frac{1-p}{p} \right)^j
    \sum_{n=2j+1}^{k_j} {n \choose j} \frac{x_j^n}{n!}
\end{eqnarray*}
The last step introduces $g$ and holds for all $g \le (m-1)/2$, since every
discarded term of the outer sum is non-negative; the truncation is worthwhile
because in practice only the first few terms of the outer sum are significant.
Thus for suitably chosen $g, k_j$ this is a cheaply computable lower bound
for $G$. A program to compute this to a high degree of accuracy has been
implemented.
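That program is not reproduced here, but the bound lends itself to a short
implementation. The following Python sketch is one possible version; the
choice of cutoffs (a single constant $k_j = \min(k, m)$ for all $j$) is my
own simplification, since the derivation leaves the $k_j$ open.
\begin{verbatim}
from math import comb, factorial

def G_lower(m, r, p, g=8, k=100):
    # Lower bound on G(m, r, p) using g + 1 terms of the outer sum
    # and a common inner cutoff k_j = min(k, m); any k_j <= m is
    # valid. k is kept below ~170 so that x ** n / factorial(n)
    # stays within floating point range.
    kj = min(k, m)
    x = r * p * (m - kj) / (1 - r)
    total = 0.0
    for j in range(g + 1):
        inner = sum(comb(n, j) * x ** n / factorial(n)
                    for n in range(2 * j + 1, kj + 1))
        total += ((1 - p) / p) ** j * inner
    return (1 - r) ** m * total
\end{verbatim}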
We now have a computable expression that closely approximates the
expected accuracy of a mode-based learner assuming a uniform bin
distribution and uniform optimal probabilities.
An example plot of this expression appears in
figure~\ref{fig:dr_simulations_results} under the name \scare{New Bound}.
\section{Simulations of a Mode-based Learner} \label{sec:dr_simulations}
The assumption of uniform bin probabilities significantly simplifies the
analysis, but in most cases is drastically violated by the data. This is
especially worrying because non-uniform bin distributions can have a strong
effect on the training process. The expected number of relevant training
instances when bins are logarithmically distributed is many times greater
than that when bins are uniformly distributed. When bins are uniformly
probable, the expected number of training instances in the same bin as a
random test instance is $\frac{m}{|B|}$. But Zipf's law states that word
types are distributed logarithmically (the $n$th most frequent word has
probability proportional to $\frac{1}{n}$). When this is true the expected
number of training instances in the same bin as a random test instance is
approximately $\frac{1.6 m}{\log{(0.56 |B|)}^2}$ ($ \gg \frac{m}{|B|}$).
Thus we can expect much more information to be available about {\em
typical} test cases.
This idea is discussed by Weischedel~\etal~(1993). They argue that this
phenomenon is responsible for the near optimal accuracy of their tagger with
only~64 thousand words of training data, rather than the~1 million they
predict by parameter counting. This is a highly plausible explanation and so
in this section a series of simulations are reported to explore the effects of
non-uniform distributions. The simulations also serve to validate the results
above for uniform distributions.
The simulations use a fixed set of~10 thousand bins, allocating $m$
training instances to the bins randomly according to either a uniform or
logarithmic distribution. Each training instance is randomly assigned one of
two values, with the optimal value having probability $p = 0.9$.
Simulations with other values of $p$ did not differ qualitatively. The
optimal accuracy rate is therefore~90\%. For each value of $m$,
the correctness of the mode-based
learner on~1000 randomly generated test instances is computed to arrive at
an observed correctness rate.
This process (training and testing) is repeated~30 times for each run, with
the mean being recorded as the observed accuracy. The standard deviation
is used to estimate a~95\% $t$-score confidence interval ($t$=2.045).
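For reference, the simulation procedure can be reconstructed in a few lines
of Python. The sketch below follows the description just given rather than
the original code; the logarithmic case uses Zipfian weights
$\frac{1}{i}$.
\begin{verbatim}
import random
from collections import Counter

def simulate(m, bins=10000, p=0.9, tests=1000, zipf=False):
    # One training-and-testing run of the mode-based learner.
    weights = [1.0 / (i + 1) if zipf else 1.0 for i in range(bins)]
    counts = [Counter() for _ in range(bins)]
    for b in random.choices(range(bins), weights=weights, k=m):
        counts[b][0 if random.random() < p else 1] += 1
    correct = 0
    for b in random.choices(range(bins), weights=weights, k=tests):
        if counts[b]:
            top = max(counts[b].values())
            guess = random.choice([v for v, c in counts[b].items()
                                   if c == top])
        else:                       # empty bin: choose at random
            guess = random.randrange(2)
        correct += guess == (0 if random.random() < p else 1)
    return correct / tests
\end{verbatim}
Averaging \texttt{simulate} over 30 runs per value of $m$ reproduces, up to
sampling noise, the simulation traces in
figure~\ref{fig:dr_simulations_results}.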
\begin{figure}
\centering \input{figures/skewfig.tex}
\caption{Simulation results and theoretical predictions for 10,000 bins}
\label{fig:dr_simulations_results}
\end{figure}
Figure~\ref{fig:dr_simulations_results} shows five traces of accuracy as the
volume of training data is varied. The lowest curve shows the bound given
by equation~\ref{eq:dr_global_oldbound}, the bound formed by combining
the empty and non-empty bin results. The other dotted curve shows the
expected accuracy predicted using the exact expression for the accuracy
assuming uniform bins and uniform optimal probability given by
equation~\ref{eq:dr_global_newbound}, as approximated by the program
described in the previous section. The two further curves (with
95\% confidence interval bars) then show the results of simulations,
using uniform and logarithmic bin distributions.
As can be seen, the approximation given for $G$ is quite accurate.
The match between the \scare{New Bound} and the uniform bin simulation
results is not surprising; it only serves to verify that the
algebra given in section~\ref{sec:dr_global} is correct.
The new bound can be used to predict data requirements
when bins are uniformly distributed.
It is far superior to the old bound beyond about one training instance
per bin ($m = 10$ thousand).
However, when the bins are logarithmically distributed, learning converges
significantly more quickly, as suggested by the reasoning about expected
number of relevant training instances.
Non-uniformity of the bin distribution accelerates the learning process.
Any practical theory of data requirements must incorporate some measure
of non-uniformity which can be related to the increase in learning rate.
Perhaps surprisingly though, the
logarithmic distribution appears to eventually fall behind the uniform one
once there is plenty of data. This might be explained by the presence of very
rare bins in the logarithmic distribution which thus take longer to learn.
\section{Summary}
I have argued for the need for a theory of data requirements in \sll\
and provided a framework for reasoning about it. Within this framework
I have given some results regarding the relationship between
expected accuracy and training data volumes. This includes a
computable expression for the expected accuracy under the assumption
of uniform optimal probabilities and uniform bin distribution.
In addition, I have reported on some simulations showing that
learning converges more quickly with non-uniform bin distributions.
While all these results
provide insights into the nature of data requirements in \sll\ systems,
there is a long way to go towards a general theory. I hope that these
first steps will form a reasonable foundation for future research, with the
eventual outcome that the badly needed predictive theory will become
available, making statistical language learning research less of an art form
and statistical language learning technology more accessible to industrial
application.
\chapter{Experiments on Noun Compounds}
\label{ch:experimental}
Since compound noun understanding involves a great deal of knowledge, the
statistical acquisition of information for compound disambiguation
is a promising task for the application of the meaning distributions theory.
The experimental work in this thesis is aimed at two tasks.
First, the syntactic analysis of noun compounds
and, second, their semantic analysis.
Accordingly, this chapter
is divided into two parts, each devoted to one of these tasks.
Where components have been used both in the experimental work
on syntax and that on semantics, details are only given in the first half.
\section{Noun Compound Parsing}
The first half of this
chapter describes experiments on parsing compound nouns using the
statistical techniques suggested by the meaning distributions theory.
The goal here is the syntactic analysis of any noun compound, although
for the purposes of the experiment I will only consider minimally
ambiguous noun compounds. The problem and the precise scope of the
experiments is defined in section~\ref{sec:cy_problem}.
In section~\ref{sec:cy_model}, I will use the meaning
distributions theory to develop a probabilistic model
of noun compound syntax. This model is based on dependency relations
between concepts, which provide a shallow semantic representation.
The section also shows that
these dependency structures are isomorphic to constituency based
parse trees.
An overview of the components of the experimental setup is given
in section~\ref{sec:cy_design}, including the corpus, the lexicon,
the analyser and the test set. Full details of the experimental
method follow in section~\ref{sec:cy_method}. These include
the method used to collect test and training data from the corpus,
the concepts used, the parameter estimation strategy and the
analysis procedure.
Section~\ref{sec:cy_results} gives the resulting performance
measures for the parsing model. This is followed
in section~\ref{sec:cy_comparisons} by a series of
empirical comparisons, including one between the new parsing model
and an equivalent adjacency algorithm (recall
from section~\ref{sec:cn_statistical} that all previous
proposals use the latter method).
To establish how close the performance of the model
is to the best possible under these conditions, an experiment
with human judges has been conducted. This is reported in
section~\ref{sec:cy_human}. Finally, some conclusions are drawn
and limitations discussed in section~\ref{sec:cy_discuss}.
\subsection{Defining the Parsing Problem}
\label{sec:cy_problem}
In all the experimental work, I will only consider English compound nouns.
Nonetheless, compounds appear in many other languages (a list of research
work on compounds in other languages was given in
section~\ref{sec:cn_nature}) and there seems no reason why the same
techniques would work less well in these.
I shall also assume that the compound has been recognised from the
surrounding text, so that the system is presented with a sequence of nouns
known to be a compound (see the description of the identification problem
in section~\ref{sec:cn_computational}).
Given an identified compound, it is simplest to define the parsing task as one
of bracketing. That is, the system must select the most likely binary
bracketing of the noun sequence, assuming that it is a compound noun. For
example, \lingform{animal cruelty committee} would usually be analysed as
shown in
example~\ref{eg:cy_problem_brack_1}\ref{eg:cy_problem_brack_acc}
whereas \lingform{woman customs official} would be assigned that
shown in
example~\ref{eg:cy_problem_brack_1}\ref{eg:cy_problem_brack_wco}.
\begin{examples}
\item \label{eg:cy_problem_brack_1}
\begin{subexamples}
\item{[}[animal cruelty \nn] committee \nn{]}
\label{eg:cy_problem_brack_acc}
\item{[}woman [customs official \nn] \nn{]}
\label{eg:cy_problem_brack_wco}
\item{[}[chocolate [[birthday party \nn] cake \nn] \nn]
obsession \nn{]} \label{eg:cy_problem_cbpco}
\end{subexamples}
\end{examples}
I refer to the former as left-branching, and the latter as right-branching.
Note that bracketing is well-defined for longer compounds too, as shown
in example~\ref{eg:cy_problem_brack_1}\ref{eg:cy_problem_cbpco}.
This representation is slightly weaker than most syntactic representations, in
the same way that dependency grammar is (a detailed theory of this
grammatical formalism is given in Mel'cuk,~1988). The
difficulty lies in determining the scope of modifiers
(for a detailed description, see Covington,~1994).
For example, in \lingform{prototype hydrogen balloon}
it is clear that the syntax is right-branching, yet there is an
ambiguity as to whether the object being described is a prototype for
all balloons that happens to use hydrogen, or is a prototype for
specifically hydrogen balloons. For the purposes of this
investigation, I shall assume that scoping ambiguity is negligible, and
accept bracketing as sufficient syntactic analysis.
When analysing compounds, I take the correct bracketing to be that which
reflects the compositional structure of the {\em meaning} of the compound.
This is demonstrated in example~\ref{eg:cy_problem_brack_1}. The
meaning of \lingform{animal} is related to that of \lingform{committee}
only through the meaning of \lingform{animal cruelty}. Similarly, the
meaning of \lingform{customs} is only related to that of \lingform{woman}
through the meaning of \lingform{customs official}. While this is the
usual analysis, it is conceivable that a syntactic theory might require a
different interpretation of bracketings (for example, in order to explain some
observed grammatical alternations). The interpretation of syntactic analysis
used here is aligned with the ultimate goal of understanding compound
nouns.
According to most views of compounding, the composition of two nouns
yields an element with essentially the same syntactic behaviour as the
original nouns. A two word compound noun acts exactly like a single noun,
as do three word compounds and so forth. It is this recursion that allows
arbitrarily long compounds. Only one theory of compounding,
Marcus~(1980), posits any change in the syntactic properties of compounds
as they get longer, and counter-examples to this theory are well-known
(Finin,~1980). It follows that all the qualitative syntactic properties of
compounds of any length are exhibited by three word compounds (shorter
compounds lack internal syntactic ambiguity).
In the experimental work I will adopt this view as an assumption and only
investigate three word compounds. However, the probabilistic model of
compound noun syntax to be developed in section~\ref{sec:cy_model} is not
limited to three word compounds. It assigns probabilities to arbitrarily long
compounds (modulo the assumption that syntactic behaviour is fully
recursive). As we've seen, for three word compounds the syntactic
ambiguity is binary.
So now I can precisely state the problem addressed in the experimental work
below.
\begin{description}
\item[Problem Statement:] Given a three word English compound noun,
predict whether the most likely syntactic analysis is left-branching or
right-branching.
\end{description}
\subsection{A Model Based on Dependency
Grammar}
\label{sec:cy_model}
In section~\ref{sec:cn_statistical} I described a number of corpus-based
approaches to precisely this problem. Recall that
all of these use an adjacency algorithm; that is, they
select the analysis whose innermost constituent is most acceptable.
In this section I will develop a probabilistic
conceptual model of noun compounds.
The model assigns probabilities to shallow representations of
noun compound meanings. In conjunction with the meaning distributions
theory, this leads to a new algorithm.
The section is composed of five parts. First, I will discuss the
principal differences between the new algorithm and existing ones.
Second, I will formally define the structures upon which the model
is based and note the relationship of these structures to parse
trees.
Third, the event space and assumptions of the model will be
given.
Fourth, I will give the equations for applying the model to parsing
noun compounds and finally I will discuss a general prediction
of the model that is consistent with empirical data.
\subsubsection*{Two different analysis methods}
There are three key differences between the new algorithm and the
existing ones.
First, and most significant of all,
the new algorithm selects the analysis incorporating the most
acceptable dependency relations (instead of the most acceptable
constituents). Second, conceptual association is used rather than lexical
association. Third, the statistics used by previous algorithms are
not derived from explicit probabilistic models.
Before beginning the mathematical development of the model, I will
describe informally the most important difference, the
distinction between analysis based on innermost constituents and that based
on dependency relations. The other two
differences are sufficiently self-evident to stand without additional
explanation.
Consider the algorithms based on innermost constituents. As mentioned
in section~\ref{sec:cn_statistical},
these are all variants of an algorithm proposed
by Marcus~(1980), which makes calls to an oracle to determine how
acceptable a two word compound is. This is reproduced again here.
\begin{citedquote}{Marcus,~1980:253}
Given three nouns $n_1$, $n_2$ and $n_3$:
\begin{itemize}
\item If either [$n_1$ $n_2$] or [$n_2$ $n_3$] is not
semantically acceptable then build the alternative structure;
\item otherwise, if [$n_2$ $n_3$] is semantically
preferable to [$n_1$ $n_2$] then build [$n_2$ $n_3$];
\item otherwise, build [$n_1$ $n_2$].
\end{itemize}
\end{citedquote}
Notice that only adjacent nouns are ever given to the oracle for acceptability
judgements. I have called algorithms that select analyses
on the basis of the acceptability of the innermost constituent
adjacency algorithms.
An algorithm that captures dependency relations must allow for longer
distance dependencies. Such an algorithm was first reported in
Lauer~(1994).\footnote{This algorithm can be viewed as a
specialisation of the probabilistic grammar that conditions on
syntactic relations and heads, which is proposed in
Charniak~(1993) as described earlier in section~\ref{sec:sn_grammars}.}
This algorithm might be written as follows.
\begin{quote}
Given three nouns $n_1$, $n_2$ and $n_3$:
\begin{itemize}
\item Determine how acceptable the structures
[$n_1$ $n_2$] and [$n_1$ $n_3$] are;
\item if the latter is more acceptable, build [$n_2$ $n_3$] first;
\item otherwise, build [$n_1$ $n_2$] first.
\end{itemize}
\end{quote}
This algorithm pays no attention to the acceptability of [$n_2$ $n_3$],
focusing instead on two constructions involving $n_1$. I will call any
algorithm that maximises the acceptability of dependency relations within
the compound a \newterm{dependency algorithm}.
For example, when \lingform{backup compiler disk} is
encountered, the dependency analysis will be:
\begin{examples}
\item \label{eg:cy_model_dependency}
\begin{subexamples}
\item{[}[backup compiler \nn] disk \nn{]}
when \lingform{backup compiler} is more acceptable
\label{eg:cy_problem_brack_bcd_l}
\item{[}backup [compiler disk \nn] \nn{]}
when \lingform{backup disk} is more acceptable
\label{eg:cy_problem_brack_bcd_r}
\end{subexamples}
\end{examples}
This should be compared to example~\ref{eg:cn_statistical_adj} in
section~\ref{sec:cn_statistical}. The difference between these two types of
algorithms is illustrated graphically in figure~\ref{fig:cy_model_2ways}.
\begin{figure*}
\centering
\input{figures/cy_2ways.tex}
\caption{Two analysis models and the associations they compare}
\label{fig:cy_model_2ways}
\end{figure*}
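In schematic form the contrast is as follows, where \texttt{acceptable} is a
hypothetical stand-in for whatever association oracle is in use; the Python
sketch below is illustrative only.
\begin{verbatim}
def adjacency(n1, n2, n3, acceptable):
    # Compare the two candidate innermost constituents.
    if acceptable(n2, n3) > acceptable(n1, n2):
        return "right"    # [n1 [n2 n3]]
    return "left"         # [[n1 n2] n3]

def dependency(n1, n2, n3, acceptable):
    # Compare the two possible attachments of n1.
    if acceptable(n1, n3) > acceptable(n1, n2):
        return "right"    # n1 modifies n3
    return "left"         # n1 modifies n2
\end{verbatim}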
I claim that the dependency algorithm makes more
intuitive sense for the following reason. Consider
the compound \lingform{calcium ion exchange}, which is
typically left-branching. There does not seem to be
any reason why \lingform{calcium ion} should be any
more frequent than \lingform{ion exchange}. Both are
plausible compounds and regardless of the
bracketing, \lingform{ions} are the object of an
\lingform{exchange}. Instead, the correct parse depends on
whether \lingform{calcium} characterises the \lingform{ions}
or mediates the \lingform{exchange}.
The model given below follows the meaning distributions theory. It
involves a shallow representation of compound noun semantics, the structure
of which is a homomorphic image of their syntactic structure. It therefore
satisfies the requirements of the meaning distributions theory. The basic unit
of this representation is the notion of modification. The model uses
\newterm{modificational structures}, a representation that can be
constructed directly from a bracketing, but which reflect the semantics of the
compound more directly. Since the notion of modification is made explicit
in dependency grammar, the model is akin to a probabilistic dependency
grammar for compound nouns. The resulting compound noun parsing
algorithm is a dependency algorithm for this reason.
\subsubsection*{Modificational structure and semantic
classes}
To build a dependency-based model we require a representation
that captures the dependency relations within a compound.
I will therefore now give a definition of modificational structures
and show how the compound noun parsing problem can be reduced
to the selection of these structures.
According to the general syntax rules of compound nouns, every
binary tree with $n$ leaves is a possible parse of a compound noun
$w_1 w_2 \ldots w_n$ where the $w_i$ are all nouns. In each case,
we say that the leaves of the parse tree are labelled by the words
in the string. Each such parse incorporates a modificational structure.
This is defined by assigning one
modification relationship for each interior node in the binary tree. In
particular, for each interior node, we assert that the rightmost leaf of the left
child is a \newterm{modifier} of the rightmost leaf of the right child.
\begin{definition}[Modifier]
Given a binary branching parse tree with leaf nodes $x_1, x_2, \ldots, x_n$
and an interior node, $t$, define $x_i$ to be a {\em modifier\/}
of $x_j$ through $t$,
where $x_i$ is the rightmost leaf of the left child of $t$ and $x_j$ is the
rightmost leaf of the right child of $t$.
\end{definition}
This results in $n-1$ modification relationships, one for each word except
the last. The set of modification relationships defined by a parse tree is
called the modificational structure of that tree.
\begin{definition}[Modificational Structure]
Given a binary branching parse tree with leaf nodes $x_1, x_2, \ldots, x_n$,
the {\em modificational structure\/} of that tree is
the set of ordered pairs of leaf
nodes $\{ (x_i, x_j) | x_i \mbox{ is a modifier of } x_j\}$.
\end{definition}
Modificational structures can be displayed graphically by means of one
point for each leaf node and one arrow for each modification relationship,
pointing from the modifier to the modified leaf node. An example showing
how a parse tree yields a modificational structure is shown in
figure~\ref{fig:parse2modstruct}. While the children at a node
of a parse tree are ordered (left child and right child),
modifiers of a node in a modificational structure are unordered.
\begin{figure*}
\centering
\input{figures/cy_pt2ms.tex}
\caption{Parse trees define modificational structures}
\label{fig:parse2modstruct}
\end{figure*}
This definition of modificational structure follows the general
properties of compound noun interpretations in which the rightmost
noun below a given node is the head and carries the semantic class of the
interpretation of that node. I am not claiming that modificational
structures fully represent the meaning of compounds. Rather, they create
divisions along the lines of such meanings, which are useful for syntactic
predictions. We can think of the modificational structure as a shallow
representation of the meaning of the compound, from which the syntactic
structure is generated in a productive process.
A corollary of the definition of modifier is that every word $w_i$
($1 \leq i < n$) labels a leaf that is a modifier of a unique other
leaf further to its right.
Hence every modificational structure forms a tree
with the rightmost word $w_n$ labelling the root. Whenever $x_i$ is a
modifier of $x_j$, $x_j$ is closer to the root than $x_i$. We say
that $x_j$ is above $x_i$ whenever $x_j$ lies on the path from $x_i$
to the root.
Since the modifiers of a node in a modificational structure are unordered,
there is generally more than one parse tree for a single modificational
structure. It is therefore useful to define the possible
orderings of the nodes of a modificational structure.
\begin{definition}[Consistent Ordering]
Given a rooted tree with $n$ nodes $x_a, x_b, \ldots, x_z$,
any ordering of these nodes $x_1, x_2, \ldots x_n$ is defined
to be {\em consistent\/}
with the tree if every node of the tree has the property that
the set of nodes below it forms a complete consecutive subsequence
$x_i, x_{i+1},\ldots, x_j$ of the ordering for some $1 \leq i \leq j \leq n$.
\end{definition}
It is easy to show that whenever $x_1, x_2, \ldots x_n$ is consistent
with a directed tree, that tree is the modificational
structure of exactly one parse of the string ``$w_1 w_2 \ldots w_n$''
where each $x_i$ is labelled $w_i$ for $1 \leq i \leq n$.
It is therefore sufficient to choose the correct
modificational structure in order to determine the correct parse. Henceforth,
I will thus consider the goal to be selection of modificational structures
rather than bracketings.
Given a modificational structure with nodes labelled
$w_1, w_2, \ldots, w_n$, every
postorder traversal of the tree generates a string ``$w_{q(1)} w_{q(2)}
\ldots w_{q(n)}$'' where $q$ is a permutation consistent with the
modificational structure. This string is a syntactically legal compound (any
string of nouns is under the rule \={N} $\rightarrow$ \={N} \={N}), and in
each case (that is, each possible postorder traversal), the modificational
structure must correspond to exactly one parse of ``$w_{q(1)} w_{q(2)}
\ldots w_{q(n)}$''.
Figure~\ref{fig:cy_model_modstruct2parse} shows how one modificational
structure yields multiple word strings, but in each case only one parse of the
string incorporates that modificational structure. In summary, given any
two of the string, the parse and the modificational structure, the third
is uniquely defined.
\begin{figure*}
\centering
\input{figures/cy_ms2pt.tex}
\caption{Modificational structures correspond to one parse
of each of several different strings}
\label{fig:cy_model_modstruct2parse}
\end{figure*}
In addition to the use of dependency relations,
another novel aspect of the model is the incorporation of conceptual
association (see section~\ref{sec:sn_conceptual}). Rather than treating each
word individually, words are gathered into semantic classes. This has a
practical motivation in reducing the number of parameters (and thus also the
data requirements), since the number of semantic classes is smaller than the
number of words. On the theoretical side, this is in keeping with the
meaning distributions theory. Probabilities are assigned to
semantic elements, arrangements of semantic classes, rather than to surface
elements, the word strings. All the definitions regarding modificational
structures above hold equally well for nodes labelled by
semantic classes as for nodes labelled by words.
In so much as semantic classes can be viewed as semantic
primitives, the resulting modificational structures form a shallow semantic
representation of the meanings of compounds.
Following the meaning distributions theory, the model is based on
events involving these
classes, rather than events involving words. This is based on the assumption
that all words that realise a given semantic class have roughly the same
properties.
\subsubsection*{Some notation and assumptions for the
model}
I now turn to the formal specification of the model and its assumptions.
Let $W$ be the set of all words in compound nouns (each of
which will have at least one noun sense). Let $S$ be a set of semantic
classes, which are themselves represented by sets of words. That is, every
class $s \in S$ is a subset of $W$.
Each instance of a compound noun is considered an event. We
denote the occurrence of a compound whose nouns are $w_1, w_2, \ldots
w_n$ in that order by ``$w_1 w_2 \ldots w_n$''. We also assume that when
a word appears in a compound, it is used in a sense that corresponds to one
of the semantic classes. Since words are polysemous, the sense being used
is not explicit and different sense possibilities must be accounted for by the
model. We denote the (unknown) semantic class of a particular instance of a
word, $w_i$, by $\mbox{\it sense\/}(w_i) \in S$. To allow all word senses in a
compound to be considered together, we take $s_1 s_2\ldots s_n$ to denote
the occurrence of a compound ``$w_1 w_2 \ldots w_n$'' wherein
$\mbox{\it sense\/}(w_i) = s_i$ for all $0 < i \leq n$.
Assume that the entire vocabulary has been assigned to semantic classes.
\begin{assumption}[Lexical Meaning]
Every word is a member of some semantic class:
\begin{equation}
(\forall w \in W) (\exists s \in S \mbox{ such that } w \in s)
\end{equation}
\end{assumption}
It follows that $(\bigcup_{s \in S} s) = W$.
Since words have multiple senses, each word may appear in
several semantic classes.
Define $\mbox{\it cats\/}(w) \stackrel{\rm def}{=} \{
s \in S | w \in s \}$, the set of semantic classes that may be realised by a
word. The lexical meaning assumption guarantees that this is non-empty.
Define $\ambig(w) \stackrel{\rm def}{=} | \mbox{\it cats\/}(w) |$,
the number of senses of the word $w$.
We are interested in modificational structures. Let $M(X)$ denote the set of
possible rooted trees whose nodes are labelled by elements of the set $X$
(where the arcs leading to each node are unordered).
The event $m$ where $m \in M(W)$ denotes the
occurrence of a compound noun, whose modificational structure is the
tree $m$. When $m \in M(S)$, $m$ denotes the event $s_1 s_2\ldots s_n$
with the additional information that the modificational structure of the
corresponding compound ``$w_1 w_2 \ldots w_n$'' is the tree $m$.
Also, let $s_i \rightarrow s_j$ denote the occurrence of a compound whose
modificational structure includes the link $x_i$ is a modifier of $x_j$
where $x_i$ is labelled $w_i$, $x_j$ is labelled $w_j$,
$\mbox{\it sense\/}(w_i) = s_i$ and $\mbox{\it sense\/}(w_j) = s_j$.
Finally, the model uses probabilities of the form $\Pr(s_i \rightarrow s_j |
\exists s: s \rightarrow s_j)$, which it will be useful to abbreviate as $\Pr(s_i
\rightarrow s_j | s_j)$.
We are now in a position to state the principal modelling assumption
underpinning the probabilistic conceptual model. This will be used in
addition to the meaning distributions theory,
and is of particular interest because it makes an assertion about the space of
possible meanings, rather than the space of possible syntactic structures as is
done in previous models (see section~\ref{sec:md_priors}).
It is a novel assumption of the model that all modificational structures
that are composed of modificational links with the same labels, are
equi-probable.\footnote{Since
the algorithm derived from the model only ever compares the probabilities
of modificational structures possible for a given string, only structures that
generate the same string need meet this requirement.} In fact, we assume
that the probability of any complex modificational structure is derived from
the probabilities of its links by multiplication.
\begin{assumption}[Modifier Independence]
Given a modificational structure $m \in M(S)$, its probability is
proportional to the product of the probabilities of its individual modification
relationships; that is,
\begin{equation}
\Pr(m) \propto \prod_{c \mbox{ is a modifier of } x \mbox{ in } m}
\Pr(c \rightarrow x | x)
\label{eq:cy_model_link_product}
\end{equation}
\end{assumption}
This assumption differs substantially from other probabilistic grammar
models proposed in the past, which typically assume that all parse trees
involving the same rewrite rules are equi-probable.
The modifier independence assumption makes an assertion
about the distribution of possible
meanings rather than about the distribution of possible syntactic structures.
This difference becomes important because some modificational structures
are possible interpretations of several different compound nouns (as is the
case in figure~\ref{fig:cy_model_modstruct2parse}),
while parse trees are always an analysis of a unique compound.
Intuitively, when a speaker wishes to refer to an entity, she may
choose among different possible orderings of the modifiers. For
example, suppose an object $A$ has two associated objects $B$ and $C$,
used to identify it. The speaker may use either ``$w_B w_C w_A$'' or
``$w_C w_B w_A$''. In contrast, if object $A$ is associated with object
$B$, which is associated with object $C$, the speaker must use ``$w_C w_B
w_A$''. Therefore, assuming \foreign{a priori} equi-probable
modificational structures, and given the compound ``$w_C w_B w_A$'', the
probability of the first structure is half that of the second structure.
To capture this imbalance between the possible modificational structures, we
define the degree of choice available to the speaker when expressing a given
modificational structure $m$ as follows. This is simply the number of
postorder traversals of the modificational tree.
\begin{definition}[Choice]
Given a modificational structure $m$, the number of distinct strings with
parses for which $m$ is a valid modificational structure is called the
{\em degree of choice\/} of $m$, and is given by
\begin{equation}
\mbox{\it choice\/}(m) = \prod_{x \in \mbox{NodesOf}(m)}
\mbox{ChildrenOf}(x)!
\label{eq:cy_model_choice}
\end{equation}
\end{definition}
Note that this measure is independent of the type of node labels in the
tree and applies with $m \in M(W)$ or $m \in M(S)$.
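Computing the degree of choice is a short recursion over the tree. In the
following illustrative Python sketch a node is represented by the tuple of
its subtrees, so a leaf is the empty tuple.
\begin{verbatim}
from math import factorial, prod

def choice(node):
    # Product over all vertices of (number of children) factorial.
    return factorial(len(node)) * prod(choice(c) for c in node)

left_branching = (((),),)     # the chain w1 -> w2 -> w3
right_branching = ((), ())    # both w1 and w2 modify w3
print(choice(left_branching), choice(right_branching))   # 1 2
\end{verbatim}
The two printed values are the degrees of choice of the left-branching and
right-branching structures for a three word compound; this asymmetry is
taken up again below.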
In addition to the modifier independence assumption, I will also adopt the
meaning distributions theory. In this case, the space of possible meanings is
$M(S)$ and the mapping between word strings and this space is determined
by the grammar of compound nouns. Given a string of elements of $W$,
``$w_1 w_2 \ldots w_n$'', there is a set of modificational structures over
senses, $\Psi_{w_1, w_2, \ldots, w_n} \subseteq M(S)$, which can generate
the string (precisely those with nodes
$s_i \in \mbox{\it cats\/}(w_i)$ and with
which the ordering $s_1,s_2, \ldots s_n$ is consistent). Consider any $m \in
\Psi_{w_1, w_2, \ldots, w_n}$. The meaning distributions theory states in
equation~\ref{eq:md_linking_contrib} (in section~\ref{sec:md_linking})
that all strings that can be generated by $m$ are equi-probable given $m$.
I will adopt that assumption here.
\begin{assumption}[Meaning Distributions]
Given a modificational structure $m$ that is an allowable interpretation of
the word string ``$w_1 w_2 \ldots w_n$'' $(m \in \Psi_{w_1, w_2, \ldots,
w_n})$, every string generated by $m$ has an equal
contribution to its probability from $m$. That is,
\begin{equation}
(\forall q \mbox{ consistent with } m)
(\Pr(``w_{q(1)} w_{q(2)} \ldots w_{q(n)}\mbox{''} | m)
= \Pr(``w_1 w_2 \ldots w_n\mbox{''} | m) )
\label{eq:cy_model_uniform_generation}
\end{equation}
\end{assumption}
There are exactly $\mbox{\it choice\/}(m)$ sense sequences consistent with
$m$. For each of these sense sequences, the number of possible word
sequences is given by $\prod_{i=1}^{n} |s_i|$. Therefore the total number
of word strings generated by $m$ is the product of these two factors, and so
equation~\ref{eq:cy_model_uniform_generation} dictates that the
probability that $m$ will generate exactly the string ``$w_1 w_2 \ldots
w_n$'' is given by
\begin{equation}
\Pr(``w_1 w_2 \ldots w_n\mbox{''} | m) =
\frac{1}{\mbox{\it choice\/}(m) \prod_{i=1}^{n} |s_i|}
\label{eq:cy_model_prob_generation}
\end{equation}
Note that $m \in M(S)$ and so the values of the $s_i$ are determined by
$m$.
\subsubsection*{Analysis}
Having developed this machinery, we are now in a position to apply
it to the problem of analysing compound nouns. The choice facing the
system is to select some $m_w \in M(W)$ which is the correct
modificational structure for a given compound ``$w_1 w_2 \ldots w_n$''.
In the terminology of the data requirements theory
of chapter~\ref{ch:dr}, the bins are word strings and the values
are modificational structures.
Thus it must compare the various probabilities $\Pr(m_w | ``w_1 w_2 \ldots
w_n\mbox{''})$ for each possible $m_w \in M(W)$. In order to do so, the model
uses estimates of a set of parameters
$\Pr(s_1 \rightarrow s_2 | s_2 )$, the probability that
semantic class $s_1$ modifies semantic class $s_2$,
given that $s_2$ is modified. We therefore need an expression for the
probability of each parse in terms of these parameters.
First, since the probabilistic conceptual model assigns probabilities to
elements of $M(S)$, it will be convenient to let $m_s \in M(S)$ stand for the
modificational structure formed from $m_w$ by assigning the sense $s_i$ to
each word $w_i$ in $m_w$, where the values of $s_i$ will be fixed below.
Since each word instance realises exactly one semantic class (the lexical
meaning assumption guarantees at least one), the events $m_s$ partition
$m_w$, and
probability theory gives the following equality.
\begin{eqnarray}
\Pr(m_w | ``w_1 w_2 \ldots w_n\mbox{''})
& = & \sum_{
s_1 \in \mbox{\it cats\/}(w_1),\ldots,s_n \in \mbox{\it cats\/}(w_n)}
\Pr(m_s | ``w_1 w_2 \ldots w_n\mbox{''}) \\
& = & \sum_{
s_1 \in \mbox{\it cats\/}(w_1),\ldots,s_n \in \mbox{\it cats\/}(w_n)}
\frac{\Pr(m_s) \Pr(``w_1 w_2 \ldots w_n\mbox{''} | m_s)
}{\Pr(``w_1 w_2 \ldots w_n\mbox{''})}
\end{eqnarray}
where the second step applies Bayes' rule.
Equation~\ref{eq:cy_model_prob_generation} allows us to simplify further:
\begin{eqnarray}
\lefteqn{\Pr(m_w | ``w_1 w_2 \ldots w_n\mbox{''})} \nonumber \\
& = & \sum_{
s_1 \in \mbox{\it cats\/}(w_1),\ldots,s_n \in \mbox{\it cats\/}(w_n)}
\frac{\Pr(m_s)}
{\mbox{\it choice\/}(m_s) \prod_{i=1}^{n} |s_i|
\Pr(``w_1 w_2 \ldots w_n\mbox{''})} \\
& = & \frac{1}
{\mbox{\it choice\/}(m_w) \Pr(``w_1 w_2 \ldots w_n\mbox{''})}
\sum_{
s_1 \in \mbox{\it cats\/}(w_1),\ldots,s_n \in \mbox{\it cats\/}(w_n)}
\frac{\Pr(m_s)}{\prod_{i=1}^{n} |s_i|}
\label{eq:cy_model_analysis}
\end{eqnarray}
The second step follows from the fact that $\mbox{\it choice\/}(m_w) =
\mbox{\it choice\/}(m_s)$ because $m_s$ has the same structure as $m_w$.
Since $\Pr(``w_1 w_2 \ldots w_n\mbox{''})$ is constant during the analysis of a given
compound, the only probability needed is $\Pr(m_s)$, which is given in
terms of the model parameters by the modifier independence assumption.
Thus, given the set of possible analyses, the most probable can be found by
computing the above function for each and taking the largest. We now have
a dependency algorithm.
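For three word compounds the computation is small enough to sketch
directly. In the illustrative Python fragment below, \texttt{cats},
\texttt{affinity} and \texttt{size} are hypothetical stand-ins for the
thesaurus lookup $\mbox{\it cats\/}(w)$, the estimated parameters
$\Pr(s_1 \rightarrow s_2 | s_2)$ and the class sizes $|s|$; none of these
is defined here.
\begin{verbatim}
from itertools import product

def parse(w1, w2, w3, cats, affinity, size):
    # Score each bracketing by summing Pr(m_s) / (|s1||s2||s3|)
    # over sense assignments, then divide by choice(m).
    left = right = 0.0
    for s1, s2, s3 in product(cats(w1), cats(w2), cats(w3)):
        norm = size(s1) * size(s2) * size(s3)
        left += affinity(s1, s2) * affinity(s2, s3) / norm
        right += affinity(s1, s3) * affinity(s2, s3) / norm
    right /= 2    # choice is 1 for left- and 2 for right-branching
    return "left" if left >= right else "right"
\end{verbatim}
The constant factor $\frac{1}{\Pr(``w_1 w_2 w_3\mbox{''})}$ is common to
both analyses and so can be dropped from the comparison.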
The model given can be applied to compound nouns of any length. In the
experimental work described below, it will be applied to three word
compounds. Before turning to that work, one implication of the model is
worth emphasising.
\subsubsection*{A prediction of the model}
While the various adjacency algorithms are not based on probabilistic
models, the statistics that they compute can, in most cases, be interpreted as
probabilities. In this way inferences can be made about what any
probabilistic model for them must look like. In particular, they exhibit a
reflectional symmetry through the middle word, which can tell us something
about their assumptions. For them, left-branching and right-branching
parses are reflections of one another and nothing about the statistics
computed or the decisions based on those statistics breaks that symmetry.
Therefore, given no information about the words, any reconstructed
probabilistic model for an adjacency algorithm must predict that
left-branching and right-branching compounds are equally likely. Half of all
three word compounds should be left-branching and the other half
right-branching.
A significant prediction of the dependency model is that the proportion of
left- and right-branching three word compounds should differ. Suppose a
word string ``$w_1 w_2 w_3$'' allows two equally likely modificational
structures over $S$: a left-branching $m_L$ and a right-branching $m_R$
with $\Pr(m_R) = \Pr(m_L)$. According to
equation~\ref{eq:cy_model_analysis}, the posterior likelihoods of the two
structures differ exactly by the factor $\frac{1}{\mbox{\it choice\/}(m)}$. We
have $\mbox{\it choice\/}(m_L) = 1$ and $\mbox{\it choice\/}(m_R) = 2$, so
left-branching analyses are twice as likely as right-branching analyses given
equally likely meanings. If we, not unreasonably, assume that across the
meaning space $M(S)$ left-branching and right-branching modificational
structures are equally likely, then the proportion of left-branching
compounds will be twice that of right-branching ones. That is, two-thirds of
compounds should be left-branching.
It turns out that in the test set used here
(see table~\ref{tb:cy_method_test_dist} in
section~\ref{sec:cy_method} below) and in that of Resnik~(1993), the
proportion of left-branching compounds is 67\% and 64\% respectively, well
within statistical variation of the two-thirds proportion predicted by the
dependency model. Thus, the dependency algorithm has the advantage
of correctly predicting the observed distribution where the adjacency
algorithm does not. It is therefore all the more surprising that
all previous proposals involved adjacency algorithms.
Since the dependency model was first proposed in Lauer~(1994)
a similar model has also been developed, apparently independently,
by Kobayasi~\etal~(1994) for analysing Japanese
noun compounds. Using a corpus to acquire associations, they bracket
sequences of Kanji with lengths four to six (roughly equivalent to two or
three words). Unfortunately, as a simple calculation shows, using their own
preprocessing heuristics to guess a bracketing provides a higher accuracy on
their test set than their statistical model does.
This renders their experiment
inconclusive. In the experimental work below the accuracy results are
carefully compared to the baseline accuracy of two-thirds achieved by
guessing.
\subsection{Experimental Design}
\label{sec:cy_design}
We now have a probabilistic model for parsing noun compounds based
on dependency relations.
The remainder of the first half of this chapter describes a range of
experimental work aimed at evaluating the model and the resulting parsing
strategy. The first experiment aims to
measure the accuracy of the preferences suggested by the parsing strategy.
Subsequently, a range of
variations on the algorithm will be compared, including a comparison
between the adjacency and dependency algorithms.
In this section, the architecture of the experimental system is described. In
order to discover how accurate the parsing strategy is, the experimental
method includes implementation of a program to apply the strategy,
plus a performance test of the program. The result of the experiment
is a measure of the accuracy of the strategy in predicting the appropriate
parses from compounds in the test set.
First, the architecture of the experimental system will be
outlined. The parsing strategy requires a large corpus, a part-of-speech
lexicon and a thesaurus; therefore, I will then give
a brief description of each of these and finally cover the
other input to the experimental system: the test set.
The experimental method will be given in much more detail in
section~\ref{sec:cy_method} below, including the processing of
the corpus to acquire training data, the generation of a test set
and the algorithm used to analyse three word compounds using
information derived from the training data.
\subsubsection*{Architecture}
The overall structure of the experimental setup is determined
by the model given in section~\ref{sec:cy_model}. In order to use the
model, values for the parameters $\Pr(s_1 \rightarrow s_2 | s_2 )$ are
required. These will be estimated by assuming that the distribution of
dependency relations in two word compounds is representative of those in
longer compounds. Therefore, we need only extract examples of two word
compound nouns from a corpus to provide evidence for the parameter
estimates. I will call each of these parameters the \newterm{affinity}
between $s_1$ and $s_2$.
To evaluate the performance of the model, we therefore need to assemble the
following elements:
\begin{itemize}
\item a training corpus,
\item a means of extracting two word compound nouns from
that corpus,
\item a data structure for recording the resulting evidence,
\item a program to make inferences from the evidence, and
\item a set of test examples with the correct parse for each.
\end{itemize}
In what follows, all evidence used to estimate the parameters of
the model is collected in one pass over the corpus and stored in
a fast access data structure. Evidence is gathered across the
entire vocabulary, not just for those words necessary for
analysing a particular test set. Once trained in this way, the
program can quickly analyse any compound (restricted only by
the lexicon and thesaurus). This demonstrates that the parsing strategy can
be directly employed using currently available hardware in broad coverage
natural language processing systems.
It is often the case with research prototypes for statistical
language learning techniques that experiments are conducted by
collecting only those corpus statistics necessary for the test set in
hand (see for example, Yarowsky,~1992, Resnik and Hearst,~1993).
While such a methodology establishes the accuracy of these systems,
many of the proposed techniques could not be applied across a
reasonable-sized vocabulary because of computational limits, both in
processing time and storage space. The program described here runs on a
desktop computer and, once the data structure has been set up, will analyse
compounds with no perceptible delay.
The program also starts with raw text and is designed to use only freely
available resources. For this reason, it should be possible for anyone to
cheaply reimplement the parsing strategy that I have used. Of particular
importance is the fact that no manual annotation of the training data is
necessary. This is a result of the assumption that inferences about longer
compounds can be made on the basis of observations of the behaviour of
shorter compounds.
\subsubsection*{The corpus}
The corpus that is used here is derived from an on-line
multimedia encyclopedia called \publicationname{The New Grolier
Multimedia
Encyclopedia}~(Grolier Inc.,~1992). This work is mass-produced
on \acronym{cd-rom} and contains text, graphics, sound and video. The
textual component, comprising approximately 8 million words,
was written by a variety of expert authors. It covers an
extremely wide variety of topics, so that it contains a broad
vocabulary. The subject matter is clearly not from a limited
domain, though attention is concentrated on a few themes
(geography, history, art, biology), with others appearing less
frequently.
Despite the unconstrained domain, the writing style is relatively
uniform, adopting a register designed for clarity and
informativeness. As I have argued in section~\ref{sec:md_register}, a
uniform register is precisely what we desire. Language properties are
dramatically affected by register; any system which ignores
changes in register will be more prone to error, since important
information is discarded. A model trained on text from several
registers should not be expected to accurately capture the behaviour
of any one of them.
Given the rapidly increasing volume of electronically stored
text, we can expect unannotated corpora of this size to be available
everywhere quite soon. By contrast, the availability of
manually annotated training data is much less probable. Even if large
research efforts such as the Penn Treebank project (Marcus~\etal,~1993)
succeed in producing low cost high quality parsed corpora
containing millions of words, the likelihood that these corpora will have
characteristics that are appropriate to a given task (for example,
appropriate topic, matching register and required annotations) is low.
\subsubsection*{The lexicon}
The program requires independent lexical knowledge for two tasks:
identifying noun compounds and grouping synonymous nouns together.
The first task is necessary in order to extract examples of
compounds. This is done by means of a simple heuristic that is described by
equation~\ref{eq:cy_method_trainset} in section~\ref{sec:cy_method}
below. The only linguistic resource it requires is a list of nouns, and the
assumption that such a list exists is not a highly demanding one. For this
experiment, a large lexical database from the University of Pennsylvania
morphological analyser was used. This is freely available over the Internet
for research purposes (see address in section~\ref{sec:cy_method} below).
The program used to implement the heuristic takes raw text as input and
produces a stream of two word compounds.
These compounds form the evidence to be stored in a data
structure for use by the program. This structure has one element
for each parameter of the probabilistic model. Since the
probabilistic model is based on semantic classes rather than on words a
mapping is needed to associate words with such classes. Once this is
provided, observations of the behaviour of words will yield inferences about
the behaviour of concepts represented by semantic classes.
The assignment of words to semantic classes involves some
rather deep linguistic knowledge and the availability of this kind
of resource is less assured. For anything larger than a toy
domain, the development of a lexical-semantic ontology
involves substantial effort. In this work, a machine-readable
thesaurus is used as a rough conceptual system. While the kind
of synonymy captured by thesaurus categories is unlikely to be
ideal, it is easily available. The thesaurus used is the
1911 version of Roget's Thesaurus, which is freely
available over the Internet (see address in section~\ref{sec:cy_method}).
If no well-motivated conceptual lexicon is available
for the target text type, the trivial conceptual system in which
there is exactly one concept for each word could be used.
There are a couple of drawbacks to this
lexically parametrised scheme. In a conceptual lexicon, words can
be assigned to multiple semantic classes, representing different
senses of the word. The lexically parametrised scheme cannot
represent polysemy in this way. Also, for a large vocabulary,
the number of parameters of the lexical scheme would become impractical.
Nonetheless, where it is practical, it provides an alternative
when a conceptual lexicon is unavailable.
\subsubsection*{The test set}
To generate the test set, a list of compounds, three words in length, was
extracted from the corpus. A similar heuristic to that used for finding
training instances was employed and the results then analysed by hand to
assign the correct answers. The extraction heuristic is described by
equation~\ref{eq:cy_method_testset} in section~\ref{sec:cy_method}
below.
After discarding some problematic test cases (again see
section~\ref{sec:cy_method}), each of 244 test compounds was assigned a
reading of either left-branching or right-branching by the author. This
annotation was done before any testing of the program was conducted.
The program was then given these compounds and asked to
supply one or other of the two possible readings. It was required
to give an answer in every case, so that all accuracy figures
below represent the performance at 100\% coverage. The accuracy
was then computed as the proportion of cases in which the
reading selected by the program matched that assigned
manually. Since the ambiguity is binary, the expected accuracy
of a uniform random selection is 50\%.
\subsection{Experimental Method}
\label{sec:cy_method}
In this section, the experimental setup will be presented in some detail. The
reader who is uninterested should skip to section~\ref{sec:cy_results},
noting two equations: Equation~\ref{eq:cy_method_assoc_metric}
describing the measure used to associate concepts and
equation~\ref{eq:cy_method_dep_ratio} giving the means by
which analyses are made.
\subsubsection*{The training set}
Grolier's encyclopedia contains over 30,000 text articles, each
with a title and a variable length body containing on average
around 12 sentences. An example sentence is:\footnote{This
extract is taken from the \articlename{Aardvark} article by Everett
Sentman; The Academic American Encyclopedia (Electronic
Version), Copyright 1992 Grolier, Inc., Danbury, CT.}
\begin{examples}
\item
The aardvark (or ant bear), Orycteropus afer, is the only species
in the MAMMAL family Orycteropodidae, order Tubulidentata.
\label{eg:cy_method_groliers_sentence}
\end{examples}
Here, the word \lingform{mammal} is capitalised because it anchors a
hyperlink. In the on-line version, the user may click on this to
view the article on mammals.
To create the corpus, each of the articles was extracted and concatenated.
After discarding some of the fields associated with each article (the titles, the
pronunciation codes and some bibliographic information), the
remaining text formed the corpus with no further preprocessing.
In particular, no tagging, sentence boundary
identification or stemming is needed before
the training algorithm begins.\footnote{The training algorithm
does include a couple of simple rules for converting plural nouns
to singular. These are given in the
subsection on conceptual grouping below.}
The applicability of the algorithm does not depend on
having such tools available.
Following the probabilistic model, training proceeds by estimating affinities
between semantic classes. Since two word compounds are syntactically
unambiguous, affinities can be estimated by counting such compounds. The
first step therefore is to identify two word compounds within the corpus.
Since the corpus is untagged, a heuristic is used for this purpose. Even
though the heuristic is not perfect, useful training estimates can still be
computed.
The heuristic begins by tokenising the text into a stream of
tokens of two types: words and punctuation. For the purposes of
the experiment, words are strings consisting entirely of a
combination of hyphens, apostrophes and alphabetics. All other
characters are considered part of a punctuation token, including
numerics. Thus, the string \lingform{during the 1930s Hitler (b.~1889,
d.~1945) became Germany's} results in 14 tokens, including \token{1930},
\token{s}, \token{b} and \token{Germany's}, where the first of these four is
treated as punctuation. Compounds are assumed to be consecutive strings of
words, so that the inclusion of any kind of punctuation within a compound
will cause the heuristic to ignore it. This has the advantage that comma
separated lists are not identified as compounds by the program.
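For concreteness, this tokenisation can be sketched with a single regular
expression; the character classes below are reconstructed from the
description just given and are not the original implementation:
\begin{verbatim}
import re

# Word tokens are runs of alphabetics, hyphens and apostrophes; any
# other contiguous run of non-whitespace characters (including
# numerics) forms a single punctuation token.
TOKEN = re.compile(r"[A-Za-z'-]+|[^A-Za-z'\s-]+")

def tokenise(text):
    """Split raw text into a stream of word and punctuation tokens."""
    return TOKEN.findall(text)
\end{verbatim}
Applied to the example string above, this sketch yields exactly the
14 tokens listed.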
The University of Pennsylvania morphological analyser (Karp~\etal,~1992)
contains a large lexicon of parts of speech.\footnote{
Email Dania Egedi at [email protected], or anonymous ftp
to ftp.cis.upenn.edu and retrieve /pub/xtag/morphology/morph-1.4.tar.Z.}
Among the over 317,000 entries, there are some 90,000 which are listed as
always being nouns. From these, a short stoplist of 9 words that
are entered incorrectly as always being nouns, and also all
one-letter words, have been removed to yield a set of sure nouns
(call this set, $\cal N$).\footnote{The nine words are
\lingform{baroque}, \lingform{characteristic},
\lingform{en}, \lingform{epic}, \lingform{full-scale},
\lingform{full-length}, \lingform{incident}, \lingform{laissez-faire}
and \lingform{mere}.}
Since the heuristic is intended to find two word compounds it
looks for pairs of consecutive sure nouns. For example, when
scanning \lingform{The mountain goat is found in} the heuristic
returns \lingform{mountain goat} as a two word compound. To
prevent it from selecting pairs within a larger compound that
might be syntactically ambiguous, the heuristic requires that the
tokens immediately to the left and immediately to the right not
be nouns (either punctuation or words not in $\cal N$). So in
\lingform{Baking caramel muesli cookies} the heuristic will not identify
\lingform{caramel muesli} as a compound. Thus, the training pattern
is given by:
\begin{equation}
T_{\mbox{train}} = \{ (w_2, w_3) \mid w_1 w_2 w_3 w_4; w_1,w_4 \notin
{\cal N}; w_2, w_3 \in {\cal N} \}
\label{eq:cy_method_trainset}
\end{equation}
Here, $w_1 w_2 w_3 w_4$ denotes the occurrence of four tokens
in sequence in the corpus.
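As an illustration, the extraction step might be implemented as follows;
this is a reconstruction from the pattern above, assuming the token stream
produced by the tokeniser and a set of sure nouns, not the original
program:
\begin{verbatim}
def training_pairs(tokens, sure_nouns):
    """Collect (w2, w3) pairs matching the training pattern: w2 and
    w3 are sure nouns, while the flanking tokens w1 and w4 are not
    (comparisons are case sensitive throughout)."""
    pairs = []
    for i in range(1, len(tokens) - 2):
        w1, w2, w3, w4 = tokens[i - 1:i + 3]
        if (w2 in sure_nouns and w3 in sure_nouns and
                w1 not in sure_nouns and w4 not in sure_nouns):
            pairs.append((w2, w3))
    return pairs
\end{verbatim}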
Since $\cal N$ contains only words which are always nouns, this
is quite a conservative procedure. Many common nouns, like
\lingform{bear}, have verbal usages as well, so that the occurrence of
\lingform{ant bear} in example~\ref{eg:cy_method_groliers_sentence}
above will not be extracted. However, there is no guarantee that two
consecutive nouns form a compound for two reasons.
First, other grammatical constructions can result in two
consecutive nouns. For example, direct and indirect objects of a
verb (as in \lingform{give every {\bf zebra gifts}}) or premodifying
prepositional phrases (as in \lingform{In the third {\bf century
peasants} farmed the valley}) both result in consecutive nouns.
Second, because the heuristic ignores nouns like \lingform{bear}, longer
compounds may only be partially extracted and the part extracted may not
be a compound. For example, the string \lingform{aluminium radiator
grill} produces \lingform{aluminium radiator} as a candidate
(since \lingform{grill} is also used as a verb) when it is unlikely that
this is a compound in this case.
I manually checked the first 1000 compounds extracted by the
heuristic and found that only 21 of them did not form compounds.
The causes of these errors are shown in
table~\ref{tb:cy_method_train_errors}.
\begin{table*}
\centering
\begin{tabular}{|l|r|} \hline
Category & Number \\ \hline
PP boundary crossed & 5 \\
Class instance construction (`\lingform{the term earth}') & 5 \\
Relative clause boundary crossed & 3 \\
Direct-indirect object boundary crossed & 2 \\
Foreign language & 1 \\
Sentence capitalisation & 1 \\
Elided relative pronoun & 1 \\
Name mistaken for noun & 1 \\
Participle mistaken for noun & 1 \\
Ungrammatical sentence & 1 \\ \hline
\end{tabular}
\caption{Training set error distribution from 1000 examples}
\label{tb:cy_method_train_errors}
\end{table*}
The heuristic is also conservative about capitalisation, always
making case sensitive comparisons. This permits it to keep
information about proper nouns separate from similar common
nouns. Unfortunately, this can result in the omission of
compounds at the beginning of a sentence because the first
word may be capitalised when ordinarily it would not be.
This policy also obscures all compounds involving a hyperlink anchor.
This results in the compound \lingform{MAMMAL family} in
example~\ref{eg:cy_method_groliers_sentence} above being ignored.
Many of the problems that I have identified with the heuristic in
this section could be alleviated by using a sentence boundary
detector and part of speech tagger.
The training pattern was passed over the entire corpus.
Table~\ref{tb:cy_method_noun_seqs} shows the number of noun pairs
produced, along with equivalent counts for longer noun
sequences. It is worth noting that this distribution understates
the actual distribution of compounds in the corpus by a great
deal. The degree of conservatism in the heuristic both reduces
the total number of compounds identified and distorts the
relative lengths towards shorter compounds. This is because
longer compounds are more likely to contain a noun that also
has a verbal usage, somewhere within them. Of the sequences
extracted, the quads were ignored, the pairs used for training and
the triples for testing.
\begin{table*}
\centering
\begin{tabular}{|l|r|c|} \hline
Type & Number & Specification \\
\hline
pairs & 35343 & $\{ (v, w) \mid u v w x; v,w \in
{\cal N}; u,x \notin {\cal N}\}$ \\
triples & 625 & $\{ (v, w, x) \mid u v w x y; v,w,x \in
{\cal N}; u,y \notin {\cal N}\}$ \\
quads & 6 & $\{ (v, w, x, y) \mid u v w x y z; v,w,x,y
\in {\cal N}; u,z \notin {\cal N}\}$ \\
longer & 0 & $\{ (v, \ldots, w) \mid u v \ldots w x;
v, \ldots, w \in {\cal N}; u,x \notin {\cal N}\}$ \\
\hline
\end{tabular}
\caption{Noun sequences extracted heuristically from Grolier's
encyclopedia}
\label{tb:cy_method_noun_seqs}
\end{table*}
\subsubsection*{Conceptual grouping}
To provide the concepts necessary for conceptual association,
an electronic version of the 1911 Roget's
thesaurus is used. This is produced by Micra Inc. and is
available via anonymous ftp from project Gutenberg.\footnote{
Anonymous ftp to Project Gutenberg:
mrcnext.cso.uiuc.edu/etext/etext91/roget13a.txt. Credit is due to
Patrick Cassidy of Micra Inc. for making this resource publicly available.}
While it is rather out of date, it is freely available.
The text in the thesaurus is freeform, being designed
for a human reader. Therefore, several preprocessing steps (a
few days' development work) are required to arrive at a machine
readable set of categories. Readers interested in using the
processed version of the thesaurus for research purposes should
send me electronic mail.
The thesaurus contains 1043 categories. These categories are
intended to capture all possible concepts. Viewed as elements
for conceptual association they embody something of the notion of semantic
primitives (like the conceptual dependency theory of Schank,~1975, for
example, but with many more primitives). All words within a category are
to be treated equally. The hope is that differences in compound noun
modification behaviours occur only across categories, not within a single
category. For example, \lingform{goldfish pond} and \lingform{trout pool}
have similar behaviour because in each case the words used are drawn
from the same semantic classes. If their behaviour differed from
one another, the assumptions of the probabilistic model
would be violated.
The size of thesaurus categories is widely variable, so that the
degree of generalisation afforded by the concepts varies. Eight of the
categories only contain one noun, while one category (\tc{receptacle}{191})
contains 371. The average number of words per category is 34.
It is not immediately obvious what effect this has on the performance
of the model.
Also, the thesaurus is not as comprehensive as it could be.
There are 20,445 distinct nouns in the thesaurus, of which
11,716 are elements of $\cal N$. For the purposes of the
experiment, I discarded all sequences containing nouns not
found in the thesaurus, unless they appeared to be plural forms
whose singular was in the thesaurus (the rules used were:
remove final 's', change final 'ses' to 's' and change final 'ies'
to 'y'). This reduced the training set to 24,251 pairs.
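The depluralisation step might be sketched as follows; the order in which
the three rules are tried is an assumption, since it is not specified
above:
\begin{verbatim}
def singularise(noun, thesaurus_nouns):
    """Map a noun to a thesaurus entry, depluralising if needed."""
    if noun in thesaurus_nouns:
        return noun
    # Try the most specific suffix rules first (an assumption).
    for suffix, replacement in (("ies", "y"), ("ses", "s"), ("s", "")):
        if noun.endswith(suffix):
            candidate = noun[:-len(suffix)] + replacement
            if candidate in thesaurus_nouns:
                return candidate
    return None  # unknown to the thesaurus; the sequence is discarded
\end{verbatim}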
\subsubsection*{Computing affinities}
To complete the training phase it was necessary to store all the
evidence contained in this training set in a data structure for later
use by the program. Since the probabilistic model contains one
parameter for each pair of concepts, the appropriate structure is
a matrix whose rows and columns are indexed by thesaurus
categories. We interpret $A_{ij}$, the contents of the $i$th row
and $j$th column, as the probability that the concept represented
by the $i$th thesaurus category will be a compound noun
modifier of the concept represented by the $j$th thesaurus
category, that is $\Pr(t_i \rightarrow t_j \mid t_j)$.
For example, category \tc{fluidity}{333} represents liquids and
category \tc{receptacle}{191} represents containers, so we'd
expect the corresponding
$A_{333\,\,191} = \Pr(t_{333} \rightarrow t_{191} \mid t_{191})$ to
be fairly high because of examples like \lingform{juice bottle}. On
the other hand, category \tc{liberation}{750} representing the
concept of freedom is not usually an acceptable modifier of
category \tc{receptacle}{191} so that $A_{750\,\,191}$ should be
low.
The training set is a set of pairs of words and yet the
model refers to pairs of concepts. If each word always
expressed only one concept, then each observation of a word
would be an observation of the corresponding concept and
counting words would be identical to counting concepts. Due to
polysemy, most words can be used to express several different
concepts. Each noun in the 1911 Roget's Thesaurus is in an
average of 1.7 distinct categories (the word \lingform{point} is in
18).~\footnote{This average is for types; that is, it
is not weighted for frequency. If it
were, we could expect a higher figure.}
Therefore, when a word is observed it can generally be an
occurrence of any one of several distinct concepts.
To allow for this, counts of concept occurrences were
constructed by dividing the contribution from a word by the
ambiguity of that word. That is, when an occurrence of
\lingform{corn} was observed, the counts for the categories
\tc{convexity}{250} and \tc{food}{298} were each incremented by
$\frac{1}{2}$, since \lingform{corn} is in both of them. Joint counts
(of pairs of concepts) were constructed by dividing the
contribution from word pairs by the product of the words'
ambiguities. This procedure corresponds to Bayesian estimation
where the prior probability of the different concepts is assumed
uniform.
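A minimal sketch of this counting scheme, assuming \texttt{cats} maps each
noun to the list of thesaurus categories containing it:
\begin{verbatim}
from collections import defaultdict

def count_pairs(training_pairs, cats):
    """Joint concept counts: divide each word pair's contribution
    by the product of the two words' ambiguities, and spread it
    over all pairs of categories containing the two words."""
    counts = defaultdict(float)
    for w1, w2 in training_pairs:
        weight = 1.0 / (len(cats[w1]) * len(cats[w2]))
        for t1 in cats[w1]:
            for t2 in cats[w2]:
                counts[(t1, t2)] += weight
    return counts
\end{verbatim}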
Given the counts of observed concept pairs, the desired
probabilities were estimated using maximum likelihood assuming
a binomial distribution. Since \acronym{mle} is well-known to be poorly
behaved on small counts (see for example, Church~\etal,~1991b:127), this
requires some justification. Consider the four common alternatives:
\begin{itemize}
\item expected likelihood estimates (add 0.5 to each count),
\item backing-off to a smaller model (for example, deleted interpolation),
\item statistical hypothesis testing of various kinds, and
\item Good-Turing estimates.
\end{itemize}
While \acronym{ele} can be used to provide acceptable estimates in a
$t$-test (Church~\etal,~1991b:127), any estimation method that simply adds
a constant value to each count is significantly worse than \acronym{mle} for
estimating probabilities themselves, as shown by Gale and Church~(1994).
For the decision task addressed in this experiment, adding a constant to each
count would have little effect (some additional weight would be given to the
sense information provided by the known dependency link $n_2 \rightarrow
n_3$; see equation~\ref{eq:cy_method_dep_ratio} below).
Adopting a back-off strategy requires having multiple models and choosing
a criterion for either changing from one model to another or combining their
predictions. In an engineered system, making use of available alternative
models is sensible practice. From a scientific perspective the situation is
more complex because the experiment effectively evaluates several models
at once. Choosing the back-off criterion forms an extra experimental
variable. Furthermore, Collins and Brooks~(1995) discovered that their
prepositional phrase attachment system performed best when the cut-off
frequencies were set to zero at every level. That is, a model was used if just
one occurrence of an event supported the necessary estimates for that model.
In the experimental results below, zero counts occur in only a small
proportion of the test cases, so these results suggest that a back-off strategy
would be unlikely to help.
Hypothesis testing allows a model to check when it has sufficient data to
have a given degree of confidence in its analysis. Examples include the
$t$-test (Fisher and Riloff,~1992), the likelihood ratio test (Dunning,~1993),
the $\chi^2$ test (Alshawi and Carter,~1994), and the odds ratio test (Dagan
and Itai,~1994). The difficulty here is deciding what to do when the test is
inconclusive. Either the system can refuse to make an analysis (in which
case, the coverage is below 100\%) or it can apply a different model (in
which case, the system has the same difficulties mentioned for back-off
strategies). In either case, hypothesis testing does not change the analysis
predicted by the model, it simply provides confidence estimates for the
analysis that the model would have made in any case.
Good-Turing estimation is a powerful smoothing technique with good
mathematical foundation and Gale and Church~(1994) have shown the
resulting estimates to be highly accurate predictors of English bigram
probabilities. However, there are a few difficulties in applying the
Good-Turing method here. First, the counts of concepts are not real counts
because they are derived from counting words that express concepts only
ambiguously, and are therefore not necessarily integers. Since the method
relies on construction of the frequency distribution (counts of the number of
events which were observed to occur with each frequency), a direct
application of the method is impossible. Second, the method assumes that
the frequency distribution is strictly decreasing (the number of types
occurring $n$ times is greater than the number of types occurring $n+1$
times), otherwise it assigns non-monotonic probabilities (that is, events
observed $n$ times get a higher probability than those observed $n+1$
times). While word bigram frequencies fall off very rapidly, concept bigram
frequencies have a much flatter frequency distribution. Finally, reasonable
results depend on smoothing the frequency distribution, a procedure
involving further interference by the experimenter, and thus compromising
the experimental methodology.
All this said, the most commendable characteristic of maximum likelihood
estimation is the ease with which it can be implemented. For the purposes of this
experiment it presents a practical compromise.
Equation~\ref{eq:cy_method_assoc_metric} gives the maximum likelihood
estimates for the model parameters in terms of the noun pair frequencies,
$\countfn(w_i, w_j)$. It incorporates the division of counts for
ambiguous words across their different senses.
\begin{eqnarray}
\Pr(t_i \rightarrow t_j \mid t_j) & = &
\frac{1}{\eta_j}
\sum_{
\begin{array}{c}
\scriptstyle w_i \scriptstyle \in \scriptstyle t_i \\
\scriptstyle w_j \scriptstyle \in \scriptstyle t_j
\end{array}
}
\frac
{\countfn(w_i, w_j)}
{\ambig(w_i)\,\,\ambig(w_j)}
\label{eq:cy_method_assoc_metric} \\
\eta_j & = &
\sum_{
\scriptstyle w_i \scriptstyle \in \scriptstyle {\cal N}
}
\frac
{\countfn(w_i, w_j)}
{\ambig(w_i)\,\,\ambig(w_j)} \nonumber
\end{eqnarray}
Recall from section~\ref{sec:cy_model} that $\ambig(w)$ is the
number of categories in which $w$ appears.
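Computationally, the estimation step is a column normalisation of the joint
count matrix; a sketch, using the counts accumulated above:
\begin{verbatim}
from collections import defaultdict

def estimate_affinities(counts):
    """Maximum likelihood estimates of Pr(t_i -> t_j | t_j): divide
    each joint count by its column total eta_j.  Pairs never seen
    together are implicitly estimated as zero."""
    eta = defaultdict(float)
    for (t_i, t_j), c in counts.items():
        eta[t_j] += c
    return {(t_i, t_j): c / eta[t_j]
            for (t_i, t_j), c in counts.items()}
\end{verbatim}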
An important case to notice is when $\countfn(w_i, w_j) =
0$ for all $w_i \in t_i$ and $w_j \in t_j$. Where this is the case,
the probability estimate for $\Pr(t_i \rightarrow t_j \mid t_j)$ is
zero. This unfortunate side-effect of using maximum likelihood
estimates means that the resulting model prohibits any analysis
involving these relationships (that is, where a word used to
express the concept represented by $t_i$ modifies a word used
to express the concept represented by $t_j$). Apart from the
minor correction described in the next section, nothing has been
done to address this difficulty.
It should also be clear from
equation~\ref{eq:cy_method_assoc_metric} that the conceptual
association used is asymmetric. That is, it is generally true that
$A_{ij} \neq A_{ji}$.
\subsubsection*{Test set}
To generate the test set, a pattern like that used for training was
passed over the corpus. It extracts sequences of three nouns:
\begin{equation}
T_{\mbox{test}} = \{ (w_2, w_3, w_4) \mid
w_1 w_2 w_3 w_4 w_5; w_1,w_5 \notin {\cal N}; w_2, w_3, w_4 \in {\cal
N} \}
\label{eq:cy_method_testset}
\end{equation}
As noted above, this yielded 625 triples. Since the model can
only analyse compounds containing words in the thesaurus, any
sequences containing words unknown to the thesaurus were
discarded. This left 308 triples, which were then manually
analysed by the author using the entire article in which they
appeared as context (examples are shown in
table~\ref{tb:cy_method_test_dist} below). The entire test set
is given in appendix~\ref{appendix:cytest}.
As for the training set, not all sequences extracted in this way
were compound nouns. For the same reasons as described
above (nouns adjacent across a constituent boundary and
partially extracted longer compounds) many of the sequences
were not valid compounds and were marked as errors. An
example appears in table~\ref{tb:cy_method_test_dist}. These
triples were not used in computing the accuracy of the model.
Another group of triples did form valid compounds, but could
not be assigned a strictly left-branching or right-branching
structure in the context. In principle, the two possible syntactic
readings of a three word compound noun correspond to distinct
meanings. Normally one or other of these meanings will be the
intended one, even if the other happens to be true of the situation
being described. However, in certain contexts, both of these
meanings appear to be part of the communicative intent of the
writer. An example appears in table~\ref{tb:cy_method_test_dist},
though one would need more context than I have provided here
to verify that this is indeed an example.
Hindle and Rooth~(1993:113) observe a similar phenomenon in the
attachment of prepositional phrases. They call such examples
\newterm{semantically indeterminate}. An example adapted
from theirs is \lingform{They mined the roads along the coast} in
which \lingform{along} may attach to either \lingform{mined} or
\lingform{roads} but means the same both ways. They observe 77 cases
out of 880 (although they also have a category called `other'
containing a further 78 cases).
Apart from the extraction errors and the semantically
indeterminate compounds, the remaining 244 triples were all
assigned a syntactic reading, either left-branching or right-branching.
The distribution of all 308 triples is shown in
table~\ref{tb:cy_method_test_dist}.
\begin{table*}
\centering
\begin{tabular}{|l|r|r|l|} \hline
Type & Number & Proportion & Example \\ \hline \hline
Error & 29 & 9\% & $\begin{array}{l}
\mbox{In {\em monsoon regions}} \\
\mbox{{\em rainfall} does not \ldots }
\end{array}$ \\ \hline
Indeterminate & 35 & 11\% & $\begin{array}{l}
\mbox{Most advanced aircraft have } \\
\mbox{{\em precision navigation systems}. }
\end{array}$ \\ \hline
Left-branching & 163 & 53\% & $\begin{array}{l}
\mbox{\ldots escaped punishment by } \\
\mbox{the Allied {\em war crimes tribunals}. }
\end{array}$ \\ \hline
Right-branching & 81 & 26\% & $\begin{array}{l}
\mbox{Ronald Reagan, who won two } \\
\mbox{{\em landslide election victories}, \ldots }
\end{array}$ \\ \hline
\end{tabular}
\caption{Test set distribution} \label{tb:cy_method_test_dist}
\end{table*}
An important aspect of choosing the syntactic structure of these
compounds is their dependence on context. When the test set
was analysed, the best reading {\em within the context} was assigned
to each compound. However, such readings are not necessarily
the most likely readings in all contexts, as I discussed in
section~\ref{sec:cn_context}. The program doesn't have access to the
context because it is only given the compound. It tries to give the best
out-of-context response (that is, the one most likely across all different
contexts), but even if it does, it may be scored as incorrect.
One example given in section~\ref{sec:cn_context} is \lingform{club cover
charge}. This typically has a right-branching analysis (a fee to
cover drinks exacted by a nightclub upon entrance), but in a golfing shop
may be left-branching (the price of a plastic jacket used to
protect golf sticks). If this were a test compound and it came from a text
about golf shops, where it was left-branching, then the program would
be marked incorrect if it chose a right-branching analysis, even though this is
the best possible answer without context.
Before I turn to the analysis procedure used to predict the
readings of compound nouns, two aspects of the test
methodology need to be mentioned. First, the test set was
manually analysed prior to the application of the model described here (or
indeed any automatic analysis procedure) and so the readings assigned are
independent of possible biases caused by developing the model. Second, the
test and training sets are disjoint (in the sense that no individual occurrence
of a word appears in both), even though they are taken from the same
corpus. This follows from an inspection of the two patterns used
to extract the training and test sets. Whenever
$T_{\mbox{test}}$ applies (see equation~\ref{eq:cy_method_testset}),
three elements of $\cal N$ appear consecutively. This sequence is
prohibited by $T_{\mbox{train}}$ (see
equation~\ref{eq:cy_method_trainset}).
\subsubsection*{Analysis}
Given the test set and the parameter estimates derived from the
training corpus, all that remains is to specify the decision
procedure used to analyse the test set. It is motivated directly by
the probabilistic model, which can be simplified for three word compounds.
Since there are only two possible analyses, the decision procedure simply
computes the probabilities of the two alternatives using
equation~\ref{eq:cy_model_analysis} in section~\ref{sec:cy_model} and
chooses that one with the greater probability.
The equation giving the probability of each analysis involves a sum over the
possible categories for each word. This must be computed for each analysis
so that all possible senses of a word contribute to the decision. For example,
if $w_1 \in t_1, w_2 \in t_2, w_3 \in t_{3a}$ and $w_3 \in t_{3b}$, then
four conceptual structures are possible:
\begin{itemize}
\item $t_1 \rightarrow t_2 \rightarrow t_{3a}$,
\item $t_1 \rightarrow t_{3a} \leftarrow t_2$,
\item $t_1 \rightarrow t_2 \rightarrow t_{3b}$ and
\item $t_1 \rightarrow t_{3b} \leftarrow t_2$.
\end{itemize}
Here, the first and third are left-branching and the second
and last right-branching. Thus two affinities must be added to compute the
probability of each analysis.
Notice that this strategy has the advantage that all evidence coming from
the observation of a word in the training set is collected together
when analysing that word in the test set, even though it may
have been split up across several different parameters.
It is more efficient if, instead of computing each of the two sums
individually, their ratio is computed, so that some elements cancel out. For
the purposes of the experiment, I will assume that all semantic classes are
the same size, thus allowing one further term to be cancelled. In
section~\ref{sec:cy_comparisons} below I will investigate empirically the
effects of this approximation. Having cancelled the common factors, we are
left with the following equation for $R_{\mbox{dep}}$, the ratio of the two desired
probabilities (under the dependency model).
\begin{eqnarray}
R_{\mbox{dep}} \,\, & \stackrel{\rm def}{=} \,\, &
\frac{
\Pr( w_1 w_2 w_3 \mbox{ is left-branching})
}{
\Pr( w_1 w_2 w_3 \mbox{ is right-branching})
} \nonumber \\
& = &
\frac
{
\sum_{
\scriptstyle t_i \scriptstyle \in
\scriptstyle \mbox{\it cats\/}(\scriptstyle w_i)
}
\Pr(t_{1} \rightarrow t_{2} \mid t_{2}) \Pr(t_{2} \rightarrow t_{3} \mid
t_{3})
}
{
\sum_{
\scriptstyle t_i \scriptstyle \in
\scriptstyle \mbox{\it cats\/}(\scriptstyle w_i)
}
\Pr(t_{1} \rightarrow t_{3} \mid t_{3}) \Pr(t_{2} \rightarrow t_{3} \mid
t_{3})
}
\label{eq:cy_method_dep_ratio}
\end{eqnarray}
For the moment, I have omitted from this equation the factor of 2 resulting
from the differences in $\mbox{\it choice\/}(m)$ between left-branching and
right-branching modificational structures. As observed in
section~\ref{sec:cy_model}, right-branching modification structures have
$\mbox{\it choice\/}(m) = 2$, while left-branching ones have
$\mbox{\it choice\/}(m) = 1$. This factor has been left out in order to make a
fairer comparison between the adjacency and dependency methods, even
though it is warranted by the assumptions underpinning the dependency
model. In this way, I avoid the criticism that this factor is an \foreign{ad
hoc} bias toward the more common analysis in the test set and unfairly
advantages the dependency model. The factor will be restored in
section~\ref{sec:cy_comparisons} under the designation tuned model.
Having computed $R_{\mbox{dep}}$, the decision procedure then assigns
a right-branching analysis if and only if this ratio is less than 1. When the
ratio is exactly 1, the program assigns a left-branching analysis as the
default.
One problem with the above formulation is the possibility of the
denominator being zero. This can easily occur, since many of
the parameter estimates will be zero. When this occurs,
the decision procedure chooses a left-branching analysis. The
reasoning goes as follows: there is no apparent evidence to
support a right-branching analysis. If there is evidence for a
left-branching analysis, then we should choose that. If there is
no evidence for either, then the best we can do is take the
default, which again is left-branching.
A particularly bad example of the effect of data sparseness
when using maximum likelihood estimates appears in this
equation. Both numerator and denominator contain the
multiplicand $\Pr(t_{2} \rightarrow t_{3} \mid t_{3})$. This factor
does not cancel out, because it varies across the terms of the
sums. If it happens to be zero in all of the terms of the sums,
then both the numerator and denominator will be zero and a left-branching
default will be selected. Yet the test example itself
provides the information that $w_2$ can be a modifier of
$w_3$. Thus, there must be some $t_{2}$ and $t_{3}$ for which
$\Pr(t_{2} \rightarrow t_{3} \mid t_{3})$ is actually non-zero (even
though it has been estimated to be zero).
To avoid this problem, the implementation specifically checks for this
occurrence. When it is detected, the ratio is recomputed under the
assumption that all of the parameters $\Pr(t_{2} \rightarrow t_{3}
\mid t_{3})$ are non-zero and equal to each other. Under this
assumption, these terms do cancel out and the ratio only depends
on the various $\Pr(t_{1} \rightarrow t_{3} \mid t_{3})$ and $\Pr(t_{1}
\rightarrow t_{2} \mid t_{2})$. This correction only applies to estimates for
$\Pr(t_{2} \rightarrow t_{3} \mid t_{3})$. No correction is made to the
probability estimates for $\Pr(t_1 \rightarrow t_2)$ and $\Pr(t_1 \rightarrow
t_3)$ in unseen cases; if unseen, their probability is estimated as zero.
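Putting the estimates to work, the decision procedure, including this
correction, can be sketched as follows. Here \texttt{pr} maps a category
pair $(t_i, t_j)$ to the estimate of $\Pr(t_i \rightarrow t_j \mid t_j)$,
with missing entries read as zero; the sketch is a reconstruction rather
than the original implementation:
\begin{verbatim}
def analyse(w1, w2, w3, cats, pr):
    """Return 'L' or 'R' for the compound w1 w2 w3 under the
    (untuned) dependency model."""
    num = den = 0.0      # the two sums of eq:cy_method_dep_ratio
    num_c = den_c = 0.0  # fallback sums, Pr(t2 -> t3 | t3) cancelled
    p23_seen = False
    for t1 in cats[w1]:
        for t2 in cats[w2]:
            for t3 in cats[w3]:
                p23 = pr.get((t2, t3), 0.0)
                p23_seen = p23_seen or p23 > 0.0
                num += pr.get((t1, t2), 0.0) * p23
                den += pr.get((t1, t3), 0.0) * p23
                num_c += pr.get((t1, t2), 0.0)
                den_c += pr.get((t1, t3), 0.0)
    if not p23_seen:             # every Pr(t2 -> t3 | t3) was zero:
        num, den = num_c, den_c  # recompute with that factor cancelled
    # Right-branching iff the ratio is strictly less than 1; ties and
    # zero denominators default to left-branching.
    return 'R' if den > 0.0 and num < den else 'L'
\end{verbatim}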
Let's consider an example of the decision procedure. Suppose
the program is presented with the compound \lingform{highland
outrigger tour}. The word \lingform{highland} appears in exactly one
category, \tc{land}{342} ($\tla$),
while \lingform{tour} appears in \tc{journey}{266} ($\tjo$).
Meanwhile \lingform{outrigger} appears in two categories:
\tc{support}{215} ($\tsu$) and \tc{ship}{273} ($\tsh$).
Because there is just one, binary, sense ambiguity,
each of the sums in equation~\ref{eq:cy_method_dep_ratio} has
just two terms. The ratio therefore becomes:
\begin{eqnarray}
\lefteqn{R_{\mbox{dep}} =} \label{eq:cy_method_rdep_hot} \\
& & \frac{
\Pr(\tla \rightarrow \tsu \mid \tsu) \Pr(\tsu \rightarrow \tjo \mid \tjo)
+ \Pr(\tla \rightarrow \tsh \mid \tsh) \Pr(\tsh \rightarrow \tjo \mid \tjo)
}
{
\Pr(\tla \rightarrow \tjo \mid \tjo) \Pr(\tsu \rightarrow \tjo \mid \tjo)
+ \Pr(\tla \rightarrow \tjo \mid \tjo) \Pr(\tsh \rightarrow \tjo \mid \tjo)
} \nonumber
\end{eqnarray}
There are four possible conceptual structures modelled in the
equation. The two right-branching ones (in the denominator)
include a modification relationship between the concepts
represented by \tc{land}{342} and \tc{journey}{266}. The first
left-branching one involves a relationship between \tc{land}{342}
and \tc{support}{215}, while the second contains a
relationship between \tc{land}{342} and \tc{ship}{273}. Since
both \tc{land}{342} $\rightarrow$ \tc{ship}{273} and \tc{land}{342}
$\rightarrow$ \tc{support}{215} are unlikely
relationships, the numerator is small. The first term of the
denominator is likewise small because \tc{support}{215}
$\rightarrow$ \tc{journey}{266} is reasonably unlikely. The
second term however will be significantly larger because both
\tc{land}{342} $\rightarrow$ \tc{journey}{266} and \tc{ship}{273}
$\rightarrow$ \tc{journey}{266} are plausible compounds. Therefore
$R_{\mbox{dep}}$ is less than 1 and a right-branching analysis is
preferred.
If both $\Pr(\tsh \rightarrow \tjo \mid \tjo)$ and
$\Pr(\tsu \rightarrow \tjo \mid \tjo)$ were estimated to be
zero (only possible because of data sparseness since \tc{ship}{273}
$\rightarrow$ \tc{journey}{266} is quite plausible),
then the ratio would be recomputed as:
\begin{equation}
R_{\mbox{dep}} = \frac
{\Pr(\tla \rightarrow \tsu \mid \tsu) + \Pr(\tla
\rightarrow \tsh \mid \tsh)}
{2 \Pr(\tla \rightarrow \tjo \mid \tjo)}
\label{eq:cy_method_rdep_hot_sparse}
\end{equation}
\subsection{Results}
\label{sec:cy_results}
Each of the 244 compounds from the test set that received either
a left-branching or right-branching analysis was passed to a
program which implemented the decision procedure just described.
The distribution of responses is shown in
table~\ref{tb:cy_results_all}.
\begin{table*}
\centering
\begin{tabular}{|l|c|c|} \hline
& Actual Left & Actual Right \\
\hline Response Left & 132 & 24 \\
\hline Response Right & 31 & 57 \\
\hline
\end{tabular}
\caption{Response distribution}
\label{tb:cy_results_all}
\end{table*}
The proportion of correct responses was 77.5\%. If we instead
always chose a left-branching analysis, we would get 66.8\%. An
important question is whether the observed performance is really
better than this simple strategy, allowing for statistical
fluctuations. The standard statistical test for comparing two
proportions when the sample sizes are large
involves computing a pooled estimate of the proportion from
both samples in order to estimate the variance. Since the samples are
sufficiently large, it is possible to use $z$-scores to estimate the
probability of seeing the observed difference in proportions by
chance. In this case, the pooled estimate of the proportion is
72.1\%, which yields a $z$-score of 2.64. Therefore there is less
than 0.5\% chance of observing this, or a greater, difference in
proportions by chance alone (one-tailed test). So the observed
difference between the program's accuracy and that
of a simple guessing strategy is significant at the 1\% level.
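These figures can be checked mechanically; the following few lines of
Python reproduce the pooled proportion and the $z$-score:
\begin{verbatim}
from math import sqrt

n = 244                  # compounds scored under both strategies
p1, p2 = 0.775, 0.668    # model accuracy and always-left baseline
pooled = (p1 + p2) / 2   # equal sample sizes, so a simple mean: 0.721
se = sqrt(pooled * (1 - pooled) * (1 / n + 1 / n))
z = (p1 - p2) / se       # approximately 2.64
\end{verbatim}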
The decision procedure defaults to a left-branching response
when it has no evidence to support either analysis. This
happened in only 9 test cases (3.69\%), so the choice of default
did not influence the performance greatly.\footnote{Six of these nine are
left-branching, so the expected accuracy of an algorithm using uniform
random choice as the default would be just 0.6\% lower.} A further 76 cases
(31.1\%) had evidence for only one of the analyses, and the
distribution of these cases is shown in
table~\ref{tb:cy_results_sparse}. The correctness in these
cases was not appreciably different from the overall correctness.
This suggests that parameter estimates of zero were not especially
less accurate than other parameter estimates.
\begin{table*}
\centering
\begin{tabular}{|l|c|c|} \hline
& Actual Left & Actual Right \\
\hline Response Left & 20 & 1 \\
\hline Response Right & 17 & 38 \\
\hline
\end{tabular}
\caption{Response distribution for cases with evidence for only
one analysis}
\label{tb:cy_results_sparse}
\end{table*}
Interestingly, there was only one right-branching compound that
had evidence for just a left-branching analysis
(the compound is \lingform{reservoir evaporation losses}). A possible
explanation is that writers try to avoid right-branching
compounds unless there is either no possibility of a
left-branching reading, or a strong association between the head
and its more distant modifier.
A side effect of using conceptual association is that words must be
assigned to concepts in order to calculate the relative probabilities of the
different analyses. Each term in the sums appearing in
equation~\ref{eq:cy_method_dep_ratio} represents an assignment
of words in the compound to concepts. By comparing the
relative sizes of these terms, we can estimate the likelihood of
different possible senses for a word. For instance, in the
\lingform{highland outrigger tour} example at the end of
section~\ref{sec:cy_method} above, the most significant
term in equation~\ref{eq:cy_method_rdep_hot} corresponds to
$ \tsh \rightarrow \tjo \leftarrow \tla$. Therefore
the program has evidence that \lingform{outrigger} is used in the sense
of \tc{ship}{273} rather than \tc{support}{215}.
Of course, this evidence only reflects the information provided by other
words in the compound (in fact, only those other words which are modifiers
or heads of the word in question). While this amount of context
is admittedly very narrow, it is still useful, especially since
providing sense cues is one of the linguistic functions of
compounding (the reason for the first word in \lingform{golf club} is
the ambiguity of the second). However, I have not measured the
performance of this strategy for sense disambiguation.
The initial experimental goal was to measure the accuracy of the
model in predicting the parses of three word noun compounds.
This has been achieved with the result being 77.5\%. In the next
section I will describe additional experiments in which
different aspects of the model are varied.
\subsection{Comparisons}
\label{sec:cy_comparisons}
In the experiments below comparisons are made across six dimensions.
\begin{enumerate}
\item Analysis method: The performance of the dependency model is
compared to that of an equivalent adjacency algorithm.
\item Training pattern: A range of different training patterns are used
to estimate the model parameters, including windowed association.
\item Training symmetry: A symmetric training scheme is
compared to the asymmetric one.
\item Tuning factors: The models are applied both with and without
the tuning factors suggested by the meaning distributions theory.
\item Parameterisation: The concept based model is compared to a lexical one.
\item Tagged data: Training data is gathered by relying on the predictions
of an automatic tagger, instead of on the heuristic used earlier.
\end{enumerate}
This section will detail each of these comparisons.
\subsubsection*{Dependency meets adjacency}
Since adjacency algorithms have been suggested a few times in the
literature, the first comparison is between the dependency method
and an equivalent adjacency method.
The decision procedure for the adjacency method differs from the
dependency method only in the way in which the parameters are used.
To obtain an adjacency algorithm, it is
sufficient to rewrite equation~\ref{eq:cy_method_dep_ratio} to use only
adjacent pairs of words. That is, when given the compound ``$w_1 w_2
w_3$'', the adjacency method computes the following ratio and assigns a
right-branching structure if and only if it is less than 1.
\begin{equation} \label{eq:cy_adjacency_adj_ratio}
R_{\mbox{adj}}
\,\, = \,\,
\frac
{
\sum_{
\scriptstyle t_i \scriptstyle \in
\mbox{\it \scriptsize cats($\scriptstyle
w_i\!$)}
}
\Pr(t_1 \rightarrow t_2)
}
{
\sum_{
\scriptstyle t_i \scriptstyle \in
\mbox{\it \scriptsize cats($\scriptstyle
w_i\!$)}
}
\Pr(t_2 \rightarrow t_3)
}
\end{equation}
Notice that the correction applied in section~\ref{sec:cy_method} when the
common factor in equation~\ref{eq:cy_method_dep_ratio} is zero, does not
apply here because there is only one factor in each term of the sum.
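For comparison with the earlier sketch of the dependency decision
procedure, the adjacency method reduces to comparing two sums, again using
\texttt{pr} for the parameter estimates (a sketch, not the original code):
\begin{verbatim}
def analyse_adjacency(w1, w2, w3, cats, pr):
    """Return 'L' or 'R' for w1 w2 w3 by comparing the
    associations of the two adjacent pairs only."""
    num = sum(pr.get((t1, t2), 0.0)
              for t1 in cats[w1] for t2 in cats[w2])
    den = sum(pr.get((t2, t3), 0.0)
              for t2 in cats[w2] for t3 in cats[w3])
    return 'R' if den > 0.0 and num < den else 'L'
\end{verbatim}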
The training pattern used earlier to estimate the parameters of the dependency
model is derived from that model, so the experiment also uses a range of
alternative training schemes, all unsupervised. One of these results in the
same training set as that proposed by Liberman and Sproat~(1992) for their
adjacency algorithm.\footnote{Specifically, the windowed scheme with $n =
2$.}
The alternative schemes use a window to collect training instances by
observing how often a pair of nouns co-occur within some fixed number of
words. A variety of window sizes are used. For each, the training set is
given by
\begin{equation}
T_{\mbox{train}} = \{ (w_1, w_i) \mid w_1 w_2 \ldots w_i; 1 < i \leq n;
w_1,w_i \in {\cal N} \}
\label{eq:cy_adjacency_windowtrain}
\end{equation}
where $w_1 w_2 \ldots w_i$ denotes the occurrence of $i$ tokens
in sequence in the corpus, and $n \geq 2$ is the window width. Note that,
just as for the pattern, windowed counts are asymmetric.
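A sketch of the windowed collection scheme (a reconstruction, as before):
\begin{verbatim}
def windowed_pairs(tokens, sure_nouns, n):
    """Collect (w1, wi) for every ordered pair of sure nouns
    co-occurring within a window of n tokens; the intervening
    tokens need not be nouns."""
    pairs = []
    for i, w1 in enumerate(tokens):
        if w1 not in sure_nouns:
            continue
        for wi in tokens[i + 1:i + n]:
            if wi in sure_nouns:
                pairs.append((w1, wi))
    return pairs
\end{verbatim}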
To ensure that the test set is disjoint from the training data, all occurrences
of the test noun compounds have been removed from the training corpus.
Each training scheme yields counts of noun pairs which are used to compute
maximum likelihood estimates of the parameters as before using
equation~\ref{eq:cy_method_assoc_metric} in
section~\ref{sec:cy_method}.
Eight different training schemes have been used
and each set of estimates used to analyse the test set under both the
adjacency and the dependency method. The schemes used are:
\begin{itemize}
\item the pattern given by equation~\ref{eq:cy_method_trainset} in section
\ref{sec:cy_method}; and
\item windowed training schemes with window widths of 2, 3, 4, 5, 10, 50
and 100 words.
\end{itemize}
\begin{figure*}
\centering
\input{figures/cy_dva.tex}
\caption{Accuracy of dependency and adjacency method for
various training schemes} \label{fig:cy_adjacency_dva_accuracy}
\end{figure*}
The accuracy on the test set for all these
experiments is shown in figure \ref{fig:cy_adjacency_dva_accuracy}.
As can be seen, the dependency model is more accurate than the
adjacency model. This is true across the whole spectrum of training
schemes. In the case of the pattern training scheme, the
difference between 68.9\% for adjacency and 77.5\% for dependency results
in a pooled $z$-score of 2.14, which is statistically significant at the 5\%
level ($p = 0.0162$; one-tailed test), demonstrating the superiority
of the dependency model, at least for the compounds within Grolier's
encyclopedia.
In no case do any of the windowed training schemes outperform
the pattern scheme. It seems that additional instances admitted
by the windowed schemes are too noisy to make an improvement.
While using windowed co-occurrence did not help here, it is possible
that under more data sparse conditions better performance could
be achieved by this method.
The proportion of cases in which the procedure was forced to
guess, either because no data supported either
analysis or because both were equally supported, is shown
in figure~\ref{fig:cy_adjacency_dva_guess} and is
quite low for both methods. For the pattern and two word window training
schemes, the guess rate is less than 4\% for both models.
In the three word window training scheme, the guess
rates are less than 1\%. For all larger windows,
neither method is ever forced to guess.
It is a coincidence of the particular test and training sets
that the guess rates of both models are always equal in this figure.
\begin{figure*}
\centering
\input{figures/cy_dva_g.tex}
\caption{Guess rates of dependency and adjacency method for
various training schemes} \label{fig:cy_adjacency_dva_guess}
\end{figure*}
Initial results from applying these methods to the \acronym{ema} corpus on
ceramic materials have been obtained by ter Stal~(1995), and support the
conclusion that the dependency model is superior to the adjacency model.
\subsubsection*{Symmetric association}
The association used so far is asymmetric.
It is possible that the windowed training schemes could benefit from a
symmetric association, since that would permit paraphrases of compounds
involving post-modifiers to be included as training instances.
For example, observing \lingform{sports played in summer} would be used
as evidence that \lingform{summer} is a likely modifier of \lingform{sports}
(assuming that the window width was at least 4). This would provide a way
of reducing data sparseness too, since it would halve the number of free
parameters.
In order to test whether symmetric association schemes could make a
significant difference to the performance of the program, five of the training
schemes have been repeated using symmetric counts (that is, whenever
$(w_1, w_2)$ is observed as a training instance increment the count for
$(w_2, w_1)$ as well). The results are shown in
figures~\ref{fig:cy_adjacency_sym_accuracy}
and~\ref{fig:cy_adjacency_sym_guess}. The accuracy figures for
asymmetric association are shown with a broken line. As expected, the
guess rates fall, but the accuracy also drops slightly. Only for relatively
large window widths does symmetric association outperform the asymmetric
schemes. The dependency model still displays a markedly
greater accuracy across all training schemes.
\begin{figure}
\centering
\input{figures/cy_sym.tex}
\caption{Accuracy of symmetric association for
various training schemes} \label{fig:cy_adjacency_sym_accuracy}
\end{figure}
\begin{figure}
\centering
\input{figures/cy_sym_g.tex}
\caption{Guess rates of symmetric association for
various training schemes} \label{fig:cy_adjacency_sym_guess}
\end{figure}
\subsubsection*{Tuned model}
As mentioned in section~\ref{sec:cy_method},
equation~\ref{eq:cy_method_dep_ratio} omits the factor of 2 resulting from
the difference in $\mbox{\it choice\/}(m)$ between left-branching and
right-branching modificational structures. It also simplifies by assuming that
the sizes of semantic classes are uniform. I will now report the results of
employing a model in which both of these deviations from the dependency
model are corrected. I call this the tuned model.
While these changes are theoretically motivated only in the dependency
method, I have also applied them to the adjacency method for comparison.
To implement them, equations~\ref{eq:cy_method_dep_ratio}
and~\ref{eq:cy_adjacency_adj_ratio} must be modified to incorporate a
factor of $\frac{1}{\mid t_1 \mid \mid t_2 \mid \mid t_3 \mid}$ in each
term of the sum and the entire ratio must be multiplied by two.
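In terms of the earlier sketch of the decision procedure, these extensions
amount to weighting each term and doubling the ratio; \texttt{size}, giving
the number of nouns in each category, is the only new ingredient (again a
sketch under those assumptions):
\begin{verbatim}
def analyse_tuned(w1, w2, w3, cats, pr, size):
    """Tuned dependency decision: weight each term by the inverse
    product of the three category sizes, and multiply the ratio by
    two in favour of left-branching."""
    num = den = 0.0
    for t1 in cats[w1]:
        for t2 in cats[w2]:
            for t3 in cats[w3]:
                w = 1.0 / (size[t1] * size[t2] * size[t3])
                p23 = pr.get((t2, t3), 0.0)
                num += w * pr.get((t1, t2), 0.0) * p23
                den += w * pr.get((t1, t3), 0.0) * p23
    # 2 * num / den < 1 selects right-branching.
    return 'R' if den > 0.0 and 2.0 * num < den else 'L'
\end{verbatim}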
Five training schemes have been applied with these extensions.
The accuracy results are shown in
figure~\ref{fig:cy_adjacency_tuned_accuracy}. For comparison, the
untuned accuracy figures are shown with dotted lines. A marked
improvement is observed for the adjacency model, while the
dependency model is only slightly improved.
\begin{figure*}
\centering
\input{figures/cy_tuned.tex}
\caption{Accuracy of tuned dependency and adjacency methods
for various training schemes} \label{fig:cy_adjacency_tuned_accuracy}
\end{figure*}
\subsubsection*{Lexical association}
The conceptual probabilistic model given uses $|S| \times |S|$ parameters. If
instead of using conceptual association, a lexical parameterisation were
used, the probabilistic model would have $|{\cal N}| \times |{\cal N}|$
parameters for a total of 8.1 billion. Apart from the probable data
sparseness problem, manipulating such a model would involve huge
computational expense.
However, it is possible to implement such a system for a small test set. To
determine the cost (if any) of assuming that words in the same category
behave identically, I have done this for the test set used above.
The pattern training scheme was retrained using lexical counts for both the
dependency and adjacency method, but only for the words in the test set.
Left-branching is favoured by a factor of two as described in the previous
section, but category sizes were omitted (these being meaningless for the
lexical association method).
Accuracy and guess rates are shown in
figure~\ref{fig:cy_lexical_lex_accuracy}. Guess rates are, unsurprisingly,
much higher for lexical association. Probably because of this, the accuracy
falls. Therefore grouping words into concepts not only makes the model
tractable, but improves the performance by allowing generalisation amongst
similar words.
\begin{figure*}
\centering
\input{figures/cy_lvc.tex}
\caption{Accuracy and guess rates of lexical and conceptual association}
\label{fig:cy_lexical_lex_accuracy}
\end{figure*}
\subsubsection*{Using a tagged corpus}
One problem with the training method given in
section \ref{sec:cy_method} is the restriction of
training data to nouns in $\cal N$. Many nouns,
especially common ones, have verbal or adjectival
usages that preclude them from being in $\cal N$.
Yet when they occur as nouns, they still provide
useful training information that the system ignores.
To test whether using tagged data would
make a difference, the freely available Brill tagger (Brill,~1993)
was applied to the corpus. Since no manually tagged
training data is available for our corpus,
the tagger's default rules were used (these
rules were produced by Brill by training on the
Brown corpus). This results in rather poor
tagging accuracy, so it is quite possible that a
manually tagged corpus would produce better
results.
\begin{figure*}
\centering
\input{figures/cy_tag.tex}
\caption{Accuracy using a tagged corpus for various
training schemes} \label{fig:cy_tagged_tag_accuracy}
\end{figure*}
Three training schemes have been used and the
tuned model then applied to the test set.
Figure \ref{fig:cy_tagged_tag_accuracy} shows the resulting
accuracy, with accuracy values from figure
\ref{fig:cy_adjacency_tuned_accuracy} displayed with dotted lines.
If anything, admitting additional training data based
on the tagger introduces more noise, reducing the
accuracy. However, for the pattern training scheme
an improvement was made to the dependency
model, producing the highest overall accuracy of 80.7\%.
\subsection{An Experiment with Human Judges}
\label{sec:cy_human}
For roughly 4 out of 5 three word compounds, the parsing strategy suggested
by the dependency model predicts the correct bracketing. To achieve this, the
model assigns probabilities to modificational structures by computing
statistics from a corpus. However, these probabilities are conditioned only
on the compound in isolation; the model is incapable of varying its analysis
in response to different contexts.\footnote{I should emphasise that the fault
lies with the probabilistic model, not the meaning distributions theory. In
terms of section~\ref{sec:md_context}, the model entirely ignores three of
the five types of context (topic, author and discourse structure) and only
statically models the other two (register and world knowledge), making it a
relatively crude model overall.}
In section~\ref{sec:cn_context} I argued that the syntactic analysis of
compounds can vary with context, so there must be an upper limit to the
model's performance that is less than perfect. Putting it in terms of the data
requirements theory, we know that there is a non-zero optimal error rate. In
order to properly evaluate the accuracy results given above, it would be
useful to have some idea of what this upper limit is. In this section I will
discuss some work toward estimating that limit.
\subsubsection*{Some important qualifications}
One approach to this is to ask human judges to perform the same task as the
program does using exactly the same information. Hindle and Rooth~(1993)
pursue this approach for prepositional phrase attachment, treating the result
as a significant indication of the optimal accuracy. For each test case, the
judge is given just those values upon which the probabilistic model is
conditioned and required to assign their best guess as to the appropriate
in-context analysis. Resnik~(1993:128) has also conducted a similar
experiment for parsing three word compound nouns. It is worth considering
some of the assumptions of such an experiment.
To begin with, the task facing the judges is an artificial one --- it isn't a
problem that humans would normally encounter. So the assumption that
human judges will achieve a near-optimal accuracy is questionable on
psychological grounds. Also, it is impossible to give the judges exactly the
same information as the program is given. For example, if the program is
trained on a newswire corpus, the trained model will incorporate the
information that test compounds come from that genre. So to be fair, we
must tell the human judge what kind of text the test set comes from. But
how much detail do we give? In principle, the entire training corpus might
be of use to the judge, but supplying this is clearly impractical. Also, if a
test case contains a word that the judge doesn't know, how is this to be
treated? She is unlikely to achieve optimal performance then.
These problems result in humans performing at below optimal accuracy.
Might the experiment therefore provide a lower bound on the optimal
accuracy? In general, no. Even if the probabilistic model conditions on
exactly the same set of variables that are provided to the judges, any
reasonable model (that is, every one bar the complete joint distribution
discussed in section~\ref{sec:dr_beginning}) makes some assumptions that
limit the possible answers. For example, Hindle and Rooth's~(1993) model
assumes that the evidence provided by the noun and verb can be combined
independently. Therefore, their program cannot ever respond with the
following analyses:\footnote{I have simplified here by ignoring the effect of
the parameters $\Pr(n, \nullsym)$; however, the basic point remains valid.}
\begin{itemize}
\item $\langle v_1, n_1, p \rangle$ gets verbal attachment,
\item $\langle v_1, n_2, p \rangle$ gets nominal attachment,
\item $\langle v_2, n_2, p \rangle$ gets verbal attachment, and
\item $\langle v_2, n_1, p \rangle$ gets nominal attachment.
\end{itemize}
Human judges are under no such restriction, and therefore potentially have
higher accuracy rates than are optimal for the program. Likewise, in the
model of compound nouns reported in this thesis, the probabilistic model
conditions only on semantic classes. It seems unfair in the extreme to
require judges to bracket sequences of thesaurus categories simply because
the model chooses this as its internal representation. I am convinced that
humans would do far worse than my program under such conditions. Yet, if
we give the judges word sequences, they will have more information at their
disposal than the program does.
However, since the test set is far too small to provide any insight into the
optimal accuracy, there appears to be little choice in the matter. Therefore,
keeping all these qualifications in mind, I will now report a human judges
experiment for the compound noun parsing task. The goal of the experiment
is to estimate the optimal performance of any analysis system that, given a
compound, predicts the bracketing assigned to it by the manual annotation
regime that I used to mark up the test set. Note that this differs from
predicting the \scare{correct} parse of the compound, since it allows both
for idiosyncrasies in my annotation of the test set and for supervision
errors.\footnote{I am not yet aware of any supervision errors.} Hindle and
Rooth fail to mention these possible causes of error, ascribing the entire
error rate to context dependence.
\subsubsection*{Experimental method}
Seven subjects volunteered to act as judges. All came from the local
research environment, although with varying backgrounds. Two have
doctoral degrees in computer science unrelated to \nlp\ and linguistics, four
are postgraduate students in computer science (one of whom is working in
\nlp), and one is an administrative assistant (no tertiary education). All are
native speakers of English. The judges were randomly assigned to one of
two blocks (3 judges in block 1 and 4 judges in block 2) prior to the
experiment. For comparison purposes, Hindle and Rooth~(1993) use two
judges and Resnik~(1993) uses one.
A custom designed interface was built which presented each test compound
out of context to the judge, along with the two possible readings. In each
case the judge was required to select either the left-branching or
right-branching analysis, according to their best guess about the compound
in the (unknown) context. The interface allowed additional comments
to be made on each compound. The judges were told that the compounds
came from an encyclopedia and instructed to take as long as necessary to
maximise their performance. The complete set of instructions given appears
in appendix~\ref{appendix:humaninstructions}.
Each compound was presented only once (responses on compounds that
appeared multiple times in the test set were weighted so that the results
below represent performance by token frequencies). The order in which the
compounds were presented was chosen randomly, each block getting a
different order. The time taken to analyse each compound was recorded,
although the judges were unaware of this. The total time taken to analyse
the 215 compounds ranged from 26 minutes to 72 minutes, with an average
of 49 minutes.
\subsubsection*{Results}
The responses given by the judges were evaluated in the same manner as the
computer program described above, with the resulting accuracy figures
shown in table~\ref{tb:cy_human_results}. The order of presentation did
not appear to systematically alter the performance, since the block averages
do not differ significantly. The average accuracy is 81.5\%, showing a
surprisingly large error rate.
This is not significantly better than the 80.7\%
achieved by the dependency model trained on tagged data. The result also
matches that found by Resnik~(1993:128) whose single judge scored only
80.0\% (on around 160 compounds with 100\% recall).
\begin{table*}
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline
Block & \multicolumn{3}{|c|}{Block 1} &
\multicolumn{4}{|c|}{Block 2} & Average \\
\cline{1-8}
Judge & A & B & C & D & E & F & G & \\
\hline
Accuracy & 78.7\% & 80.3\% & 86.1\% & 79.1\% & 79.9\% & 82.8\% &
83.6\% & 81.5\% \\
\hline
\end{tabular}
\caption{Accuracy of human judges on parsing three word
compounds out of context}
\label{tb:cy_human_results}
\end{table*}
These results suggest that the computer program may be performing at close
to optimal. However, it is also possible that neither the humans nor the
computer are performing optimally. One way to detect this is to examine
how often the computer agrees with the human subjects. If the humans
make entirely different errors to those made by the computer, then it is likely
that neither are making optimal predictions. We still cannot make the
converse conclusion if they agree well.
Table~\ref{tb:cy_human_agreement} shows the inter-human agreement
rates and the human-computer agreement rates. The inter-human agreement
rate for a judge is derived by taking the proportion of matching analyses
given by each of the other six judges and averaging these six proportions.
The overall average is computed by averaging the seven agreement rates so
derived. The human-computer agreement rate for a judge is just the
proportion of matching analyses given by the best dependency model (the
one with 80.7\% accuracy). The overall average is the mean of these seven
agreement rates. To dispel suspicion that one of the judges is pulling down
all the other judges' averages, the highest agreement rate between any pair of
judges is 84.4\%; the lowest is 73.0\%.
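For clarity, the following sketch shows how the two agreement statistics
are derived. Here \texttt{responses} is assumed to map each judge to a
list of analyses (one per test compound) and \texttt{computer} is the
corresponding list for the best dependency model; weighting by token
frequency is omitted for brevity.
\begin{verbatim}
def proportion_matching(a, b):
    # Fraction of test cases on which two analysis lists agree.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def inter_human_rate(judge, responses):
    # Average the matching proportions against each other judge.
    others = [j for j in responses if j != judge]
    return sum(proportion_matching(responses[judge], responses[j])
               for j in others) / len(others)

def human_computer_rate(judge, responses, computer):
    return proportion_matching(responses[judge], computer)
\end{verbatim}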
\begin{table*}
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline
Block & \multicolumn{3}{|c|}{Block 1} &
\multicolumn{4}{|c|}{Block 2} & Average \\
\cline{1-8}
Judge & A & B & C & D & E & F & G & \\
\hline
Inter-human & 77.6\% & 78.6\% & 82.2\% & 79.2\% & 80.4\% & 82.1\% &
81.7\% & 80.3\% \\
\hline
Human-computer & 73.4\% & 75.8\% & 77.5\% & 76.2\% & 71.3\% &
73.4\% & 77.5\% & 75.0\% \\
\hline
\end{tabular}
\caption{Inter-human and human-computer agreement rates on compound
parsing task without context}
\label{tb:cy_human_agreement}
\end{table*}
The computer agreement rates are clearly below the inter-human agreement
rates, but not by a large margin. In fact, the computer has better agreement
with 6 out of 7 judges than that between the least agreeing pair of humans.
The difference could be plausibly attributed to the lower accuracy rate of the
computer (notice that the inter-human agreement rates are correlated with
individual judge accuracy rates). Therefore, the tentative conclusion of the
experiment is that the optimal error rate is relatively large and the
performance of the best dependency model is reasonably close to it. This
suggests that improvements in the performance of the algorithm are unlikely
without the addition of context sensitivity to the model.
\subsection{Discussion}
\label{sec:cy_discuss}
\subsubsection*{Conclusions}
The experiments above demonstrate a number of important points.
The most general of these is that even quite crude corpus
statistics can provide information about the syntax of compound
nouns. At the very least, this information can be applied in
broad coverage parsing to assist in the control of search.
I have also shown that with a corpus of moderate size it is
possible to get reasonable results without using a tagger
or parser by employing a customised training pattern.
The significance of the use of conceptual association deserves
some mention. I have argued that without it a broad coverage system
would be impossible. In this study, not only has the technique proved its
worth by supporting generality, but through generalisation of
training information it outperforms the equivalent lexical association
approach given the same information. This is in contrast to previous work
on conceptual association where it resulted in little
improvement on a task which could already be performed (Resnik and
Hearst,~1993).
Amongst all the comparisons performed in these experiments one stands out
as exhibiting the greatest contrast. In all experiments the dependency
method provides a substantial advantage over the adjacency method, even
though the latter is more prevalent in proposals within the literature. This
result gives strong support to the meaning distributions theory. The
dependency model also has the further commendation that it predicts
correctly the observed proportion of left-branching compounds found in two
independently extracted test sets.
In all, the most accurate technique achieved an accuracy
of 81\% as compared to the 67\% achieved by guessing left-branching.
Given the high frequency of occurrence of noun compounds in many texts,
this suggests that the use of these techniques in probabilistic parsers will
result in higher performance in broad coverage natural language
processing.
Furthermore, the human judgement experiment suggests that without context
people achieve about 82\% accuracy on the same task.
If we assume that this figure represents the optimal accuracy,
then the model of noun compound syntax based
on the meaning distributions theory is close to the best possible
without context.
While it is impossible to infer whether other \sll\ designs
based on the meaning distributions theory will also exhibit
high performance, the results here provide some empirical
evidence that the meaning distributions architecture is a fruitful
area of the design space.
\subsubsection*{Limitations}
I will review four of the limitations of this experimental work, three being
limitations of the proposed parsing strategy, and the last of the experimental
design.
\begin{enumerate}
\item The model is not sensitive to context.
\item Roget's categories do not always form appropriate semantic classes.
\item Sense ambiguity can confuse the program.
\item The test given here is limited to one corpus.
\end{enumerate}
Possibly the most significant limitation of the parsing strategy is the
failure to use contextual knowledge in making analyses. The
program is presented with each compound out of context and the
model only uses information obtained from within the
compound. While there is no easy way to tell what part of the error rate is
due to context dependence and what part is due to other model deficiencies
(like violated assumptions in the model or poor parameter estimation), the
human judgement experiment suggests that the correct analysis of a
compound depends heavily on wider context. Therefore any model which
ignores context will exhibit a significant error rate.
Another source of problems is the use of Roget's categories as conceptual
groupings. The intention of grouping words into concepts was to capture
synonymy. But Roget's thesaurus groups together words which are related
in much more arbitrary ways. For example, the category \tc{river}{348}
contains all of \lingform{cataract}, \lingform{flux}, \lingform{fountain},
\lingform{force}, \lingform{hydraulics}, \lingform{rush},
\lingform{shower}, \lingform{syringe} and \lingform{tide}.
The assumption that all these words (in their relevant senses) behave
identically is clearly a poor one. There are also many missing senses:
\lingform{present} for instance only appears in the category
\tc{giving}{784} when it should also appear in \tc{present time}{118} with
\lingform{hour}, \lingform{nowadays} and \lingform{today}.
Thus, the generalisations made by the model
from words to concepts are both noisy and haphazard. This
problem might be overcome by the use of a large lexical
ontology such as WordNet (Miller,~1990).
Even though the model performs a form of sense
disambiguation, it can often go wrong here too. In the test
example \lingform{country music singer}, the program uses both of the
possible senses of \lingform{country}, represented by the categories
\tc{abode}{189} and \tc{government}{737a}. Since
\lingform{singer} is in the category \tc{musician}{416} and musicians can
come from many different governed social units (villages,
towns, cities, countries, etc) the program uses the
\tc{government}{737a} sense of \lingform{country} to conclude
incorrectly that a right-branching structure is more likely (that is,
\lingform{country} modifies \lingform{singer} rather than
\lingform{music}).
If a practical sense disambiguation algorithm (see Yarowsky,~1995)
were available then both the training and test sets could
be preprocessed by this algorithm to yield sense-tagged data.
There would then be no need to distribute the evidence arising
from a word amongst its different possible concepts and, as long
as the sense disambiguation worked accurately, these kinds of
errors would be eliminated.
Finally, the results given here are specific to the style and size of
corpus I have used. It is not possible to make conclusions
about the applicability of the technique to other types
of text, or even about the behaviour on larger or smaller training
corpora. As in all statistical learning systems, it is important to
ask whether sufficient training data was used. Even though the corpus is
fairly large, there is a problem with data sparseness: a training
set numbering tens of thousands has been used to estimate
parameters numbering millions. However, the low guess rate
(less than 4\%) and near-optimal accuracy suggest that the problem is
not as severe as this ratio implies. It seems likely that such performance
under extremely data sparse conditions is due to the extreme non-uniformity
of the bin distribution. Also, a significantly larger corpus (written in a
single register) is unlikely to be available to any but the largest
of projects. For the purposes of answering the scientific
question of whether a practical statistical model can parse
compound nouns, there appears to be sufficient training data. But we
cannot infer from the results given above whether using more
data would improve accuracy.
\section{Noun Compound Semantics}
While the syntactic analysis of compound nouns explored in the first half of
this chapter could be used to make parsing more efficient, it is only the first
step in understanding compound nouns. Understanding them also entails
being able to identify the semantic relationships implied by them.
The remainder of this chapter will describe some experiments in applying \sll\
techniques to this problem.
Though the ultimate goal is the semantic analysis of arbitrary noun compounds,
the experiments here concern only the paraphrasing of two word compounds
that have a prepositional paraphrase.
The scope of the experiments is discussed in section~\ref{sec:ce_problem},
leading to a precise task definition.
In section~\ref{sec:ce_model}, I will develop a probabilistic model
of noun compound paraphrasing based on the meaning distributions theory.
The model combines information from both the head and the
modifier in order to determine the semantic relationship between
them.
An overview of the components of the experimental setup is given
in section~\ref{sec:ce_design}, many of which are shared with
the earlier parsing experiments (see section~\ref{sec:cy_design}).
Full details of the experimental
method follow in section~\ref{sec:ce_method}. These include
the method used to collect test and training data from the corpus,
the concepts used, the parameter estimation strategy and the
analysis procedure.
Section~\ref{sec:ce_results} gives the resulting performance
measures for the paraphrasing model, both with and without
a simple smoothing technique. Further results derived by
applying restrictions to avoid data sparseness also
appear there. Finally, some conclusions are drawn
in section~\ref{sec:ce_discussion}.
\subsection{Defining the Semantic Analysis Problem}
\label{sec:ce_problem}
\subsubsection*{Semantic classifications}
Before considering different available semantic classifications of compounds,
I should mention one assumption that is made universally by constructive
theories of compound noun semantics. It is usual to assume that compounds
longer than two words are built up by a series of compounding steps, each
one incorporating its own implicit semantic relationship. For example,
\lingform{statistics tutorial room schedule} involves three separate implied
semantic relationships: the tutorials are about statistics, the room is used for
the tutorials and the schedule describes the usage of the room. It is assumed
that each of these relationships behaves exactly as it would in a two word
compound. That is, the semantics of \lingform{statistics tutorial},
\lingform{tutorial room} and \lingform{room schedule} can be combined by
conjunction to obtain the semantics of the whole.
If this is true, then giving a theory of the semantics of two word compounds
suffices to give a theory of all compounds. Since this assumption is so
common, I will adopt it here. Even if it is not strictly true, it seems
reasonable to expect that longer compounds will largely be driven by the
same semantic rules as two word compounds, so that any solution for the
latter will represent substantial progress on the former. It is worth pointing
out here that one exception to this assumption is proposed by
McDonald~(1982). His claim is that one noun cannot be modified by two
different modifiers with both implying the same semantic relationship. The
example used is \lingform{computer database management}, where
\lingform{database} specifies the patient role of \lingform{management},
forcing \lingform{computer} to have the agent role, even though
\lingform{computer management} would ordinarily imply a patient role.
This constraint also appears to be implied by Finin's~(1980) algorithm.
Having restricted attention to two word compounds, I will now consider
possible formulations of the semantic analysis task. To make this task more
concrete, it will be necessary to adopt a theory of their semantics. Unlike for
the grammar of compounds, there is virtually no consensus amongst
linguists on the possible semantics of compounds, which complicates the
task of choosing any one theory.
Levi's~(1978) theory (see section~\ref{sec:cn_meaning})
is attractive for two reasons. First, it is the most well-developed and
explicit theory of compound semantics available. Second, it
postulates a small set of possible semantic relations, which can be taken to
be the target set. Recall that she divides compounds into two kinds:
nominalisations and those with recoverably deleteable predicates. She
claims that only nine predicates are recoverably deleteable from all
compounds in the latter group. As one support for this claim, she allocates a
couple of hundred example compounds to their associated predicates.
So a natural problem specification would set the goal to be recovery of the
deleted predicate in compounds so formed. Stated in terms of Levi's
generative semantics view, this involves reversing the final step of the
derivation of the compound to find the intermediate semantic form. For
example, given the compound \lingform{olive oil} the predicate recovered
should be \semrel{from} because this compound is derived by
transformation of the intermediate form
\lingform{oil}~\semrel{from}~\lingform{olive}. Similarly,
\lingform{mountain lodge} should be analysed as
\lingform{lodge}~\semrel{in}~\lingform{mountain}.
One problem with this specification is that a given compound can have
several different, valid analyses. For instance, one entire sub-group of the
\semrel{make} predicate (mass noun modifiers) \shortquote{has a regular
alternative analysis under \semrel{be}}~(Levi,~1978:281). This multiplicity
of analyses Levi calls analytic indeterminacy. Since the allocation of test
cases to one or other predicate is partially a matter of individual preference,
evaluation is complicated, especially since the different predicates are
subjective and highly abstract.
An alternative approach would be to examine a set of compounds and
develop a new semantic classification for them, according to the data.
This follows the convention among linguists, all of whom seem to have
their own individual semantic divisions.
Vanderwende~(1993) chooses this path (see section~\ref{sec:cn_knowledge})
arriving at a set of 11 semantic relations, based on \lingform{wh}-questions,
derived from inspection of a development set of 59 compounds.
For example, \lingform{bird sanctuary} involves a \semrel{purpose}
relation because the pre-modifier answers the question \scare{What for?}
The difficulty with this approach is that the resulting classifications are
unlikely to be comprehensive and have no independent justification.
Furthermore, they remain highly abstract and subjective. Since linguists
who have specialised in the study of compounds disagree about even the
broad semantic divisions, attempting to specify our own comprehensive,
precisely defined classification is unlikely to result in a useful scheme. In
the absence of a specific application to provide the necessary semantic
distinctions, inventing intuitive ones can only lead to \foreign{ad hoc}
results.
Another alternative, the one I will select, is to classify compounds according
to their different paraphrases. So the problem is now cast as one of finding
the most preferred paraphrase of a compound. Taking this tack makes
evaluation far more concrete. While preferences for different paraphrases
are still subjective, paraphrases are at least precisely defined, unlike abstract
semantic relations. Decisions about the acceptability of different
paraphrases are on similar footing to grammatical acceptability judgements,
which form the foundation of syntactic theory even though they vary
between dialects and idiolects. Abstract semantic relations are, by
comparison, shadowy, elusive constructs.
Though Leonard~(1984) provides her own semantic classification of
compounds, the measurable goal of her computer analysis system is
paraphrasing, so in effect she adopts this approach. Given the compound
\lingform{mountain vista}, her system produces the paraphrase
\lingform{vista of a mountain or mountains}.
Likewise, \lingform{Monday morning} is rewritten as
\lingform{morning in or on a Monday}.
But the system's inability to distinguish between the two locative
prepositions in this second example exposes the underlying theory based on
semantic relations. All three prepositions, \lingform{in}, \lingform{on}
and \lingform{at} are subsumed under one locative
relation (Leonard,~1984:93).
Similarly, the semantic aspects of definiteness and number are not
distinguished by the semantic theory, leading to a somewhat underspecified
paraphrase. So the position Leonard takes is a hybrid between defining the
problem in terms of abstract semantic relations and defining it by
paraphrase.
In this work, I will define the problem solely in terms of paraphrasing. By
doing so, I am addressing a different, though related, problem. For example,
one semantic relation can be expressed by different paraphrases for purely
collocational reasons. Consider the compound \lingform{evening ride},
whose most preferred paraphrase in my dialect is \lingform{ride in the
evening}. Semantically, the compound serves to place the riding event
within a periodic time interval. The semantics of \lingform{night ride} are
as similar to those of \lingform{evening ride} as it is possible to get. Yet,
the most preferred paraphrase in my dialect for \lingform{night ride} is
\lingform{ride at night}.
Conversely, one paraphrase scheme can express different semantic relations.
A \lingform{one month vacation} has most preferred paraphrase
\lingform{vacation for one month}. Similarly, \lingform{defense outpost}
has most preferred paraphrase \lingform{outpost for defense}. Yet the
former clearly implies the duration of the vacation, while the latter implies
the purpose of the outpost. It should be clear from these examples that
defining the problem in terms of paraphrase means addressing a different
problem; though paraphrasing is neither obviously easier, nor obviously
more difficult, it is certainly more concrete.
\subsubsection*{Types of compound paraphrases}
By selecting paraphrases as the target of semantic analysis, the need to select
a set of abstract semantic classes is avoided. However, we still need to
know what paraphrases are possible. Some help in this endeavour comes
from looking again at the linguistic theories of compound noun semantics.
Those that make any attempt to define or characterise the elements of their
particular set of semantic relations, usually do so by means of paraphrases.
For example, Warren~(1978) explains virtually all her semantic classes with
reference to the typical paraphrase, summarising these observations in the
following passage.
\begin{citedquote}{Warren,~1978:47--48}
The semantic relation expressed by a compound is normally made overt in
its prepositional paraphrase. Source-Result allows an
\lingform{of}-paraphrase \dots Copula compounds, however, do not
allow any prepositional paraphrase \dots Purpose compounds allow a
\lingform{for}-paraphrase \dots
\end{citedquote}
By collecting together all such characteristic paraphrases, we can arrive at a
set of paraphrase schemes which, while not independently constructed, does
have some independent claim to being comprehensive, at least to the extent
that paraphrases represent the semantics of compounds.
All theories of compound semantics that give paraphrases posit at least one
of the following three paraphrase groups.
\begin{enumerate}
\item Copula compounds with paraphrases like \lingform{fish that is a tuna}.
\item Verbal-nexus compounds or nominalisations with paraphrases like
\lingform{demonstrations by students} and \lingform{construction of
buildings}.
\item Prepositional compounds with paraphrases like \lingform{bars of
steel}, \lingform{stories about war} and \lingform{commercials on
television}.
\end{enumerate}
There are a few scattered examples which do not fit into these classes, for
example causation (\lingform{virus that causes colds}) and price
(\lingform{jacket that costs \$50}). However, these appear to be infrequent.
I will assume that it is possible to allocate all
compounds into these three paraphrase groups in an objective
manner.\footnote{In section~\ref{sec:ce_method} below I give the criterion
used to distinguish nominalisations from prepositional compounds in the
experiments.}
There are good reasons to treat these three paraphrase groups separately.
Copula compounds clearly require hypernymic knowledge such as might be
found in a taxonomic structure like WordNet, and it is possible that this
would be sufficient to analyse them. Analysis of the other two groups might
benefit from this kind of information, but hypernymy alone is certainly
insufficient. It is plausible that copula compounds could be analysed by an
\sll\ system designed to learn hypernymic relations from a corpus (see for
example, Hearst~1992). However, since this constitutes a significant
research goal by itself, I will exclude copula compounds from the scope of
the experiment.
The semantics of verbal-nexus compounds are tied intimately to those of the
verb in question. For this reason, Warren~(1978) only considers
non-verbal-nexus compounds. For example, to interpret \lingform{garbage
collection}, knowledge of the semantics of, and case roles for, the verb
\lingform{collect} are needed. In order to go beyond the merely syntactic
observation that \lingform{garbage} plays the role of direct object of this
verb, we require a semantics for the verb. So to address these types of
compounds it appears necessary to have both a theory of, and a way of
acquiring knowledge about, the semantics of verb phrases.\footnote{One
approach to this is the predicate argument statistics collected by
Hindle~(1990), which might be used as the basis of a verbal-nexus
compound interpreter.}
In addition, the morphological characteristics of verbal-nexus compounds
are crucial to their interpretation. Nominalisations can be created by various
suffixes, often indicating different roles. An \lingform{employee} is the
patient of an \lingform{employment} action. The agent is an
\lingform{employer} and all of their staff are in their \lingform{employ}.
So any deep analysis of the semantics of nominalisations will involve
both detailed verb semantics and a
substantial morphological component.
For these reasons I will restrict the scope of the experiments below to
compounds in the prepositional paraphrase group. Since it is not hard to
identify verbal-nexus compounds (their heads are morphologically
related to a verb), this does not
present a methodological difficulty (although see the notes in
section~\ref{sec:ce_method} below). I will make the further simplification
of taking the goal to be selection of the preposition appearing in the most
preferred paraphrase of the compound. This ignores several semantic
distinctions necessary for faithful paraphrasing, such as the number and
definiteness of the modifier.
These restrictions have several advantages:
\begin{itemize}
\item The range of possible \scare{semantics} of prepositional compounds
can be precisely identified and defined.
\item There are only a small set of possible analyses.
\item Most linguistic theories have made reference to prepositional
paraphrases in characterising semantic classes, so these compounds form an
important group.
\item The granularity of the semantic distinctions (see
section~\ref{sec:cn_accommodation}) is naturally fixed by the range of
prepositions found in English.
\item Since prepositions are overtly expressed in text, the possibility of
acquiring knowledge about their properties by statistical means is made
more likely.
\end{itemize}
\subsubsection*{Prepositions as the target}
To construct a list of possible prepositional paraphrases of compounds I
have used Warren's~(1978) study, the largest corpus based study of
compound noun semantics available.\footnote{Recall from
section~\ref{sec:cn_meaning} that her study involved semantic analysis of
4,557 compounds (types) extracted from 360 thousand words (tokens) of the
Brown corpus.} She excludes verbal-nexus compounds, but allows copula
ones. For each of her major semantic sub-classes, she gives the typical
preposition used to paraphrase compounds in that sub-class (see
Warren,~1978:48, table~4).\footnote{These
classes are tabulated later in her book
with {\em type} frequency counts within her sample (Warren,~1978:229).}
In the case of her Place and Time sub-classes, she gives three prepositions
\lingform{in}, \lingform{at} and \lingform{on} because they are all
common enough to warrant inclusion. This yields seven possible
prepositions. In addition to these I will include one extra preposition,
because one of her minor sub-classes (Subject-Matter-Whole) within the
Constitute class makes up 3.6\% of all her prepositional compounds (types
not tokens). Since these compounds have a preferred paraphrase using
\lingform{about} rather than \lingform{of}, I will also include the former.
This procedure produces the following possible prepositional paraphrases of
compounds (excluding verbal-nexus and copula compounds).
\begin{description}
\item[of:] \lingform{state laws} means \lingform{laws of the state}.
\item[for:] \lingform{a baby chair} means \lingform{a chair for babies}.
\item[in:] \lingform{morning prayers} means \lingform{prayers in the
morning}.
\item[at:] \lingform{airport food} means \lingform{food at the airport}.
\item[on:] \lingform{Sunday television} means \lingform{television on
Sunday}.
\item[from:] \lingform{reactor waste} means \lingform{waste from a
reactor}.
\item[with:] \lingform{gun men} means \lingform{men with guns}.
\item[about:] \lingform{war story} means \lingform{story about war}.
\end{description}
This list excludes one paraphrase scheme from Warren's~(1978) table~4,
that being \lingform{like}-paraphrases. I have chosen to exclude this
because such compounds can be analysed as copula if metaphor is taken into
account. For example, although the compound \lingform{barrel chest} has a
preferred paraphrase of \lingform{chest like a barrel}, the paraphrase
\lingform{chest that is a barrel} also conveys the same meaning in the same
contexts. The former is preferred only because it is more explicit. Since
metaphor appears everywhere in language and any system which exhibits
deep understanding must model it in some form, it is economical to analyse
these compounds as metaphorical copula compounds. It is also possible for
prepositional paraphrases to make use of metaphor; for example,
\lingform{steel father} can be paraphrased as \lingform{father of steel},
both of which are not usually intended literally.
The example of \lingform{airport food} above raises another general issue.
The relationships expressed by compounds are usually inherent or typical
relationships. As we've seen in section~\ref{sec:cn_meaning}, a
\lingform{tree kangaroo} is {\em typically} found in trees. It does not cease
being a tree kangaroo when it hops down to earth. In evaluating the possible
paraphrases of compounds, typicality is implied, so that assigning the
preposition \lingform{at} to the compound \lingform{airport food}
represents the paraphrase \lingform{food typical at an airport}, rather than
referring to any food that happens to be carried there.
So now I can precisely state the problem addressed in the experimental work
below.
\begin{description}
\item[Problem Statement:] Given a non-copula, non-verbal-nexus
compound, predict which of the following prepositions is most likely to be
used in the preferred paraphrase of the compound, allowing for both
metaphorical paraphrases and typicality: \lingform{of}, \lingform{for},
\lingform{in}, \lingform{at}, \lingform{on}, \lingform{from},
\lingform{with} and \lingform{about}.
\end{description}
\subsection{A Probabilistic Model of Noun Compound Paraphrasing}
\label{sec:ce_model}
To build a statistical language learner for this problem we begin
with a probabilistic model. Using such a model we can compute the
probability of each of these 8 outcomes and then choose the most probable
one. Let $P$ be the set of prepositions above and $N$ the set of nouns.
The appropriate analysis function can be written as
\begin{equation}
(\forall n_1,n_2 \in N) (A(n_1,n_2) = \argmax{p \in P} \Pr(p | n_1,n_2))
\end{equation}
where $\Pr(p | n_1,n_2)$ is given by the probabilistic model.
In this section I
will give a model using the meaning distributions theory.
Consider the complete joint distribution model for this problem. It contains
one parameter for every triple, $\langle p, n_1, n_2 \rangle$. Assuming a
vocabulary of 90 thousand nouns yields over 56 billion free parameters.
Without hundreds of millions of training instances, it is not possible to train
this model, even allowing for skewed distributions and techniques like
smoothing. Therefore we need to make some assumptions to reduce the data
requirements.
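For concreteness, one way to arrive at this figure: for each of the
$|N|^2$ noun pairs, the distribution over the eight prepositions
contributes $|P| - 1 = 7$ free parameters, so
\[
(|P| - 1) \times |N|^2 = 7 \times (9 \times 10^4)^2 \approx 5.7 \times
10^{10}.
\]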
The first step is to adopt the meaning distributions theory. Let $C$ be a set
of concepts, being the semantic representation of nouns, with $\phi_1: N
\rightarrow 2^C$ giving the possible semantics of each noun. Similarly, let
$R$ be a set of roles, being the semantic representations of the eight
prepositions, with $\phi_2: P \rightarrow 2^R$ giving the possible semantics
of each preposition. The meaning distributions theory requires a
probabilistic model to give the set of probabilities $\Pr(r | c_1,c_2)$ where
$r \in R, c_1,c_2 \in C$. The syntactic component defined by the source
maps $\phi_1$ and $\phi_2$ (and their corresponding image maps) then
fixes the probabilities $\Pr(p | n_1,n_2)$.
The second step is to assume
that the head and modifier of the compound contribute independent
evidence. This factors the model into two parts: one for representing the
information contributed by the head (association between $r$ and $c_2$)
and the other for information contributed by the modifier (association
between $r$ and $c_1$).
While this is a crude approximation, it finds some support in the literature.
For instance, the idea is central to Ryder's~(1994) schema based theory of
noun compound meanings.
\begin{citedquote}{Ryder,~1994:63}
\dots nouns are generally strongly autonomous, and so in a noun-noun
compound both elements often have equal autonomy, rather than one being
clearly dependent on the other.
\end{citedquote}
Accordingly, the process of interpreting a compound noun involves finding a
relation in which both nouns are plausible participants. In the model below,
the probability distribution $\Pr(r | c_1)$ gives a profile of the relations in
which the modifier is likely to participate, while the probability distribution
$\Pr(r | c_2)$ gives a profile of the relations in which the head is likely to
participate. To match these profiles together, the model multiplies the
probability distributions.
We can now formally define the model.
Recall that the goal is to
assign probabilities to different possible paraphrases, so we need a simple
expression for $\Pr(p | n_1,n_2)$ for each $p \in P$. Proceed as follows.
\begin{eqnarray}
\Pr(p | n_1,n_2) & = & \sum_{c_1 \in \phi_1(n_1);c_2 \in \phi_1(n_2)}
\hspace{-1cm}
\Pr(p | c_1, c_2, n_1, n_2) \Pr(c_1, c_2 | n_1,n_2)
\nonumber \\
& = & \sum_{c_1 \in \phi_1(n_1);c_2 \in \phi_1(n_2)}
\hspace{-1cm}
\Pr(c_1, c_2 | n_1,n_2)
\sum_{r \in \phi_2(p)}
\Pr(p | r, c_1, c_2, n_1, n_2)
\Pr(r | c_1, c_2, n_1, n_2) \nonumber \\
& = & \sum_{c_1 \in \phi_1(n_1);c_2 \in \phi_1(n_2)}
\hspace{-1cm}
\Pr(c_1, c_2 | n_1,n_2)
\sum_{r \in \phi_2(p)}
\Pr(p | r) \Pr(r | c_1, c_2) \nonumber \\
& = & \sum_{r \in \phi_2(p)} \Pr(p | r)
\sum_{c_1 \in \phi_1(n_1);c_2 \in \phi_1(n_2)}
\hspace{-1cm}
\Pr(r | c_1, c_2)
\frac{\Pr(n_1,n_2 | c_1, c_2) \Pr(c_1, c_2)}{\Pr(n_1,n_2)}
\label{eq:ce_model_restate}
\end{eqnarray}
The third step uses $\Pr(p | r, c_1, c_2, n_1, n_2) = \Pr(p | r)$
and $\Pr(r | c_1, c_2, n_1, n_2) = \Pr(r | c_1, c_2)$, both
of which follow from the meaning distributions theory
when $n_1 \in \theta_1(c_1)$, $n_2 \in \theta_1(c_2)$ and
$p \in \theta_2(r)$.
The other steps follow from probability theory and Bayes' rule.
To simplify further, it will be necessary to make some assumptions about the
syntactic mappings $\phi_1$, $\phi_2$ and the corresponding images
$\theta_1$ and $\theta_2$.
\begin{assumption}[Homogeneous Syntax]
Suppose that $\theta_1$ and $\theta_2$, are such that the following are each
constant:
\begin{eqnarray*}
(\forall c \in C) (|\theta_1(c)| & = & k_1) \\
(\forall r \in R) (|\theta_2(r)| & = & k_2)
\end{eqnarray*}
\end{assumption}
Using the meaning distributions theory, these assumptions result in two of
the probabilities above being constant. In particular, $\Pr(p | r) =
\frac{1}{k_2}$ and $\Pr(n_1,n_2 | c_1,c_2) = (\frac{1}{k_1})^2$. This
simplifies equation~\ref{eq:ce_model_restate} to
\begin{eqnarray}
\Pr(p | n_1,n_2) & = & \frac{1}{k_2} (\frac{1}{k_1})^2
\sum_{r \in \phi_2(p)}
\sum_{c_1 \in \phi_1(n_1);c_2 \in \phi_1(n_2)}
\frac{\Pr(r | c_1, c_2) \Pr(c_1, c_2)}{\Pr(n_1,n_2)} \nonumber \\
& = & \frac{1}{k_2} (\frac{1}{k_1})^2
\sum_{r \in \phi_2(p)}
\sum_{c_1 \in \phi_1(n_1);c_2 \in \phi_1(n_2)}
\frac{\Pr(r, c_1, c_2)}{\Pr(n_1,n_2)} \nonumber \\
& = & \frac{1}{k_2} (\frac{1}{k_1})^2
\sum_{r \in \phi_2(p)}
\sum_{c_1 \in \phi_1(n_1);c_2 \in \phi_1(n_2)}
\frac{\Pr(c_1 | r, c_2) \Pr( c_2 | r) \Pr(r)}{\Pr(n_1,n_2)}
\label{eq:ce_model_eliminate}
\end{eqnarray}
The denominator will cancel during the analysis, so
this expression depends only on
the probabilistic conceptual model giving the distributions of
roles and concepts. This is based on two assumptions.
\begin{assumption}[Uniform Priors]
Each role is equally likely. That is,
\begin{equation}
(\forall r \in R) \Pr(r) = k_3
\end{equation}
\end{assumption}
This means that we have no \latin{a priori} preference for one
semantic relationship over another. We also need the crucial independence
assumption.
\begin{assumption}[Head-Modifier Autonomy]
Let the probability of a modifier participating
in a given implicit relationship be independent of the head.
That is,
\begin{equation}
(\forall c_1,c_2 \in C) (\forall r \in R)
(\Pr(c_1 | r, c_2) = \Pr(c_1 | r))
\end{equation}
\end{assumption}
This factorises the model into two parts: information from the head
and information from the modifier.\footnote{I have abused
notation here since $\Pr(c_1 | r) \neq \Pr(c_2 | r)$ even when
$c_1 = c_2$. The former probability is the likelihood of concept $c_1$
appearing as the {\em modifier} of a relationship $r$,
while the latter is that of
concept $c_2$ appearing as the {\em head} of that relationship.}
Equation~\ref{eq:ce_model_eliminate} can now be reformulated as
\begin{equation}
\Pr(p | n_1,n_2) = \frac{k_3}{k_2 \Pr(n_1, n_2)} (\frac{1}{k_1})^2
\sum_{r \in \phi_2(p)}
\sum_{c_1 \in \phi_1(n_1);c_2 \in \phi_1(n_2)}
\Pr(c_1 | r) \Pr(c_2 | r)
\label{eq:ce_model_factorise}
\end{equation}
and we have a probabilistic model with two sets of parameters $\Pr(c_1 | r)$
and $\Pr(c_2 | r)$ for a total of $2(|C|-1) \cdot |R|$ free parameters.
Although the model above is stated in terms of roles, in
section~\ref{sec:ce_problem} I argued that the most concrete form of
semantic representation is the prepositional paraphrase. Therefore, in the
experimental work below I take $R$ to simply be $P$, with $\phi_2$ and
$\theta_2$ transparent maps (that is, they return singleton sets containing
their arguments).\footnote{One half of the homogeneous syntax assumption
now holds trivially.} Since this removes the outer sum,
and the outer factor is constant for any
given analysis, the analysis process is now given by
\begin{equation}
(\forall n_1,n_2 \in N) (A(n_1,n_2) = \argmax{p \in P}
\sum_{c_1 \in \phi_1(n_1);c_2 \in \phi_1(n_2)}
\Pr(c_1 | p) \Pr(c_2 | p))
\label{eq:ce_model_analysis}
\end{equation}
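The analysis procedure in equation~\ref{eq:ce_model_analysis} reduces to
a few lines of code. The following is a minimal sketch, assuming
hypothetical inputs: \texttt{concepts} implements $\phi_1$ (mapping a
noun to its Roget categories), while \texttt{prob\_mod} and
\texttt{prob\_head} are tables holding the estimated parameters
$\Pr(c_1 | p)$ and $\Pr(c_2 | p)$ respectively.
\begin{verbatim}
PREPOSITIONS = ['of', 'for', 'in', 'at', 'on', 'from', 'with', 'about']

def analyse(n1, n2, concepts, prob_mod, prob_head):
    # Choose the preposition maximising the sum over concept pairs
    # of Pr(c1 | p) * Pr(c2 | p).
    def score(p):
        return sum(prob_mod.get((c1, p), 0.0) * prob_head.get((c2, p), 0.0)
                   for c1 in concepts(n1)
                   for c2 in concepts(n2))
    return max(PREPOSITIONS, key=score)
\end{verbatim}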
\subsection{Experimental Design}
\label{sec:ce_design}
In the experiments below, I will investigate the idea of using counts of
prepositional phrases in a large corpus to estimate the free parameters of the
model above. That is, predictions about the semantics of noun compounds
will be made using observations of prepositional phrases.
The key assumption here is that semantically
related pairs of concepts are expressed equally often by means of
a compound noun and by means of a prepositional phrase.
For example, when the meaning
\lingform{haven} $\stackrel{\rm FOR}{\rightarrow}$ \lingform{reptiles}
is intended, the expressions \lingform{reptile haven} and
\lingform{haven for reptiles} will be used equally frequently. Thus the
frequency of \lingform{reptile haven} (where the intended meaning can be
paraphrased using \lingform{for}) is assumed to be the same as the
frequency of \lingform{haven for reptiles}. This means that $\Pr(c_1|r)$
and $\Pr(c_2|r)$ not only represent the probabilities of noun compound
meanings but also of prepositional phrase meanings. The two forms
of expression are generated from the same underlying meaning distribution.
While this assumption follows from the meaning distributions theory,
there is a good reason why it might be false.
If a meaning can be expressed by a lexicalised compound, this form will
be preferred because it is shorter than the prepositional paraphrase.
Therefore the prepositional form will occur only rarely.
Conversely, if the compound form is ambiguous, the
prepositional form will be preferred and the compound (with
associated semantic relation) will occur only rarely. This suggests that the
probabilities of the two forms might be inversely related. However, even if
strong preferences for one form over the other are generally observed, the
model might still work well for the following reason. If there are semantic
relations that {\em never} hold between two concepts, the corresponding
prepositional phrases should not occur at all, allowing the model to rule out
at least these relations.
Luckily, the effects of such frequency inversion are reduced by the
head-modifier autonomy assumption. By factorising the model into two
parts, we can measure the frequency of \lingform{haven for reptiles} as a
product of two different frequencies, those of \lingform{haven for} and
\lingform{for reptiles}. Now even if \lingform{reptile haven} were a
lexicalised compound, thus making \lingform{haven for reptiles} a rare
construction, we could still expect to find \lingform{haven for} and
\lingform{for reptiles} with relatively high frequency.
Also, in these examples, I have been using word frequencies, but the model
is stated in terms of concept frequencies. For example, instead of counting
occurrences of \lingform{haven for}, the model maps \lingform{haven} onto
one or more concepts (say, \concept{safe\_place}) and then counts
occurrences of \lingform{\concept{[safe\_place]} for}. If the concepts
capture appropriate generalisations, this will help reduce the frequency
inversion problem because conceptually similar forms will have different
lexicalisation patterns.
\subsubsection*{Choosing the concept set}
Before turning to an overview of the system architecture, I will
address the question of what to use for the set of concepts.
Using Roget's thesaurus is one possibility that I will explore;
that is, let $C_a$ be the set of Roget's categories,
and $\phi_{1a}(w)$ the set of categories in which $w$
appears. The 1911 version used here is freely available and was described
in section~\ref{sec:cy_design}.
However, the task I am addressing is paraphrasing, not semantic
interpretation. As I have argued in section~\ref{sec:ce_problem}, the
paraphrasing problem is influenced by collocative patterns that depend on
specific words. Recall for instance, the example involving \lingform{in the
evening} and \lingform{at night}, where semantically identical constructions
have different paraphrases.
Conceptual associations should be applied to prepositional roles,
not to prepositions themselves (see Dras and Lauer,~1993, for
a more detailed argument of this point).
Therefore, because the model is based on prepositions rather than roles, I
will also investigate a lexical parameterisation;
that is, let $C_b = N_{\mbox{test}}$,
where $N_{\mbox{test}}$ is the set of nouns found in the test set,
and $\phi_{1b}(w)$ be the singleton set containing $w$.
Just as for
the experiments on compound syntax, constructing a lexical model for the
entire vocabulary would require very much larger amounts of storage space
(see section~\ref{sec:cy_comparisons}).
The portion built here is just sufficient to
measure performance on the test set. In contrast, the implementation of the
conceptual parameterisation (using Roget's categories) is a complete
compound noun semantics analyser. Other than this difference though, the
same architecture, training set and analysis procedure will be used for both
parameterisations.
\subsubsection*{Architecture}
I will now give an overview of the architecture of the system. Details of the
various components will be given in section~\ref{sec:ce_method}. The
experiment involves four main steps.
\begin{enumerate}
\item Extract a test set of two word compounds from the corpus and
manually assign the most preferred paraphrase to each.
\item Define a pair of patterns for extracting examples of the nouns
modified by, and the nouns governed by, the eight prepositions.
\item Use these patterns to estimate the distributions $\Pr(c_1|p)$
and $\Pr(c_2|p)$ for each preposition
across both Roget's categories ($C_a$) and nouns in the test set
($C_b$).
\item Use the distributions and the probabilistic model above to analyse each
element of the test set and record the accuracy.
\end{enumerate}
The corpus is the same as that used for the compound parsing experiments,
\publicationname{The New Grolier Multimedia Encyclopedia}~(Grolier
Inc.,~1992), containing approximately 8 million words (for more details see
section~\ref{sec:cy_design}). The test set is a collection of 400 two word
noun sequences extracted from the corpus by means of a simple pattern
given below, and which I have manually annotated with preferred
paraphrases.
Since tagging the corpus with the Brill tagger (Brill,~1993) proved useful
when training the models of compound syntax (see
section~\ref{sec:cy_comparisons}),
this strategy will be used here too. Note that
this tagger is freely available and has been applied using the rule set
provided with it (trained on the Brown corpus).
The patterns for extracting the
noun modified by, and the noun governed by, a preposition rely only on the
parts of speech assigned by the tagger, and so can be applied automatically
to any text using only freely available resources.
Applying the first pattern to the corpus yields an observed distribution of
concepts that are modified by each of the eight prepositions. I
will call this the \newterm{head distribution}.
Applying the second pattern to the corpus yields, symmetrically, an
observed distribution of concepts that are governed by each of the eight
prepositions. I will call this the \newterm{object distribution}.
This distribution records the frequency of each concept
appearing as the object of each preposition.
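A minimal sketch of this counting step follows. The functions
\texttt{matches\_head} and \texttt{matches\_obj} stand in for the two
extraction patterns given in section~\ref{sec:ce_method}, each returning
a noun--preposition pair (or \texttt{None}) at a given position, and
\texttt{concepts} maps a noun to its Roget categories; the corpus is
assumed to be supplied as tagged sentences.
\begin{verbatim}
from collections import Counter

def collect_distributions(tagged_corpus, concepts,
                          matches_head, matches_obj):
    head_dist = Counter()  # (concept, prep): noun modified by prep
    obj_dist = Counter()   # (concept, prep): noun governed by prep
    for tokens, tags in tagged_corpus:
        for i in range(len(tokens)):
            for pattern, dist in ((matches_head, head_dist),
                                  (matches_obj, obj_dist)):
                hit = pattern(i, tokens, tags)
                if hit:
                    noun, prep = hit
                    for c in concepts(noun):
                        dist[(c, prep)] += 1
    return head_dist, obj_dist
\end{verbatim}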
These two distributions are combined by the probabilistic model to give a
ranking of the possible paraphrases of the compound. The accuracy of the
system is measured as the proportion of highest ranked paraphrases that
match the most preferred one assigned manually. Note that because the goal
is to select the {\em most preferred} paraphrase, the system will score
nothing for a merely acceptable paraphrase; the answer given must match the
single most preferred paraphrase, not just have the same meaning. Also, the
system is required to give an answer to every test example, so that accuracy
is measured at 100\% coverage.
In the next section, the experimental method is given in detail, including the
extraction patterns for training and testing and the precise analysis method.
The reader may skip these details if desired, noting only the two training
patterns in equations~\ref{eq:ce_method_headpattern}
and~\ref{eq:ce_method_objpattern}, and
equation~\ref{eq:ce_method_estimates} describing parameter estimation.
\subsection{Experimental Method}
\label{sec:ce_method}
\subsubsection*{Test set}
The first step is to extract a test set of two word compound nouns. Recall
from section~\ref{sec:cy_method} that the training data for the compound
parsing experiments consisted of two word noun sequences collected by
means of a simple pattern. The pattern is reproduced here.
\begin{equation}
T_{\mbox{test}} = \{ (w_2, w_3) \mid w_1 w_2 w_3 w_4; w_1,w_4 \notin
{\cal N}; w_2, w_3 \in {\cal N} \}
\end{equation}
Here, $w_1 w_2 w_3 w_4$ denotes the occurrence of four tokens
in sequence in the corpus, and $\cal N$ is the set of words listed in the
University of Pennsylvania morphological analyser lexicon as being sure
nouns (see section~\ref{sec:cy_method} for more detail). After removing
pairs containing a word not in Roget's thesaurus, this produces 24,251 two
word noun sequences. From these a random sample of 400 sequences was
selected as the test set.
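As an illustration, the extraction pattern amounts to a simple window
scan over the token stream. This sketch assumes \texttt{tokens} is the
corpus token sequence and \texttt{NOUNS} is the set $\cal N$ of sure
nouns.
\begin{verbatim}
def extract_pairs(tokens, NOUNS):
    # Collect (w2, w3) wherever a noun pair is flanked by two non-nouns.
    pairs = []
    for w1, w2, w3, w4 in zip(tokens, tokens[1:], tokens[2:], tokens[3:]):
        if (w1 not in NOUNS and w4 not in NOUNS
                and w2 in NOUNS and w3 in NOUNS):
            pairs.append((w2, w3))
    return pairs
\end{verbatim}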
The position within the corpus of each noun sequence was kept along with it,
so that each of the sequences in the sample could be examined in context.
By looking at the full context in each case, I assigned one of the following
annotations to each test case using a specially developed
annotation environment.
\begin{description}
\item[E] Extraction errors, where the noun sequence does not form a noun
compound, plus proper names and foreign compounds.
\item[B] Copula compounds where the modifier and head both classify the
indicated object (including some metaphorical readings).
\item[V] Verbal-nexus compounds where the modifier takes the part of
either subject or object of a verb of which the head is a nominalisation.
\item[O,R,I,T,W,F,N,A] Prepositional compounds whose most preferred
paraphrase involves \lingform{of}, \lingform{for}, \lingform{in},
\lingform{about}, \lingform{with}, \lingform{from}, \lingform{on} or
\lingform{at} respectively.
\end{description}
Assignments of \scare{E} and \scare{B} are straightforward.
The noun pair \lingform{passengers hostage} is an extraction error (E)
since the two nouns are separate complements of \lingform{holding}
in the context in which it appears.
The compound \lingform{boyar duma} (a Russian
aristocratic title abolished in the early 18th century) is also marked with
\scare{E}. The compound \lingform{patron goddesses} is copula (B),
as is \lingform{puppet government} based on metaphorical
interpretation.
Somewhat more difficult to distinguish are the verbal-nexus compounds (V).
It is often not clear whether a noun is a nominalisation of a verb or not.
The verb \lingform{colonise} is a \scare{verbalisation} of the noun
\lingform{colony}, not the other way round. Though this case is clear, an
objective decision procedure is needed to define precisely which nouns are
nominalisations. For the purposes of annotating the test set, I have taken a
noun to be a nominalisation only where the etymology marked in the Shorter
Oxford English Dictionary~(Onions,~1973) shows it as being derived from
the corresponding verb. For example, \lingform{hull maintenance} is
assigned \scare{V} because the head is derived from the verb
\lingform{maintain}, but \lingform{plutonium theft} is considered a
prepositional compound (\scare{O}) because the head is derived from the
noun \lingform{thief} (the verb \lingform{thieve} is also from this noun).
A further condition of being marked as a verbal-nexus compound is the
requirement that the modifier is semantically interpreted as playing the part
of either the subject or object of the verb that corresponds to the head. In
the example above, the modifier plays the part of object: it is hulls that are
maintained. In \lingform{peasant rebellion}, the modifier plays the part of
subject, and so this example receives a \scare{V} too. In contrast,
\lingform{city dwellers} is considered a prepositional compound (\scare{I})
because, although the head is derived from the verb \lingform{dwell}, the
modifier does not act as subject or object of this verb.
The annotation environment presented the user with the complete paraphrase
for all eight prepositions, along with the context surrounding the compound.
The annotator could therefore carefully compare all of the allowed
paraphrases with the given context and use the mouse to select the most
preferred paraphrase before proceeding to the next test case. In each case,
one and only one prepositional paraphrase could be selected, so that every
example in the test set was assigned a single correct answer.
All the examples given so far in this section are taken from the annotated test
set used in the experiments. The entire test set, along with annotations, is
given in appendix~\ref{appendix:cetest}. The distribution of annotation
types is given in table~\ref{tb:ce_method_testtypes}.
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|c||c|} \hline
Error & Be & Verbal$_{\mbox{subj}}$ & Verbal$_{\mbox{obj}}$ &
Prepositional & Total \\
\hline
15 & 41 & 19 & 43 & 282 & 400 \\
\hline
4\% & 10\% & 5\% & 11\% & 70\% & 100\% \\
\hline
\end{tabular}
\caption{Distribution of annotation types in paraphrasing test set}
\label{tb:ce_method_testtypes}
\end{table*}
Since the goal of the experiment was to paraphrase prepositional
compounds, \scare{E}, \scare{B} and \scare{V} cases were eliminated,
leaving a test set of 282
compounds, each annotated with one of the eight target prepositions.
Table~\ref{tb:ce_method_testdist} shows the distribution of answers across
the test set.
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c||c|} \hline
Of & foR & In & abouT & With & From & oN & At & Total \\
\hline
94 & 78 & 39 & 22 & 18 & 13 & 12 & 6 & 282 \\
\hline
33\% & 28\% & 14\% & 8\% & 6\% & 5\% & 4\% & 2\% & 100\% \\
\hline
\end{tabular}
\caption{Distribution of prepositions involved in paraphrases of test set}
\label{tb:ce_method_testdist}
\end{table*}
The distribution of different paraphrases shows a heavy bias towards three
of the prepositions, \lingform{of}, \lingform{for} and \lingform{in}, which
account for 75\% of the test cases. These three also have the largest amount
of training data available, a fact that I will exploit in
section~\ref{sec:ce_results}. The most important point to note from the
table is that the simple strategy of always choosing \lingform{of} as the
paraphrase achieves 33\% accuracy on this test set. Therefore, any system
that does worse than this is of no use.
\subsubsection*{Training data and parameter estimation}
Setting aside the test set, the second step is to extract relevant training
information from the corpus. According to the probabilistic model, the
information required is a pair of distributions for each preposition. I have
used one pattern for the head distribution and one for the object distribution,
which I will describe in turn.
The pattern for the head distribution is extremely simple. It assumes
that prepositions following a noun post-modify that noun.
Whenever one of the eight target prepositions is observed to follow a word
tagged as a noun, the noun is counted as an observation relevant to the head
distribution of that preposition. For example, \lingform{story}/\tag{nn}
\lingform{about}/\tag{in} will generate a training instance for the head
distribution of \lingform{about}, increasing the estimate of the
probability that concepts expressed by the word \lingform{story} are
the head of an \lingform{about} prepositional phrase.
In Brill's tag set, I take the tags \tag{nn}, \tag{nns},
\tag{nnp}, \tag{nnps} and \tag{vbg} to be nouns for this
purpose.\footnote{These are common singular nouns, common plural nouns,
proper singular nouns, proper plural nouns and present participles,
respectively. \tag{vbg} is included in order to allow gerunds that have
been incorrectly tagged; only words listed in the thesaurus as nouns are
counted in the distribution, so few true present participles are likely to be
counted.}
Formally, the pattern is
\begin{equation}
T_{\mbox{head}}(p) = \{ w_1 \mid w_1 w_2; w_2 = p; t_1 \in
\{\mbox{\tag{nn}, \tag{nns}, \tag{nnp}, \tag{nnps}, \tag{vbg}}\} \}
\label{eq:ce_method_headpattern}
\end{equation}
where $w_1 w_2$ denotes the occurrence of two tokens in sequence in the
corpus and $t_1$ is the tag assigned to $w_1$ by the tagger.
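In Python, the counting could be sketched as follows (a hedged
illustration rather than the actual implementation; it assumes the
tagged corpus is available as a list of lower-cased (word, tag) pairs):
\begin{verbatim}
from collections import Counter

PREPS = {"of", "for", "in", "about", "with", "from", "on", "at"}
NOUN_TAGS = {"nn", "nns", "nnp", "nnps", "vbg"}

def head_counts(tagged):
    # tagged: list of (word, tag) pairs produced by the tagger
    counts = {p: Counter() for p in PREPS}
    for (w1, t1), (w2, _) in zip(tagged, tagged[1:]):
        if w2 in PREPS and t1 in NOUN_TAGS:
            counts[w2][w1] += 1
    return counts
\end{verbatim}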
The pattern for the object distribution is more complex. It assumes that the
first noun following a preposition is the head of the preposition's object
noun phrase. A sequence of up to three noun modifiers is permitted to
appear between the preposition and the noun, including determiners,
adjectives and cardinals. To avoid catching prepositions used as verb case
role markers, the pattern also checks for a possible nominal attachment for
the preposition. This is achieved by checking that the word preceding the
preposition is a noun, just as the pattern for the head distribution does.
Thus, the object distribution pattern applies in a strict subset of the places
where the head distribution pattern applies.
Whenever one of the eight target prepositions follows a word tagged as a
noun and is followed by up to three noun modifiers and then another word
tagged as a noun, the second noun is counted as an observation relevant to
the object distribution of that preposition. For example,
\lingform{tower}/\tag{nn} \lingform{on}/\tag{in} \lingform{the}/\tag{dt}
\lingform{western}/\tag{jj} \lingform{horizon}/\tag{nn} will generate a
training instance for the object distribution of \lingform{on}, increasing the
estimate of the probability that concepts expressed by the word
\lingform{horizon} are the object of an \lingform{on} prepositional phrase.
The same set of tags as above is taken to indicate nouns.
I take Brill's tags \tag{jj}, \tag{dt}, \tag{cd}, \tag{prp\$}
and \tag{pos} to be possible noun modifiers for this
purpose.\footnote{These are adjective, determiner, cardinal, possessive
pronoun and possessive markers, respectively. An idiosyncrasy of the
tagger is tagging nouns with possessive markers incorrectly as adjectives.
The pattern exploits this to find the subsequent head noun by allowing
possessive markers to be noun modifiers.}
Formally, the pattern is
\begin{eqnarray}
\lefteqn{T_{\mbox{object}}(p) = } \nonumber \\
& \{ & w_{i} \mid w_1 w_2 \ldots w_{i}; 3 \leq i \leq 6; w_2 = p;
\nonumber \\
& & t_1,t_{i} \in \{\mbox{\tag{nn}, \tag{nns}, \tag{nnp}, \tag{nnps},
\tag{vbg}}\}; \nonumber \\
& & t_3,\ldots,t_{i-1} \in \{\tag{jj}, \tag{dt}, \tag{cd}, \tag{prp\$},
\tag{pos}\} \}
\label{eq:ce_method_objpattern}
\end{eqnarray}
where $w_1 w_2 \ldots w_{i}$ denotes the occurrence of $i$ tokens in
sequence in the corpus and $t_j$ is the tag assigned to $w_j$ by the tagger.
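A corresponding sketch for the object pattern, reusing \texttt{PREPS},
\texttt{NOUN\_TAGS} and \texttt{Counter} from the previous fragment,
might look like this (again purely illustrative):
\begin{verbatim}
MOD_TAGS = {"jj", "dt", "cd", "prp$", "pos"}

def object_counts(tagged):
    counts = {p: Counter() for p in PREPS}
    for i in range(len(tagged) - 2):
        w1, t1 = tagged[i]
        w2 = tagged[i + 1][0]
        if t1 not in NOUN_TAGS or w2 not in PREPS:
            continue
        j = i + 2
        while j < len(tagged) and j - i - 2 < 3 and tagged[j][1] in MOD_TAGS:
            j += 1                     # skip up to three noun modifiers
        if j < len(tagged) and tagged[j][1] in NOUN_TAGS:
            counts[w2][tagged[j][0]] += 1
    return counts
\end{verbatim}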
Table~\ref{tb:ce_method_trainsizes} shows the number of training instances
generated by these two patterns from the corpus. The left side of the table
shows the quantities extracted for training the model based on $C_a$, the
Roget's categories. This model has been trained on the full vocabulary of
the thesaurus, covering 20,445 noun stems in 1043 categories. The right
hand side shows the quantities extracted for training the (partial) model
based on $C_b$, the nouns in the test set. It contains 462 noun stems.
\begin{table*}
\centering
\begin{tabular}{|r|r|c|r|r|} \hline
\multicolumn{2}{|c|}{Thesaurus} & \multicolumn{1}{c|}{Preposition} &
\multicolumn{2}{c|}{Test Set Nouns} \\
\cline{1-2} \cline{4-5}
\multicolumn{1}{|c|}{Object} & \multicolumn{1}{c|}{Head} & &
\multicolumn{1}{c|}{Object} & \multicolumn{1}{c|}{Head} \\
\hline
112150 & 219353 & of & 26852 & 43037 \\
13402 & 21409 & for & 3254 & 3712 \\
27828 & 63627 & in & 7030 & 13352 \\
867 & 1801 & about & 215 & 375 \\
6833 & 14030 & with & 1375 & 2640 \\
5465 & 12948 & from & 1092 & 1981 \\
7457 & 12773 & on & 1359 & 1829 \\
3671 & 8527 & at & 469 & 1773 \\
\hline
\end{tabular}
\caption{Numbers of training instances generated by the paraphrasing
patterns}
\label{tb:ce_method_trainsizes}
\end{table*}
The third step in the experiment is to use the evidence provided by the
training set to estimate the parameters of the model. These estimates are
stored in a data structure for later use by the analyser. The probabilistic
model employs two parameters for each concept-preposition pair: the head
and object probabilities, and so has two data structures. Both of these are
represented as a matrix with eight columns (one for each preposition), whose
rows are indexed by concepts.
We interpret $A_{ij}^{\mbox{\sc head}}$ as the probability that the
concept represented by the $j$th thesaurus category will be the {\em head}
of a compound noun whose most preferred paraphrase involves the $i$th
preposition. Likewise, we interpret $A_{ij}^{\mbox{\sc obj}}$ as the
probability that the concept represented by the $j$th thesaurus category will
be the {\em modifier} of a compound noun whose most preferred paraphrase
involves the $i$th preposition. Thus, when $r$ is the $i$th preposition,
$A_{ij}^{\mbox{\sc head}} = \Pr(c_2 | r)$ when $c_2$ is the $j$th
thesaurus category and $A_{ij}^{\mbox{\sc obj}} = \Pr(c_1 | r)$ when
$c_1$ is the $j$th thesaurus category.
Because the nouns identified by the training patterns can appear in multiple
Roget's categories (in the conceptual model, based on $C_a$), each of the
training instances generally contributes to more than one probability
estimate. Just as was done for the experiments on parsing compounds, this
is handled by dividing the counts contributed by each noun by the number of
categories that the noun appears in. So, when \lingform{web} is returned by
one of the patterns, the counts for categories \tc{crossing}{219} and
\tc{texture}{329} are both incremented by $\frac{1}{2}$, since
\lingform{web} is in both of them. This procedure corresponds to Bayesian
estimation where the prior probability of the different concepts is assumed
uniform. This problem does not arise in the lexical model, based on $C_b$.
To begin with, the parameters will be estimated using maximum likelihood
under the binomial distribution, as was done for the parsing experiments (see
section~\ref{sec:cy_method}). Given the frequency with which each noun
was returned by each pattern for each preposition, say $\countfn_{\mbox{\sc
head}}(w, p)$ and $\countfn_{\mbox{\sc obj}}(w, p)$, the parameter
estimates
are given by
\begin{eqnarray}
\Pr(c_x | r) & = &
\frac{1}{\eta_r}
\sum_{w \in \theta_1(c_x)}
\frac{\countfn_{\mbox{\sc d}}(w, r)}{\ambig(w)}
\label{eq:ce_method_estimates} \\
\eta_r & = &
\sum_{w \in N}
\frac{\countfn_{\mbox{\sc d}}(w, r)}{\ambig(w)} \nonumber
\end{eqnarray}
where $\mbox{\sc d} = \mbox{\sc obj}$ when $x = 1$ and $\mbox{\sc d} =
\mbox{\sc head}$ when $x = 2$.
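A direct transcription of these estimates into Python might read as
follows (illustrative only; \texttt{counts} and \texttt{categories} are
hypothetical data structures holding the pattern frequencies and the
thesaurus, and \texttt{Counter} is as before):
\begin{verbatim}
def estimate_matrix(counts, categories):
    # counts[p][w]: frequency of word w from the pattern for p
    # categories[w]: thesaurus categories of w; len() gives ambig(w)
    A = {}
    for p, word_counts in counts.items():
        mass, eta = Counter(), 0.0
        for w, n in word_counts.items():
            share = n / len(categories[w])  # count divided by ambiguity
            eta += share
            for c in categories[w]:
                mass[c] += share
        A[p] = {c: m / eta for c, m in mass.items()} if eta else {}
    return A
\end{verbatim}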
Using maximum likelihood is more problematic with the current model than
it was in the parsing experiments. Since two independent estimates are
multiplied, there are two opportunities for statistical zeroes to arise. If
$\countfn_{\mbox{\sc head}}(w, r) = 0$ for all $w \in \theta_1(c_2)$ then
$\Pr(c_2 | r)$ will be estimated as zero. Similarly, $\Pr(c_1 | r)$ will be
estimated as zero, if $\countfn_{\mbox{\sc obj}}(w, r) = 0$ for all $w \in
\theta_1(c_1)$. If {\em either} of these conditions holds, then $\Pr(c_1, r,
c_2)$ will be assigned the value zero, regardless of how large the counts
in the alternate distribution are.
This has the effect of hiding information. Suppose, for example, that all the
object distribution counts for concepts expressed by a given noun, $n_1$,
were zero (that is, this noun was never identified by the pattern as the first
noun following one of the eight prepositions, and neither were any of the
other nouns expressing the same concepts). This would be
reasonably likely if $n_1$ is in only one small category.
In the lexically parameterised model, every noun is in its own category,
so it is highly likely.
Because the relevant object distribution counts are zero, all the
relevant parameter estimates $\Pr(c_1|p)$ will also be zero.
This means that regardless of the values of $\Pr(c_2|p)$,
every $\Pr(p | n_1,n_2)$ will be zero as well.
No matter what the head distribution counts for $n_2$
are, the model will be unable to select one preposition over another.
If $n_2$ expresses a concept with a strong preference for
one of the prepositions, this will be ignored.
No evidence from the head distribution will be used, even though the object
distribution does not support any of the analyses. A symmetric insensitivity
to the object distribution is also possible.
One method of bypassing this difficulty is to smooth counts away from zero.
In the experiments reported below
both maximum likelihood estimation and a simple smoothed version have
been used and a comparison made.
\subsubsection*{Analysis procedure}
Having estimated the parameters of the model, we can now analyse the test
set. The decision procedure computes the probability of a paraphrase
involving each preposition and chooses the highest. Since the
model only takes the two nouns into account, any broader context is ignored.
If the most preferred paraphrase is highly variable across different contexts,
then the accuracy of the predictions will be quite low.
I have not conducted any
investigation of the optimal error rate for this task.
Using the matrices, $A_{ij}^{\mbox{\sc head}}$ and $A_{ij}^{\mbox{\sc
obj}}$, the analyser applies equation~\ref{eq:ce_model_analysis} eight
times, once for each preposition, leading to eight probability scores, and thus
a ranking of the possible prepositional paraphrases. Ties are resolved by a
fixed precedence ordering which follows the observed frequency ranking in
a pilot set of 57 compounds collected prior to and independently of the main
test set. The precedence ordering is (from highest to lowest): \lingform{of},
\lingform{for}, \lingform{in}, \lingform{about}, \lingform{with},
\lingform{from}, \lingform{on}, \lingform{at}.
This frequency ranking matches exactly that observed in the main test set;
however, the precedence was chosen before the test set was constructed: no
information from the test set was used in defining either the
analysis procedure or the probabilistic model. In any case, on the test set
used in this experiment, the conceptual model never produces a tie, so the
precedence ordering has no effect. The lexical model is only forced to rely
on the precedence ordering in 10 out of the 282 test cases (3.5\%); in all of
these it assigns zero probability to every preposition, which results in
prediction of an \lingform{of}-paraphrase.
Let us consider an example of the decision procedure. Suppose the system is
presented with the compound \lingform{cask wine}. The word
\lingform{cask} appears in category \tc{receptacle}{191} and the word
\lingform{wine} appears in category \tc{food}{298}. In general, the words
will be sense ambiguous (they will appear in more than one category), but for
this example I have chosen monosemous words to simplify the description.
The analyser extracts one row of eight values from each distribution matrix:
for each of the eight prepositions $p$, it extracts and multiplies $A_{298,
p}^{\mbox{\sc head}}$ and $A_{191, p}^{\mbox{\sc obj}}$. Suppose that
probability estimates are as in table~\ref{tb:ce_method_example}. The
analyser multiplies the two probabilities in each row of the table. Since the
preposition \lingform{in} has the highest combination (medium head
probability and high object probability), it is selected as the most likely
analysis: \lingform{wine in a cask}. Other probable analyses include
\lingform{wine from a cask} and \lingform{wine for a cask}.
\begin{table*}
\centering
\begin{tabular}{|c|c|c|} \hline
\tc{food}{298} as head & Preposition & \tc{receptacle}{191} as modifier
\\
$A_{298, p}^{\mbox{\sc head}}$ & $p$ & $A_{191, p}^{\mbox{\sc
obj}}$ \\
\hline
low & of & medium \\
high & for & low \\
medium & in & high \\
zero & about & (low) \\
medium & with & low \\
medium & from & medium \\
low & on & low \\
(low) & at & zero \\
\hline
\end{tabular}
\caption{Relevant parameter values for hypothetical example:
\lingform{cask wine}}
\label{tb:ce_method_example}
\end{table*}
The appearance of zero probabilities causes the corresponding probability in
the opposite column to be ignored. The values given in brackets in the table
cannot make any difference to the analysis, no matter how large they are
relative to the other values in the table.
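The whole decision procedure fits in a few lines of Python. The sketch
below is my own illustration: summing over a word's categories is one
plausible reading of equation~\ref{eq:ce_model_analysis} (for
monosemous words such as \lingform{cask} and \lingform{wine} it reduces
to the single product used above), and the tie-break relies on
\texttt{max} returning the first maximal element it encounters.
\begin{verbatim}
PRECEDENCE = ["of", "for", "in", "about", "with", "from", "on", "at"]

def best_paraphrase(modifier, head, A_head, A_obj, categories):
    def score(p):
        h = sum(A_head[p].get(c, 0.0) for c in categories[head])
        o = sum(A_obj[p].get(c, 0.0) for c in categories[modifier])
        return h * o
    # scanning PRECEDENCE in order makes max() resolve ties exactly
    # as the fixed precedence ordering does
    return max(PRECEDENCE, key=score)

# best_paraphrase("cask", "wine", A_head, A_obj, categories) -> "in"
\end{verbatim}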
\subsection{Results}
\label{sec:ce_results}
A program has been implemented to apply the decision procedure given
above. It has been run on the test set using both parameterisations ($C_a$
and $C_b$), with the results shown in table~\ref{tb:ce_results_mle8}.
Each column of the table shows the performance on test cases whose
answer is a particular preposition. The last column gives the
overall accuracy. The table also shows the results of a simple
guessing strategy (always choose \lingform{of}) as a baseline.
\begin{table*}
\centering
\begin{tabular}{|l|r|r|r|r|r|r|r|r||r|} \hline
Answer & \multicolumn{1}{c|}{of} & \multicolumn{1}{c|}{for} &
\multicolumn{1}{c|}{in} & \multicolumn{1}{c|}{about} &
\multicolumn{1}{c|}{with} & \multicolumn{1}{c|}{from} &
\multicolumn{1}{c|}{on} & \multicolumn{1}{c||}{at} &
\multicolumn{1}{c|}{Total} \\
\hline
Number of cases & 94 & 78 & 39 & 22 & 18 & 13 & 12 & 6 & 282 \\
\hline
\hline
$C_a$ accuracy & 19\% & 29\% & 33\% & 64\% & 17\% & 8\% &
33\% & 33\% & 28\% \\
\hline
$C_b$ accuracy & 52\% & 47\% & 36\% & 14\% & 6\% & 23\% & 50\%
& 17\% & 40\% \\
\hline
\scare{Guess \lingform{of}} accuracy & 100\% & 0\% & 0\% & 0\% & 0\% &
0\% & 0\% & 0\% & 33\% \\
\hline
\end{tabular}
\caption{Maximum likelihood estimate results for compound paraphrasing}
\label{tb:ce_results_mle8}
\end{table*}
Apart from the high accuracy on predicting \lingform{about}-paraphrases,
the conceptual model ($C_a$) does very poorly, worse even than
guessing.\footnote{The high accuracy on \lingform{about} appears to be
due to just the right categories appearing in Roget's. The heads of 11 of the
14 correct \lingform{about}-compounds fall into the two categories
\tc{Knowledge}{490} and \tc{Description}{594}.} While this is
disappointing, it isn't particularly surprising for the reasons given in
section~\ref{sec:ce_design}: prepositions have strong collocative
preferences with nouns. If we want the conceptual model to work well, the
associations must be between concepts and roles, not concepts and
prepositions.
The lexical model does somewhat better. Though the improvement over
guessing is not large, it is statistically significant. Pooling the proportions of
test cases correctly analysed by the lexical model and by guessing gives
36.9\%, which can be used to estimate the variance of the difference. Since
the sample size is relatively large, the difference in accuracy is
approximately normally distributed. Therefore, the improvement of 7.1\%
corresponds to a $z$-score of 1.75, which is significant at the 5\% level
(one-tailed test, $p = 0.04$). We therefore have some evidence that the
distribution of prepositional phrases is correlated with the distribution of
compound noun paraphrases.
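The arithmetic can be checked directly. In the Python fragment below
the count of 114 correct analyses is inferred from the rounded 40\%
accuracy and the reported 7.1\% improvement, so it is an assumption
rather than a figure taken from the experiment logs.
\begin{verbatim}
from math import sqrt

n = 282
correct_lexical, correct_guess = 114, 94   # 114 is inferred, see above
p_pool = (correct_lexical + correct_guess) / (2 * n)
se = sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (correct_lexical - correct_guess) / n / se
print(round(p_pool, 3), round(z, 2))       # -> 0.369 1.75
\end{verbatim}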
\subsubsection*{Smoothing}
As shown above, there is a technical difficulty with using maximum
likelihood estimates; unseen events are prohibited by the model leading to
information hiding.
One way to avoid this difficulty is to smooth counts away from zero. The
expected likelihood estimator (Church~\etal,~1991b:127) is computed by
taking the same counts as the maximum likelihood estimator and adding 0.5
to each count. To use this method, equation~\ref{eq:ce_method_estimates}
must be revised. Parameter estimates under expected likelihood estimation
are computed by
\begin{eqnarray}
\Pr(c_x | r) & = &
\frac{1}{\eta_r}
\left(\frac{1}{2}+
\sum_{w \in \theta_1(c_x)}
\frac{\countfn_{\mbox{\sc d}}(w, r)}{\ambig(w)}\right)
\label{eq:ce_results_ele} \\
\eta_r & = &
\frac{|C|}{2} +
\sum_{w \in N}
\frac{\countfn_{\mbox{\sc d}}(w, r)}{\ambig(w)} \nonumber
\end{eqnarray}
where $\mbox{\sc d} = \mbox{\sc obj}$ when $x = 1$ and $\mbox{\sc d} =
\mbox{\sc head}$ when $x = 2$.
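In code, the change to the earlier estimation sketch is small (again
illustrative; \texttt{all\_categories} is a hypothetical list of the
categories making up $C$):
\begin{verbatim}
def estimate_matrix_ele(counts, categories, all_categories):
    # add 0.5 to every concept count and |C|/2 to the normaliser
    A = {}
    for p, word_counts in counts.items():
        mass = Counter({c: 0.5 for c in all_categories})
        eta = len(all_categories) / 2.0
        for w, n in word_counts.items():
            share = n / len(categories[w])
            eta += share
            for c in categories[w]:
                mass[c] += share
        A[p] = {c: m / eta for c, m in mass.items()}
    return A
\end{verbatim}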
High counts are less affected by this revision than low counts. However,
probability estimates corresponding to high counts are reduced because of
the large increase in the value of the normaliser. Probability estimates
derived from low counts increase. Meanwhile, zero counts
now result in non-zero estimates for the corresponding events, but these are
small --- less than half of the maximum likelihood estimate for events seen
once. Therefore, this simple technique prevents the information hiding
exhibited by maximum likelihood estimates. While this offers a cheap
escape, a word of warning is warranted: expected likelihood estimation can
result in very poor probability estimates. Gale and Church~(1994) show that
for models of English word bigrams, expected likelihood estimates are
significantly worse predictors than maximum likelihood estimates.
After revising the parameter estimates to use expected likelihood, the
analyser was re-run, once again using both parameterisations ($C_a$ and
$C_b$). The results are shown in table~\ref{tb:ce_results_ele8}.
The conceptual model is slightly improved by the expected likelihood
estimation (though none of the changes in accuracy are statistically
significant).
The generalisation allowed by the conceptual groupings means that counts
are generally not low, so adding 0.5 does not have much effect. In contrast,
the lexical model has many low counts, so there is a greater change.
The new estimates result in significantly degraded performance,
demonstrating the dangers of applying simplistic smoothing techniques.
\begin{table*}
\centering
\begin{tabular}{|l|r|r|r|r|r|r|r|r||r|} \hline
Answer & \multicolumn{1}{c|}{of} & \multicolumn{1}{c|}{for} &
\multicolumn{1}{c|}{in} & \multicolumn{1}{c|}{about} &
\multicolumn{1}{c|}{with} & \multicolumn{1}{c|}{from} &
\multicolumn{1}{c|}{on} & \multicolumn{1}{c||}{at} &
\multicolumn{1}{c|}{Total} \\
\hline
Number of cases & 94 & 78 & 39 & 22 & 18 & 13 & 12 & 6 & 282 \\
\hline
\hline
$C_a$ accuracy & 24\% & 28\% & 38\% & 64\% & 17\% & 8\% &
33\% & 33\% & 30\% \\
\hline
$C_b$ accuracy & 31\% & 44\% & 28\% & 45\% & 6\% & 23\% & 50\%
& 33\% & 34\% \\
\hline
\end{tabular}
\caption{Expected likelihood estimate results for compound paraphrasing}
\label{tb:ce_results_ele8}
\end{table*}
Interestingly, the accuracy of the lexical model is not degraded
on five out of the eight prepositions.
All of the performance decrease is sustained on just three
of the prepositions. The three prepositions
showing reduced accuracy are exactly those with the most training
data: \lingform{of}, \lingform{for} and \lingform{in} (see
table~\ref{tb:ce_method_trainsizes} in section~\ref{sec:ce_method}).
A possible interpretation of this is that smoothing yields an improvement in
accuracy on cases suffering data sparseness, but it also forces the analyser to
consider rare cases more often than it should. Recall from
section~\ref{sec:ce_design} that one of the intuitive motivations for using
distributions of prepositional phrases to estimate those of compound noun
paraphrases, was that impossible paraphrases would be ruled out.
Smoothing by
expected likelihood estimates undermines the model's ability to rule out
these rare cases, leading to lower performance on the more frequent cases.
\subsubsection*{Avoiding data sparse distributions}
The problem identified above is exacerbated by the large number of data
sparse alternatives. Only
three of the prepositions do not suffer from severe data sparseness. Each of
the other five is a potential distractor, and under noisy conditions, the
chances are that one of them will be assigned high enough probability to be
selected incorrectly. Suppose, for example, that the analyser is faced with a
compound having a \lingform{for}-paraphrase. The relative abundance of
data giving information about \lingform{of}- and \lingform{in}-paraphrases
will prevent these from being incorrectly chosen above a
\lingform{for}-paraphrase. However, the distributions estimated for the
other five prepositions are each subject to large purely statistical
fluctuations, with a low signal-to-noise ratio. While on average, they all
have a low probability, chances are that one of them will be estimated to
have high probability for the given compound because of noise. Because of
this, the analyser is frequently led to a mistaken conclusion.
Therefore, a better design for the decision procedure is to ignore information
from the data sparse distributions. The analyser computes only three
probabilities, those of paraphrases involving \lingform{of}, \lingform{for}
and \lingform{in}, and takes the maximum. Since this design only ever
selects one of these three prepositions,
it cannot correctly analyse other types
of compounds. Because these three only make up three-quarters
of the test set, this restriction places a ceiling
on the performance of 75\% accuracy. However,
so far the results have been much lower than this anyway.
Table~\ref{tb:ce_results_mle3} shows the performance of the analyser when
restricted in this way. Maximum likelihood estimates have been used for the
conceptual model (the results for expected likelihood estimates do not differ
significantly), while both estimation methods are shown for the lexical
model.
All three results show sizeable improvements over the corresponding models
under the unrestricted design. This is further evidence that consideration of
the amount of available training data is of critical importance to statistical
language learning systems. It is encouraging to note that the expected
likelihood estimation now yields a slight improvement to the lexical model.
This is in stark contrast to the performance degradation observed in the
unrestricted design. The accuracy of 47\% is the best result obtained by the
system under any configuration.
\begin{table*}
\centering
\begin{tabular}{|l|r|r|r|r|r|r|r|r||r|} \hline
Answer & \multicolumn{1}{c|}{of} & \multicolumn{1}{c|}{for} &
\multicolumn{1}{c|}{in} & \multicolumn{1}{c|}{about} &
\multicolumn{1}{c|}{with} & \multicolumn{1}{c|}{from} &
\multicolumn{1}{c|}{on} & \multicolumn{1}{c||}{at} &
\multicolumn{1}{c|}{Total} \\
\hline
Number of cases & 94 & 78 & 39 & 22 & 18 & 13 & 12 & 6 & 282 \\
\hline
\hline
$C_a$ accuracy (\acronym{mle}) & 43\% & 50\% & 62\% & 0\% & 0\% & 0\% & 0\%
& 0\% & 37\% \\
\hline
$C_b$ accuracy (\acronym{mle}) & 65\% & 56\% & 67\% & 0\% & 0\% &
0\% & 0\% & 0\% & 46\% \\
\hline
$C_b$ accuracy (\acronym{ele}) & 55\% & 69\% & 67\% & 0\% & 0\% &
0\% & 0\% & 0\% & 47\% \\
\hline
\end{tabular}
\caption{Paraphrasing results when restricted to three most common
answers}
\label{tb:ce_results_mle3}
\end{table*}
One final result deserves mention. Since the prepositions \lingform{of} and
\lingform{for} comprise more than 60\% of the test set, a further
improvement might be made by restricting the design to just these two. The
results of this experiment are shown in table~\ref{tb:ce_method_mle2}.
The lexical model makes no further gain from the restriction, although
expected likelihood estimation gives slightly better results, confirming the
contrast observed in the previous restricted design. The conceptual model is
improved to an accuracy of 40\%. Using the same pooled $z$-test as above gives
a $z$-score of 1.66, showing that the improvement over guessing is
statistically significant at the 5\% level (one-tailed test, $p = 0.049$).
\begin{table*}
\centering
\begin{tabular}{|l|r|r|r|r|r|r|r|r||r|} \hline
Answer & \multicolumn{1}{c|}{of} & \multicolumn{1}{c|}{for} &
\multicolumn{1}{c|}{in} & \multicolumn{1}{c|}{about} &
\multicolumn{1}{c|}{with} & \multicolumn{1}{c|}{from} &
\multicolumn{1}{c|}{on} & \multicolumn{1}{c||}{at} &
\multicolumn{1}{c|}{Total} \\
\hline
Number of cases & 94 & 78 & 39 & 22 & 18 & 13 & 12 & 6 & 282 \\
\hline
\hline
$C_a$ accuracy (\acronym{mle}) & 66\% & 65\% & 0\% & 0\% & 0\% & 0\% & 0\%
& 0\% & 40\% \\
\hline
$C_b$ accuracy (\acronym{mle}) & 84\% & 63\% & 0\% & 0\% & 0\% &
0\% & 0\% & 0\% & 45\% \\
\hline
$C_b$ accuracy (\acronym{ele}) & 71\% & 79\% & 0\% & 0\% & 0\% &
0\% & 0\% & 0\% & 46\% \\
\hline
\end{tabular}
\caption{Paraphrasing results when restricted to two most common answers}
\label{tb:ce_method_mle2}
\end{table*}
\subsection{Discussion}
\label{sec:ce_discussion}
The overall best accuracy is 47\%; however, this is difficult to
evaluate without more information.
No other work in statistical language learning has attempted this task
and there are few knowledge-based systems that have been quantitatively
evaluated. Leonard's~(1984) accuracy of 76\% was measured
on the development set and was only achieved through labour intensive
coding of appropriate lexical information. McDonald's~(1982) estimate
that 60\% of his test set could be handled by his program also
requires extensive knowledge to be provided.
A more comparable result is that of Vanderwende~(1994). She
reports that on a random test sample her program has an accuracy
of 52\%. While it is based on knowledge extracted from
Longman's dictionary, the development of the extraction patterns,
and of the analysis heuristics and weights,
again involves substantial effort (though admittedly far less
than that required for the other two). Also it assumes that an
appropriate dictionary is available (Longman's is not freely
available; research licences are considerably more costly than
those for any corpus).
In contrast, my program is extremely
simple to implement, uses freely available resources and adapts
to the text type used for training. In this light, 47\% accuracy
is a good result. Since it is significantly better than guessing,
we can, at the very least, conclude that measurements of
prepositional phrase distributions assist
in predicting the paraphrases of noun compounds.
One significant problem with evaluation is that we don't know the optimal
accuracy. For example, if compound paraphrases
are highly dependent on context, we could already be close to the optimal
accuracy. Without some independent means of measuring the optimal
accuracy, such as experiments with human subjects, it is impossible to draw
conclusions about the quality of this performance. The test set is too small
to offer much help in estimating the optimal accuracy, since
there are only eleven duplicated test cases.
Of these, two have different preferred paraphrases on each occurrence.
This gives some evidence that noun compound paraphrases are
context dependent, but to what degree is impossible to say.
Even if we knew the optimal accuracy, there is still the problem of data
requirements. How much of the non-optimal error rate is due to faulty
assumptions in the model, and how much to inadequate training data
volumes? Unless the performance closely approaches the optimal accuracy,
and lacking an applicable theory of data requirements,
the shortfall could be due to
either data sparseness or poor modelling; the scientific implications of the
experiment are uncertain.
Still, we can make some qualitative conclusions about the different strategies
that have been applied. First, conceptual association did not work well
because prepositions have purely lexical collocation properties. The test
compound \lingform{university education} has most preferred paraphrase in
context involving \lingform{in}. The conceptual model predicts a
paraphrase involving \lingform{at}. Likewise, \lingform{street scenes} has
most preferred paraphrase in context involving \lingform{on}, but the
conceptual model chooses \lingform{in}. These examples suggest that
concepts should be associated with roles rather than particular prepositions.
Second, the assumption that the head and modifier lend independent
evidence (the head-modifier autonomy assumption) leads the analyser into
error in some test cases.
For example, the lexical model assigns
\lingform{computer catalog} an \lingform{of}-paraphrase, presumably
because catalogs are typically composed of items and systems are often
composed of computers. However, in this case the paraphrase
should involve \lingform{on}. It is only through the interaction between the
head and modifier that this can be determined.
A similar circumstance leads
to error on \lingform{welfare agencies}, which the analyser assigns an
\lingform{on}-paraphrase to. The fact that agencies are not usually the
recipients of welfare is dependent on the nature of both agencies and
welfare, and thus both the head and modifier. These two examples are clear
cases where the error is due to violated assumptions, but typically the reason
for failure is not so obvious.
Therefore, it is difficult to assess by inspection
the degree of error resulting from modelling simplifications.
Finally, we've seen that a restricted model, which
only considers the more frequent answers, is beneficial
when training data is both sparse and noisy.
In contrast, a simple
smoothing technique results in dramatically worse performance. The moral
is that smoothing is just one instrument in overcoming data sparseness; it
must be coupled with an understanding of the effects of data sparseness,
which is the kind of knowledge that should be provided by a theory of data
requirements. If there is any strong conclusion to be drawn from the
experiments on compound paraphrasing, it is that the ability to reason about
data requirements is vital to statistical language learning research.
\chapter{Conclusion}
\label{ch:conclusion}
\section{Summary}
In \sll\ research, we are engaged in a search through a space of
possible designs. \sll\ provides an architecture for building
corpus-based language learners using probabilistic models; but
what the architecture cannot tell us is which probabilistic models
are most appropriate. This we must discover by a process of exploration.
The search through design space is constrained by the need both for
linguistic fidelity and for practical construction. A good model must
make accurate predictions about language and it must be simple to train
and apply. The search is therefore a complex one. In the few years since
\sll\ has become popular, we have only touched the surface of the design
space. This thesis is about exploring that design space further.
\subsection{Theoretical Contributions}
In pursuit of that goal, the thesis makes two key theoretical
contributions. First, it identifies a new class of designs by providing an
architectural theory of statistical natural language processing, the
meaning distributions theory. Second, it works toward the development
of a vital tool that will assist in future navigation of the design space, a
mathematical theory of data requirements. I will discuss each of these in
turn.
\subsubsection*{Meaning distributions}
In chapter~\ref{ch:md}, I developed the meaning distributions theory for
statistical language learning. According to this view, statistical models
of language involve probability distributions over semantic
representations. A probabilistic conceptual model, whose parameters are
estimated from a corpus, captures semantic expectations, which are then
used to guide natural language processing tasks. By emphasising
semantic expectations, the theory is effectively
a revised form of semantically-driven \nlp; however, it specifically
avoids several failings of early work in that area.
The meaning distributions theory holds that a probabilistic conceptual
model can be used in conjunction
with an independent lexico-syntactic component to perform natural
language analysis tasks.
The lexico-grammar relates the semantic representation to corresponding
syntactic and textual forms. Rather than assigning probabilities to the
syntactic representations independently, such probabilities are
determined from the semantic distributions
by viewing lexico-grammar as constraints.
This theory is only useful in as much as it allows the construction of
working natural language processors. Evaluation of the theory must be
empirical, determined by the performance of systems designed using it.
While it is derived from reflections on human language processing, it is
in no way intended to represent human behaviour. Rather it represents
new territory in the design space of \sll\ models. Whether that territory
turns out to be fertile is an empirical question.
One useful implication of the theory concerns the selection of corpus
materials for \sll\ training. The emphasis placed on context by the
meaning distributions theory highlights the importance of register to
probabilistic language models. Since it is crucial that such models take
account of register variations, and since we do not currently have
sufficiently large corpora to condition on register, I have advocated the
use of large register-pure corpora as the best short term compromise.
\subsubsection*{Data requirements}
While the meaning distributions theory opens up new territory
in design space, we still have only a limited understanding
of one of the most fundamental limits to exploration
of that space: training data requirements.
Chapter~\ref{ch:dr} has laid out a blueprint for developing a predictive
theory of data requirements, a theory that will be an invaluable tool for
further exploration.
It is obvious that the success of \sll\ systems depends on the volume of
data used to train them; yet the exact relationship is very poorly
understood. Predictions of the amount of training data required for a
given \sll\ model are not directly available from the statistics and
machine learning disciplines, and yet such predictions are necessary to
both scientific research and commercial applications.
In this thesis, I have reviewed a range of statistical learning systems with
the goal of identifying data points that the proposed theory might
explain. One outcome of this study is the observation that less than 1
training instance per parameter is sometimes sufficient. This appears
to be due to the highly non-uniform distributions commonly found in
language, suggesting that any theory of data requirements will have to
account for this phenomenon.
I have also built a framework for reasoning about data requirements in
\sll\ systems and given several results regarding the
expected accuracy of a mode-based learner after a given volume of
training data. Further, under certain assumptions, I have derived a
computable expression for the expected accuracy of such a learner in
terms of training data volume. This can be used to predict data
requirements under limited conditions.
Perhaps the most limiting assumption required by this result is that the
bin distribution be uniform. Therefore, I have also reported a series of
simulations that both validate the result and
explore the effects of non-uniform bin distributions.
As predicted, non-uniformity leads to much more rapid learning.
These results lay a foundation for the development of a general
predictive theory of data requirements, without which exploration of the
design space will continue to involve substantial guesswork.
\subsection{Experiments on Noun Compounds}
To illustrate these two theoretical contributions, this thesis has addressed
the problem of analysing noun compounds. Noun compounds form a
promising medium for exploring the design of statistical language
learners because they are both common and highly ambiguous, thus
posing a significant challenge to traditional \nlp\ techniques.
Furthermore, the importance of semantic expectations in the
interpretation of noun compounds highlights a weakness of existing \sll\
models, that being the lack of attention to semantics.
The fact that existing noun compound processing systems are
primarily knowledge-driven, and have generally failed due to the
enormous development effort required, makes noun compounds an ideal
subject for the automatic knowledge acquisition afforded by the \sll\
approach.
In chapter~\ref{ch:experimental}, I have described work on two
computational tasks that have traditionally been attempted
using knowledge-based systems. The first is grammatical analysis of
syntactically ambiguous noun compounds, and the second semantic
analysis of two word noun compounds.
\subsubsection*{Parsing}
The work on parsing noun compounds began with the development of a
novel probabilistic model of noun compound syntax. Following the
meaning distributions theory, this model uses conceptual association to
evaluate the dependency relations within a noun compound. All
previous statistical algorithms were based on adjacency. The new model
correctly predicts that
two-thirds of all noun compounds
should be left-branching. This prediction requires the meaning
distributions theory and is incompatible with adjacency algorithms.
The model has been implemented and evaluated on three word English
noun compounds from an encyclopedia corpus. The implementation
uses unannotated training data and freely available lexical resources,
runs on a desktop computer and can rapidly analyse any noun compound,
not just those in the test set.
Experiments with the system show that the prediction accuracy reaches
around 81\%. This closely approaches the average performance rate of
around 82\% achieved by human judges attempting the same task.
The experiments performed with human judges are also of
independent value, since they provide quantitative evidence that noun
compounds have significant context dependence. This context
sensitivity is sufficient to cause at least 14\% of compounds to be
incorrectly parsed when context is not available.
Furthermore, comparison under identical experimental conditions shows
that the new model is significantly better than one based on association
between adjacent words, as all previously proposed models were.
In addition to supporting the use of this model for noun compound parsing,
this result provides some evidence that designs based on the meaning
distributions theory are better than existing ones. The new territory of
design space has some empirical commendation.
\subsubsection*{Semantic analysis}
This thesis has also presented a novel model for paraphrasing
non-verbal-nexus noun compounds, the first probabilistic model of noun
compound semantics. As for the syntactic model, it is based on the
meaning distributions theory. It assigns probabilities to different
paraphrases by combining information from the head with information
from the modifier independently.
This model has been implemented and evaluated on two word English
noun compounds. Once again, the system uses only freely available
lexical resources and unannotated training data. The latter is made
possible by using distributions of prepositional phrases to train a
probabilistic conceptual model of noun compounds, a strategy suggested
by the meaning distributions theory.
Experiments show that the accuracy of the trained model is up to 47\%
and is significantly better than the baseline, demonstrating that observing
the distributions of prepositional phrases can assist
in paraphrasing non-verbal-nexus noun compounds. No estimate
of the optimal context-free accuracy is available and comparable
performance tests of knowledge-based systems have not been conducted.
Therefore, the result is difficult to evaluate. The task is
similar to that addressed by Vanderwende's~(1994) dictionary-based
system which achieved 52\% accuracy.
Advantages of the statistical algorithm reported
here include that, by comparison, it is trivial to implement and that it
adapts to the training text.
Data sparseness is a significant problem in the paraphrasing experiments.
Not only is the volume of training data low, but there is significant noise
as well. Even if the accuracy of the system was known to fall short of the
optimal accuracy, it would be impossible to conclude whether this was
due to insufficient training data or a poor model. Such questions require
a predictive theory of data requirements. Of particular interest is the
observation that applying a simple smoothing technique
drastically reduced the performance. In contrast, a restricted model that is
sensitive to training data volumes, gave the best performance. These
results show that a general, practical theory of data requirements is
fundamentally important to the search for good \sll\ designs. Without
such a tool, exploration of the design space is necessarily haphazard.
\subsection{What has been Achieved?}
In summary, there are three key achievements.
\begin{enumerate}
\item The meaning distributions theory has opened up a new and
promising area of the \sll\ design space, by proposing a probabilistic
representation of semantic expectations.
The success of the noun compound parsing model provides empirical
evidence that the new territory is a promising one.
\item The work on data requirements has laid the
foundations for development of a vital tool to assist future explorations
of the design space. The necessity of a theory of data requirements
is highlighted in the experiments on noun compound paraphrasing.
\item At least one \nlp\ problem has been solved: statistical
parsing of noun compounds. The accuracy of the model developed
is significantly better than previously proposed algorithms
and closely approaches that achieved by humans attempting the same task.
\end{enumerate}
\section{Looking Ahead}
This is the section usually headed with a quotation about
the unbounded nature of science that typically implies one step has been
made and the next should follow from it. But I think computational
linguistics as a science is relatively immature; we are still at the stage of
exploring what can be done, especially in \sll, and we should expect just as
many backward steps as forward ones. For this reason, I will not
optimistically present a bright way forward to the future.
Instead, I will give
a collection of suggestions for things to try.
\subsection{Meaning Distributions}
\begin{description}
\item[Cross-register performance:] To substantiate the position taken in
section~\ref{sec:md_register}, a study could be made of the performance of
a Markov model tagger trained on one register and applied to another. This
might also lead to a measurement of the loss of accuracy associated with
using cross-register training data.
\item[Probabilistic dependency grammar:] Probabilistic context free
grammars have been investigated in great detail. The experimental work in
this thesis suggests that models based on probabilities of dependency
relations, rather than of constituents,
do better for parsing noun compounds. Perhaps then a
grammar-wide probabilistic model of dependency relations would prove
more successful than the constituent based models tried so far. As
mentioned in section~\ref{sec:sn_grammars}, some proposals already exist,
but these need to be implemented and tested.
\item[Meaning distributions:] The meaning distributions theory, recording
probabilities of semantic representations and regarding grammar as a form
of constraint, can be applied to many \sll\ tasks.
As I have emphasised before,
the value of the theory lies in the performance of systems designed
according to the theory's dictates.
Various \nlp\ tasks could be approached in this
way, including anaphora resolution, prepositional phrase attachment and
relative clause attachment.
\end{description}
\subsection{Data Requirements}
\begin{description}
\item[Statistical inheritance:] The probabilistic models used in this thesis
employ a fixed set of thesaurus categories. To reduce data sparseness,
words within a category are assumed to behave similarly. However,
the cost of this assumption is
paid for all words, not just those affected by data sparseness. A better
alternative would be to vary the size of the thesaurus categories dynamically,
responding to the degree of data sparseness appropriately.
One approach would be to try successively larger category sizes. First
a decision would be made using only data about the particular
word, and then the statistical confidence of the decision estimated.
If the confidence estimate failed to support the decision with
high probability, then the process would be repeated, each time
using data about larger categories until the statistical test proved
significant.
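A sketch of this back-off procedure in Python (purely illustrative; all
three callables are hypothetical interfaces, and the word itself is
assumed to be yielded as the smallest class):
\begin{verbatim}
def decide_with_backoff(word, classes_of, decide, is_confident):
    # classes_of(word) yields the classes containing the word from
    # smallest to largest; decide() and is_confident() stand for the
    # analysis step and the statistical significance test
    decision = None
    for cls in classes_of(word):
        decision = decide(cls)
        if is_confident(decision, cls):
            return decision
    return decision   # fall back to the largest class
\end{verbatim}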
A natural way to provide the various sized categories would be to
employ a taxonomy like WordNet. Classes higher up in the
taxonomy will contain more words, so that association data for these
classes will be much more {\em precisely} estimated. However, parameter
estimates for higher classes will less {\em accurately} reflect the
behaviour of individual words within the class,
because the higher classes inevitably contain more
variation within them. The fewer words in a class, the more
accurately the parameter estimates for the class will reflect the
properties of each word in it. The strategy then is to select
the smallest class that contains the desired word for which the
appropriate parameter can be estimated confidently. A form of statistical
inheritance has already been used for acquiring selectional restrictions by
Ribas~(1995).
\item[Quantifying non-uniformity:] One of the most severe limitations of the
data requirements results in section~\ref{sec:dr_global} is the assumption
that bins are equi-probable. The simulations in
section~\ref{sec:dr_simulations} are only a small step in exploring the
effects of non-uniformity. Ideally further simulations could lead to a
quantification of non-uniformity. The theory begs to be extended in many
other ways too (consider weakening any of the assumptions). For example,
one direction worth pursuing would be predicting the data requirements for
unsupervised algorithms such as expectation maximisation.
\end{description}
\subsection{Noun Compounds}
\begin{description}
\item[Context dependence of compounds:] The human judgement
experiment in section~\ref{sec:cy_human} suggests that noun compounds are
fairly context sensitive. The proposals for dealing with this (see
section~\ref{sec:cn_context}, especially Meyer's thesis and
McDonald's cognate heuristic: Meyer,~1993; McDonald,~1982)
involve matching the compound with meanings from the recent
context. One way to do this within the \sll\ approach
would be to incorporate the recent cotext as additional training data
and give it more weight.
\item[Adjectival modifiers:] Adjectives exhibit similar syntactic ambiguities
to those of noun compound modifiers. Consider
example~\ref{eg:conclusion_adj}, where \lingform{small} could specify the
size of either the transaction or the fee.
\begin{examples}
\item small transaction fee \label{eg:conclusion_adj}
\end{examples}
A model almost identical to that proposed in section~\ref{sec:cy_model}
could be used for this task. This would entail a conceptual representation of
predicating adjectives, which might be based on the adjectival groupings
found in WordNet, or on those extracted by Hatzivassiloglou~(1994).
\item[Interpreting nominalisations:] The experiments on noun compound
paraphrasing concerned non-verbal nexus compounds. One possible
approach to nominalisations that is consistent with the meaning distributions
theory is to use subject-verb-object statistics of the kind collected by
Church~\etal~(1991a). The model would be based on the joint distribution
of predicates and their arguments, which would lead to preferences for the
different semantic categories of nominalisations (see
section~\ref{sec:cn_meaning}).
\end{description}
\section{Introduction}
The importance of the role played by integrable systems is hard to
overestimate, given both their manifold applications and their profound
connections to a number of areas in pure mathematics, see
e.g.\ \cite{b98,d,rs}. In particular,
finite-dimensional integrable Hamiltonian dynamical systems are well
understood, and the key tool in their study
is the Liouville theorem \cite{l} relating integrability to existence of
sufficiently many independent integrals of motion in involution.
This beautiful and well-studied setup involves a blanket assumption that the
systems under study do not involve explicit dependence on the evolution
parameter, i.e., time. Allowing for such an explicit dependence is far from
trivial and necessitates certain
substantial modifications of the very notion of integrability, see
e.g.\ \cite{aspla} and references therein for details. It should be pointed
out that the research in this subfield is relatively scarce compared with that
centered around the integrable dynamical systems in the setting of the
Liouville theorem, cf.\ e.g.\ \cite{b98,d, g, mr} and references therein.
However, there is an important motivation for the study of explicitly
time-dependent dynamical systems and their integrability: the Painlev\'e
equations, which play an important role in many areas of modern mathematics
and in applications, cf.\ e.g.\ \cite{J, n} and references therein, as well as
certain natural generalizations thereof \cite{bcr}, can be written as
time-dependent dynamical systems, see e.g.\ \cite{bcr, J, oka, t}.
In the present paper we take an approach to the study of time-dependent
dynamical systems and their integrability that, to the best of our knowledge,
was not systematically explored in the earlier literature. Namely, we
simultaneously consider several vector fields and the associated dynamical
systems, each with its own time,
while allowing for an explicit dependence of all vector fields on all times at once and imposing the Frobenius integrability condition guaranteeing local existence of associated multitime solutions, as explained below.
In order to construct such vector fields depending on all times at once, we begin with a Lie algebra of time-independent vector fields on the underlying manifold and look for the multiparameter deformations of this algebra having the desired properties.
We expect this approach, possibly supplemented by certain additional assumptions, to yield
new nonautonomous Painlev\'e-type dynamical systems, and in the case of Lie algebras related to the H\'enon--Heiles system this is already confirmed by examples from \cite{b2019}.\looseness=-1
Within the above setup, the following definition of {\em Frobenius integrability} is naturally motivated by an
important notion of an integrable distribution and by the Frobenius theorem
from differential geometry, cf.\ e.g.\ \cite{Fecko, Lundell, mr, o}.
\begin{definition}
A set of $n$ non-autonomous vector fields $Y_{i}(t_{1},\ldots,t_{n})$, each
depending on $n$ parameters $t_{i}$, on a finite-dimensional manifold $M$ is
called \emph{Frobenius integrable} if the following zero-curvature condition
(Frobenius condition) holds:%
\begin{equation}
\frac{\partial Y_{i}}{\partial t_{j}}-\frac{\partial Y_{j}}{\partial t_{i}%
}-\left[ Y_{i},Y_{j}\right] =0\text{\ for all }i,j=1,\ldots,n, \label{genFr}%
\end{equation}
where $[\cdot,\cdot]$ stands for the Lie bracket (commutator) of vector fields.
\end{definition}
It is rather straightforward, cf.\ e.g.\ \cite{Fecko,J}, to see that if the
Frobenius condition (\ref{genFr}) is satisfied then the associated set of $n$
dynamical systems on $M$%
\begin{equation}
\frac{dx^{\alpha}}{dt_{i}}=Y_{i}^{\alpha}(\xi,t_{1},\dots,t_{n}),\quad
\alpha=1,\dots,m=\dim M\text{, \ \ }i=1,\dots,n, \label{sys2}%
\end{equation}
possesses a local common multi-time solution $x^{\alpha}=x^{\alpha}(t_{1}%
,\dots,t_{n},\xi_{0})$ for each point $\xi_{0}\in M$, i.e., for each initial
condition $x^{\alpha}(0,\dots,0,\xi_{0})=x_{0}^{\alpha}$. Here $x^{\alpha}$
are local coordinates on $M$ on a neighborhood of a point $\xi_{0}$, and
$x_{0}^{\alpha}$ are coordinates of $\xi_{0}$ in this coordinate system;
$\xi\in M$ denotes a point on $M$ and $Y^{\alpha}(\xi,t_{1},\ldots,t_{n})$ is
the value of the $\alpha$-th component of the vector field $Y$ w.r.t.\ the local
coordinate system given by $x^{\alpha}$ at the point $\xi$ at the times
$t_{1},\ldots,t_{n}$. Under obvious technical assumptions such solutions, defined on
a set of overlapping local coordinate systems, can be glued together to
define an integral submanifold $\xi=\xi(\xi_{0},t_{1},\dots,t_{n})$ passing
through $\xi_{0}$. Such a submanifold gives us a natural coordinate-free
representation for the solution of the system in question.
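To illustrate the condition (\ref{genFr}), consider the following toy example (added here for illustration only): on $M=\mathbb{R}$ with coordinate $x$, take $n=2$ and
\[
Y_{1}=\partial_{x},\qquad Y_{2}=(x-t_{1})\,\partial_{x}.
\]
Then $\partial Y_{1}/\partial t_{2}=0$, $\partial Y_{2}/\partial t_{1}=-\partial_{x}$ and $[Y_{1},Y_{2}]=\partial_{x}$, so (\ref{genFr}) holds, and indeed the flows $dx/dt_{1}=1$ and $dx/dt_{2}=x-t_{1}$ possess the common multi-time solution $x(t_{1},t_{2})=t_{1}+x_{0}e^{t_{2}}$ with $x(0,0)=x_{0}$.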
Note that equations (\ref{genFr}) formally look exactly like the
zero-curvature-type equations arising in the study of integrable partial
differential dispersionless systems with Lax operators written in terms of vector fields,
cf.\ e.g.\ \cite{d, ms, as} and references therein. On the other hand, if one of
the times $t_{i}$ is identified with the variable spectral parameter and the vector fields are replaced by matrices, then
equations (\ref{genFr}) formally look like the isomonodromic representations for the
Painlev\'e and Painlev\'e-type systems, cf.\ e.g.\ \cite{bcr}.
Suppose now that $M$ is endowed with a nondegenerate Poisson structure $\pi$,
so we have a Poisson manifold $(M,\pi)$, and the vector fields $Y_{i}$ are
Hamiltonian, that is, $Y_{i}=\pi dH_{i}$ for some Hamiltonian functions
$H_{i}$ depending explicitly, in general, on all times $t_{k}$: $H_{i}%
=H_{i}(\xi,t_{1},\dots,t_{n})$. We stress that in our setup the Poisson
structure $\pi$ does not depend on any of $t_{k}$.\looseness=-1
As by definition of the Poisson bracket $\{\cdot,\cdot\}$ associated with
$\pi$ we have $\left[ \pi dH_{i},\pi dH_{j}\right] =-\pi d\left\{
H_{i},H_{j}\right\} $ (we use the sign convention $\left\{ H_{i}%
,H_{j}\right\} =\left\langle dH_{i},\pi dH_{j}\right\rangle $, where
$\left\langle \cdot,\cdot\right\rangle $ is the natural pairing between
$T_{z}^{\ast}M$ and $T_{z}M$, although the opposite sign convention also
occurs in the literature), we immediately obtain that for (\ref{genFr}) to
hold it suffices that the following zero-curvature condition (Frobenius
condition) for Hamiltonians $H_{k}$ holds:
\begin{equation}
\frac{\partial H_{i}}{\partial t_{j}}-\frac{\partial H_{j}}{\partial t_{i}%
}+\{H_{i},H_{j}\}=0,\quad i,j=1,\dots,n \label{zcr}%
\end{equation}
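The toy example above fits this Hamiltonian setting: on $M=T^{\ast}\mathbb{R}$ with Darboux coordinates $(q,p)$ and the bracket $\{f,g\}=\frac{\partial f}{\partial q}\frac{\partial g}{\partial p}-\frac{\partial f}{\partial p}\frac{\partial g}{\partial q}$, the Hamiltonians $H_{1}=p$ and $H_{2}=(q-t_{1})p$ satisfy (\ref{zcr}), since
\[
\frac{\partial H_{1}}{\partial t_{2}}-\frac{\partial H_{2}}{\partial t_{1}}+\{H_{1},H_{2}\}=0+p-p=0.
\]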
In the case when the vector fields $Y_{i}$ do not depend explicitly on the times
$t_{i}$ the conditions (\ref{genFr}) and (\ref{zcr}) reduce to $\left[
Y_{i},Y_{j}\right] =0$ \ for all $i,j$ and $\{H_{i},H_{j}\}=0$ for all $i,j$,
respectively. In such a case the vector fields $Y_{i}$ span an involutive (and
thus integrable by the Frobenius theorem \cite{Fecko}, \cite{Lundell})
distribution $\mathcal{D}$ while the Hamiltonian vector fields constitute a
Liouville integrable system under certain additional regularity conditions.\looseness=-1
Recall that the members of the hierarchy associated with a given
Painlev\'{e} equation admit non-autonomous Hamiltonian formulations with
evolution parameters $t_{j}$ and explicitly time-dependent Hamiltonians
that satisfy (\ref{zcr}), cf.\ e.g.\ \cite{k, oka, t}. This suggests that,
conversely, some of the dynamical systems with the Hamiltonians that satisfy
(\ref{zcr}) could possess the Painlev\'{e} property, as confirmed by the examples
related to the H\'enon--Heiles system \cite{b2019} but we defer the investigation of this idea in more detail to future work.\looseness=-1
Motivated by the above, in the present paper we study existence of
polynomial-in-times deformations of Lie algebras of autonomous Hamiltonians
$h_{i}$ (so the associated Hamiltonian vector fields $X_{i}=\pi dh_{i}$
satisfy (\ref{inv}) with $c_{ij}^{k}$ being constants) such that the deformed
Hamiltonians $H_{i}$ satisfy the condition (\ref{zcr}). In this way we
produce, from the system of non-commuting autonomous vector fields $X_{i}=\pi
dh_{i}$, polynomial-in-times vector fields $Y_{i}$ that satisfy (\ref{genFr}),
which guarantees existence of common multi-time solutions for the set of
non-autonomous systems (\ref{sys2}). Under certain natural assumptions
this deformation is shown to be unique, see Theorem \ref{main} in Section 2 for details.
Then, in Section~3, we apply our general theory to the so-called
quasi-St\"{a}ckel systems \cite{mb} and present a way of explicit computation
of the deformations in question in this particular setting. As a result, we construct a
number of families of
non-autonomous Hamiltonian systems with $n$ degrees of freedom integrable in
the Frobenius sense.\looseness=-1
\section{Non-autonomous deformations of Lie algebras\\ yielding Frobenius
integrability}
Consider an $n$-dimensional ($1<n<\dim M$) Lie algebra $\mathfrak{g}%
=\mathrm{span}\{h_{i}\in C^{\infty}(M),i=1,\dots,n\}$ of smooth real-valued
functions on our Poisson manifold $\left( M,\pi\right) $, with the structure
constants $c_{ij}^{k}\in\mathbb{R}$ so that
\begin{equation}
\{h_{i},h_{j}\}=\sum_{k=1}^{n}c_{ij}^{k}h_{k} \label{alg}%
\end{equation}
is the Lie bracket on $\mathfrak{g}$, cf.\ Definition 6.41 in \cite{olver}.
We assume that the functions $h_{i}:M\rightarrow\mathbb{R}$ (the Hamiltonians)
are functionally independent. The functions $h_{i}$ define $n$ autonomous
conservative Hamiltonian systems, cf.\ (\ref{sys2}),
\begin{equation}
\frac{dx^{\alpha}}{dt_{i}}=\left( \pi(\xi)dh_{i}(\xi,t_{1},\dots
,t_{n})\right) ^{\alpha},\quad\alpha=1,\dots,m=\dim M,\text{ \ \ }%
i=1,\dots,n,\quad\label{hs}%
\end{equation}
on $M$, and the Hamiltonian vector fields $X_{i}=\pi dh_{i}$ satisfy
\begin{equation}
\left[ X_{i},X_{j}\right] =-\sum_{k=1}^{n}c_{ij}^{k}X_{k} \label{inv}%
\end{equation}
and thus span an involutive, and hence integrable in the sense of Frobenius,
distribution $\mathcal{D}$ on $M$.
It is well known that if (\ref{inv}) holds then one can choose a basis
$V_{1},\dots,V_{n}$ of vector fields spanning the distribution $\mathcal{D}$
such that
$\left[ V_{i},V_{j}\right] =0$ for all $i,j=1,\dots,n$,
cf.\ e.g.\ \cite{Lundell}\looseness=-1. However, a direct (explicit)
construction of such a basis is usually not possible and the basis $V_{i}$
does not have to consist of Hamiltonian vector fields. We would therefore like
to have a method of deforming, in a precise sense defined below, the
autonomous vector fields $X_{i}$ to (non-autonomous in general) vector fields
$Y_{i}$ such that
\begin{enumerate}
\item The vector fields $Y_{i}$ span the same distribution $\mathcal{D}$ as
$X_{i}$ do.
\item The vector fields $Y_{i}$ are Hamiltonian with respect to $\pi$ just as
$X_{i}$, so $Y_{i}=\pi dH_{i}$ for some functions $H_{i}$ depending in general
on all times $t_{i}$.
\item The dynamical systems (\ref{sys2}) defined by the vector fields $Y_{i}$
possess common multi-time solutions, so that the condition (\ref{genFr}) is
satisfied and (\ref{zcr}) is valid for $H_{i}$.
\end{enumerate}
Mathematically, this problem can be stated as follows.
\begin{problem}
\label{problem} Denote by $\mathfrak{g}[t_{1},\dots,t_{n}]$ the space of
multivariate polynomials in $t_{1},\dots,t_{n}$ with values in $\mathfrak{g}
$.\newline1. Can one find (and, if yes, under which conditions) nonzero
polynomials $H_{i}\in\mathfrak{g}[t_{1},\dots,t_{n}]$, $i=1,\dots,n$, such
that the non-autonomous Frobenius condition (\ref{zcr}) holds and such that
$Y_{i}=\pi dH_{i}$ span the same distribution $\mathcal{D}$ as $X_{i}$
do?\newline2. Is there a unique answer to question 1?\newline3. Is there an
explicit way to calculate $H_{i}$?
\end{problem}
Thus, we will look for polynomial-in-times deformations $H_{i}$ of the
Hamiltonians $h_{i}$ such that $\pi dH_{i}$ and $\pi dh_{i}$ span the same
distribution $\mathcal{D}$ and such that the non-autonomous Hamiltonian
systems
\begin{equation}
\frac{dx^{\alpha}}{dt_{i}}=\left( \pi(\xi)dH_{i}(\xi,t_{1},\dots
,t_{n})\right) ^{\alpha},\quad\alpha=1,\dots,m=\dim M,\text{ \ \ }%
i=1,\dots,n,\quad\label{nhs}%
\end{equation}
satisfy the Frobenius condition (\ref{genFr}) and thus possess common
multi-time solutions $\xi=\xi(t_{1},\dots,t_{n},\xi_{0})$.
The first two questions of Problem \ref{problem} can be answered in the
following general setting.
\begin{theorem}
\label{main}Suppose that in a finite-dimensional Lie algebra $\mathfrak{g}$
there exists a basis $\{h_{i}\}_{i=1}^{n}$ such that
\begin{itemize}
\item[i)] $\mathfrak{g}_{c}=\mathrm{span}\{h_{i}:i=1,\dots,d_{c}\}$, where
$d_{c}\geq1$, is the center of $\mathfrak{g}$, so that for any $i=1,\dots
,d_{c}$ we have $\{h_{i},h\}=0$ for any $h\in\mathfrak{g}$;
\item[ii)] $\mathfrak{g}_{a}=\mathrm{span}\{h_{i}:i=1,\dots,d_{a}\}$, where
$d_{a}\geq d_{c}$, is an Abelian subalgebra of $\mathfrak{g}$;\looseness=-1
\item[iii)] $\left\{ h_{i},h_{j}\right\} \in\mathrm{span}\left\{
h_{1},\dots,h_{\min(i,j)-1}\right\} $ for all $i,j\leq n-1$.
\end{itemize}
Then there exists a unique multi-time-dependent Lie algebra (multi-time formal
deformation of $\mathfrak{g}$) with the generators $H_{i}\in\mathfrak{g}[%
t_{d_{c}+1},\dots,t_{i-1}]$, $i=1,\dots,n\,$\ such that Frobenius
integrability conditions (\ref{zcr}) hold, provided that
\begin{itemize}
\item[a)] $H_{i}=h_{i}$, $i=1,\dots,d_{a}$,
\item[b)] $H_{i}|_{t_{d_{c}+1}=0,\dots,t_{i-1}=0}=h_{i}$, $i=d_{a}+1,\dots,n$.
\end{itemize}
\end{theorem}
The assumptions i)--iii) imply that $\mathfrak{g}_{c}\subset\mathfrak{g}
_{a}\subset\mathfrak{g}_{n-1}=\mathrm{span}\{h_{i}:i=1,\dots,n-1\}\subset
\nolinebreak\mathfrak{g}$. Moreover, iii) implies that $\mathfrak{g}_{n-1}$ is a nilpotent
subalgebra of the Lie algebra $\mathfrak{g}$ of codimension one. The theorem
of course encompasses the case when $\mathfrak{g}$ itself, rather than just
$\mathfrak{g}_{n-1}$, is nilpotent. Note also that thanks to the assumption a)
we have $H_{i}=h_{i}$ for $i=1,\dots,d_{a}$, so the Hamiltonians $h_{i}$
spanning the Abelian subalgebra $\mathfrak{g}_{a}$ are not deformed.\looseness=-1
We stress that both the statement of Theorem~\ref{main} and its proof given
below are purely algebraic, so Theorem~\ref{main} holds not just for Lie
algebras of functions on a Poisson manifold but for an arbitrary
finite-dimensional Lie algebra $\mathfrak{g}$ which satisfies the conditions
of the theorem, with
(\ref{zcr}) replaced by
\[
\frac{\partial H_{i}}{\partial t_{j}}-\frac{\partial H_{j}}{\partial t_{i}%
}+[\![ H_{i},H_{j}]\!]=0,\quad i,j=1,\dots, n, \eqno{(5')}
\]
where $[\![\cdot,\cdot]\!]$ denotes the Lie bracket in $\mathfrak{g}$, and
$n=\dim\mathfrak{g}$. Then
in the proof the Poisson bracket $\{\cdot,\cdot\}$ should also be replaced by
$[\![\cdot,\cdot]\!]$.\looseness=-1
\begin{proof}
By virtue of a) we have that $H_{i}=h_{i}$ for $i=1,\dots, d_{a}$. Now, as
$\partial H_{j}/\partial t_{d_{a}+1}=0$ for $j=1,\dots,d_{a}$ by assumption,
the deformed Hamiltonian $H_{d_{a}+1}$ can be determined from the following
(part of) equations (\ref{zcr}):
\looseness=-1
\begin{equation}
\label{zcr0}\{H_{j},H_{d_{a}+1}\}-\frac{\partial H_{d_{a}+1}}{\partial t_{j}%
}=0,\quad j=1,\dots,d_{a}.
\end{equation}
This system has a (unique due to b)) solution
\begin{equation}
H_{d_{a}+1}=\exp\left( -\sum\limits_{i=d_{c}+1}^{d_{a}}t_{i}\mathrm{ad}%
_{h_{i}}\right) h_{d_{a}+1}, \label{a}%
\end{equation}
as we have $\mathrm{ad}_{h_{i}}=0$ for $i=1,\dots,d_{c}$; recall that by
definition $\mathrm{ad}_{f}(h)=\{h,f\}$ for any $f,h\in\mathfrak{g}$. Thus,
$H_{d_{a}+1}$ depends only on times $t_{d_{c}+1},\dots,t_{d_{a}}$. Note that
the expression in (\ref{a}) is a polynomial in $t_{d_{c}+1},\dots,t_{d_{a}}$
by virtue of the nilpotency assumption iii).\looseness=-1 For the remaining
$H_{i}$ we proceed by induction. Suppose that $H_{j}=H_{j}(t_{d_{c}+1}%
,\ldots,t_{j-1})$, $j=1,\dots, k$ are already known. Then $H_{k+1}$ can be
uniquely determined from equations (\ref{zcr}) which due to the fact that
$\partial H_{j}/\partial t_{k+1}\allowbreak=0$ for $j=1,\ldots, k$ read
\begin{equation}
\{H_{j},H_{k+1}\}-\frac{\partial H_{k+1}}{\partial t_{j}}=0,\quad j=1,\dots,k.
\label{h}%
\end{equation}
The first $d_{c}$ of these equations yield
\[
\frac{\partial H_{k+1}}{\partial t_{j}}=0,\quad j=1,\dots,d_{c},
\]
as for $j=1,\dots,d_{c}$ we have $\{H_{j},H_{k+1}\}=0$ by the assumption i).
This means that $H_{k+1}$ does not depend on $t_{1},\ldots, t_{d_{c}}$. The
remaining equations in (\ref{h}) therefore have a (unique due to b)) solution
of the form (cf.\ e.g.\ \cite{f} and references therein)
\begin{equation}
H_{k+1}=\mathcal{P}\exp\left( -\int_{\gamma}\sum\limits_{i=d_{c}+1}%
^{k}\mathrm{ad}_{H_{i}}dt_{i}\right) h_{k+1} \label{aa}%
\end{equation}
where $\gamma$ is any (smooth) curve in (an open domain of) $\mathbb{R}%
^{k-d_{c}}$ connecting the points $0$ and $(t_{d_{c}+1},\dots,t_{k})$ and
where $\mathcal{P} \exp$ denotes the path-ordered exponential, see
e.g.\ \cite{o} and references therein. This integral does not depend on a
particular choice of $\gamma$ because of the zero-curvature equations
(\ref{h}). Parameterizing the curve $\gamma$ by a parameter $\tau^{\prime}%
\in\left[ 0,\tau\right] $ so that $\gamma(0)=0$ and $\gamma(\tau
)=(t_{d_{c}+1},\dots,t_{k})$ yields
\begin{equation}
\label{F}\int_{\gamma}\sum\limits_{i=d_{c}+1}^{k}\mathrm{ad}_{H_{i}}%
dt_{i}=\int_{0}^{\tau}\sum\limits_{i=d_{c}+1}^{k}\mathrm{ad}_{H_{i}}%
|_{t_{j}=t_{j}(\tau^{\prime})}dt_{i}(\tau^{\prime})\equiv-\int_{0}^{\tau}%
F_{k}(\tau^{\prime})d\tau^{\prime}%
\end{equation}
\nopagebreak[4] where $F_{k}$ is an $\mathrm{End}(\mathfrak{g})$-valued
function of the parameter $\tau^{\prime}$.
The path-ordered exponential can be computed using the following formal Magnus
expansion, see e.g.\ \cite{o} and references therein:
\begin{equation}%
\begin{array}
[c]{l}%
\mathcal{P}\exp\left( \displaystyle \int_{0}^{\tau}F_{k}(\tau^{\prime}%
)d\tau^{\prime}\right) =\displaystyle\sum\limits_{s=0}^{\infty}\Omega_{s}%
^{k}\\[5mm]%
\displaystyle\equiv\sum\limits_{s=0}^{\infty}\displaystyle\int
_{0}^{\tau}d\tau_{1}^{\prime}\int_{0}^{\tau_{1}^{\prime}}\!d\tau_{2}^{\prime
}\cdots\int_{0}^{\tau_{s-2}^{\prime}}\!d\tau_{s-1}^{\prime}\int_{0}%
^{\tau_{s-1}^{\prime}}F_{k}(\tau_{1}^{\prime})\cdots F_{k}(\tau_{s}^{\prime
})d\tau_{s}^{\prime}.
\end{array}
\label{path0}%
\end{equation}
To complete the proof it remains to establish the polynomiality of $H_{k+1}$
in $t_{d_{c}+1},\dots, t_{k}$. This is achieved by observing that
$\Omega_{s}^{k}$ for all $k=d_{a}+1,\dots,n$ involve only $\mathrm{ad}_{H_{j}%
}$ with $j=d_{c}+1,\dots, n-1$ but do not involve $\mathrm{ad}_{H_{n}}$;
therefore $\Omega_{s}^{k}$ vanish for sufficiently large $s$ as the
expressions like
\[
\mathrm{ad}_{H_{r_{1}}}\,\mathrm{ad}_{H_{r_{2}}}\cdots\,\mathrm{ad}_{H_{r_{j}%
}}%
\]
will all vanish for sufficiently large $j$ if all $r_{i}$ belong to
$d_{c}+1,\dots,n-1$ as $\mathfrak{g}_{n-1}$ is nilpotent by virtue of
assumption iii).\looseness=-1
\end{proof}
Notice that the non-autonomous Hamiltonian systems (\ref{nhs}) are
conservative by construction, as the $i$-th Hamiltonian $H_{i}$ does not
depend on its own evolution parameter $t_{i}$. Moreover, for $k>j$ the
Frobenius conditions (\ref{zcr}) read\looseness=-1
\begin{equation}
\frac{\partial H_{k}}{\partial t_{j}}-\{H_{j},H_{k}\}=\frac{\partial H_{k}%
}{\partial t_{j}}+\{H_{k},H_{j}\}=\left( \frac{\partial}{\partial t_{j}%
}+L_{Y_{j}}\right) H_{k}=0,\qquad k>j \label{Li}%
\end{equation}
where $L_{Y_{j}}$ is the Lie derivative along the vector field $Y_{j}$, so all
$H_{k}$ with $k>j$ are time-dependent integrals of motion for the $j$-th flow.
\begin{remark}
Note that by the very construction of the Hamiltonians $H_{i}$ the vector
fields $Y_{i}=\pi dH_{i}$ span the same distribution $\mathcal{D}$ as
$X_{i}=\pi dh_{i}$, as required in part 1) of Problem \ref{problem}.\looseness=-1
\end{remark}
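As a minimal illustration of Theorem~\ref{main} (an example we add for the reader's convenience), let $\mathfrak{g}$ be the three-dimensional Heisenberg algebra spanned by $h_{1},h_{2},h_{3}$ with the only nonzero bracket $\{h_{2},h_{3}\}=h_{1}$. Then $d_{c}=1$, $\mathfrak{g}_{a}=\mathrm{span}\{h_{1},h_{2}\}$, $d_{a}=2$, and the conditions i)--iii) hold, so formula (\ref{a}) yields
\[
H_{1}=h_{1},\qquad H_{2}=h_{2},\qquad H_{3}=\exp\left(-t_{2}\,\mathrm{ad}_{h_{2}}\right)h_{3}=h_{3}+t_{2}h_{1},
\]
and one checks directly that $\partial H_{3}/\partial t_{2}-\partial H_{2}/\partial t_{3}+\{H_{3},H_{2}\}=h_{1}+0-h_{1}=0$, i.e., (\ref{zcr}) is satisfied.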
\section{Non-autonomous deformations\\ of quasi-St\"{a}ckel Hamiltonians}
In this section we apply Theorem \ref{main} to quasi-St\"{a}ckel systems
constructed in \cite{mb}; cf.\ also e.g.\ \cite{b2005,bls2007,mb} for general
background on St\"ackel and quasi-St\"ackel systems. In this particular
setting we will be able to compute the expressions in (\ref{a}) and
(\ref{aa}), thus answering the question 3 of Problem \ref{problem}.
Fix an $n\in\mathbb{N}$, $n\geq2$. Consider a $2n$-dimensional Poisson
manifold $M$ and a particular set $(\lambda_{i},\mu_{i})$ of local Darboux
(canonical) coordinates on $M$, so that $\{\mu_{i},\lambda_{j}\}=\delta_{ij},$
$i,j=1,\dots,n$ while all $\{\lambda_{i},\lambda_{j}\}$ and $\{\mu_{i},\mu
_{j}\}$ are zero. Fix also $m\in\{0,\dots,n+1\}$ and consider the following
system of linear quasi-separation relations \cite{mb} (cf.\ also \cite{m})
\begin{equation}
\sum_{j=1}^{n}\lambda_{i}^{n-j}h_{j}=\frac{1}{2}\lambda_{i}^{m}\mu_{i}%
^{2}+\sum_{k=1}^{n}v_{ik}(\lambda)\mu_{k},\qquad i=1,\dots,n,\label{sep}%
\end{equation}
where
\[
\sum_{k=1}^{n}v_{ik}(\lambda)\mu_{k}=%
\begin{cases}
\displaystyle-\sum_{k\neq i}\frac{\mu_{i}-\mu_{k}}{\lambda_{i}-\lambda_{k}}, &
\text{for }m=0,\\[5mm]%
\displaystyle-\lambda_{i}^{m-1}\sum_{k\neq i}\frac{\lambda_{i}\mu_{i}%
-\lambda_{k}\mu_{k}}{\lambda_{i}-\lambda_{k}}+(m-1)\lambda_{i}^{m-1}\mu_{i}, &
\text{for }m=1,\dots,n,\\[5mm]%
\displaystyle-\lambda_{i}^{n-1}\sum_{k\neq i}\frac{\lambda_{i}^{2}\mu
_{i}-\lambda_{k}^{2}\mu_{k}}{\lambda_{i}-\lambda_{k}}+(n-1)\lambda_{i}^{n}%
\mu_{i}, & \text{for}\quad m=n+1.
\end{cases}
\]
Solving (\ref{sep}) with respect to $h_{j}$ yields, for each choice of
$m\in\{0,\dots,n+1\}$, $n$ Hamiltonians on $M$:
\[
h_{1}=E_{1}=\frac{1}{2}\mu^{T}G\mu,\quad h_{i}=E_{i}+W_{i},\quad i=2,\ldots,n
\]
where
\[
E_{i}=\frac{1}{2}\mu^{T}A_{i}\mu,\qquad W_{i}=\mu^{T}Z_{i},\quad i=2,\ldots,n,
\]
are generated by the first and the second term, respectively, on the right-hand side of
(\ref{sep}) (we chose to omit the index $m$ in the above notation to simplify
writing). Here
\[
G=\text{diag}\left( \frac{\lambda_{1}^{m}}{\Delta_{1}},\ldots,\frac
{\lambda_{n}^{m}}{\Delta_{n}}\right) ,\quad\Delta_{i}=\prod\limits_{j\neq
i}(\lambda_{i}-\lambda_{j})
\]
can be interpreted as a contravariant metric tensor on an $n$-dimensional
manifold $Q$, $E_{1}$ can then be interpreted as the geodesic Hamiltonian of a
free particle in the pseudo-Riemannian configuration space $(Q,g=G^{-1})$ so
that $M=T^{\ast}Q$ \cite{b2005,bls2007}. Next, $A_{r}=K_{r}G$, where $K_{r}$
are $(1,1)$-Killing tensors for metric $g$ with any chosen $m\in
\{0,\dots,n+1\}$, and are given by
\[
K_{i}=(-1)^{i+1}\text{diag}\left( \frac{\partial\sigma_{i}}{\partial
\lambda_{1}},\dots,\frac{\partial\sigma_{i}}{\partial\lambda_{n}}\right)
\quad i=1,\dots,n
\]
where $\sigma_{r}(\lambda)$ are elementary symmetric polynomials in $\lambda$.
Moreover, $E_{i}$ are integrals of motion for $E_{1}$ as in fact they all
pairwise commute: $\{E_{i},E_{j}\}\nolinebreak=0,\ i,j=1,\ldots,n$. The vector
fields $Z_{i}$ are in this setting the Killing vectors of the metric $g$ for
any $m\in\{0,\dots,n+1\}$ as $L_{Z_{k}}g=0$, and they take the form \cite{mb}
\[
\left( Z_{i}\right) ^{\alpha}=\sum\limits_{k=1}^{i-1}(-1)^{i-k}%
\,k\,\sigma_{i-k-1}\frac{\lambda_{\alpha}^{m+k-1}}{\Delta_{\alpha}},\qquad
i\in I_{1}^{m}%
\]
and
\[
\left( Z_{i}\right) ^{\alpha}=\sum\limits_{k=1}^{n-i+1}(-1)^{i+k}%
\,k\,\sigma_{i+k-1}\frac{\lambda_{\alpha}^{m-k-1}}{\Delta_{\alpha}},\qquad
i\in I_{2}^{m}%
\]
where
\[
I_{1}^{m}=\{2,\dots,n-m+1\},\qquad I_{2}^{m}=\{n-m+2,\ldots,n\},\qquad
m=0,\ldots,n+1.
\]
Note that the above notation implies that%
\[
I_{1}^{0}=\{2,\dots,n\}\text{, }I_{1}^{n}=I_{1}^{n+1}=\emptyset\text{,
\ }I_{2}^{0}=I_{2}^{1}=\emptyset.
\]
It was demonstrated in \cite{mb} that the Hamiltonians $h_{i}$ constitute a
Lie algebra $\mathfrak{g}=\mathrm{span}\{h_{i}\in C^{\infty}(M)\colon
i=1,\ldots,n\}$ with the following commutation relations:
\[
\{h_{1},h_{i}\}=0,\quad i=2,\dots,n,
\]
and
\begin{equation}
\{h_{i},h_{j}\}=%
\begin{cases}
0, & \text{for }i\in I_{1}^{m}\text{ and }j\in I_{2}^{m},\\
(j-i)h_{i+j-(n-m+2)}, & \text{for }i,j\in I_{1}^{m},\\
-(j-i)h_{i+j-(n-m+2)}, & \text{for }i,j\in I_{2}^{m},
\end{cases}
\label{str}%
\end{equation}
where $i,j=2,\ldots,n$. We use here the convention that $h_{i}=0$ for $i\leq0$
or $i>n$.
\begin{remark}
The Lie algebra $\mathfrak{g}$ splits into a direct sum of Lie subalgebras
$\mathfrak{g=g}_{I_{1}}\mathfrak{\oplus g}_{I_{2}}$ where
\[
\mathfrak{g}_{I_{1}}=\mathrm{span}\{h_{1}\}\mathfrak{\oplus}\mathrm{span}%
\{h_{r}\colon r\in I_{1}^{m}\}\quad\mbox{and}\quad\mathfrak{g}_{I_{2}%
}=\mathrm{span}\{h_{r}\colon r\in I_{2}^{m}\}.
\]
\end{remark}
In order to successfully apply Theorem \ref{main} and formulas (\ref{a}) and
(\ref{aa}) we will now focus on the cases $m=0,1,$ when $\mathfrak{g=g}%
_{I_{1}}$, since $I_{2}^{m}$ is then empty. Note also that for these cases the
Lie algebra $\mathfrak{g}$ is nilpotent. Then (\ref{str}) reads%
\[
\mathrm{ad}_{h_{s_{1}}}h_{i}=\{h_{i},h_{s_{1}}\}=(i,s_{1})h_{i+s_{1}%
-(n-m+2)}\text{ with \thinspace}(i,s_{1})=s_{1}-i
\]
from which it immediately follows that for any $k\in\mathbb{N}$
\begin{equation}
\mathrm{ad}_{h_{s_{k}}}\cdots\mathrm{ad}_{h_{s_{1}}}h_{i}=(i,s_{1},\dots
,s_{k})h_{i+s_{1}+\ldots+s_{k}-k(n-m+2)} \label{1}%
\end{equation}
where
\begin{equation}
(i,s_{1},\dots,s_{k})=(i,s_{1},\dots,s_{k-1})[s_{k}-s_{k-1}-\cdots
-s_{1}-i+(k-1)(n-m+2)]. \label{2}%
\end{equation}
Note that in (\ref{1}) we put $h_{s}=0$ for $s<1$.
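For example (a check we add for concreteness): for $n=6$, $m=0$ formula (\ref{1}) with $k=1$ gives
\[
\mathrm{ad}_{h_{3}}h_{6}=(6,3)\,h_{6+3-8}=(3-6)h_{1}=-3h_{1},
\]
in agreement with $\{h_{6},h_{3}\}=-3h_{1}$ obtained from (\ref{str}).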
\begin{theorem}
\label{main2}Suppose that $m=0$ or $m=1$ (then $I_{2}^{m}$ is empty while
$d_{c}=2$ for $m=0$ and $d_{c}=1$ for $m=1$). Then the conditions of
Theorem~\ref{main} are satisfied and the polynomial-in-times deformation of
$\mathfrak{g}$ given by formulas (\ref{a}) and (\ref{aa}) can be written in
the form%
\begin{equation}
\hspace*{-5mm}%
\begin{array}
[c]{rcl}%
H_{i} & = & h_{i}-\displaystyle\!\!\!\sum_{r_{1}=d_{c}+1}^{i-1}\left(
\mathrm{ad}_{h_{r_{1}}}h_{i}\right) t_{r_{1}}+\!\!\!\sum_{r_{1}=d_{c}%
+1}^{i-1}\sum_{r_{2}=r_{1}}^{i-1}\alpha_{ir_{1}r_{2}}\left( \mathrm{ad}%
_{h_{r_{2}}}\mathrm{ad}_{h_{r_{1}}}h_{i}\right) t_{r_{1}}t_{r_{2}}\\[7mm]
& & -\displaystyle\sum_{r_{1}=d_{c}+1}^{i-1}\sum_{r_{2}=r_{1}}^{i-1}%
\sum_{r_{3}=r_{2}}^{i-1}\alpha_{ir_{1}r_{2}r_{3}}\left( \mathrm{ad}%
_{h_{r_{3}}}\mathrm{ad}_{h_{r_{2}}}\mathrm{ad}_{h_{r_{1}}}h_{i}\right)
t_{r_{1}}t_{r_{2}}t_{r_{3}}+\cdots,
\end{array}
\label{3}%
\end{equation}
and the real constants $\alpha_{ir_{1}\cdots r_{k}}$ can be uniquely
determined from the Frobenius integrability condition (\ref{zcr}).
\end{theorem}
\begin{proof}
The formula (\ref{1}) implies that the center $\mathfrak{g}_{c}$ for $m=0$ is
two-dimensional and given by $\mathfrak{g}_{c}=\mathrm{span}\left\{
h_{1},h_{2}\right\} $ while for $m=1$ the center $\mathfrak{g}_{c}$ is
one-dimensional and spanned by $h_{1}$ only. The same formula implies also
that in both cases $\{h_{i},h_{j}\}\in\mathrm{span}(h_{1},\dots,h_{\min
(i,j)-1})\,$\ for all $i,j=1,\dots,n$ so the conditions i)--iii) of
Theorem~\ref{main} are satisfied. The explicit form (\ref{3}) of deformations
(\ref{a}) and (\ref{aa}) is obtained by a direct computation using the
formulas given in the proof of Theorem~\ref{main} and taking a straight line
for $\gamma$.\looseness=-1
\end{proof}
Notice that from (\ref{str}) it follows that the dimension $d_{a}$ of the
Abelian subalgebra $\mathfrak{g}_{a}$ of $\mathfrak{g}$ is given by
\begin{equation}
d_{a}=\left[ \frac{n+3-m}{2}\right] ,\quad m=0,1 \label{dimga}%
\end{equation}
so by Theorem \ref{main} the first $d_{a}$ Hamiltonians $h_{i}$ will not
be deformed, and that in (\ref{3}) we have $i=d_{a}+1,\dots,n$.
Theorem \ref{main2} gives us an effective way of calculating the sought-for
deformations, as will be demonstrated in the following examples. Of course,
the highest order of polynomials in $t_{j}$ obtained in this way depends on
$n$.
\begin{example}
Consider the case $n=6$, $m=0$. Then the formulas (\ref{str}) yield
the following matrix of commutators $\{h_{i},h_{j}\}$:
\[
\left(
\begin{array}
[c]{cccccc}%
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 3h_{1}\\
0 & 0 & 0 & 0 & h_{1} & 2h_{2}\\
0 & 0 & 0 & -h_{1} & 0 & h_{3}\\
0 & 0 & -3h_{1} & -2h_{2} & -h_{3} & 0
\end{array}
\right) ,
\]
and clearly $d_{c}=2$ while $d_{a}=4$. The explicit values of the expansion
coefficients $\alpha_{ir_{1}\dots r_{k}}$ can be obtained by plugging
(\ref{3}) into (\ref{zcr}). Having done this we obtain\looseness=-1
\[
H_{i}=h_{i},\quad i=1,\dots,4,\quad H_{5}=h_{5}+h_{1}t_{4},\quad H_{6}%
=h_{6}+3h_{1}t_{3}+2h_{2}t_{4}+h_{3}t_{5}.
\]
\end{example}
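As a quick consistency check (which we add for the reader): for the pair $(i,j)=(6,5)$ condition (\ref{zcr}) reads
\[
\frac{\partial H_{6}}{\partial t_{5}}-\frac{\partial H_{5}}{\partial t_{6}}+\{H_{6},H_{5}\}=h_{3}-0+\{h_{6},h_{5}\}=h_{3}-h_{3}=0,
\]
where all the cross terms vanish because $h_{1},h_{2}$ are central and $\{h_{3},h_{5}\}=0$.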
\begin{example}
For the case $n=6$, $m=1$ the formulas (\ref{str}) yield the following matrix
of commutators $\{h_{i},h_{j}\}$:
\[
\left(
\begin{array}
[c]{cccccc}%
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 4h_{1}\\
0 & 0 & 0 & 0 & 2h_{1} & 3h_{2}\\
0 & 0 & 0 & 0 & h_{2} & 2h_{3}\\
0 & 0 & -2h_{1} & -h_{2} & 0 & h_{4}\\
0 & -4h_{1} & -3h_{2} & -2h_{3} & -h_{4} & 0
\end{array}
\right) ,
\]
so now $d_{c}=1$ while $d_{a}=4$ as in the previous example. Inserting
(\ref{3}) into (\ref{zcr})\ yields%
\begin{align*}
H_{i} & =h_{i},\quad i=1,\dots,4,\quad H_{5}=h_{5}+2h_{1}t_{3}+h_{2}%
t_{4},\quad\\
H_{6} & =h_{6}+4h_{1}t_{2}+3h_{2}t_{3}+2h_{3}t_{4}+h_{4}t_{5}-\frac{1}%
{2}h_{2}t_{5}^{2}.
\end{align*}
\end{example}
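Note (a remark we add) that the quadratic term $-\frac{1}{2}h_{2}t_{5}^{2}$ in $H_{6}$ is forced by (\ref{zcr}): using the commutator table above one finds, for $(i,j)=(6,5)$,
\[
\frac{\partial H_{6}}{\partial t_{5}}=h_{4}-h_{2}t_{5},\qquad
\{H_{6},H_{5}\}=-h_{4}-4h_{1}t_{4}+4h_{1}t_{4}+h_{2}t_{5}=-h_{4}+h_{2}t_{5},
\]
so the two contributions cancel precisely in the presence of this term.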
We now turn to the general study of the cases $m=n,n+1$, where $\mathfrak{g=\mathrm{span}}\left\{
h_{1}\right\} \mathfrak{\oplus g}_{I_{2}}$ (since $I_{1}^{m}$ is empty). The
constants $(i_{1},\dots,i_{s+1})$ in (\ref{2}) are the same as in the
previously considered cases up to the sign, i.e., $(i_{1},\dots,i_{s+1}%
)\rightarrow(-1)^{s}(i_{1},\dots,i_{s+1})$. From (\ref{str}) it follows that%
\begin{align*}
\text{for}\quad m=n: & \quad\mathfrak{g}_{c}=\mathrm{span}\left\{
h_{1}\right\} ,\quad\mathfrak{g}_{a}=\mathrm{span}\left\{ h_{1}%
,h_{n-k+1},\dots,h_{n}\right\} ,\\
\text{for}\quad m=n+1: & \qquad\ \mathfrak{g}_{c}=\mathrm{span}\left\{
h_{1},h_{n}\right\} ,\quad\mathfrak{g}_{a}=\mathrm{span}\left\{
h_{1},h_{n-k+1},\dots,h_{n}\right\} ,
\end{align*}
where $k=\left[ \frac{m}{2}\right] $. Thus, $d_{c}=\dim\mathfrak{g}_{c}=1$
for $m=n$, $d_{c}=\dim\mathfrak{g}_{c}=2$ for $m=n+1$ while $d_{a}%
=\dim\mathfrak{g}_{a}=k+1$ in both cases. If we now rearrange the Hamiltonians
$h_{i}$ so that $h_{1}^{\prime}\equiv h_{1},$ $h_{i}^{\prime}\equiv h_{n-i+2}$
for $i=2,\dots,n$ we observe that in this ordering the assumptions of Theorem
\ref{main} are all satisfied. Actually, for $m=n+1$ the algebra $\mathfrak{g}$
is nilpotent, while for $m=n$ we have $\left\{ h_{n}^{\prime},h_{j}^{\prime
}\right\} \in\mathrm{span}\left\{ h_{1}^{\prime},\dots,h_{j}^{\prime
}\right\} $ which means that $\mathfrak{g}$ is a codimension one extension by
derivation of the nilpotent Lie algebra $\mathfrak{g}_{n-1}$. Thus, we obtain
the following\looseness=-1
\newpage
\begin{corollary}
Suppose that $m=n$ or $m=n+1$ (so $I_{1}^{m}$ is empty). Then the
conditions of Theorem \ref{main} are satisfied and the polynomial-in-times
deformation of $\mathfrak{g}$ given by formulas (\ref{a}) and (\ref{aa}), for
the original ordering of the Hamiltonians $h_{i}$, can be written in the form
\begin{equation}
\hspace*{-4mm}%
\begin{array}
[c]{rcl}%
H_{i} & = & h_{i}-\displaystyle\sum_{r_{1}=i+1}^{n}\left( \mathrm{ad}%
_{h_{r_{1}}}h_{i}\right) t_{r_{1}}+\sum_{r_{1}=i+1}^{n}\sum_{r_{2}=r_{1}}%
^{n}\alpha_{ir_{1}r_{2}}\left( \mathrm{ad}_{h_{r_{2}}}\mathrm{ad}_{h_{r_{1}}%
}h_{i}\right) t_{r_{1}}t_{r_{2}}\\[7mm]
& & -\displaystyle\sum_{r_{1}=i+1}^{n}\sum_{r_{2}=r_{1}}^{n}\sum_{r_{3}%
=r_{2}}^{n}\alpha_{ir_{1}r_{2}r_{3}}\left( \mathrm{ad}_{h_{r_{3}}}%
\mathrm{ad}_{h_{r_{2}}}\mathrm{ad}_{h_{r_{1}}}h_{i}\right) t_{r_{1}}t_{r_{2}%
}t_{r_{3}}+\cdots
\end{array}
\label{8}%
\end{equation}
where the real constants $\alpha_{ir_{1}\cdots r_{k}}$ can be uniquely
determined from the Frobenius integrability condition (\ref{zcr}).\looseness=-1
\end{corollary}
Note that $d_{c}$ does not enter the above formula; in the case of $m=n+1$
when $d_{c}=2$ the sums in (\ref{8}) end already at $n-1$ since then
$\mathrm{ad}_{h_{n}}=0$ as $h_{n}$ is part of the center of the algebra. As
before, the highest order of $t$-polynomials depends on $n$. By analogy with
the previous case, the $\left[ \frac{m}{2}\right] +1$ Hamiltonians spanning
the Abelian subalgebra $\mathfrak{g}_{a}$, that is, $h_{1}$ and $h_{n-k+1}%
,\dots,h_{n}$ with $k=\left[ \frac{m}{2}\right] $, are not deformed.
\begin{example}
Consider the case $n=6$, $m=n$. The matrix of commutators $\{h_{i},h_{j}\}$
(\ref{str}) reads%
\[
\left(
\begin{array}
[c]{cccccc}%
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & -h_{3} & -2h_{4} & -3h_{5} & -4h_{6}\\
0 & h_{3} & 0 & -h_{5} & -2h_{6} & 0\\
0 & 2h_{4} & h_{5} & 0 & 0 & 0\\
0 & 3h_{5} & 2h_{6} & 0 & 0 & 0\\
0 & 4h_{6} & 0 & 0 & 0 & 0
\end{array}
\right)
\]
and since $k=\left[ \frac{m}{2}\right] =3$, the Hamiltonians $h_{i}$
with $i=1,4,5,6$ span an Abelian subalgebra $\mathfrak{g}_{a}$ and are thus
not deformed while $H_{2}$ and $H_{3}$ are found by inserting (\ref{8}) into
(\ref{zcr}) in order to determine the constants $\alpha_{ir_{1}\ldots r_{k}}
$. The result is
\begin{align*}
H_{i} & =h_{i},\quad i=1,4,5,6,\\
H_{2} & =h_{2}+h_{3}t_{3}+2h_{4}t_{4}+3h_{5}t_{5}+4h_{6}t_{6}+h_{5}%
t_{3}t_{4}+2h_{6}t_{3}t_{5},\\
H_{3} & =h_{3}+h_{5}t_{4}+2h_{6}t_{5}.
\end{align*}
\end{example}
\begin{example}
Consider now the case $n=6,~m=n+1$. The matrix of commutators $\{h_{i}%
,h_{j}\}$ now reads
\[
\left(
\begin{array}
[c]{cccccc}%
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & -h_{4} & -2h_{5} & -3h_{6} & 0\\
0 & h_{4} & 0 & -h_{6} & 0 & 0\\
0 & 2h_{5} & h_{6} & 0 & 0 & 0\\
0 & 3h_{6} & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0
\end{array}
\right)
\]
Again, $h_{i}$ are not deformed for $i=1,4,5,6$ while $H_{2}$ and $H_{3}$ are
found as usual by inserting (\ref{8}) into (\ref{zcr}). The result is%
\[
H_{i}=h_{i},\ i=1,4,5,6,\ H_{2}=h_{2}+h_{4}t_{3}+2h_{5}t_{4}+3h_{6}%
t_{5}-\displaystyle\frac{1}{2}h_{6}t_{3}^{2},\ H_{3}=h_{3}+h_{6}t_{4}.
\]
\end{example}
Finally, let us again return to the general theory and find integrable deformations of our algebra
$\mathfrak{g}$ in the case $1<m<n$. In this case both components
$\mathfrak{g}_{I_{1}}$ and $\mathfrak{g}_{I_{2}}$ in the splitting
$\mathfrak{g=g}_{I_{1}}\mathfrak{\oplus g}_{I_{2}}$ are nontrivial, with
$\dim\mathfrak{g}_{I_{1}}=n-m+1$ and $\dim\mathfrak{g}_{I_{2}}=m-1$. Each of
the components has an Abelian subalgebra of its own. We denote the Abelian
subalgebras of $\mathfrak{g}_{I_{1}}$ and $\mathfrak{g}_{I_{2}}$ by
$\mathfrak{g}_{a_{1}}$ and $\mathfrak{g}_{a_{2}}$, respectively, with (compare this with (\ref{dimga}))
\begin{align*}
\dim\mathfrak{g}_{a_{1}} & =\left[ \frac{n+3-m}{2}\right] \equiv d_{a_{1}
}\\
\dim\mathfrak{g}_{a_{2}} & =\left[ \frac{m}{2}\right] \equiv d_{a_{2}},
\end{align*}
so
\[
\dim\mathfrak{g}_{a}=\dim\mathfrak{g}_{a_{1}}+\dim\mathfrak{g}_{a_{2}};
\]
$\mathfrak{g}_{a_{1}}$ and $\mathfrak{g}_{a_{2}}$ are given by
\begin{align*}
\mathfrak{g}_{a_{1}} & =\mathrm{span}\left\{ h_{1},h_{2},\ldots
,h_{d_{a_{1}}}\right\} \text{ }\\
\mathfrak{g}_{a_{2}} & =\mathrm{span}\left\{ h_{n-d_{a_{2}}+1},\ldots
,h_{n}\right\}
\end{align*}
so
\[
\mathfrak{g}_{a}=\mathrm{span}\left\{ h_{1},h_{2},\ldots,h_{d_{a_{1}}%
},h_{n-d_{a_{2}}+1},\ldots,h_{n}\right\} .
\]
Therefore, the Hamiltonians $h_{d_{a_{1}}+1},\dots,h_{n-m+1}$ belonging to
$\mathfrak{g}_{I_{1}}$ should then be deformed by formulas (\ref{3}) with
$d_{c}=1$ (for $m=n+1$ the center is two-dimensional, spanned by
\thinspace$h_{1}$ and $h_{n}$, but $h_{n}$ does not belong to $\mathfrak{g}%
_{I_{1}}$) while the Hamiltonians $h_{1},\dots,h_{d_{a_{1}}}$ remain
unchanged. Likewise, the Hamiltonians $h_{n-m+2},\dots,h_{n-d_{a_{2}}}$ should
be deformed according to (\ref{8}) while the last $d_{a_{2}}$ Hamiltonians
remain unchanged.\looseness=-1
\begin{example}
Consider the case $n=11$, $m=6$. The matrix of commutators $\left\{
h_{i},h_{j}\right\} $ is
\[
\left(
\begin{array}
[c]{ccccccccccc}%
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 4h_{1} & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 2h_{1} & 3h_{2} & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & h_{2} & 2h_{3} & 0 & 0 & 0 & 0 & 0\\
0 & 0 & -2h_{1} & -h_{2} & 0 & h_{4} & 0 & 0 & 0 & 0 & 0\\
0 & -4h_{1} & -3h_{2} & -2h_{3} & -h_{4} & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & -h_{8} & -2h_{9} & -3h_{10} & -4h_{11}\\
0 & 0 & 0 & 0 & 0 & 0 & h_{8} & 0 & -h_{10} & -2h_{11} & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 2h_{9} & h_{10} & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 3h_{10} & 2h_{11} & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 4h_{11} & 0 & 0 & 0 & 0
\end{array}
\right)
\]
If we perform the deformation on each subalgebra separately, as described
above, we obtain
\begin{align*}
H_{i} & =h_{i},\quad i=1,\dots,4,9,\dots,11,\quad H_{5}=h_{5}+2h_{1}t_{3}+h_{2}t_{4},\\
H_{6} & =h_{6}+4h_{1}t_{2}+3h_{2}t_{3}+2h_{3}t_{4}+h_{4}t_{5}-\frac{1}%
{2}h_{2}t_{5}^{2},\\
H_{7} & =h_{7}+h_{8}t_{8}+2h_{9}t_{9}+3h_{10}t_{10}+4h_{11}t_{11}%
+h_{10}t_{8}t_{9}+2h_{11}t_{8}t_{10},\\
H_{8} & =h_{8}+h_{10}t_{9}+2h_{11}t_{10}.
\end{align*}
\end{example}
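Here (a check we add) the $\mathfrak{g}_{I_{1}}$-block $\mathrm{span}\{h_{1},\dots,h_{6}\}$ has exactly the same commutation relations as in the case $n=6$, $m=1$ considered above, so $H_{5}$ and $H_{6}$ coincide with the deformations found there; for instance, for $(i,j)=(5,3)$ one verifies
\[
\frac{\partial H_{5}}{\partial t_{3}}-\frac{\partial H_{3}}{\partial t_{5}}+\{H_{5},H_{3}\}=2h_{1}-0+\{h_{5},h_{3}\}=2h_{1}-2h_{1}=0.
\]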
\section*{Acknowledgments}
AS would like to thank R. Popovych and P. Zusmanovich for helpful comments.
The research of AS and MB, as well as the visit of MB to Opava in November
2017, were supported in part by the Grant Agency of the Czech Republic (GA
\v{C}R) under grant P201/12/G028. The research of AS was also supported in
part by the Ministry of Education, Youth and Sports of the Czech Republic
(M\v{S}MT \v{C}R) under RVO funding for I\v{C}47813059.
|
1,116,691,500,688 | arxiv | \section{Introduction}
\subsection{ Regular rank rings}
In this paper all rings are considered unital.
Regular rings were introduced by John
von Neumann; these are the rings in which any principal right ideal is generated
by an idempotent (see \cite{goodearl}).
A $*$-regular ring $\cR$ is a regular ring with an involution such that
$a^*a=0$ implies that $a=0$. In a $*$-regular ring any principal right ideal is
generated by a unique projection \cite{kaplansky}.
A $*$-regular ring $\cR$ is {\it proper} if
$\sum^n_{i=1} a_ia_i^*=0$ implies that all of the $a_i$'s are equal to $0$.
Note that $\mbox{Mat}_{d\times d}(\cR)$ is regular if and only if $\cR$ is regular,
nevertheless, for a $*$-regular ring $\cR$, $\mbox{Mat}_{d\times d}(\cR)$ is
$\star$-regular if and only if $\cR$ is proper \cite{berberian1}. Since
$\mbox{Mat}_{k\times k}(\mbox{Mat}_{d\times d}(\cR))=\mbox{Mat}_{kd\times kd}(\cR)$,
$\cR$ is proper if and
only if all the matrix rings over $\cR$ are proper.
\noindent
A rank function on a regular ring $\cR$ is a function
$\mbox{rk}:\cR\to\mathbb{R}$ satisfying the
following conditions.
\begin{enumerate}
\item $0\leq \mbox{rk}(a) \leq 1$.
\item $\mbox{rk}(a)=0$ if and only if $a=0$.
\item $\mbox{rk}(a+b)\leq\mbox{rk}(a)+\mbox{rk}(b)$.
\item $\mbox{rk}(ab)\leq \min(\mbox{rk}(a),\mbox{rk}(b))$.
\item If $e,f$ are orthogonal idempotents then $\mbox{rk}(e+f)=\mbox{rk}(e)+\mbox{rk}(f)$.
\end{enumerate}
The most important examples of regular rank rings are matrix rings over
division rings. In this case, the values of the rank are always rational.
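Concretely (with the standard normalization), for $a\in\mbox{Mat}_{k\times k}(D)$, where $D$ is a division ring, one takes $\mbox{rk}(a)=\mbox{rank}(a)/k$ with $\mbox{rank}$ the usual matrix rank; properties (1)--(5) above are then the familiar rank inequalities.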
The rank defines a metric on the regular ring by $d(x,y)=\mbox{rk}(x-y)$. The
completion of $\cR$ with respect to this metric is a regular rank ring as well.
Note that for the completion
of ultramatricial algebras (see Section \ref{ultra}) the values of
the rank can be any real number in
between zero and one \cite{goodearl}. Let $\mathcal{N}$ be a finite von Neumann
algebra; then its Ore localization with respect to its non-zero divisors
$U(\mathcal{N})$ is a $\star$-regular ring. The
elements of this ring are called affiliated operators (see \cite{reich}).
The rank of an
affiliated operator is the trace of the idempotent that
generates the right ideal generated by the operator. Note that if $A\in
U(\mathcal{N})$, then
\begin{equation}
\label{rankformula}
\mbox{rk}(A)=1-\lim_{\lambda\to 0^+} tr_{\mathcal{N}} (E_\lambda)\,,
\end{equation}
where $\{E_\lambda\}$ is the spectral resolution of the unbounded operator
$A^*A$, that is, $A^*A=\int^\infty_0 \lambda\, dE_\lambda$. This shows that if $i:\mathcal{N}\to\mathcal{M}$ is
a trace-preserving homomorphism between finite von Neumann algebras, then
its Ore-extension $\tilde{i}:U(\mathcal{N})\to U(\mathcal{M})$ is a rank preserving
$\star$-homomorphism.
\noindent
Note that $U(\mathcal{N})$ is
always proper (see Section \ref{linsection}).
\noindent
If $\cR$ is a regular rank ring then there is a unique natural extension of the
rank to a matrix rank on $\mbox{Mat}_{k\times k}(\cR)$ \cite{halperin}.
Note that a matrix rank
$\mbox{rkm}$ has the same properties as the rank $\mbox{rk}$ except that
$0\leq \mbox{rkm}(M)\leq k$.
\subsection{The Connes Embedding Problem}
Let $\nu=\{d_1< d_2 <\dots\}$ be an infinite sequence of positive integers.
Then one can consider the ultraproduct of the matrix algebras
$\{\mbox{Mat}_{d_i\times d_i}(\C)\}^\infty_{i=1}$ as tracial
algebras in the following way (see \cite{pestov}).
Let $\omega$ be a nonprincipal ultrafilter on the natural numbers and let
$\lim_\omega$ be the corresponding ultralimit.
First, consider the algebra of bounded elements
$$\mathcal{B}=\{(a_1,a_2,\dots)\in \prod_{i=1}^\infty \mbox{Mat}_{d_i\times d_i}(\C)
\,\mid\, \sup \|a_i\|<\infty\}\,.$$
Now let $\mathcal{I}\lhd\mathcal{B}$ be the ideal of elements $\{a_n\}^\infty_{n=1}$
such that $\lim_\omega\frac{tr(a_n^*a_n)}{d_n}=0$. Then
$\mathcal{B}/\mathcal{I}=\mathcal{M}_\nu$ is a type $II_1$-von Neumann factor with trace
defined in the following way:
$$Tr_\omega[\{a_n\}^\infty_{n=1}]=\lim_\omega\frac{tr(a_n)}{d_n}\,.$$
The following conjecture is generally referred to as the Connes Embedding
Problem. Is it true that any type-$II_1$ von Neumann algebra with a separable
predual
has a trace-preserving embedding into some $\mathcal{M}_\nu$? See the survey of
Pestov \cite{pestov} for further details.
\noindent
There is a purely algebraic version of the Connes Embedding Problem first
considered in \cite{elekszabo}. Namely, we can consider the ultraproduct of
the matrix rings $\{\mbox{Mat}_{d_i\times d_i}(\C)\}$ as {\it rank algebras}.
Let $\mathcal{J}\lhd \prod_{i=1}^{\infty} \mbox{Mat}_{d_i\times d_i}(\C)$ be the
following ideal,
$$\mathcal{J}=\{\,\{a_n\}^\infty_{n=1} \,\mid\, \lim_\omega\frac
{\mbox{rank}(a_n)}{d_n}=0\}\,.$$
Then $\prod_{i=1}^{\infty} \mbox{Mat}_{d_i\times d_i}(\C)/\mathcal{J}=\mathcal{M}_\nu^{alg}$
is a simple
complete $\star$-regular rank ring. One can ask, of course, whether any
countable
dimensional regular rank ring embeds into some $\mathcal{M}_\nu^{alg}$.
\subsection{L\"uck's Approximation Theorem}
Let $\Gamma$ be a finitely generated residually finite group and
let \\ $\Gamma=N_0\supset N_1\supset N_2\dots, \cap^\infty_{k=1}N_k=\{1\}$
be finite index normal subgroups. Let $\Delta\in \mbox{Mat}_{d\times d}(\Z\Gamma)$
be a $d\times d$-matrix over the integer group algebra $\Z\Gamma$.
Denote by $\mathcal{N}(\Gamma)$ the von Neumann algebra of $\Gamma$. Note that $\Delta$ acts
on $l^2(\Gamma)^d$ as a bounded operator. Then one can
define $\dim_\Gamma Ker(\Delta)$, the von Neumann dimension of the kernel of
$\Delta$.
Let $\pi_k:\C\Gamma\to \C(\Gamma/N_k)$ be the natural projection.
That is, $\pi_k(\Delta)\in \mbox{Mat}_{d\times d}(\C(\Gamma/N_k))$ is a
finite dimensional linear transformation. According to L\"uck's Approximation
Theorem (see \cite{lueck}) \begin{equation} \label{appro}
\lim_{k\to\infty}\frac{\dim_\C Ker (\pi_k(\Delta))}{|\Gamma:N_k|}=\dim_\Gamma
Ker (\Delta) \end{equation}
It is conjectured that (\ref{appro}) holds for any $\Delta\in \mbox{Mat}_{d\times
d}(\C\Gamma)$ as well. The conjecture was confirmed for amenable groups
$\Gamma$ in \cite{elekappro}.
\subsection{Regular Closures}
Linnell and Schick \cite{linnellschick} proved the following theorem
(see Section \ref{linsection}).
Let $\cR$ be a proper $\star$-regular ring. Then for any subset $T\subseteq
\cR$ there exists a smallest $\star$-regular subring containing $T$. We call
this ring $R(T,\cR)$ the regular closure of $T$ in $\cR$.
Let $\Gamma$ be a countable group; then one can consider the natural embedding
of its complex group algebra into its von Neumann algebra
$\C(\Gamma)\to\mathcal{N}(\Gamma)$. Let $\ug$ be the Ore localization of $\mathcal{N}(\Gamma)$.
Then $\ug$ is a proper $\star$-regular ring (see \cite{berberian2}).
Therefore one can
consider the {\it analytic regular closure} $R(\C(\Gamma),\ug)=R(\Gamma)$.
\noindent
Now let $\Gamma=N_0\rhd N_1\rhd\dots, \cap^\infty_{i=1} N_i=\{1\}$ be normal
subgroups of a residually finite group. Let $\pi_i:\C\Gamma\to \C(\Gamma/N_i)$
be the natural projection as in the previous subsection and let
$s_i:\C(\Gamma/N_i)\to \mbox{Mat}_{\Gamma/N_i\times \Gamma/N_i}(\C)$ be the natural
representation by convolutions.
Define $r_i=s_i\circ\pi_i:\C\Gamma\to
\mbox{Mat}_{\Gamma/N_i\times \Gamma/N_i}(\C)\,.$ Then we have an injective (see
\cite{elekszabo}) $\star$-homomorphism $r:\C\Gamma\to \mathcal{M}^{alg}_\nu$, where
$\nu=\{|\Gamma/N_1|, |\Gamma/N_2|,\dots\}$.
Therefore we can consider the {\it algebraic regular closure}
$R(\C\Gamma,\mathcal{M}^{alg}_\nu)$ for any such normal chain in a residually finite group.
The main result of this paper is the following theorem.
\begin{theorem} \label{main} Let $\Gamma$ be a finitely generated amenable
group. Then there is a rank preserving $\star$-homomorphism
$$j:R(\Gamma)\to R(\C\Gamma,\mathcal{M}^{alg}_\nu)$$
which is the identity map restricted on $\C\Gamma$. \end{theorem}
This theorem can be viewed as a structural generalization of L\"uck's
Approximation Theorem for amenable groups.
Indeed, let $\Delta\in \mbox{Mat}_{k\times k}(\C\Gamma)$; then the
approximation theorem is equivalent to the fact that
$$\mbox{rkm}_1(\Delta)=\mbox{rkm}_2(\Delta)\,,$$
where $\mbox{rkm}_1$ resp. $\mbox{rkm}_2$ are the matrix ranks on $\mbox{Mat}_{k\times
k}(U(\Gamma))$
resp. on $\mbox{Mat}_{k\times k}(\mathcal{M}^{alg}_\nu)$. However, both $\mbox{rkm}_1(\Delta)$ and
$\mbox{rkm}_2(\Delta)$ are equal to the matrix rank of $\Delta$ in
$\mbox{Mat}_{k\times k} (R(\Gamma))$. Actually, we prove a generalization of
Theorem \ref{main}, where we consider algebraic closures associated to
arbitrary sofic representations (see Section \ref{sofic}) of
the group $\Gamma$.
We shall also prove the following theorem. \begin{theorem}
\label{tetel2}
For any finitely generated amenable group $\Gamma$ and coefficient field $K$,
the group algebra $K\Gamma$ embeds into the rank completion of an
ultramatricial algebra. \end{theorem}
\section{von Neumann regular closures}
\label{linsection}
In this section, we review some results of Linnell and Schick \cite{linnell},
\cite{linnellschick} about the von Neumann regular closures in proper
$\star$-regular rings.
The starting points of Linnell's paper are the following two observations
about finite von Neumann algebras already mentioned in the introduction.
\begin{enumerate}
\item $U(\mathcal{N})$ is a
$\star$-regular ring, that is any right ideal is generated by a single
projection.
\item If $\alpha,\beta\in U(\mathcal{N})$ and $\alpha\alpha^*+\beta\beta^*=0$ then
$\alpha=\beta=0\,.$
\end{enumerate}
Although Linnell and Schick consider only group von Neumann algebras, all
they use are the two properties above.
The following result is a strengthening of the second observation.
\begin{propo}[Lemma 2. \cite{linnell}]
If $\alpha,\beta\in U(\mathcal{N})$ then $(\alpha\alpha^*+\beta\beta^*)U(\mathcal{N})\supseteq
\alpha U(\mathcal{N})$.
\end{propo}
Using a simple induction one also has the following proposition.
\begin{propo} [Lemma 2.5 \cite{linnellschick}]
If $\alpha_1, \alpha_2,\dots, \alpha_n\in U(\mathcal{N})$ then
$$\sum^n_{i=1} \alpha_i\alpha^*_i U(\mathcal{N})\supseteq \alpha_1 U(\mathcal{N})\,.$$
\end{propo}
This leads to the crucial proposition about the existence of the von Neumann
regular closures.
\begin{propo}[Proposition 3.1 \cite{linnellschick}] \label{proplins}
Let $\{R_i\,\mid\, i\in I\}$ be a collection of $\star$-regular
subrings of $U(\mathcal{N})$. Then $\cap_{i\in I} R_i$ is also a $\star$-regular
subring of $U(\mathcal{N})$.
\end{propo}
We also need to show that the proposition above holds for $\mathcal{M}^{alg}_\mu$
as well.
\begin{propo}
Let $\{R_i\,\mid\, i\in I\}$ be a collection of $\star$-regular
subrings of $\mathcal{M}^{alg}_\mu$. Then $\cap_{i\in I} R_i$ is also a $\star$-regular
subring of $\mathcal{M}^{alg}_\mu$.
\end{propo}
\proof
Since $\mathcal{M}^{alg}_\mu$ is a $\star$-regular ring we only need to prove
that if $\alpha\alpha^*+\beta\beta^*=0$ in $\mathcal{M}^{alg}_\mu$, then both
$\alpha$ and $\beta$ are equal to $0$. Then the proof of Proposition \ref{proplins}
works without any change.
\begin{lemma}
For finite-dimensional matrices $A,B\in \mbox{Mat}_{k\times k}(\C)$,
$$\mbox{rank}(AA^*+BB^*)\geq \max(\mbox{rank} (A), \mbox{rank}( B))$$
\end{lemma}
\proof If $(AA^*+BB^*)(v)=0$ then $0=\langle (AA^*+BB^*)v,v\rangle=\|A^*v\|^2+\|B^*v\|^2$, so $A^*(v)=0$ and $B^*(v)=0$. Hence
$$\mbox{ker} (AA^*+BB^*)\subseteq \mbox{ker}(A^*)\cap \mbox{ker}(B^*)\,.$$
Therefore
$\mbox{rank}(AA^*+BB^*)\geq\mbox{rank}(A^*)=\mbox{rank}(A)$ and $\mbox{rank}(AA^*+BB^*)\geq\mbox{rank}(B^*)
=\mbox{rank}(B)\,$ \quad\qed
\vskip0.2in
Now let $A_n, B_n\in \mbox{Mat}_{d_n\times d_n}(\C)$, then
$$\lim_{\omega} \frac{\mbox{rank}(A_nA^*_n+B_nB_n^*)}{d_n}\geq
\lim_{\omega} \frac{\mbox{rank}(A_n)}{d_n}$$
and
$$\lim_{\omega} \frac{\mbox{rank}(A_nA^*_n+B_nB_n^*)}{d_n}\geq
\lim_{\omega} \frac{\mbox{rank}(B_n)}{d_n}\,.$$
Hence the proposition follows.
\section{Bratteli Diagrams, Ultramatricial Algebras and Tilings}
\label{ultra}
Recall that a Bratteli diagram is an oriented countable graph such that the
vertex set is partitioned into finite sets $\{Z_i\}^\infty_{i=1}$ in such a way
that
\begin{itemize}
\item If the starting vertex of an edge is in $Z_i$, then the end vertex is
necessarily in $Z_{i+1}$.
\item Each vertex has at least one outgoing edge.
\item Each vertex $\alpha$ has a non-negative {\it size} $S(\alpha)$.
\item Each edge (from a vertex $\alpha$ to a vertex $\beta$) has a
non-negative multiplicity $K(\alpha,\beta)$ such that for each
$\beta\in Z_{n+1}$, $S(\beta)=\sum_{\alpha\in Z_n} S(\alpha) K(\alpha,\beta)$
\end{itemize}
Let $P_n$ be a probability
distribution function on $Z_n$. We call the system $\{P_n\}^\infty_{n=1}$ a
{\it harmonic function} if
$$P_n(\alpha)=\sum_{\beta\in Z_{n+1}}\frac{ S(\alpha) K(\alpha,\beta)}
{S(\beta)} P_{n+1}(\beta)$$
for any $n\geq 1$ and $\alpha\in Z_{n}$.
\noindent
An {\it ultramatricial algebra} is constructed in the following way. For each
$n\geq 1$ one considers a product ring $\oplus_{l=1}^{i_n}\mbox{Mat}_{d^n_l\times
d^n_l}(\C)$. Let $K(d^n_l,d^{n+1}_j)$ be non-negative integers satisfying
$$d^{n+1}_j=\sum_{1\leq l\leq i_n} d^n_l K(d^n_l,d^{n+1}_j)$$
for any $n\geq 1$ and $1\leq j\leq i_{n+1}$.
Now for any $n\geq 1$ and $1\leq j\leq i_{n+1}$
choose a diagonal embedding
$$E_{n,j}:\oplus^{i_n}_{l=1}(\mbox{Mat}_{d^{n}_l\times
d^n_l}(\C))^{K(d^n_l,d^{n+1}_j)}\to \mbox{Mat}_{d^{n+1}_j\times d^{n+1}_j}(\C)\,.$$
The embeddings define injective maps
$$\phi_n:\oplus^{i_n}_{l=1} \mbox{Mat}_{d^{n}_l\times
d^n_l}(\C)\to
\oplus^{i_{n+1}}_{j=1} \mbox{Mat}_{d^{n+1}_j\times
d^{n+1}_j}(\C)\,.$$
The direct limit $\lim_{\to}\phi_n$ is the ultramatricial algebra
$\mathcal{A}_\phi$. Clearly, $\mathcal{A}_\phi$ is a $\star$-regular ring.
\noindent
Now for any $n\geq 1$ and $1\leq l \leq i_n$ let $P(d^n_l)$ be real numbers
satisfying
\begin{equation}
\sum^{i_n}_{l=1} P(d^n_l)=1
\end{equation}
and
\begin{equation} P(d^n_l)=\sum_{j=1}^{i_{n+1}}
\frac {d^n_l K(d^n_l,d^{n+1}_j)}{d^{n+1}_j}
P(d^{n+1}_j)\,.
\end{equation}
Then we have a Bratteli diagram with a harmonic function, where
the vertices in $Z_n$ are exactly $\{\mbox{Mat}_{d^n_l\times d^n_l}(\C)\}^{i_n}_{l=1}
$, with sizes
$\{d^n_l\}^{i_n}_{l=1}$.
The Bratteli diagram defines a rank function $\mbox{rk}_\phi$ on $\mathcal{A}_\phi$.
Namely, let
$$\mbox{rk}_\phi(a_1\oplus a_2\oplus\dots\oplus a_{i_n})=
\sum^{i_n}_{l=1} m(a_l) \frac{\mbox{rank} (a_l)}{d^n_l},$$
where $m(a_l)=P(d^n_l)$ and $\mbox{rank}(a_l)$ is the rank of the matrix $a_l$.
Then it is easy to see that each
$\phi_n$ is a rank preserving $\star$-embedding. Therefore $\mathcal{A}_\phi$ is a
rank regular ring.
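For instance (the standard example, stated here for concreteness): taking $i_n=1$, $d^n_1=2^n$, $K(d^n_1,d^{n+1}_1)=2$ and $P(d^n_1)=1$ for all $n$ (the $2^\infty$ UHF pattern), the rank $\mbox{rk}_\phi$ takes exactly the dyadic rational values in $[0,1]$ on $\mathcal{A}_\phi$, while on the rank completion it attains every value in $[0,1]$, in accordance with the earlier remark that the completion can attain any real rank value between zero and one.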
\noindent
Now let $\Gamma$ be a finitely generated group with a symmetric generating
system $S$. The Cayley graph of $\Gamma$, $Cay(\Gamma,S)$ is defined as
follows.
\begin{itemize}
\item $V(Cay(\Gamma,S))=\Gamma$
\item $(a,b)\in E(Cay(\Gamma,S))$ if $as=b$ for some $s\in S$.
\end{itemize}
Let $F\subset\Gamma$ be a finite set. Then $\partial F$ is the set of vertices
of $F$ that are adjacent to a vertex in the complement of $F$.
constant of $F$ is defined as
$$i(F):=\frac{|\partial F|}{|F|}\,.$$
The group $\Gamma$ is amenable if there exists a F$\mbox{\o}$lner-sequence in $\Gamma$
that is, a sequence of finite sets $\{F_n\}^\infty_{n=1}$ such that
$\lim_{n\to\infty} i(F_n)=0\,.$
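For example, $\Gamma=\Z^d$ with the standard symmetric generators is amenable: for the cubes $F_n=\{0,1,\dots,n-1\}^d$ we have $|\partial F_n|=n^d-(n-2)^d$, hence $i(F_n)\leq 2d/n\to 0$.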
\noindent
Now we define Bratteli-tiling systems. If $\gamma\in\Gamma,F\subset\Gamma$
then $\gamma F$ is called an $F$-tile.
A Bratteli system has the following properties.
\begin{itemize}
\item The level set $Z_n$ consists of finite sets $F^n_1,
F^n_2,\dots,F_{i_n}^n$ and the set $E_n$ containing only the unit element.
Also, we have $i(F^n_j)\leq \frac{1}{2^n}$ for all $j$ and $n$.
\item For any $n\geq 2$ and $F^n_j\in Z_n$ we have a partition
$F^n_j=\cup_{i=1} ^{a_{n,j}} \gamma_iA_i$, where $A_i\in Z_{n-1}$. That is,
we have a tiling of $F^n_j$ with the tiles of $Z_{n-1}$.
\item $K(F^{n-1}_l,F^n_j)$ is the number of $F^{n-1}_l$-tiles in the partition
of $F^n_j$. Also $K(E_{n-1},F^n_j)$ is the number of $E_{n-1}$-tiles (single
vertices).
\item $S(F^n_j)=|F^n_j|$, $S(E_n)=1$.
\item We also suppose that $K(E_{n-1},F^n_j)\leq\frac{1}{2^{n-1}}|F^n_j|\,.$
\end{itemize}
Let $m:\cup_{n=1}^\infty Z_n\to\mathbb{R}$ be a harmonic function such that
$m(E_n)\to 0$ as $n\to\infty$. Then we call the system above a Bratteli tiling
system. Our main technical tool is the following proposition.
\begin{propo}\label{bratt1}
For any amenable group $\Gamma$ and generating system $S$ we can construct
a Bratteli tiling system with the following property.
For any $\epsilon>0$ and $n>0$ there exists $\delta=\delta_{\e,n}>0$ such that
if $F\subset\Gamma$ is a finite set and $i(F)<\delta$ then one can tile $F$ with
translates of the elements $Z_n$ satisfying the following property.
If $L\in Z_n$ and $T^F_L$ is the set of points in $F$ covered by a translate
of $L$ then
$$\left|\frac{|T^F_L|}{|F|}-m(L)\right|<\epsilon\,.$$
\end{propo}
\section{Proof of Proposition \ref{bratt1}}
First, let us recall the notion of $\e$-quasitilings. Let $Cay(\Gamma,S)$ be
the Cayley-graph of an amenable group $\Gamma$ as above. Let $F\subset\Gamma$
be a finite set and $A_1,A_2,\dots, A_n$ be subsets of $F$. We say that
$\{A_i\}^n_{i=1}$ $\e$-cover $F$ if
$$\frac{|\cup^n_{i=1} A_i|}{|F|}>1-\e\,.$$
Also, we call $\{A_i\}^n_{i=1}$ $\e$-disjoint if there exist disjoint sets
$\{B_i\}^n_{i=1}$, $B_i\subset A_i$, such that
$$\frac{|B_i|}{|A_i|}>1-\e\,.$$
The system $\{A_i\}^n_{i=1}$ $\e$-quasi-tiles $F$ if it both $\e$-covers $F$ and is
$\e$-disjoint.
The following result of Ornstein and Weiss \cite{ornsteinweiss} is crucial for our proof.
\begin{propo} [Quasitiling theorem]\label{quasi}
Let $F_1\subset F_2\subset\dots$ be a F$\mbox{\o}$lner-sequence. Then for any $\e>0$
there exist $\delta>0$ and a subfamily $F_{n_1}\subset F_{n_2} \subset\dots\subset
F_{n_k}$ such that if $i(F)<\delta$ then $F$ can be $\e$-quasitiled by
translates of the $F_{n_i}$'s.\end{propo}
Observe that if $i(A)<\e$ and $B\subset A$, $\frac{|B|}{|A|}>1-\e$, then
$$i(B)<(d+1)\frac{\e}{1-\e}\,,$$
where $d$ is the degree of the vertices of $Cay(\Gamma,S)$. Indeed, $\partial
B$
is covered by the union of $\partial A$ and the neighbours of the vertices in
$A\backslash B$. Thus
$|\partial B|\leq |\partial A| +
d |A\backslash B|$. Therefore,
$$\frac{|\partial B|}{|B|}\leq \frac{|\partial B|}{|A|(1-\e)}
\leq (d+1) \frac{\e}{1-\e}\,.$$
Now using the quasitiling theorem we construct a Bratteli system inductively.
Suppose that $\{F^m_1, F^m_2,\dots, F^m_{i_m}\}$ and the decreasing
sequence of positive constants
$\{\delta_n\}^m_{i=0}$ are
already given such a way
that
\begin{itemize}
\item for any $i\geq 1$ $|\partial F^m_i|<\min(\delta_{n_1},\frac{1}{2^n})$,
\item if $i(F)<\delta_m$ then $F$ can be tiled by translates of the $F^m_i$'s
and less than $(1/2^m)|F|$ single points.
\end{itemize}
Now let $G_1\subset G_2\subset\dots$ be a F$\mbox{\o}$lner-sequence and $c>0$ such that
if $B\subset G_j$ for some $j$ and $\frac{|B|}{|G_j|}>1-c$ then
$$i(B)<\min (\delta_n,1/2^{n+1})\,.$$
By Proposition \ref{quasi} there exists a family of finite subsets
$F^{n+1}_1, F^{n+1}_2,\dots, F^{n+1}_{i_{n+1}}$ (namely subsets of a certain
system $G_{n_1}, G_{n_2},\dots, G_{n_k}$) and a constant $\delta_{n+1}$
such that
\begin{itemize}
\item for any $i\geq 1$, $i(F^{n+1}_i)
<\min(\delta_n,\frac{1}{2^{n+1}})$,
\item if $i(F)<\delta_{n+1}$ then $F$ can be tiled by translates of the
$F^{n+1}_i$'s
and less than $(1/2^{n+1})|F|$ single points.
\end{itemize}
By the induction above one can obtain a Bratteli system. Now we construct a
harmonic function $m$.
Fix a F$\mbox{\o}$lner-sequence $H_1\subset H_2\subset\dots$. Let
$\{\delta_n\}^\infty_{n=1}$ be the constants as above.
If
$$\min(\frac{1}{j_{n+1}},\delta_{j_{n+1}})\leq i(H_n)<
\min(\frac{1}{j_n},\delta_{j_n})$$
then pick a tiling of $H_n$ by translates of
the elements of $Z_{j_n}$ in such a way that the number of
single vertices is less than $(\frac{1}{2^{j_n}})|H_n|$.
Then pick a tiling of the
$Z_{j_n}$-tiles by translates of $Z_{j_n-1}$ in such a way that the number of
single vertices in any tile $T$ is less than $(\frac{1}{2^{j_{n-1}}})|T|$.
Inductively, we obtain a tiling of $H_n$ by $Z_i$-translates for any $1\leq i
\leq j_n$.
Note that the number of single vertices used in the $Z_i$-tiling of $H_n$
is less than
$$(\sum_{k=i}^{j_n} \frac{1}{2^k})|H_n|\leq \frac{1}{2^{i-1}}|H_n|\,.$$
If $A\in Z_i$ then denote by $c_k(A)$ the number of vertices in
$H_k$ covered by $A$-translates and let
$$m_k(A)=\frac{c_k(A)}{|H_k|}\,.$$
Clearly, $\sum_{A\in Z_i} m_k(A)=1\,.$ We may suppose that
for any $A$, $\lim_{k\to\infty} m_k(A)=m(A)$ exists, otherwise we could pick
a subsequence of $\{H_k\}^\infty_{k=1}$.
\begin{lemma}
The function $m$ is harmonic satisfying $\lim_{i\to\infty} m(E_i)=0\,.$
\end{lemma}
\proof
The fact that $\lim_{i\to\infty} m(E_i)=0$ follows from our previous
observation about the number of single vertices used in the tiling
of $H_k$. By definition, if $A\in Z_i$
$$m_{k}(A)=\sum_{B\in Z_{i+1}}\frac{S(A) K(A,B)}{S(B)} m_{k}(B)\,.$$
By taking the limit as $k\to\infty$ we get that
$$m (A)=\sum_{B\in Z_{i+1}}\frac{S(A) K(A,B)}{S(B)} m (B)\,.\qed $$
Now let us show that our Bratteli tiling system satisfies the required
property. First we prove a simple lemma.
\begin{lemma} \label{l42}
For any $i>0$ and $\delta>0$ there exists $\lambda>0$ and $p>0$ with
the following
property. Let $k>p$ and $J\subseteq H_{k}$, $\frac{|J|}{|H_{k}|}>1-\lambda\,.$
For $A\in Z_i, |A|>1$, let $j_A^{k}$ be the number of vertices
in $H_{k}$ that are covered by an $A$-translate (in the tiling previously
defined) which is completely inside $J$. Also, let $j^{k}_{E_i}$
be the number of points in $H_{k}$ that are not covered by any of the
$A$-translates above. Then
$$\left|\frac{j_A^{k}}{|H_k|}-m(A)\right|<\delta\quad\mbox{and}\quad
\left|\frac{j^{k}_{E_i}}{|H_k|}-m(E_i)\right|<\delta\,.$$
\end{lemma}
\proof The number of points covered by such $A$-translates that contain at
least one point from the complement of $J$ is less than $|H_{k}\backslash J|
|A|\,.$ Hence
$$m_{k}(A)\geq \frac{j_A^{k}}{|H_{k}|}\geq m_{k}(A)-\frac{|H_{k}\backslash J|}
{|H_{k}|}|A|\,.$$
Since $\sup_{A\in Z_i}|A|<\infty$ and $m_{k}(A)\to m(A)$ the lemma easily
follows. \qed
\vskip 0.2in
\noindent
Now let $\e$ be the constant in our proposition and
$0<\alpha <\e/2$. By Proposition \ref{quasi}, we have a subfamily
$H_{a_1},H_{a_2},\dots,H_{a_t}$ of $\{H_{k}\}^\infty_{k=1}$ that
$\alpha$-quasitiles any finite set $F$ with $i(F)<\delta_\alpha$.
By the previous lemma it
means that we have disjoint subsets $J$ in $F$ that can be tiled
by $Z_i$-translates in such a way that, using the notation of our proposition,
$$\left|\frac{T^F_A}{|F|}-m(A)\right|<\frac{\e}{10}\,,$$
for any $A\in Z_i$ provided that $\alpha$ is small enough.
We cover all the remaining points (that are not in the
$J$'s) by single vertices. Then we get the required tiling of $F$. \qed
\section{The canonical rank on amenable group algebras}
In this section we recall some results from \cite{elekamena}.
Let $\Gamma$ be a finitely generated amenable group and $K\Gamma$ be its group
algebra over the field $K$. Let $\{F_n\}^\infty_{n=1}$ be a F$\mbox{\o}$lner-
sequence.
For $a\in K\Gamma$ let $V^a_n\subset K^{F_n}\subset K\Gamma$ be the vector
space of elements $z$ supported on $F_n$ such that $za=0$. Then
$$\lim_{n\to\infty}\frac{\dim_K V^a_n}{|F_n|}=k_a$$
exists and is independent of the choice of the F$\mbox{\o}$lner-sequence.
We call $\mbox{rk}(a)=1-k_a$ the natural rank of $a$. It is proved in
\cite{eleklinnell}
that if $K=\C$ then $\mbox{rk}(a)=1-\dim_\Gamma Ker\,M_a$, where $\dim_\Gamma$ is the
von Neumann dimension and $Ker\,M_a$ is the set of elements
$w\in l^2(\Gamma)$ for which
$wa=0$. Note that the rank can be computed slightly differently as well.
Let $S$ be a symmetric generating system of $\Gamma$ and $Cay(\Gamma,S)$ be
the Cayley-graph of $\Gamma$.
We consider the shortest path metric $d_{Cay(\Gamma,S)}$ on $\Gamma$.
Let $\mbox{supp}(a)\subset B_r(1)$, where $B_r(1)$ is the $r$-ball around the unit
element in the Cayley-graph and
$$\mbox{supp}(a)=\{\gamma\in\Gamma\,\mid\,a_\gamma\neq 0\}$$
if $a=\sum a_\gamma \gamma$. For a finite set $F\subset \Gamma$, let
$\partial_r F$
be the set of elements $x\in F$ such that
$$d_{Cay(\Gamma,S)}(x,F^c)\leq r\,.$$
Note that $\partial F=\partial_1(F)$.
Clearly, if $b\in K\Gamma$ and
$\mbox{supp}(b)\subset F\backslash \partial_r F$ then $\mbox{supp} (ba)\subset F$.
Then for any $s>r$
\begin{equation}\label{rankf}
\mbox{rk}(a)=\lim_{n\to\infty} \frac{\dim_K (W_n a)} {|F_n|},
\end{equation}
where $W_n$ is the set of elements in $K\Gamma$
supported on $F_n\backslash \partial_s F_n$.
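\vskip 0.2in
\noindent
As a simple illustrative example (assuming only the definitions above), let
$\Gamma=\mathbb{Z}$ with generator $\gamma$ and $a=\gamma-1$. If $z\in K\Gamma$
is supported on $F_n=\{1,2,\dots,n\}$ and $za=0$, then $z\gamma=z$, that is,
$z$ coincides with its own shift; a finitely supported element with this
property must vanish. Hence $\dim_K V^a_n=0$ for every $n$, so $k_a=0$ and
$\mbox{rk}(\gamma-1)=1$. For $K=\C$ this is consistent with the formula above,
since $M_{\gamma-1}$ has trivial kernel on $l^2(\mathbb{Z})$.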
\section{Sofic representations}\label{sofic}
\subsection{Sofic approximation and sofic representations} \label{soficsub1}
In this section we recall the notion of sofic representations from
\cite{elekszabo}.
Let $\Gamma$ be a finitely generated group with a symmetric generating set
$S$. Let $\{G_n\}^\infty_{n=1}$ be a sequence of finite graphs such that
the directed edges are labeled by the elements of $S$ in such a way that if
$(x,y)$ is labeled by $s$, then $(y,x)$ is labeled by $s^{-1}$.
We say that $\{G_n\}^\infty_{n=1}$ is a sofic approximation of $\Gamma$ if for
any natural number $r>0$ there exists $n_r>0$ such that
\begin{itemize}
\item if $n\geq n_r$ then for the set $V^r_n$ of vertices $x$ for which
the ball $B_r(x)$ in $G_n$ is isomorphic to the ball $B_r(1)\subset Cay(\Gamma,S)$
as labeled graphs
$$ \frac{|V^r_n|}{|V(G_n)|}>1-\frac{1}{r}\,.$$
\end{itemize}
A group is called {\it sofic} if it possesses a sofic approximation. At this
moment no non-sofic group is known. If $Cay(\Gamma,S)$ is the Cayley-graph of
an amenable group and $\{F_n\}^\infty_{n=1}$ is a F$\mbox{\o}$lner-sequence then the
induced graphs $G_n$ of the sets $F_n$ form a sofic approximation of $\Gamma$.
If $\Gamma$ is residually finite (amenable or not) with normal chain
$\{N_k\}^\infty_{k=1},\,\cap^\infty_{k=1} N_k=\{1\}\,$ then the
graph sequence $Cay(\Gamma/N_k,S)$ forms a sofic approximation of $\Gamma$.
If $\{G_n\}^\infty_{n=1}$ is an arbitrary sofic approximation of a group
$\Gamma$ then one can construct an imbedding of $K\Gamma$ ($K$ is an arbitrary
field) into the ultraproduct of matrix algebras in the following way.
\noindent
Let $\{\mbox{Mat}_{V(G_n)\times V(G_n)}(K)\}^\infty_{n=1}$ be a sequence of
matrix algebras. Let $a\in K\Gamma$, $a=\sum r_\gamma \gamma$ be an element
of the group algebra such that if $r_\gamma\neq 0$ then $\gamma\in
B_r(1)\subset Cay(\Gamma,S)$.
Let $\{e_x\}_{x\in V(G_n)}$ be the natural basis of $K^{V(G_n)}$. If $x\in
V^r_n$ then let
$$\psi_n(a)(e_x)=\sum_{y\in B_r(x)} k_ye_y\,,$$
where $k_y=r_\gamma$ if $x\gamma=y$. Note that by our condition on the
support of $a$, $x\gamma =y$ is meaningful. If $x\notin V^r_n$ let
$\psi_n(a)(x)=0$.
This way one can define an injective homomorphism
$\psi_\mu:K\Gamma\to\mathcal{M}^{alg}_\mu$, where $\mu=\{|V(G_1)|, |V(G_2)|,\dots\}$.
If $K=\C$ the homomorphism above is a
$*$-homomorphism. The map $\psi_\mu$ is called the sofic representation
associated to the sequence $\{G_n\}^\infty_{n=1}$.
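\vskip 0.2in
\noindent
For intuition, a toy numerical check of the rank behaviour (assuming the
simplest case $\Gamma=\mathbb{Z}$, where the $n$-cycles form a sofic
approximation and every vertex is good once $n>2r$, so that $\psi_n(a)$ is the
circulant matrix of $a$): for $a=\gamma-1$ the normalized rank is $(n-1)/n$,
tending to $\mbox{rk}(\gamma-1)=1$.
\begin{verbatim}
import numpy as np

def psi_n(coeffs, n):
    # matrix of e_x -> sum_k coeffs[k] e_{x+k} on the n-cycle
    m = np.zeros((n, n))
    for k, r in coeffs.items():
        for x in range(n):
            m[x, (x + k) % n] += r
    return m

a = {0: -1.0, 1: 1.0}              # a = gamma - 1
for n in [10, 100, 1000]:
    rank = np.linalg.matrix_rank(psi_n(a, n))
    print(n, rank / n)             # 0.9, 0.99, 0.999 -> rk(a) = 1
\end{verbatim}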
\subsection{Sofic approximation of amenable groups}
Let $\{G_n\}^\infty_{n=1}$ be a sofic approximation of the amenable
group $\Gamma$ (with symmetric generating set $S$). For $L>0$ let $Q^{G_n}_L$
be the set of vertices $x$ in $G_n$ such that
$$B_L(x)\cong B_L(1)\subset Cay(\Gamma, S)$$
as $S$-labeled graphs.
If $F\subset B_L(1)$ then for $x\in Q^{G_n}_L$ we call $\pi(F)$ an
$F$-translate, where $\pi:B_L(1)\to B_L(x)$ is the $S$-labeled graph
isomorphism mapping $1$ to $x$.
In \cite{elekappro} we proved the following generalization of the
Ornstein-Weiss quasitiling theorem.
\begin{propo} \label{soficweiss}
Let $F_1\subset F_2\dots$ be a F$\mbox{\o}$lner-sequence. Then for any $\e>0$ there
exists $L>0$, $\delta>0$ and a finite subcollection $F=\{F_{n_1},
F_{n_2},\dots,F_{n_t}\}$ such that if
$$\frac{|Q^{G_n}_L|}{|V(G_n)|}>1-\delta $$
then $G_n$ can be $\e$-quasitiled by $F$-translates.
\end{propo}
\section{Imbedding $K\Gamma$ to the completion of an ultramatricial algebra}
Let $(\{Z_i\}^\infty_{i=1},m)$ be the Bratteli tiling system as in Proposition
\ref{bratt1}.
We construct an ultramatricial algebra as in Section \ref{ultra}.
Let $\oplus_{A\in Z_i} \mbox{Mat}_{|A|\times |A|}(K)$ be the $i$-th algebra.
For $B\in Z_{i+1}$ let
$$E_B:\oplus_{A\in Z_i} \mbox{Mat}_{|A|\times |A|}(K)\to \mbox{Mat}_{|B|\times |B|}(K)$$
be the diagonal embedding, where the image of each $\mbox{Mat}_{|A|\times |A|}(K)$
consists of $K(A,B)$ diagonal $|A|\times |A|$-blocks in $ \mbox{Mat}_{|B|\times |B|}(K)$.
Let
$$\phi_i=\oplus_{B\in Z_{i+1}}E_B:\oplus_{A\in Z_i}\mbox{Mat}_{|A|\times |A|}(K)\to
\oplus_{B\in Z_{i+1}}\mbox{Mat}_{|B|\times |B|}(K)$$
be the product map.
Now we define the maps
$\pi^A_i:K\Gamma\to \mbox{Mat}_{|A|\times |A|}(K)$ in the following way.
We identify the elements of $\mbox{Mat}_{|A|\times |A|}(K)$ with the linear
transformations from $K^A$ to $K^A$ in the natural way.
Let $a\in K\Gamma$, $\mbox{supp} (a)\subset B_r(1)$.
If $x\in A\backslash \partial_r(A)$, then let
$$\pi^A_i(a)(e_x)=\sum a_\gamma e_{x\gamma}\,.$$
Note that by the condition on the support $x\gamma$ is well-defined.
If $x\in\partial_r(A)$, then let $\pi^A_i(a)(e_x)=0\,.$
Finally we define the maps $\pi_i:=\oplus_{A\in Z_i}
\pi^A_i:K\Gamma\to \oplus_{A\in Z_i}
\mbox{Mat}_{|A|\times |A|}(K)$.
\begin{lemma}
\label{cauchy}
For any $a\in K\Gamma$, $\{[\pi_i(a)]\}^\infty_{i=1}$ is a Cauchy-sequence in
$\mathcal{A}_\phi$, where $[\pi_i(a)]$ denotes the image of $\pi_i(a)$ under the natural\\
embedding $\oplus_{A\in Z_i} \mbox{Mat}_{|A|\times |A|}(K)\to \mathcal{A}_\phi$.
\end{lemma}
\proof
First of all note that
$$\mbox{rk}_\phi(\phi_i\circ \pi_i(a)-\pi_{i+1}(a))=
\sum_{B\in Z_{i+1}}m(B) \frac{\mbox{rank}(E_B\circ\pi_i(a)-\pi_{i+1}^B(a))}{|B|}\,.$$
Observe that
$$\mbox{rank}(E_B\circ\pi_i(a)-\pi^B_{i+1}(a))=
|B|-\dim_K \mbox{ker}(E_B\circ\pi_i(a)-\pi^B_{i+1}(a)).$$
On the other hand,
$$\dim_K \mbox{ker} (E_B\circ\pi_i(a)-\pi^B_{i+1}(a))\geq T_B\,,$$
where $T_B$ is the number of vertices $x$ in $B$ for which
$$E_B\circ\pi_i(a)(e_x)=\pi^B_{i+1}(a)(e_x)\,.$$
Clearly,
$$T_B\geq |B|-|\partial_r B|-\sum_{A\in Z_i} K(A,B) |\partial_r(A)|\,.$$
Recall that if $|B|>1$ then
$|\partial B|\leq |B| 2^{-(i+1)}\,.$
Hence $$|\partial_r B|\leq |B|2^{-(i+1)} (d+1)^{r+1}\,,$$
where $d$ is the degree of the vertices in the Cayley graph.
Also,
$$\sum_{A\in Z_i} K(A,B) |\partial_r(A)|\leq
K(E_i,B)+\sum_{A\in Z_i\,,|A|>1} K(A,B)|A| 2^{-i}(d+1)^{r+1}\,.$$
Therefore,
$$T_B\geq |B|-|B|2^{-(i+1)}(d+1)^{r+1}-|B| 2^{-(i+1)}-|B|2^{-i}(d+1)^{r+1}\,.$$
Hence,
$$\mbox{rk}_\phi(\phi_i\circ \pi_i(a)-\pi_{i+1}(a))\leq $$$$\leq
2^{-(i+1)} +\sum_{B\in Z_{i+1}\,,|B|>1}m(B) (2^{-(i+1)}(d+1)^{r+1} +
2^{-(i+1)} + 2^{-i}(d+1)^{r+1})$$
Thus the lemma follows.\qed
\begin{lemma}
Let $a, b\in K\Gamma$. Then
\begin{enumerate}
\item $\lim_{i\to\infty} \mbox{rk}_\phi(\pi_i(a)\pi_i(b)-\pi_i(ab))=0\,.$
\item $\lim_{i\to\infty} \mbox{rk}_\phi(\pi_i(a)+\pi_i(b)-\pi_i(a+b))=0\,.$
\item If $K=\C$ then
$\lim_{i\to\infty} \mbox{rk}_\phi(\pi_i(a^*)-\pi_i(a)^*)=0\,.$
\end{enumerate}\end{lemma}
\proof We prove the first part only, the other two statements can be seen
exactly the same way.
Suppose that $\mbox{supp}(a)\subset B_r(1)$ and $\mbox{supp}(b)\subset B_s(1)$.
If $x\in A\backslash \partial_{r+s}(A)$ then
$$(\pi_i(a)\pi_i(b)-\pi_i(ab))(e_x)=0\,.$$
Therefore
$$\mbox{rk}_\phi(\pi_i(a)\pi_i(b)-\pi_i(ab))
\leq \sum_{A\in Z_i} m(A)\frac{|\partial_{r+s}(A)|}{|A|}\,,$$
which tends to zero by the boundary bounds above.
Hence the lemma follows. \qed
\vskip 0.2in
\noindent
Let $\phi(a)\in\overline{\mathcal{A}}_\phi$ be the limit of the elements $[\pi_i(a)]$.
By the previous lemma $\phi$ is a homomorphism and if $K=\C$ then $\phi$ is
even a $\star$-homomorphism. Finally, we prove that
$\mbox{rk}_\phi(\phi(a))=\mbox{rk}_\Gamma(a)\,.$
By definition,
$$\mbox{rk}_\phi(\pi_i(a))=\sum_{A\in Z_i}m(A)\frac{\mbox{rank}(\pi^A_i(a))}{|A|}\,.$$
If $|A|=1$, then $m(A)\leq 2^{-i}$, otherwise by (\ref{rankf})
$$\lim_{i\to\infty}\frac{\mbox{rank}(\pi^A_i(a))}{|A|}=\mbox{rk}_\Gamma(a)\,.$$
Hence, $\mbox{rk}_\phi(\phi(a))=\mbox{rk}_\Gamma(a)\,.$ This finishes the proof of our
theorem. \qed
\section{The proof of the main theorem}
\subsection{The strategy of the proof}
We have four complete regular $*$-rings: $\overline{\mathcal{A}_\phi}$, $U(\Gamma)$,
$\mathcal{M}^{alg}_\mu$ and $U(\mathcal{M}_\mu)$. Also, we define seven rank-preserving
embeddings
\begin{enumerate}
\item $f_1:\C\Gamma\to \mathcal{M}^{alg}_\mu$
\item $f_2:\C\Gamma\to U(\Gamma)$
\item $f_3:\C\Gamma\to U(\mathcal{M}_\mu)$
\item $f_4:\C\Gamma\to \overline{\mathcal{A}_\phi}$
\item $f_5:\overline{\mathcal{A}_\phi}\to\mathcal{M}^{alg}_\mu$
\item $f_6:\overline{\mathcal{A}_\phi}\to U(\mathcal{M}_\mu)$
\item $f_7:U(\Gamma)\to U(\mathcal{M}_\mu)$.
\end{enumerate}
We shall show three identities:
\begin{enumerate}
\item $f_5\circ f_4=f_1$
\item $f_6\circ f_4=f_3$.
\item $f_7\circ f_2= f_3$
\end{enumerate}
From these identities the main theorem easily follows. Indeed,
$R(\Gamma)$ is the smallest $\star$-regular ring containing $\C\Gamma$ in
$U(\Gamma)$. The ring $R(\Gamma)$ is inside $\overline{\mathcal{A}_\phi}$, in fact,
it is the minimal $\star$-regular ring containing $\C\Gamma$
in $\overline{\mathcal{A}_\phi}$. On the other hand,
$R(\C\Gamma, \mathcal{M}^{alg}_\mu)$ is also the smallest $\star$-regular ring
containing $\C\Gamma$ in $\overline{\mathcal{A}_\phi}$. \qed
\subsection{The first identity}\label{sub81}
Let $\{G_n\}^\infty_{n=1}$ be the sofic approximation of our group $\Gamma$
and
$\mathcal{M}^{alg}_\mu$ be the associated ultraproduct. Let $\{H_n\}^\infty_{n=1}$
be the F$\mbox{\o}$lner-sequence in the proof of Proposition \ref{bratt1}.
Let $f_4$ be the map $\phi:\C\Gamma\to \overline{\mathcal{A}_\phi}$ defined in
the proof of Theorem \ref{tetel2}. Let $f_1$ be the map $\psi_\mu:
\C\Gamma\to \mathcal{M}^{alg}_\mu$ defined in Subsection \ref{soficsub1}.
Fix $k\geq 1$. Now we define maps
$\tau_{k,n}:\oplus_{A\in Z_k} \mbox{Mat}_{A\times A}(\C)\to \mbox{Mat}_{V(G_n)\times
V(G_n)}(\C)$
for large enough $n\geq 1$.
\noindent
First, let $q\geq 1$ be an integer. We say that $G_n$ is $q$-regular if
$G_n$ can be $\frac{1}{2^q}$-quasitiled by translates of
$\{H_{n_1}, H_{n_2},\dots H_{n_t}\}$, where $n_i\geq q$. For $n\geq 1$, let
$q(n)$ be the largest $q$ for which $G_n$ is $q$-regular. By Proposition
\ref{soficweiss}, for any $q\geq 1$ there exists some $n_q$ such that
if $n\geq n_q$ then $q(n)\geq q$.
\noindent
Now consider a $\frac{1}{2^q}$-quasitiling of $G_n$ by translates of
$\{H_{n_1}, H_{n_2},\dots H_{n_t}\}$. Then consider the iterated tiling for
each $H_{n_i}$ above by $Z_k$'s as in the proof of Proposition \ref{bratt1},
starting with $Z_{l(n)}$-tilings. Since the translates are not disjoint this
does not yet define a tiling of $G_n$. However, let $\{J_\alpha\}_{\alpha\in
I}$ be the disjoint parts in the quasitiling. That is, each $J_\alpha$ is
inside some $H_{n_i}$-translate having size at least $(1-\frac{1}{2^q})
|H_{n_i}|$. Discard the tiles that are inside some $Z_{l(n)}$-tile that
is not contained in some $J_\alpha$. Cover the remaining part of $G_n$ by
single vertices.
For $A\in Z_k$, let $Q_n(A)$ be the number of vertices in $V(G_n)$
that are covered
by an $A$-translate. By Lemma \ref{l42}, it is easy to see that
$$\lim_{n\to\infty}\frac{Q_n(A)}{|V(G_n)|}=m(A)\,.$$
Now let $\tau_{k,n}:\oplus_{A\in Z_k} \mbox{Mat}_{A\times A}(\C)\to
\mbox{Mat}_{V(G_n)\times V(G_n)}(\C)$ be the natural diagonal map induced by the
tiling above. If $|A|>1$, the definition is clear. The case where $A=E_k$,
that is $A$ is a
single point needs some clarification. In the diagonal map, we use only
those vertices in $G_n$ that are in some ``good'' $Z_{l(n)}$-tile, in other
words, that are not used to cover the remaining part. All the diagonal
elements in the image of $\tau_{k,n}$ that belong to a vertex covering the
remaining part are zero.
\noindent
By the iterative tiling construction, one can immediately see that
$\tau_{k+1,n}\circ \phi_k=\tau_{k,n}$. If $k>q(n)$, let us define
$\tau_{k,n}:=0$. Therefore we have a map
$$\tau=(\tau_{k,1},\tau_{k,2},\dots):\oplus_{A\in Z_k}\mbox{Mat}_{A\times A}(\C)\to
\oplus^\infty_{n=1} \mbox{Mat}_{V(G_n)\times V(G_n)}(\C)$$
and this map extends to $\mathcal{A}_\phi$ as well.
\begin{lemma}\label{ranklemma}
For any $(a_1\oplus a_2\oplus\dots\oplus a_{i_k})\in \oplus_{A\in Z_k}
\mbox{Mat}_{A\times A}(\C)$
$$\lim_{n\to\infty} \frac{\mbox{rank}\,
(\tau_{k,n}(a_1\oplus a_2\oplus\dots a_{i_k}))}
{|V(G_n)|}=\mbox{rk}_\phi(a_1\oplus a_2\oplus\dots a_{i_k})\,.$$
\end{lemma}
\proof
$$\lim_{n\to\infty} \frac{\mbox{rank} (\tau_{k,n}(a_1\oplus a_2\oplus\dots a_{i_k}))}
{|V(G_n)|}=\lim_{n\to\infty} \sum_{A\in Z_k}
\frac{\frac {Q_n(A)}{|A|} \mbox{rank}(a_A)} {|V(G_n)|}=$$
$$=\sum_{A\in Z_k} m(A)\frac{\mbox{rank}(a_A)}{|A|}=
\mbox{rk}_\phi(a_1\oplus a_2\oplus\dots
a_{i_k})\,.$$
\qed
\vskip 0.2in
\noindent
Therefore we have a rank-preserving map
$\tau_{alg}:\oplus_{A\in Z_k}\mbox{Mat}_{A\times A}(\C)\to \mathcal{M}^{alg}_\mu$
defined as $\pi\circ\tau$, where
$$\pi:\oplus^\infty_{n=1} \mbox{Mat}_{V(G_n)\times V(G_n)}(\C)\to \mathcal{M}^{alg}_\mu$$
is the quotient map. The map $\tau_{alg}$ extends to the rank completion of
$\mathcal{A}_\phi$, resulting in the map $f_5$.
\noindent
Now let us prove the first identity.
Let $a\in\C\Gamma$, $\mbox{supp}(a)\subset B_r(1)\subset Cay(\Gamma,S)$.
Then $f_1(a)$ can be represented in $\oplus^\infty_{n=1}
\mbox{Mat}_{V(G_n)\times V(G_n)}(\C)$ by the element $\oplus_{n=1}^\infty \psi_n(a)$,
where $\psi_n$ is defined in Subsection \ref{soficsub1}.
On the other hand, $f_5\circ f_4(a)$ is represented by
$\oplus_{n=1}^\infty \psi'_n(a)$, where
$$\psi'_n(a)(e_x)=\sum_{y\in B_r(x)} k_ye_y\,,$$
where $k_y=r_\gamma$ if $x\gamma=y$ and $x\in J_\alpha\backslash\partial_r(J_\alpha)$, for some
$J_\alpha$ in a $H_{n_i}$-translate, and $\psi'_n(a)(e_x)=0$ otherwise. Clearly,
$$\lim_{n\to\infty} \frac{z_n(a)}{|V(G_n)|}=1\,,$$
where $z_n(a)$ is the number of elements $x\in V(G_n)$ for which
$$\psi'_n(a)(e_x)=\psi_n(a)(e_x)\,.$$
Therefore $f_5\circ f_4=f_1$.
\subsection{The second identity}
Let $\mbox{rk}_1$ resp. $\mbox{rk}_2$ denote the ranks on $\mathcal{M}_\mu$ resp. $\mathcal{M}_\mu^{alg}$.
Let $$t=(t_1,t_2,\dots)\in\oplus^\infty_{n=1} \mbox{Mat}_{V(G_n)\times V(G_n)}(\C)\,,$$
where $\sup_n \|t_n\|<\infty$. Note that $t$ represents an element
$[t]_{\mathcal{M}_\mu}$ in $\mathcal{M}_\mu$ and an element
$[t]_{\mathcal{M}_\mu^{alg}}$ in $ \mathcal{M}_\mu^{alg}$. It is important to note
that $\mbox{rk}_1([t]_{\mathcal{M}_\mu})$ is not necessarily equal to
$\mbox{rk}_2([t]_{\mathcal{M}_\mu^{alg}})$. Indeed, let $t_n=\frac{1}{n} Id$. Then
$\mbox{rk}_1([t]_{\mathcal{M}_\mu})=0\,.$ Nevertheless, $\mbox{rk}_2([t]_{\mathcal{M}_\mu^{alg}})=1\,.$
However, we have the following lemma.
\begin{lemma}\label{however1}
Let $t=(t_1,t_2,\dots)\in\oplus^\infty_{n=1} \mbox{Mat}_{V(G_n)\times V(G_n)}(\C)$,
where for any $n\geq 1$, $t_n$ is self-adjoint and all the $t_n$'s have
altogether finitely many distinct eigenvalues $\lambda_0=0,
\lambda_1,\lambda_2,\dots,\lambda_k$. Then
$$\mbox{rk}_1([t]_{\mathcal{M}_\mu})=\mbox{rk}_2([t]_{\mathcal{M}^{alg}_\mu})\,.$$
\end{lemma}
\proof Let $t_{n,i}$ be the multiplicity of $\lambda_i$ in $t_n$. Then
$$\mbox{rk}_2([t]_{\mathcal{M}^{alg}_\mu})=\lim_\omega(1-\frac{t_{n,0}}{|V(G_n)|})\,.$$
The spectral decomposition of $[t]_{\mathcal{M}_\mu}$ is $\sum^k_{i=1}\lambda_i P_i$,
where $$tr_{\mathcal{M}_\mu}(P_i)=\lim_\omega \frac{t_{n,i}}{|V(G_n)|}\,.$$
By (\ref{rankformula}) $$\mbox{rk}_1([t]_{\mathcal{M}_\mu})=
\lim_\omega(1-\frac{t_{n,0}}{|V(G_n)|})\,.\quad\qed$$
We also need the following lemma.
\begin{lemma} \label{however2}
Let $t$ be as above and suppose that
$$\lim_{n\to\infty}\frac{\mbox{rank}(t_n)}{|V(G_n)|}=0\,.$$ Then
$[t]_{\mathcal{M}_\mu}=0\,.$
\end{lemma}
\proof We need to check that
$\lim_{n\to\infty}\frac{tr (t_n^* t_n)}{|V(G_n)|}=0\,.$
Observe that $\sup_n \|t_n^*t_n\|=K<\infty$ and $\mbox{rank}(t^*_nt_n)\leq
\mbox{rank}(t_n)\,.$
Then $tr(t^*_nt_n)\leq K\,\mbox{rank}(t^*_nt_n)$. Hence the lemma follows. \qed
\vskip 0.2in
\noindent
Let $i_\mu:\C\Gamma\to\mathcal{M}_\mu$ be defined as $\rho\circ\psi$, where
$\psi=\oplus^\infty_{n=1}\psi_n$ as in Subsection \ref{soficsub1} and
$$\rho:B(\oplus^\infty_{n=1}\mbox{Mat}_{V(G_n)\times V(G_n)}(\C))\to\mathcal{M}_\mu\,$$
is the quotient map from the bounded part of the direct product. The map
$i_\mu$ is trace-preserving and extends to an injective trace-preserving map
$\overline{i_\mu}:\mathcal{N}(\Gamma)\to\mathcal{M}_\mu$ (see \cite{Elekhiper} and
\cite{pestov}). The map $f_3$ is just $i_\mu$ composed by the embedding of
$\mathcal{M}_\mu$ into its Ore-extension. We prove later that $f_3$ is rank-preserving.
\noindent
Now let us define the map $f_6$. Let $\tau$ be the map
defined in Subsection \ref{sub81}. Then let
$j=\rho\circ\tau:\mathcal{A}_\phi\to\mathcal{M}_\mu$ and let $s=u\circ j$, where $u:\mathcal{M}_\mu\to
U(\mathcal{M}_\mu)$ is the natural embedding. Then $f_6$ is defined as the extension
of $s$ onto $\overline{\mathcal{A}_\phi}$. We need to show, of course, that $j$ is
rank-preserving, that is,
$$\mbox{rk}_1[\tau(\underline{a})]_{\mathcal{M}_\mu}=
\mbox{rk}_2[\tau(\underline{a})]_{\mathcal{M}^{alg}_\mu}\,,$$
for any $\underline{a}\in\oplus_{A\in Z_k} \mbox{Mat}_{A\times A}(\C)\,.$
However, this immediately follows from Lemma \ref{however1}.
\noindent
Now we prove the second identity. This also shows that $f_3$ is
rank-preserving. Again, it is enough to show that
\begin{equation}
\label{utolso}
[\oplus^\infty_{n=1}(\psi_n(a)-\psi'_n(a))]_{\mathcal{M}_\mu}=0\,.
\end{equation}
We already proved that
$$\lim_{n\to\infty}\frac{\mbox{rank}(\psi_n(a)-\psi'_n(a))}{|V(G_n)|}=0\,.$$
Obviously, $\sup \|\psi_n(a)-\psi'_n(a)\|<\infty\,,$
hence (\ref{utolso}) follows from Lemma \ref{however2}.
\subsection{The third identity}
By definition, $i_\mu=\overline{i_\mu}\circ i$, where $i$ is the natural
embedding of $\C\Gamma$ into $\mathcal{N}(\Gamma)$. This immediately proves the third
identity. This completes the proof of our main theorem.\qed
\section{Introduction}
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial perturbations \cite{bfgs, fgsm, pgd, deepfool, cw, onepixel, hang2020ensemble, huang2022cyclical}. In a classification task, a sample that is correctly predicted by a DNN can easily be predicted as a wrong class after adding an imperceptible perturbation elaborately crafted by the attacker. However, such perturbations are image-dependent, which means the attacker must
craft a perturbation for each datum. Later, Moosavi-Dezfooli et al. \cite{deepfooluap}, for the first time, proposed a special type of adversarial attack,
which
fools DNNs by adding the same perturbation, called a Universal Adversarial Perturbation (UAP), to all samples.
Since its proposal,
researchers have figured out numerous ways to craft UAPs \cite{nag, gap, gduap, aaa, pdua, uapforod}.
Compared with regular adversarial examples, which reveal the over-sensitivity of DNNs, UAPs are different: they reveal that DNNs can be largely affected by a single perturbation for almost all inputs. The mechanism relies on the conjecture that the differences among the representations of different images, even from different classes, are vanishing, such that they can be attacked by the same perturbation. This conjecture coincides with the recently discovered \emph{Neural Collapse} (NC, \cite{neuralcollapse}), which means that
a DNN induces an underlying mathematical simplicity in the last-layer activations.
One manifestation of NC is variability collapse, where the within-class variation of the activations becomes negligible as these activations collapse to their class means, from which it follows that finding a UAP for all the samples may be more feasible.
The collapse of the differences among samples is the essential reason why universal perturbations can be found. Thus, directly attacking the layers where NC happens is expected to yield stronger UAPs than attacking other places, e.g., the output layer, as most UAP methods do. This is just what we do in this paper. Specifically,
with a proposed Feature-Gathering loss (FG loss), the adversary manages to find stronger universal perturbations in the layer which has little within-class diversity and is meanwhile expressive for potential perturbations. Our method is simple but effective, and numerical experiments verify that it outperforms
the state-of-the-art UAPs,
for both untargeted and targeted attacks, on both regular and
limited training datasets.
Moreover, with the proposed method, we can
generate UAPs for Vision Transformers (ViTs) \cite{vit}, which are free from convolutional architectures and are believed to be more robust against adversarial perturbations \cite{mahmood2021robustness, shao2021adversarial, aldahdooh2021reveal}. Results show that our method also defeats cutting-edge baselines, though ViTs are indeed less likely to be fooled by UAPs than convolutional neural networks. We further evaluate the transferability among CNNs and ViTs, discovering that CNNs can be easily attacked by UAPs computed for other structures, including ViTs, while the converse does not hold.
Beyond generating stronger attacks, we can also use the proposed UAPs to better investigate DNNs. By analyzing the features of UAPs, we find a new collapse phenomenon: the features of images perturbed by a UAP concentrate along a single direction in the layer we attack. This provides new evidence for NC and can explain the phenomenon of dominant labels, which is mentioned but not fully discussed in \cite{deepfooluap, cosineuap}.
\subsection*{Contributions}
\begin{itemize}
\item
Inspired by the NC phenomenon, we propose a simple but effective method to generate strong UAPs for DNNs.
We name this UAP the Feature-Gathering UAP (FG-UAP) for its strong ability to gather natural images' features.
\item We verify the effectiveness of our FG-UAP on various DNNs and achieve state-of-the-art performance not only in the untargeted task but also in the targeted and mini-set tasks.
\item We discuss the mechanism of FG-UAP in the view of NC by analyzing the labels and features of adversarial examples extracted by DNNs, providing a more detailed explanation on the dominant label phenomenon.
\end{itemize}
\section{Related Work}\label{related}
Szegedy \textit{et al. }\cite{bfgs} firstly observed that DNNs are vulnerable to maliciously constructed small noises called adversarial perturbations. Following this discovery, numerous attack methods have been proposed, including Fast Gradient Sign Method (FGSM) \cite{fgsm}, Projected Gradient Descent (PGD) \cite{pgd}, C\&W \cite{cw}, DeepFool \cite{deepfool}, and AoA \cite{aoa}. These methods craft perturbations by designing different losses and optimization algorithms, and have been extended to various research fields \cite{ma2021understanding, bisogni2021adversarial, li2021black}.
Notice that perturbations generated by all the above methods are image-dependent, which means different perturbations must be specifically computed for different images.
Different from image-dependent attacks, image-agnostic attacks shift the majority of images' predictions with a single perturbation, named a Universal Adversarial Perturbation (UAP). This special type of adversarial perturbation was first proposed by Moosavi-Dezfooli \textit{et al.} \cite{deepfooluap}, where an iterative procedure based on DeepFool \cite{deepfool} is designed. To distinguish this type from other UAPs, we hereinafter refer to this UAP method as DeepFool-UAP. Motivated by \cite{deepfooluap}, researchers proposed more algorithms to generate UAPs. Mopuri \textit{et al.} \cite{nag} put forward a Network for Adversary Generation (NAG) to model the distribution of adversarial perturbations. Poursaeed \textit{et al.} \cite{gap} present Generative Adversarial Perturbations (GAP) to create UAPs for both classification and semantic segmentation tasks. In addition, they are the first to present challenging targeted UAPs. Later, it has been confirmed that targeted UAPs can also be found by exploiting a proxy dataset instead of the original training data \cite{dfuap}. Mopuri \textit{et al.} \cite{gduap} compute UAPs by over-firing the extracted features at multiple layers. Liu \textit{et al.} \cite{pdua} consider the model uncertainty to craft an insensitive universal perturbation. Li \textit{et al.} \cite{uapforod} extend such universal attacks to the detector level. The latest Cosine-UAP \cite{cosineuap} proposes an algorithm based on cosine similarity to craft state-of-the-art UAPs, and also discusses the phenomenon of the dominant class, which was first discovered in \cite{deepfooluap}.
Neural Collapse (NC) has become another line of research since Papyan \textit{et al.} \cite{neuralcollapse} first revealed the tendency toward a simple symmetric structure in penultimate layers during the terminal phase of training. The empirical demystifying of penultimate features has spurred extensive research on the theoretical philosophy underlying different settings. A line of studies proves that global minimizers of the cross-entropy \cite{weinan2022emergence,lu2022neural}, MSE \cite{han2022neural}, and contrastive \cite{awasthi2022more} losses are all NC favorable. Zhu \textit{et al.} \cite{zhu2021geometric} elucidate that any optimization algorithm which can escape strict saddle points will converge to NC, showcasing SGD \cite{saad1998online}, Adam \cite{kingma2014adam} and LBFGS \cite{bottou2018optimization} through experimental verification. \cite{tirer2022extended} generalizes previous results by adding a nonlinear layer, presenting the same succinct structure as before.
This line of work corroborates that NC persists across a wide range of well-trained overparameterized neural networks. It is instructive to connect the practical implications of NC with adversarial attacks. Thus, we make a first attempt at utilizing this pervasive behavioral simplicity of high-level features to generate UAPs.
\section{Proposed Approach}
\subsection{Problem formulation}
Consider a classification task with
natural images $X = \{\boldsymbol{x_1}, \boldsymbol{x_2}, \dots, \boldsymbol{x_n}\}$ and the corresponding labels $y_1, y_2, \dots, y_n$. A classifier $C(\cdot)$ maps an input image $\boldsymbol{x_i}$ to a predicted label $C(\boldsymbol{x_i})$. The goal of the universal attack is to fool the classifier with a single perturbation. This means the
victim classifier tends to predict any image as an incorrect
class when this image is corrupted
by this perturbation. Mathematically, finding such a UAP
$\boldsymbol{\delta}$ can be formulated as the following problem,
\begin{eqnarray}
\begin{array}{ll}
& \mathop{\text{max}}\limits_{\boldsymbol{\delta}} ~ \mathbb{P}_{\boldsymbol{x} \in X}[C(\boldsymbol{x}) \neq C(\boldsymbol{x} + \boldsymbol{\delta})]\\
& \text{s.t.} ~ \|\boldsymbol{\delta} \|\leq \xi,
\end{array}
\end{eqnarray}
where $\xi$ is a user-given threshold
to ensure that the perturbation is visually imperceptible to humans.
A typical classifier is composed of a feature extractor with numerous layers and an average pooling layer, followed by one or more fully connected linear layers, as shown in Figure \ref{fig:framework}. Except for the last one, all the linear layers are followed by non-linear activations.
Denote by $\boldsymbol{h}$ the function calculated by the layers before the last linear layer. This function maps an input $\boldsymbol{x}$ to a feature vector $\boldsymbol{h}(\boldsymbol{x})$, which is referred to as the \textit{last-layer feature} according to \cite{neuralcollapse}. The final output of the last layer contains the logit value for each class; this output is usually called the logit vector, and the corresponding output space is called the logit space. The logit space is the output of an end-to-end DNN, and numerous attack methods are implemented based on this space \cite{deepfooluap, rhp, aaa, gap, fastuap, dfuap, cosineuap}.
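For concreteness, the last-layer feature $\boldsymbol{h}(\boldsymbol{x})$ can be read off with a forward hook on the final linear layer. The following PyTorch sketch illustrates this; the module name \texttt{fc} is specific to torchvision ResNets, and other architectures name their classifier heads differently.
\begin{verbatim}
import torch
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()
features = {}

def hook(module, inputs, output):
    features["h"] = inputs[0]      # input of the final linear layer = h(x)

model.fc.register_forward_hook(hook)

x = torch.randn(4, 3, 224, 224)    # a dummy batch
logits = model(x)                  # the forward pass fills features["h"]
print(features["h"].shape)         # torch.Size([4, 2048]) for ResNet50
\end{verbatim}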
\subsection{Neural Collapse and UAP}
Recently, Neural Collapse (NC, \cite{neuralcollapse}) has been found to be a
special and essential phenomenon happening in the last-layer activations of DNNs. A manifestation of NC is variability collapse, i.e., the within-class variation of the activations becomes negligible as these activations collapse to their class means:
\begin{equation}
\boldsymbol{\Sigma}_{W} \triangleq \underset{i, c}{\operatorname{Ave}}\left\{\left(\boldsymbol{h}_{i, c}-\boldsymbol{\mu}_{c}\right)\left(\boldsymbol{h}_{i, c}-\boldsymbol{\mu}_{c}\right)^{\top}\right\}\rightarrow 0,
\end{equation}
where $\boldsymbol{h}_{i,c}\triangleq\boldsymbol{h}(\boldsymbol{x}_{i,c})$ is the last-layer feature of the $i$-th sample in the $c$-th class and $\boldsymbol{\mu}_{c}\triangleq \underset{i}{\operatorname{Ave}}\{\boldsymbol{h}_{i,c}\}$ is the mean value of the corresponding class.
Such collapse of the samples' differences
perfectly explains the existence of UAPs: the diversity of natural images is
damped in a well-trained DNN, from which it follows that
one can find
perturbations lying in a
subspace in which most of the normal vectors of the decision boundaries lie.
Such a perturbation
can then fool the majority of images of the same class.
As long as there is NC, we can find the corresponding UAPs more easily. An extreme case is when NC happens in the output layer, where images in the same class naturally have no variance, which corresponds to the existing UAPs. However, from the viewpoint of attack, we prefer a more expressive space so that we have more freedom to choose a good attack direction. Now that NC happens not only in the output layer but also in the last-layer feature space,
we deem attacking the last-layer feature space a more effective and meaningful choice.
\subsection{Feature-Gathering UAP (FG-UAP)}
\begin{figure}[htbp]
\centering
\includegraphics[width=.9\textwidth]{fig/framework.png}
\caption{The framework of FG-UAP. For a batch of training images $\boldsymbol{x}$, we first input them into the DNN to get their last-layer features. Then, add the universal perturbation $\boldsymbol{\delta}$ to the images to get a set of adversarial examples $\boldsymbol{x'}$ and get their last-layer features. Finally, calculate FG-Loss with these features and optimize the $\boldsymbol{\delta}$ iteratively. }
\label{fig:framework}
\end{figure}
Now we explain the details of how to craft UAPs by attacking the last-layer feature space. In practice, we feed clean samples into the targeted DNN in turn and lower the similarity of $\boldsymbol{h}(\boldsymbol{x})$ and $\boldsymbol{h}(\boldsymbol{x}+\boldsymbol{\delta})$. To measure similarity, we use cosine similarity, considering its suitability for feature vectors and its effectiveness proved in previous research \cite{cosineuap, dp}. To this end, we design a loss named the Feature-Gathering loss (FG loss). Given a natural sample $\boldsymbol{x}$ and its corresponding adversarial example $\boldsymbol{x} + \boldsymbol{\delta}$, their features in the last-layer feature space are abbreviated as $\boldsymbol{h}$ and $\boldsymbol{h'}$, respectively. Without any label information, we can directly compute the FG loss:
\begin{equation}
\mathcal{L}_{\mathrm{FG}}(\boldsymbol{h}, \boldsymbol{h'}) = \text{sim}(\boldsymbol{h}, \boldsymbol{h'}) = \frac{\boldsymbol{h} \cdot \boldsymbol{h'}}{\|\boldsymbol{h}\|\| \boldsymbol{h'}\|}
= \frac{\sum\limits_{i=1}^{n}h_i {h_i'}}
{\sqrt{\sum\limits_{i=1}^{n}h_i^2}\sqrt{\sum\limits_{i=1}^{n}{h_i'}^2}}
\end{equation}
The whole attacking procedure is demonstrated in Figure \ref{fig:framework} and detailed in Algorithm \ref{ag:fguap}. Since the crafted UAP has the ability to gather natural images' features toward a new direction (see Section~\ref{sec:property} for experimental details), we name this universal perturbation the Feature-Gathering UAP (FG-UAP).
\begin{algorithm}[htbp]
\caption{FG-UAP}
\label{ag:fguap}
\textbf{Input}: classifier $f$, training set $X$, perturbation magnitude $\xi$, batch size $b$, maximum number of epochs $m$, learning rate $lr$.\\
\textbf{Output}: universal perturbation $\boldsymbol{\delta}$
\begin{algorithmic}[1]
\STATE Initialize $\boldsymbol{\delta}\leftarrow 0, t \leftarrow 0$.
\WHILE{$t<m$}
\FOR{each batch of data $B_i \subset X:len(B_i)=b$}
\STATE $\boldsymbol{g} \leftarrow \underset{\boldsymbol{x}\sim B_i}{\mathbb{E}}\left[\nabla_{\boldsymbol{\delta}}\mathcal{L}_{\mathrm{FG}}(\boldsymbol{h}(\boldsymbol{x}), \boldsymbol{h}(\boldsymbol{x+\delta}))\right]$
\STATE $\boldsymbol{\delta} \leftarrow \boldsymbol{\delta} + \Gamma (\boldsymbol{g},lr)$ ~~~~\text{\# apply the optimizer update that decreases the FG loss}
\STATE $\boldsymbol{\delta} \leftarrow \text{clamp}(\boldsymbol{\delta}, -\xi, \xi)$ ~~~~\text{\# clamp the perturbation}
\ENDFOR
\STATE $t \leftarrow t+1$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
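The following PyTorch sketch mirrors Algorithm \ref{ag:fguap}; it assumes the hooked \texttt{model} and \texttt{features} dictionary from the sketch in the previous subsection, and a \texttt{DataLoader} of images scaled to $[0,1]$.
\begin{verbatim}
import torch

def fg_uap(model, loader, features, xi=10/255, epochs=10, lr=0.02,
           device="cuda"):
    # `features["h"]` is filled by the forward hook registered earlier.
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    cos = torch.nn.CosineSimilarity(dim=1)
    for _ in range(epochs):
        for x, _ in loader:               # (image, label) batches
            x = x.to(device)
            with torch.no_grad():
                model(x)
            h = features["h"]             # clean features, no gradient
            model((x + delta).clamp(0, 1))
            h_adv = features["h"]         # perturbed features
            loss = cos(h, h_adv).mean()   # FG loss
            opt.zero_grad()
            loss.backward()               # descend, i.e. lower the similarity
            opt.step()
            with torch.no_grad():
                delta.clamp_(-xi, xi)     # keep the infinity norm within xi
    return delta.detach()
\end{verbatim}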
\section{Experiments}\label{expsec}
In numerical experiments, we mainly evaluate the proposed FG-UAP
on six typical DNNs with convolutional architectures (hereinafter abbreviated as CNNs). The UAPs are trained and evaluated on the ILSVRC 2012 \cite{imagenet} validation set ($50,000$ images). The
victim CNNs include AlexNet \cite{alexnet}, GoogLeNet \cite{googlenet}, VGG16 \cite{vgg}, VGG19 \cite{vgg}, ResNet50 \cite{resnet}, and ResNet152 \cite{resnet}, all of which
are pre-trained models obtained from Torchvision \cite{pytorch}.
Adam \cite{kingma2014adam} is chosen as the optimizer. The hyper-parameters in Algorithm \ref{ag:fguap} are set as $b=32$, $m=10$, $lr=0.02$, and the magnitude of crafted UAP is set as $\xi = 10/255$, which is consistent with other universal attack methods. All the experiments are performed on PyTorch \cite{pytorch} with NVIDIA GeForce RTX 2080Ti GPUs.
\subsection{Universal attack on CNNs} \label{sec:cnn}
We first train FG-UAPs to attack CNNs
and calculate their Fooling Ratios (FRs),
the primary criterion for evaluating the strength of a UAP. Table \ref{tab:cnn} shows the experimental results along with those of other UAP methods. From the table, it can be observed that FG-UAP achieves the highest FRs for all victim models, and the improvements over the state-of-the-art methods are at least 1\% and up to 5\%.
Since neither the loss function nor the optimization method of FG-UAP is unique (both have appeared in other methods), the good performance
verifies our expectation that attacking the layers with variability collapse results in stronger universal perturbations. Figure \ref{fig:cnn_attack} visualizes the generated FG-UAPs and corresponding sample adversarial images for different CNNs.
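For reference, the FR used throughout is simply the fraction of samples whose prediction changes under the perturbation; a minimal sketch:
\begin{verbatim}
import torch

@torch.no_grad()
def fooling_ratio(model, loader, delta, device="cuda"):
    fooled, total = 0, 0
    for x, _ in loader:
        x = x.to(device)
        pred_clean = model(x).argmax(dim=1)
        pred_adv = model((x + delta).clamp(0, 1)).argmax(dim=1)
        fooled += (pred_clean != pred_adv).sum().item()
        total += x.size(0)
    return 100.0 * fooled / total
\end{verbatim}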
\begin{table}[H]
\centering
\caption{FRs (\%) of different UAP methods attacking CNNs on ImageNet validation set, where the best FRs are highlighted in bold, and the second-best ones are underlined.}
\label{tab:cnn}
\vspace{2mm}
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}ccccccc@{}}
\toprule
Method & AlexNet & GoogLeNet & VGG16 & VGG19 & ResNet50 & ResNet152 \\ \midrule
DeepFool-UAP \cite{deepfooluap} & 93.3 & 78.9 & 78.3 & 77.8 & - & 84.0 \\
GAP \cite{gap} & - & 82.7 & 83.7 & 80.1 & 62.8 & 59.19 \\
NAG \cite{nag} & 96.44 & 90.37 & 77.57 & 83.78 & 86.64 & 87.24 \\
FTUAP \cite{ftuap} & - & 85.8 & 93.5 & 94.5 & 93.6 & \underline{92.7} \\
DF-UAP \cite{dfuap} & 96.17 & 88.94 & 94.30 & 94.98 & \underline{94.96} & 90.08 \\
Cosine-UAP \cite{cosineuap} & \underline{96.5} & \underline{90.5} & \underline{97.4} & \underline{96.4} & - & 90.2 \\
FG-UAP & \textbf{97.77} & \textbf{91.53} & \textbf{98.45} & \textbf{97.77} & \textbf{96.23} & \textbf{95.59} \\ \bottomrule
\end{tabular}}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{fig/attack_label.png}
\vspace{-5mm}
\caption{FG-UAPs (Magnitudes are mapped from $[-10,10]$ to $[0,255]$ for better observation.) and
corresponding perturbed images for different CNNs. Prediction and confidence (\%) are also reported under each image for reference. }
\label{fig:cnn_attack}
\end{figure}
\subsection{Universal attack on ViTs}
Unlike CNNs, ViTs \cite{vit} use
purely self-attention-based architectures instead of convolutional blocks,
which is now believed
to
enhance robustness. Thus, it is of great necessity to confirm whether such structures are vulnerable to our attack method. In this experiment, we apply Algorithm \ref{ag:fguap} (with parameters $b=32$, $m=20$, $lr=0.01$, and the rest unchanged) to the DeiT family \cite{deit}. Since DeiT-Ti and DeiT-S can be regarded as the counterparts of ResNet18 and ResNet50, respectively, we also consider these two models.
For comparison, we list the FRs for Cosine-UAP, the state-of-the-art UAP method.
Experimental results are shown in Table \ref{tab:vit}, where we have two observations: i) ViTs are indeed more robust than counterpart CNNs; ii) FG-UAP can still get satisfactory FRs and has more significant advantages on attacking ViTs, for which Cosine-UAP's performance largely degrades, especially for DeiT-B.
\begin{table}[!htpb]
\centering
\caption{FRs (\%) of FG-UAP and state-of-the-art method attacking ViTs on the ImageNet validation set. Results for ResNet18 and ResNet50 are reported here since their scales are comparable to DeiT-Ti and DeiT-S, respectively. The best FRs are highlighted in bold.}
\label{tab:vit}
\vspace{2mm}
\begin{tabular}{@{}c|ccc|cc@{}}
\toprule
& DeiT-Ti & DeiT-S & DeiT-B & ResNet18 & ResNet50 \\ \midrule
Cosine-UAP & 90.75 & 81.13 & 69.25 & 94.72 & 95.44 \\
FG-UAP & \textbf{92.54} & \textbf{83.47} & \textbf{85.58} &\textbf{95.37} & \textbf{96.43} \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Cross-model transferability of FG-UAP}
Cross-model transferability is another criterion for UAPs. To evaluate this point, we train FG-UAPs for one architecture and use other models to classify the perturbed images. The results are displayed in Table \ref{tab:transfer}, where ALN, GLN, and RN stand for AlexNet, GoogLeNet, and ResNet, respectively. The results clearly show that UAPs trained for one CNN keep quite high FRs on other CNNs. Even the UAPs generated for ViTs, whose structures are totally different, transfer strongly to CNNs, especially AlexNet and the VGGs. On the contrary, transfer-based black-box attacks on ViTs are much harder: the best FR is only 36.79\%, even when the transfer is among similar architectures. These results imply that DNNs with convolutional blocks may share similar vulnerabilities, so that universal attacks transfer easily among them, while that is not the case for self-attention-based architectures.
\begin{table}[H]
\centering
\caption{Transferability of FG-UAP across different DNNs. The rows indicate the surrogate model to compute UAPs, and the columns indicate the victim models for which FRs (\%) are reported.}
\label{tab:transfer}
\vspace{2mm}
\resizebox{\linewidth}{!}{
\begin{tabular}{cccccccccc}
\toprule
& ALN & GLN & VGG16 & VGG19 & RN50 & RN152 & DeiT-Ti & DeiT-S & DeiT-B \\
\midrule
ALN & \textbf{97.77} & 55.77 & 69.40 & 63.90 & 48.76 & 39.47 & 29.85 & 19.02 & 15.13 \\
GLN & 53.20 & \textbf{91.52} & 76.06 & 73.14 & 59.64 & 49.02 & 32.26 & 22.03 & 14.80 \\
VGG16 & 46.50 & 53.04 & \textbf{98.44} & 93.46 & 56.74 & 45.76 & 25.43 & 18.80 & 13.83 \\
VGG19 & 48.17 & 54.64 & 95.53 & \textbf{97.77} & 59.70 & 49.34 & 27.36 & 20.15 & 13.63 \\
RN50 & 52.57 & 59.69 & 79.15 & 75.77 & \textbf{96.23} & 65.16 & 25.64 & 16.81 & 12.65 \\
RN152 & 51.30 & 65.42 & 86.05 & 83.05 & 84.61 & \textbf{95.48} & 27.46 & 18.54 & 14.44 \\
DeiT-Ti & 43.26 & 28.53 & 49.59 & 46.96 & 31.11 & 22.47 & \textbf{92.54} & 24.49 & 20.48 \\
DeiT-S & 50.65 & 39.89 & 53.35 & 51.64 & 35.96 & 29.92 & 36.23 & \textbf{83.47} & 21.41 \\
DeiT-B & 53.06 & 43.12 & 56.72 & 55.11 & 37.42 & 32.70 & 36.79 & 32.84 & \textbf{85.58} \\
\bottomrule
\end{tabular}}
\end{table}
\subsection{Mini-set UAP and Targeted UAP}\label{sec:mini}
Here we also test FG-UAP on two variant tasks of UAP. One is the mini-set UAP, which is trained on a mini-set of typically 32 or 64 images. The other is the targeted UAP, which is generated to cheat the classifier into a user-given class.
For the mini-set UAP, extra data augmentation techniques such as random rotations and random horizontal flips are used to avoid overfitting. The performance is displayed in Table \ref{tab:miniset}. Of course, the FRs of mini-set UAPs are lower than those on the full set, but the drop is quite small, and FG-UAP remains better than the state-of-the-art UAPs under this setting.
\begin{table}[H]
\centering
\caption{FRs (\%) of FG-UAP and Cosine-UAP attacking CNNs on mini-sets. $N$ denotes the number of available samples. The best FRs are highlighted in bold. }
\label{tab:miniset}
\vspace{2mm}
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}cccccccc@{}}
\toprule
method & $N$ & AlexNet & GoogLeNet & VGG16 & VGG19 & ResNet50 & ResNet152 \\ \midrule
Cosine-UAP & 64 & 96.91 & 86.12 & 95.64 & 94.29 & 91.74 & 88.77 \\
FG-UAP & 64 & \textbf{97.13} & \textbf{88.64} & \textbf{97.55} & \textbf{96.17} & \textbf{94.48} & \textbf{89.24} \\
Cosine-UAP & 32 & 95.66 & 87.40 & 96.46 & 94.27 & 92.15 & 87.20 \\
FG-UAP & 32 & \textbf{96.53} & \textbf{87.94} & \textbf{97.20} & \textbf{96.26} & \textbf{93.39} & \textbf{88.63} \\
\bottomrule
\end{tabular}}
\end{table}
Original UAPs, including FG-UAPs, belong to untargeted attacks. However, we can also use our approach for targeted attacks, which is promising because
FG-UAP is based on NC, which means the features of different images are similar in the attacked layer and can be steered toward a specific class.
To implement targeted attacks,
we introduce a targeted FG loss by adding a term to the original FG loss, i.e.,
\begin{equation}
\mathcal{L}_{\mathrm{FG}}(\boldsymbol{h}, \boldsymbol{h'}, i) = \mathcal{L}_{\mathrm{FG}}(\boldsymbol{h}, \boldsymbol{h'}) - f'_i,
\end{equation}
where $i$ is the target class, and $f'_i$ denotes the $i$-th logit value of the adversarial example, so that minimizing the loss scatters the features while raising the target logit. We adopt VGG16 as the victim model, and randomly choose ten classes, along with the dominant class of the untargeted FG-UAP, as the target classes to craft the corresponding FG-UAPs. For comparison, we also report the experimental results for the state-of-the-art targeted UAP, DF-UAP \cite{dfuap}, with the settings claimed in that paper. For targeted UAPs, we additionally record the Targeted Fooling Ratio (TFR) \cite{gap}, which counts an image as fooled only when the output is exactly the targeted class. We vary the targeted class $c_{\mathrm{target}}$ and report the TFRs in Table \ref{tab:target}, while the FRs are also reported for reference. Generally, FG-UAP shows consistent superiority on TFRs, while the performance of the contrast method DF-UAP is quite unstable. Another interesting observation is that DF-UAP and FG-UAP both attain outstanding performance on the $109$-th class (brain coral), which is also the class untargeted UAPs aim at: the feature gathering already points in this direction, and FG-UAP can find it in an unsupervised way.
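A minimal sketch of the targeted FG loss under the minimization convention above, where \texttt{logits\_adv} denotes the adversarial logits and \texttt{target} the class index $i$:
\begin{verbatim}
import torch.nn.functional as F

def targeted_fg_loss(h, h_adv, logits_adv, target):
    sim = F.cosine_similarity(h, h_adv, dim=1)    # the L_FG term
    return (sim - logits_adv[:, target]).mean()   # minimizing raises f'_i
\end{verbatim}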
\begin{table}[H]
\centering
\caption{FRs and TFRs of targeted UAPs for VGG16 trained with different UAP methods. }
\label{tab:target}
\vspace{2mm}
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}cc|cccccccccc|c@{}}
\toprule
\multicolumn{2}{c|}{$c_{\mathrm{target}}$} & 1 & 100 & 200 & 300 & 400 & 500 & 600 & 700 & 800 & 900 & 109 \\ \midrule
\multicolumn{1}{c|}{\multirow{2}{*}{DF-UAP}} & TFR (\%) & 82.25 & 71.67 & 48.03 & 80.55 & 66.21 & 78.77 & 24.27 & 75.47 & 81.08 & 80.28 & 91.40 \\
\multicolumn{1}{c|}{} & FR (\%) & 93.94 & 91.76 & 90.42 & 91.91 & 88.84 & 91.41 & 92.40 & 89.85 & 92.99 & 93.19 & 97.74 \\
\midrule
\multicolumn{1}{c|}{\multirow{2}{*}{FG-UAP}} & TFR (\%) & 83.32 & 78.07 & 76.28 & 83.22 & 72.67 & 78.07 & 77.35 & 75.71 & 83.67 & 82.43 & 93.73 \\
\multicolumn{1}{c|}{} & FR (\%) & 95.64 & 94.63 & 94.93 & 94.95 & 94.72 & 93.04 & 93.94 & 94.18 & 95.82 & 94.48 & 98.11 \\
\bottomrule
\end{tabular}}
\end{table}
\section{Discussion on FG-UAP and NC}\label{sec:property}
The close link between UAP and NC is the motivation of our FG-UAP.
The good performance shown in the above section actually confirms the link. This section investigates FG-UAP in the view of NC, which can help the understanding of both UAP and NC.
\subsection{Label dominance}\label{sec:label dominance}
When UAPs attack DNNs, they may lead most natural images to several specific classes, although this is not intended as an objective. This phenomenon is called \emph{label dominance}; it was first discovered in \cite{deepfooluap} and further discussed in \cite{vadillo2022analysis, cosineuap}. Here, we first verify whether this phenomenon also happens for our method.
To measure label dominance, we report the percentage of predicted labels accounted for by the top $k$ most frequently occurring categories (denoted as the dominance ratio $\mathcal{D}_k$). Furthermore, we also examine whether the predicted class of the UAP itself is among the top $k$ categories. It can be observed from Table \ref{tab:dominance} that the UAPs for all models attain extremely high dominance ratios (compared with the original 0.1\% for any single class), and the most frequently occurring category is exactly the corresponding UAP's predicted class. This discovery is consistent with that in \cite{cosineuap}. In the view of NC, it can be concluded that FG-UAP exploits the collapse of within-class variability to attack DNNs, and finally results in a collapse of between-class variability.
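The dominance ratio can be computed directly from the perturbed predictions; a minimal sketch:
\begin{verbatim}
import numpy as np

def dominance_ratio(preds, k):
    # preds: 1-D array of predicted labels on perturbed images
    _, counts = np.unique(preds, return_counts=True)
    return np.sort(counts)[::-1][:k].sum() / len(preds)
\end{verbatim}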
\begin{table}[!htpb]
\centering
\caption{Dominance ratios (\%) of FG-UAP for DNNs. The second row indicates the DNN's prediction when the FG-UAP itself is fed as input, and the rank of how frequently this class occurs among the perturbed images' predictions.}
\label{tab:dominance}
\vspace{2mm}
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}cccccccccc@{}}
\toprule
& ALN & GLN & VGG16 & VGG19 & RN50 & RN152 & DeiT-Ti & DeiT-S & DeiT-B \\ \midrule
$c_{\boldsymbol{\delta}}$& 721($1^{st}$) & 109($1^{st}$) & 109($1^{st}$) & 109($1^{st}$) & 971($1^{st}$) & 854($1^{st}$) & 815($1^{st}$) & 828($1^{st}$) & 879($1^{st}$) \\
$\mathcal{D}_1$& 35.18 & 82.15 & 93.32 & 93.45 & 51.24 & 86.42 & 78.10 & 63.79 & 79.32 \\
$\mathcal{D}_3$& 53.66 & 83.71 & 95.43 & 94.28 & 82.77 & 91.51 & 81.95 & 72.40 & 80.44 \\
$\mathcal{D}_5$& 60.64 & 84.49 & 95.95 & 94.78 & 91.64 & 93.16 &83.29 & 74.12 & 81.09 \\
\bottomrule
\end{tabular}}
\end{table}
\subsection{Feature collapse of adversarial examples}
In addition to label dominance, we go further into the feature level to see what is happening. Originally, NC was found for natural images. Since universal perturbations are image-independent, it can be expected that adversarial examples generated by our FG-UAP exhibit an even more obvious NC. To see that,
we calculate the magnitude of the within-class covariance relative to the between-class covariance of the activations, which is used in \cite{neuralcollapse} as a measure of collapse. Mathematically, we calculate
$
\operatorname{Tr}(\Sigma_{W} \Sigma_{B}^{\dagger})
$,
where $\Sigma_{B}$ is the inter-class covariance matrix
$
\boldsymbol{\Sigma}_{B} \triangleq \underset{c}{\operatorname{Ave}}\left\{\left(\boldsymbol{\mu}_{c}-\boldsymbol{\mu}_{G}\right)\left(\boldsymbol{\mu}_{c}-\boldsymbol{\mu}_{G}\right)^{\top}\right\}
$.
Here $\boldsymbol{\mu}_{G}\triangleq \underset{i,c}{\operatorname{Ave}}\{\boldsymbol{h}_{i,c}\}$ is the global feature mean. The values
before and after the images are perturbed
reflect NC for natural and adversarial examples, respectively.
For perturbed images, their predicted classes are used as labels. Results are reported in Figure \ref{fig:var_ratio}, which leads to two main conclusions.
Firstly, variability collapse indeed happens at the last-layer feature space, since the ratio's value is comparable to that in the original paper. Secondly, the variability collapse becomes even stronger after the images are corrupted by FG-UAPs. Combined with the label dominance phenomenon in Section \ref{sec:label dominance}, it can be concluded that FG-UAP corrupts images by finding a new direction in the last-layer feature space, gathering natural images' features toward this direction and making the collapse severer.
As a result, the majority of images are
predicted as the same class as the UAP itself.
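The measure can be evaluated from the extracted features; a minimal sketch, with \texttt{H} an $N\times D$ matrix of last-layer features and \texttt{y} the corresponding labels:
\begin{verbatim}
import numpy as np

def nc_metric(H, y):
    classes = np.unique(y)
    mu_g = H.mean(axis=0)                           # global feature mean
    d = H.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in classes:
        Hc = H[y == c]
        mu_c = Hc.mean(axis=0)
        Sw += (Hc - mu_c).T @ (Hc - mu_c) / len(H)  # within-class covariance
        v = (mu_c - mu_g)[:, None]
        Sb += v @ v.T / len(classes)                # between-class covariance
    return np.trace(Sw @ np.linalg.pinv(Sb))
\end{verbatim}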
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{fig/feature_collapse.jpg}
\caption{The magnitude of the within-class covariance relative to the between-class covariance of the activations ($\operatorname{Tr}(\Sigma_{W} \Sigma_{B}^{\dagger})$) at the last-layer feature space before and after the images are perturbed.}
\label{fig:var_ratio}
\end{figure}
\subsection{Redundancy in the last-layer space}
According to NC's manifestation, images belonging to the same class gather along one direction in the last-layer space, sharing similar features. This means a small portion of the images of one class is quite representative of the majority of images with the same label, from which it follows that fooling them may incidentally fool the others
as well.
This redundancy means we only need a small set of images to generate an FG-UAP, rather than all of them as regular UAPs require.
Note that here our aim is to maintain comparable performance with limited information for each class, and we do not use extra data augmentation, which differs from Section \ref{sec:mini}. We randomly select some samples from each class and train FG-UAP on them, instead of on the original 50,000 images. Figure \ref{fig:FR_decrease} displays the extent of the FR decrease when the number of selected samples per class ranges from 50 (the original size) down to 1. The detailed FR performance in the extreme case of 1,000 images is reported in Table \ref{tab:mini}.
Notice that we switch the hyper-parameters to $lr = 0.01$, $m = 20$ for better convergence and keep the other settings unchanged. It can be concluded that decreasing the number of images has limited influence on the fooling performance. Even when there is only a single sample per class, the drop is quite small: the decrease is smaller than 1\% for all the victim models, and the result is still better than the state-of-the-art UAPs, even when they are trained on the full set. This confirms the variability collapse of natural images at the last-layer feature space.
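The per-class subsets used here can be drawn as follows, assuming \texttt{samples} is a list of \texttt{(image, label)} pairs:
\begin{verbatim}
import random
from collections import defaultdict

def per_class_subset(samples, k, seed=0):
    # returns at most k samples per label
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for s in samples:
        by_label[s[1]].append(s)
    return [s for group in by_label.values()
            for s in rng.sample(group, min(k, len(group)))]
\end{verbatim}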
\begin{figure}[H]
\centering
\includegraphics[width=.95\textwidth]{fig/mini.jpg}
\vspace{-5mm}
\caption{FRs for six victim models when the dataset scale is reduced from fifty to one image per category. The ordinate denotes the ratio of corresponding FR to original FR. }
\label{fig:FR_decrease}
\end{figure}
\begin{table}[H]
\centering
\caption{FRs (\%) of FG-UAP attacking CNNs on the full and limited samples. }
\label{tab:mini}
\vspace{2mm}
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}ccccccc@{}}
\toprule
& AlexNet & GoogLeNet & VGG16 & VGG19 & ResNet50 & ResNet152 \\ \midrule
Full set (50,000) & 97.77 & 91.53 & 98.45 & 97.77 & 96.23 & 95.59 \\
Mini set (1,000) & 97.08 & 90.78 & 97.98 & 97.17 & 95.82 & 94.83 \\
Deviation & -0.69 & -0.75 & -0.47 & -0.60 & -0.41 & -0.76 \\ \bottomrule
\end{tabular}}
\end{table}
\section{Conclusion and Future Work}
In this paper, we demonstrate the effectiveness of finding universal adversarial perturbations at the layer where NC happens. The proposed FG-UAP, which gathers natural images' features with a universal perturbation,
achieves
state-of-the-art performance on the ImageNet dataset
for both CNNs and ViTs. It also works well in targeted attacks and under limited training data.
Further, we investigate the properties of FG-UAP, which support NC and demonstrate the effectiveness of attacking the features where NC happens.
Future work includes extending our method to other fields and gaining further insight into the relation between the existence of UAPs and the robustness of DNNs.
\section*{Declaration of Competing Interest}
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
\section*{Acknowledgments}
This work was partially supported by National Natural Science Foundation of China (No. 61977046) and Shanghai Municipal
Science and Technology Major Project (2021SHZDZX0102).
\newpage
\section{Introduction}
The CMB temperature and polarization anisotropies are being measured
with ever increasing precision. The statistics of the anisotropies
already provide valuable limits on cosmological parameters, as well as
constraints on early-universe physics. As we enter the era of precision
measurement, with signal-dominated observations out to small angular scales,
non-linear effects will become increasingly important.
One of the most significant of these over scales of most interest
for parameter estimation is weak gravitational lensing by large scale
structure. Fortunately it can be
modelled accurately as a second-order effect: the linear gravitational
potential along the line of sight lenses the linear perturbations at the last
scattering surface (see e.g.\ Refs.~\cite{Seljak:1996ve,
Zaldarriaga:1998ar,Hu:2000ee} and references therein).
Modelling of fully non-linear evolution is not required for the near
future on scales of several arcminutes (corresponding to multipoles
$l \alt 2000$) for the temperature and electric polarization power spectra.
Non-linear corrections can easily be applied to the lensing potential
if and when required, provided that its non-Gaussianity can be
ignored~\cite{Seljak:1996ve}.
In principle, the weak-lensing contribution to the observed sky can
probably be subtracted given sufficiently accurate and clean high-resolution
observations. Early work in this
area~\cite{Seljak:1998aq,Guzik:2000ju,Hu:2001tn,Hu:2001kj}
suggested a limit on the
accuracy of this reconstruction due to the statistical nature of the
(unknown) unlensed CMB fields. More recently, it has been argued that
polarization removes this limit in models where lensing
is the only source of $B$-mode polarization on small
scales~\cite{Hirata:2003ka}. If subtraction could be done exactly
we could recover the unlensed Gaussian sky, and use this for all
further analysis. However current methods for subtracting the lensing
contribution are approximate, and not easy to apply to realistic
survey geometries. The result of imperfect lensing subtraction is
a sky with complicated, non-Gaussian statistics of the signal, and
significantly more complicated noise properties than the original (lensed)
observations. For observations in the near future, a
much simpler method to account for the lensing effect is to work with
the lensed sky itself, modelling the lensing effect by the expected change
in the power spectra and their covariances. The effects of lensing
non-Gaussianities on the covariance of the temperature and $E$-mode
polarization power spectra are likely to be small, but this will not be the
case for the $B$-mode spectrum once thermal-noise levels permit imaging
of the lens-induced $B$ modes~\cite{Smith:2004up}.
In this paper we discuss how to compute the lensed power spectra accurately.
The simulation of lensed skies
and the effect on parameter estimation is discussed in Ref.~\cite{Lewis:2005tp}.
On scales where the non-Gaussianity of the lensing potential can be ignored,
the calculation of the lensed power spectra is straightforward in principle.
However, achieving good accuracy on both large and small scales for all the CMB
observables is surprisingly difficult. The lensing action on the
CMB fields at scales approaching the r.m.s.\ of the lensing deflection angle
($\sim 3\, \mathrm{arcmin}$) cannot be accurately described
with a first-order Taylor expansion, as in the full-sky harmonic method of
Ref.~\cite{Hu:2000ee}.
There is not much power in the unlensed CMB on such scales, but a first-order Taylor expansion still gives lensed power spectra that
are inaccurate at the percent level for $l \agt 1000$.
The lensed CMB on scales well below the diffusion scale
is generated by the action of small-scale weak lenses
on the (relatively) large-scale unlensed CMB,
and a Taylor expansion should become more accurate
again~\cite{Seljak00lensrecon}. (However, non-linear effects
are also important on such scales.)
The breakdown of the Taylor expansion
can be easily fixed by using the flat-sky
correlation-function methods of Refs.~\cite{Seljak:1996ve,Zaldarriaga:1998ar},
which can handle the dominant effect of the lensing displacement in
a non-perturbative manner. However, a new problem then arises on scales
where the flat-sky approximation is not valid. As noted in
Ref.~\cite{Hu:2000ee}, this is not confined to large scales due to the
mode-coupling nature of lensing: degree-scale
lenses contribute significantly to the lensed power over a wide range of
observed scales. In this paper we develop a new method for computing the
lensed power spectra that is accurate on all scales where non-Gaussianity due to non-linear
effects is not important. We do this by calculating the lensed correlation
functions on the spherical sky. This allows us to include both
the non-perturbative effects of displacing small-scale CMB fluctuations,
and the effects of sky curvature.
This paper is arranged as follows. We start in
Section~\ref{sec:lensing} with a brief introduction to CMB
lensing, then in Section~\ref{sec:correlation} we review previous work on
flat-sky correlation-function methods and present our new
full-sky method and results. In Section~\ref{sec:comparison} we
compare our new results with the flat-sky correlation-function results of
Refs.~\cite{Seljak:1996ve,Zaldarriaga:1998ar} and the perturbative
harmonic result of Ref.~\cite{Hu:2000ee}, and explain why the latter
is not accurate enough for precision cosmology. The effect of non-linear
evolution of the density field on the lensed power spectra is considered in
Section~\ref{sec:nonlin}. We end with
conclusions, and include some technical results in the appendices.
\section{CMB lensing}
\label{sec:lensing}
Gradients in the gravitational potential transverse to the line of sight to
the last scattering surface cause deviations in the photon
propagation, so that points in a direction $\hat{\mathbf{n}}$ actually come
from points on the last scattering surface in a displaced direction
$\hat{\mathbf{n}}'$. Denote the lensed CMB temperature by $\tilde{\Theta}(\hat{\mathbf{n}})$ and the unlensed
temperature by $\Theta(\hat{\mathbf{n}})$, so the lensed field is given by
$\tilde{\Theta}(\hat{\mathbf{n}}) = \Theta(\hat{\mathbf{n}}')$.
The change in direction on the sky can be described by a
displacement vector field $\boldsymbol{\alpha}(\hat{\mathbf{n}}) \equiv \boldsymbol{\nabla} \psi$, so
that (symbolically)
$\hat{\mathbf{n}}' = \hat{\mathbf{n}} + \boldsymbol{\nabla} \psi$. Here $\psi$ is the lensing
potential which
encapsulates the deviations caused by potentials along the line of
sight. More rigorously, on a unit sphere the point $\hat{\mathbf{n}}'$ is obtained from
$\hat{\mathbf{n}}$ by moving a distance $|\boldsymbol{\nabla}\psi|$ along a geodesic in
the direction of $\boldsymbol{\nabla}\psi(\hat{\mathbf{n}})$, where $\boldsymbol{\nabla}$ is the covariant
derivative on the sphere~\cite{Challinor02}.
We assume that the lensing is weak, so that the potentials may be
evaluated along the unperturbed path (i.e. we use the Born
approximation). Lensing deflections are a few arcminutes, but are coherent over
degree scales, so this is a good approximation.
In terms of the zero-shear acceleration
potential $\Psi$, the lensing potential in a flat universe with
recombination at conformal distance $\chi_*$ is given by
the line-of-sight integral
\begin{equation}
\psi(\hat{\mathbf{n}}) = -2 \int_0^{\chi_*} {\text{d}} \chi\, \Psi(\chi \hat{\mathbf{n}}; \eta_0 -\chi)
\frac{\chi_* - \chi}{\chi \chi_*}.
\label{eq:1}
\end{equation}
Here we neglect the very small effect of late-time sources, including
reionization, and approximate recombination as instantaneous so that
the CMB is described by a single source plane at $\chi = \chi_*$. The
quantity $\eta_0 -\chi$ is the conformal time at which the photon was
at position $\chi \hat{\mathbf{n}}$.
With the Fourier convention
\begin{equation}
\Psi(\mathbf{x};\eta) = \int \frac{{\text{d}}^3 {\mathbf{k}}}{(2\pi)^{3/2}}\,
\Psi({\mathbf{k}};\eta) e^{i {\mathbf{k}} \cdot \mathbf{x}},
\label{eq:2}
\end{equation}
and power spectrum
\begin{equation}
\langle \Psi({\mathbf{k}};\eta) \Psi^*({\mathbf{k}}';\eta') \rangle =
\frac{2\pi^2}{k^3} \mathcal{P}_\Psi(k;\eta,\eta') \delta({\mathbf{k}}-{\mathbf{k}}'),
\label{eq:3}
\end{equation}
the angular power spectrum of the lensing potential $\psi$ evaluates to
\begin{equation}
C_l^\psi = 16\pi \int \frac{{\text{d}} k}{k}\, \int_0^{\chi_*} {\text{d}} \chi\,
\int_0^{\chi_*} {\text{d}} \chi'\, \mathcal{P}_\Psi(k;\eta_0-\chi,\eta_0-\chi')
j_l(k\chi) j_l(k\chi') \left(\frac{\chi_*-\chi}{\chi_*\chi}\right)
\left(\frac{\chi_*-\chi'}{\chi_*\chi'}\right).
\label{cpsi}
\end{equation}
\begin{figure}
\begin{center}
\psfig{figure=CL05_fig1.eps,width=10cm}
\caption{The power spectrum of the lensing potential for a concordance $\Lambda$CDM model.
The linear theory spectrum (solid) is compared with the same model including non-linear corrections (dashed) from \textsc{halofit}~\cite{Smith:2002dz} using Eq.~\eqref{Tnonlin}.
\label{CPhi}}
\end{center}
\end{figure}
In linear theory we can define a transfer function $T_\Psi(k,\eta)$ so that $\Psi({\mathbf{k}};\eta) = T_\Psi(k,\eta) \bar{\eta}({\mathbf{k}})$ where $\bar{\eta}({\mathbf{k}})$ is the
primordial comoving curvature perturbation (or other variable for isocurvature modes). We then have
\begin{equation}
C_l^\psi = 16\pi \int \frac{{\text{d}} k}{k}\, \mathcal{P}_{\bar{\eta}}(k) \left[\int_0^{\chi_*} {\text{d}} \chi\,
T_\Psi(k;\eta_0-\chi) j_l(k\chi) \left(\frac{\chi_*-\chi}{\chi_*\chi}\right)\right]^2
\label{cpsi_transfer}
\end{equation}
where the primordial power spectrum is $\mathcal{P}_{\bar{\eta}}(k)$. This can be computed easily numerically using \textsc{camb}\footnote{\url{http://camb.info}}~\cite{Lewis:1999bs}, and a typical spectrum is shown in Fig.~\ref{CPhi}.
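For orientation, the computation of Eq.~\eqref{cpsi_transfer} can be reproduced with the present-day Python interface to \textsc{camb}; the fragment below is an illustrative sketch rather than the code used for our figures, and the cosmological parameter values are placeholders (only $A_s$ and $n_s$ match the fiducial model quoted later).
\begin{verbatim}
import camb

# Illustrative concordance parameters; As and ns as in the fiducial model.
pars = camb.set_params(H0=70.0, ombh2=0.0226, omch2=0.112,
                       As=2.5e-9, ns=0.99, lmax=2500)
results = camb.get_results(pars)

# First column is [L(L+1)]^2 C_L^psi / (2 pi) in the package's convention.
cl_pp = results.get_lens_potential_cls(lmax=2000)[:, 0]
\end{verbatim}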
\section{Lensed correlation function}
\label{sec:correlation}
\subsection{Flat-sky limit}
\label{subsec:flat}
We start by calculating the lensed correlation function in the flat-sky
limit, broadly following the method of Ref.~\cite{Seljak:1996ve}.
We use a 2D Fourier transform of the temperature field
\begin{equation}
\Theta(\mathbf{x}) = \int \frac{{\text{d}}^2 {\mathbf{l}}}{2\pi}\, \Theta({\mathbf{l}}) e^{i{\mathbf{l}}\cdot \mathbf{x}},
\label{eq:6}
\end{equation}
and the power spectrum for a statistically isotropic field is then
\begin{equation}
\langle \Theta({\mathbf{l}}) \Theta^*({\mathbf{l}}') \rangle = C_l^\Theta \delta({\mathbf{l}}-{\mathbf{l}}').
\label{eq:7}
\end{equation}
Lensing re-maps the temperature according to
\begin{equation}
\tilde{\Theta}(\mathbf{x}) = \Theta(\mathbf{x} + \boldsymbol{\alpha}),
\label{eq:8}
\end{equation}
where in linear theory the displacement vector $\boldsymbol{\alpha}$ is a Gaussian field. We shall require
its correlation tensor $\langle \alpha_i(\mathbf{x}) \alpha_j (\mathbf{x}') \rangle$
to compute the lensed CMB power spectrum.
Introducing the Fourier transform of the
lensing potential, $\psi({\mathbf{l}})$, we have
\begin{equation}
\boldsymbol{\alpha}(\mathbf{x}) = i \int \frac{{\text{d}}^2 {\mathbf{l}}}{2\pi}\, {\mathbf{l}} \psi({\mathbf{l}}) e^{i{\mathbf{l}}\cdot \mathbf{x}},
\label{eq:9}
\end{equation}
so that
\begin{equation}
\langle \alpha_i(\mathbf{x}) \alpha_j(\mathbf{x}') \rangle =
\int \frac{{\text{d}}^2 {\mathbf{l}}}{(2\pi)^2} l_i l_j C_l^\psi e^{i{\mathbf{l}}\cdot
(\mathbf{x}-\mathbf{x}')}.
\label{eq:10}
\end{equation}
By symmetry, the correlator can only depend on $\delta_{ij}$ and the trace-free
tensor $r_{\langle i} r_{j \rangle}$, where $\vr \equiv \mathbf{x} - \mathbf{x}'$.
Evaluating the coefficients of these two terms by taking the trace of
the correlator, and its contraction with $r^i r^j$, we find
\begin{equation}
\langle \alpha_i(\mathbf{x}) \alpha_j(\mathbf{x}') \rangle = \frac{1}{4\pi}
\int {\text{d}} l\, l^3 C_l^\psi J_0(lr) \delta_{ij} - \frac{1}{2\pi}
\int {\text{d}} l\, l^3 C_l^\psi J_2(lr) \hat{r}_{\langle i} \hat{r}_{j \rangle} ,
\label{eq:11}
\end{equation}
where $J_n(x)$ is a Bessel function of order $n$.
Note that the trace-free term is analytic at $r=0$ due to the small-$r$
behaviour of $J_2(lr)$. Following Ref.~\cite{Seljak:1996ve}, let us denote
$\langle \boldsymbol{\alpha}(\mathbf{x}) \cdot \boldsymbol{\alpha}(\mathbf{x}')\rangle$ by $C_{\text{gl}}(r)$ so that
\begin{equation}
C_{\text{gl}}(r) = \frac{1}{2\pi} \int {\text{d}} l\, l^3 C_l^\psi J_0(lr).
\label{eq:11a}
\end{equation}
Similarly we define the anisotropic coefficient
\begin{equation}
C_{\text{gl},2}(r) = \frac{1}{2\pi} \int {\text{d}} l\, l^3 C_l^\psi J_2(lr),
\label{Cgltwo_flatdef}
\end{equation}
so that
\begin{equation}
\langle \alpha_i(\mathbf{x}) \alpha_j(\mathbf{x}') \rangle = \frac{1}{2}
C_{\text{gl}}(r) \delta_{ij} - C_{\text{gl},2}(r) \hat{r}_{\langle i}
\hat{r}_{j \rangle}.
\end{equation}
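As an aside, these are simple one-dimensional quadratures once $C_l^\psi$ has been tabulated. The following Python sketch (ours, purely illustrative; the sampling grid \texttt{ls} is an assumption) evaluates Eqs.~\eqref{eq:11a} and \eqref{Cgltwo_flatdef} directly.
\begin{verbatim}
import numpy as np
from scipy.special import jv   # Bessel function J_n

def cgl_flat(ls, cl_psi, r, n):
    # (1/2pi) * int dl l^3 C_l^psi J_n(l r); n=0 gives C_gl, n=2 gives C_gl,2.
    return np.trapz(ls**3 * cl_psi * jv(n, ls * r), ls) / (2.0 * np.pi)

def sigma2_flat(ls, cl_psi, r):
    # sigma^2(r) = C_gl(0) - C_gl(r).
    return cgl_flat(ls, cl_psi, 0.0, 0) - cgl_flat(ls, cl_psi, r, 0)
\end{verbatim}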
The lensed correlation function $\tilde{\xi}(r)$ is given by
\begin{eqnarray}
\tilde{\xi}(r) &\equiv&
\langle \tilde{\Theta}(\mathbf{x}) \tilde{\Theta}(\mathbf{x}') \rangle \nonumber \\
&=& \int \frac{{\text{d}}^2 {\mathbf{l}}}{(2\pi)^2} C_l^\Theta e^{i {\mathbf{l}} \cdot \vr}
\langle e^{i{\mathbf{l}} \cdot [\boldsymbol{\alpha}(\mathbf{x}) - \boldsymbol{\alpha}(\mathbf{x}')]} \rangle,
\label{eq:12}
\end{eqnarray}
where we have assumed that the CMB and lensing potential are independent
(i.e.\ we are neglecting the large scale correlation that arises from
the integrated-Sachs-Wolfe effect and has only a tiny effect on the
lensed CMB). Since we are assuming $\boldsymbol{\alpha}$ is a Gaussian field,
${\mathbf{l}} \cdot [\boldsymbol{\alpha}(\mathbf{x}) - \boldsymbol{\alpha}(\mathbf{x}')]$ is a Gaussian variate and the
expectation value in Eq.~(\ref{eq:12}) reduces to
\begin{eqnarray}
\langle e^{i{\mathbf{l}} \cdot [\boldsymbol{\alpha}(\mathbf{x}) - \boldsymbol{\alpha}(\mathbf{x}')]} \rangle &=&
\exp\left(- \frac{1}{2} \langle [{\mathbf{l}} \cdot (\boldsymbol{\alpha}-\boldsymbol{\alpha}')]^2 \rangle
\right) \nonumber \\
&=& \exp\left(-\frac{1}{2}l^2 [\sigma^2(r)
+ \cos 2(\phi_{\mathbf{l}} -\phi_\vr) C_{\text{gl},2}(r)]\right),
\label{eq:13}
\end{eqnarray}
where we have used $l^i l^j \hat{r}_{\langle i} \hat{r}_{j \rangle} =
l^2 \cos 2(\phi_{\mathbf{l}} - \phi_\vr) /2$ and defined $\sigma^2(r) \equiv
C_{\text{gl}}(0)-C_{\text{gl}}(r)$. Here, e.g.\ $\phi_{\mathbf{l}}$ is the angle
between ${\mathbf{l}}$ and the $x$-axis. The $\cos
2(\phi_{\mathbf{l}} -\phi_\vr)$ term in Eq.~(\ref{eq:13}) is difficult to handle
analytically. Instead, we expand the exponential and integrate term by term.
Expanding to second order in $C_{\text{gl},2}$, we find
\begin{eqnarray}
\tilde{\xi}(r) &=& \frac{1}{2\pi} \int l {\text{d}} l\, C_l e^{-l^2 \sigma^2(r) /2}
\left[ \left(1+\frac{1}{16} l^4 C_{\text{gl},2}^2(r)\right)J_0(lr)
+ \frac{1}{2}l^2 C_{\text{gl},2}(r) J_2(lr) + \frac{1}{16} l^4 C_{\text{gl},2}^2
(r) J_4(lr)\right].
\label{flatt}
\end{eqnarray}
Expanding to this order is sufficient to get the lensed power spectrum to
second order in $C_l^\psi$; higher order terms in $C_{\text{gl},2}$ only contribute at the
$O(10^{-4})$ level on the scales of interest.
Note that the $\exp(-l^2 \sigma^2/2)$ term
is easily handled without resorting to a perturbative expansion in
$C_l^\psi$. Since $\sigma^2$ is significantly less than $C_{\text{gl},2}$ (as
shown in Fig.~\ref{sigmaplot}), the perturbative expansion in
$C_{\text{gl},2}$ converges much faster than one in $\sigma^2$.
Equation~(\ref{flatt}) extends the result of Ref.~\cite{Seljak:1996ve}
to second order in $C_{\text{gl},2}$.
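The term-by-term structure of Eq.~\eqref{flatt} maps directly onto a one-dimensional quadrature. A minimal sketch (assuming $\sigma^2(r)$ and $C_{\text{gl},2}(r)$ have already been tabulated, e.g.\ as above) is
\begin{verbatim}
import numpy as np
from scipy.special import jv

def xi_lensed_flat(ls, cl, r, sig2, cgl2):
    # Eq. (flatt): non-perturbative in sigma^2, second order in C_gl,2.
    damp = np.exp(-0.5 * ls**2 * sig2)
    series = ((1.0 + ls**4 * cgl2**2 / 16.0) * jv(0, ls * r)
              + 0.5 * ls**2 * cgl2 * jv(2, ls * r)
              + ls**4 * cgl2**2 / 16.0 * jv(4, ls * r))
    return np.trapz(ls * cl * damp * series, ls) / (2.0 * np.pi)
\end{verbatim}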
\begin{figure}
\begin{center}
\psfig{figure=CL05_fig2.ps,angle=-90,width=9.0cm}
\caption{
The functions $\sigma^2(\beta)\equiv C_{\text{gl}}(0)-C_{\text{gl}}(\beta)$ [solid] and $C_{\text{gl},2}(\beta)$ [dashed]
as a function of angular separation $\beta$ (in radians) for a typical
concordance model. The results are
calculated using the full-sky definitions of Eqs.~\eqref{cgl_def},
and use the linear power spectrum for $C_l^\psi$.
\label{sigmaplot}}
\end{center}
\end{figure}
\subsubsection{Polarization}
The polarization calculation is also straightforward in the flat-sky
limit~\cite{zaldarriaga98}. We use the spin $-2$ polarization $P \equiv Q +
iU$, where $Q$ and $U$ are the Stokes' parameters measured with
respect to the fixed basis composed of the $x$ and $-y$ axes. Expanding
$P(\mathbf{x})$ in terms of the Fourier transforms of its electric ($E$) and
magnetic ($B$) parts, we have
\begin{equation}
P(\mathbf{x}) = -\int \frac{{\text{d}}^2 {\mathbf{l}}}{2\pi}\,(E({\mathbf{l}}) - iB({\mathbf{l}})) e^{-2i\phi_{{\mathbf{l}}}}
e^{i{\mathbf{l}}\cdot \mathbf{x}},
\end{equation}
where $(\partial_x-i\partial_y)^2 e^{i{\mathbf{l}}\cdot \mathbf{x}}/l^2= -e^{-2i\phi_{{\mathbf{l}}}}
e^{i{\mathbf{l}}\cdot \mathbf{x}}$ is a spin $-2$ flat-sky harmonic.
The polarization correlation functions are defined as
\begin{eqnarray}
\xi_+(r) &\equiv& \langle e^{-2i\phi_\vr} P^*(\mathbf{x}) e^{2i \phi_\vr} P(\mathbf{x}')
\rangle, \label{eq:14b} \\
\xi_-(r) &\equiv& \langle e^{2i \phi_\vr} P(\mathbf{x}) e^{2i \phi_\vr} P(\mathbf{x}')
\rangle , \label{eq:14c} \\
\xi_X(r) &\equiv& \langle \Theta(\mathbf{x}) e^{2i \phi_\vr} P(\mathbf{x}') \rangle,
\label{flatpol}
\end{eqnarray}
where $\pi - \phi_\vr$ is the angle to rotate the $x$-axis onto the vector
joining $\mathbf{x}$ and $\mathbf{x}'$, so that e.g.\ $e^{2i \phi_\vr} P(\mathbf{x})$ is the
polarization on the basis adapted to $\mathbf{x}$ and $\mathbf{x}'$.
Then the lensed correlation functions to second-order in $C_{\text{gl},2}$ are
\begin{eqnarray}
\tilde{\xi}_+(r) &=& \frac{1}{2\pi} \int l {\text{d}} l\, (C_l^E + C_l^B)
e^{-l^2 \sigma^2(r) /2}
\left[ \left(1+\frac{1}{16} l^4 C_{\text{gl},2}^2(r)\right)J_0(lr)\right.\nonumber\\
&&\phantom{ \frac{1}{2\pi} \int l {\text{d}} l\, (C_l^E + C_l^B)e^{-l^2 \sigma^2(r) /2}+}\left.
+ \frac{1}{2}l^2 C_{\text{gl},2}(r) J_2(lr) + \frac{1}{16} l^4 C_{\text{gl},2}^2
(r) J_4(lr)\right], \label{eq:14e} \\
\tilde{\xi}_-(r) &=& \frac{1}{2\pi} \int l {\text{d}} l\, (C_l^E - C_l^B)
e^{-l^2 \sigma^2(r) /2}
\left[ \left(1+\frac{1}{16} l^4 C_{\text{gl},2}^2(r)\right)J_4(lr)
\right. \nonumber \\
&&\phantom{ \frac{1}{2\pi} \int l {\text{d}} l\, (C_l^E - C_l^B)e^{-l^2 \sigma^2(r) /2}+}\left.
+ \frac{1}{2}l^2 C_{\text{gl},2}(r) \frac{1}{2}[J_2(lr)+J_6(lr)]
+ \frac{1}{16} l^4 C_{\text{gl},2}^2
(r) \frac{1}{2}[J_0(lr)+J_8(lr)]\right] , \label{eq:14f} \\
\tilde{\xi}_X(r) &=& \frac{1}{2\pi} \int l {\text{d}} l\, C_l^{X}
e^{-l^2 \sigma^2(r) /2}
\left[ \left(1+\frac{1}{16} l^4 C_{\text{gl},2}^2(r)\right)J_2(lr)
+ \frac{1}{2}l^2 C_{\text{gl},2}(r) \frac{1}{2}[J_0(lr)+J_4(lr)] \right.
\nonumber \\
&&\phantom{\frac{1}{2\pi} \int l {\text{d}} l\, C_l^{X}
e^{-l^2 \sigma^2(r) /2}+} \left. + \frac{1}{16} l^4 C_{\text{gl},2}^2
(r) \frac{1}{2}[J_2(lr)+J_6(lr)]\right] . \label{eq:14g}
\end{eqnarray}
Here $C_l^E$ and $C_l^B$ are the $E$-mode and $B$-mode power spectra, and
$C_l^X$ is the $\Theta$-$E$ cross-correlation. This is the straightforward
extension of the result in Ref.~\cite{zaldarriaga98} to higher
order;\footnote{Note that we disagree with the statement in
Ref.~\cite{zaldarriaga98} that a $O(C_l^\psi)$ expansion is very accurate.
Indeed \textsc{cmbfast}\ 4.5 actually uses the non-perturbative $\sigma^2$ term
(as advocated here) rather than the lowest-order series expansion given
in the paper.} see that paper for further details of the calculation.
The lensed $\tilde{\xi}_+(r)$ has the same structure as for the temperature since
the unlensed correlation functions involve the same $J_0(lr)$, and there are
no complications due to the different local bases defined by the
displacement $\vr$ and its image under lensing $\vr - \boldsymbol{\alpha}' + \boldsymbol{\alpha}$
since the phase factors from the rotations cancel.
This is not the case for the lensed
$\tilde{\xi}_-(r)$ and $\tilde{\xi}_X(r)$.
\subsubsection{Limber approximation}
At high $l$ the power spectrum $\mathcal{P}_\Psi(k)$ varies slowly compared to the
spherical Bessel functions in Eq.~(\ref{cpsi}), which pick out the scale
$k \sim l/\chi$. Using
\begin{equation}
\int k^2 {\text{d}} k \, j_l(k\chi) j_l(k\chi') = \frac{\pi}{2\chi^2}
\delta(\chi-\chi') ,
\end{equation}
we can Limber-approximate $C_l^\psi$ as
\begin{equation}
C_l^\psi \approx \frac{8\pi^2}{l^3} \int_0^{\chi_*} \chi {\text{d}} \chi\,
\mathcal{P}_\Psi(l/\chi;\eta_0-\chi) \left(\frac{\chi_*-\chi}{\chi_*\chi}\right)^2.
\end{equation}
Changing variables to $k = l/\chi$, we find
\begin{equation}
C_{\text{gl}}(r) \approx 4\pi \int {\text{d}} k\, \int {\text{d}} \chi \mathcal{P}_\Psi(k;\eta_0-\chi)
\left(\frac{\chi_*-\chi}{\chi_*}\right)^2 J_0(k\chi r),
\label{eq:11b}
\end{equation}
in agreement with Ref.~\cite{Seljak:1996ve} if we note that his $P_\phi(k) =
\mathcal{P}_\Psi(k)/(4\pi k^3)$ outside radiation-domination.
Ref.~\cite{Seljak:1996ve} also defines $C_{\text{gl},2}(r)$ as (in our notation)
\begin{equation}
C_{\text{gl},2}(r) \approx 4\pi \int {\text{d}} k\, \int {\text{d}} \chi \mathcal{P}_\Psi(k;\eta_0-\chi)
\left(\frac{\chi_*-\chi}{\chi_*}\right)^2 J_2(k\chi r) ,
\label{eq:12b}
\end{equation}
which is the Limber-approximation version of
Eq.~\eqref{Cgltwo_flatdef}. For the results of this paper we do not
use the Limber approximation, though the approximation is rather good.
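Although we do not use it here, the Limber form is convenient for quick estimates. A hedged sketch (the distance grid and the interface \texttt{p\_psi} to the potential power spectrum are assumptions) is
\begin{verbatim}
import numpy as np

def cl_psi_limber(l, chi, p_psi):
    # chi:   grid of comoving distances in (0, chi_*], with chi[-1] = chi_*.
    # p_psi: callable (k, chi) -> P_Psi(k; eta_0 - chi).
    chi_star = chi[-1]
    w = (chi_star - chi) / (chi_star * chi)      # lensing kernel
    integrand = chi * p_psi(l / chi, chi) * w**2
    return 8.0 * np.pi**2 / l**3 * np.trapz(integrand, chi)
\end{verbatim}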
\subsection{Spherical sky}
\label{spherical}
The flat-sky result for the lensed correlation function is non-perturbative
in $\sigma^2(r)$ and this turns out to be crucial for getting high accuracy
in the lensed power spectrum on arcminute scales. Consider the
contribution to the lensed correlation functions from the unlensed CMB at
multipole $l$. Both $\sigma^2$ and $C_{\text{gl},2}$ appear with a factor
$l^2$ and so the (dominant) $l^2 \sigma^2$ term cannot be
handled accurately with a low-order expansion at high $l$.
Physically, this is because the typical lensing displacement is then
comparable to the wavelength of the unlensed fluctuation, and so
approximating the fluctuation as a gradient over the scale of the lensing
displacement is inaccurate. The error from this gradient approximation
on the lensed power spectra will be large on any scale $|{\mathbf{l}} |$ where the
dominant contribution is from unlensed fluctuations with wavenumber
${\mathbf{l}}'$ whose wavelength is comparable to the typical lensing displacement at scale $|{\mathbf{l}} - {\mathbf{l}}'|$.
As noted in the Introduction, the small-scale cut-off in the power in the
unlensed fluctuations due to diffusion damping means that the gradient
approximation should not get uniformly worse on small scales; the approximation
should be poorest on scales of a few arcminutes. We also noted that
the flat-sky approximation will be suspect on large scales, and also
on any scale where the dominant contribution is from large-scale lenses,
i.e.\ those for which the mode-coupled wavenumber $|{\mathbf{l}} - {\mathbf{l}}'|$ is small.
What is needed for an accurate calculation (on all scales where
non-linearities in the lensing potential are not important) is a
non-perturbative treatment of $\sigma^2$ and a proper treatment of
curvature effects in the correlation functions. In this section
we show how to generalize the flat-sky calculation to spherical
correlation functions.
On the full sky we can expand the temperature field in spherical
harmonics
\begin{equation}
\Theta(\hat{\mathbf{n}}) = \sum_{lm} \Theta_{lm} Y_{lm}(\hat{\mathbf{n}}) ,
\end{equation}
and the temperature correlation function is defined by
\begin{equation}
\xi(\beta) \equiv \langle \Theta(\hat{\mathbf{n}}_1) \Theta(\hat{\mathbf{n}}_2) \rangle ,
\end{equation}
where $\beta$ is the angle between the two directions ($\hat{\mathbf{n}}_1 \cdot
\hat{\mathbf{n}}_2 = \cos\beta$). The power spectrum is defined as the variance
of the harmonic coefficients $C_l^\Theta \equiv \langle
|\Theta_{lm}|^2\rangle$ for a statistically-isotropic ensemble.
\begin{figure}
\begin{center}
\psfig{figure=CL05_fig3.eps,angle=0,width = 4cm}
\caption{The geometry of the weak lensing deflections (shown without
curvature for clarity).
\label{geom}}
\end{center}
\end{figure}
We define a spin-1 deflection field ${}_1 \alpha\equiv \boldsymbol{\alpha} \cdot
(\mathbf{e}_\theta + i \mathbf{e}_\phi)$, where $\mathbf{e}_\theta$ and $\mathbf{e}_\phi$
are the unit basis vectors of a spherical-polar coordinate system.
Rotating to the basis defined by the geodesic connecting $\hat{\mathbf{n}}_1$ and
$\hat{\mathbf{n}}_2$, the spin-1 deflection (denoted with an overbar in the geodesic
basis) has real and imaginary components
\begin{equation}
\alpha_1 \cos\psi_1 = \Re {}_1 \bar{\alpha}(\hat{\mathbf{n}}_1), \quad
\alpha_1 \sin\psi_1 = \Im {}_1 \bar{\alpha}(\hat{\mathbf{n}}_1), \quad
\end{equation}
and similarly at $\hat{\mathbf{n}}_2$. Here,
$\alpha_1 = |\boldsymbol{\alpha}(\hat{\mathbf{n}}_1)|$ is the length of the lensing displacement
at $\hat{\mathbf{n}}_1$ and $\psi_1$ is the angle it makes with the geodesic from
$\hat{\mathbf{n}}_1$ to $\hat{\mathbf{n}}_2$ (see Fig.~\ref{geom}).
In terms of these angles we have the lensed correlation function
\begin{eqnarray}
\tilde{\xi}(\beta) &=& \langle \Theta(\hat{\mathbf{n}}_1') \Theta(\hat{\mathbf{n}}_2') \rangle\\& =&
\sum_{lm} C_l^\Theta \langle Y_{lm}
(\hat{\mathbf{n}}_1') Y^*_{lm}(\hat{\mathbf{n}}_2') \rangle\\&=&
\sum_{lmm'} C_l^\Theta d^l_{mm'}(\beta) \langle Y_{lm}
(\alpha_1,\psi_1) Y^*_{lm'}(\alpha_2,\psi_2) \rangle.
\label{eq:21}
\end{eqnarray}
The easiest way to see the last step is to put $\hat{\mathbf{n}}_1$ along the $z$-axis, and
$\hat{\mathbf{n}}_2$ in the $x$-$z$ plane so that $\hat{\mathbf{n}}_1'$ has polar coordinates
$(\alpha_1,\psi_1)$. The harmonic at the deflected position
$\hat{\mathbf{n}}_2'$ can be
evaluated by rotation: $Y_{lm}(\hat{\mathbf{n}}_2') = [\hat{D}^{-1}(0,\beta,0)
Y_{lm}](\alpha_2,\psi_2)$, where $[\hat{D}Y_{lm}](\hat{\mathbf{n}})$ is a spherical
harmonic rotated by the indicated Euler angles.
We have
neglected the small correlation between the deflection angle and
the temperature so that they may be treated as independent fields. The remaining average is over possible realizations of the lensing field.
We assume the lensing potential is Gaussian, so the covariance of the
spin-1 deflection field can be determined using the results
\begin{eqnarray}
\langle {}_1 \bar{\alpha}(\hat{\mathbf{n}}_1) {}_1 \bar{\alpha}(\hat{\mathbf{n}}_2) \rangle &=&
- \sum_l \frac{2l+1}{4\pi} l(l+1) C_l^\psi d^l_{-1 1}(\beta) \equiv -
C_{\text{gl},2}(\beta) , \nonumber \\
\langle {}_1 \bar{\alpha}^*(\hat{\mathbf{n}}_1) {}_1 \bar{\alpha}(\hat{\mathbf{n}}_2) \rangle &=&
\sum_l \frac{2l+1}{4\pi} l(l+1) C_l^\psi d^l_{1 1}(\beta) \equiv
C_{\text{gl}}(\beta). \label{cgl_def}
\end{eqnarray}
As in the flat-sky limit, it is convenient to define $\sigma^2(\beta)
\equiv C_{\text{gl}}(0) - C_{\text{gl}}(\beta)$. The covariance of the Gaussian
variates $\Re {}_1 \bar{\alpha}(\hat{\mathbf{n}}_1)$,
$\Im {}_1 \bar{\alpha}(\hat{\mathbf{n}}_1)$, $\Re {}_1 \bar{\alpha}(\hat{\mathbf{n}}_2)$
and $\Im {}_1 \bar{\alpha}(\hat{\mathbf{n}}_2)$ are determined by
Eq.~(\ref{cgl_def}). Transforming variables to $\alpha_1$, $\psi_1$,
$\alpha_2$ and $\psi_2$ we find their probability distribution function
\begin{eqnarray}
\text{Pr}(\alpha_1,\alpha_2,\psi_1,\psi_2) &=&
\frac{4 \alpha_1 \alpha_2}{(2\pi)^2}
\frac{e^{-{\textstyle{\frac{1}{2}}}(\alpha_1\cos\psi_1
+\alpha_2\cos\psi_2)^2/(\sigma^2+2 C_{\text{gl}} - C_{\text{gl},2})}}
{\sqrt{\sigma^2+2 C_{\text{gl}} - C_{\text{gl},2}}} \nonumber \\
&&\mbox{}\times
\frac{e^{-{\textstyle{\frac{1}{2}}}(\alpha_1\sin\psi_1
+\alpha_2\sin\psi_2)^2/(\sigma^2+2 C_{\text{gl}} + C_{\text{gl},2})}}
{\sqrt{\sigma^2+2 C_{\text{gl}} + C_{\text{gl},2}}}
\frac{e^{-{\textstyle{\frac{1}{2}}}(\alpha_1\cos\psi_1
-\alpha_2\cos\psi_2)^2/(\sigma^2+C_{\text{gl},2})}}
{\sqrt{\sigma^2+C_{\text{gl},2}}} \nonumber \\
&&\mbox{}\times
\frac{e^{-{\textstyle{\frac{1}{2}}}(\alpha_1\sin\psi_1
-\alpha_2\sin\psi_2)^2/(\sigma^2-C_{\text{gl},2})}}
{\sqrt{\sigma^2-C_{\text{gl},2}}} .
\label{eq:22}
\end{eqnarray}
Here and below we have left the dependence of $\sigma^2$, $C_{\text{gl}}$ and $C_{\text{gl},2}$
on $\beta$ implicit.
Our general strategy to evaluate Eq.~(\ref{eq:21}) is to
expand $\text{Pr}(\alpha_1,\alpha_2,\psi_1,\psi_2)$
in $C_{\text{gl}}$ and $C_{\text{gl},2}$, but not $\sigma^2$, before performing the
integral over the angles $\psi_1$ and $\psi_2$ in the expectation value.
The remaining integrals over $\alpha_1$ and $\alpha_2$ then enter through
functions of the form
\begin{equation}
X_{imn} \equiv \int_0^\infty \frac{2\alpha}{\sigma^2}\left(\frac{\alpha}
{\sigma^2}\right)^i e^{-\alpha^2/\sigma^2} d^l_{mn}(\alpha) \, {\text{d}} \alpha.
\label{eq:37}
\end{equation}
Since terms
involving $C_{\text{gl}}(\beta)$ are suppressed at high $l$ (they do not appear in
the flat-sky results), while at low $l$ the leading-order result neglecting
$C_{\text{gl}}$ and $C_{\text{gl},2}$ altogether is very accurate, we neglect terms
involving $C_{\text{gl}}$ entirely. This approximation is very accurate
[$< O(10^{-4})$] (for completeness the full second-order result is given in
Appendix~\ref{app:fullres}). As shown in Fig.~\ref{sigmaplot} the values of
$C_{\text{gl},2}$ are much smaller than $\sigma^2$, so a perturbative
treatment in $C_{\text{gl},2}$ is sufficient as in the flat-sky case.
Working to second-order in $C_{\text{gl},2}$, we find
\begin{eqnarray}
\tilde{\xi} \approx \sum_l \frac{2l+1}{4\pi} C^\Theta_l\biggl\{
X_{000}^2 d^l_{00} + \frac{8}{l(l+1)} C_{\text{gl},2} X_{000}'^{\,2} d_{1-1}^l +
C_{\text{gl},2}^2 \left( X_{000}'^{\,2} d^l_{00}+ X_{220}^2 d_{2-2}^l\right)
\biggr\} ,
\label{curvt}
\end{eqnarray}
where primes denote differentiation with respect to $\sigma^2$ [note that
the $X_{imn}$ are implicit functions of $\beta$ via the
dependence on $\sigma^2(\beta)$].
In Appendix~\ref{appb} we develop approximations for the integrals
$X_{imn}$ which are accurate for all $l$. Applying these approximations,
the required $X_{imn}$ are
\begin{eqnarray}
X_{000} &\approx& e^{-l(l+1)\sigma^2/4} \\
X_{220} &\approx& \frac{1}{4} \sqrt{ (l+2)(l-1)l(l+1) }
e^{-[l(l+1)-2]\sigma^2/4}.
\end{eqnarray}
The expansion of these results to $O(\sigma^2)$ may also be derived
straightforwardly by using the series expansion of
$d^l_{mn}(\alpha)$ for small $\alpha$. (The
smallness of $\sigma^2$ guarantees that the
integral is dominated by the small $\alpha$ region). However, it is
important to retain the correct non-perturbative form for high $l$.
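As a numerical sanity check (ours, not needed for the results), $X_{000}$ can be evaluated directly from Eq.~(\ref{eq:37}) using $d^l_{00}(\alpha) = P_l(\cos\alpha)$ and compared with the approximation above; the value of $\sigma^2$ below is a placeholder of order $(3\,\mathrm{arcmin})^2$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

def x000(l, sig2):
    # Eq. (37) with i = m = n = 0 and d^l_00(alpha) = P_l(cos alpha).
    f = lambda a: (2.0 * a / sig2) * np.exp(-a**2 / sig2) \
                  * eval_legendre(l, np.cos(a))
    val, _ = quad(f, 0.0, 10.0 * np.sqrt(sig2))
    return val

l, sig2 = 1000, 7.6e-7
print(x000(l, sig2), np.exp(-l * (l + 1) * sig2 / 4.0))
\end{verbatim}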
In the limit of large $l$ the limiting result $d^l_{mn}(\beta)
\rightarrow (-1)^{n-m} J_{m-n}(l\beta)$ shows that the full
result of Eq.~\eqref{curvt} reduces to Eq.~\eqref{flatt} in the flat-sky
limit and is therefore consistent. In the limit in which the
separation angle $\beta \rightarrow 0$ we have
\begin{eqnarray}
\tilde{\xi}(0) &=& \sum_l \frac{2l+1}{4\pi} C^\Theta_l =
\sum_l \frac{2l+1}{4\pi} \tilde{C}^\Theta_l ,
\end{eqnarray}
where $\tilde{C}_l^\Theta$ is the lensed power spectrum.
This expresses the fact that weak lensing does not change the
total fluctuation power.
\subsubsection{Polarization}
We can extend the previous calculation to polarization. Defining
Stokes' parameters with the local $x$-axis along the $\theta$-direction
and $y$ along $-\phi$, the quantities $Q \pm i U$ are spin $\mp 2$
respectively. We can expand $Q \pm i U$ in terms of the
spin-weight harmonics as~\cite{Zaldarriaga97}
\begin{equation}
(Q \pm i U )(\hat{\mathbf{n}}) = \sum_{lm} (E_{lm} \mp i B_{lm}) {}_{\mp 2}
Y_{lm}(\hat{\mathbf{n}}) ,
\label{eq:qiu}
\end{equation}
which expresses $P \equiv Q + i U$ as the sum of its electric
($E$ or gradient-like) and magnetic ($B$ or curl-like) parts. (Our
conventions for the polarization harmonics and correlation functions
follow Refs.~\cite{Lewis01,Chon:2003gx}; see these papers for a more
thorough introduction).
The polarization correlation functions
can be defined in terms of the spin $\pm 2$ polarization defined in
the physical basis of the geodesic connecting the two positions.
As for the temperature, we evaluate the polarization correlation functions by
taking $\hat{\mathbf{n}}_1$ along the $z$-axis and $\hat{\mathbf{n}}_2$ in the $x$-$z$ plane at
angle $\beta$ to the $z$-axis.
With this geometry, the polar-coordinate basis is
already the geodesic basis connecting $\hat{\mathbf{n}}_1$ and $\hat{\mathbf{n}}_2$ so
that the lensed correlation functions are
\begin{eqnarray}
\tilde{\xi}_+(\beta) &\equiv & \langle \tilde{P}^*(\hat{\mathbf{n}}_1)
\tilde{P}(\hat{\mathbf{n}}_2) \rangle ,\label{eq:27} \\
\tilde{\xi}_-(\beta) &\equiv& \langle \tilde{P}(\hat{\mathbf{n}}_1)
\tilde{P}(\hat{\mathbf{n}}_2) \rangle ,\label{eq:28} \\
\tilde{\xi}_X(\beta) &\equiv& \langle \tilde{\Theta}(\hat{\mathbf{n}}_1)
\tilde{P}(\hat{\mathbf{n}}_2)\rangle . \label{eq:29}
\end{eqnarray}
Under a lensing deflection the polarization orientation is preserved
relative to the direction of the deflection (we are neglecting the
small effect of field rotation~\cite{Hirata:2003ka}), i.e. the
polarization undergoes parallel transport. The geometry of the
deflections is shown in Fig.~\ref{geom}.
We can easily evaluate the lensed polarization on
the connecting geodesic basis (between $\hat{\mathbf{n}}_1$ and $\hat{\mathbf{n}}_2$)
as
\begin{equation}
\tilde{P}(\hat{\mathbf{n}}_1) = P(\alpha_1,\psi_1) e^{-2i\psi_1}.
\end{equation}
The rotation
angle $\psi_1$ is that needed to rotate the spin $-2$ polarization from polar
coordinates (coinciding with the $\hat{\mathbf{n}}_1$--$\hat{\mathbf{n}}_1'$ basis at
$\hat{\mathbf{n}}_1'$) to the geodesic basis connecting $\hat{\mathbf{n}}_1$ and $\hat{\mathbf{n}}_2$.
For the lensed polarization at $\hat{\mathbf{n}}_2$ a little more work is required.
Let $\chi'$ denote the angle between the geodesics connecting $\hat{\mathbf{n}}_2$
to $\hat{\mathbf{n}}_2'$, and $\hat{\mathbf{n}}_1$ (along the $z$-axis) to
$\hat{\mathbf{n}}_2'$ (see Fig.~\ref{geom}).
The lensed polarization at $\hat{\mathbf{n}}_2$ on the geodesic basis adapted to $\hat{\mathbf{n}}_1$ and
$\hat{\mathbf{n}}_2$ is then
\begin{equation}
\tilde{P}(\hat{\mathbf{n}}_2) = P(\hat{\mathbf{n}}_2') e^{2i\chi'} e^{-2i\psi_2}.
\label{eq:30}
\end{equation}
We can write $\hat{\mathbf{n}}_2'$ as the direction obtained
by rotating a direction with polar angles $(\alpha_2,\psi_2)$ by
an angle $\beta$ about
the $y$-axis, i.e.\ $\hat{\mathbf{n}}_2' = \hat{D}(0,\beta,0)(\alpha_2,\psi_2)$.
Writing $P$ as $(Q-iU)^*$, and using Eq.~(\ref{eq:qiu}), we have
\begin{equation}
P(\hat{\mathbf{n}}) = \sum_{lm} (E_{lm} + i B_{lm})^* {}_{+2} Y_{lm}^*(\hat{\mathbf{n}}).
\label{eq:31}
\end{equation}
Using the rotation properties of the spin-$s$ harmonics (see
Appendix~\ref{appa}), we then find
\begin{equation}
\tilde{P}(\hat{\mathbf{n}}_2) = e^{2i\chi'} e^{-2i\psi_2} e^{-2i\kappa} \sum_{lmm'}
(E_{lm} + i B_{lm})^* D^{l*}_{mm'}(0,\beta,0) {}_2 Y_{lm'}^*(\alpha_2,\psi_2).
\label{eq:32}
\end{equation}
The angle $\kappa$ is the rotation about $\hat{\mathbf{n}}_2'$ that is required to
bring the polar basis there onto that obtained by rotating the polar basis
at $(\alpha_2,\psi_2)$ with $\hat{D}(0,\beta,0)$. Since the latter is aligned
with the geodesic basis adapted to $\hat{\mathbf{n}}_2$ and $\hat{\mathbf{n}}_2'$, we have
$\kappa=\chi'$ and the lensed polarization at $\hat{\mathbf{n}}_2$ simplifies to
\begin{equation}
\tilde{P}(\hat{\mathbf{n}}_2) = e^{-2i\psi_2} \sum_{lmm'}
(E_{lm} + i B_{lm})^* d^l_{mm'}(\beta) {}_2 Y_{lm'}^*(\alpha_2,\psi_2).
\label{eq:33}
\end{equation}
We can now quickly proceed to the following expressions for the lensed
polarization correlation functions:
\begin{eqnarray}
\tilde{\xi}_+(\beta) &=& \sum_{lmm'} (C_l^E+C_l^B) d^l_{mm'}(\beta)
\langle e^{2i\psi_1} {}_2 Y_{lm}(\alpha_1,\psi_1) {}_2 Y_{lm'}^*
(\alpha_2,\psi_2) e^{-2i\psi_2} \rangle , \label{eq:34} \\
\tilde{\xi}_-(\beta) &=& \sum_{lmm'} (C_l^E-C_l^B) d^l_{mm'}(\beta)
\langle e^{-2i\psi_1} {}_{-2} Y_{lm}(\alpha_1,\psi_1) {}_2 Y_{lm'}^*
(\alpha_2,\psi_2) e^{-2i\psi_2} \rangle , \label{eq:35} \\
\tilde{\xi}_X(\beta) &=& \sum_{lmm'} C_l^X d^l_{mm'}(\beta)
\langle Y_{lm}(\alpha_1,\psi_1) {}_2 Y_{lm'}^*
(\alpha_2,\psi_2) e^{-2i\psi_2} \rangle , \label{eq:36}
\end{eqnarray}
where the expectation values are over lensing realizations. Here,
$C_l^E$ and $C_l^B$ are the power spectra $\langle |E_{lm}|^2 \rangle$
and $\langle |B_{lm}|^2 \rangle$ respectively. The cross-correlation
power spectrum is $C_l^X \equiv \langle \Theta_{lm} E_{lm}^* \rangle$.
We evaluate the expectation values in Eqs.~(\ref{eq:34}--\ref{eq:36})
following the earlier calculation for the temperature, i.e.\ expanding
$\text{Pr}(\alpha_1,\alpha_2,\psi_1,\psi_2)$ to second order in
$C_{\text{gl},2}$ before integrating. As for the temperature, $C_{\text{gl}}$ terms
contribute negligibly (see Appendix~\ref{app:fullres} for the full result).
We find the following results for the lensed polarization correlation
functions to second order in $C_{\text{gl},2}$:
\begin{eqnarray}
\tilde{\xi}_+ &\approx& \sum_{lmm'} \frac{2l+1}{4\pi}
(C_l^E + C_l^B)\Big\{ X_{022}^2 d^l_{22} +2 C_{\text{gl},2}
X_{132}X_{121} d^l_{31}
+ C_{\text{gl},2}^2[(X_{022}')^2 d^l_{22}
+ X_{242}X_{220} d^l_{40}] \Big\} , \label{eq:38} \\
\tilde{\xi}_- &\approx& \sum_{lmm'} \frac{2l+1}{4\pi}
(C_l^E - C_l^B)\Bigg\{ X_{022}^2 d^l_{2\, -2} +
C_{\text{gl},2}[X_{121}^2 d^l_{1\,-1} + X_{132}^2 d^l_{3\,-3}]
\nonumber \\
&&\phantom{ \sum_{lmm'} \frac{2l+1}{4\pi}(C_l^E -
C_l^B)\Bigg\{}\mbox{}
+ \frac{1}{2}C_{\text{gl},2}^2[2 (X_{022}')^2 d^l_{2\,-2} +
X_{220}^2 d^l_{00} + X_{242}^2 d^l_{4\,-4}] \Bigg\} ,
\label{eq:39} \\
\tilde{\xi}_X &\approx& \sum_{lmm'} \frac{2l+1}{4\pi}
C_l^X \Bigg\{X_{022}X_{000} d^l_{02} + C_{\text{gl},2}\frac{2 X_{000}'}
{\sqrt{l(l+1)}} (X_{112} d^l_{11} + X_{132}d^l_{3\,-1} )
\nonumber \\
&&\phantom{ \sum_{lmm'} \frac{2l+1}{4\pi}C_l^X \Bigg\{}\mbox{}
+ \frac{1}{2} C_{\text{gl},2}^2[(2X_{022}'X_{000}'+X_{220}^2)
d^l_{20}
+ X_{220} X_{242} d^l_{-2 4}]\Bigg\}, \label{eq:40}
\end{eqnarray}
where
\begin{eqnarray}
X_{022} & \approx & e^{-[l(l+1)-4]\sigma^2/4} , \\
X_{121} & \approx & -\frac{1}{2} \sqrt{(l+2)(l-1)} e^{-[l(l+1)-8/3]\sigma^2/4}
, \\
X_{132} & \approx & -\frac{1}{2} \sqrt{(l+3)(l-2)} e^{-[l(l+1)-20/3]\sigma^2/4}
, \label{eq:43} \\
X_{242} & \approx & \frac{1}{4} \sqrt{(l+4)(l+3)(l-2)(l-3)}
e^{-[l(l+1)-10]\sigma^2/4} .
\end{eqnarray}
These expressions for the $X_{imn}$ are accurate to $O(\sigma^2)$ at
low $l$, and
have the correct non-perturbative form at high $l$. Since only $X_{000}$
and $X_{022}$ enter at lowest order, the other exponentials may safely
be approximated further as $\sim X_{000}$, since their contributions
will be negligible at low $l$.
In the limit of zero separation $\beta\rightarrow 0$ we have
\begin{eqnarray}
\tilde{\xi}_+(0) &=& \sum_l \frac{2l+1}{4\pi}(C_l^E + C_l^B) = \sum_l
\frac{2l+1}{4\pi}(\tilde{C}_l^E + \tilde{C}_l^B) ,\\
\tilde{\xi}_-(0) &=& \tilde{\xi}_X(0) = 0 ,
\end{eqnarray}
where $\tilde{C}_l^E$ and $\tilde{C}_l^B$ are the lensed $E$- and $B$-mode
power spectra respectively.
This shows that lensing does not change the total polarization power,
though it mixes $E$ and $B$ modes as well as different scales.
\subsubsection{CMB power spectra}
Once the lensed correlation functions have been computed, transforming
to the CMB power spectra is straightforward using
\begin{eqnarray}
\tilde{C}_l^\Theta &=& 2\pi\int_{-1}^1 \tilde{\xi}(\beta)
d^l_{00}(\beta) {\text{d}} \cos\beta , \\
\tilde{C}_l^E - \tilde{C}_l^B &=& 2\pi\int_{-1}^1 \tilde{\xi}_-(\beta)
d^l_{2-2}(\beta) {\text{d}} \cos\beta , \\
\tilde{C}_l^E + \tilde{C}_l^B &=& 2\pi\int_{-1}^1 \tilde{\xi}_+(\beta)
d^l_{22}(\beta) {\text{d}} \cos\beta , \\
\tilde{C}_l^X &=& 2\pi\int_{-1}^1 \tilde{\xi}_X(\beta)
d^l_{20}(\beta) {\text{d}} \cos\beta .
\end{eqnarray}
For a further discussion of correlation functions and the transform to
power spectra see Ref.~\cite{Chon:2003gx}.
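For the temperature, $d^l_{00}(\beta) = P_l(\cos\beta)$ and the first transform is a Legendre transform, for which Gauss--Legendre quadrature in $x=\cos\beta$ is a natural choice. The sketch below is illustrative only; the polarization transforms need the analogous $d^l_{22}$, $d^l_{2\,-2}$ and $d^l_{20}$ kernels, which we omit.
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre

def cl_from_xi(xi_of_beta, lmax, npts=4096):
    # 2 pi int_{-1}^{1} xi(beta) P_l(cos beta) d cos(beta), evaluated at
    # Gauss-Legendre nodes x = cos(beta).
    x, w = np.polynomial.legendre.leggauss(npts)
    xi = xi_of_beta(np.arccos(x))
    return np.array([2.0 * np.pi * np.dot(w, xi * eval_legendre(l, x))
                     for l in range(lmax + 1)])
\end{verbatim}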
\subsubsection{Numerical implementation}
\begin{figure}
\begin{center}
\psfig{figure=CL05_fig4.eps,width=13cm}
\caption{Difference between the lensed and unlensed temperature,
cross-correlation and $E$-polarization power spectra (top three plots),
and the lensed $B$ power spectrum (bottom) for a fiducial concordance model.
The unlensed model has no tensor component (so no $B$-mode power), and the
lensed $B$ power spectrum shown is not highly accurate due
to the neglect of non-linear evolution in the lensing potential.
The magnitude of the lensing effect depends on the fluctuation amplitude in
the model; here the model has curvature perturbation power $A_s =
2.5\times 10^{-9}$ on $0.05\,\text{Mpc}^{-1}$ scales and spectral index $n_s=0.99$.
\label{lensedcls}}
\end{center}
\end{figure}
The correlation function method is inherently very efficient, only
requiring the evaluation of one-dimensional sums and integrals.
For an accurate calculation of $\tilde{C}_l^B$ it is essential to
compute the full range of the correlation function because it is
sensitive to large and small scales. However, when $\tilde{C}_l^B$ is not
needed the lensing is only a small-scale effect and we only need to
integrate some of the angular range to compute
$\tilde{\xi}(\beta)-\xi(\beta)$ (and hence the lensing contribution
$\tilde{C}_l - C_l$). We find that using
$\beta_{\text{max}}=\pi/16$ is sufficient for $0.1\%$ accuracy to
$l=2000$, providing a significant factor of 16 gain in
speed. Truncating the correlation function does induce ringing on very
small scales, so if accuracy is needed on much smaller scales the
angular range can be increased.
For all but very small scales, and the $\tilde{C}_l^B$ spectrum, we can
accurately evaluate the sums over $l$ to compute the lensed correlation
functions by sampling only every 10th $l$, yielding an
additional significant time saving.
Our code is publicly available as part of
\textsc{camb},\footnote{\url{http://camb.info}} with execution time being
dominated by that required to compute the transfer functions for the
CMB and the lensing potential. Once these have been computed, the time
required to compute the unlensed $C_l$ and then lens the result is about a
hundred times less (if $\tilde{C}_l^B$ is not required accurately).
This means that efficient methods for exploiting `fast' and `slow'
parameters~\cite{Lewis:2002ah,cosmomc_notes} during
Markov Chain Monte Carlo parameter estimation can still be applied
when lensing is accounted for via the lensed power spectrum.
Sample numerical results for the lensed CMB power spectra compared to
the unlensed spectra are shown in Fig.~\ref{lensedcls}.
\section{Comparison of methods}
\label{sec:comparison}
\begin{figure}
\begin{center}
\psfig{figure=CL05_fig5.eps,width=13cm}
\caption{
Comparison of our new result with the $O(C_l^\psi)$ harmonic result of
Ref.~\cite{Hu:2000ee} (dashed) and the flat-sky non-perturbative
result of Ref.~\cite{Seljak:1996ve}, extended to second order in
$C_{\text{gl},2}$ (solid). The magnitude of the difference depends on the
exact model and we have neglected non-linear contributions to the
lensing potential.
\label{lensedcl_comp}}
\end{center}
\end{figure}
We are now in a position to compare our new, accurate full-sky result
with previous work. In Fig.~\ref{lensedcl_comp} we compare our result
with the full-sky lowest-order perturbative harmonic result of
Ref.~\cite{Hu:2000ee} [correct to $O(C_l^\psi{})$] for a typical concordance
model. We also compare to the flat-sky result of
Refs.~\cite{Seljak:1996ve,Zaldarriaga:1998ar} which is non-perturbative in
$\sigma^2$. [We extend their results to second
order in $C_{\text{gl},2}$ using Eqs.~\eqref{flatt} and \eqref{flatpol}].
In all cases we
use an accurate numerical calculation of $C_l^\psi$, rather than
the Limber approximation, and ignore its non-linear contribution.
It is clear that the lowest-order perturbative harmonic method of
Ref.~\cite{Hu:2000ee} is not sufficiently accurate for precision
cosmology, with $\sim 1\%$ errors on the temperature and $\sim 5\%$ on
the $E$-mode polarization by $l\sim 2000$. These errors are sufficient to
bias parameters even with the planned
Planck\footnote{\url{http://sci.esa.int/planck}} satellite
observations. The perturbative harmonic result is equivalent to expanding the
correlation function result self-consistently to first order in
$C_l^\psi$. As discussed in Sec.~\ref{spherical},
this is inaccurate because $l^2\sigma^2$ in the isotropic
terms is not very small for large $l$, so many terms need to be
retained to get accurate results. It is possible to extend the
harmonic result to higher order~\cite{Cooray:2003ar}, however the
multi-dimensional integrals required scale exponentially badly with increasing
order. Even a self-consistent expansion to second order in
$C_l^\psi$ is not good enough at $l > 2000$, so at
least third order would be required. Furthermore we see that for
$\tilde{C}_l^B$ the method is also somewhat inaccurate on large scales:
because the $B$-mode signal comes from a wide range of $l$, and the $E$-mode
power peaks on small scales, the non-perturbative effects can be significant
on all scales. In fact, the large-scale lensed $E$-mode power also receives
most of its contribution from small-scale modes since the unlensed polarization
power spectrum rises steeply with $l$ on large scales. However, lensing
is still only a small fractional effect for $E$-polarization on large scales
and so the perturbative expansion is relatively more accurate for $E$ than
$B$.
The correlation function methods can easily handle the isotropic term
non-perturbatively. The accurate flat-sky result is much more accurate than
the lowest-order harmonic full-sky result, with only $\sim 0.3\%$ curvature
corrections to the temperature.\footnote{Due to the opposite sign of
curvature and second-order corrections in $C_l^\psi$, the flat-sky correlation
result correct to $O(C_{\text{gl},2})$ is actually slightly more accurate than the
result correct to $O(C_{\text{gl},2}^2)$.} The polarization errors are rather larger,
with percent-level difference on $\tilde{C}_l^B$. Although this is smaller
than the effect of non-linearities in the lensing potential (see
Section~\ref{sec:nonlin}), the latter can be accurately accounted for
with better modelling (e.g.\ Ref.~\cite{Smith:2002dz}) or simulations.
While the accurate flat-sky result is probably sufficient at
Planck sensitivities, curvature effects must be taken into account for truly
accurate results approaching the cosmic-variance limit. Although the curvature
is negligible on the scale of the deflection angles, it is not negligible on
the scale of the lensing potential coherence length.
Computing our full-sky accurate result is not much harder or slower than
computing the flat result, so we recommend our new calculation for future work.
Note that the
absolute precision of the lensed results is limited by the accuracy of the computed lensing potential
and the unlensed CMB power spectra. In particular,
uncertainties in the ionization history may generate errors
significantly above cosmic variance on the unlensed $C_l$. We use the
\textsc{recfast}\ code of Ref.~\cite{Seager:1999km} that may well be
inaccurate at above the percent
level\footnote{\url{http://cosmocoffee.info/viewtopic.php?t=174}}~\cite{Leung:2003je,Dubrovich:2005fc}.
However if the
ionization history can be computed reliably to high accuracy our new lensing
method can then be used to compute the lensed power spectra accurately.
\section{Non-linear evolution}
\label{sec:nonlin}
\begin{figure}
\begin{center}
\psfig{figure=CL05_fig6.eps,width=13cm}
\caption{The fractional change in the lensed $C_l$ due to non-linear corrections using \textsc{halofit}~\cite{Smith:2002dz} for the same model as Fig.~\ref{lensedcls}. The lensed $C_l$ are computed using our new accurate method.
\label{nonlin}}
\end{center}
\end{figure}
The most important assumption we have made so far is that the lensing potential is linear and Gaussian. On small scales this will not be quite correct. Although our method does not allow us to account for the non-Gaussianity, we can take into account the effect of non-linear evolution on the power spectrum of the lensing potential [and hence $\sigma^2(\beta)$ and $C_{\text{gl},2}(\beta)$~\cite{Seljak:1996ve}].
On scales where the non-Gaussianity of the deflection field is small this should be a good approximation, assuming we have an accurate way to compute the
non-linear power spectrum of the density field.
We use the \textsc{halofit}\ code of Ref.~\cite{Smith:2002dz} to compute an
approximate non-linear, equal-time power spectrum given an accurate numerical linear power spectrum at a given redshift. \textsc{halofit}\ is expected to be accurate at the few percent level for standard $\Lambda$CDM models with power law primordial power spectra (but cannot be relied on for other models, for example with an evolving dark energy component).
We simply scale the potential transfer functions $T_\Psi(k,\eta)$ of Eq. \eqref{cpsi_transfer} so that the power spectrum of the potential $\Psi$ has the correct non-linear form at that redshift:
\begin{equation}
T_\Psi(k,\eta) \rightarrow T_\Psi(k,\eta) \sqrt{\frac{\mathcal{P}^\text{non-linear}_\Psi(k,\eta)}{\mathcal{P}_\Psi(k,\eta)}}.
\label{Tnonlin}
\end{equation}
Since non-linear effects on $C_l^\psi$ are only important where the Limber
approximation holds, Eq.~(\ref{Tnonlin}) should be very accurate.
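Schematically, and purely as an illustration (the common $(k,\eta)$ grid is an assumption), the rescaling is a pointwise operation:
\begin{verbatim}
import numpy as np

def rescale_transfer(T_psi, p_lin, p_nl):
    # Eq. (Tnonlin): T_Psi -> T_Psi * sqrt(P_nl / P_lin), with all three
    # arrays sampled on the same (k, eta) grid; p_nl would come from halofit.
    return T_psi * np.sqrt(p_nl / p_lin)
\end{verbatim}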
The effect of the non-linear evolution on the power spectrum of the lensing potential is shown in Fig.~\ref{CPhi}. Although there is very little effect on scales where the power peaks ($l\sim 60$), non-linear evolution significantly increases the power on small scales. The corresponding changes to the lensed CMB power spectra are shown in Fig.~\ref{nonlin}. The temperature power spectrum $\tilde{C}_l^{\Theta}$ is changed by $\alt 0.2\%$ for $l\sim 2000$, but there are percent level changes on smaller scales.
Thus inclusion of the non-linear evolution will be important to obtain results accurate at cosmic-variance
levels, but is not likely to be important at $l< 2000$ for the near future. The effect on the $B$-mode power spectrum is more dramatic, giving a $>6\%$ increase in power on all scales. On scales beyond the peak in the $B$-mode power
($l\agt 1000$) the extra non-linear power becomes more important, producing an order unity change in the $B$-mode spectrum on small scales. On these scales the assumption of Gaussianity is probably not very good, and the accuracy will also be limited by the precision of the non-linear power spectrum.
For more accurate results, more general models, and on very small scales where the non-Gaussianity of the
lensing potential becomes important, numerical simulations may
be required (e.g. see Refs.~\cite{White:2003xz,White:1999xa}).
There are, of course, other non-linear effects on the CMB with the same
frequency spectrum as the primordial (and lensed) temperature anisotropies
and polarization. The kinematic Sunyaev-Zel'dovich (SZ) effect is the main such
effect for the temperature anisotropies, and current uncertainties in the
reionization history and morphology make the spectrum $C_l^\Theta$ uncertain
at the few percent level at $l=2000$~\cite{Zahn:2005fn}.
This is a little larger than
the error in the first-order harmonic lensing result, but this does not mean
that one should be content with the error in the latter. Precision cosmology
from the damping tail will require accurate modelling of both lensing and
the kinematic SZ effect. Errors at the percent level in the lensing power
on these scales would seriously limit our ability to constrain reionization
scenarios with future arcminute-resolution observations. For the polarization
spectra, the kinematic SZ effect is much less significant~\cite{Hu:1999vq}.
\section{Conclusions}
We have presented a new, fast and accurate method for computing the
lensed CMB power spectra using spherical correlation functions. Previous
perturbative methods were found to be insufficiently accurate for
precision cosmology, and non-perturbative results in the flat-sky approximation
are in error at above the cosmic-variance level. The method developed here
should enable accurate calculation of the lensing effect to
within cosmic-variance limits to $l \alt 2500$ under the assumptions
of the Born approximation and Gaussianity of the primordial fields.
Non-linear corrections to the lensing potential have only a small effect on
the lensed temperature power spectrum, but are important on all scales
for an accurate calculation of the lensed $B$-mode power spectrum.
\section{Acknowledgments}
We thank Gayoung Chon for her work towards implementing
the full-sky lowest-order lensing result of Ref.~\cite{Hu:2000ee} in
\textsc{camb}, and AL thanks Matias
Zaldarriaga, Mike Nolta,
Oliver Zahn, Patricia Castro, Pat McDonald and Ben Wandelt for discussion and communication.
AC acknowledges a Royal Society University Research Fellowship.
\section{Introduction}
Memory requirements for games on graphs have been studied for decades. Initially, these studies were motivated by applications to automata theory and decidability of logical theories. For example, memoryless determinacy of parity games is a key ingredient for complementation of tree automata and leads to the decidability of
the monadic second-order theory of trees~\cite{zielonka1998infinite}. Recently, games on graphs have become an important tool in reactive synthesis~\cite{bloem2018graph}. They serve there as a model of the interaction between a reactive system and the environment. There, the task of game theory is to understand which winning conditions admit ``simple'' winning strategies (as we are interested in implementing our system using them). The prevailing measure of complexity of strategies in the literature is memory. In this note, we study two kinds of memory -- \emph{general} (a.k.a.~\emph{chaotic}) memory and \emph{chromatic} memory. The relationship between them was first addressed in the Ph.D.~thesis of Kopczy\'{n}ski~\cite{phdthesis}, followed by several recent works~\cite{bouyer_et_al:LIPIcs:2020:12836,casares:LIPIcs.CSL.2022.12,casares2022size}.
\medskip
We focus on games that are deterministic, infinite-duration and turn-based. We call our players Protagonist and Antagonist. They play over a finite\footnote{There are papers that study these games over infinite graphs, but in this note we only work with finite graphs.} directed graph called an \emph{arena}. Its set of nodes has to be partitioned into ones controlled by Protagonist and ones controlled by Antagonist. Players move a token over the nodes of the graph along its edges. In each turn, the token is moved by the player controlling the current node.
After infinitely many turns, this process produces an infinite path in our graph. A \emph{winning condition} is a set of infinite paths that are winning for Protagonist. In the literature, a standard way of defining winning conditions assumes that arenas are edge-colored by elements of some set of colors $C$. Then any subset $W\subseteq C^\omega$ is associated with a winning condition, consisting of all infinite paths whose sequence of colors belongs to $W$. A utility of this approach is that we do not have to define winning conditions individually for each arena.
In this paper, we seek simple winning strategies of Protagonist, while the complexity of Antagonist's strategies is mostly irrelevant for us. Such asymmetry is motivated by reactive synthesis, where Protagonist represents a system and Antagonist represents the environment. Now,
the main measure of complexity of Protagonist's strategies for us is memory. Qualitatively, we distinguish between \emph{finite-memory} strategies and \emph{infinite-memory} strategies. In turn, among finite-memory strategies, we prefer those that have fewer states of memory.
Finite-memory strategies are defined through so-called \emph{memory structures}. Intuitively, a memory structure plays the role of a ``hard disk'' of a strategy. Formally, a \emph{general} memory structure $\mathcal{M}$ is a deterministic finite automaton whose input alphabet is the set of edges of an arena. During the game, edges over which the token moves are fed to $\mathcal{M}$ one by one. Correspondingly, the state of $\mathcal{M}$ is updated after each move.
Now, a strategy \emph{built on top of a memory structure} $\mathcal{M}$ (or simply an $\mathcal{M}$-strategy) is a strategy whose moves at any moment depend solely on two things: first, the current node, and second, the current state of $\mathcal{M}$. A strategy is finite-memory if it can be built on top of some memory structure. More precisely, if this memory structure has $k$ states, then strategies built on top of it are strategies \emph{with $k$ states of general memory}. Of course, there are strategies that cannot be built on top of any memory structure. Such strategies are infinite-memory strategies.
We also consider a special class of general memory structures called \emph{chromatic} memory structures. A memory structure is chromatic if its transition function does not distinguish edges of the same color. In other words, chromatic memory structures only reads colors of edges that are fed into it.
Alternatively, a chromatic memory structure can be viewed as a finite automaton whose input alphabet is not the set of edges, but the set of colors.
Correspondingly, strategies that are built on top of a chromatic memory structure with $k$ states are called strategies \emph{with $k$ states of chromatic memory}.
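To make the definition concrete, here is a small illustrative sketch (ours, not part of the formal development; all names are placeholders) of a chromatic memory structure as a finite automaton over colors.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class ChromaticMemory:
    init: object     # initial state
    delta: dict      # (state, color) -> next state

    def update(self, state, edge, color_of):
        # Only the color of the traversed edge is visible to the structure.
        return self.delta[(state, color_of(edge))]

# An M-strategy is then any map (current node, memory state) -> chosen edge;
# a general memory structure would instead key delta on the edge itself.
\end{verbatim}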
\subsection*{Around a Kopczy\'{n}ski's question}
Complexity of strategies brings us to complexity of winning conditions. For a given winning condition, we want to determine the minimal amount of memory which is sufficient to win whenever it is possible to win. More specifically, the \textbf{general memory complexity} of a winning condition $W$, denoted by $\genmem(W)$, is the minimal $k\in\mathbb{N}$ such that in every arena there exists a Protagonist's strategy $S$ with $k$ states of general memory which is optimal w.r.t.~$W$. If no such $k$ exists, we set $\genmem(W) = +\infty$. Now, ``$S$ is optimal w.r.t.~$W$'' means that there exists no node $v$ such that some Protagonist's strategy is winning from $v$ w.r.t.~$W$ and $S$ is not. Substituting ``general memory'' by ``chromatic memory'', we obtain a definition of the \textbf{chromatic memory complexity} of $W$, which is denoted by $\chrmem(W)$.
For any $W$, we have $\genmem(W)\le \chrmem(W)$. This is because any chromatic memory structure is general, and hence every strategy with $k$ states of chromatic memory is also a strategy with $k$ states of general memory. Our paper revolves around a question from the Ph.D.~thesis of Kopczy\'{n}ski~\cite{phdthesis}.
\begin{question}
\label{kop_conj} Is this true that $\genmem(W) = \chrmem(W)$ for every winning condition $W$?
\end{question}
To understand Kopczy\'{n}ski's motivation, we first have to go back to 1969, when B\"{u}chi and Landweber~\cite{buchi1969solving} established that $\chrmem(W)$ is finite for all $\omega$-regular $W$. An obvious corollary of this is that $\genmem(W)$ is also finite for all $\omega$-regular $W$. Since then, there is an unfinished quest of \emph{exactly characterizing} $\chrmem(W)$ and $\genmem(W)$ for $\omega$-regular $W$. In particular, it is open whether $\chrmem(W)$ and $\genmem(W)$ are computable given an $\omega$-regular $W$ as an input (assuming $W$ is given, say, in a form of a non-deterministic B{\"u}chi automaton recognizing $W$).
In his Ph.D.~thesis, Kopczy\'{n}ski contributed to this question by giving an algorithm computing $\chrmem(W)$ for prefix-independent $\omega$-regular $W$ (a winning condition is called prefix-independent if it is invariant under adding and removing finite prefixes). Prior to that, he published a weaker version of this result in~\cite{kopczynski2007omega}.
He asked Question \ref{kop_conj} to find out whether his algorithm also computes $\genmem(W)$ for prefix-independent $\omega$-regular $W$. Another motivation was that the same chromatic memory structure can be used in different arenas. Indeed, transition functions of chromatic memory structures can be defined over colors so that we do not have to specify them individually for each arena.
Question \ref{kop_conj} was recently answered by Casares in~\cite{casares:LIPIcs.CSL.2022.12}. Namely, for every $n\in\mathbb{N}$ he gave a \emph{Muller} condition $W$ over $n$ colors with $\genmem(W) = 2$ and $\chrmem(W) = n$.
\begin{definition}
A winning condition $W\subseteq C^\omega$ is \textbf{Muller} if $C$ is finite and $\alpha\in W \iff\beta\in W$ for any two $\alpha, \beta\in C^\omega$ that have the same sets of colors occurring infinitely often in them.
\end{definition}
Every Muller condition is prefix-independent and $\omega$-regular. Hence, we now know that Kopczy\'{n}ski's algorithm does not always compute $\genmem(W)$ for prefix-independent $\omega$-regular $W$. It is still open whether some other algorithm does this job.
In a follow-up work, Casares, Colcombet and Lehtinen~\cite{casares2022size} achieve a larger gap between $\genmem(W)$ and $\chrmem(W)$. Namely, they construct a Muller $W$ over $n$ colors such that $\genmem(W)$ is linear in $n$ and $\chrmem(W)$ is exponential in $n$.
It is worth mentioning that Casares, Colcombet and Lehtinen derive these examples from their new automata-theoretic characterizations of $\chrmem(W)$ and $\genmem(W)$ for Muller $W$. First, Casares~\cite{casares:LIPIcs.CSL.2022.12} showed that $\chrmem(W)$ equals the minimal size of a deterministic Rabin automaton, recognizing $W$, for every Muller $W$. Second, Casares, Colcombet and Lehtinen~\cite{casares2022size} showed that $\genmem(W)$ equals the minimal size of a good-for-games Rabin automaton, recognizing $W$, for every Muller $W$. The latter result complements an earlier work by Dziembowski, Jurdzinski and Walukiewicz~\cite{dziembowski1997much}, who characterized $\genmem(W)$ for Muller $W$ in terms of their Zielonka's trees~\cite{zielonka1998infinite}.
These examples, however, do not answer a natural follow-up question -- can the gap between $\genmem(W)$ and $\chrmem(W)$ be infinite? To answer it, we have to go beyond Muller and even $\omega$-regular conditions (because $\chrmem(W)$ is finite for them). In~\cite{kozachinskiy2022state}, we raised this question in the following form.
\begin{question}
\label{my_conj}
Is it true that for every \textbf{finite} set of colors $C$ and for every winning condition $W\subseteq C^\omega$ we have $\genmem(W) < +\infty \implies \chrmem(W) < +\infty$?
\end{question}
\begin{remark}
If we do not insist on finiteness of $C$, a negative answer to Question \ref{my_conj} follows from the example of Casares. Namely, for every $n$ he defines a winning condition $W_n\subseteq\{1, 2,\ldots n\}^\omega$, consisting of all $\alpha\in\{1, 2,\ldots n\}^\omega$ such that there are exactly two numbers from $1$ to $n$ that occur infinitely often in $\alpha$. He then shows that $\genmem(W_n) = 2$ and $\chrmem(W_n) = n$ for every $n$. We can now consider the union of these winning conditions $\cup_{n\ge 2} W_n$, which is a winning condition over $C = \mathbb{N}$. On one hand, $\genmem(\cup_{n\ge 2} W_n) = 2$ because every arena has only finitely many natural numbers as colors, and hence $\cup_{n\ge 2} W_n$ coincides with $W_n$ for some $n$ there. On the other hand, we have $\chrmem(\cup_{n\ge 2} W_n) \ge\chrmem(W_n) = n$ for every $n$, which means that $\chrmem(\cup_{n\ge 2} W_n) = +\infty$.
\end{remark}
In the current note, we answer Question \ref{my_conj} in the negative.
\begin{theorem}
\label{uniform_separation}
There exists a finite set of colors $C$ and a winning condition $W\subseteq C^\omega$ such that $\genmem(W) = 2$ and $\chrmem(W) = +\infty$.
\end{theorem}
Structurally, our $W$ belongs to the $\Sigma_2^0$-level of the Borel hierarchy.
Next, the size of $C$ in our example is 5, and there is a chance that it can be reduced. In turn, $\genmem(W)$ is optimal because $\genmem(W) = 1$ implies $\chrmem(W) = 1$ (one state of general memory is as useless as one state of chromatic memory).
We call our $W$ the ``Rope Ladder'' condition. We define it in Section \ref{sec:def}. The upper bound on $\genmem(W)$ and the lower bound on $\chrmem(W)$ are given in Section \ref{sec:upp} and in Section \ref{sec:low}, respectively. Before that, we give Preliminaries in Section \ref{sec:prel}.
\subsection*{Further open questions}
Still, some intriguing variations of Question \ref{my_conj} remain open. For example, it is interesting to obtain Theorem \ref{uniform_separation} for a closed condition, i.e., for a condition given by a set of prohibited finite prefixes. In the game-theory literature, such conditions are usually called safety conditions. Our $W$ is an infinite union of safety conditions. In~\cite{colcombet2014playing}, Colcombet, Fijalkow and Horn give a tight bound on $\genmem(W)$ for safety $W$, but they do not address chromatic memory complexity.
\begin{problem}
\label{safety}
Construct a finite set of colors $C$ and a safety winning condition $W\subseteq C^\omega$ such that $\genmem(W) < \infty$ and $\chrmem(W) = +\infty$.
\end{problem}
It is equally interesting to obtain Theorem \ref{uniform_separation} for a prefix-independent $W$, as our $W$ is not prefix-independent. One motivation is that the definition of a winning condition in Kopczy\'{n}ski's thesis~\cite{phdthesis} includes prefix-independence. It is unclear whether he meant his original question for all winning conditions or only for prefix-independent ones.
\begin{problem}
\label{pref_ind}
Construct a finite set of colors $C$ and a prefix-independent winning condition $W\subseteq C^\omega$ such that $\genmem(W) < \infty$ and $\chrmem(W) = +\infty$.
\end{problem}
There is also a variation of Question \ref{my_conj} related to a paper of Bouyer et al.~\cite{bouyer_et_al:LIPIcs:2020:12836}. In this paper, they introduce and study the class of arena-independent finite-memory determined winning conditions.
\begin{definition}
A winning condition $W$ is \textbf{arena-independent finite-memory determined} if both $\chrmem(W)$ and $\chrmem(\lnot W)$ are finite. Here $\lnot W = C^\omega\setminus W$ denotes the complement to $W$.
(Instead of taking the complement to $W$, one can swap Protagonist and Antagonist. In other words, in this definition we want \emph{both} Protagonist and Antagonist to play optimally w.r.t.~$W$ using some constant number of states of chromatic memory.)
\end{definition}
First, Bouyer et al.~obtain an automata-theoretic characterization of arena-independent finite-memory determinacy. Second, they deduce a one-to-two-player lifting theorem from it. Namely, they show that as long as both $\chrmem(W)$ and $\chrmem(\lnot W)$ are finite in arenas without the Antagonist's nodes, the same is true for all arenas.
A natural step forward would be to study $W$ for which
both $\genmem(W)$ and $\genmem(\lnot W)$ are finite. Unfortunately, it is even unknown whether this is a larger class of conditions. This raises the following problem.
\begin{problem}
\label{bouyer}
Construct a finite set of colors $C$ and a winning condition $W\subseteq C^\omega$ such that $\genmem(W)$ and $\genmem(\lnot W)$ are finite, but $\chrmem(W)$ is infinite.
\end{problem}
In fact, it is not clear if our $W$ from Theorem \ref{uniform_separation} solves this problem. We construct it using a probabilistic argument, and we do not know how to analyze $\genmem(\lnot W)$ for it.
Question \ref{my_conj} is also open over infinite arenas. There is a relevant result due to Bouyer, Randour and Vandenhove~\cite{bouyer_et_al:LIPIcs.STACS.2022.16}, who showed that the class of $W$ for which $\chrmem(W)$ and $\chrmem(\lnot W)$ are both finite in infinite arenas coincides with the class of $\omega$-regular $W$. Thus, it would be sufficient to give a non-$\omega$-regular $W$ for which both $\genmem(W)$ and $\genmem(\lnot W)$ are finite in infinite arenas.
Finally, let us mention a line of work which studied the relationship between chromatic and general memory in the non-uniform setting. Namely, fix a single arena $\mathcal{A}$ and some winning condition $W$, and then consider two quantities: first, the minimal $k_{gen}$ such that $\mathcal{A}$ has an optimal strategy with $k_{gen}$ states of general memory, and second, the minimal $k_{chr}$ such that $\mathcal{A}$ has an optimal strategy with $k_{chr}$ states of chromatic memory. In~\cite{le2020time}, Le Roux showed that if $k_{gen}$ is finite, then $k_{chr}$ is also finite. There is no contradiction with Theorem \ref{uniform_separation} because $k_{chr}$ depends not only on $k_{gen}$, but also on $\mathcal{A}$.
A tight bound on $k_{chr}$ in terms of $k_{gen}$ and the number of nodes of $\mathcal{A}$ was obtained in~\cite{kozachinskiy2022state}.
\section{Preliminaries}
\label{sec:prel}
\textbf{Notation.} For a set $A$, we let $A^*$ and $A^\omega$ stand for the set of all finite and the set of all infinite sequences of elements of $A$, respectively. For $x\in A^*$, we let $|x|$ denote the length of $x$. We also set $|x| = +\infty$ for $x\in A^\omega$. We let $\circ$ denote the function composition. The set of positive integers is denoted by $\mathbb{Z}^+$.
\subsection{Arenas}
\begin{definition} Let $C$ be a non-empty set. A tuple $\mathcal{A} = \langle V_P, V_A, E\rangle$ is called an \textbf{arena over the set of colors $C$} if the following conditions hold:
\begin{itemize}
\item $V_P, V_A, E$ are finite sets such that $V_P\cap V_A = \varnothing$, $V_P \cup V_A\neq\varnothing$ and $E\subseteq (V_P\cup V_A) \times C\times (V_P\cup V_A)$;
\item for every $s\in V_P\cup V_A$ there exists $c\in C$ and $t\in V_P\cup V_A$ such that $(s, c, t)\in E$.
\end{itemize}
\end{definition}
Elements of the set $V = V_P\cup V_A$ will be called nodes of $\mathcal{A}$. Elements of $V_P$ will be called nodes controlled by Protagonist (or simply Protagonist's nodes). Similarly, elements of $V_A$ will be called nodes controlled by Antagonist (or simply Antagonist's nodes). Elements of $E$ will be called edges of $\mathcal{A}$. For an edge $e = (s, c, t) \in E$ we define $\source(e) = s, \col(e) = c$ and $\target(e) = t$.
We imagine $e\in E$ as an arrow which is drawn from the node $\source(e)$ to the node $\target(e)$ and which is colored with $\col(e)$. Note that the second condition in the definition of an arena means that every node has at least one out-going edge.
We extend the function $\col$ to a function $\col\colon E^*\cup E^\omega\to C^*\cup C^\omega$ by setting:
\[\col(e_1 e_2 e_3\ldots) = \col(e_1)\col(e_2)\col(e_3)\ldots, \qquad e_1, e_2, e_3,\ldots\in E.\]
A non-empty sequence $p = e_1 e_2 e_3 \ldots \in E^*\cup E^\omega$ is called a path if for any $1\le i < |p|$ we have $\target(e_i) = \source(e_{i+1})$. We set $\source(p) = \source(e_1)$ and, if $p$ is finite, $\target(p) = \target(e_{|p|})$.
For technical convenience, every node $v\in V$ is assigned a $0$-length path $\lambda_v$, for which we set $\source(\lambda_v) = \target(\lambda_v) = v$ and $\col(\lambda_v) = \mbox{empty string}$.
Paths are sequences of edges, so we will say that some paths are prefixes of the others. However, we have to define this for $0$-length paths. Namely, we say that $\lambda_v$ is a prefix of a path $p$ if and only if $\source(p) = v$.
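To make the definitions above concrete, here is a minimal Python sketch of an arena with a path-validity check (the encoding and all names are our own illustration, not part of the formal development):
\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    source: str
    color: str
    target: str

@dataclass
class Arena:
    protagonist_nodes: set
    antagonist_nodes: set
    edges: list

    def is_path(self, p):
        # A non-empty edge sequence is a path iff targets match sources.
        return all(p[i].target == p[i + 1].source for i in range(len(p) - 1))

# A toy arena: a single Protagonist node with two self-loops of colors a, b.
arena = Arena({'v'}, set(), [Edge('v', 'a', 'v'), Edge('v', 'b', 'v')])
path = [arena.edges[0], arena.edges[1]]
assert arena.is_path(path)
print(''.join(e.color for e in path))  # col(path) = 'ab'
\end{verbatim}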
\subsection{Strategies}
Let $\mathcal{A} = \langle V_P, V_A, E\rangle$ be an arena over the set of colors $C$. A Protagonist's strategy in $\mathcal{A}$ is any function
\[S\colon\{p\mid p \mbox{ is a finite path in $\mathcal{A}$ with }\target(p) \in V_P\}\to E,\]
such that for every $p$ from the domain of $S$ we have $\source(S(p)) = \target(p)$. In this paper, we do not mention Antagonist's strategies, but, of course, they can be defined similarly.
The set of finite paths in $\mathcal{A}$ is the set of positions of the game. Possible starting positions are $0$-length paths $\lambda_s, s\in V$. When the starting position\footnote{We do not have to redefine $S$ for every starting position. The same $S$ can be played from any of them.} is $\lambda_s$, we say that the game starts at $s$. Now, consider any finite path $p$. Protagonist is the one to move after $p$ if and only if $t =\target(p)$ is a Protagonist's node. In this situation, Protagonist must choose some edge starting at $t$. A Protagonist's strategy fixes this choice for every $p$ with $\target(p)\in V_P$. We then append this edge to $p$ and get the next position in the game. Antagonist acts the same for those $p$ such that $\target(p)$ is an Antagonist's node.
Let us define paths that are consistent with a Protagonist's strategy $S$. First, any 0-length path $\lambda_v$ is consistent with $S$. Now, a non-empty path $p = e_1 e_2 e_3\ldots$ (which may be finite or infinite) is consistent with $S$ if the following holds:
\begin{itemize}
\item if $\source(p) \in V_P$, then $e_1 = S(\lambda_{\source(p)})$;
\item for every $1 \le i < |p|$, if $\target(e_i) \in V_P$, then $e_{i + 1} = S(e_1 e_2\ldots e_i)$.
\end{itemize}
For brevity, paths that are consistent with $S$ will also be called \emph{plays with} $S$. For a node $v$, we let $\fp(S, v)$ and $\ip(S, v)$ denote the set of finite plays with $S$ that start at $v$ and the set of infinite plays with $S$ that start at $v$, respectively. For $U\subseteq V$, we define $\fp(S, U) = \bigcup_{v\in U} \fp(S, v)$ and $\ip(S, U) = \bigcup_{v\in U} \ip(S, v)$.
\subsection{Memory structures}
Let $\mathcal{A} = \langle V_P, V_A, E\rangle$ be an arena over the set of colors $C$.
A memory structure in $\mathcal{A}$ is a tuple $\mathcal{M} = \langle M, m_{init}, \delta\rangle$, where $M$ is a finite set, $m_{init}\in M$ and $\delta\colon M\times E\to M$. Elements of $M$ are called states of $\mathcal{M}$, $m_{init}$ is called the initial state of $\mathcal{M}$ and $\delta$ is called the transition function of $\mathcal{M}$. Given $m\in M$, we inductively define the function $\delta(m,\cdot)$ over arbitrary finite sequences of edges:
\begin{align*}
\delta(m, \mbox{empty sequence}) &= m,\\
\delta(m, se) &= \delta(\delta(m, s), e), \qquad s\in E^*, e\in E.
\end{align*}
In other words, $\delta(m, s)$ is the state of $\mathcal{M}$ after $s$ has been fed to it, provided that before that $\mathcal{M}$ was in $m$.
A memory structure $\mathcal{M} = \langle M, m_{init}, \delta\rangle$ is called chromatic if $\delta(m, e_1) = \delta(m, e_2)$ for every $m\in M$ and for every $e_1, e_2\in E$ with $\col(e_1) = \col(e_2)$. In this case, there exists $\sigma\colon M\times C\to M$ such that $\delta(m, e) = \sigma(m,\col(e))$. In other words, we can view $\mathcal{M}$ as a deterministic finite automaton over $C$, with $\sigma$ being its transition function.
A strategy $S$ is built on top of a memory structure $\mathcal{M}$ if we have $S(p_1) = S(p_2)$ for any two paths $p_1, p_2$ with $\target(p_1) = \target(p_2)$ and $\delta(m_{init}, p_1) = \delta(m_{init}, p_2)$. In this case, we sometimes simply say that $S$ is an $\mathcal{M}$-strategy. To define an $\mathcal{M}$-strategy $S$, it is sufficient to give its \emph{next-move function} $n_S\colon V_P\times M\to E$. For $v\in V_P$ and $m\in M$, the value of $n_S(v, m)$ determines what $S$ does for paths that end at $v$ and bring $\mathcal{M}$ to $m$ from $m_{init}$.
A strategy $S$ built on top of a memory structure $\mathcal{M}$ with $k$ states is called a strategy with $k$ states of general memory. If $\mathcal{M}$ is chromatic, then $S$ is a strategy with $k$ states of chromatic memory.
For brevity, if $S$ is an $\mathcal{M}$-strategy and $p$ is a finite path, we say that $\delta(m_{init}, p)$ is the state of $S$ after $p$.
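To illustrate these notions, the following Python sketch (ours; a toy example rather than a definition) implements a memory structure, its chromatic special case, and the next-move function of a strategy built on top of it:
\begin{verbatim}
class MemoryStructure:
    # General memory: the transition function delta reads whole edges.
    def __init__(self, states, m_init, delta):
        self.states, self.m_init, self.delta = states, m_init, delta

    def run(self, edges):
        # delta(m_init, e_1 ... e_k): the state after feeding a finite path.
        m = self.m_init
        for e in edges:
            m = self.delta(m, e)
        return m

# A chromatic structure reads only col(e); here an edge is a triple
# (source, color, target), and sigma tracks the parity of 'a'-colored edges.
def sigma(m, c):
    return m ^ 1 if c == 'a' else m

chromatic = MemoryStructure({0, 1}, 0, lambda m, e: sigma(m, e[1]))

# An M-strategy is determined by its next-move function n_S(v, m).
def next_move(v, m):
    return ('v', 'a', 'v') if m == 0 else ('v', 'b', 'v')

play = [('v', 'a', 'v'), ('v', 'b', 'v'), ('v', 'a', 'v')]
state = chromatic.run(play)
print(state, next_move('v', state))  # 0 ('v', 'a', 'v')
\end{verbatim}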
\subsection{Winning conditions and their memory complexity}
A winning condition is any set $W\subseteq C^\omega$. We say that a Protagonist's strategy $S$ is winning from a node $u$ w.r.t.~$W$ if the image of $\ip(S, u)$ under $\col$ is a subset of $W$.
In other words, any infinite play from $u$ against $S$ must give a sequence of colors belonging to $W$. Now, a Protagonist's strategy $S$ is called optimal w.r.t.~$W$ if there exists no node $u$ such that some Protagonist's strategy is winning from $u$ w.r.t.~$W$ and $S$ is not.
We let $\genmem(W)$ be the minimal $k\in\mathbb{Z}^+$ such that in every arena $\mathcal{A}$ over $C$ there exists a Protagonist's strategy with $k$ states of general memory which is optimal w.r.t.~$W$. If no such $k$ exists, we set $\genmem(W) = +\infty$.
Likewise, we let $\chrmem(W)$ be the minimal $k\in\mathbb{Z}^+$ such that in every arena $\mathcal{A}$ over $C$ there exists a Protagonist's strategy with $k$ states of chromatic memory which is optimal w.r.t.~$W$. Again, if no such $k$ exists, we set $\chrmem(W) = +\infty$.
\section{The ``Rope Ladder'' Condition}
\label{sec:def}
Consider a partially ordered set $\Omega = (\mathbb{N}\times\{0, 1\}, \preceq)$, where $\preceq$ is defined by
\[
\label{order}
\forall (n, a), (m, b)\in\mathbb{N}\times\{0, 1\}\qquad (n, a)\preceq (m, b) \iff (n, a) = (m, b) \mbox{ or } n < m.
\]
\begin{center}
\begin{tikzpicture}
\node[draw=none, circle] (00) {$(0, 0)$};
\node[draw=none, circle, right = 0.7cm of 00] (01) {$(0, 1)$};
\node[draw=none, circle, above=0.7cm of 00] (10) {$(1, 0)$};
\node[draw=none, circle, right = 0.7cm of 10] (11) {$(1, 1)$};
\node[draw=none, circle, above=0.7cm of 10] (20) {$(2, 0)$};
\node[draw=none, circle, right = 0.7cm of 20] (21) {$(2, 1)$};
\node[draw=none, above = 0.1cm of 20] (dots) {$\vdots$};
\node[draw=none, above = 0.1cm of 21] (dots) {$\vdots$};
\draw[->] (10) -- (00);
\draw[->] (10) -- (01);
\draw[->] (11) -- (00);
\draw[->] (11) -- (01);
\draw[->] (20) -- (10);
\draw[->] (20) -- (11);
\draw[->] (21) -- (10);
\draw[->] (21) -- (11);
\end{tikzpicture}
\end{center}
Above is its informal depiction, with arrows representing $\preceq$ (they are directed from bigger elements to smaller elements).
We will use an abbreviation $\zero = (0, 0)$. Next,
we let $\mathbb{M}$ be the set of all functions $f\colon \Omega\to\Omega$ that are monotone w.r.t.~$\preceq$. Being monotone w.r.t.~$\preceq$ means that $x\preceq y\implies f(x)\preceq f(y)$ for all $x, y\in\Omega$.
\begin{definition}
\label{rope_ladder}
\textbf{The Rope Ladder} condition is a set $\RL\subseteq \mathbb{M}^\omega$, consisting of all infinite sequences $(f_1, f_2, f_3, \ldots)\in \mathbb{M}^\omega$ for which there exists $(N, b)\in\Omega$ such that $f_{n}\circ\ldots\circ f_2\circ f_1(\zero)\preceq (N, b)$ for all $n\ge 1$.
\end{definition}
We will use the following informal terminology with regard to $\RL$. Imagine that there is an ant which can move over the elements of $\Omega$. Initially, it sits at $\zero$. Next, take any sequence $(f_1, f_2, f_3, \ldots)\in \mathbb{M}^\omega$. We start moving the ant by applying functions from the sequence to the position of the ant. Namely, we first move the ant from $\zero$ to $f_1(\zero)$, then from $f_1(\zero)$ to $f_2\circ f_1(\zero)$, and so on. Now, $(f_1, f_2, f_3, \ldots)\in \RL$ if and only if there exists a ``layer'' in $\Omega$ which is never exceeded by the ant.
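For concreteness, the ant dynamics can be simulated by folding functions over positions; the sketch below (our own illustration, with two hypothetical monotone maps) computes the trajectory of the ant on a finite prefix and the highest layer it reaches there:
\begin{verbatim}
ZERO = (0, 0)

def ant_trajectory(fs, start=ZERO):
    # Positions f_k o ... o f_1(start) for k = 0, 1, ..., len(fs).
    traj = [start]
    for f in fs:
        traj.append(f(traj[-1]))
    return traj

up = lambda pos: (pos[0] + 1, pos[1])   # raises the layer by one
stay = lambda pos: pos                  # the identity map

traj = ant_trajectory([up, stay, stay, up])
print(traj)                       # [(0,0), (1,0), (1,0), (1,0), (2,0)]
print(max(n for n, _ in traj))    # highest layer reached on this prefix
\end{verbatim}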
\begin{remark}
$\RL$ is defined over infinitely many colors, but for our lower bound on its chromatic memory complexity we will consider its restriction to some finite subset of $\mathbb{M}$.
\end{remark}
To illustrate these definitions, we establish the following fact. It can also be considered as a warm-up for our lower bound.
\begin{fact}
\label{not_positional}
$\chrmem(\RL) > 1$.
\end{fact}
\begin{proof}
First, consider $u, v\colon\Omega\to\Omega$, depicted below:
\begin{center}
\begin{tikzpicture}
\node[draw=none, circle] (f00) {$(0, 0)$};
\node[draw=none, circle, right = 0.7cm of f00] (f01) {$(0, 1)$};
\node[draw=none, circle, above=0.7cm of f00] (f10) {$(1, 0)$};
\node[draw=none, circle, right = 0.7cm of f10] (f11) {$(1, 1)$};
\node[draw=none, circle, above=0.7cm of f10] (f20) {$(2, 0)$};
\node[draw=none, circle, right = 0.7cm of f20] (f21) {$(2, 1)$};
\node[draw=none, above = 0.05cm of f20] (fdots1) {$\vdots$};
\node[draw=none, above = 0.05cm of f21] (fdots2) {$\vdots$};
\node[draw=none, right = 0.6cm of fdots1] (f) {$u$};
\draw[->,red,thick] (f00) -- (f10);
\draw[->,red,thick] (f01) -- (f11);
\draw[->,red,thick] (f10) -- (f20);
\draw[->,red,thick] (f11) -- (f21);
\node[draw=none, circle, right=5cm of f00] (g00) {$(0, 0)$};
\node[draw=none, circle, right = 0.7cm of g00] (g01) {$(0, 1)$};
\node[draw=none, circle, above=0.7cm of g00] (g10) {$(1, 0)$};
\node[draw=none, circle, right = 0.7cm of g10] (g11) {$(1, 1)$};
\node[draw=none, circle, above=0.7cm of g10] (g20) {$(2, 0)$};
\node[draw=none, circle, right = 0.7cm of g20] (g21) {$(2, 1)$};
\node[draw=none, above = 0.05cm of g20] (gdots1) {$\vdots$};
\node[draw=none, above = 0.05cm of g21] (gdots2) {$\vdots$};
\node[draw=none, right = 0.6cm of gdots1] (g) {$v$};
\draw[->,red,thick] (g00) -- (g11);
\draw[->,red,thick] (g01) -- (g10);
\draw[->,red,thick] (g10) -- (g21);
\draw[->,red,thick] (g11) -- (g20);
\end{tikzpicture}
\end{center}
These functions are defined by arrows that direct each element of $\Omega$ to the value of the function on this element. Formally, $u((n, a)) = (n + 1, a)$ and $v((n, a)) = (n + 1, 1 - a)$ for every $(n, a)\in\Omega$. It holds that $u, v\in\mathbb{M}$ because they both always increase the first coordinate by 1.
We also consider the following two functions $f_0, f_1\colon\Omega\to\Omega$:
\begin{equation}
\label{fg}
f_b((n, a)) = \begin{cases} (n, a) & (n, a) = (0, 0), (0, 1) \mbox{ or } (1, b),\\(n + 1, a) & \mbox{otherwise},
\end{cases} \qquad b\in\{0, 1\}
\end{equation}
For the reader's convenience, we depict them as well.
\begin{center}
\begin{tikzpicture}
\node[draw=none, circle] (u00) {$(0, 0)$};
\node[draw=none, circle, right = 0.7cm of u00] (u01) {$(0, 1)$};
\node[draw=none, circle, above=0.7cm of u00] (u10) {$(1, 0)$};
\node[draw=none, circle, right = 0.7cm of u10] (u11) {$(1, 1)$};
\node[draw=none, circle, above=0.7cm of u10] (u20) {$(2, 0)$};
\node[draw=none, circle, right = 0.7cm of u20] (u21) {$(2, 1)$};
\node[draw=none, above = 0.1cm of u20] (udots1) {$\vdots$};
\node[draw=none, above = 0.1cm of u21] (udots2) {$\vdots$};
\node[draw=none, right = 0.6cm of udots1] (u) {$f_0$};
\path
(u00) edge [loop above, red,thick] (u00);
\path
(u01) edge [loop above, red,thick] (u01);
\path
(u10) edge [loop above, red,thick] (u10);
\draw[->,red,thick] (u11) -- (u21);
\node[draw=none, circle, right = 5cm of u00] (v00) {$(0, 0)$};
\node[draw=none, circle, right = 0.7cm of v00] (v01) {$(0, 1)$};
\node[draw=none, circle, above=0.7cm of v00] (v10) {$(1, 0)$};
\node[draw=none, circle, right = 0.7cm of v10] (v11) {$(1, 1)$};
\node[draw=none, circle, above=0.7cm of v10] (v20) {$(2, 0)$};
\node[draw=none, circle, right = 0.7cm of v20] (v21) {$(2, 1)$};
\node[draw=none, above = 0.05cm of v20] (vdots1) {$\vdots$};
\node[draw=none, above = 0.05cm of v21] (vdots2) {$\vdots$};
\node[draw=none, right = 0.6cm of vdots1] (v) {$f_1$};
\path
(v00) edge [loop above, red,thick] (v00);
\path
(v01) edge [loop above, red,thick] (v01);
\path
(v11) edge [loop above, red,thick] (v11);
\draw[->,red,thick] (v10) -- (v20);
\end{tikzpicture}
\end{center}
Both of these functions have 3 fixed points. At the remaining points, they act by increasing the first coordinate by 1. Now, note that the set of fixed points is
downwards-closed w.r.t.~$\preceq$ for both of them. Hence, $f_0, f_1\in\mathbb{M}$.
Consider the following arena.
\begin{center}
\begin{tikzpicture}
\node[draw, circle,minimum size=1cm] (1) {};
\node[draw, regular polygon, regular polygon sides=4, minimum size=1cm,right=2cm of 1] (2) {};
\path[->]
(1) edge [thick, in=120,out=60,out distance=1cm,in distance=1cm] node[midway, above] {$u$} (2);
\path[->]
(1) edge [thick, in=-120,out=-60,out distance=1cm,in distance=1cm] node[midway, below] {$v$} (2);
\path[->]
(2) edge [thick, in=30,out=90,out distance=1.5cm,in distance=1.5cm] node[midway, above] {$f_0$} (2);
\path[->]
(2) edge [thick, in=-30,out=-90,out distance=1.5cm,in distance=1.5cm] node[midway, above] {$f_1$} (2);
\end{tikzpicture}
\end{center}
The circle is controlled by Antagonist and the square is controlled by Protagonist. Assume that the game starts in the circle. We first show that Protagonist has a winning strategy w.r.t.~$\RL$. Then we show that Protagonist does not have a positional strategy which is winning w.r.t.~$\RL$. This implies that $\chrmem(\RL) > 1$.
Let us start with the first claim. After the first move of Antagonist, the ant moves either to $u(\zero) = (1, 0)$ or to $v(\zero) = (1,1)$. In the first case, Protagonist wins by forever using the $f_0$-edge (the ant will always stay at $(1, 0)$). In the second case, Protagonist wins by always using the $f_1$-edge (the ant will always stay at $(1, 1)$).
Now we show that every positional strategy of Protagonist is not winning w.r.t.~$\RL$. In fact, there are just 2 Protagonist's positional strategies -- one which always uses the $f_0$-edge and the other which always uses the $f_1$-edge. The first one loses if Antagonist goes by the $v$-edge. Then the ant moves to $v(\zero) = (1, 1)$. If we start applying $f_0$ to the ant's position, the first coordinate of the ant will get arbitrarily large. Similarly, the second Protagonist's positional strategy loses if Antagonist goes by the $u$-edge.
\end{proof}
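The functions in this proof are easy to sanity-check mechanically. The Python sketch below (ours) verifies monotonicity of $u$, $v$, $f_0$ and $f_1$ on a finite truncation of $\Omega$:
\begin{verbatim}
from itertools import product

def leq(x, y):
    # The order on Omega: (n, a) <= (m, b) iff equal or n < m.
    return x == y or x[0] < y[0]

u = lambda p: (p[0] + 1, p[1])
v = lambda p: (p[0] + 1, 1 - p[1])

def f(b):
    # f_b as defined in the text: three fixed points; otherwise layer + 1.
    return lambda p: p if p in {(0, 0), (0, 1), (1, b)} else (p[0] + 1, p[1])

omega = list(product(range(6), (0, 1)))  # layers 0..5 of Omega
for g in (u, v, f(0), f(1)):
    assert all(leq(g(x), g(y)) for x, y in product(omega, omega) if leq(x, y))
print("u, v, f_0, f_1 are monotone on the truncation")
\end{verbatim}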
\section{Upper Bound on the General Memory}
\label{sec:upp}
In this section, we establish
\begin{proposition}
$\genmem(\RL) = 2$.
\end{proposition}
By Fact \ref{not_positional}, we only have to show an upper bound $\genmem(\RL) \le 2$. For that, for every arena $\mathcal{A}$ over $\mathbb{M}$ and for every Protagonist's strategy $S_1$ in $\mathcal{A}$ we construct a Protagonist's strategy $S_2$ with 2 states of general memory for which the following holds:
for every node $v$ of $\mathcal{A}$, if $S_1$ is winning w.r.t.~$\RL$ from $v$, then so is $S_2$.
We will use the following notation. Take any finite path $p = e_1 \ldots e_m$ in $\mathcal{A}$. Define $\ant(p) = \col(e_m) \circ \ldots\circ\col(e_2)\circ\col(e_1)(\zero)$.
In other words, $\ant(p)$ is the position of the ant after the path $p$. In case when $p$ is a $0$-length path, we set $\ant(p) = \zero$. We also write $\layer(p)$ for the first coordinate of $\ant(p)$.
Let $W$ be the set of nodes of $\mathcal{A}$ from which $S_1$ is winning w.r.t.~$\RL$. By definition of $\RL$, for every $P\in\ip(S_1, W)$ there exists $N\in\mathbb{N}$ such that $\layer(p) \le N$ for every finite prefix $p$ of $P$.
The first step of our argument is to change the quantifiers here. That is, we obtain a strategy $S_1^\prime$ for which there exists some $N\in\mathbb{N}$ such that $\layer(p) \le N$ for every $p\in \fp(S_1^\prime, W)$.
We use an argument similar to the one used in~\cite{chatterjee2010generalized} to show finite-memory determinacy of multi-dimensional energy games. We call $p\in\fp(S_1, W)$ \emph{regular} if there exist two prefixes $q_1$ and $q_2$ of $p$ such that, first, $q_1$ is shorter than $q_2$, second, $\target(q_1) = \target(q_2)$, and third, $\ant(q_1) = \ant(q_2)$. In other words, $q_1$ and $q_2$ must lead to the same node in $\mathcal{A}$ and to the same position of the ant in $\Omega$. We stress that $q_2$ might coincide with $p$, but $q_1$ must be a proper prefix of $p$. If $p\in \fp(S_1, W)$ is not regular, then we call it \emph{irregular}.
First, we show that there are only finitely many irregular plays in $\fp(S_1, W)$. Note that any prefix of an irregular play is irregular. Thus, irregular plays form a collection of trees with finite branching (for each $u\in W$ there is a tree of irregular plays that start at $u$). Assume for contradiction that there are infinitely many irregular plays. Then, by K\H{o}nig's lemma, there exists an infinite branch in one of our trees. It gives some $P\in\ip(S_1, W)$ whose finite prefixes are all irregular. However, $P$ must be winning for Protagonist w.r.t.~$\RL$. In other words, there exists $N\in\mathbb{N}$ such that $\layer(p) \le N$ for every finite prefix of $P$. So, if $p$ ranges over finite prefixes of $P$, then $\ant(p)$ takes only finitely many values. Hence, there exist a node $v$ of $\mathcal{A}$ and some $(n, b)\in \Omega$ such that $v = \target(p)$ and $(n, b) = \ant(p)$ for infinitely many prefixes of $P$. Consider any two such prefixes. A longer one is regular because the shorter one is its prefix and leads to the same node in $\mathcal{A}$ and to the same position of the ant. This is a contradiction.
We now define $S_1^\prime$. It will maintain the following invariant for plays that start at $W$: if $p_{cur}$ is the current play, then there exists an irregular $p\in\fp(S_1, W)$ such that $\target(p_{cur}) = \target(p)$ and $\ant(p_{cur}) = \ant(p)$. Since there are only finitely many irregular plays, this invariant implies that $\ant(p_{cur})$ takes only finitely many values over $p_{cur}\in \fp(S_1^\prime, W)$, as required from $S_1^\prime$.
To maintain the invariant, $S_1^\prime$ plays as follows.
In the beginning, $p_{cur} = \lambda_w$ for some $w\in W$. Hence, we can set $p = \lambda_w$ also. Indeed, $\lambda_w\in \fp(S_1, W)$ and it is irregular as it has no proper prefixes. Let us now show how to maintain the invariant. Consider any play $p_{cur}$ with $S_1^\prime$ for which there exists an irregular $p\in\fp(S_1, W)$ such that $\target(p_{cur}) = \target(p)$ and $\ant(p_{cur}) = \ant(p)$. In this position, if it is Protagonist's turn to move, $S_1^\prime$ makes the same move as $S_1$ from $p$. As a result, some edge $e$ is played. Observe that $pe\in\fp(S_1, W)$. In turn, our new current play with $S_1^\prime$ is $p_{cur} e$. We have that $\target(p_{cur}e) = \target(pe) = \target(e)$ and $\ant(p_{cur}e) = \col(e)\big(\ant(p_{cur})\big) = \col(e)\big(\ant(p)\big) = \ant(pe)$. So,
if $pe$ is irregular, then the invariant is maintained. Now, assume that $pe$ is regular. Then there are two prefixes $q_1$ and $q_2$ of $pe$ such that, first, $q_1$ is shorter than $q_2$, second, $\target(q_1) = \target(q_2)$, and third, $\ant(q_1) = \ant(q_2)$. Since $p$ is irregular, $q_2$ cannot be a prefix of $p$. Hence, $q_2 = pe$. For the same reason, $q_1$ is irregular. Thus, the invariant is maintained if we set the new value of $p$ to be $q_1$. Indeed, $\target(p_{cur}e) = \target(pe) = \target(q_2) = \target(q_1)$ and $\ant(p_{cur}e) = \ant(pe) =\ant(q_2) = \ant(q_1)$.
We now turn $S_1^\prime$ into a strategy $S_2$ with 2 states of general memory which is winning w.r.t.~$\RL$ from every node of $W$.
\begin{remark}
Existence of such $S_2$ ``almost'' follows from the paper of Colcombet, Fijalkow and Horn~\cite{colcombet2014playing}. Namely,
consider a winning condition ``the first coordinate of the ant never exceeds $N$''. This is a safety condition, which means that it can be given by a set of prohibited prefixes. For such conditions, Colcombet, Fijalkow and Horn give a tight bound on their general memory complexity. In our case, it gives 2. However, in their definition of general memory complexity, different starting nodes might have different low-memory winning strategies. As far as we can see, our argument essentially repeats theirs, but we have to give it for the sake of rigor.
\end{remark}
\textbf{Preliminary definitions.} Let $X$ be the set of nodes reachable from $W$ by plays with $S_1^\prime$.
Next, for $v\in X$, define $\Omega_v\subseteq\Omega$ as the set of all $(n, b)\in\Omega$ such that $(n, b) = \ant(p)$ for some $p\in\fp(S_1^\prime, W)$ with $v = \target(p)$. In other words, $\Omega_v$ is the set of all possible positions of the ant that can arise at $v$ if we play according to $S_1^\prime$ from a node of $W$.
Now, take any $v\in X$. The set $\Omega_v$ is non-empty and, by our requirements on $S_1^\prime$, finite. Hence, it has $1$ or $2$ maximal elements w.r.t.~$\preceq$. We will denote them by $M_0^v$ and $M_1^v$. If $\Omega_v$ has just a single maximal element, then $M_0^v = M_1^v$. If $\Omega_v$ has two different maxima, then $M_0^v$ is the one having $0$ as the second coordinate. Finally, for every $v\in X$ and for every $b\in\{0, 1\}$ fix some $p_b^v\in\fp(S_1^\prime, W)$ such that $\target(p_b^v) = v$ and $\ant(p_b^v) = M_b^v$.
\textbf{Description of $S_2$.}
Two states of $S_2$ will simply be denoted by $0$ and $1$. The initial state of $S_2$ is $0$. The next-move function of $S_2$ is defined as follows. Assume that the state of $S_2$ is $I\in\{0,1\}$ and it has to make a move from a node $v$. If $v\notin X$, it makes an arbitrary move (this case does not matter for the argument below). Now, assume that $v\in X$. Then $S_2$ makes the same move as $S_1^\prime$ after $p_I^v$.
We now describe the memory structure of $S_2$. Assume that it receives an edge $e$ when its state is $I\in\{0,1\}$. The new state $J\in\{0,1\}$ is computed as follows. Denote $u = \source(e)$ and $v = \target(e)$. If $u\notin X$ or $v\notin X$, then $J = 0$ (again, this case is irrelevant for the rest of the argument). Assume now that $u, v\in X$. If $\col(e)\big(M_I^u\big) \in \Omega_v$, then we find some $b\in\{0, 1\}$ such that $\col(e)\big(M_{I}^u\big) \preceq M_b^v$ and set $J = b$.
Otherwise, we set $J = 0$.
\textbf{Showing that $S_2$ is winning from $W$.} First, we observe that $\target(p) \in X$ for every $p\in\fp(S_2, W)$ (in other words, $S_2$ cannot leave $X$ if we start somewhere in $W$). Indeed, assume for contradiction that some play with $S_2$ leaves $X$ from some node $v\in X$. Let $I$ be the state of $S_2$ at the moment just before leaving $X$. If it is a Protagonist's turn to move, then it moves as $S_1^\prime$ after $p_I^v$. Recall that $p_I^v\in \fp(S_1^\prime, W)$. Thus, we obtain a continuation of $p_I^v$ which is consistent with $S_1^\prime$ and leads outside $X$. This contradicts the definition of $X$. Now, if it is an Antagonist's turn to move from $v$, then any continuation of $p_I^v$ by one edge is consistent with $S_1^\prime$, so we obtain the same contradiction.
Next, we show that for any play $p\in\fp(S_2, W)$ we have $\ant(p) \preceq M_I^{\target(p)}$, where $I$ is the state of $S_2$ after $p$. This statement implies that $S_2$ is winning w.r.t.~$\RL$ from every node of $W$. Note that $M_I^{\target(p)}$ is well-defined thanks to the previous paragraph.
We prove this statement by induction on the length of $p$. Let us start with the induction base. Assume that $|p| = 0$ (then $p = \lambda_w$ for some $w\in W$). The state of $S_2$ after $p$ is the initial state, that is, $0$. Thus, we have to show that $\ant(p) \preceq M_0^{\target(p)}$. Note that $p$ has length $0$ and hence is consistent with any strategy. In particular, $p\in \fp(S_1^\prime, W)$. Hence, $\ant(p) \in\Omega_{\target(p)}$. If $\Omega_{\target(p)}$ has just a single maximum, then $\ant(p)$ does not exceed this maximum, as required. Now, if $M_0^{\target(p)} \neq M_1^{\target(p)}$, then the second coordinate of $M_0^{\target(p)}$ is $0$, so we have $\ant(p) \preceq M_0^{\target(p)}$ just because $\ant(p) = \zero$.
Next, we establish the induction step. Consider any $p\in\fp(S_2, W)$ of positive length and assume that for all paths from $\fp(S_2, W)$ of smaller length the statement is already proved. We prove our statement for $p$. Let $e$ be the last edge of $p$. Correspondingly, let $q$ be the part of $p$ preceding $e$. Denote $u = \target(q) = \source(e)$ and $v = \target(p) = \target(e)$.
Any prefix of $p$ is also in $\fp(S_2, W)$, so $q\in \fp(S_2, W)$. Therefore, our statement holds for $q$. Namely, if $I$ is the state of $S_2$ after $q$, then $\ant(q) \preceq M_I^u$.
Let $J$ be the state of $S_2$ after $p$. Our goal is to show that $\ant(p) \preceq M_J^v$. Note that $\ant(p) = \col(e)\big(\ant(q)\big)$ by definition of $\ant$. Since $\col(e) \in\mathbb{M}$ is monotone and $\ant(q) \preceq M_I^u$, we have that $\ant(p) = \col(e)\big(\ant(q)\big) \preceq \col(e)\big(M_I^u\big)$. It remains to show that $\col(e)\big(M_I^u\big) \preceq M_J^v$. Note that $J$ is the state into which $S_2$ transits from the state $I$ after receiving $e$. By definition of the memory structure of $S_2$, it is sufficient to show that $\col(e)\big(M_I^u\big) \in \Omega_v$.
By definition of $p_I^u$, we have that $M_I^u = \ant(p_I^u)$. Hence, $\col(e)\big(M_I^u\big) = \ant(p_I^u e)$. The path $p_I^u e$ starts at some node of $W$ and ends in $\target(e) = v$. Thus, to establish $\ant(p_I^u e)\in \Omega_v$, it remains to show consistency of $p_I^ue$ with $S_1^\prime$. We have $p_I^u\in \fp(S_1^\prime, W)$ by definition of $p_I^u$. In turn, if Protagonist is the one to move from $u = \target(p_I^u)$, then $e = S_1^\prime(p_I^u)$. Indeed, $e$ is the edge played by $S_2$ from $u$ when its state is $I$. Hence, $e = S_1^\prime(p_I^u)$, by the definition of the next-move function of $S_2$.
\section{Lower Bound on the Chromatic Memory}
\label{sec:low}
In this section, we establish the following proposition.
\begin{proposition}
There exists a finite set $C\subseteq \mathbb{M}$ such that $\chrmem(\RL\cap C^\omega) = +\infty$.
\end{proposition}
We start by describing $C$. First, we put there $f_0, f_1$ that are defined in \eqref{fg}. Next, we put there a function $h\colon\Omega\to\Omega$, defined by
\[h((n, a)) = \begin{cases}(n - 1, a) & n > 1 \\ (0, 0) & n = 0, 1.\end{cases}\]
For the reader's convenience, it is depicted below:
\begin{center}
\begin{tikzpicture}
\node[draw=none, circle] (h00) {$(0, 0)$};
\node[draw=none, circle, right = 0.7cm of h00] (h01) {$(0, 1)$};
\node[draw=none, circle, above=0.7cm of h00] (h10) {$(1, 0)$};
\node[draw=none, circle, right = 0.7cm of h10] (h11) {$(1, 1)$};
\node[draw=none, circle, above=0.7cm of h10] (h20) {$(2, 0)$};
\node[draw=none, circle, right = 0.7cm of h20] (h21) {$(2, 1)$};
\node[draw=none, above = 0.05cm of h20] (hdots1) {$\vdots$};
\node[draw=none, above = 0.05cm of h21] (hdots2) {$\vdots$};
\node[draw=none, right = 0.6cm of hdots1] (h) {$h$};
\path
(h00) edge [loop left, red,thick] (h00);
\draw[->,red,thick] (h01) -- (h00);
\draw[->,red,thick] (h10) -- (h00);
\draw[->,red,thick] (h11) -- (h00);
\draw[->,red,thick] (h20) -- (h10);
\draw[->,red,thick] (h21) -- (h11);
\end{tikzpicture}
\end{center}
Let us establish that $h\in\mathbb{M}$. Take any $(n, a), (m, b) \in\Omega$ such that $(n, a)\preceq (m, b)$. We show that $h((n, a)) \preceq h((m, b))$. If $(n, a) = (m, b)$, then $h((n, a)) = h((m, b))$. Now, if $(n, a)\neq (m, b)$, then $n < m$. The first coordinates of $h((n, a))$ and $h((m, b))$ are $\max\{0, n - 1\}$ and $\max\{0, m - 1\}$, respectively. If $m > 1$, then $\max\{0, m - 1\} = m - 1 > \max\{0, n - 1\}$, which implies that $h((n, a)) \preceq h((m, b))$. Now, if $m \le 1$, then $n\le 1$ also, which means that $h((n, a)) = h((m, b)) = (0, 0)$.
We also put into $C$ two functions $p, q\colon\Omega\to\Omega$ that we choose at random. More specifically, we first sample two independent infinite sequences of independent, uniformly distributed Bernoulli random variables $\{I_n\}_{n = 0}^\infty\in\{0,1\}^\omega $ and $\{J_n\}_{n = 0}^\infty\in\{0,1\}^\omega$. Then for every $(n, b)\in\Omega$ we define:
\[
p((n, b)) = \Big(n + 1, b\oplus I_n\Big),\qquad q((n, b)) = \Big(n + 1, b\oplus J_n\Big).
\]
For example, if $I_0 = 0, I_1 = 1, J_0 = 1, J_1 = 0$, then $p, q$ act as follows at the first two layers of $\Omega$:
\begin{center}
\begin{tikzpicture}
\node[draw=none, circle] (f00) {$(0, 0)$};
\node[draw=none, circle, right = 0.7cm of f00] (f01) {$(0, 1)$};
\node[draw=none, circle, above=0.7cm of f00] (f10) {$(1, 0)$};
\node[draw=none, circle, right = 0.7cm of f10] (f11) {$(1, 1)$};
\node[draw=none, circle, above=0.7cm of f10] (f20) {$(2, 0)$};
\node[draw=none, circle, right = 0.7cm of f20] (f21) {$(2, 1)$};
\node[draw=none, above = 0.05cm of f20] (fdots1) {$\vdots$};
\node[draw=none, above = 0.05cm of f21] (fdots2) {$\vdots$};
\node[draw=none, right = 0.6cm of fdots1] (u) {$p$};
\draw[->,red,thick] (f00) -- (f10);
\draw[->,red,thick] (f01) -- (f11);
\draw[->,red,thick] (f10) -- (f21);
\draw[->,red,thick] (f11) -- (f20);
\node[draw=none, circle, right=4cm of f00] (g00) {$(0, 0)$};
\node[draw=none, circle, right = 0.7cm of g00] (g01) {$(0, 1)$};
\node[draw=none, circle, above=0.7cm of g00] (g10) {$(1, 0)$};
\node[draw=none, circle, right = 0.7cm of g10] (g11) {$(1, 1)$};
\node[draw=none, circle, above=0.7cm of g10] (g20) {$(2, 0)$};
\node[draw=none, circle, right = 0.7cm of g20] (g21) {$(2, 1)$};
\node[draw=none, above = 0.05cm of g20] (gdots1) {$\vdots$};
\node[draw=none, above = 0.05cm of g21] (gdots2) {$\vdots$};
\node[draw=none, right = 0.6cm of gdots1] (v) {$q$};
\draw[->,red,thick] (g00) -- (g11);
\draw[->,red,thick] (g01) -- (g10);
\draw[->,red,thick] (g10) -- (g20);
\draw[->,red,thick] (g11) -- (g21);
\end{tikzpicture}
\end{center}
Since $p, q$ increase the first coordinate by 1 at every point of $\Omega$, we have that $p, q\in\mathbb{M}$.
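For concreteness, $p$ and $q$ can be sampled as follows (a minimal sketch, ours; only finitely many Bernoulli bits are drawn, which suffices for any finite truncation of $\Omega$):
\begin{verbatim}
import random

def sample_flip_map(depth, rng):
    # Sample bits I_0 .. I_{depth-1}; return (n, b) -> (n + 1, b xor I_n).
    bits = [rng.randrange(2) for _ in range(depth)]
    return lambda pos: (pos[0] + 1, pos[1] ^ bits[pos[0]])

rng = random.Random(0)
p = sample_flip_map(100, rng)
q = sample_flip_map(100, rng)
print(p((0, 0)), q((0, 0)))  # both land on layer 1 with random second bits
\end{verbatim}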
We set $C = \{f_0, f_1, h, p, q\}$ and show that $\Pr[\chrmem(\RL\cap C^\omega) =+\infty] \ge 1/2$. Since the event ``$\chrmem(\RL\cap C^\omega) =+\infty$'' is the intersection of the following decreasing sequence of events $\{\mbox{``$\chrmem(\RL\cap C^\omega) \ge Q$''}\}_{Q\ge 1}$, it is sufficient to establish the following lemma.
\begin{lemma}
\label{prob_lemma}
For every $Q\in\mathbb{Z}^+$ we have $\Pr[\chrmem(\RL\cap C^\omega) \ge Q] \ge 1/2$.
\end{lemma}
\begin{proof}
We say that two words $x, y\in\{p, q\}^*$ are \emph{$Q$-indistinguishable} if there exists no deterministic finite automaton over $\{p, q\}$ with at most $Q$ states which comes to different states on $x$ and on $y$. There are only finitely many deterministic finite automata over $\{p, q\}$ with at most $Q$ states; each word determines the tuple of states these automata reach on it, and there are only finitely many such tuples, so by the pigeonhole principle there exist two different $Q$-indistinguishable words $x, y\in\{p, q\}^*$ of the same length. Assume that $x = x_1 x_2\ldots x_t \in\{p, q\}^t$ and $y = y_1 y_2\ldots y_t\in\{p, q\}^t$ and consider the following arena.
\begin{center}
\begin{tikzpicture}
\node[draw, circle,minimum size=0.5cm] (1) {$u$};
\node[draw, circle, above right=0.5cm and 1cm of 1] (2) {};
\node[draw, circle, below right=0.5cm and 1cm of 1] (3) {};
\node[draw, circle, right=1cm of 2] (4) {};
\node[draw, circle, right=1cm of 3] (5) {};
\node[draw=none, right=0.2cm of 4] (6) {$\ldots$};
\node[draw=none, right=0.2cm of 5] (7) {$\ldots$};
\node[draw, circle, right=0.2cm of 6] (8) {};
\node[draw, circle, right=0.2cm of 7] (9) {};
\node[draw, circle, below right=0.5cm and 1cm of 8,minimum size=0.5cm] (10) {$v$};
\node[draw, circle, right= 1cm of 10] (11) {};
\node[draw=none, right=0.2cm of 11] (12) {$\ldots$};
\node[draw, circle, right= 0.2cm of 12] (13) {};
\node[draw, regular polygon, regular polygon sides=4, minimum size=0.5cm,right=1cm of 13] (14) {$w$};
\path[->]
(1) edge [thick] node[midway, above] {$x_1$} (2);
\path[->]
(1) edge [thick] node[midway, below] {$y_1$} (3);
\path[->]
(2) edge [thick] node[midway, above] {$x_2$} (4);
\path[->]
(3) edge [thick] node[midway, below] {$y_2$} (5);
\path[->]
(8) edge [thick] node[midway, above] {$x_t$} (10);
\path[->]
(9) edge [thick] node[midway, below] {$y_t$} (10);
\path[->]
(10) edge [thick] node[midway, above] {$h$} (11);
\path[->]
(13) edge [thick] node[midway, above] {$h$} (14);
\path[->]
(14) edge [thick, in=30,out=90,out distance=1.5cm,in distance=1.5cm] node[midway, above] {$f_0$} (14);
\path[->]
(14) edge [thick, in=-30,out=-90,out distance=1.5cm,in distance=1.5cm] node[midway, above] {$f_1$} (14);
\draw [decorate,
decoration = {brace,mirror,amplitude=15pt}, thick] (5.5,-0.4) -- (10,-0.4);
\node[draw=none, below=1cm of 12] (16) {$t - 1$ edges};
\end{tikzpicture}
\end{center}
All circles are controlled by Antagonist and the square is controlled by Protagonist. The game starts at the node $u$. We claim that Protagonist has a winning strategy w.r.t.~$\RL$. Indeed, in the beginning, Antagonist has two choices -- to go through $x$'s or to go through $y$'s. In any case, the first $t$ edges in the play are colored by $p$'s and $q$'s. Thus, upon reaching $v$, the first coordinate of the ant will be $t$. Then we go through $t - 1$ edges colored by $h$. As a result, the position of the ant at $w$ will be either $(1, 0)$ or $(1, 1)$. If it is $(1, 0)$, then Protagonist wins by always using $f_0$. If it is $(1, 1)$, then Protagonist wins by always using $f_1$.
Let $b, c\in\{0, 1\}$ be such that:
\[x_t\circ\ldots\circ x_1(\zero) = (t, b),\qquad y_t\circ\ldots\circ y_1(\zero) = (t, c).\]
We establish two statements. First, we show that $\Pr[b\neq c] = 1/2$. Second, we show that if $b\neq c$, then Protagonist has no winning strategy with at most $Q$ states of chromatic memory. These two statements combined imply our lemma.
\textbf{Showing that $\Pr[b\neq c]= 1/2$}. Let $k\in\{1, 2, \ldots, t\}$ be such that $x_k\neq y_k$. Assume without loss of generality that $x_k = p$ and $y_k = q$. Fix any choice of $\{I_n\}_{n = 0}^\infty$ and $\{J_n\}_{n = 0}^\infty$ except $I_{k - 1}$ and $J_{k-1}$. Let us show that $\Pr[b\neq c]= 1/2$ conditioned on this choice. Then, by the law of total probability, $\Pr[b\neq c]= 1/2$ unconditionally.
Observe that $b$ is computed as follows. First, we consider $0$. Then we add either $I_0$ or $J_0$ to it modulo 2. More specifically, if $x_1 = p$, we add $I_0$, and if $x_1 = q$, we add $J_0$. Then we either add $I_1$ or $J_1$ modulo 2, and so on. After $t$ steps, we get $b$. Since $x_k = p$, at the $k$th step we add $I_{k - 1}$. Hence, if the values of all the random variables except $I_{k - 1}$ and $J_{k - 1}$ are fixed, $b$ becomes an injective function of $I_{k - 1}$. Likewise, $c$ becomes an injective function of $J_{k - 1}$. Thus, $b$ and $c$ are independent and uniformly distributed random bits. Hence, $\Pr[b \neq c] = 1/2$.
\textbf{Showing that $b\neq c$ implies that Protagonist has no winning strategy with at most $Q$ states of chromatic memory.} Consider any Protagonist's strategy $S$ with at most $Q$ states of chromatic memory. It is built on top of some chromatic memory structure with at most $Q$ states. This memory structure, by definition, only reads colors of edges. Hence, when we go from $u$ to $v$, we either feed $x_1\ldots x_t$ or $y_1\ldots y_t$ to it. Now, these two words are $Q$-indistinguishable, and our memory structure has at most $Q$ states. This means that the state of $S$ at $v$, and hence at $w$, will be the same in both possible plays. Thus, $S$ acts identically at $w$ in these two plays.
At the same time, two possible positions of the ant when we reach $w$ are
\[\underbrace{h\circ\ldots \circ h}_{t-1}\circ x_t\circ\ldots\circ x_1(\zero) = \underbrace{h\circ\ldots \circ h}_{t-1}((t, b)) = (1, b)\]
and
\[\underbrace{h\circ\ldots \circ h}_{t-1}\circ y_t\circ\ldots\circ y_1(\zero) = \underbrace{h\circ\ldots \circ h}_{t-1}((t, c)) = (1, c).\]
If $b\neq c$, then both $(1, 0)$ and $(1, 1)$ are possible. Assume first that $S$ plays the $f_0$-edge when it first reaches $w$. Then $S$ loses if the ant reached $w$ being in $(1, 1)$. Indeed, after $S$ plays its first move at $w$, the position of the ant becomes $f_0((1, 1)) = (2, 1)$. If the first coordinate of the ant is 2 or more, both $f_0$ and $f_1$ increase it by 1. Hence, no matter what Protagonist does afterwards, the ant will get infinitely high in $\Omega$. Likewise, if the first move of $S$ at $w$ is the $f_1$-edge, then it loses if the ant reaches $w$ being in $(1, 0)$.
\end{proof}
\section{Introduction}\label{Introduction}
Continuous Variable (CV) Quantum Key Distribution (QKD) has been intensively studied and significant breakthroughs have been achieved in both theory and experiment (see \cite{hosseinidehaj2018satellite} for a review). Compared to Discrete Variable (DV) QKD \cite{bennett1984update,weinfurter2016quantum,gyongyosi2019survey,pirandola2020advances}, CV-QKD can be implemented with well-developed technologies (e.g., homodyne detectors) in commercial fibre-optic networks\cite{korzh2015provably,eriksson2019wavelength} and free-space optical communications\cite{shen2019free,gyongyosi2019secret}, giving it a potential advantage in practical deployments\cite{jouguet2011long,jouguet2012field,liao2018long,guo2020comprehensive,zhang2020long}.
Considering the finite-key security of CV-QKD and DV\nobreakdash-QKD, there are three critical parameters. These are, $N_o$, the number of original quantum signals sent by the transmitter (Alice) that are collected by the receiver (Bob); $N_e$, the number of quantum signals from which the protocol parameters are estimated;\footnote{More precisely, in a CV-QKD protocol, Alice and Bob randomly select a $N_e$-signal subset from the $N_o$ signals to estimate the parameters.} and, $\epsilon$, the probability that a QKD protocol fails to generate secret keys\cite{leverrier2010finite,furrer2012continuous}. To satisfy an upper limit on the failure probability of parameter estimation, Alice and Bob set $N_e$ to a large value, which in turn implies a larger $N_o$.
Despite the advantages in deployment, CV-QKD systems tend to demand a larger $N_o$ to reach the same $\epsilon$ relative to DV-QKD protocols. For example, to achieve a final key rate of $0.1$ bits per pulse with $\epsilon=10^{-9}$, a CV-QKD protocol studied in \cite{kish2020feasibility} required $N_o\approx 10^{9}$ signals. However, to achieve the same final key rate with $\epsilon = 10^{-14}$, the DV-QKD protocol in \cite{tomamichel2012tight} required $N_o\approx 10^4$ signals. This higher number of required signals in CV-QKD can render the classical post-processing (i.e. key reconciliation and privacy amplification\footnote{In this work, we focus on the key reconciliation step because it is the more time-consuming part in the post-processing steps while the privacy amplification involving only bit-wise operations can be easily implemented faster than the reconciliation~\cite{yuan201810}.}) slow - possibly failing to meet target timescales for reconciliation.
The end-users of a CV-QKD system expect the system to deliver two identical and secure keys under a limited time interval. For example, for satellite-based deployments, we would hope that the reconciliation is completed while maintaining a line-of-sight connection with the ground station. For a CV-QKD-enabled satellite with orbital parameters similar to \emph{Micius} \cite{liao2017satellite}, this would mean the reconciliation should be completed in less than a few minutes. For the protocol we use in this work (see later), and for $\epsilon=10^{-9}$, this, in turn, would require the data rate of reconciliation to be at least $3.6 \times 10^6$ bits per second. For real-time reconciliation (say in sub-second timescales), two orders of magnitude increases in the reconciliation rates would be required. Demands for smaller $\epsilon$ will exacerbate the issue. Ideally, the rate of reconciliation should always be faster than the rate of quantum signalling.
This all raises the question as to whether current CV-QKD reconciliation schemes are optimised for the highest possible key rates in bits per second. As we show here, this is not the case. Further optimisation is possible on all current schemes.
To understand the issue better, we define reconciliation in the context of CV-QKD as a two-step scheme where the inputs to the reconciliation are non-identical $N=2N_o - 2N_e$ quadrature values\footnote{ $N_o$ and $N_e$ are multiplied by 2 since Alice and Bob utilise both quadratures from heterodyne detection - the detection process we assume in this work.} held by Alice and Bob (after parameter estimation), and the output is an identical bit string held by Alice and Bob \cite{lin2015high,wang2017efficient,zhou2019continuous}. Assuming a reverse reconciliation scheme, Bob first converts the quadrature values encoded by Alice in each signal to $m$ bits. Alice, after converting each of her encoded real numbers also to $m$ bits, then initiates some discrepancy-correction algorithms based on pre-defined error-correction codes to ensure her $mN$ bits are identical to Bob's. In this work, we will adopt Low-Density Parity-Check (LDPC) codes for the error correction.
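To illustrate the conversion step, the sketch below (our own simplification: a uniform quantiser over a clipped range with $m=5$ and binary-reflected Gray labels; actual systems typically use optimised, non-uniform intervals) maps a quadrature value to $m$ bits:
\begin{verbatim}
def quantise_gray(x, m=5, x_max=4.0):
    # Uniform quantiser on [-x_max, x_max]; out-of-range values are clipped.
    levels = 2 ** m
    idx = int((x + x_max) / (2 * x_max) * levels)
    idx = min(levels - 1, max(0, idx))
    gray = idx ^ (idx >> 1)  # binary-reflected Gray code of the bin index
    return [(gray >> (m - 1 - j)) & 1 for j in range(m)]

# Nearby quadrature values differ in few bits under Gray labelling:
print(quantise_gray(0.51))  # [1, 1, 0, 1, 1]
print(quantise_gray(0.49))  # [1, 1, 0, 0, 1]
\end{verbatim}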
However, as alluded to above, reconciling $mN$ bits within a limited time frame can be challenging. State-of-the-art LDPC-based reconciliation schemes for CV-QKD systems involve parallelised computation on a Graphics Processing Unit (GPU) \cite{milicevic2017key,guo2020comprehensive} or Field-Programmable Gate Arrays (FPGAs)\cite{yang2020high,li2021fpga}. Reconciliation schemes implemented on FPGAs offer more programmable flexibility, but sometimes at the cost of reduced memory access relative to GPUs. For our purposes, both hardware architectures are useful - both offer massive parallelisation opportunities. These parallelisation solutions generally take the following two-step approach: 1) The $mN$ bits are organised as $m$ $N$-bit blocks to be reconciled. Each $N$-bit block is divided into multiple shorter blocks of size, say, $N_R$. This is usually just set to a block size that can be processed within some timescale. 2) Then the resulting $N_R$-bit blocks are reconciled in parallel (via independent processors) using optimally-designed LDPC decoders. However, what is missing in this approach is a proper optimisation analysis as to what the optimal value of $N_R$ is. As we show below, simply reducing $N_R$ at the cost of additional processing units is not an optimal solution. It transpires that in QKD the ``penalty'' cost of reducing the code rate (implicit in the use of small block lengths) significantly influences the bit per second final key rate.
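Schematically, steps 1) and 2) amount to partitioning each $N$-bit block into $\lceil N/N_R \rceil$ sub-blocks and decoding them on independent workers. A minimal Python sketch is below (ours; the decoder is a stand-in for an LDPC decoder, not a real implementation):
\begin{verbatim}
from concurrent.futures import ProcessPoolExecutor

def split_blocks(bits, n_r):
    # Partition an N-bit block into sub-blocks of length N_R.
    return [bits[i:i + n_r] for i in range(0, len(bits), n_r)]

def decode_stub(block):
    # Stand-in for an LDPC decoder acting on one N_R-bit sub-block.
    return block

def reconcile_parallel(bits, n_r, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        decoded = list(pool.map(decode_stub, split_blocks(bits, n_r)))
    return [b for block in decoded for b in block]

if __name__ == "__main__":
    bits = [0, 1] * 50                       # a toy 100-bit block
    assert reconcile_parallel(bits, 16) == bits
\end{verbatim}
Note that shrinking $N_R$ increases parallelism but, as just discussed, also carries a code-rate penalty; the optimal $N_R$ must balance the two.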
A more sophisticated analysis is required to determine the optimal reduced block length. Such an analysis is the key contribution of this work. Although we will adopt a specific CV-QKD protocol for our analysis, the key steps of our scheme will apply to any CV-QKD protocol. Our reconciliation scheme will deliver the highest reconciliation rate for a given processor speed - thus allowing for the optimal solution to CV-QKD reconciliation.
\section{System Overview}
\label{section:SystemOverview}
Although, as just stated, our analysis will apply to most CV-QKD protocols, for detailed quantitative discussion we will consider only one specific CV-QKD protocol - the ``no-switching'' protocol\cite{weedbrook2004quantum,hosseinidehaj2020finite,dequal2020feasibility} based on heterodyne detection. In this protocol, the quantum signal is encoded using Gaussian-modulated coherent states\cite{weedbrook2004quantum}. The main advantage of the no-switching protocol is that Alice and Bob can utilise all measurement results \cite{hosseinidehaj2020finite} (in most other protocols some results are discarded due to a random quadrature selection). We also adopt a Slice Reconciliation (SR) variant named Multi-Stage Hard Decoding\footnote{The slice reconciliation can be implemented with 2 other variants: Bit Interleaved Coded Modulation (BICM)~\cite{bloch2005efficient} and Multi-Level Coding/Multi-Stage Decoding (MLC/MSD)~\cite{jouguet2014high,mani2021multiedge}. We note that the MLC/MSD takes advantage of the dependence between slices to select the optimal LDPC code rates~\cite{bloch2005efficient,mani2021multiedge}. However, as a special case of MLC/MSD, MSHD assumes that the slices are independent~\cite{imai1977new,wachsmann1999multilevel,bloch2005efficient}. Using MSHD leads to a tractable analysis at the expense of sub-optimal selection of LDPC code rates, but such expense is negligible if Gray Labelling~\cite{wesel2001constellation} is adopted and the number of slices is at least 5~\cite{wachsmann1999multilevel,bloch2005efficient}, as is the case in this work.} (MSHD)~\cite{imai1977new,wachsmann1999multilevel,bloch2005efficient} for the classical reconciliation step, where the number of bits derived from each measurement outcome is $m$. We refer to this variant simply as SR in the following.
It is worth noting that the optimisation analysis to follow is to some extent independent of the details of the reconciliation scheme. However,
SR\cite{bloch2005efficient,bloch2006ldpc} can be compared with the other well-known reconciliation scheme for CV-QKD - multi-dimensional reconciliation\cite{leverrier2008multidimensional}. It is known that SR achieves higher reconciliation efficiency when the Signal-to-Noise Ratio (SNR) is greater than 1\cite{bloch2006ldpc}. At low SNR the opposite is true.
For focus, here we adopt SR (as multistage hard decoding\cite{bloch2006ldpc}) since in many satellite scenarios post-selection is used to filter out the low SNR quantum signals
\cite{hosseinidehaj2018satellite}. Our adopted scheme will be more useful in such scenarios.
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{NDiagram}
\caption{Diagram of the reconciliation. 1) Alice prepares and sends Gaussian-modulated coherent states to Bob during the signal transmission. 2) Bob performs heterodyne detection to obtain quadrature values. Blue diamonds at Alice's side represent Alice's quadrature values. Diamonds filled with tinted or shaded blue at Bob's side show that his quadrature values deviate from what was prepared by Alice after transmission. 3) Alice and Bob select and exchange a random subset of the quadrature values (diamonds with red outlines) to perform parameter estimation. 4) Alice and Bob perform reverse reconciliation based on SR to convert their quadrature values into two bit strings and reconcile them. 5) Alice and Bob perform privacy amplification to obtain two identical but shorter key strings about which Eve has effectively no knowledge.\label{fig:NDiagram}}
\end{figure*}
We now briefly describe the steps of the protocol, a diagram of which is shown in Fig.~\ref{fig:NDiagram}. In the following, we assume the quantum signalling rate is much larger than the reconciliation rate.
\begin{itemize}
\item \textbf{Step 1: Signal Preparation.} Alice selects a fixed modulation variance $V_A$. For each quantum signal to be transmitted to Bob, Alice randomly selects a number from a Gaussian distribution, $N(0,V_A)$, and then prepares a signal by displacing one of the quadrature components of a vacuum state by this random number. The process is repeated on the signal for the other quadrature. The signal is then transmitted to Bob.
\item \textbf{Step 2: Heterodyne Detection.} Bob performs heterodyne detection to obtain the two quadrature values (real numbers) for each received signal. Bob compares each measured quantum signal with a given cut-off threshold and informs Alice to discard her corresponding quantum signal if his measured quantum signal is lower than the threshold\footnote{The Gaussian post-selection technique at Bob's side effectively improves the channel conditions between Alice and Bob\cite{fiuravsek2012gaussian} so that SR is preferred for reconciliation (rather than multidimensional reconciliation).}. A quantum signal that is lost in transit registers a null signal at Bob. Neglecting null signals, Bob holds $2N_o$ quadrature values at the end of this process.
\item \textbf{Step 3: Parameter Estimation.} Bob randomly selects a subset of $2N_e$ quadrature values from the $2N_o$ quadrature values and sends this \emph{estimation subset}, along with the corresponding time information, to Alice via classical communications (we adopt $N_e = \frac{1}{2}N_o$, unless otherwise stated).
Alice uses the timing information to pair the signals in this subset (and therefore the corresponding quadrature values) with those sent by her, and then estimates the covariance matrix between the shared states. Based on the estimated covariance matrix, Alice determines the channel transmissivity, $T$, excess noise, $\xi$, Bob's SNR, $\gamma$, the Holevo Information, $\chi_{BE}$, between Bob and the eavesdropper (Eve), and the mutual information between Alice and Bob, $I_{AB}$. Finally, for a given target reconciliation efficiency $\beta$, Alice compares $\chi_{BE}$ with $\beta I_{AB}$. Alice aborts the protocol if $\chi_{BE} \geq\beta I_{AB}$. Otherwise, Alice informs Bob of the estimation results, i.e. $T$, $\xi$, $\gamma$, $\chi_{BE}$ and $I_{AB}$.
\item \textbf{Step 4: Bit Error Estimation for SR.} Using Gray Labelling, Alice and Bob represent each of the quadrature values embedded in each signal with $m$ bits. Then, for quadrature values selected in the estimation subset, Alice forms an $N_{e}$-by-$m$ bit matrix and Bob does the same. Next, Alice and Bob exchange their matrices and compare the $j^{th}$ column of the two matrices to estimate the Bit Error Ratio\footnote{At this step, sources of bit errors include the channel transmission, heterodyne detection, and quantisation.} (BER), $p_j,j\in \{0,1,\cdots,m-1\}$, for all the digits in the $j^{th}$ column. The estimated $p_j$ will be used in SR. Finally, Alice and Bob discard all the quadrature values in the estimation subset. At the end of this step, Alice and Bob each hold an $mN$-bit string.
\item \textbf{Step 5: Reverse Reconciliation.} For each column, Alice and Bob agree on an LDPC code with block length $N_R$ that is closest to the capacity determined by $p_j$. Bob forms a new $N_R$-bit string (referred to as a ``slice'' in SR) by selecting the $j^{th}$ digit (bit) of each of the $N_R$ quadrature values, encodes the new bit string (the slice) into syndrome bits, and sends those bits to Alice (see III.B for details). Alice then initiates SR to obtain her best estimate of Bob's string. Alice repeats this process until all her $mN$ bits are reconciled.
Finally, Alice and Bob obtain two hashed strings by applying the same hash function to their reconciled strings and exchange the hash results to check whether SR is successful. If successful, Alice holds a $mN$-bit string identical to Bob's $mN$-bit string. Otherwise, they abort the protocol and restart from Step 1.
\item \textbf{Step 6: Privacy Amplification.} Based on Eq.~\ref{eq:BPSKeyRate}, Alice and Bob compute the length of the secret key that can be extracted and then apply a 2-universal hashing function on their reconciled string to obtain two identical and shorter secret key strings about which Eve has effectively no knowledge.
\end{itemize}
\section{Overcoming the Limitations of Key Reconciliation}
\label{section:KeyReconciliation}
\label{section:penalty}
\subsection{GPU-based SR}
\label{section:DecodingTime}
The process of SR is to reconcile $mN$ bits. One can naively use $m$ LDPC matrices with $N_R=N$ for each matrix. However, due to practical hardware limitations, the process is better implemented by dividing $N$ into $N_d$ blocks of some smaller $N_R$ so that the same LDPC decoders can reconcile these blocks in parallel. This process resembles the idea of \textit{Single Program Multiple Data} (see \cite{darema2001spmd,pharr2012ispc} for more details). As illustrated in Fig.~\ref{fig:GPUSlice}, we implement SR by creating $N_d$ LDPC decoders loaded with the same LDPC matrix on $N_d$ GPU threads and let these decoders reconcile $N_d$ blocks in parallel. This helps to reduce the SR timescale and assists in meeting the time constraints, such as those posed in satellite-based scenarios. Section \ref{section:TimeSimulation} will demonstrate in detail the advantage of using such parallelisation.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{GPUSR.eps}
\caption{Diagram of GPU-based SR adopted in this work. $N$ quadrature values are divided into multiple blocks of length $N_R$. Each block is loaded to one thread that is dedicated to performing SR for this block. The reconciled bits obtained from each thread are collected and ready for privacy amplification. The floor operator is shown as $[ \cdot ]$. \label{fig:GPUSlice}}
\end{figure}
\subsection{The Penalty of Using Finite-Length LDPC Codes}
An illustration of the SR scheme is shown in Fig.~\ref{fig:SliceDiagram}. The generic steps are: \emph{1)} for the $i^{th}$ quadrature value, $y^i, i=0,1,\cdots,N_R-1$, Bob applies a constant-step quantisation function, $M(\cdot)$, to convert $y^i$ to an $m$-bit string\footnote{We assume the least significant bit is $l_0^i$.} denoted as $\{l_0^i, l_1^i, \cdots, l_{m-1}^i\}$, where $l_j^i, j=0, \cdots, m-1$ is the binary bit for the $j^{th}$ digit of the $i^{th}$ quadrature value. \emph{2)} We define the $j^{th}$ slice, $\mathbf{S_j}$, as the bit string of length $N_R$ created by Bob: $\mathbf{S_j} = \{l_j^0, l_j^1, \cdots, l_j^{N_R-1} \}$. For $\mathbf{S_j}$, Bob applies an LDPC matrix, $H_j$, based on the $p_j$ obtained in parameter estimation, to obtain the corresponding syndrome bits. \emph{3)} Bob sends Alice the syndrome bits of $\mathbf{S_j}$ and $H_j$ via classical communications. \emph{4)} Alice uses her quadrature values as side information and what was transmitted by Bob as the inputs of the LDPC decoder. Alice takes the soft decoding output (the log-likelihood ratio when the decoding finishes) of $\mathbf{S_{j-1}}$ as the input to accelerate the reconciliation of $\mathbf{S_j}$ (except for $\mathbf{S_0}$)\footnote{The rationale behind this is that the soft decoding output of $\mathbf{S_{j-1}}$ provides \textit{a priori} information on the reliability of each bit in $\mathbf{S_{j}}$ \cite{bloch2005efficient}.}\cite{bloch2005efficient,lodewyck2007quantum}. \emph{5)} Alice obtains her estimated version of $\mathbf{S_{j}}$. Then, Alice and Bob move on to $\mathbf{S_{j+1}}$. \emph{6)} Alice and Bob repeat Steps 1 to 5 until all $N$ values are reconciled. We note that Alice and Bob use $m$ LDPC matrices to reconcile $m$ slices in a block - but the same $m$ LDPC matrices are used for reconciling all $N_d$ blocks since the quantisation errors are the same for a given $m$\cite{laudenbach2018continuous}.
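For illustration, the quantisation and slicing of one block (Steps 1 and 2 above) can be sketched in Python as follows; the function names, the fixed quantisation step, and the use of NumPy are illustrative assumptions rather than details of the implementation used in our experiments:
\begin{verbatim}
import numpy as np

def quantise(y, m=5, step=0.25):
    # Constant-step quantisation: map each quadrature value to
    # one of 2^m bins centred on zero (end bins absorb the tails).
    n_bins = 2 ** m
    idx = np.floor(y / step).astype(int) + n_bins // 2
    return np.clip(idx, 0, n_bins - 1)

def gray(idx):
    # Binary-reflected Gray labelling of the bin indices.
    return idx ^ (idx >> 1)

def slices(y, m=5, step=0.25):
    # Return an (m, N_R) bit array; row j is slice S_j (j = 0 is l_0).
    g = gray(quantise(y, m, step))
    return np.array([(g >> j) & 1 for j in range(m)], dtype=np.uint8)

# Example: slice a block of N_R = 10^6 Gaussian quadrature values.
S = slices(np.random.default_rng(0).normal(0.0, 1.5, size=10**6))
\end{verbatim}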
\begin{figure*}[h]
\centering
\includegraphics[width=0.8\textwidth]{SliceDiagram.eps}
\caption{Diagram of SR. The red dashed arrows show the soft decoding output feeds from $\mathbf{S_{j-1}}$ to $\mathbf{S_j}$. The red dashed rectangles are graphical examples of a slice. \label{fig:SliceDiagram}}
\end{figure*}
In SR, Bob needs to transmit syndrome bits to Alice based on the selected LDPC matrix with the code rate, $R_j$, for $\mathbf{S_j}$ via classical communications. For a given channel condition, selecting $R_j$ closest to the capacity is the common approach to minimise the number of bits disclosed to the eavesdropper while Alice can still reconstruct Bob's quantised bits without error\cite{bloch2006ldpc}. Specifically, for a given $T$, we can obtain the SNR, $\gamma$, as \cite{laudenbach2018continuous}
\begin{equation}
\label{eq: SNR}
\gamma=\frac{\frac{1}{2}V_AT}{1+\frac{1}{2}\xi}\,,
\end{equation}
where $V_A$ is the modulation variance at Alice's side, $\xi = \xi_{ch} + \xi_d$ is the total excess noise power, $\xi_{ch}$ is the channel excess noise, and $\xi_d$ is the detector noise. Finally, the reconciliation efficiency $\beta\in [0,1]$ for the SR is obtained via \cite{jouguet2014high}
\begin{equation}
\label{equation: beta}
\beta = \frac{\Pi(M(Y))-m+\sum_{j=0}^{m-1}R_j}{I_{AB}}\,,
\end{equation}
where $Y$ is a vector of Bob's quadrature values of length $N_R$, $M(Y)$ is the $mN_R$-bit string obtained by applying the quantisation function $M(\cdot)$ to each quadrature value in $Y$, and $\Pi(M(Y))$ is the entropy (per quadrature value) of $M(Y)$. Increasing $m$ to values that render the quantisation error negligible is always possible, but this would require the individual LDPC codes for every $j^{th}$ slice to be near perfect (capacity achieving), otherwise the efficiency $\beta$ will be low; $m=5$ is found to be a good pragmatic compromise, and is adopted here. Given five slices, a constant-step quantisation of the real line across $2^5$ bins centred on zero is chosen. This step size, which is dependent on the adopted $\gamma$, optimises $\beta$ (see\cite{jouguet2014high} for further discussion).
The LDPC code rates, $R_j$, in Eq.~\ref{equation: beta} are the actual rates of the specific codes used for each slice (of length $N_R$). Normally, in practice, $N_R$ is simply set to some value that allows target time-frames to be met, given that the decoding time is an increasing function of the block length\cite{milicevic2017key}. We use $R_j$ to obtain our experimental key rate in Eq.~\ref{eq:ExpKeyRate}. A more nuanced choice of $N_R$, one that optimises the secure key rate, is now analysed.
To make progress in our task, we utilise a previous analysis of channel coding in the finite block-length\cite{polyanskiy2010channel} regime as a means to further investigate the effective channel capacity, $C_{Finite}$, for a given block length $N_R$ and $\gamma$. For a finite message set $\mathcal{M}$, $C_{Finite}$ is the logarithm of the maximum size of $\mathcal{M}$ that can be transmitted via $N_R$ channel uses with a decoding error probability less than $\epsilon_{EC}$, normalised by $N_R$. Specifically, for an Additive White Gaussian Noise Channel (AWGNC), $C_{Finite}$ is given by\footnote{This approximation is accurate if the code achieves more than $80\%$ of the capacity\cite{polyanskiy2010channel}. } \cite{polyanskiy2010channel}
\begin{equation}
\label{eq:R_Finite}
C_{Finite} \approx C(\gamma)-\frac{\sqrt{N_RA}Q^{-1}(\epsilon_{EC}) + \frac{1}{2}\log N_R}{N_R}\,,
\end{equation}
where $C(\gamma) = \frac{1}{2}\log (1+\gamma)$ is the Shannon Capacity for the given $\gamma$, $Q^{-1}(s)$ is the inverse of the Q-function
\begin{equation}
Q(z)=\frac{1}{\sqrt{2\pi}}\int_{z}^{\infty}e^{-\frac{t^2}{2}}dt\,,
\end{equation}
and $A$ is given by
\begin{equation}
A = \frac{\gamma}{2} \frac{\gamma+2}{(\gamma+1)^2}(\log e)^2\,.
\end{equation}
Function $A$ is termed the ``channel dispersion'' since it represents the reduction of the code rate from the channel capacity due to a tolerated decoding error probability. It is the ``price to pay" for using a code with finite block length, for a given $\gamma$.
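Eq.~\ref{eq:R_Finite} is straightforward to evaluate numerically. The following Python sketch is an illustration only; logarithms are taken base 2 and SciPy's \texttt{norm.isf} serves as $Q^{-1}$:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def c_finite(n_r, snr, eps_ec):
    # Finite-block-length AWGNC capacity, bits per channel use.
    c = 0.5 * np.log2(1.0 + snr)                  # Shannon capacity
    # Channel dispersion A (logs base 2).
    a = 0.5 * snr * (snr + 2.0) / (snr + 1.0) ** 2 * np.log2(np.e) ** 2
    penalty = (np.sqrt(n_r * a) * norm.isf(eps_ec)
               + 0.5 * np.log2(n_r)) / n_r
    return c - penalty

print(c_finite(n_r=1e6, snr=3.0, eps_ec=2.5e-10))
\end{verbatim}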
Note, $C_{Finite}$ is the upper bound of $\sum_{j=0}^{m-1}R_j$ for a given $\epsilon_{EC}$ and $N_R$. To simplify the determination of the code rate in the finite-length regime, we determine $C_{Finite}$ instead of each $R_j$ for the purpose of analysis. Using Eq.~\ref{eq:R_Finite} we introduce $\beta_{Finite}$ as an analytical reconciliation efficiency in the finite LDPC block length regime (neglecting the information loss due to the quantisation process). This is given by
\begin{equation}
\label{eq: BetaFinite}
\beta_{Finite} = \frac{C_{Finite}}{I_{AB}}\approx \frac{{I_{AB}}-\frac{\sqrt{N_RA}Q^{-1}(\epsilon_{EC}) + \frac{1}{2}\log N_R}{N_R}}{I_{AB}}\,.
\end{equation}
Eq.~\ref{eq: BetaFinite} explicitly illustrates how LDPC codes with long block lengths generally reduce the information disclosed to Eve during reconciliation.
We demonstrate the connection between $\beta_{Finite}$ and $\beta$. Firstly, we rewrite Eq.~\ref{equation: beta} as
\begin{equation}
\label{equation: beta2}
\beta = \frac{\Pi(M(Y))-R_s}{I_{AB}}\,,
\end{equation}
where $R_s = \sum_{j=0}^{m-1}(1 - R_j) = m -\sum_{j=0}^{m-1}R_j$ is the number of syndrome bits sent by Bob per quadrature value, i.e., normalised by the slice length $N_R$. $R_s$ is the side-information that Alice uses to reconcile her $m$ slices\cite{mani2021multiedge}. It is known that $R_s$ satisfies the Slepian\nobreakdash-Wolf Bound~\cite{slepian1973noiseless}
\begin{equation}
\label{equation: SWB}
R_s \geq \Pi(M(Y)|X)\,,
\end{equation}
where $X$ is a vector of Alice's quadrature values of length $N_R$. Applying Eq.~\ref{equation: SWB} to Eq.~\ref{equation: beta2}, we have
\begin{equation}
\begin{aligned}
\frac{\Pi(M(Y))-R_s}{I_{AB}} &\leq \\
&\frac{\Pi(M(Y))-\Pi(M(Y)|X)}{I_{AB}} = \frac{I(M(Y);X)}{I_{AB}}\,,
\end{aligned}
\end{equation}
where $I(M(Y);X)$ is the total mutual information (after quantisation) between Alice and Bob; that is, $\beta$ is maximised when the Slepian--Wolf bound is met with equality. Recalling that $C_{Finite}$ is the upper bound of the total rate $\sum_{j=0}^{m-1}R_j$ achievable at block length $N_R$, $\beta_{Finite}$ plays the role of $\beta$ when this bound is attained and the quantisation loss is neglected. Collecting these bounds, we have the following
\begin{equation}
0 \leq \beta \leq \frac{I(M(Y);X)}{I_{AB}} \leq 1\,,\quad 0 \leq \beta_{Finite} \leq 1\,.
\end{equation}
\subsection{Analysing the Computational Complexity of SR}
An LDPC matrix with block length $N_R$ can be defined by the symbol and check node degree distribution polynomials, $\lambda(x)=\sum_{a=2}^{\Lambda}\lambda_a x^{a-1}$ and $\rho(x)=\sum_{b=2}^{P}\rho_b x^{b-1}$. Here, $\Lambda$ and $P$ are the highest degrees in $\lambda(x)$ and $\rho(x)$, respectively. We denote the total number of non-zero entries in an LDPC matrix as $G$, and adopt the well-known Belief Propagation (BP) decoder\cite{richardson2008modern} for error correction. We define the total number of arithmetic operations of SR as $\sum_{j=0}^{m-1} E_j D_{j}$, where, for each $\mathbf{S_j}$, $E_j$ is the number of arithmetic operations executed within a decoding iteration,\footnote{In a BP decoder, a decoding iteration is one pass through the decoding algorithm.} and $D_{j}$ is the number of decoding iterations\cite{ai2020reconciliation}. We note, in our GPU-based SR, $E_j$ and $D_j$ are different for the $m$ slices of each block since $m$ LDPC matrices are used to reconcile the $m$ slices. For a channel with constant $T$ and $\xi$, $D_{j}$ is dependent on a target $\epsilon_{EC}$, and on the polynomials $\lambda(x)$ and $\rho(x)$. Note, for $N_R$ larger than approximately $10^5$, $D_j$ is independent of $N_R$ (a result we will adopt later). Assuming the Gaussian approximation within the Density Evolution Algorithm, $D_j$ is given by
\begin{equation}
\label{eq:rob2}
D_{j} = \min \{ k \in \mathbb{Z^*} : q_k = f(\gamma, k, \lambda(x), \rho(x)) \leq \epsilon_{EC}\}\,,
\end{equation}
where $q_k$ is the BER after the $k^{th}$ decoding iteration and given by\cite{chung2001analysis}
\begin{equation}
\label{eq:rob}
\begin{aligned}
q_k &= f(\gamma, k, \lambda(x), \rho(x))\\
&=\sum_{b=2}^{P} \rho_b \phi^{-1}\left(1 - L^{b-1} \right)\,.
\end{aligned}
\end{equation}
Here
\begin{equation}
L = 1 - \sum_{a=2}^{\Lambda}\lambda_a \phi\left(\log \gamma+\left(a-1\right)q_{k-1}\right)\,,
\end{equation}
where $q_0=0$, and $\phi(v)$ is given by
\begin{equation}
\label{eq:phiFunc}
\phi(v) = \begin{cases}
1 - \frac{1}{\sqrt{4\pi v}}\int_{-\infty}^{+\infty} \tanh\left(\frac{u}{2}\right)e^{-\frac{\left(u - v\right)^2}{4v}}du & v>0\\
1& v=0\,.
\end{cases}
\end{equation}
Finding a closed-form solution to Eq.~\ref{eq:rob} is problematic due to the $\phi^{-1}(w)$ term (here $w=\phi(v)$). To make progress, the following approximation for Eq.~\ref{eq:phiFunc} is used\cite{chung2001analysis}
\begin{equation}
\phi(v) \approx \begin{cases}
e^{-0.4527v^{0.86}+0.0218} & v>0\\
1& v=0\,.
\end{cases}
\end{equation}
We then find $\phi^{-1}(w)$ is given by
\begin{equation}
\phi^{-1}(w) \approx \begin{cases}
\left( \frac{\log{w}- 0.0218}{-0.4527} \right)^{1.1628} & 0<w<1\\
0& w=1\,.
\end{cases}
\end{equation}
With this all in place, it is now possible to solve for $D_j$ as given by Eq.~\ref{eq:rob2}.
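The recursion is simple to evaluate numerically. The following Python sketch is an illustration only, with two stated assumptions: the channel LLR mean is taken as $2\gamma$ (the standard Gaussian-approximation value for a binary-input AWGNC, used here in place of the $\log\gamma$ appearing in Eq.~\ref{eq:rob} so that the example converges), and the residual BER is read off a posterior LLR mean $m$ as $Q(\sqrt{m/2})$, with the edge-averaged variable degree used as a proxy:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def phi(v):
    # Approximation of phi(v) above, for scalar v >= 0.
    return np.exp(-0.4527 * v ** 0.86 + 0.0218) if v > 0 else 1.0

def phi_inv(w):
    # Approximate inverse of phi, for 0 < w < 1.
    return ((np.log(w) - 0.0218) / -0.4527) ** (1.0 / 0.86)

def decoding_iterations(snr, lam, rho, eps_ec, k_max=500):
    # lam, rho: edge degree -> coefficient of lambda(x), rho(x).
    m0 = 2.0 * snr                      # channel LLR mean (assumption)
    dv = sum(a * la for a, la in lam.items())  # edge-averaged degree
    q = 0.0                             # check-to-variable mean, q_0 = 0
    for k in range(1, k_max + 1):
        L = 1.0 - sum(la * phi(m0 + (a - 1) * q)
                      for a, la in lam.items())
        q = sum(rb * phi_inv(1.0 - L ** (b - 1))
                for b, rb in rho.items())
        if norm.sf(np.sqrt((m0 + dv * q) / 2.0)) <= eps_ec:
            return k                    # D_j
    return k_max

# Example: regular (3,6) ensemble, lambda(x) = x^2, rho(x) = x^5.
print(decoding_iterations(3.0, {3: 1.0}, {6: 1.0}, eps_ec=1e-9))
\end{verbatim}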
Now we focus on the determination of $E_j$. When messages are propagated from the variable nodes to the check nodes, there are $2G$ multiplications and $G$ additions\cite{chandrasetty2011fpga}. When messages are propagating back to the variable nodes, there are $4G$ operations required ($2G$ multiplications and $2G$ additions)\cite{chandrasetty2011fpga}. Therefore, $E_j$ is obtained by\cite{chandrasetty2011fpga,ai2020reconciliation}
\begin{equation}
\label{eq: EP}
\begin{aligned}
E_j &=7G\\
&= 7N_R (\frac{\sum_{b=2}^{P}\frac{\rho_b}{b}}{\sum_{a=2}^{\Lambda}\frac{\lambda_a}{a}})(\sum_{b=2}^{P}b\rho_b)\,.
\end{aligned}
\end{equation}
The decoding time of the whole reconciliation process, $\Delta t$, is given by
\begin{equation}
\label{eq:DeltaT}
\Delta t = c_h \sum_{j=0}^{m-1}E_j D_{j}\,,
\end{equation}
where $c_h$ is a hardware-dependent constant representing the time taken to complete an arithmetic operation. Clearly, by dividing $N$ values into multiple blocks with length $N_R$ and decoding these blocks simultaneously, Alice and Bob can reduce the decoding time by a factor of $N_d = \frac{N}{N_R}$.
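In code, Eqs.~\ref{eq: EP} and \ref{eq:DeltaT} amount to the following (a Python sketch continuing the one above; the helper names are ours):
\begin{verbatim}
def ops_per_iteration(n_r, lam, rho):
    # E_j = 7G, with G the number of edges implied by the
    # degree-distribution pair for block length n_r.
    inv_lam = sum(la / a for a, la in lam.items())
    inv_rho = sum(rb / b for b, rb in rho.items())
    d_c = sum(b * rb for b, rb in rho.items())
    return 7.0 * n_r * (inv_rho / inv_lam) * d_c

def decoding_time(n_r, codes, c_h):
    # codes: one (lam, rho, D_j) tuple per slice j = 0, ..., m-1.
    return c_h * sum(ops_per_iteration(n_r, lam, rho) * d_j
                     for lam, rho, d_j in codes)
\end{verbatim}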
\section{Final Key Rate}
\label{section:KeyRate}
We now present the penalty incurred by the division of $N$ in the finite-key regime, and then propose an optimisation procedure to find the optimal $N_R$ which maximises the final key rate in bits per second.
\subsection{Analysis of the Final Key Rate}
\label{section:Analysis}
For the protocol considered, in the finite-key regime the final key rate in bits per pulse, $K$, under the assumption of Gaussian collective attacks is adopted as \cite{leverrier2015composable,lupo2018continuous,hosseinidehaj2020finite}\footnote{This key-rate formulation was developed in \cite{leverrier2015composable} with a typographical error corrected in \cite{lupo2018continuous}. Eq.~\ref{eq:BPSKeyRate} is from \cite{hosseinidehaj2020finite} which acknowledged the correction and simplified the final key rate formulation (see footnote 2 of \cite{hosseinidehaj2019optimal} for more details). A general discussion on the use of other key-rate formulations (e.g. \cite{pirandola2021limits,pirandola2021satellite}) within our framework is given later.}
\begin{equation}
\label{eq:BPSKeyRate}
K=\frac{N(\beta I_{AB} - S_{BE}^{\epsilon_{PE}} ) - \sqrt{N}\Delta_{AEP}-2\log_2\frac{1}{2\epsilon_{PA}}}{N_o}\,,
\end{equation}
where $S_{BE}^{\epsilon_{PE}}$ is the upper bound of the estimated $\chi_{BE}$ (note $K$ is an upper bound, which we assume is reached).
The determination of $S_{BE}^{\epsilon_{PE}}$ is carried out and utilised in the key rates derived here, but this determination is somewhat lengthy. As such, the reader is referred to the appendix for a full explanation and derivation of this term. We simply note here that $S_{BE}^{\epsilon_{PE}}$ is dependent on estimates of the channel parameters and therefore on the value of $N_e$, the number of symbols sacrificed in the estimation.
In Eq.~\ref{eq:BPSKeyRate}, $\Delta_{AEP}$ is a penalty term (derived using the Asymptotic Equipartition Property of a stochastic source) due to the finite number of bits used in quantisation and privacy amplification, and is given by
\begin{equation}
\label{eq:AEP}
\begin{aligned}
\Delta_{AEP} =& (m+1)^2 + 4(m+1)\sqrt{\log_2 (\frac{2}{\epsilon^2_{s}})} \\
&+2\log_2(\frac{2}{\epsilon^2\epsilon_{s}}) + \frac{4\epsilon_{s}m}{\epsilon\sqrt{N}}\,,
\end{aligned}
\end{equation}
where $\epsilon = \epsilon_{EC} + 2\epsilon_{s} + \epsilon_{PA}+\epsilon_{PE}$ is the probability that a QKD protocol fails to generate secret keys. Here, $\epsilon_{s}$ is the smoothing parameter associated with the smooth min-entropy calculation, $\epsilon_{PA}$ is the failure probability of the privacy amplification, and $\epsilon_{PE}$ is the probability that the true value of $\chi_{BE}$ is not within the confidence interval calculated during parameter estimation. For a given $\epsilon$, one can determine the values of $\epsilon_{EC}$, $\epsilon_{s}$, $\epsilon_{PA}$, and $\epsilon_{PE}$ by setting them individually or collectively as part of the maximisation of $K$ (see Eqs.~18 -- 21 in \cite{kish2020feasibility} for details).
We consider the penalty on the final key rate caused by dividing $N$ into sub-blocks of length $N_R$. Replacing $\beta$ in Eq.~\ref{eq:BPSKeyRate} with the $\beta_{Finite}$ of Eq.~\ref{eq: BetaFinite}, we obtain the final key rate with the finite LDPC block length effect fully considered. The new rate is given by
\begin{equation}
\label{eq:FiniteK}
K_{Finite} = \frac{N(C_{Finite} - S_{BE}^{\epsilon_{PE}} ) - \sqrt{N}\Delta_{AEP}-2\log_2\frac{1}{2\epsilon_{PA}}}{N_o}\,.
\end{equation}
Thus far, we have been analysing the final key rate in bits per pulse. However, the final key rate in bits per second is more interesting in our context - the system will not be viable if the reconciliation takes too long to complete. From this point forward, we use a dashed symbol to distinguish a final key rate that is given in bits per second. Taking the decoding complexity into account, and ignoring the time taken to acquire the quantum signals, we can write the final key rate, $K_{Finite}^\prime$, as
\begin{equation}
\label{eq:BPSRate}
\begin{aligned}
K_{Finite}^\prime &= \frac{N_o K_{Finite}}{\Delta t}\\
& = \frac{N(C_{Finite} - S_{BE}^{\epsilon_{PE}} ) - \sqrt{N}\Delta_{AEP}-2\log_2\frac{1}{2\epsilon_{PA}}}{c_h\sum_{j=0}^{m-1}E_j D_{j}}\,.
\end{aligned}
\end{equation}
We observe that $\Delta t$ in Eq.~\ref{eq:DeltaT} and $C_{Finite}$ in Eq.~\ref{eq:R_Finite} are increasing functions of $N_R$. We are interested in finding a unique $N_R$ so that $K_{Finite}^\prime$ is maximised.
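For completeness, $K_{Finite}^\prime$ can be assembled from the pieces already sketched (the \texttt{c\_finite} and \texttt{decoding\_time} helpers above); the following Python sketch is an illustration only, with $S_{BE}^{\epsilon_{PE}}$ supplied as a precomputed input:
\begin{verbatim}
import numpy as np

def delta_aep(m, eps, eps_s, n):
    # The AEP penalty term defined above.
    return ((m + 1) ** 2
            + 4.0 * (m + 1) * np.sqrt(np.log2(2.0 / eps_s ** 2))
            + 2.0 * np.log2(2.0 / (eps ** 2 * eps_s))
            + 4.0 * eps_s * m / (eps * np.sqrt(n)))

def k_finite_prime(n, n_r, snr, s_be, m, eps, eps_s, eps_pa, eps_ec,
                   codes, c_h):
    # Final key rate in bits per second.
    num = (n * (c_finite(n_r, snr, eps_ec) - s_be)
           - np.sqrt(n) * delta_aep(m, eps, eps_s, n)
           - 2.0 * np.log2(1.0 / (2.0 * eps_pa)))
    return num / decoding_time(n_r, codes, c_h)
\end{verbatim}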
\subsection{Optimised LDPC Blocklength for CV-QKD Reconciliation}
\label{section:Optimal}
Previously, we have shown that parallelisation reduces the decoding time at the expense of increased information disclosure to Eve. In this section, we demonstrate an optimisation process to find the unique $N_R$ maximising $K_{Finite}^\prime$.
We consider a scenario where $\epsilon_{EC}$, $\epsilon_{s}$, $\epsilon_{PA}$, and $\epsilon_{PE}$ are manually set by end users. We can formulate the optimisation problem for this scenario as
\begin{equation}
\label{eq:Optimisation1}
\begin{aligned}
\max_{N_R} \quad & K_{Finite}^\prime\\
\textrm{s.t.} \quad & 10^5\leq N_R \leq N \,,\\
\end{aligned}
\end{equation}
where $K_{Finite}^\prime$ is defined in Eq.~\ref{eq:BPSRate}. The lower limit of $10^5$ arises from our earlier discussion on ensuring $D_j$ is independent of $N_R$ (for smaller values of $N_R$ the penalty cost will be prohibitive and we ignore this range). We notice that $\Delta t$ is a linear function of $N_R$\footnote{Recalling Eq.~\ref{eq:DeltaT}, we note that for a given LDPC code, $E_j$ is only dependent on the degree distribution pairs and $D_{j}$ is only a function of the degree distribution pairs and $\gamma$.}, and that $S_{BE}^{\epsilon_{PE}}$, $\Delta_{AEP}$ and $\epsilon$ are independent of $N_R$. Therefore, we can rewrite Eq.~\ref{eq:Optimisation1} in the simplified form
\begin{equation}
\label{eq:simplifiedOpt}
\begin{aligned}
\max_{N_R} \quad & \frac{N C_{Finite} - B_1}{B_2 N_R}\\
\textrm{s.t.} \quad & 10^5\leq N_R \leq N\,,\\
\end{aligned}
\end{equation}
where
\begin{eqnarray}
B_1&=&N S_{BE}^{\epsilon_{PE}} + \sqrt{N}\Delta_{AEP}+2\log_2\frac{1}{2\epsilon_{PA}}\label{eq:B1}\,,\\
B_2&=&\sum_{j=0}^{m-1}7(\frac{\sum_{b=2}^{P}\frac{\rho_b}{b}}{\sum_{a=2}^{\Lambda}\frac{\lambda_a}{a}})(\sum_{b=2}^{P}b\rho_b) D_{j}c_h\,.
\end{eqnarray}
To solve this optimisation problem, firstly we show that the second derivative of $K_{Finite}^\prime$ with respect to $N_R$ is less than zero for all $N_R$ considered, where
\begin{equation}
\label{eq:2diff_eq}
\begin{aligned}
\frac{d^2 K_{Finite}^\prime}{dN^2_R} =& \frac{-1}{4B_2 N^4_R}\left(\left(12N \log N_R -10 N \right) \right.\\
&\left. + 15 \sqrt{A}Q^{-1}(\epsilon_{EC}) N\sqrt{N_R} \right.\\
&\left. - \left( 8I_{AB}N N_R - 8B_1 N_R\right) \right) \,.
\end{aligned}
\end{equation}
Showing that the RHS of Eq.~\ref{eq:2diff_eq} is less than zero for all $N_R$ considered is equivalent to showing
\begin{equation}
\label{eq:2diff_eq2}
\begin{aligned}
12N \log N_R -10 N + 15 \sqrt{A}Q^{-1}(\epsilon_{EC}) N\sqrt{N_R}\\
> \left( 8I_{AB}N N_R - 8B_1 N_R\right) \,.
\end{aligned}
\end{equation}
We find that the LHS of inequality~\ref{eq:2diff_eq2} is greater than zero for $N>N_R>10$, $\gamma>0$, and $\epsilon_{EC}<\frac{1}{2}$, but the sign of the RHS is subject to specific CV-QKD parameters. However, through detailed numerical search we find that the inequality~\ref{eq:2diff_eq2} holds for the range of parameters anticipated for realistic CV-QKD deployments.\footnote{In using this technique it is important to check that the inequality holds for the chosen parameter range of interest. This is done for all calculations we show here, but also for a much wider range not shown. For example, we find for
$\epsilon=10^{-9}$, $N=10^9$ and $m=5$, the inequality~\ref{eq:2diff_eq2} holds for any combination of the remaining parameters selected from the ranges $V_A\in[1,34]$, $T\in[0,1]$ and $\xi=[0,0.05]$. }
For example, if we consider the following CV-QKD settings\footnote{The values of $\xi_{ch}$ and $\xi_d$ in the standard CV-QKD settings are predicted values after accounting for all noise terms \cite{kish2020feasibility}.} (in the following we refer to these as the standard CV-QKD settings); $N=10^9$, $m=5$, $\epsilon_{EC} = 2\epsilon_{s} = \epsilon_{PA} = \epsilon_{PE} = 2.5\times10^{-10}$, $V_A = 5$, $T = 0.9$, $\xi_{ch}=0.0186$ and $\xi_d=0.0133$, we find that the LHS of the inequality is greater than $10^{12}$ and the RHS of the inequality is less than $10^{11}$.
To find the maximised $K_{Finite}^\prime$, we first find the value of $N_R$ that satisfies $\frac{d K_{Finite}^\prime}{d N_R} = 0$, where
\begin{equation}
\label{eq:diff_eq}
\begin{aligned}
\frac{d K_{Finite}^\prime}{d N_R} & = \frac{N_R N \frac{dC_{Finite}}{dN_R}-\left(N C_{Finite} - B_1\right)}{B_2 N^2_R}\\
&=-\frac{NI_{AB}}{B_2 N^2_R} - \frac{N}{2B_2 N^3_R}+\frac{3N\sqrt{A}Q^{-1}(\epsilon_{EC})}{2B_2 N^{\frac{5}{2}}_R } \\
&+\frac{N\log N_R}{B_2 N^3_R} + \frac{B_1}{B_2 N^2_R}\,. \\
\end{aligned}
\end{equation}
Therefore, our equation to be solved is given by
\begin{equation}
\label{eq:diff_eq2}
-\frac{NI_{AB}}{B_2 N^2_R} - \frac{N-2N\log N_R}{2B_2 N^3_R} +\frac{3N\sqrt{A}Q^{-1}(\epsilon_{EC})}{2B_2 N^{\frac{5}{2}}_R } + \frac{B_1}{B_2 N^2_R}= 0\,.
\end{equation}
Eq.~\ref{eq:diff_eq2} can be solved via a numerical root-finding algorithm\cite{mathews2004numerical}. If we consider the
standard CV-QKD settings, we obtain a stationary point at $N_R = 3.6\times 10^7$ bits (the value of $K^\prime_{Finite}$ at this $N_R$ is discussed later).
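Equivalently, one may maximise Eq.~\ref{eq:simplifiedOpt} directly over the bounded interval. A Python sketch (illustrative; \texttt{c\_finite} as above, with $B_1$ and $B_2$ precomputed from Eq.~\ref{eq:B1} and the line below it) is:
\begin{verbatim}
from scipy.optimize import minimize_scalar

def optimal_block_length(n, b1, b2, snr, eps_ec):
    # Maximise (N * C_Finite(N_R) - B1) / (B2 * N_R) over [1e5, N].
    obj = lambda n_r: -(n * c_finite(n_r, snr, eps_ec) - b1) / (b2 * n_r)
    return minimize_scalar(obj, bounds=(1e5, n), method='bounded').x
\end{verbatim}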
In closing this section we note the following in regard to alternate key-rate equations. Although we have adopted a specific CV-QKD protocol and specific key rate equation, different security analyses of our adopted protocol, and analyses of different CV-QKD protocols, can have a key rate with a similar form to that shown in Eq.~\ref{eq:BPSKeyRate} albeit with different bounded rates. When comparing different key-rate equations it is perhaps useful to only consider the leading terms. This allows for clearer tractability in determining the optimal $N_R$. For example, if the last term in our Eq.~\ref{eq:AEP} is neglected (a good approximation for reasonable $N$ values) then the functional dependence on $N$ of many key-rate equations is identical. In such circumstances, the same key rate can be mapped to alternate security settings of the different key-rate equations, and our framework applies directly. For example, using the key rate equation of \cite{pirandola2021limits} we find the same optimal $N_R$ albeit with a normalised key rate of one for the following security settings (described with the notations in\cite{pirandola2021limits}): the number of quantum signal exchanged, $N=10^9$, the number of quantum signals sacrificed for parameter estimation, $m=\frac{N}{2}$, the probability of successful reconciliation, $p_{ec} = 0.95$, the smoothing parameter, $\epsilon_{s} = 3.2\times10^{-13}$, the hashing parameter $\epsilon_{h} = 4.7\times10^{-13}$, the probability that the true value of the square-root transmissivity is less than the value obtained by the worst-case estimator used in parameter estimation, $\epsilon_{pe} = 1.2\times 10^{-13}$, the residue probability that Alice and Bob's bit strings are different after passing the error correction, $\epsilon_{cor} = 0.8\times 10^{-13}$, the modulation variance, $\sigma^2_x = 7.1$, the size of the effective alphabet after quantising the continuous quadrature values, $d=2^5=32$, the channel transmissivity, $T = 0.81$, the channel noise $\sigma^2_z = 0.035$, and the quantum duty to pay by the detector, $\nu_{det}=2$ for heterodyne detection. Key-rate equations with different functional dependence on $N_R$ can still be analysed within the framework proposed here - albeit via modified optimisation relations. Examples of this arise in consideration of DV-QKD protocols (albeit for which reconciliation optimisation is usually less important). We also note that extension of our adopted CV-QKD key-rate equation, to cover the most general attack, is possible via the use of the Gaussian de Finetti reduction technique and the inclusion of an energy test \cite{leverrier2017security}. This leads to a scaling of order $N^4$ in terms of the security cost.
\section{Experimental Results}
\label{section:Experiment}
We conducted an experiment of our GPU-based SR on an NVIDIA GTX 1060 GPU (with 6GB GPU memory). The GPU provides up to 1280 Compute Unified Device Architecture (CUDA) threads that can be run simultaneously. We determine the BER after decoding, denoted as $p_{Decode}$ (different from the $p_j$ obtained at Step 4 of the protocol). We also measured the decoding time for $mN$ bits. Below, we determine the experimental final key rate, $K_{exp}^\prime$, and compare it with $K_{Finite}^\prime$ to verify the optimality of the $N_R$ found in Section \ref{section:Optimal}. We note that the experimental data shown in Figs.~\ref{fig:BER},~\ref{fig:DTime}, and~\ref{fig:KeyRateNR} is averaged over 50 runs. We also note, in our specific GPU the number of threads available was less than the number of blocks when $N_R=10^5$. This was numerically compensated for in the results shown.
In the experiment, we assume that Alice and Bob complete the first four steps of the protocol described in Section~\ref{section:SystemOverview}. Since Alice and Bob's quadrature values are the input and output of an AWGNC, we can generate these quadrature values for SR in the following way. 1) For a given $V_A$, $T$, and $\xi$, we obtain the noise power, $\sigma^2_n$, of the AWGNC from Eq.~\ref{eq: SNR}
\begin{equation}
\gamma = \frac{\frac{1}{2}V_AT}{1+\frac{1}{2}\xi} = \frac{1}{\frac{1+\frac{1}{2}\xi}{\frac{1}{2}V_AT}} = \frac{1}{\sigma^2_n}\,.
\end{equation}
2) We generate $N$ random numbers from the distribution $N(0, V_A)$. These numbers are regarded as Alice's quadrature values and denoted as $\mathbf{x} = \{x_0, x_1, \cdots, x_{N-1}\}$. 3) We obtain Bob's quadrature values $\mathbf{y}$, where the \emph{i}th component is given by $y_i = x_i + n $, and where $n$ is a random real number drawn from $N(0,\sigma^2_n)$.
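These steps reduce to a few lines of Python (an illustrative sketch; the function name and the seed are ours):
\begin{verbatim}
import numpy as np

def simulate_quadratures(n, v_a, t, xi, seed=0):
    # Draw Alice's values from N(0, V_A) and add AWGN with the
    # variance implied above: sigma_n^2 = (1 + xi/2) / (V_A T / 2).
    rng = np.random.default_rng(seed)
    sigma2_n = (1.0 + 0.5 * xi) / (0.5 * v_a * t)
    x = rng.normal(0.0, np.sqrt(v_a), size=n)
    y = x + rng.normal(0.0, np.sqrt(sigma2_n), size=n)
    return x, y

x, y = simulate_quadratures(10**6, v_a=5.0, t=0.9, xi=0.0319)
\end{verbatim}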
\subsection{Decoding Error and Time}
\label{section:TimeSimulation}
In Fig.~\ref{fig:BER}, we compare the BER performance of different $N_R$ settings for each $T$ considered\footnote{Recall, we are particularly interested in the satellite-to-Earth channel. As in other works, we assume losses for this channel are dominated by diffraction effects, and therefore the transmissivity can be held constant. We further assume post-selections, using a bright classical beam sent along with the quantum signals (but different polarisation), remove any significant transmissivity deviations. As discussed elsewhere\cite{kish2020feasibility}, some receiver/transmitter apertures, coupled to detailed phase-screen simulations of satellite downlink channels, render the constant-transmissivity assumption reasonable\cite{villasenor2020atmospheric}. If the transmissivity is highly variable the optimal block length, $N_R$, can be calculated by an expectation over the transmissivity density function.}. The solid lines represent the best straight-line fit of the experimental data for each $N_R$. We note that for a given $T$, the LDPC code rates are set to $10\%$ lower than the capacity for that $T$. In Fig.~\ref{fig:BER}, we observe that larger LDPC codes lead to a lower $p_{Decode}$. This observation is consistent with the result of the finite-length information theory\cite{polyanskiy2010channel}.
In Fig.~\ref{fig:DTime}, we determine the measured decoding time for $mN$ bits in the experiment, $\Delta t_{exp}$, which is normalised to the value at $N_R=10^9$ ($\Delta t_{exp}=310$ seconds). The differences in the measured decoding time at each $N_R$ reflect additional decoding iterations. Our results confirm the reduction of $\Delta t_{exp}$ when a smaller $N_R$ is used. We note that it is difficult to quantify the exact relation between $\Delta t_{exp}$ and $N_R$ since $\Delta t_{exp}$ includes the elapsed time taken by SR's arithmetic operations and the elapsed time for the overhead, mostly due to memory access operations and synchronisation (we estimate this shortly).
However, our experiment generally confirms the advantage of parallelisation in decoding time.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{FER.eps}
\caption{$p_{Decode}$ after decoding for different $T$ and $N_R$. For each colour, the crosses are the experimental data obtained.\label{fig:BER}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{DTime.eps}
\caption{Normalised decoding time vs. $T$ for different $N_R$. For each colour, the crosses are $\Delta t_{exp}$ obtained in our experiment. For each $T$, the data presented is the ratio of $\Delta t_{exp}$ for each $N_R$ to the decoding time for $N_R=10^9$.
The advantage offered by GPU parallel processing is seen to improve decoding times by $\sim 30\%$ for sub-blocks of length $10^6$.\label{fig:DTime}}
\end{figure}
\subsection{The Optimal $N_R$}
\label{section:NRResult}
Previously, we have analytically found the optimal $N_R$ which maximises $K_{Finite}^\prime$. Now we wish to check the compatibility of this $N_R$ with the value that maximises $K_{exp}^\prime$ based on realistic LDPC codes and the specific GPU used in our experiment.
For this experiment, we pre-built a database to store LDPC codes with their code rates ranging from $0.01$ to $0.8$ and their block lengths ranging from $10^6$ to $10^9$. For code rates less than $0.1$, Multi-Edge-Type LDPC codes (degree distributions outlined in \cite{wang2017efficient}) were used. These achieve a lower $p_{Decode}$ compared to irregular LDPC codes with the same code rate. For code rates greater than or equal to $0.1$, we adopted the irregular LDPC codes (degree distributions outlined in\cite{mateo2011efficient}). At such code rates, these latter codes have the same $p_{Decode}$ performance as the Multi-Edge-Type counterparts, but allow for faster code construction.
We use the following process to obtain $K_{exp}^\prime$ for each $N_R$ considered. 1) For each $\mathbf{S_{j}}$, we select an LDPC code whose code rate is closest to the capacity determined by $p_j$ from the pre-built database. 2) We use the selected code to test whether the probability of an error correction failure of that code is less than $\epsilon_{EC}$. 3) If the test fails, we decrease $R_j$ by $\Delta R = 0.05$, select another LDPC code from the database and go back to Step 2). Otherwise, we mark the selected code as ``good'' and then go back to Step 1) for the next slice. The process terminates when all slices have been successfully decoded. We then obtain $K_{exp}^\prime$ using
\begin{equation}
\label{eq:ExpKeyRate}
K_{exp}^\prime = \frac{N(\sum_{j=0}^{m-1}R_j - S_{BE}^{\epsilon_{PE}} ) - \sqrt{N}\Delta_{AEP}-2\log_2\frac{1}{2\epsilon_{PA}}}{\Delta t_{exp}}\,.
\end{equation}
We note that the overhead mentioned earlier is one of the sources causing the discrepancy between $K_{exp}^\prime$ and $K_{Finite}^\prime$. To compensate for the additional decoding time due to the overhead for all $N_R$ considered, we adopt a numerical search for a compensated $\Delta t_{exp}$ so that $|K_{exp}^\prime - K_{Finite}^\prime|^2$ is minimised. Our result shows that $K_{exp}^\prime$ after compensation is approximately $10\%$ higher than the uncompensated $K_{exp}^\prime$. In Fig.~\ref{fig:KeyRateNR}, we plot $K_{Finite}^\prime$ and $K_{exp}^\prime$ (compensated and uncompensated) with respect to $N_R$ based on Eqs.~\ref{eq:BPSRate} and \ref{eq:ExpKeyRate}, respectively.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{KeyRatewOptimalCorrection}
\caption{Final Key Rate vs. LDPC Block Length. Here, we adopt the standard CV-QKD settings for the red and the blue curves. For the green curve, we adopt the following settings: $N = 10^{10}$, $m=5$, $\epsilon_{EC} = 2\epsilon_{s} = \epsilon_{PA} = \epsilon_{PE} = 2.5\times10^{-7}$, $V_A = 4$, $T = 0.92$, $\xi_{ch}=0.0180$ and $\xi_d=0.0128$. We also set $c_h = 3.2\times 10^{-9}$ seconds for the red and green curves. All the curves are normalised to the maximum of the red curve, i.e. $K_{Finite}^\prime = 4.3 \times 10^5$ bits per second at $N_R = 3.6 \times 10^7$. \label{fig:KeyRateNR}}
\end{figure}
The optimal $N_R = 3.6 \times 10^7$ and $5\times 10^6$ are found for $K_{Finite}^\prime$ and $K_{exp}^\prime$, respectively. Assuming the usual practical scheme where $N_R$ is simply selected randomly, our results for the standard CV-QKD settings show that using the optimal $N_R$ for SR leads to a maximum gain of $33\%$ on the final key rate. Other CV-QKD settings will provide different maximum gains. For example, the green curve of Fig.~\ref{fig:KeyRateNR} provides for a $66\%$ gain (not shown in the figure are the rates for $N_R>10^9$). This point emphasises the need to consider the parameter settings before determining both the optimal $N_R$ and the gain achieved relative to the standard practice of simply picking some $N_R<N$. Note that in our rate determinations, the normalisation of one in Fig.~\ref{fig:KeyRateNR} corresponds to a key rate $K_{Finite}^\prime = 4.3\times 10^5$ bits per second, based on our hardware-specific value of $c_h=3.2 \times 10^{-9}$ seconds.\footnote{We adopted the following method to determine $c_h$. For $m$ LDPC codes with $N_R = 10^6$ and $T=0.9$, we obtained the total number of arithmetic operations for those codes. Next, we measured the elapsed time to reconcile a block of $10^6$ quadrature values. We then obtained $c_h$ by dividing the number of arithmetic operations to this measured elapsed time.} The reconciliation rate associated with this same key rate is $3.9 \times 10^6$ bits per second. Assuming the source rate of the quantum signalling was high enough (\emph{e.g.} a 100MHz source),
this reconciliation rate is higher than that required for delivery of secured ($\epsilon=10^{-9}$) keys ($N=10^{9}$) within flyover times (270 seconds) consistent with Micius-type orbits (see the introduction).
Comparing the two curves in Fig.~\ref{fig:KeyRateNR}, we find that there is still a small discrepancy between $K_{Finite}^\prime$ and $K_{exp}^\prime $ although they share a similar trend. The reason for such discrepancy is twofold.
Firstly, there remain small trapping sets in the LDPC matrices.\footnote{These trapping sets are the primary reason that additional decoding iterations are consumed for only a marginal decrease of the decoding error, i.e. the error floor effect\cite{tian2004selective}.}
Although not part of our analysis (but included in the uncompensated curve of Fig.~\ref{fig:KeyRateNR}), we attempted to remove these trapping sets in our codes by using the algorithm of \cite{tian2004selective} so that fewer iterations will be used \cite{nguyen2012construction}. This reduced the number of decoding iterations by approximately $15\%$ for $N_R = 10^6$ but did not remove the trapping sets completely. The remnant trapping sets
inside the LDPC matrices lead to a larger number of decoding iterations than predicted by $D_j$. Determined by the Density Evolution Algorithm, $D_j$ is a lower bound due to the assumption of cycle-free matrices and infinitely long block length\cite{landner2005algorithmic}.
Secondly, we note that selecting $R_j$ so that $\sum_{j=0}^{m-1}R_j$ achieves $C_{Finite}$ may lead to a $p_{Decode}$ higher than the given $\epsilon_{EC}$.
In our experiment (and included in the shown results), we find that the $K_{exp}^\prime$ is $9\%$ lower than the $K_{Finite}^\prime$ predicted by Eq.~\ref{eq:BPSRate} due to the gap between $\sum_{j=0}^{m-1}R_j$ and $C_{Finite}$.
\subsection{Final-key Effect for a Given $N_o$}
\label{section:KeyRateFiniteResult}
In the satellite-based scenario, Alice and Bob start the protocol with only $N_o$ quantum signals because the satellite is only visible to the ground station for a limited time frame. In this section, we revisit the analysis of the final key rate in the finite-key regime and conduct a numerical search to show how the final key rate $K$ is affected by $N_e$, for a given $N_o$ and $\epsilon$.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{KvsNeGivenNo}
\caption{$K$ (in bits per pulse) vs. $N_e$ when $N_o = 10^9$ (blue) and $N_o = 10^{10}$ (red). Here, we adopt the standard CV-QKD settings except that $N = 2\left(N_o - N_e\right)$ varies for different $N_e$.\label{fig:KvsNeGivenNo}}
\end{figure}
In Fig.~\ref{fig:KvsNeGivenNo}, we observe that $K$ is cut off when $N_e$ approaches $10^7$ and $10^9$ (for $N_o = 10^9$). At $N_e=10^7$, the parameter confidence intervals are not consistent with a positive $K$. As $N_e$ approaches $N_o$, $K$ decreases rapidly since the number of quantum signals for reconciliation approaches zero. Similar remarks can also be applied for $N_o = 10^{10}$. In Fig.~\ref{fig:KvsNeGivenNo}, we see that setting $N_e = \frac{N_o}{2}$ is an acceptable compromise between accommodating finite-key effects and preserving enough quantum signals for the post-processing. In the appendix, we investigate varying $N_o$, with $N_e$ always constrained to $N_e=\frac{N_o}{2}$.
\section{Discussion}
We close our work with a brief discussion on recent developments in high-rate CV-QKD reconciliation via the massive parallelisation offered by GPUs and FPGAs.
In \cite{milicevic2017key}, a GPU-based LDPC decoder was implemented, achieving a rate of $9\times 10^6$ bits per second. In this implementation all GPU threads were used to minimise the decoding time of a \emph{single} LDPC block of $2^{20}$ bits. In \cite{wang2018high} and \cite{li2020high}, the reconciliation rates were further increased to $3\times10^7$ and $6\times 10^7$ bits per second, respectively, by simultaneously decoding \emph{multiple} LDPC blocks of length $10^6$ bits on a GPU. To our knowledge, the highest reconciliation rate obtained thus far is $2\times10^8$ bits per second - an outcome based on an FPGA \cite{yang2020high}. All of these works show promise for the delivery of practical CV-QKD systems in which reconciliation does not become the bottleneck of the QKD process. However, none have introduced the type of optimisation we have introduced in this work and, therefore, all are likely candidates for further improvement in terms of the choice of the optimal block length. Based on our results we would anticipate this improvement to be significant for a wide range of CV-QKD parameter settings. Our work is also different from the above works in the following (less important) aspects.
\textit{1) Reconciliation schemes for satellite-based CV-QKD.} High-speed implementations realised in \cite{wang2018high,li2020high} have used multidimensional reconciliation \cite{leverrier2008multidimensional}. This multidimensional scheme is preferred for low SNR - but not so for the higher SNRs available via the Gaussian post-selection technique - a technique likely to be more useful in the satellite context \cite{hosseinidehaj2018satellite}.
\textit{2) Low probability of reconciliation failures.}
In CV-QKD, Alice and Bob have to discard a block of reconciled bits if they detect a reconciliation failure (coding error) for that block. To compensate for the discarded bits, additional quantum signals need to be transmitted and reconciled, causing unwanted delays. Such delays can be problematic for satellite-based systems since the satellite is not always visible to the ground station.
The FPGA-based reconciliation of \cite{yang2020high} may suffer from this problem due to the limited precision of arithmetic operations leading to higher reconciliation failure rates. As shown in many GPU-based works (including this work), GPU-based reconciliation offers a lower probability of reconciliation failure.
\section{Conclusion}
In this work, we have carried out a full-blown analysis and experimental implementation of a Slice Reconciliation scheme applied to a specific CV-QKD protocol (with post-selection) under simulated channel conditions anticipated for satellite-to-Earth channels. We have provided the optimal solution for the classical reconciliation process for this CV-QKD protocol in the context of massive parallelisation under the finite key regime. More specifically, we have identified the optimal block length when a large code block is to be subdivided so as to improve the final secure key rate in bits per second. Although our results were based on a specific CV-QKD protocol and a specific GPU architecture, the type of analysis we have introduced here will apply in general terms to a large suite of CV-QKD protocols run over any form of architecture that offers massive parallelisation. Our results, therefore, pave the way to optimal reconciliation system design for a wide range of practical CV-QKD systems that operate in the finite key regime. As the demand on the finite key size grows (better security thresholds), and technology advances lead to larger quantum signalling rates, the importance of optimised multithreaded CV-QKD reconciliation will grow.
\bibliographystyle{mybst}
\IEEEPARstart{T}{here} are several main paradigms (which may overlap) among numerous deep learning studies, including new network architecture, new training loss, new training data perturbation scheme, and new learning strategy (e.g., weighting). Training data perturbation mainly refers to feature and label perturbations.
In feature perturbation, many data augmentation tricks can be viewed as feature perturbation methods when the input is the raw feature (i.e., raw samples). For example, cropped or rotated images can be seen as the perturbed samples of the raw images in computer vision; sentences with modified words can also be seen as perturbed texts in text classification. Another well-known feature perturbation technique concerns the generation of adversarial training samples~\cite{madry2017towards}, which has attracted great attention in various AI applications, especially in computer vision~\cite{xie2020adversarial} and natural language processing~\cite{jin2020bert}. Adversarial samples are those that can fool the learned models. They can be obtained by solving the following objective function:
\begin{equation}
\bm{x}_{adv} = \bm x + \arg\mathop {\max }\limits_{\left\| \bm \delta \right\| \le \epsilon } l(f(\bm{x} + \bm \delta ),\bm y),\label{commomadv}
\end{equation}
where $\bm x$ is the input or the hidden feature; $\bm \delta$ is the perturbation term; $\epsilon$ is the perturbation bound; $\bm y$ is the one-hot label; and $\bm x_{adv}$ is the generated adversarial sample. A number of methods have been proposed to optimize Eq. (\ref{commomadv})~\cite{goodfellow2014explaining, madry2017towards}. Adversarial samples can be used to train more robust models.
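For instance, the widely used PGD solver~\cite{madry2017towards} for Eq. (\ref{commomadv}) under an $\ell_\infty$ bound can be sketched as follows. This is a minimal PyTorch sketch for illustration: the labels are given as class indices rather than one-hot vectors, the step size and step count are illustrative choices, and clipping to the valid input range is omitted.
\begin{verbatim}
import torch
import torch.nn.functional as F

def pgd_example(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Approximately maximize the loss within the l_inf ball
    # of radius eps around x (projected gradient ascent).
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto ball
    return x_adv.detach()
\end{verbatim}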
In label perturbation, the labels are modified or corrected to avoid overfitting and noise. For example, a popular yet simple training trick, label smoothing~\cite{szegedy2016rethinking}, generates a new label for each sample according to $\bm y' = \bm y + \lambda (\frac{\bm I}{C} - \bm y)$, where $\bm y$ is the one-hot vector label; $C$ is the number of categories; $\bm I$ is a vector with all elements equal to 1; $(\frac{\bm I}{C} - \bm y)$ is the perturbation term; and $\lambda$ is a hyper-parameter. Other methods such as the Bootstrapping loss~\cite{reed2014training}, label correction~\cite{patrini2017making,wang2021proselflc}, and the Meta label corrector~\cite{wu2020learning} can also be seen as types of label perturbation. Mixup~\cite{zhang2017mixup} can be viewed as a combination of feature and label perturbation.
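In code, this family of label perturbations amounts to a one-line transform; for example, label smoothing can be sketched as follows (an illustrative sketch, where \texttt{y\_onehot} is an $N \times C$ tensor):
\begin{verbatim}
def smooth_labels(y_onehot, lam):
    # y' = y + lam * (I/C - y), i.e., (1 - lam) * y + lam / C.
    c = y_onehot.size(-1)
    return y_onehot + lam * (1.0 / c - y_onehot)
\end{verbatim}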
Logit vectors (or logits) are the outputs of the final feature encoding layer in almost all deep neural networks (DNNs). Although logits are important in the DNN data pipeline, only a few learning methods in data augmentation and long-tail classification directly (without optimization) or implicitly employ class-level logit perturbation. Based on the loss analysis of these methods, the loss variations incurred by logit perturbation are highly related to the purpose of positive/negative augmentation\footnote{In this study, negative augmentation denotes the augmentation which aims to reduce the (relative) performances of some categories. Accordingly, existing augmentation methods are positive.} on training data. A theoretical analysis is conducted to reveal the connections among loss variations, performance improvements, and class-level logit perturbation. Accordingly, new methodologies are proposed to learn a class-level logit perturbation (LPL) for single-label and multi-label learning tasks, respectively, in this study. Extensive experiments are run on benchmark data sets for single-label classification and multi-label classification tasks. The results show the competitiveness of our methodologies.
Parts of the results in this paper were published originally in its conference version \cite{li2022logit}. In our conference version, several classical methods are rediscussed in terms of logit perturbation and positive/negative augmentation. A new method is proposed to learn to perturb logits which can be used in implicit data augmentation and long-tail classification contexts for single-label classification tasks. Experimental results show that our method outperforms existing state-of-the-art methods related to logit perturbation in both contexts. This paper extends our earlier work in several important aspects:
\begin{itemize}
\item We conduct a theoretical analysis for the roles of logit perturbation-based explicit negative and positive augmentations in learning for binary classification tasks. Two typical scenarios, namely, class imbalance and variance imbalance, are considered in our analysis.
\item We extend our LPL algorithm to the multi-label classification, which contains both class and variance imbalances, and empirically validate its effectiveness on multi-label classification benchmarks.
\item Extensive experiments on large-scale long-tail data sets such as iNaturalist are performed. Our method LPL still achieves competitive results.
\end{itemize}
\section{Related Work}
\subsection{Data Augmentation}
Data augmentation is prevalent in almost all deep learning approaches. In its early stages, heuristic operations on raw samples were utilized, such as image flipping, image rotation, and word replacement in sentences. Recently, advanced tricks have been investigated, such as mixup~\cite{zhang2017mixup}, semantic data augmentation~\cite{wang2019implicit}, and meta semantic augmentation\cite{li2021metasaug}. In mixup, given a sample $\{\bm x_1, \bm y_1\}$, its perturbation term is $\left\{ {\lambda \left( {\bm x_2 - \bm x_1} \right), \lambda \left( {\bm y_2 - \bm y_1} \right)} \right\}$, where $\lambda$ is a random parameter (not a hyper-parameter), and $\{\bm x_2, \bm y_2\}$ is another randomly selected sample. Hu et al.~\cite{hu2019learning} introduce reinforcement learning to automatically augment data.
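For reference, the mixup perturbation described above can be sketched as follows (a PyTorch sketch for illustration; $\{\bm x_2, \bm y_2\}$ is another randomly selected sample, and $\lambda$ is drawn from a Beta distribution, as is common practice):
\begin{verbatim}
import torch

def mixup_pair(x1, y1, x2, y2, alpha=1.0):
    # Perturbation {lam * (x2 - x1), lam * (y2 - y1)},
    # with lam sampled from Beta(alpha, alpha).
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return x1 + lam * (x2 - x1), y1 + lam * (y2 - y1)
\end{verbatim}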
In this study, existing data augmentation is called positive data augmentation. Negative data augmentation, which is proposed in this study, may be helpful when we aim to restrain the (relative) performance of certain categories (e.g., to keep fairness in some tasks).
\subsection{Long-tail Classification}
Real data usually conform to a skewed or even long-tail distribution. In long-tail classification, the proportions of tail samples are considerably small compared with those of head samples. Long-tail classification methods may be divided into two main strategies. The first strategy is to design new network architectures. Zhou et al.~\cite{zhou2020bbn} design a bilateral-branch network to learn the representations of head and tail samples. The second strategy is to modify the training loss. Within this strategy, the weighting scheme~\cite{fan2017learning} is the most common practice. Relatively larger weights are exerted on the losses of the tail samples. Besides weighting, some recent studies modify the logits to change the whole loss, such as logit adjustment (LA)~\cite{menon2020long}. This new path achieves higher accuracies on benchmark data corpora compared with weighting~\cite{wu2021adversarial}.
\subsection{Multi-label Classification}
Real data usually also contain multiple objects. Unlike the single-label classification tasks mentioned above, multi-label classification poses two main challenges, namely, the co-occurrence of labels and the dominance of negative samples \cite{wu2020dist,guo2021long}. Li et al.\cite{8766125} introduce a novel and effective deep metric learning method, which explores the relationship between images and labels by learning a two-way deep distance metric over two embedding spaces. Wei et al.\cite{8830456} investigate the impact of labels on evaluation metrics for large-scale multi-label learning and propose to restrain labels that have less impact on performance to speed up prediction and reduce model complexity. Wu et al.\cite{wu2020dist} perturb logits to emphasize the positive samples of tail categories to prevent class-specific overfitting. In multi-label classification tasks, the weighting scheme \cite{lin2017focal} is also typically used.
\subsection{Adversarial Training}
Adversarial training is an important way to enhance the robustness of neural networks \cite{deng2021adversarial,cui2021learnable}. The most important step in adversarial training is to generate adversarial training examples in Eq. (\ref{commomadv}), which can be used to improve the robustness of neural networks. Numerous works have been proposed to generate adversarial examples \cite{goodfellow2014explaining,madry2017towards,andriushchenko2020square}. Gradient-based attack methods are commonly used \cite{aldahdooh2022adversarial}. Goodfellow et al. \cite{goodfellow2014explaining} propose to quickly compute adversarial training examples by using the gradient sign. Madry et al. \cite{madry2017towards} propose projected gradient descent (PGD) to compute the adversarial training samples. PGD executes an iterative computation that performs multiple gradient descent updates with small steps within the perturbation bound $\epsilon$ to update the adversarial training samples.
\section{Methodology}
This section first discusses several typical learning methods related to logit perturbation.
\subsection{Logit Perturbation in Existing Methods}
The notations and symbols are defined as follows. Let $\displaystyle S = \{\bm x_i, \bm y_i\}_{i=1}^N$ be a corpus of $N$ training samples, where $\bm x_i$ is the input feature and $\bm y_i$ is the label. In single-label classification, $\bm y_i$ is a one-hot vector. In multi-label classification, ${\bm y_i}=\left[ {{y_{i,1}},{y_{i,2}},\cdots ,{y_{i,C}}} \right] \in {\left\{ {0,1} \right\}^C}$. Let $C$ be the number of categories and $\pi_c = N_c/N$ be the proportion of the samples, where $N_c$ is the number of samples that contain the $c$th category in $S$. Without loss of generality, we assume that $\pi_1 > \cdots > \pi_c > \cdots > \pi_C$.
Following Menon et al.~\cite{menon2020long} and Wu et al.~\cite{wu2020dist}, we determine the head and tail categories by $N_c$: a larger $N_c$ indicates that $c$ is a head category, and a smaller $N_c$ indicates a tail category. Following Guo et al.~\cite{guo2021long}, if $y_{i,c}=1$, $\bm x_i$ is a positive sample of category $c$; otherwise, $\bm x_i$ is a negative sample of category $c$. Let $\bm u_i$ be the logit vector of $\bm x_i$, obtained by $\bm u_i = f(\bm x_i,\bm W)$, where $f(\cdot,\cdot)$ is the deep neural network with parameters $\bm W$. Let ${\bm \delta _i}$ be the perturbation term of $\bm x_i$. Let $\mathcal{L}$ be the entire training loss and $l_i$ be the loss of $\bm x_i$. The standard cross-entropy (CE) loss is used throughout the study.
\textbf{Logit adjustment (LA)}~\cite{menon2020long}. This method is designed for single-label long-tail classification and achieves competitive performance in benchmark data sets~\cite{wu2021adversarial}. The employed loss in LA is defined as follows:
\begin{equation}
\begin{aligned}
\mathcal{L} & = -\sum\nolimits_i {\log \frac{{\exp ({u_{i,{k}}} + \lambda \log {\pi _{{k}}})}}{{\sum\nolimits_c {\exp ({u_{i,c}} + \lambda \log {\pi _c})} }}},
\end{aligned}\label{LA_pert}
\end{equation}
where $u_{i,k}$ is the $k$th element of $\bm u_i$; $y_{i,k}$ is the $k$th element of $\bm y_i$; $c$ and $k$ are category indices; and $k$ satisfies $y_{i,k}=1$.
In Eq. (\ref{LA_pert}), the perturbation term $\bm \delta_i$ is as follows:
\begin{equation}
{\bm \delta _i} = {\bm {\tilde \delta }} = \lambda {[\log{\pi _1}, \cdots, \log{\pi _c}, \cdots ,\log {\pi _C}]^T},
\end{equation}
where $\bm{\tilde \delta}$ is a corpus-level vector\footnote{Corpus level is viewed as a special kind of class level in this study.} and $\bm \delta _i$ is a sample-level vector; thus, ${\bm \delta _i}$ is identical for all the samples in the corpus $S$. Eq. (\ref{LA_pert}) can be re-written as follows:
\begin{equation}
\mathcal{L} = -\sum\nolimits_i {\log \frac{{\exp ({ u_{i,{k}}})}}{{\sum\nolimits_c {\exp ({ u_{i,c}} + \lambda (\log {\pi _c} - \log{\pi _{{k}}}))} }}}.
\end{equation}
We previously assumed that $\pi_1 > \cdots > \pi_c > \cdots > \pi_C$; hence, the losses of the samples in the first category (head) are decreased, while those of the samples in the last category (tail) are increased. The loss variations of the remaining categories depend on the concrete loss of each sample.
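For illustration, the corpus-level LA perturbation can be sketched in a few lines of NumPy (the logits and priors below are illustrative placeholders):
\begin{verbatim}
import numpy as np

def la_perturb(logits, priors, lam=1.0):
    # corpus-level perturbation: identical for every sample
    delta = lam * np.log(priors)     # shape (C,)
    return logits + delta            # broadcast over the batch

logits = np.random.randn(4, 3)       # 4 samples, C = 3
priors = np.array([0.7, 0.2, 0.1])   # pi_1 > pi_2 > pi_3
perturbed = la_perturb(logits, priors)
\end{verbatim}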
\textbf{Implicit semantic data augmentation (ISDA)}~\cite{wang2019implicit}. ISDA is an implicit data augmentation method for single-label classification. Given a sample $\bm x_i$, ISDA assumes that each (virtual) new sample can be sampled from a distribution $\mathcal{N}\left( {\bm x_i,{\rm{ }}\bm \Sigma_{k}} \right)$, where $\bm \Sigma_{k}$ is the covariance matrix of the $k$th category. With $M$ (virtual) new samples for each sample, the loss becomes
\begin{equation}
\mathcal{L} = \frac{1}{{N \cdot M}}\sum\nolimits_{i = 1}^N {\sum\nolimits_{m = 1}^M {l(f(\bm x_{i,m},\bm W),{\bm y_i})} } ,\label{ISDA_pert_ori}
\end{equation}
where $\bm x_{i,m}$ is the $m$th (virtual) new sample for $\bm x_i$. When $M \rightarrow +\infty $, the upper bound of the loss in Eq. (\ref{ISDA_pert_ori}) becomes
\begin{equation}
\mathcal{L} = -\frac{1}{N}\sum\limits_{i = 1}^N {\log \frac{{\exp ({ u_{i,{k}}})}}{{\sum\limits_{c = 1}^C {\exp ({ u_{i,{c}}} + \frac{\lambda }{2}{{({{\bm{w}}_c} - {{\bm{w}}_{{k}}})}^T}{\bm \Sigma _{{k}}}(\bm{w}_c - {{\bm{w}}_{{k}}}))} }}},\label{ISDA_pert}
\end{equation}
where $c$ and $k$ are category indices and $k$ satisfies $y_{i,k}=1$; $\bm w_c$ is the network parameter for the logit vectors and $u_{i,c} = {\bm{w}}_c^T\tilde{\bm x}_i + b_c$; $\tilde{\bm x}_i$ is the output of the last feature encoding layer. In contrast with previous data augmentation methods, ISDA does not explicitly generate new samples or features.
In Eq. (\ref{ISDA_pert}), there is an implicit perturbation term $\bm {\delta _i}$ defined as follows:
\begin{equation}
\bm {\delta _i} = {\bm {\tilde \delta} _{{k}}} = \frac{\lambda }{2}\left[ {\begin{array}{*{20}{c}}
{{{({{\bm{w}}_1} - {{\bm{w}}_{{k}}})}^T}{\bm \Sigma _{{k}}}({{\bm{w}}_1} - {{\bm{w}}_{{k}}})}\\
\vdots \\
{{{({{\bm{w}}_C} - {{\bm{w}}_{{k}}})}^T}{\bm \Sigma _{{k}}}({{\bm{w}}_C} - {{\bm{w}}_{{k}}})}
\end{array}} \right],\label{ISDA_pert2}
\end{equation}
where $\bm {\tilde \delta}_{k}$ is a class-level vector; thus, $\bm \delta_i$ is identical for all samples of the same class. Each element of ${\bm \delta _i}$ is non-negative. Therefore, the new loss of each category from Eq. (\ref{ISDA_pert}) is larger than that from the standard CE loss.
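The class-level vector in Eq. (\ref{ISDA_pert2}) can be computed as follows (a NumPy sketch; names are ours):
\begin{verbatim}
import numpy as np

def isda_perturb(W, Sigma_k, k, lam=0.5):
    # W: (C, d) classifier weights; Sigma_k: (d, d) covariance of class k
    diff = W - W[k]                              # rows are (w_c - w_k)
    quad = np.einsum('cd,de,ce->c', diff, Sigma_k, diff)
    return 0.5 * lam * quad                      # non-negative, length C
\end{verbatim}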
\textbf{LDAM}~\cite{cao2019learning}. This method is designed for single-label long-tail classification. Its new loss is defined as
\begin{equation}
\begin{aligned}
&\mathcal{L}= - \sum\limits_{i = 1}^N {\log \frac{{\exp ({ u_{i,{k}}} - C {{({\pi _{{k}}})}^{-\frac{1}{4}}})}}{{\exp ({ u_{i,{k}}} - C {{({\pi _{{k}}})}^{-\frac{1}{4}}}) + \sum\nolimits_{c \ne {k}} {\exp ({ u_{i,c}})} }}},
\end{aligned}\label{LDAM_loss}
\end{equation}
where $k$ satisfies $ y_{i,k}=1$.
The perturbation term ${\bm \delta _i}$ is as follows:
\begin{equation}
{\bm \delta _i} = {\bm {\tilde \delta} _{{k}}} = \lambda {[0, \cdots, -C{({\pi _{{k}}})^{-\frac{1}{4}}}, \cdots , 0]^T},\label{LDAM_pert}
\end{equation}
which is also a category-level vector. The losses for all categories are increased in LDAM.
\begin{figure*}[h]
\centering
\includegraphics[width=1.0\textwidth,height=0.16\textwidth]{img/456.png} \vspace{-0.3in}
\caption{The relative loss variations ($\frac{l'-l}{l}$) of the three methods on different categories of different data sets. (a) and (b) show the relative loss variations of ISDA on CIFAR100 and CIFAR10, respectively. (c), (d), and (e) show the relative loss variations of ISDA, LA, and LDAM on CIFAR100-LT with an imbalance ratio of 100:1, respectively.}
\label{relative_loss_variations_single}
\end{figure*}
\begin{figure}[b]
\centering
\includegraphics[width=0.9\columnwidth, height=1.1in]{img/567.png} \vspace{-0.15in}
\caption{The relative loss variations ($\frac{l'-l}{l}$) of the two methods on different categories on COCO-MLT. ``pos" means the relative loss variations of positive samples. ``neg" means the relative loss variations of negative samples. }
\label{relative_loss_variations_multi}
\end{figure}
\textbf{Negative-tolerant Regularization (NTR)}~\cite{wu2020dist}. In this method, a multi-label classification task is first decomposed into $C$ independent binary classification tasks. NTR defines the following negative-tolerant binary loss:
\begin{equation}
\begin{aligned}
\mathcal{L}&= \frac{1}{N}\sum_{i=1}^N \frac{1}{C}\sum_{c=1}^C y_{i,c}\text{log}(1+\exp(- u_{i,c}+v_c)) \\
&+\frac{1}{\lambda}(1- y_{i,c})\text{log}(1+\exp(\lambda( u_{i,c}-v_c)))
\end{aligned},\label{NTR_loss}
\end{equation}
where $v_c =\psi \text{log}(\frac{N}{N_c}-1) $; $\lambda$ and $\psi $ are hyper-parameters.
In Eq.~(\ref{NTR_loss}), an implicit logit perturbation term ($\bm \delta_i$) can also be identified:
\begin{equation}
\bm {\delta _i} = {\bm {\tilde \delta }} = -\psi [\text{log}(\frac{N}{N_1}-1) ,\cdots,\text{log}(\frac{N}{N_C}-1) ]^T.\label{NTR_pert}
\end{equation}
The perturbation is a corpus-level vector. $\psi$ is non-negative in the experiments conducted by Wu et al.~\cite{wu2020dist}. Therefore, if $N < 2N_c$, then samples with label $c$ are dominant and $v_c$ in Eq.~(\ref{NTR_loss}) is smaller than zero. When $ y_{i,c}=1$, the loss will be reduced; when $ y_{i,c}=0$, the loss will be increased. When $N > 2N_c$, the situation is the opposite.
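Similarly, the corpus-level NTR perturbation in Eq. (\ref{NTR_pert}) can be sketched as follows (NumPy; names are ours):
\begin{verbatim}
import numpy as np

def ntr_perturb(class_counts, psi=1.0):
    # class_counts: (C,) numbers N_c of positive samples per category
    N = class_counts.sum()
    v = psi * np.log(N / class_counts - 1.0)   # v_c in Eq. (NTR_loss)
    return -v                                  # corpus-level perturbation
\end{verbatim}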
\textbf{Logit Compensation (LC)}~\cite{guo2021long}. LC assumes that logits conform to a normal distribution. The loss of logit compensation is defined as follows:
\begin{equation}
\begin{aligned}
\mathcal{L}&= \frac{1}{N}\sum_{i=1}^N \frac{1}{C}\sum_{c=1}^C y_{i,c}\text{log}(1+\exp(-(\sigma_c^p\cdot u_{i,c} + \mu_c^p))) \\
&+(1-y_{i,c})\text{log}(1+\exp({\sigma_c^n\cdot u_{i,c}} + \mu_c^n))
\end{aligned},
\end{equation}
where $\mu_c^p$, $\sigma_c^p$, $\mu_c^n$, and $\sigma_c^n$ ($c\in \{1, \cdots, C\}$) are the learnable means and variances of the positive and negative samples, respectively. For the positive samples, the perturbation term $\bm\delta_i$ is as follows:
\begin{equation}
\bm {\delta _i} = \bm {\tilde \delta} = [ {\mu _1^p,\mu _2^p, \cdots ,\mu _C^p} ].
\end{equation}
For the negative samples, the perturbation term $\bm\delta_i$ is as follows:
\begin{equation}
\bm {\delta _i} = \bm {\tilde \delta} = [ {\mu _1^n,\mu _2^n, \cdots ,\mu _C^n} ].
\end{equation}
Both perturbation terms are corpus-level vectors.
In addition, the logit is weighted in accordance with the variance simultaneously. According to the analysis in \cite{guo2021long}, LC mainly (relatively) increases the loss for positive samples and emphasizes the tail categories.
\subsection{Theoretical Analysis for Logit Perturbation}
The losses of the five example methods analyzed in the previous subsection can be written as follows:
\begin{equation}
\mathcal{L} = \sum\nolimits_i {l(\bm {u_i} + {\bm {\tilde \delta }_{i}},\bm {y_i})}. \label{common_loss}
\end{equation}
Logit perturbations result in loss variations. Fig. \ref{relative_loss_variations_single} shows the statistics of the relative loss variations incurred by ISDA, LA, and LDAM for each category on a balanced data set (CIFAR100 \cite{krizhevsky2009learning}) and two long-tail sets (CIFAR10-LT \cite{cui2019class} and CIFAR100-LT \cite{cui2019class}), which are introduced in the experimental section. Under ISDA, the loss variations of all categories are positive. ISDA achieves the worst results on CIFAR100-LT (shown in the experimental part), indicating that non-tail-priority augmentation is ineffective in long-tail problems (ISDA achieves relatively better results on CIFAR10-LT). Only the curves on CIFAR100-LT are shown for LA and LDAM because similar trends can be observed on CIFAR10-LT. Under LA, the loss variations of head categories are negative, and those of tail categories are positive. Under LDAM, all the variations are positive, yet there is an obvious increasing trend from head to tail. Fig. \ref{relative_loss_variations_multi} shows the statistics of the relative loss variations incurred by NTR and LC in multi-label classification on COCO-MLT \cite{wu2020dist}. The relative loss variations of positive samples and negative samples are counted separately. In NTR, for positive samples, the loss variations of head categories are less than 0 and those of tail categories are greater than 0; for negative samples, the situation is the opposite. LC shows a trend of relative loss variation similar to that of NTR, but its relative loss variations are less than 0.
Based on the above observations and a unified view of logit perturbation as data augmentation, we propose two conjectures:
\begin{itemize}
\item If one aims to positively augment the samples in a category, the training loss of this category should be increased after logit perturbation. The larger the loss increment is, the greater the augmentation will be. Consequently, the performance of this category will (relatively) increase.
\item If one aims to negatively augment the samples in a category, then the training loss of this category should be reduced after logit perturbation. The larger the loss decrement is, the greater the negative augmentation will be. The performance of this category will (relatively) decrease.
\end{itemize}
The above two conjectures are empirically supported by the aforementioned five methods. For single-label classification, to handle a long-tail problem, LA should positively augment tail samples and negatively augment head samples. Hence, the losses of tail samples are increased, and those of head samples are decreased. ISDA aims to positively augment samples in all categories; thus, the losses of all categories are increased. LDAM aims to positively augment tail samples more than head samples. Hence, the loss increments of tail categories are larger than those of head categories. For multi-label classification tasks, positive samples and negative samples need to be considered separately. For positive samples, NTR positively augments the tail categories and negatively augments the head categories; for negative samples, the condition is the opposite. Therefore, the losses of tails are increased, whereas those of heads are decreased. LC aims to negatively augment all categories. For positive samples, the loss reductions of head categories are larger than those of tail categories; for negative samples, the situation is the opposite.
To theoretically support the two conjectures, a simple binary classification task is employed to quantitatively investigate the relationship among loss variations, performance improvement, and logit perturbation. The binary classification setting established by Xu et al.~\cite{xu2021robust} is followed. The data of the two classes $\displaystyle{\mathcal{Y}}=\{-1, +1\}$ follow two Gaussian distributions centered on $\boldsymbol{\theta} = [\eta, \cdots,\eta]$ (a $d$-dimensional vector with $\eta > 0$) and $-\boldsymbol{\theta}$, respectively.
The data follow
\begin{equation}
y \stackrel{u . a . r}{\sim}\{-1,+1\},
\end{equation}
\begin{equation}
\bm x \sim\left\{\begin{array}{ll}\mathcal{N}\left(\boldsymbol{\theta}, \sigma_{+}^{2} \boldsymbol{I}\right) & \text { if } y=+1 \\ \mathcal{N}\left(-\boldsymbol{\theta}, \sigma_{-}^{2} \boldsymbol{I}\right) & \text { if } y=-1\end{array}\right.\label{data_distribution}.
\end{equation}
For a classifier $f$, the overall standard error is defined as $\mathcal{R}(f)=\operatorname{Pr}.(f(\bm x)\neq y)$. We use $\mathcal{R}(f;y)$ to denote the standard error conditional on a specific class $y$. The class ``+1" is harder because an optimal linear classifier will give a larger error for the class ``+1" than that for the class ``-1" when $\sigma_{+}^{2} > \sigma_{-}^{2}$ \cite{xu2021robust}.
Two types of class-level logit perturbation are considered in our theoretical analysis. Let $\epsilon_c$ be the perturbation bound. The first type of perturbation is defined as follows:
\begin{equation}
{{\tilde \delta }_{{c}}}^* = \arg\max_{||{{\tilde \delta }_{{c}}}|| < \epsilon_c} \text{E}_{(\bm x,y):y=c} [{l({u} + {{\tilde \delta }_{{c}}},{c})}] \label{first_perturbation}.
\end{equation}
The second type is defined as follows:
\begin{equation}
{{\tilde \delta }_{{c}}}^* = \arg\min_{||{{\tilde \delta }_{{c}}}|| < \epsilon_c} \text{E}_{(\bm x,y):y=c}[ {l({u} + {{\tilde \delta }_{{c}}},{c})}], \label{second_perturbation}
\end{equation}
where $ u=\boldsymbol{w}^T\bm x+b$.
The first type implements positive augmentation, while the second type implements negative augmentation.
Assume that the perturbation bounds of the two classes satisfy $\epsilon _+ =\rho_+\cdot\epsilon$ and $\epsilon _- = \rho_-\cdot\epsilon$. For now, the variances of the data distributions in Eq. (\ref{data_distribution}) for the two classes are assumed to be equal, i.e., $\sigma_{+} = \sigma_{-}$. Nevertheless, the prior probabilities of the two classes $P(y=+1)$ ($P_+$) and $P(y=-1)$ ($P_-$) are assumed to be different. Without loss of generality, we assume $P_{+}: P_{-}=1: \Gamma $ with $ \Gamma >1$. That is, class imbalance exists, and the classes $+1$ and $-1$ are the minority and the majority classes, respectively.
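To make this setting tangible, the following NumPy sketch (all parameter values are illustrative) samples data according to Eq. (\ref{data_distribution}) with priors $P_+ : P_- = 1 : \Gamma$ and estimates the class-conditional errors of a linear classifier with $\bm w = \bm 1$ and offset $b$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, eta, sigma, Gamma, n = 2, 1.0, 1.0, 2.0, 100_000

theta = eta * np.ones(d)
# class priors P(+1) : P(-1) = 1 : Gamma
y = np.where(rng.random(n) < 1.0 / (1.0 + Gamma), 1, -1)
x = np.where((y == 1)[:, None], theta, -theta) \
    + sigma * rng.standard_normal((n, d))

def class_errors(b):
    # linear classifier f(x) = S(1^T x + b)
    pred = np.where(x.sum(axis=1) + b >= 0, 1, -1)
    err_pos = np.mean(pred[y == 1] != 1)    # estimate of R(f, +1)
    err_neg = np.mean(pred[y == -1] != -1)  # estimate of R(f, -1)
    return err_pos, err_neg

print(class_errors(b=0.0))
\end{verbatim}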
We have the following theorem:
\begin{thm}
For the abovementioned binary classification task, the logit perturbation bounds of classes ``$+1$" and ``$-1$" are assumed to be $\rho_+\cdot\epsilon$ ($0\leq\rho_+\cdot{\epsilon}<{\eta}$) and $\epsilon$ ($\rho_- =1$), respectively. Only the first perturbation type is utilized. The optimal linear classifier $f_{\text{opt}}$ that minimizes the average classification error is
\begin{equation}
f_{\text{opt}}=\arg\underset{f}{ \min } \operatorname{Pr}.(\mathbb{S}(u+{{\tilde \delta }_{{c}}}^*) \neq y), \label{optf_thm1}
\end{equation}
where $u=f(\bm x)=\boldsymbol{w}^T\bm x+b$; $\mathbb{S}(\cdot)$ is the signum function (if $a \geq 0$, then $\mathbb{S}(a) = 1$; else $\mathbb{S}(a) = -1$).
It has the intra-class standard error for the two classes:
\begin{equation}
\small
\begin{aligned}
&\mathcal{R}\left(f_{\text{opt}},-1\right) = \operatorname{Pr}.\left\{\mathcal{N}(0,1)<\frac{A}{2}+\frac{\text{log}\Gamma }{A}-\frac{\epsilon}{\sqrt{d}\sigma}\right\}, \\ & \mathcal{R}\left(f_{\text{opt}},+1\right)= \operatorname{Pr}.\left\{\mathcal{N}(0,1) <\frac{A}{2}-\frac{\text{log}\Gamma }{A}-\frac{\epsilon\rho_+}{\sqrt{d}\sigma}\right\},
\end{aligned}
\end{equation}
where $A=\frac{\epsilon-2d\eta+\epsilon\rho_+}{\sqrt{d}\sigma}$.\label{imbal_thm1}
\end{thm}
The proof is attached in the appendix. Theorem 1 indicates that the logit perturbation parameterized by $\epsilon$ and $\rho_+$ influences the performance of both classes. We then show how the classification errors of the two classes change as $\rho_+$ increases.
\begin{corollary}
For the binary classification task investigated in Theorem 1, when $\Gamma<e^{\frac{((2d-1)\eta-\epsilon)^2}{2d\sigma^2}}$, as $\rho_+$ increases, the logit perturbation in Theorem 1 will decrease the error of class ``$+1$" and increase the error of class ``$-1$".\label{imbal_coro1}
\end{corollary}
The proof is attached in the appendix. Corollary 1 indicates that a larger scope of the first type of logit perturbation on a class will increase the performance of the class. Note that a larger scope of the first type of logit perturbation will result in a large loss increment, and the first conjecture is supported by Corollary 1. To better illuminate Corollary \ref{imbal_coro1}, we plot $\mathcal{R}\left(f_{\text{opt}},-1\right)$, $\mathcal{R}\left(f_{\text{opt}},+1\right)$, and $\mathcal{R}(f_{\text{opt}})$ for a specific learning task. Fig. \ref{imbal_lpl1_fig} shows the results when the values of $\Gamma$,
$d$, $\eta$, $\epsilon$, and $\sigma$ are 2, 2, 1, 0.2, and 1, respectively.
\begin{figure}[tb]
\centering
\includegraphics[width=0.95\columnwidth, height=1.1in]{img/000.png} \vspace{-0.15in}
\caption{Left: Natural errors $\mathcal{R}\left(f_{\text{opt}},-1\right)$ and $\mathcal{R}\left(f_{\text{opt}},+1\right)$ for the two classes with varied $\rho_+$. Right: Total natural error $\mathcal{R}(f_{\text{opt}})$ with
varied $\rho_+$.}
\label{imbal_lpl1_fig}
\end{figure}
Theorem 1 only considers the first type of logit perturbation. When the second type of logit perturbation is also involved, the following theorem can be obtained.
\begin{thm}
For the abovementioned binary classification task, the perturbation bounds of classes ``$+1$" and ``$-1$" are assumed to be $\epsilon$ ($\rho_+=1$) and $\rho_-\cdot\epsilon$ ($0\leq\rho_-\cdot{\epsilon}<{\eta}$), respectively. The first perturbation type is utilized for class ``$+1$", and the second perturbation type is utilized for class ``$-1$". The optimal linear classifier $f_{\text{opt}}$ that minimizes the average classification error is
\begin{equation}
f_{\text{opt}}=\arg\underset{f}{ \min } \operatorname{Pr}(\mathbb{S}(u+{{\tilde \delta }_{{c}}}^*) \neq y).
\end{equation}
It has the intra-class standard error for the two classes:
\begin{equation}
\begin{aligned} & \mathcal{R}\left(f_{\text{opt}},-1\right) = \operatorname{Pr}.\left\{\mathcal{N}(0,1)<\frac{A}{2}+\frac{\text{log}\Gamma}{A}+\frac{\epsilon\rho_-}{\sqrt{d}\sigma}\right\}, \\ & \mathcal{R}\left(f_{\text{opt}},+1\right) = \operatorname{Pr}.\left\{\mathcal{N}(0,1) <\frac{A}{2}-\frac{\text{log}\Gamma}{A}-\frac{\epsilon}{\sqrt{d}\sigma}\right\}, \end{aligned}
\end{equation}
where $A=\frac{\epsilon-2d\eta-\epsilon\rho_-}{\sqrt{d}\sigma}$.
\label{imbal_thm2}
\end{thm}
The proof of Theorem \ref{imbal_thm2} is similar to that of Theorem \ref{imbal_thm1}. Likewise, we have the following corollary.
\begin{corollary}
For the learning task investigated in Theorem 2, when $\Gamma>1$, as $\rho_-$ increases, the logit perturbations in Theorem \ref{imbal_thm2} will increase the accuracy of class ``$+1$" and decrease the accuracy of class ``$-1$".\label{imbal_coro2}
\end{corollary}
According to Corollary 2, a larger scope of the second type of logit perturbation
on a class will decrease the performance of the class. Note
that a larger scope of the second type of logit perturbation
will result in a large loss decrement, and the second conjecture is supported. Likewise, we plot $\mathcal{R}\left(f_{\text{opt}},-1\right)$, $\mathcal{R}\left(f_{\text{opt}},+1\right)$, and $\mathcal{R}(f_{\text{opt}})$. Fig. \ref{imbal_lpl2_fig} shows the results for the specific learning task discussed in Fig. \ref{imbal_lpl1_fig}. In this figure, the values of $\Gamma$,
$d$, $\eta$, $\epsilon$, and $\sigma$ are 2, 2, 1, 0.2, and 1, respectively.
\begin{figure}[tb]
\centering
\includegraphics[width=0.95\columnwidth, height=1.1in]{img/001.png} \vspace{-0.15in}
\caption{Left: Natural errors $\mathcal{R}\left(f_{\text{opt}},-1\right)$ and $\mathcal{R}\left(f_{\text{opt}},+1\right)$ for the two classes with varied $\rho_-$. Right: Total natural error $\mathcal{R}(f_{\text{opt}})$ with
varied $\rho_{-}$.}
\label{imbal_lpl2_fig}
\end{figure}
Theorems 1-2 and Corollaries 1-2 concern the class imbalance issue, i.e., $P_{+} \neq P_{-}$. In addition, another learning scenario is also explored. The variances of the data distributions in Eq. (\ref{data_distribution}) for the two classes are now assumed to be unequal, i.e., $\sigma_{+} \neq \sigma_{-}$. That is, variance imbalance exists. Without loss of generality, we assume $\sigma_{+} : \sigma_{-} = 1:K$ with $K> 1$; $P_{+}: P_{-}=1: \Gamma $ also holds with $\Gamma > 1$.
We have the following theorem.
\begin{thm}
For the abovementioned binary classification task, the bounds of classes ``$+1$" and ``$-1$" are assumed to be $\rho_+\cdot{\epsilon}$ and $\rho_-\cdot{\epsilon}$ ($0\leq \rho_+, \rho_-<\frac{\eta}{\epsilon}$), respectively. Only the first perturbation type is utilized. The optimal linear classifier $f_{\text{opt}}$ that minimizes the average classification error is
\begin{equation}
f_{\text{opt}}=\arg\underset{f}{ \min } \operatorname{Pr}.(\mathbb{S}(u+{{\tilde \delta }_{{c}}}^*) \neq y), \label{optf}
\end{equation}
where $u=f(x)=\boldsymbol{w}^T\bm x+b$.
It has the intra-class standard error for the two classes:
\begin{equation}
\small
\begin{aligned} & \mathcal{R}\left(f_{\text{opt}},+1\right)
\\&=\operatorname{Pr}.\left\{\mathcal{N}(0,1)<-K\sqrt{B^2+q(K,\Gamma)}-B-\frac{\epsilon\cdot\rho_+}{\sqrt{d}\sigma}\right\}, \\ & \mathcal{R}\left(f_{\text{opt}},-1\right) \\&= \operatorname{Pr}.\left\{\mathcal{N}(0,1)<KB+\sqrt{B^2+q(K,\Gamma)}-\frac{\epsilon\cdot\rho_-}{K\sqrt{d}\sigma}\right\}, \end{aligned}
\end{equation}
where $B= \frac{\epsilon\cdot\rho_++\epsilon\cdot\rho_--2d\eta}{\sqrt{d}\sigma(K^2-1)}$ and $q(K,\Gamma)=\frac{2\text{log}(\frac{K}{\Gamma})}{K^2-1}$.\label{Bal_thm1}
\end{thm}
Thus, training with different logit perturbation bounds for the two classes can still influence the performance according to Theorem \ref{Bal_thm1}. We then show how the classification errors of the two classes change as $\rho_-$ or $\rho_+$ increases.
\begin{corollary}
For the data distribution and logit perturbation investigated in Theorem 3,
\begin{itemize}
\item if $Ke^{\frac{(2d\eta-\epsilon)^2}{2dK^2\sigma^2}}<\Gamma< Ke^{\frac{2d\eta^2}{(K^2-1)\sigma^2}}$, then $\mathcal{R}\left(f_{\text{opt}},+1\right)>\mathcal{R}\left(f_{\text{opt}},-1\right)$. That is, class imbalance is the primary challenge and class ``$+1$'' is harder than class ``$-1$''. Then if $\rho_- = 1$ and the first logit perturbation type is used, the error of class ``$+1$'' decreases and the error of class ``$-1$'' increases, as $\rho_+$ increases;
\item if $K>\Gamma$, then $\mathcal{R}\left(f_{\text{opt}},+1\right)<\mathcal{R}\left(f_{\text{opt}},-1\right)$. That is, variance imbalance is the primary challenge and class ``$-1$'' is harder than class ``$+1$''. If $\rho_+ = 1$ and the first logit perturbation type is used, the error of class ``$+1$'' increases and the error of class ``$-1$'' decreases, as $\rho_-$ increases.
\end{itemize}
\label{bal_coro1}
\end{corollary}
\begin{figure}[tb]
\centering
\includegraphics[width=0.95\columnwidth, height=1.1in]{img/002.png} \vspace{-0.15in}
\caption{Left: Natural errors $\mathcal{R}\left(f_{\text{opt}},-1\right)$ and $\mathcal{R}\left(f_{\text{opt}},+1\right)$ for the two classes with varied $\rho_+$. Right: Total natural error $\mathcal{R}(f_{\text{opt}})$ with
varied $\rho_+$.}
\label{bal_lpl_fig1}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.95\columnwidth, height=1.1in]{img/003.png} \vspace{-0.15in}
\caption{Left: Natural errors $\mathcal{R}\left(f_{\text{opt}},-1\right)$ and $\mathcal{R}\left(f_{\text{opt}},+1\right)$ for the two classes with varied $\rho_-$. Right: Total natural error $\mathcal{R}(f_{\text{opt}})$ with
varied $\rho_-$.}
\label{bal_lpl_fig2}
\end{figure}
The first conjecture can also be justified by Corollary \ref{bal_coro1}. Likewise, we plot $\mathcal{R}\left(f_{\text{opt}},-1\right)$, $\mathcal{R}\left(f_{\text{opt}},+1\right)$, and $\mathcal{R}(f_{\text{opt}})$. Figs. \ref{bal_lpl_fig1} and \ref{bal_lpl_fig2} show the results. As shown in Fig. \ref{bal_lpl_fig1}, increasing $\rho_+$ can decrease the error of class ``$+1$'' and increase the error of class ``$-1$''. The values of $K$, $\Gamma$,
$d$, $\eta$, $\epsilon$, and $\sigma$ are 3, 3.5, 2, 1, 0.1, and 1, respectively. In Fig. \ref{bal_lpl_fig2}, increasing $\rho_-$ decreases the error of class ``$-1$'' and increases the error of class ``$+1$''. The values of $K$, $\Gamma$,
$d$, $\eta$, $\epsilon$, and $\sigma$ are 2.5, 1.1, 2, 1, 0.2 and 1, respectively.
When both types are utilized, we can obtain the following conclusion. When class ``$-1$'' is harder than class ``$+1$'', if the first logit perturbation type is used for class ``$-1$'' and the second logit perturbation is used for class ``$+1$'', then the error of class ``$-1$'' will decrease and the error of class ``$+1$'' will increase. Similarly, when class ``$+1$'' is harder than class ``$-1$'', if the first logit perturbation is used for class ``$+1$'' and the second logit perturbation type is used for class ``$-1$'', then the error of class ``$+1$'' will decrease and the error of class ``$-1$'' will increase. That is, the second conjecture is also justified.
\subsection{Logit Perturbation Method (LPL) for Single-label Learning}
On the basis of our conjectures and theoretical investigation, we establish the following new training loss with logit perturbation:
\begin{equation}
\begin{aligned}
\mathcal{L}{\rm{ = }}\sum\limits_{{c} \in {\displaystyle {\mathcal{N}}_a}} {\sum\limits_{\bm {x}_i \in \displaystyle{S_c}} {\mathop {\min }\limits_{\left\| {\bm {\tilde \delta _{{c}}}} \right\| \le \epsilon_c } l(\text{softmax} (\bm {u}_i + {\bm {\tilde \delta }_{{c}}}),\bm {y}_i)} } \\
+\sum\limits_{{c} \in \displaystyle {\mathcal{P}_a}} {\sum\limits_{\bm {x_i} \in \displaystyle{S_c}} {\mathop {\max }\limits_{\left\| {\bm {\tilde \delta _{{c}}}} \right\| \le \epsilon_c } l(\text{softmax} (\bm {u}_i + {\bm {\tilde \delta }_{{c}}}),\bm {y}_i)} },
\end{aligned}\label{new_loss_abs}
\end{equation}
where ${\epsilon _c}$ is the perturbation bound related to the extent of augmentation; $\displaystyle {\mathcal{N}}_a$ is the index set of categories which should be negatively augmented; $\displaystyle {\mathcal{P}}_a$ is the index set of categories which should be positively augmented; and $\displaystyle {S}_c$ is the set of samples in the $c$th category. The loss maximization for the $\displaystyle {\mathcal{P}}_a$ categories is actually the category-level adversarial learning on the logits; the loss minimization for the $\displaystyle{\mathcal{N}}_a$ categories is the opposite. Fig. \ref{LPL_illustrate} illustrates the calculation of the logit perturbation-based new loss in Eq. (\ref{new_loss_abs}).
\begin{figure}[t]
\centering
\includegraphics[width=0.96\columnwidth, height=1.1in]{img/framework.png} \vspace{-0.1in}
\caption{Overview of the logit perturbation-based new loss. Four solid circles denote four categories. Two categories are positively augmented via loss maximization, and the remaining two are negatively augmented via loss minimization.}
\label{LPL_illustrate}
\end{figure}
The split of the category set (i.e., $\displaystyle{\mathcal{N}}_a$ and $\displaystyle{\mathcal{P}}_a$) and the definition (calculation) of $\epsilon_c$ are crucial for the learning with Eq. (\ref{new_loss_abs}). Category set split determines the categories that should be positively or negatively augmented. Meanwhile, the value of $\epsilon_c$ determines the augmentation extent.
\textbf{Category set split}. The split depends on specific learning tasks. Two common cases are explored in this study. The first case splits categories according to their performances. In this case, Eq. (\ref{new_loss_abs}) becomes the following compact form:
\begin{equation}
\begin{aligned}
\mathcal{L} &=\sum\nolimits_c\{\mathbb{S}(\tau- {{\bar q}_{c}}) \times \\
&\sum\limits_{\bm {x_i} \in \displaystyle{S}_c} {\mathop {\max }\limits_{\left\| {{\bm {\tilde \delta }_{{c}}}} \right\| \le \epsilon_c } [ l(\text{softmax} ({\bm u_i} + {\bm {\tilde \delta }_{{c}}}),\bm {y}_i)\mathbb{S}(\tau - {{\bar q}_{c}})]\} } ,
\end{aligned}\label{special_loss1}
\end{equation}
where $\tau$ is a threshold, $y_{i,c} = 1$, and $\bar q_{c}$ is calculated by
\begin{equation}
{\bar q_{c}} = \frac{1}{{{N_{{c}}}}}\sum\limits_{\bm x_i \in {\displaystyle S_{{c}}}} {{q_{i,{c}}}} = \frac{1}{{{N_{{c}}}}}\sum\limits_{\bm x_i \in {\displaystyle S_{{c}}}} {\frac{{\exp ({u_{i,{c}}})}}{{\sum\nolimits_{c'} {\exp ({ u_{i,c'}} )} }}}. \end{equation}
When $\tau = \text{mean}({\bar q_{c}})=\sum\nolimits_{c=1}^C{\bar q_{c}}/C$, Eq. (\ref{special_loss1}) indicates that if the performance of a category is below the mean performance, the category will be positively augmented; when its performance is above the mean, it will be negatively augmented. When $\tau > \max \limits_c\{\bar q_{c}\}$, all the categories will be positively augmented as in ISDA.
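For the mean-performance split, a minimal NumPy sketch (names are ours) computes $\bar q_c$ and the two index sets $\displaystyle{\mathcal{P}}_a$ and $\displaystyle{\mathcal{N}}_a$; it assumes integer class labels and that every category appears in the batch:
\begin{verbatim}
import numpy as np

def category_split(logits, labels, C):
    # logits: (N, C); labels: integer class ids in {0, ..., C-1}
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)            # softmax probabilities
    # mean predicted probability of the true class, per category
    q_bar = np.array([p[labels == c, c].mean() for c in range(C)])
    tau = q_bar.mean()
    pos_aug = np.where(q_bar < tau)[0]           # P_a: positively augmented
    neg_aug = np.where(q_bar >= tau)[0]          # N_a: negatively augmented
    return q_bar, pos_aug, neg_aug
\end{verbatim}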
\begin{figure}[tb]
\centering
\includegraphics[width=1\columnwidth, height=1.1in]{img/fig2.png} \vspace{-0.15in}
\caption{Illustrative example for ISDA. Both categories are positively augmented (new samples are virtually generated) according to feature distributions.}
\label{fig_ISDA}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=1\columnwidth, height=1.1in]{img/fig3.png} \vspace{-0.15in}
\caption{Illustrative example for LPL. Samples near the classification boundary are virtually generated or deleted.}
\label{fig_LPL}
\end{figure}
The second case is special for a long-tail problem, and it splits categories according to the proportion order of each category. Eq. (\ref{new_loss_abs}) becomes the following compact form:
\begin{equation}
\begin{aligned}
\mathcal{L}&=\sum\nolimits_c\{\mathbb{S}(c - \tau ) \times \\
&\sum\limits_{{\bm x_i} \in {\displaystyle S_c}} {\mathop {\max }\limits_{\left\| {{\bm{\tilde \delta }_{{c}}}} \right\| \le \epsilon_c } [ l(\text{softmax} ({\bm u_i} + {\bm {\tilde \delta }_{{c}}}),\bm y_i)\mathbb{S}(c - \tau )]\} },
\end{aligned}\label{longtail_new_loss}
\end{equation}
where $\tau$ is a threshold for the category index and $ y_{i,c} = 1$. In Eq. (\ref{longtail_new_loss}), the tail categories lie in $\displaystyle {\mathcal{P}}_a$ and will be positively augmented.
Eqs. (\ref{special_loss1}) and (\ref{longtail_new_loss}) can be solved with an optimization approach similar to PGD~\cite{madry2017towards}; we call it PGD-like optimization. Because the derivative of the cross-entropy loss with respect to the logit vector has a closed form, the PGD-like optimization can be implemented simply. First, we have
\begin{equation}
\frac{\partial l(\text{softmax}(\bm {u}_{i}+\bm {\tilde{\delta}}_{c}), \bm y_{i})}{\partial \bm {\tilde{\delta}}_{c}}\bigg|_{\boldsymbol{0}}=\text{softmax}(\bm {u}_{i})-\bm {y}_{i}.
\end{equation}
In the maximization of Eqs. (\ref{special_loss1}) and (\ref{longtail_new_loss}), $\bm {\tilde \delta }_{c}$ is updated by
\begin{equation}
\begin{aligned}
\bm {\tilde{ \delta }}_{c} = \frac{\alpha }{N_{c}} \sum_{j: y_{j,c}=1}(\text{softmax}(\bm {u}_{j})-\bm {y}_{j}),
\end{aligned}\label{lpl_max}
\end{equation}
where $\alpha$ is a hyper-parameter (the step size).
In the minimization part, $\bm{\tilde \delta} _{c}$ is updated by
\begin{equation}
\begin{aligned}
\bm{\tilde{ \delta }}_{c} = -\frac{\alpha }{N_{c}} \sum_{j: y_{j,c}=1}(\text{softmax}(\bm {u}_{j})-\bm{y}_{j}).
\end{aligned}\label{lpl_min}
\end{equation}
The PGD-like optimization in Algorithm \ref{alg:2} contains two hyper-parameters, namely, the step size and the number of steps (\#steps). Let $\alpha$ be the step size and $K_{c}$ be the number of steps for category $c$. In balanced classification, $\alpha$ is searched in \{0.01, 0.02, 0.03\}. The PGD-like optimization is detailed in Algorithm \ref{alg:2}.
\begin{algorithm}[tb]
\caption{PGD-like Optimization} \label{alg:2}
\textbf{Input}: The logit vectors ($\bm u_{i}$) for the $c$th category in the current mini-batch, $\epsilon_c$, and $\alpha$.
\begin{algorithmic}[1]
\STATE Let $\bm {u}^{0}_{i} = \bm u_{i}$ for the input vectors;
\STATE Calculate $K_{c}$ by $\lfloor \frac{\epsilon _{c}}{\alpha} \rfloor$;
\FOR{ $k$ = 1 to $K_{c}$}
\STATE Calculate $\frac{\partial l(\text{softmax}(\bm {u}_{i}^k+\bm {\tilde{\delta}}_{c}), \bm y_i)}{\partial \bm {\tilde{\delta}}_{c}}\bigg|_{\boldsymbol{0}}=\text{softmax}(\bm {u}^{k}_{i})-\bm y_i$.
\STATE Calculate $\bm {\tilde{\delta}}_{c}^{k+1}$ according to Eq. (\ref{lpl_max}) for maximization and Eq. (\ref{lpl_min}) for minimization;
\STATE $\bm {u}^{k+1}_{i} := \bm {u}^{k}_{i} + \bm {\tilde{\delta}}_{c}^{k+1}$.
\ENDFOR
\end{algorithmic}\label{PGD-like}
\textbf{Output}: $\bm{\tilde\delta} _{c} = \bm u_{i}^{K_{c}} - \bm u_{i}$
\end{algorithm}
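A NumPy sketch of Algorithm \ref{alg:2} for the samples of a single category is given below (variable names are ours; \texttt{maximize=True} follows Eq. (\ref{lpl_max}) and \texttt{maximize=False} follows Eq. (\ref{lpl_min})):
\begin{verbatim}
import numpy as np

def softmax(u):
    e = np.exp(u - u.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def pgd_like(U, Y, eps_c, alpha=0.02, maximize=True):
    # U: (n_c, C) logits of one category's samples; Y: one-hot labels
    K = int(eps_c // alpha)       # K_c = floor(eps_c / alpha)
    Uk = U.copy()
    sign = 1.0 if maximize else -1.0
    for _ in range(K):
        grad = softmax(Uk) - Y                    # closed-form gradient
        delta = sign * alpha * grad.mean(axis=0)  # Eq. (lpl_max)/(lpl_min)
        Uk = Uk + delta                           # shared category shift
    return (Uk - U)[0]   # accumulated perturbation, identical per row
\end{verbatim}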
\textbf{Bound calculation.}
The category with a relatively low/high performance should be more positively/negatively augmented; the category closer to the tail/head should be more positively/negatively augmented. We define
\begin{equation}
\begin{array}{l}
{\epsilon _c} = \epsilon + \Delta \epsilon \left| \tau - \bar q_c \right|, \\ {\text{or}} \quad
{\epsilon _c} = \left\{ {\begin{array}{*{20}{c}}
{\epsilon + \Delta \epsilon \frac{{{{ \bar q}_c} }}{{{{\bar q}_1}}}}&{c \le \tau }\\
{\epsilon + \Delta \epsilon \frac{{{{ \bar q}_C} }}{{{{\bar q}_c}}}}&{c > \tau }
\end{array}} \right.
\end{array}\label{finalbound_lpl}.
\end{equation}
In Eq. (\ref{finalbound_lpl}), the larger the difference between the performance (${ \bar q}_c$) of the current category and the threshold $\tau$, or the larger the ratio ${ \bar q}_c/{\bar q}_1$ (or ${\bar q}_C/{\bar q}_c$), the larger the bound $\epsilon_c$. This is in accordance with our previous conjectures. When $\Delta \epsilon $ in Eq. (\ref{finalbound_lpl}) equals zero, the bound is fixed. The algorithmic steps of our LPL for single-label learning are given in Algorithm \ref{alg:3}.
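For illustration, the two forms of Eq. (\ref{finalbound_lpl}) can be sketched as follows (NumPy; function names and default values are ours and purely illustrative):
\begin{verbatim}
import numpy as np

def bounds_mean_split(q_bar, eps=0.1, d_eps=0.1):
    # first form of Eq. (finalbound_lpl), with tau = mean(q_bar)
    tau = q_bar.mean()
    return eps + d_eps * np.abs(tau - q_bar)

def bounds_longtail(q_bar, tau, eps=0.0, d_eps=1.0):
    # second form: categories 1..C are stored at array indices 0..C-1
    C = len(q_bar)
    return np.array([eps + d_eps * (q_bar[c] / q_bar[0] if c + 1 <= tau
                                    else q_bar[C - 1] / q_bar[c])
                     for c in range(C)])
\end{verbatim}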
\textbf{Comparative Analysis.} We compare the perturbations in ISDA and our LPL in terms of data augmentation.
In ISDA's rationale, new samples are (virtually, instead of actually) generated based on the distribution of each category. Fig. \ref{fig_ISDA} shows the samples (virtually) generated by ISDA. In the right case, the positive augmentation of the head category may further hurt the performance of the tail category; hence, ISDA fails in long-tail problems. Li et al.~\cite{li2021metasaug} leverage meta learning to adapt ISDA to the long-tail problem.
In contrast with the above-mentioned methods, our proposed LPL method conducts positive or negative augmentation through loss maximization and minimization, respectively. According to our Corollaries 1-3, loss maximization forces a category to move close to the decision boundary (i.e., the category is positively augmented, or virtual samples are generated for this category). By contrast, loss minimization forces a category away from the boundary (i.e., the category is negatively augmented, or samples are virtually deleted for this category). Fig. \ref{fig_LPL} shows an illustrative example.
\subsection{Logit Perturbation Method (LPL) for Multi-label Learning}
Multi-label learning is usually decomposed into $C$ binary learning tasks. Compared with single-label learning, both variance imbalance and class imbalance usually exist in each of the $C$ tasks, simultaneously. First, variance imbalance exists in each of the $C$ tasks. The reason lies in that negative samples in each of the $C$ tasks actually come from the remaining $C-1$ classes, whereas positive samples in each task come from only one class. Naturally, the variance of the negative samples will be larger than that of the positive samples as shown in Fig. \ref{fig_multi_label_example} (b). Theoretically, the negative samples require the first type of logit perturbation and the positive samples require the second type of logit perturbation. Second, class imbalance may exist in each of the $C$ tasks as shown in Fig. \ref{fig_multi_label_example} (c). However, the class imbalance degrees for tasks in which the positive samples are from the tail categories are larger than those for tasks in which the positive samples are from the head categories. Therefore, according to Corollaries 1 and 2, the negative samples require the second type of logit perturbation, and the positive samples require the first type of logit perturbation (especially for the tasks when the positive samples belong to tail categories).
Obviously, there is a contradiction between the two cases discussed above: to deal with variance imbalance, the negative samples should undergo the first type of logit perturbation, whereas to deal with class imbalance, they should undergo the second type. Corollary 3 demonstrates that the appropriate perturbation type depends on whether class imbalance or variance imbalance is the primary challenge. Consequently, we extend Eq. (\ref{longtail_new_loss}) into the following form for multi-label learning.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.9\textwidth, height=1.1in]{img/multi-label1.jpg} \vspace{-0.15in}
\caption{An illustrative example of the variance imbalance and class imbalance in multi-label learning. ``$+$" denotes positive samples. ``$-$" denotes negative samples. (a) shows a multi-label learning task ($C=3$). Different colors denote samples with one or more labels. (b) shows the case of variance imbalance. (c) shows the case of class imbalance.
}
\label{fig_multi_label_example}
\end{figure*}
\vspace{-0.2in}
\begin{equation}
\begin{aligned}
\mathcal{L}&=\frac{1}{C \times N}\sum_{(\bm x_i,\bm y_i)}\sum_{c = 1}^{C}\mathbb{S}(c - \tau )\times
\\&\{ \max_{| {{{\tilde \delta }_{{c}}}} | \le \epsilon_c} y_{i,c}\log(1+e^{- u_{i,c}+{{{\tilde \delta }_{{c}}}}})\times \mathbb{S}(c - \tau )
\\&+\min_{| {{{\tilde \delta }_{{c}}}} | \le \epsilon_c}(1- y_{i,c})\log(1+e^{ u_{i,c}-{{{\tilde \delta }_{{c}}}}})\times \mathbb{S}(c - \tau )\},
\end{aligned}\label{multi_label_new_loss}
\end{equation}
where ${{{\tilde \delta }_{{c}}}}$ is a scalar and $\tau$ is a hyper-parameter (threshold) for the category split. This new loss can effectively tune the cooperation of the two types of logit perturbation by setting an appropriate value of $\tau$. There are three typical settings for $\tau$, as follows:
\begin{itemize}
\item If $\tau$ is set as zero, $\mathbb{S}(c - \tau ) \equiv 1$. Eq.~(\ref{multi_label_new_loss}) becomes
\begin{equation}
\begin{aligned}
\mathcal{L}&=\frac{1}{C \times N}\sum_{(\bm x_i,\bm y_i)}\sum_{c = 1}^{C}
\{ \max_{| {{{\tilde \delta }_{{c}}}} | \le \epsilon_c} y_{i,c}\log(1+e^{- u_{i,c}+{{{\tilde \delta }_{{c}}}}})
\\&+\min_{| {{{\tilde \delta }_{{c}}}} | \le \epsilon_c}(1- y_{i,c})\log(1+e^{ u_{i,c}-{{{\tilde \delta }_{{c}}}}})\}.
\end{aligned}\label{multi_label_new_loss_1}
\end{equation}
In this situation, the positive samples of all the $C$ tasks perform the first type of logit perturbation, indicating that class imbalance is the primary concern in all tasks.
\item If $\tau$ is set as $C+1$, then $\mathbb{S}(c - \tau ) \equiv -1$. Eq.~(\ref{multi_label_new_loss}) becomes
\begin{equation}
\begin{aligned}
\mathcal{L}&=\frac{1}{C \times N}\sum_{(\bm x_i,\bm y_i)}\sum_{c = 1}^{C}
\{ \min_{| {{{\tilde \delta }_{{c}}}} | \le \epsilon_c} y_{i,c}\log(1+e^{- u_{i,c}+{{{\tilde \delta }_{{c}}}}})
\\&+\max_{| {{{\tilde \delta }_{{c}}}} | \le \epsilon_c}(1- y_{i,c})\log(1+e^{ u_{i,c}-{{{\tilde \delta }_{{c}}}}})\},
\end{aligned}\label{multi_label_new_loss_2}
\end{equation}
In this situation, the negative samples of all the $C$ binary tasks perform the second type of logit perturbation, indicating that variance imbalance is the primary concern in all tasks.
\item If $1 < \tau < C$, then $\mathbb{S}(c - \tau ) \equiv 1$ when $c > \tau$ and $\mathbb{S}(c - \tau ) \equiv -1$ when $c < \tau$. When $c<\tau$, the optimization for the $c$th binary task becomes
\begin{equation}
\begin{aligned}
\mathcal{L}_c&=\frac{1}{ N}\sum_{(\bm x_i,\bm y_i)}\{ \min_{| {{{\tilde \delta }_{{c}}}} | \le \epsilon_c} y_{i,c}\log(1+e^{- u_{i,c}+{{{\tilde \delta }_{{c}}}}})
\\&+\max_{| {{{\tilde \delta }_{{c}}}} | \le \epsilon_c}(1- y_{i,c})\log(1+e^{ u_{i,c}-{{{\tilde \delta }_{{c}}}}})\},
\end{aligned}\label{multi_label_variance_imbalance}
\end{equation}
which indicates that the positive samples perform the second type of logit perturbation and the negative samples perform the first type of logit perturbation. This is reasonable because the $c$th class belongs to the head categories and thus variance imbalance rather than the class imbalance is the primary concern. When $c>\tau$, the optimization for the $c$th binary task becomes
\begin{equation}
\begin{aligned}
\mathcal{L}_c&=\frac{1}{ N}\sum_{(\bm x_i,\bm y_i)}\{ \max_{| {{{\tilde \delta }_{{c}}}} | \le \epsilon_c} y_{i,c}\log(1+e^{- u_{i,c}+{{{\tilde \delta }_{{c}}}}})
\\&+\min_{| {{{\tilde \delta }_{{c}}}} | \le \epsilon_c}(1- y_{i,c})\log(1+e^{ u_{i,c}-{{{\tilde \delta }_{{c}}}}})\},
\end{aligned}\label{multi_label_class_imbalance}
\end{equation}
which presents that the positive samples perform the first type of logit perturbation and the negative samples perform the second type of logit perturbation. This is reasonable because the $c$th class belongs to the tail categories, and class imbalance rather than the variance imbalance becomes the primary concern in learning.
\end{itemize}
The third setting is adopted in our experiments. Similarly, we can perform PGD-like maximization and minimization as in Algorithm \ref{PGD-like}. According to Eq. (\ref{multi_label_new_loss}), for positive samples, the derivative of the loss with respect to ${{{\tilde \delta }_{{c}}}}$ is as follows.
\begin{equation}
\begin{aligned}
&\frac{\partial \log(1+e^{- u_{i,c}+{{{\tilde \delta }_{{c}}}}})}{\partial {{{\tilde \delta }_{{c}}}}}\bigg|_{\boldsymbol{0}} =1-\text{sigmoid}( u_{i,c}).
\end{aligned}\label{pos_deritive_multi_label}
\end{equation}
For negative samples, the derivative of the loss with respect to ${{{\tilde \delta }_{{c}}}}$ is as follows.
\begin{equation}
\begin{aligned}
\frac{\partial \log(1+e^{ u_{i,c}-{{{\tilde \delta }_{{c}}}}})}{\partial {{{\tilde \delta }_{c}}}}\bigg|_{\boldsymbol{0}}=\text{sigmoid}(- u_{i,c})-1.
\end{aligned}\label{neg_deritive_multi_label}
\end{equation}
According to Eq. (\ref{multi_label_new_loss}), we use Eqs. (\ref{pos_deritive_multi_label}) and (\ref{neg_deritive_multi_label}) to calculate ${{{\tilde \delta }_{c}}}$ as follows.
\begin{equation}
\begin{aligned}
&{\tilde \delta }_{c}=\frac{\alpha}{C\times N}\sum_{i=1}^{N}\{ y_{i,c}(1-\text{sigmoid}( u_{i,c}))\\&+(1- y_{i,c})(1-\text{sigmoid}(- u_{i,c}))\}\times \mathbb{S}(c - \tau ),\label{delta_update}
\end{aligned}
\end{equation}
where $\alpha$ is the step size. Since each image is treated as $C$ binary classification tasks, we can further simplify Eq. (\ref{delta_update}).
Positive and negative samples for each of the $C$ tasks share the same ${{{\tilde \delta }_{c}}}$ for class $c$. Obviously, according to Eqs. (\ref{pos_deritive_multi_label}) and (\ref{neg_deritive_multi_label}), $1-\text{sigmoid}( u_{i,c}) \ge 0$ and $1-\text{sigmoid}(- u_{i,c})\ge 0$ hold. The term ${\tilde \delta }_{{c}}$ is a scalar whose sign is determined by $\mathbb{S}(c - \tau )$ and whose magnitude grows with every update step. Therefore, when the perturbation bound ${\epsilon _c}$ is given by Eq. (\ref{finalbound_lpl}), we can easily obtain ${\tilde \delta }_{{c}} = \mathbb{S}(c - \tau )\epsilon_c$. The logit perturbation for multi-label learning can then be calculated easily.
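For illustration, a NumPy sketch of the update in Eq. (\ref{delta_update}) is given below (names are ours; \texttt{tau} denotes the split threshold $\tau$, and categories are 1-indexed as in the text):
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multilabel_delta(U, Y, tau, alpha=0.1):
    # U, Y: (N, C) logits and binary labels
    N, C = U.shape
    # per-sample non-negative terms of Eq. (delta_update)
    g = Y * (1.0 - sigmoid(U)) + (1.0 - Y) * (1.0 - sigmoid(-U))
    s = np.where(np.arange(1, C + 1) > tau, 1.0, -1.0)  # S(c - tau)
    return alpha / (C * N) * g.sum(axis=0) * s          # one scalar per class
\end{verbatim}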
The algorithmic steps of our LPL for multi-label learning are also in Algorithm \ref{alg:3}.
\begin{algorithm}[tb]
\caption{Learning to Perturb Logits (LPL)} \label{alg:3}
\label{alg:algorithm}
\textbf{Input}: $\displaystyle S$, $\tau$, max iteration $T$, hyper-parameters for PGD-like optimization, and other conventional training hyper-parameters.\text{ }
\begin{algorithmic}[1]
\STATE Randomly initialize $\bm W$. \\
\FOR{ $t$ = 1 to $T$}
\STATE Sample a mini-batch from $\displaystyle S$.
\STATE Update $\tau$ if it is not fixed (e.g., ${{\text{mean}({\bar q_{c}})}}$ is used) and split the category set.
\STATE Compute ${\epsilon _c}$ for each category using Eq. (\ref{finalbound_lpl}) if varied bounds are used.
\STATE Infer $\bm{\tilde \delta}_c$ for each category using a PGD-like optimization method for Eq. (\ref{special_loss1}) in balanced classification, Eq. (\ref{longtail_new_loss}) in long-tail classification, or Eq. (\ref{multi_label_new_loss}) in multi-label classification.
\STATE Update the logits for each sample and the loss.
\STATE Update $\bm W$ with SGD.
\ENDFOR
\end{algorithmic}
\textbf{Output}: $\bm W$
\end{algorithm}
\begin{table}[tb]
\caption{Mean values and standard deviations of the test Top-1 errors for all the involved methods on CIFAR10.}\label{tab:banlance_cifar10}
\centering
\vspace{-0.08in}
\begin{tabular}{|p{2.18cm}||c||c|}\hline
Method & Wide-ResNet-28-10 & ResNet-110 \\\hline
Basic & 3.82 ± 0.15\% & 6.76 ± 0.34\% \\
Large Margin & 3.69 ± 0.10\% & 6.46 ± 0.20\% \\
Disturb Label & 3.91 ± 0.10\% & 6.61 ± 0.04\% \\
Focal Loss & 3.62 ± 0.07\% & 6.68 ± 0.22\% \\
Center Loss & 3.76 ± 0.05\% & 6.38 ± 0.20\% \\
Lq Loss & 3.78 ± 0.08\% & 6.69 ± 0.07\% \\
CGAN & 3.84 ± 0.07\% & 6.56 ± 0.14\% \\
ACGAN & 3.81 ± 0.11\% & 6.32 ± 0.12\% \\
infoGAN & 3.81 ± 0.05\% & 6.59 ± 0.12\% \\
ISDA & 3.58 ± 0.15\% & 6.33 ± 0.19\% \\
ISDA+DropOut & 3.58 ± 0.15\% & 5.98 ± 0.20\% \\\hline
\multicolumn{1}{|m{2.18cm}||}{LPL (mean+ fixed $\epsilon_c$)} & 3.39 ± 0.04\% & 5.83 ± 0.21\% \\
\multicolumn{1}{|m{2.18cm}||}{LPL ({mean+ varied $\epsilon_c$})} & \textbf{3.37 ± 0.04}\% & \textbf{5.72 ± 0.05}\% \\\hline
\end{tabular}
\end{table}
\begin{table}[tb]
\caption{Mean values and standard deviations of the test Top-1 errors for all the involved methods on CIFAR100.}\label{tab:banlance_cifar100}
\centering
\vspace{-0.08in}
\begin{tabular}{|p{2.18cm}||c||c|}\hline
Method & Wide-ResNet-28-10 & ResNet-110 \\\hline
Basic & 18.53 ± 0.07\% & 28.67 ± 0.44\% \\
Large Margin & 18.48 ± 0.05\% & 28.00 ± 0.09\% \\
Disturb Label & 18.56 ± 0.22\% & 28.46 ± 0.32\% \\
Focal Loss & 18.22 ± 0.08\% & 28.28 ± 0.32\% \\
Center Loss & 18.50 ± 0.25\% & 27.85 ± 0.10\% \\
Lq Loss & 18.43 ± 0.37\% & 28.78 ± 0.35\% \\
CGAN & 18.79 ± 0.08\% & 28.25 ± 0.36\% \\
ACGAN & 18.54 ± 0.05\% & 28.48 ± 0.44\% \\
infoGAN & 18.44 ± 0.10\% & 27.64 ± 0.14\% \\
ISDA & 17.98 ± 0.15\% & 27.57 ± 0.46\% \\
ISDA+DropOut & 17.98 ± 0.15\% & 26.35 ± 0.30\% \\\hline
\multicolumn{1}{|m{2.18cm}||}{LPL (mean+ fixed $\epsilon_c$)} & 18.19 ± 0.07\% & 26.09 ± 0.16\% \\
\multicolumn{1}{|m{2.18cm}||}{LPL ({mean+ varied $\epsilon_c$})} & \textbf{17.61 ± 0.30}\% & \textbf{25.87 ± 0.07\%} \\\hline
\end{tabular}
\label{table_CIFAR100_balance}
\end{table}
\section{Experiments}
Our proposed LPL is first evaluated on data augmentation, long-tail classification, and multi-label classification tasks. The properties of LPL are then analyzed with additional experiments. A Linux platform with four RTX3090 graphics cards is used; each card has 24 GB of memory.
\subsection{Experiments on Data Augmentation}
\textbf{Datasets and competing methods}. In this subsection, two benchmark image classification data sets, namely, CIFAR10 \cite{krizhevsky2009learning} and CIFAR100 \cite{krizhevsky2009learning}, are used. Both data sets consist of 32$\times$32 natural images, in 10 classes for CIFAR10 and 100 classes for CIFAR100, with 50,000 images for training and 10,000 images for testing. The training and testing configurations used in~\cite{wang2019implicit} are followed. Several classical and state-of-the-art robust loss functions and (semantic) data augmentation methods are compared: Large-margin loss~\cite{liu2016large}, Disturb label~\cite{xie2016disturblabel}, Focal Loss~\cite{lin2017focal}, Center loss~\cite{wen2016discriminative}, Lq loss~\cite{zhang2018generalized},
CGAN~\cite{mirza2014conditional}, ACGAN~\cite{odena2017conditional}, infoGAN~\cite{chen2016infogan}, ISDA, and ISDA + Dropout.
The Wide-ResNet-28~\cite{zagoruyko2016wide} and ResNet-110~\cite{he2016deep} are used as the base neural networks. Considering that the training/testing configuration is fixed for both sets, the results of the above competing methods reported in the ISDA paper~\cite{wang2019implicit} are directly presented (some are from their original papers). The training settings for the above base neural networks also follow the instructions of the ISDA paper and its released codes. Our methods have two variants.
\begin{itemize}
\item LPL (mean+fixed bound). In this version, the optimization in Eq. (\ref{special_loss1}) is used. Mean denotes that the threshold is $\text{mean}({\bar q_{c}})$. Fixed bound means that the value of $\epsilon_c$ is fixed and identical for all categories during optimization. It is searched in \{0.1, 0.2, 0.3, 0.4\}.
\item LPL (mean+varied bound). In this version, the optimization in Eq. (\ref{special_loss1}) is used. Theoretically, varied bound means that the value of $\epsilon_c$ varies according to Eq. (\ref{finalbound_lpl}). However, varied bounds within the same batch make the implementation more difficult and increase the training complexity. In our implementation, we instead set a varied number of updating steps for each category in our PGD-like optimization. The value of $\Delta \epsilon$ is searched in \{0.1, 0.2\}.
\end{itemize}
The Top-1 error is used as the evaluation metric. To conduct a fair comparison, the performances of the base neural networks with the standard cross-entropy loss are re-run before running our methods; almost identical results are obtained compared with those published in the ISDA paper.
\textbf{Results}. Tables \ref{tab:banlance_cifar10} and \ref{tab:banlance_cifar100} present the results of all competing methods on CIFAR10 and CIFAR100, respectively. Our LPL method (in both versions) achieves the best performance under almost all settings with the two base neural networks, and ISDA achieves the second-best performance. Only in the case of Wide-ResNet-28-10 on CIFAR100 is LPL (mean+fixed $\epsilon_c$) inferior to ISDA; even then, it still achieves the fourth-lowest error.
The results of LPL with varied bounds are better than those of LPL with fixed bounds. This comparison supports our motivation that a category with relatively low (high) performance should be more positively (negatively) augmented. More analyses comparing ISDA and our method are presented in the final part of this section. Naturally, a varied threshold would further improve the performance.
\subsection{Experiments on Long-tail Classification}
\textbf{Datasets and competing methods}. In comparison with the conference version of this paper, we supplement the experiments with real-world data sets. In the synthetic data set experiment, the long-tail versions of CIFAR10 and CIFAR100 compiled by Cui et al.~\cite{cui2019class} are used, called CIFAR10-LT and CIFAR100-LT, respectively. The training and testing configurations used in~\cite{menon2020long} are followed. In the real-world data set experiment, large-scale data sets iNaturalist 2017 (iNat2017)~\cite{van2018inaturalist} and iNaturalist 2018 (iNat2018)~\cite{iNaturalist2018} with extremely imbalanced class distributions are used. iNat2017 includes 579,184
training images in 5,089 classes with an imbalance factor
of 3919/9, while iNat2018 is composed of
435,713 images from 8,142 classes with an imbalance factor of 1000/2. Several classical and state-of-the-art robust loss functions and semantic data augmentation methods are compared: Class-balanced CE loss~\cite{wang2019implicit}, Class-balanced fine-tuning~\cite{Cui4190}, Meta-weight net~\cite{shu2019meta}, Focal loss~\cite{lin2017focal}, Class-balanced focal loss~\cite{cui2019class}, LDAM~\cite{cao2019learning}, LDAM-DAR~\cite{cao2019learning}, ISDA, and LA.
In the synthetic data set experiment, Menon et al.~\cite{menon2020long} released the training data for the imbalance ratio (i.e., $\pi_1/\pi_{100}$) of 100:1; hence, their data and reported results for the above competing methods are directly presented. When the ratio is 10:1, the results of ISDA+Dropout and LA are obtained by running their released codes, and the results of the remaining methods are taken from the study by Li et al.~\cite{li2021metasaug}. The hyper-parameter $\lambda$ in LA is searched in \{0.5, 1, 1.5, 2, 2.5\} according to the suggestion in~\cite{menon2020long}. Similar to the experiments in~\cite{menon2020long}, ResNet-32~\cite{he2016deep} is used as the base network. The results of ISDA, LA, and LPL are the average of five repeated runs.
In the real-world data set experiment, the results of the above competing methods reported in~\cite{menon2020long} are directly presented; in particular, the results of LA on iNat2018 are from its original paper~\cite{menon2020long}. The other results, such as those of ISDA+Dropout and LA on iNat2017, are obtained by running the released codes. Likewise, the hyper-parameter $\lambda$ in LA is searched in \{0.5, 1, 1.5, 2, 2.5\}. Similar to the experiments in~\cite{wu2020dist}, ResNet-50~\cite{he2016deep} is used as the base network. All results are the average of five repeated runs.
Our method has two variants: LPL (varied threshold + fixed bound) and LPL (varied threshold + varied bound). The threshold $\tau$ is searched in \{0.4$C$, 0.5$C$, 0.6$C$\}. In the fixed bound version, the value of $\Delta \epsilon$ is set to 0, and $\epsilon$ is searched in \{1.5, 2.5, 5\}. In the varied bound version, the value of $\epsilon$ is set to 0, and $\Delta \epsilon$ is searched in \{1.0, 2.0, 3.0\}.
Only one meta-based method, Meta-weight net, is involved because we mainly aim to compare methods that only modify the training loss. In addition, meta-based methods require an auxiliary high-quality validation set~\cite{li2021metasaug}. Other methods, such as BBN~\cite{zhou2020bbn}, which focus on new network structures, are also not included in the comparisons.
\begin{table}[tb]
\caption{Test Top-1 errors on CIFAR100-LT (ResNet-32).}\label{tab:longtail_cifar100}
\centering
\vspace{-0.08in}
\begin{tabular}{|p{3.9cm}||c||c|}\hline
Ratio & 100:1 & 10:1 \\\hline
Class-balanced CE loss & 61.23\% & 42.43\% \\
Class-balanced fine-tuning & 58.50\% & 42.43\% \\
Meta-weight net & 58.39\% & 41.09\% \\
Focal Loss & 61.59\% & 44.22\% \\
Class-balanced focal loss & 60.40\% & 42.01\% \\
LDAM & 59.40\% & 42.71\% \\
LDAM-DRW & 57.11\% & 41.22\% \\
ISDA + Dropout & 62.60\% & 44.49\% \\
LA & 56.11\% & 41.66\% \\\hline
LPL (varied $\tau$ + fixed $\epsilon_c$) & 58.03\% & 41.86\% \\
LPL (varied $\tau$ + varied $\epsilon_c$) & \textbf{55.75\%} & \textbf{39.03\%} \\\hline
\end{tabular}
\end{table}
\begin{table}[!t]
\caption{Test Top-1 errors on CIFAR10-LT (ResNet-32).}\label{tab:longtail_cifar10}
\centering
\vspace{-0.08in}
\begin{tabular}{|p{3.9cm}||c||c|}\hline
Ratio & 100:1 & 10:1 \\\hline
Class-balanced CE loss & 27.32\% & 13.10\% \\
Class-balanced fine-tuning & 28.66\% & 16.83\% \\
Meta-weight net & 26.43\% & 12.45\% \\
Focal Loss & 29.62\% & 13.34\% \\
Class-balanced focal loss & 25.43\% & 12.52\% \\
LDAM & 26.45\% & 12.68\% \\
LDAM-DRW & 25.88\% & 11.63\% \\
ISDA + Dropout & 26.45\% & 12.98\% \\
LA & 22.33\% & 11.07\% \\\hline
LPL (varied $\tau$ + fixed $\epsilon_c$) & 23.97\% & 11.09\% \\
LPL (varied $\tau$ + varied $\epsilon_c$) & \textbf{22.05\%} & \textbf{10.59\%} \\\hline
\end{tabular}
\end{table}
\begin{table*}[htb]
\caption{Results of mAP by our methods and other competing
approaches on VOC-MLT and COCO-MLT.}\label{tab:multi-label_cifar100}
\centering
\vspace{-0.08in}
\resizebox{.8\textwidth}{!}{
\begin{tabular}{|l||c||c||c||c||c||c||c||c|}\hline
Datasets & \multicolumn{4}{c||}{VOC-MLT} & \multicolumn{4}{c|}{COCO-MLT} \\\hline
Method & total & head & medium & tail & total & head & medium & tail \\\hline
ERM & 70.86\% & 68.91\% & 80.20\% & 65.31\% & 41.27\% & 48.48\% & 49.06\% & 24.25\% \\
RW & 74.70\% & 67.58\% & 82.81\% & 73.96\% & 42.27\% & 48.62\% & 45.80\% & 32.02\% \\
Focal Loss & 73.88\% & 69.41\% & 81.43\% & 71.56\% & 49.46\% & 49.80\% & 54.77\% & 42.14\% \\
RS & 75.38\% & 70.95\% & 82.94\% & 73.05\% & 46.97\% & 47.58\% & 50.55\% & 41.70\% \\
RS-Focal & 76.45\% & 72.05\% & 83.42\% & 74.52\% & 51.14\% & 48.90\% & 54.79\% & 48.30\% \\
ML-GCN & 68.92\% & 70.14\% & 76.41\% & 62.39\% & 44.24\% & 44.04\% & 48.36\% & 38.96\% \\
OLTR & 71.02\% & 70.31\% & 79.80\% & 64.95\% & 45.83\% & 47.45\% & 50.63\% & 38.05\% \\
LDAM & 70.73\% & 68.73\% & 80.38\% & 69.09\% & 40.53\% & 48.77\% & 48.38\% & 22.92\% \\
CB-Focal & 75.24\% & 70.30\% & 83.53\% & 72.74\% & 49.06\% & 47.91\% & 53.01\% & 44.85\% \\
R-BCE & 76.34\% & 71.40\% & 82.76\% & 75.22\% & 49.43\% & 48.77\% & 53.00\% & 45.33\% \\
R-BCE-Focal & 77.39\% & 72.44\% & 83.16\% & 76.77\% & 52.75\% & 50.20\% & 56.52\% & 50.02\% \\
R-BCE+NTR & 78.65\% & 73.16\% & 84.11\% & 78.66\% & 52.53\% & 50.25\% & 56.33\% & 49.54\% \\
R-BCE-Focal+NTR & 78.94\% & 73.22\% & 84.18\% & 79.30\% & 53.55\% & 51.13\% & 57.05\% & 51.06\% \\
R-BCE+LC & 78.08\% & 73.10\% & 83.49\% & 77.75\% & 53.68\% & 50.58\% & 57.10\% & 51.90\% \\
R-BCE-Focal+LC & 78.66\% & 72.74\% & 83.45\% & 79.52\% & 53.94\% & 50.99\% & 57.47\% & 51.88\% \\\hline
R-BCE+LPL (varied $\tau$ + fixed $\epsilon_c$) & 78.64\% & 73.00\% & 82.81\% & 79.74\% & 53.97\% & 50.23\% & 57.36\% & 52.79\% \\
R-BCE+LPL (varied $\tau$ + varied $\epsilon_c$) & 79.02\% & 72.39\% & 82.14\% & 81.64\% & 54.35\% & 51.48\% & 57.72\% & 52.42\% \\
R-BCE-Focal+LPL (varied $\tau$ + fixed $\epsilon_c$) & 79.17\% & 73.33\% & 83.56\% & 80.27\% & 54.37\% & 51.14\% & 57.68\% & 52.85\% \\
R-BCE-Focal+LPL (varied $\tau$ + varied $\epsilon_c$) & \textbf{79.57}\% & 73.47\% & 83.95\% & 80.87\% & \textbf{54.76}\% & 50.78\% & 58.12\% & 53.81\% \\\hline
\end{tabular}}
\end{table*}
\begin{table}[h]
\caption{Test Top-1 errors on real-world datasets (ResNet-50). }\label{tab:longtail_inat}
\centering
\vspace{-0.08in}
\begin{tabular}{|p{3.8cm}||c||c|}\hline
Method & iNat2017 & iNat2018 \\\hline
Class-balanced CE loss & 42.02\% & 33.57\% \\
Class-balanced fine-tuning & -- & -- \\
Meta-weight net & -- & -- \\
Focal Loss& -- & -- \\
Class-balanced focal loss & 41.92\% & 38.88\% \\
LDAM & 39.15\% & 34.13\% \\
LDAM-DRW &37.84\% & 32.12\% \\
ISDA + Dropout & 43.37\% & 39.92\% \\
LA & 36.75\% & 31.56\% \\\hline
LPL (varied $\tau$ + fixed $\epsilon_c$) & 38.47\% & 32.06\% \\
LPL (varied $\tau$ + varied $\epsilon_c$) & \textbf{35.86}\% & \textbf{30.59}\% \\\hline
\end{tabular}
\end{table}
\begin{table}[b]
\caption{The error reduction of LPL (varied $\tau$+varied $\epsilon$) over LA on the two data sets.}\label{tab:LPL_vs_LA}
\centering
\vspace{-0.08in}
\begin{tabular}{|c||p{1.1cm}||p{1.1cm}||p{1.1cm}||p{1.1cm}|}\hline
Ratio & \multicolumn{2}{c||}{100:1} & \multicolumn{2}{c|}{10:1} \\\hline
Dataset & C100 & C10 & C100 & C10 \\\hline
LA & 56.11\% & 22.33\% & 41.66\% & 11.07\% \\\hline
LPL & 55.75\% (-0.36\%) & 22.05\% (-0.28\%) & 39.03\% (-2.63\%) & 10.59\% (-0.48\%) \\\hline
\end{tabular}
\end{table}
\textbf{Results}. The Top-1 error is also used. Table \ref{tab:longtail_cifar100} shows the results of all the methods on the CIFAR100-LT data. On the ratios 100:1 and 10:1, LPL (varied $\tau$ + varied $\epsilon_c$) yields the lowest Top-1 errors, exceeding the best competing method LA by 0.36\% and 2.63\%, respectively. Table \ref{tab:longtail_cifar10} shows the results of all the methods on the CIFAR10-LT data, where LPL (varied $\tau$ + varied $\epsilon_c$) again obtains the lowest Top-1 errors on both ratios. Table \ref{tab:longtail_inat} shows the results of all the methods on iNat2017 and iNat2018; on these real-world long-tail data sets, it still exceeds LA by 0.89\% and 0.97\%, respectively. In all the comparisons, the semantic augmentation method ISDA obtains poor results. On CIFAR100-LT, ISDA achieves the worst performance on both ratios. This result is expected because ISDA aims to positively augment all categories equally and does not favor tail categories, which may cause tail categories to suffer from this positive augmentation. Nevertheless, ISDA performs better on CIFAR10-LT than on CIFAR100-LT. In Fig. \ref{relative_loss_variations_single} (b), the loss increments of tail categories are larger than those of the head ones; that is, larger augmentations are exerted on tail categories.
We list the Top-1 errors of LA and LPL (varied $\tau$ + varied $\epsilon_c$) in Table \ref{tab:LPL_vs_LA} to better present the comparison. When the ratio is smaller, the improvements (error reductions) are relatively larger. This result is reasonable because when the ratio becomes small, the effectiveness of LA is subsequently weakened. When the imbalance ratio is one, indicating that there is no imbalance, LA loses effect; however, our LPL can still augment the training data effectively.
\begin{table*}[htb]
\caption{Number of parameters and test Top-1 errors of ISDA and LPL with different base networks.}
\label{tab:other_lpl}
\centering
\vspace{-0.08in}
\resizebox{.7\textwidth}{!}{
\begin{tabular}{|l||c||c||c|}\hline
Method & \#Params & CIFAR10 & CIFAR100 \\\hline
ResNet-32+ISDA & 0.5M & 7.09 $\pm$ 0.12\% & 30.27 $\pm$ 0.34\% \\
ResNet-32+LPL (mean + fixed $\epsilon_c$) & 0.5M & 7.01 $\pm$ 0.16\% & 29.59 $\pm$ 0.27\% \\
ResNet-32+LPL (mean + varied $\epsilon_c$) & 0.5M & \textbf{6.66 $\pm$ 0.09\%} & \textbf{28.53 $\pm$ 0.16\%} \\\hline
SE-ResNet110+ISDA & 1.7M & 5.96 $\pm$ 0.21\% & 26.63 $\pm$ 0.21\% \\
SE-ResNet110+LPL (mean + fixed $\epsilon_c$) & 1.7M & 5.87 $\pm$ 0.17\% & 26.12 $\pm$ 0.24\% \\
SE-ResNet110+LPL (mean + varied $\epsilon_c$) & 1.7M & \textbf{5.39 $\pm$ 0.10\%} & \textbf{25.70 $\pm$ 0.07\%} \\\hline
Wide-ResNet-16-8+ISDA & 11.0M & 4.04 $\pm$ 0.29\% & 19.91 $\pm$ 0.21\% \\
Wide-ResNet-16-8+LPL (mean + fixed $\epsilon_c$) & 11.0M & 3.97 $\pm$ 0.09\% & 19.87 $\pm$ 0.02\% \\
Wide-ResNet-16-8+LPL (mean + varied $\epsilon_c$) & 11.0M & \textbf{3.93 $\pm$ 0.10\%} & \textbf{19.83 $\pm$ 0.09\%} \\\hline
\end{tabular}
}
\end{table*}
\subsection{Experiments on Multi-label Classification}
\textbf{Datasets and competing methods}. In this part, the long-tail multi-label versions of VOC~\cite{everingham2015pascal} and MS-COCO~\cite{lin2014microsoft} compiled by Wu et al.~\cite{wu2020dist} are used, called VOC-MLT and COCO-MLT, respectively. The training and test configurations used in~\cite{wu2020dist} are followed. The training set of VOC-MLT is sampled from the train-val set of
VOC2012, containing 1142 images from 20 categories, with a maximum of 775 images per category and a minimum of 4 images per category. A total of 4952 images from the VOC2007 test set are used for evaluation. COCO-MLT is sampled from the MS-COCO 2017 dataset, containing 1909 images from 80 categories, with a maximum of 1128 images per category and a minimum of 6 images per category. A total of 5000 images from the MS-COCO 2017 test set are used for evaluation.
We mainly compare with NTR and LC, both of which perturb logits. The code of LC is not open-sourced.
To keep the experimental setup consistent, we conduct both comparison experiments on the basis of R-BCE~\cite{wu2020dist}. Several classical and state-of-the-art robust loss functions and multi-label methods are compared:
Empirical Risk Minimization (ERM), Re-Weighting (RW), Focal Loss~\cite{lin2017focal}, Re-Sampling (RS)~\cite{shen2016relay}, ML-GCN~\cite{chen2019multi}, OLTR~\cite{liu2019large}, LDAM~\cite{cao2019learning},
CB-Focal~\cite{cui2019class},
R-BCE~\cite{wu2020dist},
R-BCE-Focal~\cite{wu2020dist},
R-BCE+NTR~\cite{wu2020dist},
R-BCE-Focal+NTR~\cite{wu2020dist},
R-BCE+LC~\cite{guo2021long}, and
R-BCE-Focal+LC~\cite{guo2021long}.
Wu et al.~\cite{wu2020dist} released the training data and code; hence, their data and reported results for the above competing methods are directly presented. The results of LC are obtained by re-implementing the formula from its original paper. Similar to the experiments in~\cite{wu2020dist}, ResNet-50~\cite{he2016deep} is used as the base network.
Our method has two variants: LPL (varied threshold + fixed bound) and LPL (varied threshold + varied bound). The threshold $\tau$ is searched in \{0.4$C$, 0.5$C$, 0.6$C$\}. In the fixed bound version, the value of $\Delta \epsilon$ is set to 0, and $\epsilon$ is searched in \{0.05, 0.1, 0.1\}. In the varied bound version, the value of $\epsilon$ is set to 0, and $\Delta \epsilon$ is searched in \{0.1, 0.2, 0.3\}. Other experimental setups, such as training epochs and the optimizer, follow NTR.
\textbf{Results}. The evaluation metric mAP is used. Table \ref{tab:multi-label_cifar100} shows the results of all the methods on VOC-MLT and COCO-MLT. Our method achieves competitive or better results. R-BCE-Focal+LPL (varied $\tau$ + varied $\epsilon_c$) achieves the best results on both VOC-MLT and COCO-MLT: it outperforms R-BCE-Focal+NTR by 0.63\% and 1.21\%, respectively, and outperforms R-BCE-Focal+LC by 0.91\% and 0.82\%, respectively. It also exceeds the R-BCE-Focal baseline by 2.18\% on VOC-MLT and by 2.01\% on COCO-MLT. Similarly, when our method is added to the baseline R-BCE, it further improves the performance. The effectiveness of LPL is thus well demonstrated.
\begin{table}[b]
\caption{Test Top-1 errors of three methods on two data sets.}\label{tab:table_LPL+LA}
\centering
\vspace{-0.08in}
\begin{tabular}{|c||c||c|}\hline
Method & CIFAR10-LT100 & CIFAR100-LT100 \\\hline
LA & 22.33\% & 56.11\% \\
LPL & 22.05\% & 55.75\% \\
LA+LPL & \textbf{21.46\%} & \textbf{53.89\%} \\\hline
\end{tabular}
\end{table}
\begin{table}[b]
\caption{Results of mAP by our methods and other competing
approaches on MS-COCO.}\label{tab:multi-label-big}
\centering
\vspace{-0.08in}
\begin{tabular}{|l||c|}\hline
Method & MS-COCO \\\hline
R-BCE+NTR & 83.7\% \\
R-BCE+LC & 84.5\% \\
R-BCE+LPL(varied $\tau$ + varied $\epsilon_c$) & \textbf{85.4\%} \\\hline
\end{tabular}\label{oriMLT}
\end{table}
\subsection{More Analysis for Our Method}
\textbf{Improvements on existing methods}. Our LPL method seeks the perturbation via an optimization scheme. In ISDA and LA, the perturbations are directly calculated rather than optimized. A natural question thus arises: can the perturbations in existing methods be further improved via our method? Therefore, we propose a combination method with the following loss for imbalanced image classification:
\begin{equation*}
\begin{aligned}
&\sum\limits_{{\rm{c}} \in {\bm {\mathcal{N}}_a}} {\sum\limits_{{\bm x_i} \in {\bm S_c}} {\mathop {\min }\limits_{\left\| {\bm {\tilde \delta} _{{c}}} \right\| \le \epsilon_c } l(\text{softmax} ({\bm u_i} + \lambda \log \boldsymbol{\pi} + \bm {\tilde \delta} _{{c}}),\bm y_i)} } \\
&+\sum\limits_{{\rm{c}} \in {\bm {\mathcal{P}}_a}} {\sum\limits_{{\bm x_i} \in {\bm S_c}} {\mathop {\max }\limits_{\left\| {\bm {\tilde \delta} _{{c}}} \right\| \le \epsilon_c } l(\text{softmax} ({\bm u_i} + \lambda \log \boldsymbol{\pi} + \bm {\tilde \delta} _{{c}}),{\bm y_i})} },
\end{aligned}
\end{equation*}
where $\log \boldsymbol{\pi} = [\log \pi_1, \cdots, \log \pi_C]$. When all $\epsilon_c$s are zero, the above loss becomes the loss of LA; when $\lambda$ is zero, it becomes our LPL (with fixed bound). We conducted experiments on CIFAR10-LT100 and CIFAR100-LT100, with ResNet-32 as the base model; the results are shown in Table \ref{tab:table_LPL+LA}.
The value of $\lambda$ is searched in \{0.5, 1, 1.5, 2, 2.5\}.
The threshold $\tau$ is set to 4 and 40 on CIFAR10 and CIFAR100, respectively. Other parameters follow the settings in the previous experiments.
The combination method LA+LPL achieves the lowest errors in both comparisons, indicating that our LPL can further improve the performance of existing state-of-the-art methods; ISDA can likewise be improved in the same manner. An illustrative sketch of the combined loss follows.
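As an illustration only, the combined objective can be sketched as follows, reusing the \texttt{lpl\_perturb} routine sketched earlier; \texttt{log\_prior} holds $\log \boldsymbol{\pi}$ and \texttt{lam} corresponds to $\lambda$, and the names are again our own rather than those of the released code.
\begin{verbatim}
import torch.nn.functional as F

def la_lpl_loss(logits, labels, log_prior, sign, steps,
                lam=1.0, delta_eps=1.0):
    # LA's additive adjustment is applied first, then the LPL
    # perturbation is sought on top of the adjusted logits.
    adjusted = logits + lam * log_prior
    perturbed = lpl_perturb(adjusted, labels, sign, steps, delta_eps)
    return F.cross_entropy(perturbed, labels)
\end{verbatim}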
\begin{figure}[htb]
\centering
\includegraphics[width=0.96\columnwidth, height=3.8in]{img/789.png} \vspace{-0.1in}
\caption{Relative loss variations of our LPL on two balanced data sets, two long-tail data sets, and two multi-label data sets. ``pos'' denotes the relative loss variations of positive samples; ``neg'' denotes the relative loss variations of negative samples.}
\label{fig_loss_variation_lpl}
\end{figure}
\textbf{More comparisons with ISDA}. ISDA claims that it does not increase the number of parameters compared with direct learning on the basic DNN models. Our method likewise does not increase the number of model parameters, because the perturbation terms are not used in the final prediction.
Table \ref{tab:other_lpl} shows the comparisons between ISDA and LPL (two variants) on three additional base DNN models, namely, SE-ResNet110~\cite{hu2018squeeze}, Wide-ResNet-16-8~\cite{zagoruyko2016wide}, and ResNet-32. The numbers of parameters are equal for ISDA and LPL. Nevertheless, the two variants of our method LPL outperform ISDA on both data sets under all five base models considered in this paper.
\textbf{Loss variations of LPL during training}. For single-label classification, we plot the loss variations of LPL on two balanced and two long-tail data sets to assess whether our method is in accordance with the two conjectures. The curves are shown in Fig. \ref{fig_loss_variation_lpl} (a) and (b). On the balanced data, the relative loss variations are similar to those of ISDA; on the long-tail data, the losses of head categories are reduced whereas those of tail ones are increased, which is similar to the behavior of LA. For multi-label classification, Fig. \ref{fig_loss_variation_lpl} (c) shows the results.
In comparison with NTR and LC, our method LPL focuses more on the tail categories, according to the trends of relative loss reduction.
\textbf{Performances of LPL under different $\tau$ and $\epsilon_c$}. The threshold for category-set split and the bound for augmentation extent are two important hyper-parameters in LPL. Based on our experiments, the following observations are obtained. On the balanced data sets, the results are relatively stable when the bound lies in [0.1, 0.5]; when the threshold is searched around $\text{mean}({\bar q_{c}})$, the results are usually better. On the long-tail data sets, the results are relatively stable when the bound lies in [1.5, 5.0], and when the threshold is searched in \{0.4$C$, 0.5$C$, 0.6$C$\}, the results are usually good. Long-tail problems require a larger extent of data augmentation.
\textbf{More comparisons with NTR and LC}. We also compare our method with NTR and LC on the original multi-label dataset MS-COCO. MS-COCO contains 122,218 images with 80 different labels, divided into a training set with 82,081 images and a test set with 40,137 images. In this part, ResNet-110 is used as the backbone network and the input size is 448$\times$448. Other setups follow Subsection C of Section IV. Table \ref{oriMLT} shows the results; the evaluation metric mAP is used. Again, our method achieves competitive results: R-BCE+LPL (varied $\tau$ + varied $\epsilon_c$) exceeds R-BCE+NTR and R-BCE+LC by 1.7\% and 0.9\%, respectively.
\section{Conclusions}
This study investigates class-level logit perturbation in deep learning. Two conjectures on the relationship between (logit perturbation-incurred) loss increment/decrement and positive/negative data augmentation are proposed. To support the two conjectures, theoretical investigation is performed in the presence of class imbalance and variance imbalance. On the basis of the two conjectures and our theoretical findings, new methodologies are introduced to learn to perturb logits (LPL) during DNN training for both single-label and multi-label learning tasks. Two key components of LPL, namely, category-set split and boundary calculation, are investigated. Extensive experiments on data augmentation (for balanced classification), long-tail classification, and multi-label classification are conducted. LPL achieves the best performance in these settings under different base networks. Existing methods with logit perturbation (e.g., LA) can also be improved by using our method.
\bibliographystyle{IEEEtran}
\section{Prompt photon production}
\subsection{Introduction}\label{subsec:intro1}
Isolated high transverse energy photons in the final state
are a powerful tool for detailed studies of Quantum Chromodynamics (QCD)
in hard interaction processes and of the hadronic structure of the
incoming particles.
Photons are called ``prompt'' if they couple directly to the
interacting quarks, instead of being produced as hadronic decay products.
In contrast to jet measurements, where the partonic structure is
obscured by the non-perturbative hadronisation process, prompt photons
at large transverse energy $E_{T}^{\gamma}$ can be directly related to
the partonic event structure. Furthermore, the experimental uncertainties
connected with the energy determination of an electromagnetic shower
initiated by a photon are smaller compared to the measurement of a hadron jet.
However, the cross section for prompt photon production is small
and the identification of photons in the detector is not trivial.
Preliminary results for two analyses are presented here:
An H1 study of inclusive prompt photons in deep inelastic scattering
(DIS)\cite{pph1},
and a ZEUS study of prompt photons with an accompanying jet in
photoproduction\cite{ppzeus}.
\subsection{Prompt~Photon~identification}\label{subsec:ppident}
The main experimental difficulty is the separation of the prompt photons
from hadronic background, in particular from signals due to $\pi^{0}$ mesons.
In the H1 analysis photons are identified in the liquid argon calorimeter
by a compact electromagnetic cluster with no track pointing to it.
The photon transverse energy $E^{\gamma}_{T}$ and pseudorapidity
$\eta^{\gamma}$ are restricted to 3~GeV~$<~E^{\gamma}_{T}~<$~10~GeV and
$-1.2~<~\eta^{\gamma}~<~1.8$. In order to compare with perturbative QCD
(pQCD) calculations,
the photon isolation requirement is defined in an infrared-safe way,
using: $z = E^{\gamma} / E^{photonjet} > 0.9$,
i.e. $z$ is the ratio of the photon energy to the energy
of the jet, which contains the photon.
The photon signal is extracted by a shower shape analysis which uses six
discriminating shower shape functions in a likelihood method.
In the ZEUS analysis photons are identified using the barrel preshower
detector (BPRE). The BPRE prompt photon signal is determined using the
conversion probability in the detector, known from a study of DVCS
data\cite{dvcs}.
The photon kinematic range is restricted to
$5~< E^{\gamma}_{T} < 16$~GeV and
$-0.7~<~\eta^{\gamma}~<~1.1$, where positive $\eta^{\gamma}$ corresponds
to the proton beam direction. The photon isolation criteria are similar
to the ones used in the H1 analysis.
Hadronic jets were selected in the kinematic range
$6<E^{jet}_{T}<17$~GeV and $-1.6<\eta^{jet}<2.4$.
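As a toy illustration, the H1-style selection described above can be summarized by the following snippet; the function and variable names are ours, and in the real analysis the inputs come from reconstructed calorimeter clusters and jets.
\begin{verbatim}
def select_prompt_photon(e_gamma, e_photon_jet, et_gamma, eta_gamma):
    # Infrared-safe isolation: ratio of the photon energy to the
    # energy of the jet that contains the photon.
    z = e_gamma / e_photon_jet
    return (z > 0.9
            and 3.0 < et_gamma < 10.0   # GeV
            and -1.2 < eta_gamma < 1.8)
\end{verbatim}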
\subsection{Results}\label{subsec:ppres}
Differential cross sections for the production of isolated photons in DIS
measured by H1 are shown in Fig.~\ref{fig1}, as functions of $E^{\gamma}_{T}$ and
$\eta^{\gamma}$. A new LO($\alpha^{3}$)
calculation\cite{lo1} gives a good description, although it lies slightly
below the data.
At large pseudorapidities the dominant contribution is radiation off
the quark line (QQ), whereas in the backward region the radiation off the
electron line (LL) dominates the cross section.
\begin{figure}
\hspace*{0.3cm}
\includegraphics[width=0.45\textwidth]{f031.4b.ps}
\hspace*{0.3cm}
\includegraphics[width=0.45\textwidth]{f031.4a.ps}
\caption{ Prompt photon differential cross sections $d\sigma/dE^{\gamma}_{T}$
for $-1.2< \eta^{\gamma} <1.8$ and $d\sigma/d\eta^{\gamma}$ for
3~GeV$< E^{\gamma}_{T} <$ 10~GeV, for photon virtualities
$Q^2 > $ 4~GeV$^{2}$ and
$y_{e} >$ 0.05. Curves show an LO calculation with LL and QQ giving the
contribution of radiation off the electron and the quark line respectively.
As the interference is very small it is not shown, but included in the sum.}
\label{fig1}
\end{figure}
A comparison with predictions of the PYTHIA and HERWIG
generators (radiation off the quark) plus photon radiation off the electron
is also made~\cite{pph1}.
Both generators describe the shape in $E^{\gamma}_{T}$ well, but are
significantly lower in the absolute scale (a factor 2.3 for PYTHIA and 2.6 for HERWIG).
The $\eta^{\gamma}$ distribution is better described by PYTHIA.
The ZEUS differential cross sections as functions of $E^{\gamma}_{T}$ and
$\eta^{\gamma}$ for the prompt photons are shown in Fig.~\ref{fig2}.
Two next-to-leading order (NLO) pQCD predictions are compared to the data.
Both calculations use several photon and proton parton distribution functions
and several fragmentation functions.
The FGH (Fontannaz, Guillet and Heinrich)~\cite{fgh} calculation contains
additional
higher order corrections to the resolved photon process. Like the KZ
(Krawczyk and Zembrzuski)~\cite{kz} prediction, it describes the data
rather well.
However, they both underestimate the observed cross section at low
$E^{\gamma}_{T}$ and in the backward region.
The difference between the data and the
NLO QCD calculations is mainly observed in the $x_{\gamma}^{obs} < 0.75$
region (not shown), which is sensitive to the resolved photon contribution.
A comparison with the prediction of Lipatov and Zotov\cite{lz} (LZ), which
is based on
$k_{T}$-factorisation, corrected for hadronisation effects, is also shown.
The LZ prediction gives the best description of the $E^{\gamma}_{T}$ and
$\eta^{\gamma}$ cross sections. In particular, it describes the
lowest $E^{\gamma}_{T}$ region better than the KZ and FGH NLO predictions.
PYTHIA and HERWIG do not rise as steeply
at low $E^{\gamma}_{T}$ as do the data and underestimate the measured
cross section.
\begin{figure}
\includegraphics[width=0.5\textwidth]{zeus.prph_3.eps}
\caption{ The differential cross section for the prompt photon events with an
accompanying jet as functions of $E^{\gamma}_{T}$ and $\eta^{\gamma}$
compared to theoretical QCD calculations (including hadronisation corrections).
The shaded bands correspond to the uncertainty in the renormalisation scale
which was changed by factors 0.5 and 2.}
\label{fig2}
\end{figure}
Since the largest difference between the NLO calculations and the data is
observed in the region of low $E^{\gamma}_{T}$ and low $\eta^{\gamma}$,
the level of agreement with NLO QCD was verified by increasing the minimum
transverse energy of prompt photons from 5 to 7 GeV, for which
hadronisation corrections are expected to be smaller. As shown in Fig.~\ref{fig3},
with $E^{\gamma}_{T} > 7$~GeV the NLO QCD and the LZ
predictions all agree well with the data.
The PYTHIA model then also agrees well, while HERWIG is still below the data.
\begin{figure}
\includegraphics[width=0.5\textwidth]{zeus.prph_6.eps}
\caption{ The differential cross section for the $\gamma$ + jet events as
function of: a) $\eta^{\gamma}$, b) E$^{jet}_{T}$ and c) $\eta^{jet}$ compared
to QCD calculations (with hadronisation corrections) and Monte Carlo
models. The cut on $E^{\gamma}_{T}$ is increased to 7 GeV. }
\label{fig3}
\end{figure}
\section{Charged Particle Momentum distributions}
In an H1 analysis\cite{pmdh1} the process of parton fragmentation and
hadronisation is studied using inclusive charged particle spectra in
DIS.
In the current region of the Breit frame a comparison with one
hemisphere of $e^{+}e^{-}$ annihilation offers a direct possibility
to test quark fragmentation universality.
The energy scale for the current region, set by the virtual
photon, is given by $Q/2$ and, for purpose of comparison, is taken to be
equivalent to one half of the $e^{+}e^{-}$ c.m. energy $E^{*}/2$.
In the Breit frame the scaled momentum variable $x_p$ is defined to be
$2p^{\pm}_{h} / Q$, where $p^{\pm}_{h}$ is the momentum of a charged particle.
In $e^{+}e^{-}$ annihilation events the equivalent variable is
$2p^{\pm}_{h} / E^{*}$.
The much higher statistics now available at
high $Q$ compared to previous studies~\cite{st1,st2}, as well as an improved
understanding of the H1 detector and associated systematics, provide a much
improved measurement of the scaled momentum spectra. Results are now available
up to $\langle Q \rangle \sim$ 100 GeV, close to the LEP-1 c.m. energy, and in the
full range of $x_p$ ($0 < x_p < 1$).
In Fig.~\ref{fig4} the inclusive, event-normalised, charged particle scaled momentum
spectrum is shown as a function of $Q$ for nine different bins of $x_p$,
together with results from $e^{+}e^{-}$ annihilation
data (see references in ref.~\refcite{pmdh1}).
As seen, the $ep$ and $e^{+}e^{-}$ data
are in excellent agreement, which supports the concept of
quark fragmentation universality.
Moving from low to high $Q$, the $x_p$ spectra become softer, i.e. there is a
dramatic increase in the number of
hadrons with a small share of the initial parton's momentum and a decrease
of those with a large share.
These scaling violations (parton splitting in QCD) are compatible
with the scaling violations observed for the DIS structure functions.
In the RAPGAP simulation~\cite{rapgap}, also shown in Fig.~\ref{fig4}, the
Parton Shower model is implemented.
It describes the fragmentation process as the splitting
of parent partons into two daughters (e.g. $q \rightarrow qg$,
$g \rightarrow gg$, $g \rightarrow q\bar{q}$), with the daughters
subsequently going on to become parents themselves. The evolution of the
parton shower is based on
leading $\log{Q^2}$ DGLAP splitting functions. RAPGAP gives a very good
description of the $ep$ scaled momentum spectra over the whole range of $x_p$.
\begin{figure}
\includegraphics[width=0.5\textwidth]{f033.f7b.ps}
\caption{ H1 data for the event normalised inclusive scaled momentum
spectrum as a function of $Q$ for nine different $x_p$ regions.
Also shown are data from various $e^{+}e^{-}$ experiments
(taking $Q = E^{*}$). The DIS data are compared with the RAPGAP generator.}
\label{fig4}
\end{figure}
\section{Introduction}
\label{sec:introduction}
Pedestrian detection, as the first and most fundamental step in many real-world tasks, \eg, human behavior analysis,
gait recognition, intelligent video surveillance and automatic driving, has attracted massive attention in the last
decade~\cite{dollar2009pedestrian,Geiger2012CVPR,dollar2009integral,zhang2015filtered,zhang2016faster,xiang2016subcategory}.
However, while great progress has been made by deep convolutional neural
networks (CNNs) on general object detection~\cite{ren2015faster,liu2015ssd,Dai2016aa,he2015deep},
progress in the realm of pedestrian detection has been less cumulative, owing to two major challenges.
\begin{figure}[!tb]
\subfigure[]{
\centering
\includegraphics[width=1\linewidth]{figuresv2/figures-intro-1}
\label{fig:challengingcases1}
\vspace{-1em}
}
\subfigure[]{
\centering
\includegraphics[width=1\linewidth]{figuresv2/figures-intro-2}
\label{fig:challengingcases2}
}
\caption[]{(a) Examples of true pedestrians and hard negative samples of low resolution. Without extra semantic contexts,
it is difficult to discriminate between them, even for human eyes.
(b) Example of pedestrians in crowded scenes, where CNN-based detectors fail to locate each individual without
low-level apparent features.}
\label{fig:challengingcases}
\vspace{-1.8em}
\end{figure}
\begin{figure*}[!tp]
\begin{center}
\includegraphics[width=1.0\linewidth]{figuresv2/cf2.pdf}
\vspace{-1em}
\end{center}
\caption{A demonstration of various channel features, including apparent-to-semantic features, temporal features, and depth features.}
\label{fig:ChannelFeatureIntroduction}
\vspace{-1em}
\end{figure*}
Firstly, compared to general objects, pedestrians are less discriminable from backgrounds. In other words, the discrimination relies more on
the semantic contexts. As shown in Figure~\ref{fig:challengingcases1}, usually appearing in low resolution
(less than $20{\times}40$ pixels), pedestrians together with the cluttered background bring about hard negative samples, such as
traffic signs, pillar boxes, and models in shopping windows, which have very similar apparent features to pedestrians.
Without extra semantic contexts, detectors working with such low-resolution inputs are unable to discriminate between them,
resulting in the decrease in recall and increase in false alarms.
How to accurately locate each pedestrian is the second challenge.
Figure~\ref{fig:challengingcases2} is one showcase in practical applications where the pedestrians
stand close in a crowded scene.
As a result, detectors typically fail to locate each individual and hence produce a number of false positives due to
inaccurate localization.
This problem becomes even worse for CNN-based detectors since while convolution and pooling layers generate
high-level semantic activation maps, they also blur the boundaries between closely-laid instances.
An intuitive alternative to address the problem is to make use of extra low-level apparent features (\eg edges),
for the purpose of solving the localization drawbacks by providing detectors with detailed apparent information.
In addition, in many applications, detectors can also benefit from other information,
like depth when the camera is equipped with a depth sensor, or temporal information when a video sequence is input.
However, it is still unclear how these information can be utilized by detectors, especially CNN-based detectors.
Given the observations above, one question comes up naturally: {\it what kind of extra features are effective and
how they actually work to improve the CNN-based pedestrian detectors?}
In this paper, we aim to answer this question and explore the characteristics of different extra features in
pedestrian detection task. This paper contributes to:
\begin{adjustwidth}{1em}{0em}
\noindent $\bullet$~Firstly, we integrate extra features as input channels into CNN-based detectors.
To investigate three groups of channel features: apparent-to-semantic channels, temporal channels and depth
channels, extensive experiments are carried out on the KITTI pedestrian dataset~\cite{Geiger2012CVPR}. \par
\noindent $\bullet$~Then, we experimentally analyze both advantages and disadvantages of different kinds of channel
features.
Specifically, we quantify the improvement brought by different channel features and provide insight into the error sources.\par
\noindent $\bullet$~Moreover, a novel network architecture, namely HyperLearner, is proposed to aggregate extra
features in a multi-task learning manner. In HyperLearner, channel features are aggregated as supervision instead of extra inputs,
and hence it is able to utilize the information of given features and improve detection performance while requiring no
extra inputs in inference.
We verify the effectiveness of HyperLearner on several pedestrian detection benchmarks and
achieve state-of-the-art performance.
\end{adjustwidth}
\section{Related work}
Traditional pedestrian detectors, extended from the Viola and Jones paradigm~\cite{viola2004robust},
such as \verb'ACF'~\cite{dollar2014fast}, \verb'LDCF'~\cite{nam2014local}, and
\verb'Checkerboards'~\cite{zhang2015filtered}, filter various Integral Channel Features (ICF)~\cite{dollar2009integral}
before feeding them into a boosted decision forest, and dominated the field of pedestrian detection for years.
Coupled with the prevalence of deep convolutional neural networks, CNN-based models~\cite{li2015scale,zhang2016faster,
cai2016unified} have pushed pedestrian detection results to an unprecedented level.
In~\cite{zhang2016faster}, given region proposals generated by a Region Proposal Network (RPN),
CNN features extracted by an RoI pooling layer~\cite{girshick2015fast} are fed into a boosted forest,
while Cai \etal~\cite{cai2016unified} propose a downstream neural network architecture to
perform end-to-end detection.
Integrating channel features of different types has been proved to be useful in many decision-forest-based pedestrian
detectors. Prior work by Park \etal ~\cite{park2013exploring} embeds optical flow into a boosted decision forest to
improve pedestrian detectors working on video clips. \verb'CCF'~\cite{yang2015convolutional}
uses the activation maps of a VGG-16~\cite{simonyan2014very} network
pretrained on ImageNet~\cite{krizhevsky2012imagenet} as channel feature, while Costea and
Nedevschi~\cite{daniel2016semantic} utilize the heatmap of semantic scene parsing, in which detectors benefit from the
semantic information within a large receptive field.
However, whether and how CNN-based pedestrian detectors can benefit from extra features
still lacks systematic study.
\section{Channel features for pedestrian detection}
In this section, we empirically explore the performance boost when extra channel features are integrated into CNN-based detectors.
\subsection{Preliminaries}
\label{subsec:channelfeaturepreliminaries}
Before delving into our experiments, we first describe the dataset, evaluation metrics and baseline detector we use.
\myparagraph{KITTI dataset}
We choose the KITTI dataset~\cite{Geiger2012CVPR} for channel feature analysis because it contains
pedestrians of various scales in numerous scenes, as well as adjacent-frame and stereo information.
KITTI contains $7,481$ labeled images of resolution $1250{\times}375$ and another $7,518$ images for testing.
The training set is further split into two independent set for training and validation following~\cite{chen20153d}.
The person class in KITTI is divided into two sub-classes: pedestrian and cyclist, both evaluated under PASCAL criteria~\cite{everingham2010pascal}.
KITTI provides three evaluation metrics: {\it easy, moderate} and {\it hard},
which differ in the minimum bounding box height, maximum occlusion level, \etc.
The standard evaluation follows the moderate metric.
\myparagraph{Faster R-CNN}
Our baseline detector is an implementation of Faster R-CNN~\cite{ren2015faster},
initialized with VGG-16~\cite{simonyan2014very} weights pretrained on ImageNet~\cite{krizhevsky2012imagenet}.
It consists of two components: a fully convolutional Region Proposal Network (RPN) for proposal generation,
and a downstream Fast R-CNN (FRCNN) detector taking regions with high foreground likelihood as input.
Since KITTI contains abundant small objects, we slightly modify the framework following~\cite{xiang2016subcategory} and~\cite{cai2016unified}.
Specifically, we adjust the number of anchors from $3$ scales and $3$ ratios to $5$ scales and $7$ ratios;
besides, all \layername{conv5} layers are removed to preserve an activation map of high resolution for both RPN and FRCNN.
We choose Faster R-CNN not only for its prevalence and state-of-the-art performance,
but also for its generality: our observations should remain mostly valid when similar techniques are applied
in other CNN-based pedestrian detectors.
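To make the enlarged anchor set concrete, a minimal sketch is given below; the specific scale and ratio values are illustrative assumptions, since only their numbers ($5$ scales and $7$ ratios) are specified above.
\begin{verbatim}
import numpy as np

def make_anchors(base=16,
                 scales=(1, 2, 4, 8, 16),
                 ratios=(0.33, 0.5, 0.67, 1.0, 1.5, 2.0, 3.0)):
    # 5 scales x 7 ratios = 35 anchors per feature-map location.
    anchors = []
    for s in scales:
        area = float(base * s) ** 2
        for r in ratios:              # r = height / width
            w = (area / r) ** 0.5
            h = w * r
            anchors.append([-w / 2, -h / 2, w / 2, h / 2])
    return np.array(anchors)          # (35, 4), centered at origin
\end{verbatim}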
\subsection{Introduction to channel features}
\label{subsec:introtochannel}
In this section, we introduce the channel features we integrated into the CNN-based pedestrian detector.
Based on the type of information they carry, the selected channel features for integration are divided into
three groups: apparent-to-semantic channels, temporal channels and depth channels.
Figure~\ref{fig:ChannelFeatureIntroduction} provides a demonstration of all channels.
\begin{figure}[!tb]
\begin{center}
\includegraphics[width=0.9\linewidth]{figuresv2/dan0.pdf}
\end{center}
\caption{Our Faster R-CNN for channel feature integration, as described in Section~\ref{subsec:integration}.
The side branch takes channel features as input and generates channel feature representations before being concatenated
with \layername{conv4\_3}.}
\label{fig:FasterPipeline}
\vspace{-1em}
\end{figure}
\myparagraph{Apparent-to-semantic channels}
This group of channels includes ICF channel~\cite{dollar2009integral}, edge channel, segmentation channel and heatmap channel.
The information in these channels ranges from low-level apparent to high-level semantic.
The ICF channel is a hand-crafted feature channel composed of
LUV color channels, gradient magnitude channel, and histogram of gradient (HOG) channels,
which has been widely employed in the decision-forest-based detectors~\cite{dollar2014fast,nam2014local,zhang2016far}.
Containing only colors and gradients within a local patch, ICF channel represents the most low-level but
detailed information of an image.
The edge channel is extracted from the second and third layers of HED network~\cite{xie15hed}.
Different from traditional edge detectors such as Canny~\cite{canny1986computational},
the HED framework produces more semantically meaningful edge maps (see Figure~\ref{fig:ChannelFeatureIntroduction}).
The edge channel is thus considered as a mid-level feature channel containing both detailed appearance as well as high-level semantics.
As in~\cite{long2015fully,chen2014semantic}, a fully convolutional network (FCN) is trained on
MS-COCO dataset~\cite{lin2014microsoft} to generate the semantic segmentation channel,
where each pixel represents the probability of the category (\eg, person and street) it belongs to.
The segmentation channel carries higher-level semantic information, while still preserving some detailed appearance
features, \ie, the boundaries between objects of different categories.
However, two closely-laid instances of same category cannot be distinguished from each other in the segmentation
channel without contour of each instance.
Furthermore, to obtain a feature channel with only high-level semantics,
we blur the segmentation channel into the heatmap channel.
By doing so, the clear boundaries between objects of different categories are also removed and
only high-level information of categories remains.
\myparagraph{Temporal channels}
The temporal features (\eg, optical flow~\cite{beauchemin1995computation} and motion~\cite{wang2009evaluation})
have been proved to be beneficial to traditional pedestrian detectors~\cite{walk2010new,park2013exploring} working on videos.
To test their effectiveness in a CNN-based framework, we extract the optical flow channel as a representative, using temporally adjacent frames.
\myparagraph{Depth channels}
With more and more depth sensors employed in intelligent systems such as robotics and automatic driving,
the depth information available in these tasks becomes an alternative extra channel feature to boost detectors.
Instead of using the sparse point clouds captured by laser radars, we turn to DispNet~\cite{mayer2015large}
to reconstruct the disparity channel from stereo images.
\subsection{Integration techniques}
\label{subsec:integration}
We integrate channel features by creating a new shallow side branch alongside the VGG-16 main stream
(see Figure~\ref{fig:FasterPipeline}).
This side branch consists of several convolution layers (with kernel size $3$, padding $1$ and stride $1$) and max
pooling layers (with kernel size $2$ and stride $2$), outputting a $128$-channel activation map of $\rfrac{1}{8}$ input size,
which is further concatenated with activation map \layername{conv4\_3}.
The concatenated activation map is fed into the RPN and FRCNN to perform detection.
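A minimal PyTorch-style sketch of this side branch is shown below; the exact interleaving of the pooling layers is our assumption, constrained only by the stated output of $128$ channels at $\rfrac{1}{8}$ of the input resolution.
\begin{verbatim}
import torch
import torch.nn as nn

class ChannelSideBranch(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2, stride=2),                  # 1/2
            nn.Conv2d(64, 128, 3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2, stride=2),                  # 1/4
            nn.MaxPool2d(2, stride=2))                  # 1/8

    def forward(self, channel_feature, conv4_3):
        side = self.body(channel_feature)
        # Concatenated map is fed into the RPN and FRCNN.
        return torch.cat([conv4_3, side], dim=1)
\end{verbatim}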
We experiment with different compositions of the side branch: the number of convolution layers and the initial weights
(\ie, a random Gaussian kernel, or pretrained weights).
To pretrain the side branch, we train a Faster R-CNN detector that relies entirely on
the side branch, and initialize the side branch with the weights of this network.
\input{src/t-seg-feature-adding}
As summarized in Table~\ref{tab:segfeatureadding}, all integration methods improve the baseline Faster R-CNN detector
on the KITTI validation set for both classes across all three metrics.
Moreover, the model with two extra convolution layers outperforms the model with only one extra convolution layer.
A pretrained side branch does not perform well when further assembled with the VGG-16 network.
When probing the network, we find that the model with pretrained weights tends to ``rely'' more on the side branch
(\ie, the activation map produced by the side branch has much larger magnitude than that of the main stream).
Given that the side branch was pretrained to perform detection independently, this imbalance may
account for the performance degradation.
Based on this analysis, we use two convolution layers with random Gaussian initialization in all subsequent experiments.
\subsection{Comparison and analysis}
\label{subsec:comparison}
We conduct experiments on two input scales
(\emph{1x} and \emph{2x}).
Table~\ref{tab:featurecomparison} summarizes the results.
For a fair comparison, a controlled experiment in which the original image is used as input of the side branch is
also included.
In general, models integrated with extra channel features show improvement over the baseline.
The experiment using the original image as extra input shows no obvious improvement,
which confirms that the performance gain is indeed attributable to the channel feature integration.
Among all channel features, ICF channel shows least contribution to the detection performance in both scales.
We conjecture the reason is that in deep convolutional networks, CNN features are more discriminative than
hand-crafted features like HOG.
Recall the two major challenges for pedestrian detection: hard negative samples and the individual localization.
Through detailed analysis, we demonstrate how CNN-based detectors can benefit from extra channel features
to overcome these problems.
\myparagraph{{\emph 1x} experiments} In \emph{1x} experiments,
channels that carry more semantic information show better performance.
As shown in Table~\ref{tab:featurecomparison}, the segmentation channel and the heatmap channel
bring the most significant improvement to the detector.
In accordance with our previous hypotheses, the detectors utilize the semantic context provided by extra channel features
to discriminate pedestrians of low resolution from hard negative samples.
Table~\ref{tab:recallcomparison1x} provides the recall comparison at a fixed precision rate ($70\%$) between the model
with the segmentation channel and the baseline model for pedestrians of different sizes.
All pedestrians are divided into four groups based on their heights in pixels.
Leading by an absolute $4\%$ recall rate on average, the detector with the segmentation channel performs significantly better
in recall for small pedestrians (less than or equal to $80$ pixels in height).
\input{src/t-feature-comparison}
\myparagraph{{\emph 2x} experiments}
In \emph{2x} experiments, the model with only high-level semantic information but no low-level apparent features
(\ie, the heatmap channel) fails to produce consistent improvement over the baseline model, in contrast with the \emph{1x} experiments.
Nonetheless, channel features with both high-level semantic and low-level apparent information
(the edge channel and the segmentation channel) outperform the other channels.
A possible explanation is that at a large input scale, low-level details
(\eg, edges) become more important for detection.
To further explore this phenomenon, we randomly sampled $\rfrac{1}{4}$ of the images (about $800$)
from the validation set and collected false positive statistics at a $70\%$ recall rate, as shown in Figure~\ref{fig:fpanalysis1}.
In Figure~\ref{fig:fpanalysis2}, we also count the top-200 false positives in the validation set and show the
fraction of each error source.
Besides inhibiting false positives across all categories at a high recall, the edge channel also contributes
significantly to the localization precision.
Integrated with the edge channel, the detector lowers the localization error rate by an absolute $9\%$ and $7\%$ compared with
the baseline and the detector with the heatmap channel, respectively.
This proves that
channel features with low-level apparent cues (\eg, boundaries between individuals and contours of objects)
improve localization precision when the input image is of high resolution.
\begin{figure}[!tb]
\begin{center}
\subfigure[False positive sources at $70\%$ recall rate]{
\centering
\includegraphics[width=0.85\linewidth]{figuresv2/comp1.pdf}
\label{fig:fpanalysis1}
}
\subfigure[Top-200 false positives sources]{
\centering
\includegraphics[width=0.85\linewidth]{figuresv2/comp2.pdf}
\label{fig:fpanalysis2}
}
\end{center}
\caption{False positive analysis for the baseline, edge channel and heatmap channel at \emph{2x} scale.
All false positives are categorized into four types: localization error, background classification error,
cyclist classification error, and annotation error.
A localization error is a non-matched detection bounding box which overlaps with a groundtruth but with IoU~$<0.5$,
while a background error has no overlap with any groundtruth box.
A cyclist error happens when a bounding box matches a cyclist groundtruth.
An annotation error occurs when a detection ``matches'' a {\it de facto} groundtruth which, however, is not annotated.}
\vspace{-1em}
\end{figure}
\begin{figure*}[!tb]
\centering
\includegraphics[width=0.9\linewidth]{figuresv2/dan.pdf}
\caption{The proposed HyperLearner, which consists of 4 components: body network, channel feature network (CFN),
region proposal network (RPN) and Fast R-CNN (FRCNN). HyperLearner learns representations of channel features while
requiring no extra input in inference. Refer to Section~\ref{subsec:hyperlearner} for details.}
\label{fig:hyperlearner}
\end{figure*}
Besides, we witness noticeable improvement at \emph{1x} when optical flow is integrated into the detector;
Park \etal~\cite{park2013exploring} also proved this effectiveness in decision-forest-based detectors with a detailed
analysis.
For the disparity channel, the results are very similar to those of the heatmap channel. To gain insight into this,
note that the relative values in a disparity map also serve as a ``segmentation-like'' channel (see
Figure~\ref{fig:ChannelFeatureIntroduction}), while the
absolute values have only limited effect compared to the deep convolutional features and the predefined anchors.
\section{Jointly learn the channel features}
As observed above, integrating channel features into the network can boost our detector on images of both low
and high resolution.
With these channel features, we can close most of the gap between resolutions without the heavy computational
cost brought by enlarging the input image, and push the state of the art forward.
However, brute-force integration is computationally expensive with respect to the basic Faster R-CNN,
since computing the input channel feature usually incurs extra cost.
Given that many of the channel features come from neural networks (\eg, semantic segmentation and edge),
it is natural to think of ``teaching'' our neural network both channel feature generation and detection.
In the following section, we propose a new network structure, namely HyperLearner, to address this issue
in a multi-task learning manner.
\subsection{HyperLearner}
\label{subsec:hyperlearner}
The HyperLearner framework is illustrated in Figure~\ref{fig:hyperlearner}. As shown, our system consists of four components:
the body network for activation map generation, a channel feature network (CFN), a region proposal network (RPN) and
a Fast R-CNN (FRCNN) network for the final detection task.
From the very left, the entire image is forwarded through multiple convolution layers to generate the hierarchical
activation maps. We first aggregate these activation maps and map them into a uniform space, yielding the aggregated activation map.
Aggregating activation maps from multiple levels has been proved to be useful and important in many computer vision
tasks~\cite{kong2016hypernet,xie15hed} for its ability to collect rich hierarchical representations.
The aggregated map is then fed into the channel feature network (CFN), a feed-forward fully convolutional
network (FCN) for channel feature prediction. Unlike in Faster R-CNN, the RPN and FRCNN do not take only the output of the last
convolution layer (\layername{conv4\_3}) as input.
Instead, the aggregated activation map is also fed into the RPN, as well as the FRCNN. By sharing the same aggregated
activation map, the RPN and FRCNN are able to benefit from the representations the CFN has learned.
\myparagraph{Aggregated activation map}
The body network takes the raw image, of shape $3{\times}H{\times}W$, as its input, and outputs several activation maps.
In our experiments, the body network is a VGG-16~\cite{simonyan2014very} network (without \layername{conv5\_1} to
\layername{conv5\_3}) initialized with the weights pretrained on ImageNet~\cite{krizhevsky2012imagenet}.
We extract the activation maps from layers \layername{conv1\_2}, \layername{conv2\_2}, \layername{conv3\_3} and
\layername{conv4\_3}. Due to the pooling layers in the network, these maps are of different sizes and numbers of channels.
We add two convolution layers after each map and keep their numbers of output channels the same ($32$ in all our experiments).
The higher-level maps are then upsampled to the same size as the first activation map.
Finally, they are concatenated together to form the aggregated activation map.
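The aggregation step can be sketched as follows (a simplified PyTorch rendition; layer hyper-parameters other than the $32$ output channels are our assumptions).
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class AggregationHead(nn.Module):
    def __init__(self, in_channels=(64, 128, 256, 512)):
        super().__init__()
        # Two convolutions per VGG-16 map, each ending in 32 channels.
        self.adapt = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(c, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True))
            for c in in_channels])

    def forward(self, feats):
        # feats: [conv1_2, conv2_2, conv3_3, conv4_3]
        target = feats[0].shape[-2:]
        maps = [F.interpolate(m(f), size=target, mode='bilinear',
                              align_corners=False)
                for m, f in zip(self.adapt, feats)]
        return torch.cat(maps, dim=1)   # 128-channel aggregated map
\end{verbatim}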
\myparagraph{Channel Feature Network (CFN)}
The CFN directly takes the aggregated activation map to generate the predicted channel feature map through
a fully convolutional structure. This map is typically of the same shape as the raw image.
For example, the predicted channel feature may be a semantic segmentation map of several categories,
or an edge detection map like HED Network~\cite{xie15hed}.
\myparagraph{Region Proposal Network (RPN) and Fast-RCNN (FRCNN)}
We build the RPN and FRCNN using the same structure as proposed in~\cite{ren2015faster}. RPN and FRCNN now take both
last convolutional activation map in the VGG16 network (\layername{conv4\_3}) and the aggregated activation map from
the body network as the inputs. The proposals generated by RPN are then fed into FRCNN to perform final detection.
\subsection{Training Details}
\myparagraph{Loss Function}
During the training phase, besides the raw image and groundtruth bounding boxes for the standard Faster R-CNN framework,
the HyperLearner also takes a channel feature map as its supervisor, which is typically generated by another CNN
(\eg, for semantic segmentation or edge detection). To address the channel feature learning, we introduce a new pixel-level loss.
Denote the feature map predicted by the CFN as $C_{x, y}$, and the supervisor map as $S_{x, y}$.
The loss is computed by:
$ \displaystyle \frac{1}{H \times W} \sum_{(x, y)} \ell(S_{x, y}, C_{x, y}), $
where $H$ and $W$ represent the size of the feature map and $\ell$ is a loss function for a single pixel.
For binary probabilistic maps, like the edge map, a class-balanced cross-entropy loss is used, given by:
$ \ell(p, q) = \beta_{x, y} {\big(} -p \log q - (1-p) \log (1-q) {\big)}, $
where $\beta$ is a weight function to balance the positive labels and negative labels.
If $S_{x, y} > 0.5$, $\beta = 1 - |S_{+}|/|S|$; otherwise, $\beta = |S_{+}|/|S|$, where $|S_{+}| = \sum \mathds 1[S_{x, y} > 0.5]$.
For multi-class probabilistic maps, like the segmentation map, the standard cross-entropy loss is used;
for other tasks, the MSE loss is used.
The final loss for the network is thus computed by:
$ \mathcal{L} = \mathcal{L}_{\mathrm{CFN}} + \lambda_1 \mathcal{L}_{\mathrm{RPNcls}} + \lambda_2 \mathcal{L}_{\mathrm{RPNbbox}}
+ \lambda_3 \mathcal{L}_{\mathrm{FRCNNcls}} + \lambda_4 \mathcal{L}_{\mathrm{FRCNNbbox}}$
where the last four components remain the same as in Faster R-CNN~\cite{ren2015faster}. In all our experiments, we set all $\lambda_i = 1$. A sketch of the class-balanced pixel loss is given below.
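For the binary case, the class-balanced pixel loss above can be sketched as follows (assuming \texttt{pred} already holds sigmoid probabilities; the implementation details are ours).
\begin{verbatim}
import torch

def balanced_bce_loss(pred, target, eps=1e-6):
    # beta down-weights whichever label (edge / non-edge)
    # dominates the supervisor map S.
    pos = (target > 0.5).float()
    pos_frac = pos.mean()                       # |S+| / |S|
    beta = pos * (1 - pos_frac) + (1 - pos) * pos_frac
    ce = -(target * torch.log(pred + eps)
           + (1 - target) * torch.log(1 - pred + eps))
    return (beta * ce).mean()                   # mean over H x W
\end{verbatim}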
\myparagraph{Multi-stage training}
The aggregated activation map plays an important role in the framework and must be carefully trained.
We employ a pragmatic multi-stage training method, splitting the whole training process into four stages.
In the first stage, only the CFN is optimized: we fix the parameters of all pretrained convolution layers in the body
network (\layername{conv1\_1} to \layername{conv4\_3}), and drop all RPN and FRCNN layers to train the CFN.
In the second stage, we fix the whole body network (including the convolution layers for aggregating activation maps)
and the CFN, and train only the RPN.
In the third stage, the body network, CFN and RPN are all fixed; only the FRCNN component is optimized.
In the final stage, all layers are jointly optimized (a sketch of the schedule is given below).
Across all stages, in the optimization of the FRCNN, we treat region proposal coordinates from the RPN as fixed values
and do not back-propagate gradients through them.
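The schedule can be implemented by toggling gradients per stage, as in the sketch below; the module names are illustrative, not taken from released code.
\begin{verbatim}
MODULES = ['pretrained', 'agg', 'cfn', 'rpn', 'frcnn']
STAGES = {1: {'agg', 'cfn'},   # pretrained convs frozen, no RPN/FRCNN
          2: {'rpn'},          # body (incl. agg) and CFN frozen
          3: {'frcnn'},
          4: set(MODULES)}     # joint optimization

def set_stage(model, stage):
    for name in MODULES:
        flag = name in STAGES[stage]
        for p in getattr(model, name).parameters():
            p.requires_grad = flag
\end{verbatim}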
\section{Experiments and results}
The performance of HyperLearner is evaluated across multiple pedestrian datasets: KITTI~\cite{Geiger2012CVPR}, Caltech Pedestrian~\cite{dollar2009pedestrian}, and Cityscapes~\cite{Cordts2016Cityscapes}. These cover most of the popular benchmarks for the pedestrian detection task.
One may also notice that our body network is an implementation of the HyperNet proposed in~\cite{kong2016hypernet}.
Thus, we implement a control experiment in which the CFN is removed, yielding a typical HyperNet setting.
That is, the body network keeps its side branches for aggregated activation map, but it does not learn from any extra
supervision.
\subsection{KITTI Dataset}
We evaluated the performance of HyperLearner with two kinds of feature supervision: edge and semantic segmentation.
These two kinds of channel features have proved effective when directly integrated into the Faster R-CNN
framework (see Section~\ref{subsec:integration}).
The results on the KITTI validation set are shown in Table~\ref{tab:kittivalidation}.
\begin{table}[!tb]
\begin{center}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{l || a c c | a c c }
\multirow{2}{*}{Model} & \multicolumn{3}{c|}{1x input} & \multicolumn{3}{c}{2x input} \\ \cline{2-7}
& \multicolumn{1}{c}{Mod} & Easy & Hard & \multicolumn{1}{c}{Mod} & Easy & Hard \\
\hline
\hline
Fr-RCNN*~\cite{ren2015faster} & 59.29 & 64.53 & 53.01 & 71.05 & 76.00 & 62.08 \\
MS-CNN~\cite{cai2016unified} & 68.37 & 73.70 & 60.72 & 72.26 & 76.38 & \bfred{64.08} \\
\hline
Baseline & 69.80 & 74.37 & 61.20 & 71.73 & 77.84 & 62.30 \\
HyperNet & 69.72 & 76.91 & 61.10 & 72.23 & 77.96 & 63.43 \\
+Segmentation & 71.15 & 79.43 & \bfred{62.34} & 72.35 & \bfred{79.17} & 62.34 \\
+Edge & \bfred{71.25} & \bfred{78.43} & 62.15 & \bfred{72.51} & 78.51 & 63.24
\end{tabular}
\end{center}
\caption[]{Results on the KITTI validation set; the model HyperNet refers to HyperLearner without the CFN.
Evaluation follows the moderate metric of KITTI. \\ *: Fr-RCNN follows the settings of \cite{ren2015faster}, while the baseline model is Faster R-CNN with slightly different
parameters. See also Table~\ref{tab:featurecomparison}.}
\label{tab:kittivalidation}
\end{table}
In experiments on the \emph{1x} scale, we notice a clear performance improvement when our HyperLearner jointly learns from an
edge detection network or a semantic segmentation network, compared to the Faster R-CNN baseline and the
HyperNet. The quantitative analysis is consistent with the experiments in Section~\ref{subsec:integration},
where we directly integrate these features as an extra input to the network through a branch network.
In experiments on the \emph{2x} scale, both HyperLearner and HyperNet achieve clear improvements. Based on the earlier analysis,
when the input image is of high resolution, introducing channel features with low-level details can benefit
the detector.
In the HyperNet setting, the side branches of the body network act as a multi-level feature extractor,
and therefore this kind of improvement is expected.
As a transfer learning application, HyperLearner successfully boosts a CNN-based detector using features learned by
other networks with different architectures, trained for other tasks.
From another perspective, HyperLearner
offers an alternative way to perform feature learning in such CNNs and shows noticeable improvement.
Based on the results in Table~\ref{tab:kittivalidation}~and~\ref{tab:cityscapesvalidation}, it is safe to conclude that
HyperLearner actually utilizes the extra supervision from channel
features to generate a better hyper-feature extractor, especially for the detection task.
\subsection{Cityscapes dataset}
The Cityscapes dataset~\cite{Cordts2016Cityscapes}, is a large-scale dataset for semantic urban segmentation which contains a diverse set of stereo video recordings from 50 cities. It consists of $2,975$ training and $500$ validation images with fine annotations, as well as another $20,000$ training images with coarse annotations. The experiments are conducted on the fine-annotated images. Compared with former standard datasets, Cityscapes possesses meticulous detection labeling (pixel-level), as well as fine semantic segmentation labeling.
As mentioned, the Cityscapes dataset provides pixel-level semantic segmentation labeling, so instead of using segmentation model pretrained on MS-COCO dataset, we directly address the multi-task learning by employing pixel-level segmentation labels as supervisor (\ie, our HyperLearner jointly learns pedestrian detection and semantic segmentation).
During training, we only use segmentation labels for ``person''.
As shown in Table~\ref{tab:cityscapesvalidation}, we also witness significant improvement over the Faster R-CNN baseline
and HyperNet.
\subsection{Caltech dataset}
The Caltech dataset~\cite{dollar2009pedestrian} is also a commonly used dataset for pedestrian detection evaluation. It consists of 2.5 hours of 30Hz VGA video recorded from a vehicle traversing the streets of Los Angeles, USA. Detection results are evaluated on a test set consisting of 4024 frames.
Zhang \etal~\cite{zhang2016far} conducted a detailed survey and provided a refined groundtruth labeling for the Caltech dataset. Our experiments are based entirely on this new labeling (both training and testing).
HyperLearner achieves state-of-the-art performance on the test set. Figure~\ref{fig:caltech} shows the detailed comparison of HyperLearner, the Faster R-CNN baseline and other methods.
\section{Summary}
In this paper, we integrated channel features into CNN-based pedestrian detectors: specifically, the ICF channel, edge
channel, segmentation channel and heatmap channel (apparent-to-semantic channels); the optical flow channel (temporal channel);
and the disparity channel (depth channel).
Our quantitative experiments show semantic channel features can help detectors discriminate hard positive samples and
negative samples at low resolution, while apparent channel features inhibit false positives of backgrounds and improve
localization accuracy at high resolution.
To address the issue of computational cost, we propose a novel framework, namely HyperLearner, to jointly learn channel
features and pedestrian detection. HyperLearner is able to learn the representation of channel features while requiring
no extra input in inference, and provides significant improvement on several datasets. From another point of
view, HyperLearner offers an alternative way to perform feature learning in HyperNet-like CNNs in a transfer
learning manner.
\begin{table}[!tb]
\begin{center}
\setlength{\tabcolsep}{3.5pt}
\begin{tabular}{l || c c | c c | c c}
\multirow{2}{*}{Model} & \multicolumn{2}{c|}{540p input} & \multicolumn{2}{c|}{720p input}
& \multicolumn{2}{c}{Improvement} \\ \cline{2-7}
& Speed & AP & Speed & AP & 540p & 720p \\
\hline
\hline
Baseline & 130ms & 74.97 & 240ms & 86.89 & - & - \\
HyperNet & 140ms & 74.30 & 250ms & 86.67 & -0.53 & -0.22 \\
Jointsegmap & 140ms & \bfred{77.22} & 250ms & \bfred{87.67} & \bfred{+2.25} & \bfred{+0.78}
\end{tabular}
\end{center}
\caption{Results on the Cityscapes validation set. The speed column shows the time each model needed to
perform detection on a single image. The speed is tested on single NVIDIA TITAN-X GPU. We use all segmentation
polygons labeled ``person'' to generate bounding boxes for the pedestrian detection task. Following the standard
in Caltech dataset~\cite{dollar2009pedestrian}, all persons with (pixel-level) occlusion greater than 0.5 or of
height less than $50$ pixels are ignored. Furthermore, all polygons labeled ``cyclist'' or ``person group'' are
also ignored.
}
\label{tab:cityscapesvalidation}
\vspace{-0.5em}
\end{table}
\begin{figure}[!tb]
\begin{center}
\includegraphics[width=0.95\linewidth]{figures/demo.pdf}
\end{center}
\caption{Results of HyperLearner on Cityscapes validation set. The left column shows our detection result,
while the right column demonstrates the CFN's output learned from segmentation labeling.}
\label{fig:democityscapes}
\vspace{-1em}
\end{figure}
\begin{figure}[!tb]
\begin{center}
\includegraphics[width=0.9\linewidth]{figures/caltech-final2.pdf}
\end{center}
\caption{Detection quality on Caltech test set (reasonable, $\mathit{MR}_{-2}^N(\mathit{MR}_{-4}^N)$),
evaluated on the new annotations~\cite{zhang2016far}. We achieve state-of-the-art
results on both evaluation metrics.
}
\label{fig:caltech}
\vspace{-1em}
\end{figure}
{\small
\bibliographystyle{ieee}
|
1,116,691,500,696 | arxiv | \section{Introduction}
The study of first order algebraic ordinary differential equations (AODEs for short) has a long history, which can be traced back at least to the time of Fuchs and Poincar\'e. Fuchs presented a necessary and sufficient condition, the so-called Fuchs criterion, for a first order AODE to have no movable singularities. Roughly speaking, an AODE is said to have movable singularities if it has a solution (with arbitrary constants) whose branch points depend on the arbitrary constants. For instance, the solution $y=\sqrt{t+c}$ of $2yy'-1=0$ has branch points $t=-c$, where $c$ is an arbitrary constant, so $2yy'-1=0$ has movable singularities.
Based on the differential algebra developed by Ritt \cite{ritt} and the theory of algebraic function fields of one variable, Matsuda \cite{matsuda} reproduced many classic results on first order AODEs. In particular, he presented an algebraic definition of movable singularities. In 1999, combining Matsuda's results with height estimates of points on plane algebraic curves, Eremenko \cite{eremenko} showed that rational solutions of first order AODEs have bounded degrees. In \cite{feng-feng}, we proved that if a first order AODE has movable singularities then it has only finitely many rational solutions. As for algebraic solutions of first order AODEs, Freitag and Moosa \cite{freitag-moosa} showed that they are of bounded height.
On the other hand, the algorithmic aspects of computing closed form solutions of AODEs have been extensively studied in the past decades. Several algorithms have been developed for computing closed form solutions (e.g. Liouvillian solutions) of linear homogeneous differential equations (see \cite{barkatou,kovacic,vanhoeij-ragot-ulmer-weil,singer1,vanderput-singer} etc). Yet the situation is different in the nonlinear case, where existing algorithms are only valid for AODEs of special types. Based on the parametrization of algebraic curves, Aroca et al. \cite{aroca-cano-feng-gao,feng-gao} gave two complete methods for finding rational and algebraic solutions of first order autonomous AODEs. Their methods were generalized by Winkler and his colleagues to the class of first order non-autonomous AODEs whose rational general solutions involve arbitrary constants rationally, as well as certain other classes of AODEs (see \cite{vo-grasegger-winkler1,vo-grasegger-winkler2,chau-winkler, winkler} etc). In particular, in \cite{vo-grasegger-winkler1}, the authors introduced a class of first order AODEs called maximally comparable AODEs and presented an algorithm to compute a degree bound for rational solutions of this kind of equations as well as of first order quasi-linear AODEs. Readers are referred to \cite{winkler} for a survey of recent developments in this direction.
Theoretically, it suffices to compute a degree bound for all rational solutions of a first order AODE to find all its rational solutions. The following example implies that the degrees of rational solutions may depend not only on the degrees of the original equation but also on its constant coefficients.
\begin{example}
Let $n$ be an integer. Then $y=t^n$ is a rational solution of $ty'-ny=0$. The degree of $t^n$ depends on the constant coefficient $n$ of $ty'-ny$.
\end{example}
Let $f=\sum_{i=0}^d a_i(t,y)y'^i=0$ be an irreducible first order AODE. Set
\begin{equation*}
\label{eqn:index}
{\rm m.s.index}(f)=\max_{i=0}^d \{ \deg(a_i,y)-2(d-i)\}.
\end{equation*}
Fuchs' criterion (see the Remark on page 14 of \cite{matsuda}) implies that $f=0$ has movable singularities if ${\rm m.s.index}(f)>0$. On the other hand, it was proved in \cite{eremenko} that if $f=0$ has movable singularities then it can be transformed into an AODE $g$ with positive ${\rm m.s.index}$. This motivates us to focus on first order AODEs with positive ${\rm m.s.index}$.
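As a quick illustration (a computation of ours): for the equation $2yy'-1=0$ above, one has $d=1$, $a_1=2y$ and $a_0=-1$, so that
$$
{\rm m.s.index}(2yy'-1)=\max\{1-2(1-1),\,0-2(1-0)\}=1>0,
$$
in accordance with the movable branch points of its solutions $y=\sqrt{t+c}$.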
We prove that for an irreducible first order AODE $f=0$ with ${\rm m.s.index}(f)>0$ the degrees of rational solutions of $f=0$ are independent of the constant coefficients of $f$ and furthermore we present an explicit degree bound in terms of the degrees of $f$. The key step to obtain this degree bound is to estimate the heights of points on plane algebraic curves. This height estimate is a special case of the result about heights on complete nonsingular varieties (see for instance Proposition 3 on page 89 of \cite{lang}). Eremenko in \cite{eremenko} provided a simple proof for this special case based on the Riemann-Roch Theorem. We follow Eremenko's proof but present explicit bounds for each step.
The paper is organized as follows. In Section 2, we introduce some basic materials used in the later sections. In Section 3, we estimate the degrees and heights of elements in a Riemann-Roch space. In Section 4, we present an explicit bound for the heights of points on a plane algebraic curve. Finally, in Section 5, we apply the results of Section 4 to first order AODEs.
Throughout this paper, $\Z$ stands for the ring of integers, $k, K$ for algebraically closed fields of characteristic zero, and $R$ and ${\mathcal R}$ for algebraic function fields over $k$ and $K$ respectively. $\bP^m(\cdot)$ denotes the projective space of dimension $m$ over a field and $\bV(\cdot)$ denotes the variety in a projective space defined by a set of homogeneous polynomials.
\section{Basic materials}
In this section, we will introduce some basic materials used in this paper, including differential rings, algebraic function fields of one variable and heights. Readers are referred to \cite{ritt, matsuda, chevalley, lang} for details.
\subsection{Differential fields associated to AODEs}
In this subsection, we introduce some basic notations of differential algebra.
\begin{define}
A derivation on a ring ${\mathcal R}$ is a map $\delta: {\mathcal R} \rightarrow {\mathcal R}$ satisfying that for all $a,b\in {\mathcal R}$,
$$
\delta(a+b)=\delta(a)+\delta(b),\,\,\delta(ab)=\delta(a)b+a\delta(b).
$$
A ring (resp. field) equipped with a derivation is called a differential ring (resp. differential field). An ideal $I\subset {\mathcal R}$ is called a differential ideal if $\delta(I)\subset I$.
\end{define}
The field $k(t)$ of rational functions in $t$ can be endowed with a structure of differential field whose derivation $\delta$ is the usual derivation with respect to $t$, i.e. $\delta=\frac{\rm d}{{\rm d} t}$. Set $y_0=y$ and denote
$$k(t)\{y\}=k(t)[y_0,y_1,\dots]$$
where $y_0,y_1,\dots$ are indeterminates. One can extend the derivation $\delta$ on $k(t)$ to a derivation $\delta'$ on $k(t)\{y\}$ by assigning $y_i=\delta'^i(y_0)$, so that $k(t)\{y\}$ becomes a differential ring. For simplicity of notation, we use $\delta$ in place of $\delta'$. Elements of $k(t)\{y\}$ are called differential polynomials over $k(t)$. Let $f$ be a differential polynomial not in $k(t)$. Then there is a unique integer $d$ such that
$f\in k(t)[y_0,\dots,y_d]\setminus k(t)[y_0,\dots,y_{d-1}]$. This integer is called the order of $f$. We shall use $[\cdot]$ (resp. $\langle \cdot \rangle$) to stand for the differential (resp. algebraic) ideal generated by a set of differential polynomials (resp. polynomials). Suppose that $f$ is irreducible viewed as an algebraic polynomial. Set
$$
\Sigma_f=\left\{ A\in k(t)\{y\} | \,\exists\, m>0\, \,\mbox{s.t.}\, S^m A^m \in [f] \right\}
$$
where $S=\partial f/\partial y_d$ and $d$ is the order of $f$. It was proved on page 30 of \cite{ritt} that $\Sigma_f$ is a prime differential ideal and so $k(t)\{y\}/\Sigma_f$ is a differential domain.
Lemma 2.2 of \cite{feng-feng} implies that the field of fractions of $k(t)\{y\}/\Sigma_f$ is isomorphic to that of $k(t)[y_0,\dots,y_d]/\langle f \rangle$. Under this isomorphism, the field of fractions of $k(t)[y_0,\dots,y_d]/\langle f\rangle$ can be endowed with a structure of differential field. We shall still use $\delta$, or $'$ in short, to denote the induced derivation on the field of fractions of $k(t)[y_0,\dots,y_d]/\langle f \rangle$.
In this paper, the first order AODEs under consideration are differential equations of the following form
\begin{equation}
\label{eq:differentialeqn}
f(y,y')=0
\end{equation}
where $f(y,y')\in k(t)[y,y']\setminus k(t)$.
\begin{define}
An element $r(t)\in k(t)$ satisfying
$f(r(t),r'(t))=0$ is called a rational solution of $f(y,y')=0$.
\end{define}
Remark that the derivation $\delta$ in $k(t)$ can be uniquely extended to a derivation in ${\overline{k(t)}}$ which we shall still denote by $\delta$. Assume that viewed as a polynomial in ${\overline{k(t)}}[y,y']$, $f$ is irreducible over $\overline{k(t)}$. Then the field of fractions of ${\overline{k(t)}}[y,y']/\langle f(y,y') \rangle$ is not only an algebraic function field over ${\overline{k(t)}}$ but also a differential field.
\subsection{Algebraic function fields of one variable}
Let $K$ be an algebraically closed field of characteristic zero and ${\mathcal R}$ an extension field of $K$.
We say ${\mathcal R}$ is an {\em algebraic function field of one variable} over $K$ if ${\mathcal R}$ satisfies the following conditions: there is an element $a$ of ${\mathcal R}$ which is transcendental over $K$, and ${\mathcal R}$ is algebraic of finite degree over $K(a)$. Assume ${\mathcal R}$ is an algebraic function field of one variable over $K$. A {\em valuation ring} of ${\mathcal R}$ over $K$ is a subring $V$ satisfying that
\begin{enumerate}
\item $K\subset V\neq {\mathcal R}$; and
\item if $a\in {\mathcal R} \setminus V$, then $a^{-1}\in V$.
\end{enumerate}
All non-invertible elements of $V$ form a maximal ideal ${\mathfrak P}$ which is called a place of ${\mathcal R}$, and $V$ is called the corresponding valuation ring of ${\mathfrak P}$. Let $V$ be a valuation ring with ${\mathfrak P}$ as place. There is an element $u\in V$, called a {\em local uniformizer} of ${\mathfrak P}$ or $V$, such that ${\mathfrak P}=uV$ and $\bigcap_{n=1}^\infty u^n V=\{0\}$. The factor ring $V/{\mathfrak P}$ is equal to $K$ since $K$ is algebraically closed. For every valuation ring $V$ with place ${\mathfrak P}$, we define a map
\[
\pi_{{\mathfrak P}}: {\mathcal R} \longrightarrow K\cup \{\infty\}
\]
satisfying that if $a\in V$ then $\pi_{{\mathfrak P}}(a)=a+{\mathfrak P}\in V/{\mathfrak P}=K$, and otherwise $\pi_{{\mathfrak P}}(a)=\infty$. It is well-known that ${\mathcal R}$ admits infinitely many places, and there is a one-to-one correspondence between places and valuation rings.
Let ${\mathfrak P}$ be a place of ${\mathcal R}$ and $V$ the corresponding valuation ring of ${\mathfrak P}$. Let $u$ be a local uniformizer of ${\mathfrak P}$. Then for every non-zero element $a$ of ${\mathcal R}$, there is a unique integer $n$ such that
\[
a=u^nv
\]
for some invertible element $v\in V$. It is easy to see that the integer $n$ is independent of the choice of local uniformizers.
Such $n$ is called the {\em order} of $a$ at ${\mathfrak P}$ and denoted by $\nu_{{\mathfrak P}}(a)$. We make the convention to write $\nu_{{\mathfrak P}}(0)=\infty$. Then the place ${\mathfrak P}$ induces a map $\nu_{{\mathfrak P}}$ from ${\mathcal R}$ to $\Z$ sending $a$ to $\nu_{{\mathfrak P}}(a)$. This map $\nu_{{\mathfrak P}}$ is called the {\em order function} at ${\mathfrak P}$. For $a,b\in {\mathcal R}$, we have
$$
\nu_{{\mathfrak P}}(ab)=\nu_{{\mathfrak P}}(a)+\nu_{{\mathfrak P}}(b),\,\,\nu_{{\mathfrak P}}(a+b)\geq \min\{\nu_{{\mathfrak P}}(a),\nu_{{\mathfrak P}}(b)\}
$$
where the equality in the latter formula holds if $\nu_{{\mathfrak P}}(a)\neq \nu_{{\mathfrak P}}(b)$. Let $a\in {\mathcal R}$ and ${\mathfrak P}$ be a place. We say ${\mathfrak P}$ is a {\em zero} of $a$ if $\nu_{{\mathfrak P}}(a)>0$, and a {\em pole} of $a$ if $\nu_{{\mathfrak P}}(a)<0$. Every non-zero element of ${\mathcal R}$ admits only finitely many zeros and poles.
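As a simple illustration (an example of ours): let ${\mathcal R}=K(x)$ and $a=(x-1)^2/x^3$. At the place with local uniformizer $x-1$ we have $\nu_{{\mathfrak P}}(a)=2$, so this place is a zero of $a$; at the place with local uniformizer $x$ we have $\nu_{{\mathfrak P}}(a)=-3$, a pole; at the infinite place with local uniformizer $u=1/x$, writing $a=u(1-u)^2$ shows that $\nu_{{\mathfrak P}}(a)=1$; at every other place $\nu_{{\mathfrak P}}(a)=0$. Note that the orders sum to $0$.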
A {\em divisor} in ${\mathcal R}$ is a formal sum
$$
D=\sum_{{\mathfrak P}} n_{{\mathfrak P}} {\mathfrak P}
$$
for all the places of ${\mathcal R}$, where $n_{{\mathfrak P}} \in \Z$ and $n_{{\mathfrak P}}=0$ for all but finitely many ${\mathfrak P}$. It is easy to see that the set of divisors in ${\mathcal R}$ forms an abelian group.
$D$ is {\em effective} if $n_{{\mathfrak P}}\geq 0$ for all ${\mathfrak P}$.
The {\em degree} of $D$, denoted by $\deg(D)$, is defined to be $\sum n_{{\mathfrak P}}$ and the {\em support} of $D$, denoted by ${\rm supp}(D)$, is defined to be $\{{\mathfrak P} \,|\, n_{{\mathfrak P}}\neq 0\}$. For brief, we denote
$$
D^{+}=\sum_{n_{\mathfrak P}>0}n_{{\mathfrak P}} {\mathfrak P},\quad D^{-}=\sum_{n_{{\mathfrak P}}<0}-n_{{\mathfrak P}} {\mathfrak P}.
$$
Let $D_1=\sum_{{\mathfrak P}} n_{{\mathfrak P}} {\mathfrak P}$ and $D_2=\sum_{{\mathfrak P}} m_{{\mathfrak P}} {\mathfrak P}$ be two divisors in ${\mathcal R}$, we write $D_1\geq D_2 $ provided $D_1-D_2$ is effective. For every non-zero element $a$ of ${\mathcal R}$, we denote
$$
{\rm div}(a)=\sum_{{\mathfrak P}} \nu_{{\mathfrak P}}(a) {\mathfrak P}
$$
where ${\mathfrak P}$ ranges over all places of ${\mathcal R}$. Then ${\rm div}(a)$ is a divisor of degree $0$. For a divisor $D$, we denote
$$
{\mathfrak L}(D)=\{a\in {\mathcal R}\mid {\rm div}(a)+D\geq 0\}\cup \{0\},
$$
which is called the Riemann-Roch space of $D$.
It is well-known that each Riemann-Roch space is a $K$-vector space of finite dimension.
The Riemann-Roch Theorem implies that if $D$ is a divisor whose degree is not less than the genus of ${\mathcal R}$ then ${\mathfrak L}(D)$ is of positive dimension.
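To fix ideas (an example of ours): let ${\mathcal R}=K(x)$, which has genus $0$, and let ${\mathfrak P}_\infty$ be the place with local uniformizer $1/x$. For the divisor $D=n{\mathfrak P}_\infty$ with $n\geq 0$, an element of ${\mathfrak L}(D)$ may have a pole only at ${\mathfrak P}_\infty$, of order at most $n$; hence ${\mathfrak L}(D)$ consists of the polynomials in $x$ of degree at most $n$ and has dimension $n+1$.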
Let $f\in K[x_0,x_1]\setminus K$ be irreducible. One sees that the field of fractions of $K[x_0,x_1]/\langle f \rangle$ is an algebraic function field of one variable over $K$ which is called the algebraic function field of $f$. For an irreducible homogeneous polynomial $F$ in $K[x_0,x_1,x_2]$, the corresponding algebraic function field is defined to be the algebraic function field of $F(x_0,x_1,1)$. Remark that the algebraic function fields of $F(1,x_1,x_2), F(x_0,1,x_2)$ and $F(x_0,x_1,1)$ are all isomorphic.
\subsection{Models of algebraic function fields of one variable}
Let ${\mathcal R}$ be an algebraic function field of one variable over $K$. The set of all places of ${\mathcal R}$ can be viewed as a nonsingular model of ${\mathcal R}$. On the other hand, let $F$ be an irreducible homogeneous polynomial $F\in K[x_0,x_1,x_2]$ whose algebraic function field is ${\mathcal R}$. Then the projective curve $F=0$ is another model of ${\mathcal R}$. There is a surjective map from a nonsingular model of ${\mathcal R}$ to the curve $F=0$. To describe this map precisely, let $\xi_0,\xi_1,\xi_2$ be three nonzero elements of ${\mathcal R}$ satisfying that
$${\mathcal R}=K(\xi_0/\xi_2,\xi_1/\xi_2)\,\,\mbox{and}\, \,F(\xi_0,\xi_1,\xi_2)=0.$$
Set ${\bm \xi}=(\xi_0,\xi_1,\xi_2)$. Let ${\mathfrak P}$ be a place of ${\mathcal R}$ with $u$ as a local uniformizer. Set $\ell=\min_{i} \{\nu_{{\mathfrak P}}(\xi_i)\}$. One sees that $\nu_{{\mathfrak P}}(u^{-\ell}\xi_i)\geq 0$ and moreover not all $\pi_{{\mathfrak P}}(u^{-\ell}\xi_i)$ are zero. Therefore $(\pi_{{\mathfrak P}}(u^{-\ell}\xi_0), \pi_{{\mathfrak P}}(u^{-\ell}\xi_1),\pi_{{\mathfrak P}}(u^{-\ell}\xi_2))$ defines a point of $\bP^2(K)$. Remark that this point does not depend on the choice of $u$.
\begin{define}
We call $(\pi_{{\mathfrak P}}(u^{-\ell}\xi_0), \pi_{{\mathfrak P}}(u^{-\ell}\xi_1),\pi_{{\mathfrak P}}(u^{-\ell}\xi_2))$ the center of ${\mathfrak P}$ with respect to ${\bm \xi}$. Denote by ${\mathcal C}({\bm \xi})$ the set of centers with respect to ${\bm \xi}$.
\end{define}
We claim that ${\mathcal C}({\bm \xi})$ is the plane projective curve in $\bP^2(K)$ defined by $F$ and the map sending ${\mathfrak P}$ to the center of ${\mathfrak P}$ with respect to ${\bm \xi}$ is the required map. It is easy to verify that $F({\bf c})=0$ for all ${\bf c}\in {\mathcal C}({\bm \xi})$. Conversely, let $(c_0,c_1,c_2)$ be a point of $F=0$. Without loss of generality, we may assume that $c_0\neq 0$. Then $F(1,c_1/c_0,c_2/c_0)=0$. Remark that ${\mathcal R}=K(\xi_1/\xi_0,\xi_2/\xi_0)$. As $F(1,\xi_1/\xi_0,\xi_2/\xi_0)=0$, due to Corollary 2 on page 8 of \cite{chevalley}, there is a place ${\mathfrak P}$ containing $\xi_1/\xi_0-c_1/c_0$ and $\xi_2/\xi_0-c_2/c_0$. For this place, one has that
$\nu_{{\mathfrak P}}(\xi_1)\geq \nu_{{\mathfrak P}}(\xi_0), \nu_{{\mathfrak P}}(\xi_2)\geq \nu_{{\mathfrak P}}(\xi_0) $ and furthermore $\pi_{{\mathfrak P}}(\xi_i/\xi_0)=c_i/c_0$. Write $\ell=\nu_{{\mathfrak P}}(\xi_0)$. Then the center of ${\mathfrak P}$ with respect to ${\bm \xi}$ is
\begin{align*}
(\pi_{{\mathfrak P}}(u^{-\ell}\xi_0),\pi_{{\mathfrak P}}(u^{-\ell}\xi_1),\pi_{{\mathfrak P}}(u^{-\ell}\xi_2))=\pi_{{\mathfrak P}}(u^{-\ell}\xi_0)(1,c_1/c_0,c_2/c_0).
\end{align*}
This implies that $(c_0,c_1,c_2)\in {\mathcal C}({\bm \xi})$.
\begin{define}
We call ${\mathcal C}({\bm \xi})$ or $F=0$ a plane projective model of ${\mathcal R}$.
\end{define}
The plane projective models of ${\mathcal R}$ usually have singularities. Let ${\mathcal C}$ be an irreducible projective curve in $\bP^2(K)$ defined by a homogeneous polynomial $F$. A point ${\bf c}$ of ${\mathcal C}$ is said to be of multiplicity $r$ if all derivatives of $F$ up to and including the $(r-1)$-th vanish at ${\bf c}$ but not all the $r$-th derivatives vanish at ${\bf c}$. Suppose that ${\bf c}$ is a point of ${\mathcal C}$ with multiplicity $r$. If $r=1$, then ${\bf c}$ is called a simple point of ${\mathcal C}$, otherwise a singular point of ${\mathcal C}$. A point of multiplicity $r$ is ordinary if the $r$ tangents to ${\mathcal C}$ at this point are distinct, otherwise it is non-ordinary. Due to a proposition in \cite{fulton}, ${\mathcal R}$ always has a plane projective model with only ordinary singularities.
Let $\Phi=(\phi_0,\phi_1,\phi_2)$ be an invertible transformation, where $\phi_0,\phi_1,\phi_2$ are homogeneous polynomials in $K[x_0,x_1,x_2]$ of the same degree and they have no common factors. We further assume that $\phi_i({\bm \xi})\neq 0$ for all $i=0,1,2$. Then
$${\mathcal R}=K\left(\frac{\phi_0({\bm \xi})}{\phi_2({\bm \xi})},\frac{\phi_1({\bm \xi})}{\phi_2({\bm \xi})} \right).$$
\begin{prop}
\label{prop:centertransformation}
Let $\Phi,{\bm \xi}$ be as above and ${\mathfrak P}$ a place of ${\mathcal R}$. Assume that ${\bf c}$ is the center of ${\mathfrak P}$ with respect to ${\bm \xi}$. If $\Phi({\bf c})\neq (0,0,0)$, then $\Phi({\bf c})$ is the center of ${\mathfrak P}$ with respect to $\Phi({\bm \xi})$.
\end{prop}
\begin{proof}
Let $u$ be a local uniformizer of ${\mathfrak P}$ and $\ell=\min_{i=0}^2 \{\nu_{{\mathfrak P}}(\xi_i)\}$.
One has that
$$(\pi_{{\mathfrak P}}(u^{-\ell}\xi_0),\pi_{{\mathfrak P}}(u^{-\ell}\xi_1),\pi_{{\mathfrak P}}(u^{-\ell}\xi_2))=\lambda {\bf c}$$
for some nonzero $\lambda\in K$. Denote $m={\rm tdeg}(\phi_i)$. Then
$$\pi_{{\mathfrak P}}(\Phi(u^{-\ell}{\bm \xi}))=\Phi(\pi_{\mathfrak P}(u^{-\ell}{\bm \xi}))=\Phi(\lambda {\bf c})=\lambda^m \Phi({\bf c})\neq (0,0,0).$$
This implies that
$\min_{i=0}^2 \{\nu_{{\mathfrak P}}(\phi_i(u^{-\ell}{\bm \xi}))\}=0.$
In other words,
$$
\min_{i=0}^2 \{ \nu_{{\mathfrak P}}(\phi_i({\bm \xi}))\}=m\ell.
$$
Then the center of ${\mathfrak P}$ with respect to $\Phi({\bm \xi})$ is
$$
\pi_{{\mathfrak P}}(u^{-m\ell}\Phi({\bm \xi}))=\pi_{{\mathfrak P}}(\Phi(u^{-\ell}{\bm \xi}))=\Phi(\lambda {\bf c})=\lambda^m \Phi({\bf c}).
$$
Hence $\Phi({\bf c})$ is the center of ${\mathfrak P}$ with respect to $\Phi({\bm \xi})$.
\end{proof}
\subsection{Heights}
All algebraic function fields under consideration in this subsection are finite extensions of $k(t)$. They are algebraic function fields of one variable over $k$ and the places and order functions in them are defined as the same as in the previous subsection.
\begin{define}
\label{def:height}
All points are considered as points in some suitable projective spaces over ${\overline{k(t)}}$.
\begin{enumerate}
\item
Given ${\bf a}=(a_0,\dots,a_m)\in \bP^m({\overline{k(t)}})$, let $R$ be a finite extension of $k(t)$ containing all $a_i$. We define the {\em height} of ${\bf a}$, denoted by $T({\bf a})$, to be
$$
\frac{\sum_{{\mathfrak p}} \max_{i=0}^m\{-\nu_{{\mathfrak p}}(a_i)\}}{[R:k(t)]}
$$
where ${\mathfrak p}$ ranges over all places of $R$.
\item For $A=(a_{i,j})\in {\rm GL}_3({\overline{k(t)}})$, we define
$$
T(A)=T((a_{1,1},a_{1,2}, a_{1,3},\dots,a_{3,3})).
$$
\item For $a\in {\overline{k(t)}}$, we define the height of $a$ to be $T((1,a))$, denoted by $T(a)$.
\item Let $F$ be a polynomial in ${\overline{k(t)}}[x_0,\dots,x_m]$. Suppose that $F$ contains at least two terms. We define the height of $F$, denoted by $T(F)$, to be $T({\bf c})$ where ${\bf c}$ is the point in a suitable projective space formed by the coefficients of $F$. By convention, when $F$ contains only one term, we define $T(F)$ to be zero.
\item Let $V$ be a hypersurface in $\bP^m({\overline{k(t)}})$ defined by $F\in {\overline{k(t)}}[x_0,\dots,x_m]$. We define the height of $V$, denoted by $T(V)$, to be $T(F)$.
\end{enumerate}
\end{define}
\begin{remark}
\label{rem:heights}
Assume that ${\bf a}=(a_0,\dots,a_m), {\bf b}=(b_0,\dots,b_m)\in \bP^m({\overline{k(t)}})$.
\begin{enumerate}
\item
One sees that $T({\bf a})$ is independent of the choice of homogeneous coordinates and the choice of $R$. Without loss of generality, we suppose $a_0=1$, then
$$
T({\bf a})=\frac{\sum_{{\mathfrak p}} \max\{0,-\nu_{\mathfrak p}(a_1),\dots,-\nu_{\mathfrak p}(a_m)\}}{[R:k(t)]}\geq 0.
$$
\item Assume $R$ is a finite extension of $k(t)$ containing all $a_i$ and $b_i$. Then one sees that if
$\max_i\{-\nu_{\mathfrak p}(a_i)\}\geq \max_i\{-\nu_{{\mathfrak p}}(b_i)\}$ for all places ${\mathfrak p}$ of $R$ then $T({\bf a})\geq T({\bf b})$.
\item Suppose that $a_0,a_1,\dots,a_m\in k[t]$ and $\gcd(a_0,\dots,a_m)=1$. Then
$$
T({\bf a})=\max \{\deg(a_0),\dots,\deg(a_m)\}.
$$
To see this, let $R=k(t)$. Then
$$
T({\bf a})=\sum_{{\mathfrak p}} \max \{-\nu_{{\mathfrak p}}(a_0),\dots,-\nu_{\mathfrak p}(a_m)\}
$$
where ${\mathfrak p}$ ranges over all the places of $R$. Note that every place of $R$ has a local uniformizer of the form $1/t$ or $t-c$ for some $c\in k$. Suppose the place ${\mathfrak p}$ has $t-c$ as a local uniformizer. Then $\nu_{\mathfrak p}(a_i)>0$ if and only if $(t-c )| a_i$. Since $\gcd(a_0,\dots,a_m)=1$, there is some $i_0$ such that $\nu_{\mathfrak p}(a_{i_0})=0$. This implies that
for places ${\mathfrak p}$ with $t-c, c\in k$ as local uniformizers,
$$\max \{-\nu_{\mathfrak p}(a_0),\dots,-\nu_{\mathfrak p}(a_m)\}=0.$$
For the place with $1/t$ as local uniformizer, one has that $\nu_{\mathfrak p}(a_i)=-\deg(a_i)$. So for this place,
$$\max \{-\nu_{\mathfrak p}(a_0),\dots,-\nu_{\mathfrak p}(a_m)\}=\max\{\deg(a_0),\dots,\deg(a_m)\}.$$
Consequently, $T({\bf a})=\max\{\deg(a_0),\dots,\deg(a_m)\}$.
\item
Let $a\in {\overline{k(t)}}$ and $R=k(t,a)$. Let $g(t,x)$ be a nonzero irreducible polynomial over $k$ such that $g(t,a)=0$. It is clear that $T(a)=0$ if $a\in k$. Assume that $a\notin k$ and ${\mathfrak p}_1,\dots,{\mathfrak p}_s$ are all distinct poles of $a$ in $R$, then
\[
T(a)=\frac{-\sum_{i=1}^{s}\nu_{{\mathfrak p}_i}(a)}{[R:k(t)]}=\frac{[R:k(a)]}{[R:k(t)]}=\frac{\deg(g,t)}{\deg(g,x)}.
\]
In particular, if $a\in k(t)$ then $T(a)=\deg(a)$, which is defined to be the maximum of the degrees of the numerator and denominator of $a$.
\end{enumerate}
\end{remark}
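For instance (an illustration of ours): let $a=\sqrt{t}$, so that $g(t,x)=x^2-t$ and $R=k(t,a)$ with $[R:k(t)]=2$. The element $a$ has a single pole, lying above $t=\infty$, at which $\nu_{{\mathfrak p}}(a)=-1$; hence $T(a)=\frac{1}{2}=\frac{\deg(g,t)}{\deg(g,x)}$.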
From the above remark, it is easy to see that for $a\in {\overline{k(t)}}\setminus \{0\}$ and $i\in \Z$
$$
T(a^i)=|i| T(a).
$$
\begin{prop}\label{prop:heightproperty}
Let $a,b\in {\overline{k(t)}}$, $c_1,\dots,c_4\in k$ with $c_1c_4-c_2c_3\neq 0$. Then
\begin{enumerate}
\item $T\left(\frac{c_1a+c_2}{c_3a+c_4}\right)=T(a)$ if $c_3a+c_4\neq 0$;
\item $T(ab)\leq T(a)+T(b)$;
\item $T(a+\lambda b)\leq T(a)+T(b)$ for all $\lambda\in k$.
\end{enumerate}
\end{prop}
\begin{proof}
$1.$ If $a\in k$ then the assertion is obvious. Suppose that $a\notin k$. Let $R=k(t,a)$. Then $R=k(t, (c_1a+c_2)/(c_3a+c_4))$. The assertion follows from Remark~\ref{rem:heights} and the fact that $$\left[R:k\left(\frac{c_1a+c_2}{c_3a+c_4}\right)\right]=[R:k(a)].$$
$2.$ Let $R=k(t,a,b)$. For each place ${\mathfrak p}$ of $R$,
$
-\nu_{{\mathfrak p}}(ab)=-\nu_{{\mathfrak p}}(a)-\nu_{{\mathfrak p}}(b)
$
and thus
$$
\max\{0,-\nu_{{\mathfrak p}}(ab)\}\leq \max\{0,-\nu_{{\mathfrak p}}(a)\}+\max\{0,-\nu_{{\mathfrak p}}(b)\}.
$$
The assertion then follows from Remark~\ref{rem:heights}.
$3.$ Use an argument similar to that in 2. and the fact that
$$
-\nu_{{\mathfrak p}}(a+\lambda b)\leq \max\{-\nu_{{\mathfrak p}}(a),-\nu_{{\mathfrak p}}(b)\}.
$$
\end{proof}
The following examples show that the equalities may hold in $2$ and $3$ of Proposition~\ref{prop:heightproperty}.
\begin{example}
Let $a=t^2, b=t^3+1$ and $\lambda\in k\setminus \{0\}$. Then
$$T(ab)=5=T(a)+T(b),\,\,T(1/a+\lambda/b)=5=T(1/a)+T(1/b).$$
Moreover, both of them are greater than the maximum of $T(a)$ and $T(b)$.
\end{example}
In the following, ${\bf y}$ stands for the vector with indeterminates $y_1,\dots,y_s$ and ${\bf y}^{\bf d}$ denotes $\prod_{i=1}^s y_i^{d_i}$ for ${\bf d}=(d_1,\dots,d_s)\in \Z^s$.
\begin{prop}\label{prop:height2}
Let $f$ and $g$ be polynomials in ${\overline{k(t)}}[{\bf y}]$, then
\begin{enumerate}
\item $T(fg)\leq T(f)+T(g);$
\item If both $f$ and $g$ have 1 as a coefficient, then $T(f+g)\leq T(f)+T(g).$
\end{enumerate}
\end{prop}
\begin{proof}
Write $f=\sum_{{\bf d}} a_{{\bf d}} {\bf y}^{\bf d}$ and $g=\sum_{{\bf d}} b_{\bf d} {\bf y}^{\bf d}$ with $a_{\bf d},b_{\bf d}\in{\overline{k(t)}}$. Let $R$ be a finite extension of $k(t)$ containing all $a_{\bf d}, b_{\bf d}$, and ${\mathfrak p}$ a place of $R$.
1. Each coefficient of $fg$ is of the form $\sum_{i=1}^s a_{{\bf d}_i}b_{{\bf e}_i}$ (with ${\bf d}_i+{\bf e}_i$ fixed), where $s\geq 1$. Since
$$
-\nu_{{\mathfrak p}}\left(\sum_{i=1}^s a_{{\bf d}_i}b_{{\bf e}_i}\right)\leq \max_{i=1}^s\{-\nu_{{\mathfrak p}}(a_{{\bf d}_i})-\nu_{{\mathfrak p}}(b_{{\bf e}_i})\},
$$
we have
$$
\max_{c\in C}\{-\nu_{{\mathfrak p}}(c)\}\leq \max_{{\bf d}}\{-\nu_{{\mathfrak p}}(a_{\bf d})\}+\max_{{\bf d}}\{-\nu_{{\mathfrak p}}(b_{\bf d})\}
$$
where $C$ is the set of all coefficients of $fg$.
It follows that
$
T(fg)\leq T(f)+T(g).
$
2. The assertion follows from the fact that
\begin{align*}
-\nu_{\mathfrak p}(a_{\bf d}+b_{\bf d}) &\leq \max \{-\nu_{\mathfrak p}(a_{\bf d}),-\nu_{\mathfrak p}(b_{\bf d})\} \leq \max\{0,-\nu_{\mathfrak p}(a_{\bf d}),-\nu_{\mathfrak p}(b_{\bf d})\} \\
&\leq \max_{{\bf d}}\{0,-\nu_{\mathfrak p}(a_{\bf d})\}+\max_{{\bf d}}\{0,-\nu_{\mathfrak p}(b_{\bf d})\}.
\end{align*}
\end{proof}
\begin{prop}\label{prop:heightofzero}
Let $f=x^n+a_{n-1}x^{n-1}+\dots+a_0$ where $n>0$ and $a_i\in {\overline{k(t)}}$. Suppose that $\alpha$ is a zero of $f$ in ${\overline{k(t)}}$ and $R$ is a finite extension of $k(t)$ containing $\alpha$ and all $a_i$. Then for each place ${\mathfrak p}$ of $R$,
$$
\max \{0, -\nu_{{\mathfrak p}}(\alpha)\} \leq \max \{0,-\nu_{\mathfrak p}(a_0),\dots,-\nu_{\mathfrak p}(a_{n-1})\}.
$$
\end{prop}
\begin{proof}
The assertion is clear if $\alpha\in k$ or ${\mathfrak p}$ is not a pole of $\alpha$. Assume $\alpha \in {\overline{k(t)}}\setminus k$ and ${\mathfrak p}$ is a pole of $\alpha$. Then
$$
\nu_{{\mathfrak p}}(\alpha^n)=\nu_{{\mathfrak p}}\left(\sum_{i=0}^{n-1}a_i \alpha^i\right)\geq \min_{i=0}^{n-1} \left\{i\nu_{{\mathfrak p}}(\alpha)+\nu_{{\mathfrak p}}(a_i)\right\}= i_0 \nu_{\mathfrak p}(\alpha)+\nu_{\mathfrak p}(a_{i_0})
$$
for some $i_0$ with $0\leq i_0 \leq n-1$.
This together with the fact that $\nu_{\mathfrak p}(\alpha)<0$ implies that
$$
\nu_{{\mathfrak p}}(\alpha)\geq \frac{\nu_{{\mathfrak p}}(a_{i_0})}{n-i_0}\geq \nu_{{\mathfrak p}}(a_{i_0})\geq \min_{i=0}^ {n-1}\left\{ \nu_{\mathfrak p}(a_i)\right\},
$$
i.e.
$$
-\nu_{{\mathfrak p}}(\alpha)\leq \max_{i=0}^ {n-1} \left\{-\nu_{{\mathfrak p}}(a_i)\right\}.
$$
Consequently, one has that
$$
\max\{0,-\nu_{{\mathfrak p}}(\alpha)\}\leq \max\{0,-\nu_{{\mathfrak p}}(a_0),\dots,-\nu_{{\mathfrak p}}(a_{n-1})\}.
$$
\end{proof}
\begin{corollary}
\label{cor:heightofzero}
Let $f$ be a polynomial in ${\overline{k(t)}}[x]$ and $\alpha$ a zero of $f$ in ${\overline{k(t)}}$. Then $T(\alpha)\leq T(f)$.
\end{corollary}
The equality in Corollary~\ref{cor:heightofzero} may hold as shown in the following example.
\begin{example}
Let
$$
f=x^2-(t^3+1)x+t^3=(x-t^3)(x-1),
$$
then $T(t^3)=T(f)=3$.
\end{example}
\begin{lemma}\label{lm:factor}
Assume $f,g\in {\overline{k(t)}}[x]$ and $g$ is a factor of $f$. Then
$
T(g)\leq T(f).
$
\end{lemma}
\begin{proof}
Without loss of generality, we may assume both $f$ and $g$ are monic. Write
$$
f=x^n+\sum_{i=0}^{n-1}a_i x^i, \,\,a_i\in {\overline{k(t)}}.
$$
We first show that if $f=gh$ and $\deg g=1$ then $T(h)\leq T(f)$. Suppose that
$$h=x^{n-1}+\sum_{j=0}^{n-2}b_jx^j,\,\,g=x+\alpha
$$
where $\alpha, b_j\in {\overline{k(t)}}$. Let $R=k(t,\alpha,b_0,\dots,b_{n-2})$. For each place ${\mathfrak p}$ of $R$, denote
$$
N_{\mathfrak p}=\max \{0,-\nu_{\mathfrak p}(a_0),\dots,-\nu_{\mathfrak p}(a_{n-1})\}.
$$
Then $\max\{0,-\nu_{\mathfrak p}(a_i)\}\leq N_{\mathfrak p}$ for all $i=0,\dots,n-1$ and all places ${\mathfrak p}$ of $R$. From $f=gh$, one has that
$$
b_{n-2}+\alpha=a_{n-1},\,\, b_0\alpha=a_0,\,\, b_i\alpha+b_{i-1}=a_i, \,\,i=1,\dots,n-2.
$$
We claim that $\max\{0,-\nu_{\mathfrak p}(b_j)\}\leq N_{\mathfrak p}$ for all $j=0,\dots,n-2$ and all places ${\mathfrak p}$ of $R$. For $j=n-2$, one has that
$$
\nu_{\mathfrak p}(b_{n-2})=\nu_{\mathfrak p}(a_{n-1}-\alpha)\geq \min\{\nu_{\mathfrak p}(a_{n-1}),\nu_{\mathfrak p}(\alpha)\}.
$$
This implies that $\max\{0,-\nu_{\mathfrak p}(b_{n-2})\}\leq \max \{0,-\nu_{\mathfrak p}(a_{n-1}), -\nu_{\mathfrak p}(\alpha)\}$. By Proposition~\ref{prop:heightofzero}, we have that
$\max\{0,-\nu_{\mathfrak p}(\alpha)\}\leq N_{\mathfrak p}$. Hence $\max\{0,-\nu_{\mathfrak p}(b_{n-2})\}\leq N_{\mathfrak p}$ for all places ${\mathfrak p}$ of $R$. Now assume that there is a place ${\mathfrak q}$ of $R$ and $j_0$ with $0\leq j_0<n-2$ such that $\max\{0,-\nu_{\mathfrak q}(b_{j_0})\}>N_{\mathfrak q}$ but $\max\{0,-\nu_{\mathfrak q}(b_{j_0+1})\}\leq N_{\mathfrak q}$. Then one has that $\nu_{\mathfrak q}(b_{j_0})<0$ and $\nu_{\mathfrak q}(b_{j_0})<\nu_{\mathfrak q}(a_i)$ for all $i=0,\dots,n-1$. On the other hand, since $\max\{0,-\nu_{\mathfrak q}(b_{j_0+1})\}\leq N_{\mathfrak q}$, we get $-\nu_{\mathfrak q}(b_{j_0})>N_{\mathfrak q}\geq -\nu_{\mathfrak q}(b_{j_0+1})$, i.e. $\nu_{\mathfrak q}(b_{j_0})<\nu_{{\mathfrak q}}(b_{j_0+1})$.
From $\alpha b_{j_0+1}=a_{j_0+1}-b_{j_0}$, one has that
$$\nu_{\mathfrak q}(\alpha b_{j_0+1})=\nu_{\mathfrak q}(\alpha)+\nu_{\mathfrak q}(b_{j_0+1})=\min\{\nu_{\mathfrak q}(a_{j_0+1}),\nu_{\mathfrak q}(b_{j_0})\}=\nu_{{\mathfrak q}}(b_{j_0}).$$
The last equality holds because $\nu_{{\mathfrak q}}(b_{j_0})<\nu_{\mathfrak q}(a_i)$ for all $i$. Thus $\nu_{\mathfrak q}(\alpha)=\nu_{\mathfrak q}(b_{j_0})-\nu_{\mathfrak q}(b_{j_0+1})<0$, which implies that
$$
\nu_{\mathfrak q}(\alpha b_{j_0})=\nu_{\mathfrak q}(\alpha)+\nu_{\mathfrak q}(b_{j_0})<\nu_{\mathfrak q}(a_i)
$$
for all $i=0,\dots,n-1$.
As $b_{j_0-1}=a_{j_0}-\alpha b_{j_0}$, one has that
$
\nu_{\mathfrak q}(b_{j_0-1})=\nu_{\mathfrak q}(\alpha)+\nu_{\mathfrak q}(b_{j_0})<0
$
and furthermore $\nu_{\mathfrak q}(b_{j_0-1})<\nu_{\mathfrak q}(a_i)$
for all $i=0,\dots,n-1$.
In other words, $\max\{0,-\nu_{\mathfrak q}(b_{j_0-1})\}> N_{\mathfrak q}$. Applying a similar argument to the equalities $b_j=a_{j+1}-\alpha b_{j+1}$ for $j=j_0-2,\dots,0$ successively yields that $\nu_{\mathfrak q}(b_j)<0$ and $\max\{0,-\nu_{\mathfrak q}(b_j)\}>N_{\mathfrak q}$ for all $j=j_0-2,\dots,0$. However, one has that $\alpha b_0=a_0$. This implies that $\nu_{\mathfrak q}(a_0)<\nu_{\mathfrak q}(b_0)<0$ and thus $N_{\mathfrak q}\geq -\nu_{\mathfrak q}(a_0)>-\nu_{\mathfrak q}(b_0)>0$. That is to say, $N_{\mathfrak q}> \max\{0,-\nu_{\mathfrak q}(b_0)\}$, a contradiction. This proves the claim. This claim and Remark~\ref{rem:heights} imply that $T(h)\leq T(f)$.
Now we prove the assertion by induction on $\deg(f)$. The base case $\deg(f)=1$ is obvious. Suppose that the assertion holds for $\deg(f)\leq n$. Consider the case $\deg(f)=n+1$. If $f=g$ then there is nothing to prove. Suppose that $f\neq g$. Then there is $\beta\in {\overline{k(t)}}$ such that $(x+\beta)g$ divides $f$. Let $f=(x+\beta)\tilde{f}$ for some $\tilde{f}\in {\overline{k(t)}}[x]$. Then $T(\tilde{f})\leq T(f)$ and $g$ divides $\tilde{f}$. By induction hypothesis, one has that $T(g)\leq T(\tilde{f})\leq T(f)$.
\end{proof}
\begin{corollary}
\label{cor:factor}
Assume $f(x,y),g(x,y)\in {\overline{k(t)}}[x,y]$ and $g(x,y)$ is a factor of $f(x,y)$. Then
$
T(g(x,y))\leq T(f(x,y)).
$
\end{corollary}
\begin{proof}
Suppose that $f(x,y)=\sum_{i,j} c_{i,j}x^i y^j$ where $c_{i,j}\in {\overline{k(t)}}$. Let $d$ be an integer greater than ${\rm tdeg}(f(x,y))$. One has that
$$
f(x,x^d)=\sum_{i,j} c_{i,j} x^{i+jd}.
$$
Note that for $0\leq i,j,l,m<d$, $i+jd=l+md$ if and only if $(i,j)=(l,m)$. This implies that the set of the coefficients of $f(x,y)$ coincides with that of $f(x,x^d)$. Hence $T(f(x,y))=T(f(x,x^d))$. Similarly, $T(g(x,y))=T(g(x,x^d))$. It is clear that $g(x,x^d)$ is still a factor of $f(x,x^d)$. By Lemma~\ref{lm:factor}, $T(g(x,x^d))\leq T(f(x,x^d))$. Thus $T(g(x,y))\leq T(f(x,y))$.
\end{proof}
\begin{prop}\label{prop:resultant}
Assume $f,g\in {\overline{k(t)}}[{\bf y},z]$, then
$$
T({\rm res}_{z}(f,g))\leq \deg(g,z)T(f)+\deg(f,z)T(g)
$$
where ${\rm res}_{z}(f,g)$ is the resultant of $f$ and $g$ with respect to $z$.
\end{prop}
\begin{proof}
The assertion is clear if ${\rm res}_{z}(f,g)=0$, so consider the case ${\rm res}_{z}(f,g)\neq 0$.
Assume $\deg(f,z)=n,\deg(g,z)=m$. Write
$$
f=\sum_{i=0}^n a_i({\bf y})z^i,\quad g=\sum_{i=0}^m b_i({\bf y})z^i
$$
where $a_i({\bf y}),b_i({\bf y})\in {\overline{k(t)}}[{\bf y}]$. Denote further by $C_1, C_2$ the sets of the coefficients in ${\bf y},z$ of $f, g$ respectively.
Then
$$
{\rm res}_z(f,g)=
\begin{vmatrix}
a_n & a_{n-1} &\cdots & a_0 \\
& \ddots & \ddots & & \ddots \\
& & a_n & a_{n-1} &\cdots & a_0 \\
b_m & b_{m-1} &\cdots & b_0 \\
& \ddots & \ddots & & \ddots \\
& & b_m & b_{m-1} &\cdots & b_0 \\
\end{vmatrix}.
$$
By the definition of the determinant, we can write
$$
{\rm res}_z(f,g)=\sum_{{\bf d}} \left(\sum_{j=1}^{\ell_{\bf d}}\beta_{{\bf d},j} {\bf m}_{{\bf d},j}{\bf n}_{{\bf d},j} \right){\bf y}^{{\bf d}}
$$
where $\beta_{{\bf d},j}, \ell_{\bf d}\in \Z, \ell_{\bf d}\geq 0$, ${\bf m}_{{\bf d},j}$ is a monomial in $C_1$ with total degree $m$ and ${\bf n}_{{\bf d},j}$ is a monomial in $C_2$ with total degree $n$. Let $R=k(t,C_1,C_2)$.
For each place ${\mathfrak p}$ of $R$, we have
\begin{align*}
-\nu_{{\mathfrak p}}\left(\sum_j \beta_{{\bf d},j} {\bf m}_{{\bf d},j}{\bf n}_{{\bf d},j}\right)
&\leq \max_j \{-\nu_{{\mathfrak p}}( {\bf m}_{{\bf d},j}{\bf n}_{{\bf d},j})\}\\
&\leq m\max_{c\in C_1}\{-\nu_{{\mathfrak p}}(c)\}+n\max_{c\in C_2}\{-\nu_{{\mathfrak p}}(c)\}.
\end{align*}
Therefore by Remark~\ref{rem:heights},
$$
T({\rm res}_{z}(f,g))\leq mT(f)+nT(g).
$$
\end{proof}
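As a quick sanity check, take $f=z^2-t$ and $g=z-t$ (with no ${\bf y}$ variables). Then ${\rm res}_z(f,g)=f(t)=t^2-t$. Under the standard normalization, in which an element of $k[t]$ has height equal to its degree in $t$, one has $T(f)=T(g)=1$ and
$$
T({\rm res}_z(f,g))=2\leq \deg(g,z)T(f)+\deg(f,z)T(g)=1\cdot 1+2\cdot 1=3.
$$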
\section{Degrees and Heights on Riemann-Roch spaces}
Throughout this section, ${\mathcal R}$ denotes an algebraic function field of one variable over ${\overline{k(t)}}$.
Let ${\bm \xi}=(\xi_0,\xi_1,\xi_2)\in {\mathcal R}^3$ be such that ${\mathcal C}({\bm \xi})$ is a plane projective model of ${\mathcal R}$, i.e. ${\mathcal R}={\overline{k(t)}}(\xi_0/\xi_2,\xi_1/\xi_2)$. Each $h\in {\mathcal R}$ can be represented by $G({\bm \xi})/H({\bm \xi})$ where $G, H$ are two homogeneous polynomials in ${\overline{k(t)}}[x_0,x_1,x_2]$ of the same degree and having no common factors, and $H({\bm \xi})\neq 0$. We call $(G,H)$ a representation of $h$. This section focuses on determining the degrees and heights of representations of elements in Riemann-Roch spaces.
There are several algorithms for computing bases of Riemann-Roch spaces (see for example \cite{hess,huang-ierardi}). However, no existing algorithm provides explicit bounds for the degrees and heights of $G$ and $H$ where $(G,H)$ represents an element of such a basis. These bounds play an essential role in estimating the heights of points on a plane algebraic curve. In this section, we shall follow the algorithm developed in \cite{huang-ierardi} to obtain these bounds. For this purpose, we need to resolve the singularities of a given plane algebraic curve to obtain one with only ordinary singularities. This can be done by a sequence of quadratic transformations.
In this section, unless otherwise stated, $F$ always stands for an irreducible homogeneous polynomial in ${\overline{k(t)}}[x_0,x_1,x_2]$ of degree $n>0$ which defines a plane projective model of ${\mathcal R}$, i.e. there is ${\bm \xi}=(\xi_0,\xi_1,\xi_2)\in {\mathcal R}^3$ such that ${\mathcal R}={\overline{k(t)}}(\xi_0/\xi_2,\xi_1/\xi_2)$ and $F({\bm \xi})=0$.
\subsection{Quadratic transformations}\label{subsec:quadratic}
Let $D$ be a divisor in ${\mathcal R}$. By a proposition in \cite{fulton}, there is a birational transformation ${\mathcal B}$ such that the transform of $F=0$ under ${\mathcal B}$ is a plane projective curve with only ordinary singularities; moreover, ${\mathcal B}$ can be chosen to be the composition of a sequence of suitable quadratic transformations. In this subsection, we shall investigate the degree and height of the transform of $F=0$ under a quadratic transformation.
\begin{define}
\begin{enumerate}
\item
${\mathcal L}$ stands for a projective change of coordinates on $\bP^2({\overline{k(t)}})$ that is defined as
$
{\mathcal L}({\bf c})={\bf c} M_{\mathcal L}
$
where $M_{\mathcal L}\in {\rm GL}_3({\overline{k(t)}})$, and ${\mathcal Q}$ denotes the standard quadratic transformation that is defined as
$$
{\mathcal Q}({\bf c})=(c_1c_2,c_0c_2,c_0c_1)
$$
where ${\bf c}=(c_0,c_1,c_2)$.
\item The height of ${\mathcal L}$, denoted by $T({\mathcal L})$, is defined as $T(M_{\mathcal L})$.
\end{enumerate}
\end{define}
\begin{notation}
\begin{enumerate}
\item
$F^{\mathcal L}$ stands for $F((x_0,x_1,x_2)M_{\mathcal L})$.
\item
$F^{\mathcal Q}$ stands for the irreducible polynomial $\tilde{F}$ satisfying
$$
F(x_1x_2,x_0x_2,x_0x_1)=x_0^{d_0}x_1^{d_1}x_2^{d_2}\tilde{F}
$$
where $d_i\geq 0$.
\end{enumerate}
One sees that $\bV(F^{\mathcal L})$ (resp. $\bV(F^{\mathcal Q})$) is the variety ${\mathcal L}^{-1}(\bV(F))$ (resp. the projective closure of ${\mathcal Q}^{-1}(\bV(F)\setminus \bV(x_0x_1x_2))$).
\end{notation}
\begin{remark}
\label{rem:quadratictransformation}
${\mathcal Q}$ is bijective on $\bP^2({\overline{k(t)}})\setminus \bV(x_0x_1x_2)$ and ${\mathcal Q}^{-1}={\mathcal Q}$.
\end{remark}
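For example, if $F=x_0x_2-x_1^2$ then $F(x_1x_2,x_0x_2,x_0x_1)=x_0x_1^2x_2-x_0^2x_2^2=x_0x_2(x_1^2-x_0x_2)$, so $F^{\mathcal Q}=x_1^2-x_0x_2$ with $(d_0,d_1,d_2)=(1,0,1)$. Note that the coefficient sets of $F$ and $F^{\mathcal Q}$ coincide, in accordance with Lemma~\ref{lm:standardtransformation} below.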
Let us first bound the heights of the common points of two algebraic curves in $\bP^2({\overline{k(t)}})$.
\begin{prop}\label{prop:intersection}
Let $F,G$ be two homogeneous polynomials in ${\overline{k(t)}}[x_0,x_1,x_2]$ of degree $n,m$ respectively. Suppose that $F$ and $G$ have no common factor, and ${\bf a}\in \bP^2({\overline{k(t)}})$ is a common point of $F=0$ and $G=0$. Then
$$
T({\bf a})\leq 2(mT(F)+nT(G)).
$$
Furthermore, if $G=c_0x_0+c_1x_1+c_2x_2$ with $c_i\in k$ then $T({\bf a})\leq T(F)$.
\end{prop}
\begin{proof}
Let
$H_i(x_j,x_l)={\rm res}_{x_i}(F,G)$ where $ \{i,j,l\}=\{0,1,2\}$.
Proposition \ref{prop:resultant} implies that
$
T(H_i)\leq mT(F)+nT(G).
$
Without loss of generality, suppose ${\bf a}=(1,a_1,a_2)$. Since ${\bf a}$ is a common point of $F=0$ and $G=0$,
$$
H_2(1,a_1)=H_1(1,a_2)=0.
$$
It follows from Corollary \ref{cor:heightofzero} that
$
T(a_i)\leq mT(F)+nT(G)
$ for all $i=1,2$.
Let $R=k(t,a_1,a_2)$ and ${\mathfrak p}$ a place of $R$. Then
$$
\max\{0,-\nu_{{\mathfrak p}}(a_1),-\nu_{{\mathfrak p}}(a_2)\}\leq
\max\{0,-\nu_{{\mathfrak p}}(a_1)\}+\max\{0,-\nu_{{\mathfrak p}}(a_2)\}.
$$
Whence
$$
T({\bf a})\leq T(a_1)+T(a_2)\leq 2(mT(F)+nT(G)).
$$
It remains to prove the second assertion. Since $a_0=1\neq 0$, not both $c_1$ and $c_2$ are zero. Without loss of generality, assume that $c_1\neq 0$. Substituting $a_1=-(c_0+a_2c_2)/c_1$ into $F=0$ yields that
$$F(1,-(c_0+a_2c_2)/c_1,a_2)=0.$$
This implies that $T(a_2)\leq T(F)$. On the other hand, one sees that
$$
\nu_{\mathfrak p}(a_1)=\nu_{\mathfrak p}(-(c_0+a_2c_2)/c_1)\geq \min\{\nu_{\mathfrak p}(c_0),\nu_{\mathfrak p}(a_2)\}.
$$
So
$$
\max\{0, -\nu_{\mathfrak p}(a_1),-\nu_{\mathfrak p}(a_2)\}\leq \max\{0, -\nu_{\mathfrak p}(a_2)\},
$$
which results in $T({\bf a})\leq T(a_2)\leq T(F)$.
\end{proof}
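To illustrate the second assertion, let $F=x_0^2+x_1^2-tx_2^2$ and $G=x_1$. The common points of $F=0$ and $G=0$ are $(\pm\sqrt{t},0,1)$, and under the standard normalization of heights (computing in $k(t,\sqrt{t})$ and dividing by the extension degree) one finds $T((\pm\sqrt{t},0,1))=\tfrac{1}{2}\leq T(F)=1$.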
\begin{corollary}\label{cor:singularity}
If ${\bf a}\in \bP^2({\overline{k(t)}})$ is a singular point of $F=0$, then
$
T({\bf a})\leq 2(2n-1)T(F).
$
\end{corollary}
\begin{proof}
A singular point of $F=0$ is a common point of $F=0$ and $\partial F/\partial x_i=0$, where $i$ is chosen so that $\partial F/\partial x_i$ is not identically zero. Since $F$ is irreducible and $\deg(\partial F/\partial x_i)=n-1$, the two polynomials have no common factor. As the coefficients of $\partial F/\partial x_i$ are integer multiples of those of $F$, $T(\partial F/\partial x_i)\leq T(F)$. Proposition~\ref{prop:intersection} then yields $T({\bf a})\leq 2((n-1)T(F)+nT(F))=2(2n-1)T(F)$.
\end{proof}
\begin{lemma}
\label{lm:lineartransformation}
Suppose that ${\mathcal L}$ is a projective change of coordinates. Then
\begin{enumerate}
\item
$T(F^{\mathcal L})\leq T(F)+\deg(F)T({\mathcal L})$;
\item
for each ${\bf c}\in \bP^2({\overline{k(t)}})$, $T({\mathcal L}({\bf c}))\leq T({\bf c})+T({\mathcal L})$.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose that $M_{\mathcal L}=(a_{i,j})$ and
$$F=\sum_{i=0}^n \sum_{j=0}^{n-i} c_{i,j}x_0^ix_1^j x_2^{n-i-j}$$
where $n=\deg(F)$ and $c_{i,j}\in {\overline{k(t)}}$.
1. One has that
$$
F^{\mathcal L}=\sum_{i=0}^n \sum_{j=0}^{n-i} c_{i,j} \left(\sum_{l=1}^3 a_{l,1}x_{l-1}\right)^i \left(\sum_{l=1}^3 a_{l,2}x_{l-1}\right)^j \left(\sum_{l=1}^3 a_{l,3}x_{l-1}\right)^{n-i-j}.
$$
Let $\rho$ be a coefficient of $F^{\mathcal L}$ viewed as polynomial in $x_0,x_1,x_2$. Then $\rho$ is a $k$-linear combination of monomials
$ c_{i,j}a_{1,1}^{e_{1,1}} \dots a_{3,3}^{e_{3,3}}$ with $\sum e_{i,j}=n$. Let $R$ be a finite extension of $k(t)$ containing all $c_{i,j}$ and $a_{i,j}$. Suppose that ${\mathfrak p}$ is a place of $R$. Then one has that
\begin{align*}
\nu_{{\mathfrak p}}(\rho) &\geq \min\left\{\nu_{{\mathfrak p}}(c_{i,j})+\sum_{i',j'} e_{i',j'}\nu_{\mathfrak p}(a_{i',j'})\right\}\\
&\geq \min_{i,j} \{\nu_{{\mathfrak p}}(c_{i,j})\}+n\min_{i,j}\{\nu_{\mathfrak p}(a_{i,j})\},
\end{align*}
where the first minimum ranges over the monomials occurring in $\rho$; i.e.
\begin{align*}
-\nu_{{\mathfrak p}}(\rho) \leq \max_{i,j}\{-\nu_{\mathfrak p}(c_{i,j})\}+n\max_{i,j}\{-\nu_{\mathfrak p}(a_{i,j})\}.
\end{align*}
Therefore $T(F^{\mathcal L})\leq T(F)+nT({\mathcal L})$ due to Remark~\ref{rem:heights}.
2. Suppose that ${\bf c}=(c_0,c_1,c_2)$ and ${\mathcal L}({\bf c})=(b_0,b_1,b_2)$. Then $b_i=\sum_{j=1}^3 a_{j,i}c_{j-1}$. Let $R$ be a finite extension of $k(t)$ containing all $c_i$ and $a_{i,j}$, and ${\mathfrak p}$ a place of $R$. Then
$$
\nu_{\mathfrak p}(b_i)=\nu_{\mathfrak p}\left(\sum_{j=1}^3 a_{j,i}c_{j-1}\right)\geq \min_j \{\nu_{\mathfrak p}(a_{j,i})+\nu_{\mathfrak p}(c_{j-1})\}
$$
i.e.
$$
-\nu_{\mathfrak p}(b_i)\leq \max_j \{-\nu_{\mathfrak p}(a_{j,i})-\nu_{\mathfrak p}(c_{j-1})\}\leq \max_j \{-\nu_{\mathfrak p}(a_{j,i})\}+\max_j \{-\nu_{\mathfrak p}(c_{j-1})\}.
$$
So $T({\mathcal L}({\bf c}))\leq T({\bf c})+T({\mathcal L})$.
\end{proof}
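For a concrete instance (again under the normalization in which an element of $k[t]$ has height equal to its degree in $t$), let $F=x_0^2+x_1^2-tx_2^2$, so $T(F)=1$, and let $M_{\mathcal L}$ be the identity matrix with its $(3,1)$ entry replaced by $t$, so $T({\mathcal L})=1$. Then $F^{\mathcal L}=(x_0+tx_2)^2+x_1^2-tx_2^2=x_0^2+x_1^2+2tx_0x_2+(t^2-t)x_2^2$ and $T(F^{\mathcal L})=2\leq T(F)+\deg(F)T({\mathcal L})=3$, illustrating the first assertion.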
\begin{corollary}
\label{cor:lineartransformation}
Suppose that ${\bf c}=(c_0,c_1,1)\in \bP^2({\overline{k(t)}})$. Let ${\mathcal L}$ be a projective change of coordinates with
\begin{equation}
\label{eq:lineartransformation}
M_{\mathcal L}=\begin{pmatrix}
a_1 & a_2 & a_3 \\
a_4 & a_5 & a_6 \\
c_0 & c_1 & 1
\end{pmatrix}
\end{equation}
where $a_i\in k$. Then
\begin{enumerate}
\item ${\mathcal L}((0,0,1))={\bf c}$;
\item $T(F^{{\mathcal L}}), T(F^{{\mathcal L}^{-1}})\leq T(F)+\deg(F)T({\bf c})$;
\item for each ${\bf b}\in \bP^2({\overline{k(t)}})$, $T({\mathcal L}({\bf b})), T({\mathcal L}^{-1}({\bf b}))\leq T({\bf b})+T({\bf c})$.
\end{enumerate}
\end{corollary}
\begin{proof}
The first assertion is obvious. The second and third assertions follow from Lemma~\ref{lm:lineartransformation} and the fact that $T(M_{\mathcal L})=T({\bf c})$ and $T(M_{{\mathcal L}^{-1}})\leq T({\bf c})$.
\end{proof}
\begin{define}
\label{def:excellentposition}
\begin{enumerate}
\item The points $(1,0,0), (0,1,0), (0,0,1)\in \bP^2({\overline{k(t)}})$ are called fundamental points.
\item Assume $(0,0,1)$ is a singular point of $F=0$ with multiplicity $r$. $F=0$ is said to be in excellent position if it satisfies the following two conditions:
\begin{enumerate}
\item the line $x_2=0$ intersects $F=0$ in $n$ distinct non-fundamental points;
\item the lines $x_0=0, x_1=0$ intersect $F=0$ in $n-r$ distinct points other than fundamental points.
\end{enumerate}
\end{enumerate}
\end{define}
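For instance, the nodal cubic $F=x_1^2x_2-x_0^3-x_0^2x_2$ has an ordinary double point at $(0,0,1)$, but $F=0$ is not in excellent position: $F(x_0,x_1,0)=-x_0^3$, so the line $x_2=0$ meets $F=0$ only at the fundamental point $(0,1,0)$, violating condition (a).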
\begin{lemma}
\label{lm:lineartransformation2}
Suppose that ${\bf c}=(c_0,c_1,c_2)$ is a singular point of $F=0$ of multiplicity $r$. There is a projective change of coordinates ${\mathcal L}$ with $M_{\mathcal L}$ having the form (\ref{eq:lineartransformation}) such that
${\mathcal L}((0,0,1))={\bf c}$ and $F^{\mathcal L}=0$ is in excellent position.
\end{lemma}
\begin{proof}
Denote ${\bf y}=(y_1,\dots,y_6)$. Let ${\mathcal L}'_{\bf y}$ be the projective change of coordinates with $M_{{\mathcal L}'_{\bf y}}$ of the form
\[
\begin{pmatrix}
y_1 & y_2 & y_3\\
y_4 & y_5 & y_6 \\
c_0 & c_1 & c_2
\end{pmatrix}.
\]
One sees that there are polynomials $f_1,\dots,f_s\in {\overline{k(t)}}[y_1,\dots,y_6]$ such that ${\mathcal L}'_{\bf b}$ with ${\bf b} \in {\overline{k(t)}}^6$ satisfies the required conditions if and only if
$$
{\bf b} \in S=\left\{ {\bf b} \in {\overline{k(t)}}^6 \mid \forall\,i=1,\dots,s, \,f_i({\bf b})\neq 0\right\}.
$$
Note that if ${\mathcal L}$ is a projective change of coordinates such that ${\mathcal L}((0,0,1))={\bf c}$ then ${\mathcal L}={\mathcal L}'_{\bf b}$ for some ${\bf b}\in {\overline{k(t)}}^6$.
Due to Lemma 1 in \cite{fulton}, there are projective changes of coordinates satisfying the above conditions. In other words, $S\neq \emptyset$. Therefore $S\cap k^6 \neq \emptyset$. For every ${\bf b} \in S\cap k^6$, ${\mathcal L}'_{\bf b}$ is as required.
\end{proof}
\begin{lemma}
\label{lm:standardtransformation}
\begin{enumerate}
\item
$T(F^{{\mathcal Q}})=T(F)$;
\item
For each ${\bf a}=(a_0,a_1,a_2)\in \bP^2({\overline{k(t)}})$, $T({\mathcal Q}({\bf a}))\leq 2T({\bf a})$.
\end{enumerate}
\end{lemma}
\begin{proof}
$1.$ Assume that $F=\sum_{i=0}^n \sum_{j=0}^{n-i} c_{i,j}x_0^i x_1^j x_2^{n-i-j}$ where $c_{i,j}\in {\overline{k(t)}}$. Then
\begin{align*}
F(x_1x_2,x_0x_2,x_0x_1)&=\sum_{i=0}^n \sum_{j=0}^{n-i} c_{i,j} (x_1x_2)^i (x_0x_2)^j (x_0x_1)^{n-i-j}\\
&=\sum_{i=0}^n \sum_{j=0}^{n-i} c_{i,j} x_0^{n-i}x_1^{n-j}x_2^{i+j}.
\end{align*}
From this, one sees that the set of coefficients of $F$ is equal to that of $F^{{\mathcal Q}}$. Hence $T(F)=T(F^{{\mathcal Q}})$.
$2.$ One has that ${\mathcal Q}({\bf a})=(a_1a_2,a_0a_2,a_0a_1)$. Let $R$ be a finite extension of $k(t)$ containing all $a_i$ and ${\mathfrak p}$ a place of $R$. Note that
$$
\nu_{{\mathfrak p}}(a_ia_j)=\nu_{{\mathfrak p}}(a_i)+\nu_{{\mathfrak p}}(a_j) \geq 2\min\{\nu_{{\mathfrak p}}(a_0),\nu_{{\mathfrak p}}(a_1),\nu_{{\mathfrak p}}(a_2)\},
$$
i.e. $-\nu_{{\mathfrak p}}(a_ia_j)\leq 2\max\{-\nu_{{\mathfrak p}}(a_0),-\nu_{{\mathfrak p}}(a_1),-\nu_{{\mathfrak p}}(a_2)\}$. So $T({\mathcal Q}({\bf a}))\leq 2T({\bf a})$.
\end{proof}
\begin{define}
\begin{enumerate}
\item
We call a projective change of coordinates in Lemma~\ref{lm:lineartransformation2} a projective change of coordinates centered at ${\bf c}$.
\item
\label{def:qtransformation} Let ${\mathcal L}_{\bf c}$ be a projective change of coordinates centered at ${\bf c}$ and ${\mathcal Q}$ the standard quadratic transformation. We call ${\mathcal Q}_{\bf c}={\mathcal L}_{\bf c}\circ {\mathcal Q}$, the composition of ${\mathcal Q}$ and ${\mathcal L}_{\bf c}$, a quadratic transformation centered at ${\bf c}$.
\end{enumerate}
\end{define}
\begin{notation}
Let ${\mathcal Q}_{\bf c}$ be a quadratic transformation centered at ${\bf c}$. We shall denote $F^{{\mathcal Q}_{\bf c}}=(F^{{\mathcal L}_{\bf c}})^{{\mathcal Q}}$.
\end{notation}
\begin{corollary}
\label{cor:qtransformation}
Let ${\bf c}$ be a singular point of $F=0$ and ${\mathcal Q}_{\bf c}$ a quadratic transformation centered at ${\bf c}$. Then
\begin{enumerate}
\item
$T(F^{{\mathcal Q}_{\bf c}})\leq T(F)+\deg(F)T({\bf c})$;
\item
for ${\bf a} \in \bP^2({\overline{k(t)}})$, $T({\mathcal Q}_{\bf c}^{-1}({\bf a}))\leq 2(T({\bf c})+T({\bf a}))$;
\end{enumerate}
\end{corollary}
\begin{proof}
$1$ and $2$ follow from the fact that ${\mathcal Q}^{-1}={\mathcal Q}$ and Lemmas~\ref{lm:lineartransformation} and~\ref{lm:standardtransformation}.
\end{proof}
\begin{prop}
\label{prop:centers}
Let ${\mathcal C}({\bm \xi})$ be a plane projective model of ${\mathcal R}$ defined by $F$ and ${\mathfrak P}$ a place of ${\mathcal R}$. Let ${\bf a}$ be the center of ${\mathfrak P}$ with respect to ${\bm \xi}$. Assume that ${\mathcal Q}_{\bf c}$ is a quadratic transformation centered at ${\bf c}$ for some singular point ${\bf c}$ of $F=0$ and ${\bf a}'$ is the center of ${\mathfrak P}$ with respect to ${\mathcal Q}_{\bf c}^{-1}({\bm \xi})$.
Then
$$T({\bf a}')\leq \max\{2(T({\bf c})+T({\bf a})), T(F)+\deg(F)T({\bf c})\}.$$
\end{prop}
\begin{proof}
We first claim that if ${\bf a} \neq {\bf c}$ then ${\mathcal Q}_{\bf c}^{-1}({\bf a})\neq (0,0,0)$. Otherwise assume that ${\mathcal Q}_{\bf c}^{-1}({\bf a})={\mathcal Q}^{-1}{\mathcal L}_{\bf c}^{-1}({\bf a})=(0,0,0)$. Then ${\mathcal L}_{\bf c}^{-1}({\bf a})$ is a fundamental point of $F^{{\mathcal L}_{\bf c}}=0$. Since neither $(1,0,0)$ nor $(0,1,0)$ is a point of $F^{{\mathcal L}_{\bf c}}=0$, one has that ${\mathcal L}_{\bf c}^{-1}({\bf a})=(0,0,1)$. Hence ${\bf a}={\bf c}$. This proves our claim.
Suppose that ${\bf a} \neq {\bf c}$. Then ${\mathcal Q}_{\bf c}^{-1}({\bf a})\neq (0,0,0)$ and thus ${\bf a}'={\mathcal Q}_{\bf c}^{-1}({\bf a})$ by Proposition~\ref{prop:centertransformation}. Corollary~\ref{cor:qtransformation} then implies that $T({\bf a}')\leq 2(T({\bf a})+T({\bf c})).$
Now suppose that ${\bf a}={\bf c}$. Denote ${\bm \xi}'={\mathcal L}_{\bf c}^{-1}({\bm \xi})=(\xi_0',\xi_1',\xi_2')$. By Proposition~\ref{prop:centertransformation} again, $(0,0,1)$ is the center of ${\mathfrak P}$ with respect to ${\bm \xi}'$. Suppose that $u$ is a local uniformizer of ${\mathfrak P}$ and $\ell_i=\nu_{{\mathfrak P}}(\xi_i')$. From the definition of center, one sees that $\ell_i>\ell_2$ for all $i=0,1$.
Write $\xi_i'=u^{\ell_i}(c_i+u \eta_i)$
where $c_i\in {\overline{k(t)}}\setminus\{0\}$, $\eta_i\in {\mathcal R}$ with $\nu_{{\mathfrak P}}(\eta_i)\geq 0$.
One then has that
\begin{align*}
{\mathcal Q}^{-1}({\bm \xi}')&=(\xi_1'\xi_2',\xi_0'\xi_2',\xi_0'\xi_1')\\
&=\left(u^{\ell_1+\ell_2}(c_1c_2+u\tilde{\eta_0}), u^{\ell_0+\ell_2}(c_0c_2+u\tilde{\eta_1}), u^{\ell_1+\ell_0}(c_0c_1+u\tilde{\eta_2})\right)
\end{align*}
where $\tilde{\eta_i}\in {\mathcal R}$ with $\nu_{{\mathfrak P}}(\tilde{\eta_i})\geq 0$. Set
$$
\mu= \min \{\nu_{{\mathfrak P}}(\xi_1'\xi_2'), \nu_{{\mathfrak P}}(\xi_0'\xi_2'),\nu_{{\mathfrak P}}(\xi_1'\xi_0')\}.
$$
Since both $\ell_0$ and $\ell_1$ are greater than $\ell_2$, $\mu=\ell_0+\ell_2$ or $\ell_1+\ell_2$.
In the case that $\mu=\ell_0+\ell_2=\ell_1+\ell_2$, one has that ${\bf a}'=(c_1c_2,c_0c_2,0)=c_2(c_1,c_0,0)$. So ${\bf a}' \in \bV(F^{{\mathcal Q}_{\bf c}})\cap \bV(x_2)$. By Proposition~\ref{prop:intersection}, $T({\bf a}')\leq T(F^{{\mathcal Q}_{\bf c}})\leq T(F)+\deg(F)T({\bf c})$. In other cases, one sees that ${\bf a}'$ is a fundamental point and so $T({\bf a}')=0$.
Hence, in each case, one has that
$$
T({\bf a}')\leq \max\{2(T({\bf c})+T({\bf a})), T(F)+\deg(F)T({\bf c})\}.
$$
\end{proof}
\subsection{Degrees and Heights for Riemann-Roch Spaces}\label{subsec:ordinary}
Let $D=\sum_{i=1}^m n_i {\mathfrak P}_i$ be a divisor in ${\mathcal R}$ where $n_i\neq 0$. Let ${\mathcal C}({\bm \xi})$, defined by $F$, be a plane projective model of ${\mathcal R}$. Suppose that $h\in {\mathfrak L}(D)$ and $h=G({\bm \xi})/H({\bm \xi})$ where $G,H$ are two homogeneous polynomials of the same degree in ${\overline{k(t)}}[x_0,x_1,x_2]$. In this subsection, we shall estimate $\deg(G)$ and $T(G), T(H)$ in terms of $\deg(F)$ and $T(F)$. For this, we introduce the following notations and definitions.
\begin{notation}
Let ${\mathcal C}({\bm \xi})$ be a plane projective model of ${\mathcal R}$ and $D$ a divisor.
\begin{enumerate}
\item
${\mathcal S}_{\bm \xi}(D):=\{\mbox{the centers of places in ${\rm supp}(D)$ with respect to ${\bm \xi}$}\}$.
\item
$T_{\bm \xi}(D):=\max\left\{T({\bf c}) \mid {\bf c}\in {\mathcal S}_{\bm \xi}(D)\right\}$.
\end{enumerate}
\end{notation}
\begin{define}
\label{def:intersection}
Let $G,H$ be two homogeneous polynomials in ${\overline{k(t)}}[x_0,x_1,x_2]$ of the same degree. Write ${\bm \xi}=(\xi_0,\xi_1,\xi_2)$.
\begin{itemize}
\item [$(1).$]
Define
$$
{\rm ord}_{{\mathfrak P}}(G({\bm \xi}))=\nu_{{\mathfrak P}}\left(G({\bm \xi})\right)-\deg(G)\min_{i=0}^2\{\nu_{{\mathfrak P}}(\xi_i)\}.
$$
\item [$(2).$]
Define
$$
{\rm div}_{\bm \xi}(G)=\sum {\rm ord}_{{\mathfrak P}}(G({\bm \xi})){\mathfrak P}
$$
where the sum ranges over all places of ${\mathcal R}$. Furthermore, define $${\rm div}_{\bm \xi}(G/H)={\rm div}_{\bm \xi}(G)-{\rm div}_{\bm \xi}(H).$$
\end{itemize}
\end{define}
It is easy to see that ${\rm ord}_{{\mathfrak P}}(G({\bm \xi}))\geq 0$, and ${\rm ord}_{\mathfrak P}(G({\bm \xi}))>0$ if and only if $G({\bf c})=0$ where ${\bf c}$ is the center of ${\mathfrak P}$ with respect to ${\bm \xi}$. Furthermore ${\rm ord}_{{\mathfrak P}}(G(\lambda {\bm \xi}))={\rm ord}_{{\mathfrak P}}(G({\bm \xi}))$ for all nonzero $\lambda \in {\mathcal R}$.
\begin{remark}
\label{rm:intersectioncycle}
On page 182 of \cite{fulton}, ${\rm ord}_{{\mathfrak P}}(G)$ is defined to be the order at ${\mathfrak P}$ of the image of $G$ in the valuation ring of ${\mathfrak P}$. Remark that
$${\rm ord}_{\mathfrak P}(G({\bm \xi}))=\nu_{{\mathfrak P}}(G({\bm \xi})/\xi_{i_0}^d)$$
where $d=\deg(G)$ and $\xi_{i_0}$ satisfies $\nu_{{\mathfrak P}}(\xi_{i_0})=\min_{i=0}^{2}\{\nu_{\mathfrak P}(\xi_i)\}$. Under the map sending $x_j$ to $\xi_j/\xi_{i_0}$ for all $j=0,1,2$, $G$ is sent to $G({\bm \xi})/\xi_{i_0}^d$, which lies in the valuation ring of ${\mathfrak P}$. Therefore ${\rm ord}_{\mathfrak P}$ given in Definition~\ref{def:intersection} coincides with the one given in \cite{fulton}, and ${\rm div}_{\bm \xi}(G)$ is nothing else but the intersection cycle of $F$ and $G$ (see page 119 of \cite{fulton}).
\end{remark}
The lemma below follows easily from the definition.
\begin{lemma}
\label{lm:cycle}
Suppose that $G,H\in {\overline{k(t)}}[x_0,x_1,x_2]$ are two homogeneous polynomials of the same degree. Then
\begin{enumerate}
\item
${\rm div}_{\bm \xi}\left(\frac{G}{H}\right)={\rm div}_{\bm \xi}(G)-{\rm div}_{\bm \xi}(H)={\rm div}\left(\frac{G({\bm \xi})}{H({\bm \xi})}\right)$; and
\item
$\deg({\rm div}_{\bm \xi}(G))=\deg(G)\deg(F)$.
\end{enumerate}
\end{lemma}
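As a concrete example, take ${\mathcal R}={\overline{k(t)}}(\theta)$, ${\bm \xi}=(\theta^2,\theta,1)$ and $F=x_0x_2-x_1^2$, so that $F({\bm \xi})=0$. For $G=x_1$, at the place ${\mathfrak P}_0$ with $\nu_{{\mathfrak P}_0}(\theta)=1$ one has ${\rm ord}_{{\mathfrak P}_0}(G({\bm \xi}))=1-\min\{2,1,0\}=1$; at the place ${\mathfrak P}_\infty$ with $\nu_{{\mathfrak P}_\infty}(\theta)=-1$ one has ${\rm ord}_{{\mathfrak P}_\infty}(G({\bm \xi}))=-1-(-2)=1$; and the order vanishes at every other place. Hence ${\rm div}_{\bm \xi}(x_1)={\mathfrak P}_0+{\mathfrak P}_\infty$, whose degree is $2=\deg(x_1)\deg(F)$, in accordance with the second assertion of the lemma.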
\begin{lemma}
\label{lm:generallines}
Suppose that ${\bf c}=(c_0,c_1,1)$ is an ordinary singular point of $F=0$ of multiplicity $r$, $n=\deg(F)$ and $S$ is a finite set of points of $F=0$.
\begin{enumerate}
\item Let $L_\lambda=x_0-c_0x_2-\lambda(x_1-c_1x_2)$. Then for all but a finite number of $\lambda$, $L_\lambda=0$ intersects $F=0$ in $n-r$ distinct points other than the points in $\{{\bf c}\}\cup S$.
\item Write
$$
F=F_r(x_0-c_0x_2, x_1-c_1x_2)x_2^{n-r}+\dots+ F_n(x_0-c_0x_2,x_1-c_1x_2)
$$
where $F_i(y_0,y_1)$ is a homogeneous polynomial in $y_0,y_1$ of degree $i$. Assume that $F_r(1,0)F_r(0,1)F_n(1,0)F_n(0,1)\neq 0$. Let
$$G_\lambda=\alpha (x_1-c_1x_2)x_2-(x_0-c_0x_2)x_2-\lambda (x_0-c_0x_2)(x_1-c_1x_2)$$
where $\alpha\in {\overline{k(t)}}\setminus\{0\}$ satisfies that $F_r(\alpha,1)=0$. Then for all but a finite number of $\lambda$, $G_\lambda=0$ intersects $F=0$ in $2n-r-1$ distinct points other than the points in $\{{\bf c}\}\cup S$.
\end{enumerate}
\end{lemma}
\begin{proof}
$1.$ Under the projective change of coordinates ${\mathcal L}$ with ${\mathcal L}(x_0)=x_0+c_0x_2, {\mathcal L}(x_1)=x_1+c_1x_2$ and ${\mathcal L}(x_2)=x_2$, we may assume that ${\bf c}=(0,0,1)$. Set $L_\lambda=x_0-\lambda x_1$ where $\lambda\in {\overline{k(t)}}$. Substituting $x_0=\lambda x_1$ into $F$ yields that
$$
x_1^r\left(F_r(\lambda, 1)x_2^{n-r}+F_{r+1}(\lambda, 1)x_1x_2^{n-r-1}+\dots+F_n(\lambda, 1)x_1^{n-r}\right).
$$
Set $H_\lambda(z)=F_r(\lambda,1)z^{n-r}+\dots+F_n(\lambda,1)$. For every root $\gamma$ of $H_\lambda(z)=0$, one sees that $(\lambda, 1,\gamma)$ is a common point of $L_\lambda=0$ and $F=0$ other than ${\bf c}$. Moreover, if $H_\lambda(z)=0$ has $n-r$ distinct roots then $L_\lambda=0$ intersects $F=0$ in $n-r$ distinct points other than ${\bf c}$. So it suffices to prove that for all but a finite number of $\lambda$, $H_\lambda(z)=0$ has $n-r$ distinct roots. Note that substituting $x_0=\lambda x_1$ into $\partial F/\partial x_2$ yields
$$
x_1^r \sum_{i=r}^n (n-i) F_i(\lambda,1) x_2^{n-i-1} x_1^{i-r}.
$$
From this, one sees that if $\gamma$ is a common root of $H_\lambda(z)=0$ and $\partial H_\lambda(z)/\partial z=0$ then $(\lambda,1,\gamma)$ is a common point of $F=0$ and $\partial F/\partial x_2=0$. Since $F=0$ and $\partial F/\partial x_2=0$ have only finitely many common points, there are only finitely many $\lambda$ such that $H_\lambda(z)=0$ has multiple roots. In other words, there is only a finite number of $\lambda$ such that $L_\lambda=0$ intersects $F=0$ in fewer than $n-r$ distinct points other than ${\bf c}$. It remains to show that there are only finitely many $\lambda$ such that $(S\setminus \{{\bf c}\})\cap \bV(L_\lambda)\neq \emptyset$.
Assume that $(a_0,a_1,a_2)\in S\setminus\{{\bf c}\}$ lies on the line $L_\lambda=0$. If $a_1-c_1a_2\neq 0$ then
$$\lambda=(a_0-c_0a_2)/(a_1-c_1a_2).$$ If $a_1-c_1a_2=0$ then $a_0-c_0a_2=0$ and thus $(a_0,a_1,a_2)=a_2(c_0,c_1,1)$. In other words, $(a_0,a_1,a_2)=(0,0,0)$ or ${\bf c}$, which is impossible.
$2.$ Similarly, we may assume that ${\bf c}=(0,0,1)$. Then
\begin{align*}
G_\lambda & =\alpha x_1x_2-x_0x_2 -\lambda x_0x_1,\\
F&=F_r(x_0,x_1)x_2^{n-r}+\dots+F_n(x_0,x_1).
\end{align*}
Applying the standard quadratic transformation ${\mathcal Q}$ to $G_\lambda$ and $F$, one obtains that
\begin{align*}
G_\lambda^{{\mathcal Q}}&=\alpha x_0-x_1-\lambda x_2,\\
F(x_1x_2,x_0x_2,x_0x_1)&=x_2^r\left(F_r(x_1,x_0)(x_0x_1)^{n-r}+\dots+F_n(x_1,x_0)x_2^{n-r}\right).
\end{align*}
Since ${\bf c}$ is an ordinary singular point and $F_r(1,0)F_r(0,1)\neq 0$, by a result in \cite{fulton}, $(1,\alpha,0)$ is a simple point of $F^{{\mathcal Q}}=0$. Moreover, as $F_n(1,0)F_n(0,1)\neq 0$, neither $(1,0,0)$ nor $(0,1,0)$ is a point of $F^{{\mathcal Q}}=0$ and so $\deg(F^{{\mathcal Q}})=2n-r$. Thus
$$
F^{\mathcal Q}=F_r(x_1,x_0)(x_0x_1)^{n-r}+\dots+F_n(x_1,x_0)x_2^{n-r}.
$$
For every common point $(\gamma_0,\gamma_1,\gamma_2)$ of $G_\lambda^{{\mathcal Q}}=0$ and $F^{{\mathcal Q}}=0$ with $\lambda\gamma_2\neq 0$,
$(\gamma_1\gamma_2,\gamma_0\gamma_2,\gamma_0\gamma_1)\neq (0,0,0)$ and then it is a common point of $G_\lambda=0$ and $F=0$ other than ${\bf c}$. Therefore it suffices to show that for all but a finite number of $\lambda$, $G_\lambda^{{\mathcal Q}}=0$ and $F^{{\mathcal Q}}=0$ have $2n-r-1$ distinct common points $(\gamma_0,\gamma_1,\gamma_2)$ with $\gamma_2\neq 0$. Let ${\mathcal L}$ be the projective change of coordinates such that ${\mathcal L}(x_0)=(x_0+x_1)/\alpha,{\mathcal L}(x_1)=x_1,{\mathcal L}(x_2)=x_2$. Then
$(G_\lambda^{{\mathcal Q}})^{\mathcal L}=x_0-\lambda x_2$. Note that ${\mathcal L}^{-1}((1,\alpha,0))=(0,\alpha,0)$ which is a simple point of $(F^{{\mathcal Q}})^{\mathcal L}=0$. Thus
$$
(F^{{\mathcal Q}})^{\mathcal L}=\tilde{F}_1(x_0,x_2)x_1^{2n-r-1}+\tilde{F}_2(x_0,x_2)x_1^{2n-r-2}+\dots+\tilde{F}_{2n-r}(x_0,x_2).
$$
By $(1)$, for all but a finite number of $\lambda$, $(G_\lambda^{{\mathcal Q}})^{\mathcal L}=0$ intersects $(F^{{\mathcal Q}})^{\mathcal L}=0$ in $2n-r-1$ distinct points $(\gamma_0',\gamma_1',\gamma_2')$ with $\gamma_2'\neq 0$. Remark that if $(\gamma'_0,\gamma_1',\gamma_2')$ is a common point of $(G_\lambda^{{\mathcal Q}})^{\mathcal L}=0$ and $(F^{{\mathcal Q}})^{\mathcal L}=0$ with $\gamma_2'\neq 0$ then $((\gamma_0'+\gamma_1')/\alpha,\gamma_1',\gamma_2')$ is a common point of $G_\lambda^{{\mathcal Q}}=0$ and $F^{{\mathcal Q}}=0$ with $\gamma_2'\neq 0$. These imply that for all but a finite number of $\lambda$, $G_\lambda^{{\mathcal Q}}=0$ intersects $F^{{\mathcal Q}}=0$ in $2n-r-1$ distinct points $(\gamma_0,\gamma_1,\gamma_2)$ with $\gamma_2\neq 0$.
Finally, we need to prove that there are only finitely many $\lambda$ such that $(S\setminus \{{\bf c}\})\cap \bV(G_\lambda)\neq \emptyset$. Assume that ${\bf a}=(a_0,a_1,a_2)\in S\setminus\{{\bf c}\}$ lies on $G_\lambda=0$. We claim that $(a_0-c_0a_2)(a_1-c_1a_2)\neq 0$. Suppose on the contrary that $(a_0-c_0a_2)(a_1-c_1a_2)=0$. Then by $G_\lambda({\bf a})=0$, one sees that either $a_2=0$ or both $a_0-c_0a_2$ and $a_1-c_1a_2$ are zero. This implies that ${\bf a}$ must be one of the three points $(1,0,0), (0,1,0), a_2(c_0,c_1,1)$. This is impossible, and so our claim holds. It follows from the claim that $\lambda$ is uniquely determined by ${\bf a}$.
\end{proof}
\begin{prop}\label{prop:simplification}
Suppose that $F=0$ has only ordinary singularities and $D$ is an effective divisor in ${\mathcal R}$. Let $D'$ be a divisor in ${\mathcal R}$.
\begin{enumerate}
\item
Assume further that $D=\sum_{i=1}^r {\mathfrak P}_i$ where all ${\mathfrak P}_i$ have the same center which is a point of $F=0$ with multiplicity $r$.
Then there is a linear homogeneous polynomial $G$ in ${\overline{k(t)}}[x_0,x_1,x_2]$ such that
$$
{\rm div}_{\bm \xi}(G)=D+A
$$
where $A$ is a very simple and effective divisor of degree $n-r$, ${\rm supp}(A)\cap ({\rm supp}(D')\cup\{{\mathfrak P}_1,\dots,{\mathfrak P}_r\})=\emptyset$, and
$$
T(G)\leq T_{\bm \xi} (D), \,\,T_{\bm \xi}(A)\leq 2(T(F)+nT_{\bm \xi}(D)).$$
\item Assume that $D={\mathfrak P}$ where the center of ${\mathfrak P}$ is a singular point of $F=0$. Then there are two homogeneous polynomials $G,H\in {\overline{k(t)}}[x_0,x_1,x_2]$ of degree two such that
$$
{\rm div}_{\bm \xi}(G/H)=D+A
$$
where $A$ is a very simple divisor, ${\rm supp}(A)\cap ({\rm supp}(D')\cup \{{\mathfrak P}\})=\emptyset$ and
$$T(G), T(H)\leq T(F)+nT_{\bm \xi}(D),\,\,T_{\bm \xi}(A)\leq (2n+4)T(F)+2n^2T_{\bm \xi}(D).$$
\end{enumerate}
\end{prop}
\begin{proof}
$1.$
Suppose ${\bf c}=(c_0,c_1,c_2)$ is the common center of ${\mathfrak P}_1,\dots,{\mathfrak P}_r$ with respect to ${\bm \xi}$. Without loss of generality, we assume that $c_2\neq 0$ and ${\bf c}=(c_0,c_1,1)$. Set $L_\lambda=x_0-c_0x_2-\lambda(x_1-c_1x_2)$.
Due to Lemma~\ref{lm:generallines}, for all but a finite number of $\lambda$, $L_\lambda=0$ intersects $F=0$ in $n-r$ distinct points other than the points in $\{{\bf c}\}\cup {\mathcal S}_{\bm \xi}(D')$. Let $\lambda'\in k$ be such that $L_{\lambda'}=0$ satisfies the above condition. Then
$$
{\rm div}_{{\bm \xi}}(L_{\lambda'})=\sum_{i=1}^{r} {\mathfrak P}_{i}+A
$$
where $A$ is an effective divisor of degree $n-r$ and ${\rm supp}(A)\cap ({\rm supp}(D')\cup\{{\mathfrak P}_1,\dots,{\mathfrak P}_r\})=\emptyset$. It is clear that $A$ is very simple since $L_{\lambda'}=0$ intersects $F=0$ in $n-r$ distinct points other than ${\bf c}$. Finally, one easily sees that $T(L_{\lambda'})\leq T({\bf c})=T_{\bm \xi}(D)$. As the centers of the places in ${\rm supp}(A)$ are intersection points of $F=0$ and $L_{\lambda'}=0$, $T_{\bm \xi}(A)\leq 2(T(F)+nT_{\bm \xi}(D))$ by Proposition~\ref{prop:intersection}.
$2.$ Suppose that ${\bf c}=(c_0,c_1,c_2)$ is the center of ${\mathfrak P}$ with respect to ${\bm \xi}$, and ${\bf c}$ is of multiplicity $r>0$. Since ${\bf c}$ is an ordinary singular point, there are exactly $r$ places of ${\mathcal R}$ with ${\bf c}$ as the center with respect to ${\bm \xi}$. Denote these $r$ places by ${\mathfrak P}_1={\mathfrak P}, \dots, {\mathfrak P}_r$. Without loss of generality, we may assume that $c_2\neq 0$ and ${\bf c}=(c_0,c_1,1)$.
Write
$$
F=F_r(x_0-c_0x_2, x_1-c_1x_2)x_2^{n-r}+\dots+ F_n(x_0-c_0x_2,x_1-c_1x_2)
$$
where $F_i(y_0,y_1)$ is a homogeneous polynomial of degree $i$. Choose a projective change of coordinates ${\mathcal L}$ with $M_{\mathcal L}={\rm diag}(B,1), B\in {\rm GL}_2(k)$
such that
$$
F^{{\mathcal L}}=\tilde{F}_r(x_0-\tilde{c}_0x_2,x_1-\tilde{c}_1x_2)x_2^{n-r}+\dots+\tilde{F}_n(x_0-\tilde{c}_0x_2,x_1-\tilde{c}_1x_2)
$$
satisfies that $\tilde{F}_r(1,0) \tilde{F}_r(0,1)\tilde{F}_n(1,0)\tilde{F}_n(0,1)\neq 0$, where $\tilde{F}_i=F_i((y_0,y_1)B)$ and $(\tilde{c}_0,\tilde{c}_1)=(c_0,c_1)B^{-1}$. By Lemma~\ref{lm:lineartransformation}, $T(F^{\mathcal L})\leq T(F)$.
Denote
$$\tilde{{\bm \xi}}=(\tilde{\xi}_0,\tilde{\xi}_1,\tilde{\xi}_2)={\bm \xi} M_{\mathcal L}^{-1}.$$
Then $\tilde{{\bf c}}=(\tilde{c}_0,\tilde{c}_1,1)={\bf c} M_{\mathcal L}^{-1}$ is the center of ${\mathfrak P}_1$ with respect to $\tilde{{\bm \xi}}$. For $i=0,1$, write
$$\tilde{\xi}_i/\tilde{\xi}_2=\tilde{c}_i+\alpha_i u^d+u^{d+1}\eta_i$$
where $u$ is a local uniformizer of ${\mathfrak P}_1$, $d\geq 1$, $\alpha_i\in {\overline{k(t)}}$ not all zero, and $\nu_{{\mathfrak P}_1}(\eta_i)\geq 0$. Furthermore,
$$
0=F^{\mathcal L}(\tilde{\xi}_0/\tilde{\xi}_2,\tilde{\xi}_1/\tilde{\xi}_2,1)=u^{dr} \tilde{F}_r(\alpha_0,\alpha_1)+u^{dr+1}\beta
$$
where $\nu_{{\mathfrak P}_1}(\beta)\geq 0$. This implies that $\tilde{F}_r(\alpha_0,\alpha_1)=0$. Since $\tilde{F}_r(0,1)\tilde{F}_r(1,0)\neq 0$, $\alpha_0\alpha_1\neq 0$.
Set $\bar{\alpha}=\alpha_0/\alpha_1$ and
$$\tilde{G}_\lambda=\bar{\alpha} (x_1-\tilde{c}_1x_2)x_2-(x_0-\tilde{c}_0x_2)x_2-\lambda (x_0-\tilde{c}_0x_2)(x_1-\tilde{c}_1x_2).$$
Due to Lemma~\ref{lm:generallines}, for all but a finite number of $\lambda$, $\tilde{G}_\lambda=0$ intersects $F^{\mathcal L}=0$ in $2n-r-1$ distinct points other than the points in $\{\tilde{{\bf c}}\}\cup {\mathcal S}_{\tilde{{\bm \xi}}}(D')$. Let $A_\lambda$ be the very simple divisor consisting of the $2n-r-1$ places whose centers with respect to $\tilde{{\bm \xi}}$ are the intersection points of $\tilde{G}_\lambda=0$ and $F^{\mathcal L}=0$ other than $\tilde{{\bf c}}$. Then ${\rm supp}(A_\lambda) \cap ({\rm supp}(D')\cup \{{\mathfrak P}_1,\dots,{\mathfrak P}_r\})=\emptyset$.
We claim that for the above $\tilde{G}_\lambda$,
$$
{\rm div}_{\tilde{{\bm \xi}}}(\tilde{G}_\lambda)={\mathfrak P}_1+\sum_{i=1}^r {\mathfrak P}_i +A_\lambda.
$$
Note that
\begin{align*}
\frac{ \tilde{G}_\lambda(\tilde{{\bm \xi}})}{\tilde{\xi}_2^2}&=\bar{\alpha}\left(\alpha_1 u^d+u^{d+1} \eta_1\right)-(\alpha_0 u^d+u^{d+1}\eta_0)\\
&-\lambda (\alpha_0 u^d+u^{d+1}\eta_0)(\alpha_1 u^d+u^{d+1} \eta_1)=u^{d+1}\gamma
\end{align*}
where $\nu_{{\mathfrak P}_1}(\gamma)\geq 0$. This implies that ${\rm ord}_{{\mathfrak P}_1}(\tilde{G}_\lambda(\tilde{{\bm \xi}}))\geq d+1\geq 2$. Hence
$${\rm div}_{\tilde{{\bm \xi}}}(\tilde{G}_\lambda)\geq {\mathfrak P}_1+\sum_{i=1}^r {\mathfrak P}_i +A_\lambda.$$
On the other hand, since $\deg({\rm div}_{\tilde{{\bm \xi}}}(\tilde{G}_\lambda))=2n$, one has that
$${\rm div}_{\tilde{{\bm \xi}}}(\tilde{G}_\lambda)={\mathfrak P}_1+\sum_{i=1}^r {\mathfrak P}_i +A_\lambda.$$
This proves our claim. Now set $G_\lambda=\tilde{G}_\lambda^{{\mathcal L}^{-1}}$. As $\tilde{\xi}_2=\xi_2$, one sees that
$$
\min_j\{\nu_{{\mathfrak P}_i}(\tilde{\xi}_j)\}=\nu_{{\mathfrak P}_i}(\tilde{\xi}_2)=\nu_{{\mathfrak P}_i}(\xi_2)=\min_j\{\nu_{{\mathfrak P}_i}(\xi_j)\}.
$$
This implies that
\begin{align*}
{\rm ord}_{{\mathfrak P}_i}(G_\lambda({\bm \xi}))&=\nu_{{\mathfrak P}_i}\left(G_\lambda({\bm \xi})\right)-2\min_i \{\nu_{{\mathfrak P}_i}(\xi_i)\}\\
&=\nu_{{\mathfrak P}_i}\left(\tilde{G}_\lambda^{{\mathcal L}^{-1}}({\bm \xi})\right)-2\min_i \{\nu_{{\mathfrak P}_i}(\tilde{\xi}_i)\}\\
&=\nu_{{\mathfrak P}_i}\left(\tilde{G}_\lambda(\tilde{{\bm \xi}})\right)-2\min_i \{\nu_{{\mathfrak P}_i}(\tilde{\xi}_i)\}={\rm ord}_{{\mathfrak P}_i}(\tilde{G}_\lambda(\tilde{{\bm \xi}})).
\end{align*}
Therefore ${\rm div}_{\bm \xi}(G_\lambda)={\mathfrak P}_1+\sum_{i=1}^r {\mathfrak P}_i +A_\lambda$. Note that we can choose $\lambda\in k$. For such $\lambda$, one has that
$$
T(G_\lambda)\leq T(\tilde{G}_\lambda)\leq 2T({\bf c})+T((\alpha_0,\alpha_1,0))\leq 2T({\bf c})+T(\tilde{F}_r).
$$
Since $\deg(\tilde{F}_r)=r\geq 2$, one sees that $T(\tilde{F}_r)\leq T(F)+(n-2)T({\bf c})$. This implies that $T(G_\lambda)\leq T(F)+nT({\bf c})$
and $$T_{\bm \xi}(A_\lambda)\leq 2(2T(F)+nT(G_\lambda))\leq (2n+4)T(F)+2n^2T({\bf c}).$$
Now applying $1.$ to the case that $D=\sum_{i=1}^r {\mathfrak P}_i$ and $D'=A_\lambda$, one gets a linear homogeneous polynomial $L_1$ such that
${\rm div}_{\bm \xi}(L_1)=\sum_{i=1}^r {\mathfrak P}_i+C$, where $C$ is a very simple divisor satisfying ${\rm supp}(C)\cap ({\rm supp}(A_\lambda)\cup \{{\mathfrak P}_1,\dots,{\mathfrak P}_r\})=\emptyset$. Moreover $T(L_1)\leq T_{\bm \xi}({\mathfrak P}_1)=T({\bf c})$. Let $L_2$ be a linear homogeneous polynomial in $k[x_0,x_1,x_2]$ such that $L_2=0$ intersects $F=0$ in $n$ distinct points other than the points in ${\mathcal S}_{\bm \xi}(A_\lambda+C)$. For such $L_2$, one has that ${\rm div}_{\bm \xi}(L_2)$ is a very simple divisor satisfying ${\rm supp}({\rm div}_{\bm \xi}(L_2))\cap {\rm supp}(A_\lambda+C+D')=\emptyset$. Set $G=G_\lambda$, $H=L_1L_2$ and $A=A_\lambda-C-{\rm div}_{\bm \xi}(L_2)$. Then $T(H)\leq T({\bf c})$ and we obtain two polynomials $G,H$ as required. Note that $T_{\bm \xi}(C)\leq 2(T(F)+nT_{\bm \xi}(D))$ and $T_{\bm \xi}({\rm div}_{\bm \xi}(L_2))\leq 2T(F)$. Hence
$
T(G_\lambda), T(H)\leq T(F)+nT({\bf c}) \,\,\mbox{and}\,\,T_{\bm \xi}(A) \leq (2n+4)T(F)+2n^2T({\bf c}).
$
\end{proof}
\begin{define}
\label{def:adjoint}
Suppose that $F$ has only ordinary singularities, say ${\bf q}_1,\dots,{\bf q}_\ell$, and $r_i$ is the multiplicity of ${\bf q}_i$. Suppose further that for each $i=1,\dots,\ell$, ${\mathfrak Q}_{i,1}, \dots, {\mathfrak Q}_{i,r_i}$ are all places of ${\mathcal R}$ with ${\bf q}_i$ as center with respect to ${\bm \xi}$.
Set
$$
E_{\bm \xi}=\sum_{i=1}^\ell (r_i-1)\sum_{j=1}^{r_i}{\mathfrak Q}_{i,j}.
$$
A homogeneous polynomial $G$ such that ${\rm div}_{\bm \xi}(G)\geq E_{\bm \xi}$ is called an {\em adjoint} of $F$.
\end{define}
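For example, if $F=0$ has a unique singularity, an ordinary node ${\bf q}$ (so $\ell=1$ and $r_1=2$), then $E_{\bm \xi}={\mathfrak Q}_{1,1}+{\mathfrak Q}_{1,2}$, and every homogeneous polynomial $G$ with $G({\bf q})=0$, for instance any line through ${\bf q}$, satisfies ${\rm ord}_{{\mathfrak Q}_{1,j}}(G({\bm \xi}))>0$ for $j=1,2$ and is therefore an adjoint of $F$.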
We have the following two corollaries of Proposition~\ref{prop:simplification}.
\begin{corollary}
\label{cor:simplification1}
Suppose that $D$ is a simple and effective divisor in ${\mathcal R}$. Let $D'$ be a divisor in ${\mathcal R}$. Then there is a homogeneous polynomial $G$ of degree not greater than $\deg(D)+(n-1)^2/2$ such that
$$
{\rm div}_{\bm \xi}(G)=D+E_{\bm \xi}+A
$$
where $A$ is a very simple and effective divisor of degree not greater than
$$\deg(D)(n-1)+n(n-1)^2/2$$
such that ${\rm supp}(A)\cap ({\rm supp}(D+E_{\bm \xi})\cup {\rm supp}(D'))=\emptyset$ and $T_{\bm \xi}(A)\leq 2(T(F)+nT_{\bm \xi}(D+E_{\bm \xi}))$. Moreover
$$T(G)\leq \left(\deg(D)+(n-1)^2/2\right)T_{\bm \xi}(D+E_{\bm \xi}).$$
\end{corollary}
\begin{proof}
Denote $\mu=\deg(D)+\sum_{i=1}^\ell (r_i-1)$ where $r_i$ is given as in Definition~\ref{def:adjoint}. Since $\sum_{i=1}^\ell (r_i-1)\leq (n-1)^2/2$, one has $\mu\leq \deg(D)+(n-1)^2/2$.
Write
$$D+E_{\bm \xi}=\sum_{i=1}^{\deg(D)}{\mathfrak P}_i +\sum_{s=\deg(D)+1}^{\mu} D_s$$
where the center of each ${\mathfrak P}_i$ is a simple point of $F=0$ and $D_s=\sum_{j=1}^{r_i}{\mathfrak Q}_{i,j}$ for some $1\leq i \leq \ell$. Applying Proposition~\ref{prop:simplification} successively to the ${\mathfrak P}_i$ and $D_s$, one obtains $\mu$ linear homogeneous polynomials $L_1,\dots, L_\mu$ such that ${\rm div}_{\bm \xi}(L_i)={\mathfrak P}_i+A_i$ if $i\leq \deg(D)$, or ${\rm div}_{\bm \xi}(L_i)=D_i+A_i$ if $i>\deg(D)$, where $A_i$ is a very simple and effective divisor such that $${\rm supp}(A_i)\cap ({\rm supp}(D')\cup {\rm supp}(D+E_{\bm \xi}+A_1+\dots+A_{i-1}))=\emptyset.$$
Set $G=\prod_{i=1}^\mu L_i$ and $A=\sum_{i=1}^\mu A_i$. Then one has that
$${\rm div}_{\bm \xi}(G)=D+E_{\bm \xi}+A.$$
Moreover by Proposition~\ref{prop:simplification}, $T(L_i)\leq T_{\bm \xi}(D+E_{\bm \xi})$ for all $i=1,\dots,\mu$ and then Proposition~\ref{prop:height2} implies that $T(G)\leq \mu T_{\bm \xi}(D+E_{\bm \xi})$. It is obvious that $T_{\bm \xi}(A)$ is not greater than $2(T(F)+nT_{\bm \xi}(D+E_{\bm \xi}))$ because so is $T_{\bm \xi}(A_i)$ for all $i=1,\dots,\mu$.
\end{proof}
\begin{corollary}
\label{cor:simplification2}
Suppose that $D, D'$ are two divisors in ${\mathcal R}$. Then there are two homogeneous polynomials $G,H$ of the same degree $\leq 2\deg(D^{+}+D^{-})$ such that
$
\hat{D} ={\rm div}_{\bm \xi}(G/H)+D
$
is very simple and ${\rm supp}({\rm div}_{\bm \xi}(G/H)+D)\cap {\rm supp}(D')=\emptyset$. Moreover $\deg(\hat{D}^{+}), \deg(\hat{D}^{-})\leq 2n(\deg(D^{+}+D^{-}))$ and
\begin{align*}
T_{\bm \xi}({\rm div}_{\bm \xi}(G/H)+D)& \leq (2n+4)T(F)+2n^2T_{\bm \xi}(D)\\
T(G), T(H)&\leq \deg(D^{+}+D^{-})(T(F)+nT_{\bm \xi}(D)),
\end{align*}
where $n=\deg(F)$.
\end{corollary}
\begin{proof}
We first show the case that $-D$ is effective. Denote $\mu=\deg(-D)$ and write
$$
-D=\sum_{i=1}^s {\mathfrak P}_i + \sum_{i=s+1}^\mu {\mathfrak Q}_i
$$
where the center of ${\mathfrak P}_i$ (resp. ${\mathfrak Q}_j$) with respect to ${\bm \xi}$ is a simple (resp. singular) point of $F=0$. Applying Proposition~\ref{prop:simplification} to $\sum_{i=1}^s {\mathfrak P}_i$ yields a homogeneous polynomial $G_0$ of degree $s$ such that
${\rm div}_{\bm \xi}(G_0)=\sum_{i=1}^s {\mathfrak P}_i +A_0$ where $A_0$ is a very simple and effective divisor such that ${\rm supp}(A_0)\cap ({\rm supp}(D)\cup {\rm supp}(D'))=\emptyset$. Moreover $\deg(A_0)=ns-\deg(\sum_{i=1}^s{\mathfrak P}_i)$. Construct $s$ linear homogeneous polynomials $L_1,\dots,L_s$ in $k[x_0,x_1,x_2]$ such that ${\rm div}_{\bm \xi}(L_1\cdots L_s)$ is very simple and ${\rm supp}({\rm div}_{\bm \xi}(L_1\cdots L_s))\cap ({\rm supp}({\rm div}_{\bm \xi}(G_0))\cup {\rm supp}(D'))=\emptyset$. It is easy to see that $\deg({\rm div}_{\bm \xi}(L_1\cdots L_s))=ns$. Set $H_0=L_1\cdots L_s$. By Proposition~\ref{prop:simplification} again, one obtains $\mu-s$ pairs $(G_1, H_1), \dots, (G_{\mu-s}, H_{\mu-s})$ of homogeneous polynomials of degree two such that ${\rm div}_{\bm \xi}(G_i/H_i)={\mathfrak Q}_i+A_i$ where $A_i$ is a very simple divisor such that
$${\rm supp}(A_i)\cap ({\rm supp}(D')\cup {\rm supp}(D+A_0+\cdots+A_{i-1}))=\emptyset$$
and $\deg(A_i^{+})=2n-\deg({\mathfrak Q}_i)$, $\deg(A_i^{-})=2n$.
Set $\tilde{G}=G_0G_1\cdots G_{\mu-s}$ and $\tilde{H}=H_0H_1\cdots H_{\mu-s}$. Then
$$\hat{D}={\rm div}_{\bm \xi}(\tilde{G}/\tilde{H})+D=A_0+A_1+\dots+A_{\mu-s}-{\rm div}_{\bm \xi}(H_0)$$
which is very simple. It is clear that $\deg(\tilde{G})=\deg(\tilde{H})\leq 2\deg(-D)$, and by Proposition~\ref{prop:simplification}
$$T(\tilde{G})\leq T(G_0)+\sum_{i=1}^{\mu-s}T(G_i)\leq \mu (T(F)+nT_{\bm \xi}(D)).$$
Similarly, $T(\tilde{H})\leq \mu (T(F)+nT_{\bm \xi}(D))$. Furthermore, one has that
$$T_{\bm \xi}\left({\rm div}_{\bm \xi}(\tilde{G}/\tilde{H})+D\right)\leq (2n+4)T(F)+2n^2T_{\bm \xi}(D)$$
and
$$
\deg(\hat{D}^{+})=(2\mu-s)n-\deg(-D)\leq 2n\mu, \, \deg(\hat{D}^{-})=(2\mu-s)n\leq 2n\mu.
$$
For the general case, write $D=D^{+}-D^{-}$. The previous discussion implies that we can obtain $\tilde{G}_i, \tilde{H}_i$ such that
$
{\rm div}_{\bm \xi}(\tilde{G}_1/\tilde{H_1})-D^{+}
$ and ${\rm div}_{\bm \xi}(\tilde{G}_2/\tilde{H_2})-D^{-}$ are very simple. Moreover
\begin{align*}
{\rm supp}\left({\rm div}_{\bm \xi} (\tilde{G}_1/\tilde{H_1})-D^{+}\right)\bigcap {\rm supp}\left({\rm div}_{\bm \xi}(\tilde{G}_2/\tilde{H_2})-D^{-}\right)&=
\emptyset,\\
{\rm supp}\left({\rm div}_{\bm \xi}(\tilde{G}_2/\tilde{H_2})-D^{-}+{\rm div}_{\bm \xi}(\tilde{G}_1/\tilde{H_1})-D^{+}\right)\bigcap \left({\rm supp}(D)\cup {\rm supp}(D')\right)&=\emptyset.
\end{align*}
Set $G=\tilde{G}_2\tilde{H}_1$ and $H=\tilde{G}_1\tilde{H}_2$. Then ${\rm div}_{\bm \xi}(G/H)+D$ satisfies the required condition. Furthermore
$\deg(\hat{D}^{+}), \deg(\hat{D}^{-})\leq 2n(\deg(D^{+}+D^{-}))$ and
\begin{align*}
T(G), T(H)&\leq \deg(D^{+}+D^{-})(T(F)+nT_{\bm \xi}(D)),\\
T_{\bm \xi}({\rm div}_{\bm \xi}(G/H)+D)&\leq (2n+4)T(F)+2n^2T_{\bm \xi}(D).
\end{align*}
\end{proof}
Now we are ready to prove the main results of this section. Let us start with two lemmas.
\begin{lemma}
\label{lm:riemannrochspace}
Suppose that $F=0$ has only ordinary singularities and $D$ is a divisor in ${\mathcal R}$. Let $H$ be a homogeneous polynomial in ${\overline{k(t)}}[x_0,x_1,x_2]$ such that ${\rm div}_{\bm \xi}(H)=D^{+}+E_{\bm \xi}+A$, where $A$ is an effective divisor. Then
\begin{equation}
\label{eq:riemannrochspace}
{\mathfrak L}(D)=\left\{\left.\frac{G({\bm \xi})}{H({\bm \xi})} \,\right | \,\begin{array}{c} \mbox{$G$ is a homogeneous polynomial of degree $\deg(H)$}\\
\mbox{with ${\rm div}_{\bm \xi}(G)\geq D^{-}+E_{\bm \xi}+A$}
\end{array}\right\}.
\end{equation}
\end{lemma}
\begin{proof}
Note that ${\rm div}(G({\bm \xi})/H({\bm \xi}))={\rm div}_{\bm \xi}(G)-{\rm div}_{\bm \xi}(H)$.
It is obvious that the right hand side of (\ref{eq:riemannrochspace}) is a subspace of ${\mathfrak L}(D)$.
Suppose that $h\in {\mathfrak L}(D)\setminus\{0\}$, i.e. $D'={\rm div}(h)+D$ is effective. Then
$${\rm div}_{\bm \xi}(H)+{\rm div}(h)=D^{-}+D'+E_{\bm \xi}+A.$$
By the Residue Theorem (see \cite{fulton}), there is a homogeneous polynomial $G$ of degree $\deg(H)$ such that
$$
{\rm div}_{\bm \xi}(G)=D^{-}+D'+E_{\bm \xi}+A\geq D^{-}+E_{\bm \xi}+A.
$$
One sees that
$$
{\rm div}(hH({\bm \xi})/G({\bm \xi}))={\rm div}(h)+{\rm div}_{\bm \xi}(H/G)={\rm div}(h)+{\rm div}_{\bm \xi}(H)-{\rm div}_{\bm \xi}(G)=0.
$$
Thus $hH({\bm \xi})/G({\bm \xi})\in {\overline{k(t)}}$, i.e. $h$ belongs to the right hand side of (\ref{eq:riemannrochspace}).
\end{proof}
\begin{lemma}
\label{lm:linearsystem}
Assume that $M=(a_{i,j})$ is an $l \times m $ matrix with $a_{i,j}\in {\overline{k(t)}}$. Assume further that for each place ${\mathfrak p}$ of $k(t, a_{1,1},\dots,a_{l,m})$,
$
-\nu_{\mathfrak p}(a_{i,j})\leq m_{\mathfrak p}
$
where $m_{\mathfrak p}\geq 0$ and $m_{\mathfrak p}=0$ for all but finitely many ${\mathfrak p}$.
Then there is a basis $B$ of the solution space of $MY=0$ satisfying that
$$T({\bf b})\leq \frac{ \min \{l,m\} \sum_{{\mathfrak p}} m_{\mathfrak p}}{[k(t,a_{1,1},\dots,a_{l,m}):k(t)]}$$
for all ${\bf b}\in B$.
\end{lemma}
\begin{proof}
Assume that $r={\rm rank}(M)$. Then $r\leq \min\{l,m\}$. Without loss of generality, we may assume that the first $r$ rows of $M$ are linearly independent and denote by $\tilde{M}$ the matrix formed by them.
Then the solution space of $\tilde{M}Y=0$ is the same as that of $MY=0$. Hence it suffices to consider the system $\tilde{M}Y=0$. We may further assume that the matrix $\tilde{M}_1$ formed by the first $r$ columns of $\tilde{M}$ is invertible. For every $i=1,\dots,r$ and $j=r+1,\dots,m$, set $d_{i,j}$ to be the determinant of the matrix obtained from $\tilde{M}_1$ by replacing the $i$-th column of $\tilde{M}_1$ by the $j$-th column of $\tilde{M}$. For each $j=r+1,\dots, m$, denote
$$
{\bf c}_j=(d_{1,j},\dots,d_{r,j}, 0,\dots, 0,\underbrace{-\det(\tilde{M}_1)}_{j},0,\dots,0)^t
$$
where $(\cdot)^t$ denotes the transpose of a vector.
Then by Cramer's rule, the ${\bf c}_j$ are solutions of $\tilde{M}Y=0$ and thus they form a basis of the solution space of $\tilde{M}Y=0$. Note that $d_{i,j}$ as well as $\det(\tilde{M}_1)$ is an integer combination of the monomials in the entries of $\tilde{M}$ of total degree $r$. So for all $i=1,\dots,r$ and $j=r+1,\dots,m$,
$$
-\nu_{\mathfrak p}(\det(\tilde{M}_1)), -\nu_{\mathfrak p}(d_{i,j})\leq r m_{\mathfrak p} \leq \min\{l,m\} m_{\mathfrak p} $$
where ${\mathfrak p}$ is a place of $k(t,a_{1,1},\dots,a_{l,m})$. This together with Remark~\ref{rem:heights} implies the lemma.
\end{proof}
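To illustrate the construction in the proof on a toy instance, take $l=1$, $m=2$ and $M=(1\;\; t)$, so $r=1$, $\tilde{M}=M$ and $\tilde{M}_1=(1)$. Then $d_{1,2}=t$ and ${\bf c}_2=(t,-1)^t$, which indeed satisfies $M{\bf c}_2=t-t=0$. Here one may take $m_{\mathfrak p}=1$ at the infinite place of $k(t)$ and $m_{\mathfrak p}=0$ elsewhere, and $T({\bf c}_2)=1=\min\{l,m\}\sum_{\mathfrak p} m_{\mathfrak p}$, so the bound of the lemma is attained in this case.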
\begin{theorem}
\label{thm:riemann-roch1}
Suppose that $F=0$ has only ordinary singularities.
Let $D$ be a divisor in ${\mathcal R}$. Denote $\mu=\deg(D^{+}+D^{-})$ and
$N=\max\{T_{\bm \xi}(D), T(F)\}.$ Then there is a ${\overline{k(t)}}$-basis $B$ of ${\mathfrak L}(D)$ such that every element of $B$ can be represented by $G({\bm \xi})/H({\bm \xi})$ where $G,H$ are two homogeneous polynomials of the same degree not greater than $2(n+1)\mu+(n-1)^2/2$ and
$$T(G), T(H)\leq 4n^5(n+1)^3(2\mu+(n-1)/2)^3 N.$$
\end{theorem}
\begin{proof}
By Corollary~\ref{cor:simplification2}, there are two homogeneous polynomials $G_1,H_1$ of the same degree $\leq 2\mu$ such that $\hat{D}={\rm div}_{\bm \xi}(G_1/H_1)+D$ is very simple. Moreover
\begin{align*}
T(G_1), T(H_1) & \leq \mu (n+1)N,\\
T_{\bm \xi}(\hat{D}) \leq (2n+4)T(F)+2n^2T_{\bm \xi}(D) &\leq (2n^2+2n+4)N,\\
\deg(\hat{D}^{+}), \deg(\hat{D}^{-})&\leq 2n\mu.
\end{align*}
Due to Corollary~\ref{cor:simplification1}, there is a homogeneous polynomial $G_2$ of degree not greater than
$2n\mu+(n-1)^2/2$
such that ${\rm div}_{\bm \xi}(G_2)=\hat{D}^{+}+E_{\bm \xi}+A$, where $A$ is a very simple and effective divisor and ${\rm supp}(A)\cap {\rm supp}(\hat{D}^{-})=\emptyset$.
Moreover
$$T(G_2)\leq (2n\mu+(n-1)^2/2) T_{\bm \xi}(\hat{D}^{+}+E_{\bm \xi})\leq (2n\mu+(n-1)^2/2)(2n^2+2n+4)N$$
and
$$
T_{\bm \xi}(A)\leq 2(T(F)+nT_{\bm \xi}(\hat{D}^{+}+E_{\bm \xi}))\leq 2(2n^3+2n^2+4n+1)N.
$$
Denote $d=\deg(G_2)$. By Lemma~\ref{lm:riemannrochspace}, to compute ${\mathfrak L}(D)$, it suffices to compute all homogeneous polynomials $H_2$ of degree $d$ satisfying that
$$
{\rm div}_{\bm \xi}(H_2)\geq \hat{D}^{-}+E_{\bm \xi}+A.
$$
Assume that
$$
H_2=\sum_{i=0}^d \sum_{j=0}^{d-i} c_{i,j} x_0^i x_1^j x_2^{d-i-j}
$$
where $c_{i,j}$ are indeterminates. There are $(d+1)(d+2)/2$ indeterminates in total. For each ${\mathfrak P}\in {\rm supp}(\hat{D}^{-}+A)$, ${\rm div}_{\bm \xi}(H_2)\geq {\mathfrak P}$ if and only if the center of ${\mathfrak P}$ with respect to ${\bm \xi}$ is a zero of $H_2$. This imposes $\deg(\hat{D}^{-}+A)$ linear constraints on $H_2$. At the same time, ${\rm div}_{\bm \xi}(H_2)\geq (r_i-1)\sum_{j=1}^{r_i}{\mathfrak Q}_{i,j}$ if and only if the center of ${\mathfrak Q}_{i,1}$ with respect to ${\bm \xi}$ is a common zero of
$$\frac{\partial^{j_0+j_1+j_2}H_2}{\partial x_0^{j_0}\,\partial x_1^{j_1}\,\partial x_2^{j_2}}$$
for all nonnegative integers $j_0,j_1,j_2$ satisfying that $j_0+j_1+j_2=r_i-2$, where ${\mathfrak Q}_{i,j}$ is as in Definition~\ref{def:adjoint}. This imposes $r_i(r_i-1)/2$ linear constraints on $H_2$. So
there are in total $\deg(\hat{D}^{-}+A)+\deg(E_{\bm \xi})/2$ linear constraints on $H_2$. The problem of finding $H_2$ is reduced to that of solving the system
$M Y=0$, where $Y$ is a vector with indeterminate entries and $M$ is a $(\deg(\hat{D}^{-}+A)+\deg(E_{\bm \xi})/2)\times (d+1)(d+2)/2$ matrix. Denote by ${\bf c}_{\mathfrak P}=(c_{0,{\mathfrak P}}, c_{1,{\mathfrak P}}, c_{2,{\mathfrak P}})$ the center with respect to ${\bm \xi}$ of ${\mathfrak P}\in {\rm supp}(\hat{D}^{-}+E_{\bm \xi}+A)$.
Then the entries in the same row of $M$ are monomials of total degree $\leq d$ in $c_{0,{\mathfrak P}}, c_{1,{\mathfrak P}}, c_{2,{\mathfrak P}}$ for some ${\mathfrak P}$ in ${\rm supp}(\hat{D}^{-}+E_{\bm \xi}+A)$. Without loss of generality, we may assume that one of $c_{0,{\mathfrak P}}, c_{1,{\mathfrak P}}, c_{2,{\mathfrak P}}$ is 1. Let $R$ be a finite extension of $k(t)$ containing all $c_{i,{\mathfrak P}}$. For each place ${\mathfrak p}$ of $R$, set
$$
m_{\mathfrak p}=d \sum_{{\mathfrak P}\in {\rm supp}(\hat{D}^{-}+E_{\bm \xi}+A)}\max\{-\nu_{\mathfrak p}(c_{0,{\mathfrak P}}), -\nu_{\mathfrak p}(c_{1,{\mathfrak P}}), -\nu_{\mathfrak p}(c_{2,{\mathfrak P}})\}.
$$
Since $\max\{-\nu_{\mathfrak p}(c_{0,{\mathfrak P}}), -\nu_{\mathfrak p} (c_{1,{\mathfrak P}}), -\nu_{\mathfrak p}(c_{2,{\mathfrak P}})\}\geq 0$ for all ${\mathfrak P}$, $m_{\mathfrak p}\geq 0$ and
\begin{align*}
-\nu_{\mathfrak p}(a_{i,j})\leq d \max_{{\mathfrak P}\in{\rm supp}(\hat{D}^{-}+E_{\bm \xi}+A)}\max\{-\nu_{\mathfrak p}(c_{0,{\mathfrak P}}), -\nu_{\mathfrak p}(c_{1,{\mathfrak P}}), -\nu_{\mathfrak p}(c_{2,{\mathfrak P}})\} \leq m_{\mathfrak p}
\end{align*}
where $M=(a_{i,j})$. Note that
$$
\deg(\hat{D}^{-}+E_{\bm \xi}+A)\leq \deg({\rm div}_{\bm \xi}(H_2))=nd.
$$
Applying Lemma~\ref{lm:linearsystem} to $M$ yields that
\begin{align*}
T(H_2)& \leq \deg(\hat{D}^{-}+E_{\bm \xi}+A)\sum_{{\mathfrak p}}\frac{m_{\mathfrak p}}{[R:k(t)]}\\
& \leq nd^2\sum_{{\mathfrak P}} \sum_{{\mathfrak p}}\frac{\max\{-\nu_{\mathfrak p}(c_{0,{\mathfrak P}}), -\nu_{\mathfrak p}(c_{1,{\mathfrak P}}), -\nu_{\mathfrak p}(c_{2,{\mathfrak P}})\}}{[R:k(t)]}\\
&\leq nd^2\sum_{{\mathfrak P}} T({\bf c}_{\mathfrak P}) \leq nd^2\deg(\hat{D}^{-}+E_{\bm \xi}+A) \max_{{\mathfrak P}} T({\bf c}_{\mathfrak P})\\
&\leq n^2d^3 T_{\bm \xi}(\hat{D}^{-}+E_{\bm \xi}+A)\leq 2n^2d^3(2n^3+2n^2+4n+1)N \\
&\leq 2n^5(2\mu+(n-1)/2)^3(2n^3+2n^2+4n+1)N.
\end{align*}
The last inequality holds because
$$d\leq 2n\mu+(n-1)^2/2\leq n(2\mu+(n-1)/2).$$
Set $G=H_2G_1$ and $H=G_2H_1$. Then
\begin{align*}
\deg(G)&=\deg(H)\leq 2(n+1)\mu+(n-1)^2/2,\\
T(G), T(H)&\leq T(H_2)+T(G_1)<T(H_2) + \mu (n+1)N\\
&\leq 2n^5(2\mu+(n-1)/2)^3(2n^3+2n^2+4n+2)N\\
&\leq 4n^5(n+1)^3(2\mu+(n-1)/2)^3 N.
\end{align*}
\end{proof}
Next, we consider the case that $F=0$ may have non-ordinary singular points. Let ${\mathcal C}({\bm \xi})$ be a plane projective model of ${\mathcal R}$. Suppose that ${\mathcal C}({\bm \xi})$ is defined by $F_0$ and, for $i=1,\dots,s$, $F_i$ is the transform of $F_{i-1}$ under the quadratic transformation ${\mathcal Q}_{{\bf c}_{i-1}}$, where ${\bf c}_{i-1}$ is a singular point of $F_{i-1}=0$. Denote $n=\deg(F_0)$ and set ${\bm \xi}_0={\bm \xi}$, ${\bm \xi}_{i+1}={\mathcal Q}_{{\bf c}_{i}}^{-1}({\bm \xi}_{i})$ and
\begin{equation}
\label{eq:seqtransformation}
N_i=2^{\frac{i(i-1)}{2}}n^{i}\max\{8nT(F_0),T_{{\bm \xi}_0}(D)\}.
\end{equation}
\begin{prop}
\label{prop:seqtransformation}
Let $D$ be a divisor in ${\mathcal R}$ and the notations $F_i, {\bm \xi}_i, N_i$ as above. One has that
\begin{enumerate}
\item
$\deg(F_i)\leq n 2^{i}-2^{i+1}+2$;
\item
$ T_{{\bm \xi}_i}(D), T(F_i)\leq N_i$.
\end{enumerate}
\end{prop}
\begin{proof}
1. Set $n_i=\deg(F_i)$. Since every ${\bf c}_i$ is a singular point, one has that $n_i\leq 2n_{i-1}-2$. This implies $n_i\leq 2^i n -2^{i+1}+2$.
2. Denote by $S_i$ the maximum of the heights of the singular points of $F_i=0$. We first prove by induction on $i$ that $T(F_i), S_i\leq N_i$ for all $i=0,\dots,s$. Note that $n_i+1<2^i n$ for all $i=1,\dots,s$. Since $N_0\geq 8nT(F_0)$, it is clear that $T(F_0)<N_0$ and $S_0<4nT(F_0)<N_0$ by Corollary~\ref{cor:singularity}.
Now assume that $T(F_i), S_i\leq N_i$ for $i=\ell\geq 0$. Consider the case $i=\ell+1$. By Corollary~\ref{cor:qtransformation} and induction hypothesis, one has that
$$
T(F_{\ell+1}) \leq T(F_\ell)+n_{\ell}S_\ell\leq (1+n_\ell)N_\ell<2^\ell n N_\ell=N_{\ell+1}.
$$
Note that $n_0=n>2$ as the curve $F_0=0$ has singularities. One sees that $4S_\ell \leq 4N_\ell < 2^\ell n N_\ell=N_{\ell+1}$ if $\ell>0$ and
$$4S_0< 16nT(F_0)<8n^2 T(F_0)\leq N_1.$$
Consequently, $4S_j<N_{j+1}$ for all $j\geq 0$. On the other hand, one has already seen that
$T(F_\ell)+n_\ell S_\ell <N_{\ell+1}.$
By Corollary~\ref{cor:qtransformation} again,
\begin{align*}
S_{\ell+1}\leq \max\{4S_\ell, T(F_{\ell})+n_{\ell}S_\ell\}<N_{\ell+1}.
\end{align*}
For the divisor $D$, it is obvious that $T_{{\bm \xi}_0}(D)\leq N_0$. Suppose that $T_{{\bm \xi}_i}(D)\leq N_i$ for $i=\ell\geq 0$. By Corollary~\ref{cor:qtransformation} and the induction hypothesis,
$$
T_{{\bm \xi}_{\ell+1}}(D)\leq \max\{2(S_\ell+N_\ell), T(F_\ell)+n_\ell T_{{\bm \xi}_\ell}(D)\}\leq N_{\ell+1}.
$$
\end{proof}
\begin{notation}
\label{not:steps}
Let $F$ be the defining polynomial of ${\mathcal C}({\bm \xi})$.
Denote by $s({\bm \xi})$ the number of quadratic transformations such that ${\mathcal C}(\tilde{{\bm \xi}})$ has only ordinary singularities, where $\tilde{{\bm \xi}}$ is the image of ${\bm \xi}$ under these quadratic transformations. By Theorem 2 in Chapter 7 of \cite{fulton}, $s({\bm \xi})$ can be chosen to be an integer not greater than
$$
m+\frac{(n-1)(n-2)}{2}-\sum \frac{r_{\bf c}(r_{\bf c}-1)}{2}\leq \frac{(n-1)(n-2)}{2}
$$
where $n=\deg(F)$, $m$ is the number of non-ordinary singularities of $F=0$, ${\bf c}$ ranges over all singularities of $F=0$ and $r_{\bf c}$ is the multiplicity of ${\bf c}$.
\end{notation}
\begin{theorem}
\label{thm:riemann-roch2}
Let $D$ be a divisor in ${\mathcal R}$. Denote
$$n=\deg(F), s=s({\bm \xi}), \mu=\deg(D^{+}+D^{-}).$$
Then there is a ${\overline{k(t)}}$-basis $B$ of ${\mathfrak L}(D)$ such that every element of $B$ can be represented by $G({\bm \xi})/H({\bm \xi})$ where $G,H$ are two homogeneous polynomials of the same degree not greater than $2^{2s+1}(n+1)(\mu+2^{s-2}n)$ and
$$T(G), T(H)\leq 2^{\frac{s^2}{2}+\frac{15s}{2}+5}n^{s+5}(n+1)^3(\mu+2^{s-2}n)^3 \max\{8nT(F), T_{\bm \xi}(D)\}.$$
\end{theorem}
\begin{proof}
If $F=0$ has only ordinary singularities, i.e. $s=0$, then the assertion is clear by Theorem \ref{thm:riemann-roch1}. Suppose $F=0$ has non-ordinary singularities, i.e. $s\geq 1$.
Let $\tilde{{\bm \xi}}$ be the image of ${\bm \xi}$ under $s$ quadratic transformations ${\mathcal Q}^{-1}_{{\bf c}_0}, \dots, {\mathcal Q}^{-1}_{{\bf c}_{s-1}}$ such that ${\mathcal C}(\tilde{{\bm \xi}})$ has only ordinary singularities.
Let $\tilde{F}$ be the defining polynomial of ${\mathcal C}(\tilde{{\bm \xi}})$ and set
$$\kappa=2^{s(s-1)/2}n^s \max\{8nT(F), T_{\bm \xi}(D)\}.$$
Then by Proposition~\ref{prop:seqtransformation}
$$
\tilde{n}=\deg(\tilde{F})\leq 2^s(n-2)+2 \,\,\mbox{and}\,\,
T_{\tilde{{\bm \xi}}}(D), T(\tilde{F})\leq \kappa.
$$
By Theorem~\ref{thm:riemann-roch1}, there is a ${\overline{k(t)}}$-basis $B$ of ${\mathfrak L}(D)$ satisfying that
each element in $ B$ can be represented by $\tilde{G}(\tilde{{\bm \xi}})/\tilde{H}(\tilde{{\bm \xi}})$ where $\tilde{G},\tilde{H}$ are homogeneous polynomials of degree not greater than $2(\tilde{n}+1)\mu+(\tilde{n}-1)^2/2$ and
$$
T(\tilde{G}), T(\tilde{H}) \leq 4\tilde{n}^5(\tilde{n}+1)^3 (2\mu+(\tilde{n}-1)/2)^3 \kappa.
$$
It remains to represent the elements of $B$ in terms of ${\bm \xi}$. We use the same notations as in the proof of Proposition~\ref{prop:seqtransformation}. Let ${\bm \xi}_0={\bm \xi}$ and ${\bm \xi}_i={\mathcal Q}_{{\bf c}_{i-1}}^{-1}({\bm \xi}_{i-1})$. Denote by $n_i$ the degree of the defining polynomial of ${\mathcal C}({\bm \xi}_i)$ and by $S_i$ the maximum of the heights of the singular points of ${\mathcal C}({\bm \xi}_i)$. Let $G_s=\tilde{G}$ and
$G_{i-1}=G_i({\mathcal Q}_{{\bf c}_{i-1}}^{-1}((x_0,x_1,x_2)))$
for all $i=1,\dots,s.$ One sees that $\deg(G_i)=2^{s-i}\deg(\tilde{G})$. By Lemmas~\ref{lm:lineartransformation} and~\ref{lm:standardtransformation},
$$
T(G_{i-1})\leq T(\bar{G}_i)+\deg(\bar{G}_i)T({\bf c}_{i-1})=T(G_i)+2\deg(G_i)T({\bf c}_{i-1})
$$
where $\bar{G}_i=G_i({\mathcal Q}^{-1}((x_0,x_1,x_2)))$.
From the proof of Proposition~\ref{prop:seqtransformation}, we have
\begin{align*}
T(G_0)&\leq T(\tilde{G})+\sum_{i=0}^{s-1}2\deg(G_{i+1})T({\bf c}_i)
\leq T(\tilde{G})+(\sum_{i=0}^{s-1}2^{s-i})\deg(\tilde{G})N_{s-1}\\
&\leq T(\tilde{G})+2^{s+1}\deg(\tilde{G})N_{s-1},
\end{align*}
where $N_{s-1}$ is given as in (\ref{eq:seqtransformation}). Note that $\tilde{n}\leq n2^s$. One has that
\begin{align*}
\deg(G_0)& \leq 2^s \deg(\tilde{G}) \leq 2^s (2(\tilde{n}+1)\mu+(\tilde{n}-1)^2/2) \\
&\leq 2^s (\tilde{n}+1)(2\mu+(\tilde{n}-1)/2)\leq 2^{2s+1}(n+1)(\mu+n2^{s-2});\\
T(G_0)&\leq 4\tilde{n}^5(\tilde{n}+1)^3 \left(2\mu+\frac{\tilde{n}-1}{2}\right)^3 \kappa + 2^{s+1}\left(2(\tilde{n}+1)\mu+\frac{(\tilde{n}-1)^2}{2}\right)N_{s-1} \\
&\leq 4\tilde{n}^5(\tilde{n}+1)^3 \left(2\mu+\frac{\tilde{n}-1}{2}\right)^3 \kappa + 2^{s+1}(\tilde{n}+1)\left(2\mu+\frac{\tilde{n}-1}{2}\right)\kappa\\
&\leq \left(4\tilde{n}^5\left(2\mu+\frac{\tilde{n}-1}{2}\right)+2^{s+1}\right)(\tilde{n}+1)^3 \left(2\mu+\frac{\tilde{n}-1}{2}\right)^2 \kappa \\
&\leq \left(4n^52^{5s}\left(2\mu+\frac{n2^s-1}{2}\right)+2^{s+1}\right)(\tilde{n}+1)^3 \left(2\mu+\frac{\tilde{n}-1}{2}\right)^2 \kappa \\
&\leq 8n^52^{5s}(\mu+n2^{s-2})(\tilde{n}+1)^3 \left(2\mu+\frac{\tilde{n}-1}{2}\right)^2 \kappa \\
&\leq 2^{8s+5}n^5(n+1)^3(\mu+n2^{s-2})^3 \kappa \\
&\leq 2^{\frac{s^2}{2}+\frac{15s}{2}+5}n^{s+5}(n+1)^3(\mu+2^{s-2}n)^3 \max\{8nT(F), T_{\bm \xi}(D)\}.
\end{align*}
Similarly, we obtain bounds for $\deg(H_0)$ and $T(H_0)$.
\end{proof}
\section{Heights on plane algebraic curves}
\label{sec:heights}
Let $f\in k[x_0,x_1]$ be an irreducible polynomial over $k$ and $a,b\in k(t)\setminus \{0\}$ satisfy $f(a,b)=0$, i.e. $(a,b)$ is a rational parametrization of $f=0$. The result on parametrization (see \cite{sendra-winkler} for instance) implies that
$$
\deg(a)=m \deg(f,x_1),\,\,\deg(b)=m\deg(f,x_0).
$$
for some positive integer $m$. In other words, $T(a)\deg(f,x_0)=T(b)\deg(f,x_1)$. A similar relation holds for points on algebraic curves defined over ${\overline{k(t)}}$, i.e.\ there is a constant $C$ depending only on $f$ such that if $(a,b)$ is a point of $f(x_0,x_1)=0$ with coordinates in ${\overline{k(t)}}$ then
$$
\deg(f,x_0)T(a)-C \leq \deg(f,x_1)T(b)\leq \deg(f,x_0)T(a)+C.
$$
This is a special case of a general result for points in complete nonsingular varieties over a field with valuations.
In the case of algebraic curves defined over ${\overline{k(t)}}$, Eremenko in 1999 presented another proof which actually provides a procedure to find $C$ explicitly. In this section, we shall present an explicit formula for $C$ following Eremenko's proof.
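For a concrete instance of the degree relation for parametrizations above (our illustration): the cuspidal cubic $f=x_1^2-x_0^3$ has the parametrization $(a,b)=(t^2,t^3)$, so $\deg(a)=2=m\deg(f,x_1)$ and $\deg(b)=3=m\deg(f,x_0)$ with $m=1$, and indeed
$$
T(a)\deg(f,x_0)=2\cdot 3=3\cdot 2=T(b)\deg(f,x_1).
$$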
\subsection{Heights on plane projective curves}
Throughout this subsection, $f$ is an irreducible polynomial in ${\overline{k(t)}}[x_0,x_1]$ and ${\mathcal R}$ is the algebraic function field over ${\overline{k(t)}}$ associated to $f$. Let us start with a refinement of Lemma 1 of \cite{eremenko}.
\begin{lemma}
\label{lm:pointsinequality}
Assume that $f\in {\overline{k(t)}}[x_0,x_1]$ is irreducible over ${\overline{k(t)}}$ and $\alpha,\beta\in {\mathcal R}\setminus {\overline{k(t)}}$ satisfying $f(\alpha,\beta)=0$. If ${\rm div}(\alpha)^{-} \leq {\rm div}(\beta)^{-}$, then for every place ${\mathfrak P}$ of ${\mathcal R}$ with $\nu_{{\mathfrak P}}(\beta)\geq 0$, we have that
$$
T(\pi_{\mathfrak P}(\alpha))\leq T(\pi_{\mathfrak P}(\beta))+T(f).
$$
\end{lemma}
\begin{proof}
Since ${\rm div}(\alpha)^{-}\leq {\rm div}(\beta)^{-}$, by Proposition 2 of \cite{eremenko}, $f$ can be written to be of the form
$$
f= x_0^n+a_{n-1}(x_1)x_0^{n-1}+\dots+a_1(x_1)x_0+a_0(x_1),
$$
where $a_i\in {\overline{k(t)}}[x_1]$ with $\deg(a_i)\leq n-i$. Write $a_i=\sum_{j=0}^{n-i} a_{i,j} x_1^j$ with $a_{i,j}\in {\overline{k(t)}}$. Let $R$ be a finite extension of $k(t)$ containing all $a_{i,j}$ and $\pi_{\mathfrak P}(\alpha), \pi_{\mathfrak P}(\beta)$. Suppose that ${\mathfrak p}$ is a place of $R$. Then
\begin{align*}
\nu_{\mathfrak p}(\pi_{\mathfrak P}(\alpha^n))&=\nu_{\mathfrak p}\left(-\sum_{i=0}^{n-1} \sum_{j=0}^{n-i}a_{i,j}\pi_{\mathfrak P}(\beta)^j \pi_{\mathfrak P}(\alpha)^i\right)\\
&\geq \min_{0\leq i \leq n-1,0\leq j\leq n-i}\{\nu_{\mathfrak p}(a_{i,j})+j\nu_{\mathfrak p}(\pi_{\mathfrak P}(\beta))+i\nu_{\mathfrak p}(\pi_{\mathfrak P}(\alpha))\}\\
&=\nu_{\mathfrak p}(a_{i',j'})+j'\nu_{\mathfrak p}(\pi_{\mathfrak P}(\beta))+i'\nu_{\mathfrak p}(\pi_{\mathfrak P}(\alpha))
\end{align*}
for some $0\leq i' \leq n-1, 0\leq j'\leq n-i'$.
Equivalently,
$$
\nu_{\mathfrak p}(\pi_{\mathfrak P}(\alpha))\geq \frac{1}{n-i'}\nu_{\mathfrak p}(a_{i',j'})+\frac{j'}{n-i'}\nu_{\mathfrak p}(\pi_{\mathfrak P}(\beta)).
$$
Therefore
\begin{align*}
\max\{0, -\nu_{\mathfrak p}(\pi_{\mathfrak P}(\alpha))\} &\leq \max\left\{0,-\frac{\nu_{\mathfrak p}(a_{i',j'})}{n-i'}-\frac{j'\nu_{\mathfrak p}(\pi_{\mathfrak P}(\beta))}{n-i'}\right\}\\
& \leq \max\left\{0,-\nu_{\mathfrak p}(a_{i',j'})\right\}+\max\left\{0,-\nu_{\mathfrak p}(\pi_{\mathfrak P}(\beta))\right\}\\
&\leq \max_{i,j}\left\{0,-\nu_{\mathfrak p}(a_{i,j})\right\}+\max\left\{0,-\nu_{\mathfrak p}(\pi_{\mathfrak P}(\beta))\right\}.
\end{align*}
This implies that $T(\pi_{\mathfrak P}(\alpha))\leq T(\pi_{\mathfrak P}(\beta))+T(f)$.
\end{proof}
\begin{lemma}
\label{lm:distinctpoles}
Let $S$ be a finite set of places in ${\mathcal R}$ and $\alpha\in {\mathcal R}$. Then there are $a_1,a_2\in k$ with $a_2\neq 0$ such that
$$
{\rm supp}\left({\rm div}\left(\frac{\alpha}{a_1\alpha+a_2}\right)^{-}\right)\cap S =\emptyset.
$$
\end{lemma}
\begin{proof}
Set
$$
M=\left\{ \pi_{\mathfrak P}(\alpha) \mid \mbox{${\mathfrak P}\in S$ with $\nu_{\mathfrak P}(\alpha)\geq 0$} \right\}. $$
Then $M$ is a finite set in ${\overline{k(t)}}$. Let $a_1, a_2\in k$ satisfy that $a_2\neq 0$, that $a_1\neq 0$ whenever $\alpha$ has a pole in $S$, and that $a_1c+a_2\neq 0$ for all $c\in M$; such a choice is possible since $k$ is infinite.
For ${\mathfrak P}\in S$ with $\nu_{\mathfrak P}(\alpha)\geq 0$, one has that
$$
\pi_{\mathfrak P}(a_1\alpha+a_2)=a_1\pi_{\mathfrak P}(\alpha)+a_2\neq 0,\,\,\mbox{i.e.}\,\,\nu_{\mathfrak P}(a_1\alpha+a_2)=0.
$$
This implies that $\nu_{\mathfrak P}(\alpha/(a_1\alpha+a_2))=\nu_{\mathfrak P}(\alpha)\geq 0$. On the other hand, for ${\mathfrak P}\in S$ with $\nu_{\mathfrak P}(\alpha)<0$, since $a_1\neq 0$ in this case we have $\nu_{\mathfrak P}(a_1\alpha+a_2)=\nu_{\mathfrak P}(\alpha)$, and one has that
$$
\nu_{\mathfrak P}(\alpha/(a_1\alpha+a_2))=\nu_{\mathfrak P}(\alpha)-\nu_{\mathfrak P}(a_1\alpha+a_2)=\nu_{\mathfrak P}(\alpha)-\nu_{\mathfrak P}(\alpha)=0.
$$
In both cases, ${\mathfrak P}$ is not a pole of $\alpha/(a_1\alpha+a_2)$. Thus $a_1,a_2$ satisfy the requirement.
\end{proof}
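The role of the requirement $a_1\neq 0$ at poles of $\alpha$ can also be seen directly (our illustration): if $\nu_{\mathfrak P}(\alpha)<0$ then $\nu_{\mathfrak P}(\alpha^{-1})>0$, so
$$
\pi_{\mathfrak P}\left(\frac{\alpha}{a_1\alpha+a_2}\right)=\pi_{\mathfrak P}\left(\frac{1}{a_1+a_2\alpha^{-1}}\right)=\frac{1}{a_1},
$$
i.e., the transformed element takes the finite value $1/a_1$ at every pole of $\alpha$.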
The main result of this section is the following theorem which is a special case of Lemma 2 of \cite{eremenko}. The original proof of Lemma 2 of \cite{eremenko} contains a small gap. We shall fill in this gap in the proof.
\begin{theorem}
\label{thm:boundpoints1}
Let $f$ be an irreducible polynomial in ${\overline{k(t)}}[x_0,x_1]$ of degree $n_0$ with respect to $x_0$ and of degree $n_1$ with respect to $x_1$. Suppose that $n={\rm tdeg}(f)$ and $N\geq 1$. Then for every $c_0,c_1\in {\overline{k(t)}}$ satisfying $f(c_0,c_1)=0$, one has that
$$
\left(1-\frac{n}{N+n}\right)n_0T(c_0)-C \leq n_1T(c_1) \leq \left(1+\frac{n}{N}\right)n_0T(c_0)+C
$$
where
\begin{equation}
\label{eq:boundsforpoints}
C=2^{s^2/2+15s/2+10}(2Nn+n^2+2^{s-2})^4 n^{s+9}(n+1)^4T(f)/N
\end{equation}
and $s$ is the number of quadratic transformations which are applied to resolve the singularities of $f=0$.
\end{theorem}
\begin{proof}
If one of the $c_i$ is in $k$, then the height of the other one is not greater than $T(f)$ and the inequalities obviously hold. In the following, we assume that neither $c_0$ nor $c_1$ is in $k$.
Let ${\mathcal R}$ be the algebraic function field associated to $f$ and $\alpha,\beta\in {\mathcal R}\setminus {\overline{k(t)}}$ satisfy that $f(\alpha,\beta)=0$.
Choose $a_1,a_2\in k$ such that $a_2\neq 0$ and
$${\rm supp}({\rm div}(\alpha/(a_1\alpha+a_2))^{-})\cap {\rm div}(\beta)^{-}=\emptyset.$$
Such $a_1,a_2$ exist due to Lemma~\ref{lm:distinctpoles}. Set $\bar{\alpha}=\alpha/(a_1\alpha+a_2)$. Consider the divisor
$$D=(N+n)n_1{\rm div}(\beta)^{-} -Nn_0{\rm div}(\bar{\alpha})^{-}.$$
Note that $\deg({\rm div}(\bar{\alpha})^{-})=n_1$, $\deg({\rm div}(\beta)^{-})=n_0$ and
$$n_0n_1\geq n_0+n_1-1\geq n-1. $$
So
$$\deg(D)=nn_0n_1\geq n(n-1). $$
This implies that $\deg(D)$ is greater than the genus of $f=0$ and thus ${\mathfrak L}(D)\neq \{0\}$. Denote ${\bm \xi}=(\alpha,\beta,1)$ and by $F(x_0,x_1,x_2)$ the homogenization of $f$. We claim that $T_{\bm \xi}(D)\leq T(f)$. Note that $T_{\bm \xi}(D)=\max\{T_{\bm \xi}({\rm div}(\bar{\alpha})^{-}), T_{\bm \xi}({\rm div}(\beta)^{-})\}.$ Each ${\bf a}\in {\mathcal S}_{\bm \xi}({\rm div}(\beta)^{-})$ is of the form $(b_0,b_1,0)$ where $b_0,b_1$ satisfy $F(b_0,b_1,0)=0$. So $T({\bf a})\leq T(F)=T(f)$ and then $T_{\bm \xi}({\rm div}(\beta)^{-})\leq T(f)$. If $a_1=0$ then each point in ${\mathcal S}_{\bm \xi}({\rm div}(\bar{\alpha})^{-})$ is of the form $(b_0,b_1,0)$ too and so $T_{\bm \xi}({\rm div}(\bar{\alpha})^{-})\leq T(f)$. Otherwise, for each ${\mathfrak P}\in {\rm supp}({\rm div}(\bar{\alpha})^{-})$, one has that
$$\nu_{\mathfrak P}(\alpha)=\nu_{\mathfrak P}(a_2\bar{\alpha}/(1-a_1\bar{\alpha}))=\nu_{\mathfrak P}(\bar{\alpha})-\nu_{\mathfrak P}(1-a_1\bar{\alpha})=0.$$
Moreover $\pi_{\mathfrak P}(\alpha)=-a_2/a_1$. This implies that each point of ${\mathcal S}_{\bm \xi}({\rm div}(\bar{\alpha})^{-})$ is of the form $(-a_2/a_1, b, 1)$ whose height is not greater than $T(f)$. Thus $T_{\bm \xi}({\rm div}(\bar{\alpha})^{-})\leq T(f)$. Our claim is proved. Note that
$$\deg(D^{+}+D^{-})=2Nn_0n_1+n n_0n_1\leq (2N+n)n^2.$$
Suppose that $\gamma\in {\mathfrak L}(D)\setminus\{0\}$. Due to Theorem~\ref{thm:riemann-roch2}, $\gamma=G({\bm \xi})/H({\bm \xi})$ where $G,H$ are two homogeneous polynomials of degree not greater than
$$
2^{2s+1}(n+1)\left(\deg(D^{+}+D^{-})+2^{s-2}n\right)\leq 2^{2s+1}n(n+1)(2Nn+n^2+2^{s-2})
$$
and
\begin{align*}
T(G), T(H) &\leq 2^{s^2/2+15s/2+8}n^{s+6}(n+1)^3\left(\deg(D^{+}+D^{-})+2^{s-2}n\right)^3 T(f)\\
&\leq 2^{s^2/2+15s/2+8}n^{s+9}(n+1)^3\left(2Nn+n^2+2^{s-2}\right)^3 T(f).
\end{align*}
Set
$$
\tilde{C}=2^{s^2/2+15s/2+9} n^{s+9}(n+1)^4(2Nn+n^2+2^{s-2})^4T(f).
$$
Without loss of generality, we assume that $G(x_0,x_1,1)$ and $H(x_0,x_1,1)$ have no common factor. Otherwise, by Corollary~\ref{cor:factor}, we may replace $G$ and $H$ by $G/W$ and $H/W$ where $W$ is the greatest common factor of $G$ and $H$. Moreover, multiplying by suitable elements in ${\overline{k(t)}}$ if necessary, we can assume that both $G(x_0,x_1,1)$ and $H(x_0,x_1,1)$ have 1 as a coefficient.
Let ${\mathfrak P}$ be a place of ${\mathcal R}$ containing $\alpha-c_0$ and $\beta-c_1$. Then $\nu_{\mathfrak P}(\alpha)=0$ and $\nu_{\mathfrak P}(\beta)=0$. As $\gamma\in {\mathfrak L}(D)$, $\nu_{\mathfrak P}(\gamma)\geq 0$. If $\nu_{\mathfrak P}(\gamma)>0$, then $\nu_{\mathfrak P}(G(\alpha,\beta,1))>0$ and so $G(c_0,c_1,1)=0$. Consequently, $(c_0,c_1)$ is a common point of $G(x_0,x_1,1)=0$ and $f(x_0,x_1)=0$. Proposition~\ref{prop:intersection} implies that $T(c_i)\leq \deg(G)T(f)+nT(G)$. It is easy to verify that in this case $T(c_0), T(c_1)$ satisfy the required inequalities. Therefore we only need to prove the case $\nu_{\mathfrak P}(\gamma)=0$.
Set
\begin{align*}
h_1(x_1,y)&={\rm res}_{x_0}(f(x_0,x_1),H(x_0,x_1,1)y-G(x_0,x_1,1))\\
h_2(x_2,y)&={\rm res}_{x_1}(h_1(x_1,y), x_2-x_1^{(N+n)n_1})
\end{align*}
where ${\rm res}_{x_0}(f,g)$ denotes the resultant of $f$ and $g$ with respect to $x_0$. Note that $h_1(x_1,y)\neq 0$, because $G(x_0,x_1,1)$ and $H(x_0,x_1,1)$ have no common factor. As $D$ is not effective, $\gamma\notin {\overline{k(t)}}$. Furthermore, as $h_1(x_1,\gamma)=0$, $\deg(h_1,x_1)>0$. It is easy to see that $h_2\neq 0$ and $h_2(\beta^{(N+n)n_1},\gamma)=0$. Let $\tilde{h}_2$ be an irreducible factor of $h_2$ in ${\overline{k(t)}}[x_2,y]$ such that $\tilde{h}_2(\beta^{(N+n)n_1},\gamma)=0$.
Propositions~\ref{prop:resultant} and~\ref{prop:height2} imply that
\begin{align*}
T(\tilde{h}_2)&\leq T(h_2)\leq (N+n)n_1 T(h_1)\\
& \leq (N+n)n_1\left(\deg(H)T(f)+n(T(G)+T(H))\right)\\
&\leq (N+n)n_1(2^{2s+1}n(n+1)(2Nn+n^2+2^{s-2})T(f) \\
& +2n2^{s^2/2+15s/2+8}n^{s+9}(n+1)^3\left(2Nn+n^2+2^{s-2}\right)^3 T(f) )\\
&\leq (N+n)n_1 2^{s^2/2+15s/2+9}n^{s+9}(n+1)^4\left(2Nn+n^2+2^{s-2}\right)^3 T(f)\\
&\leq 2^{s^2/2+15s/2+9}n^{s+9}(n+1)^4\left(2Nn+n^2+2^{s-2}\right)^4 T(f)= \tilde{C}.
\end{align*}
Remark that
$$
{\rm div}(\beta^{(N+n)n_1})^{-}=(N+n)n_1{\rm div}(\beta)^{-}\geq D^{+}\geq {\rm div}(\gamma)^{-}.
$$
Note that $\pi_{\mathfrak P}(\beta)=c_1$.
By Lemma~\ref{lm:pointsinequality},
\begin{align}
\label{eq:betagamma}
T(\pi_{\mathfrak P}(\gamma)) &\leq T\left(\pi_{\mathfrak P}(\beta)^{(N+n)n_1}\right)+T(\tilde{h}_2) \\ \notag
& \leq (N+n) n_1T\left(c_1\right)+\tilde{C}.
\end{align}
Similarly, let $r_1(x_0,y)={\rm res}_{x_1}(f(x_0,x_1), G(x_0,x_1,1)y-H(x_0,x_1,1))$ and
$$r_2(x_2,y)={\rm res}_{x_0}\left(r_1(x_0,y), (a_1x_0+a_2)^{Nn_0}x_2-x_0^{Nn_0}\right).$$ Then $r_2\neq 0$ and $r_2(\bar{\alpha}^{Nn_0},\gamma^{-1})=0$. Let $\tilde{r}_2$ be an irreducible factor of $r_2$ in ${\overline{k(t)}}[x_2,y]$ such that $\tilde{r}_2(\bar{\alpha}^{Nn_0},\gamma^{-1})=0$. Applying Propositions~\ref{prop:resultant} and~\ref{prop:height2} again yields that
\begin{align*}
T(\tilde{r}_2)\leq T(r_2)&\leq Nn_0T(r_1)\leq Nn_0\left( \deg(G)T(f)+n(T(H)+T(G))\right).
\end{align*}
An argument similar to the above implies that $T(\tilde{r}_2)\leq \tilde{C}$.
Since ${\rm supp}({\rm div}(\bar{\alpha})^{-})\cap {\rm supp}({\rm div}(\beta)^{-})=\emptyset$, one has that $Nn_0{\rm div}(\bar{\alpha})^{-}=D^{-}$ and thus
$$
{\rm div}(\bar{\alpha}^{Nn_0})^{-}=Nn_0{\rm div}(\bar{\alpha})^{-}=D^{-}\leq {\rm div}(\gamma)^{+}={\rm div}(\gamma^{-1})^{-}.
$$
Furthermore since $a_1c_0+a_2\neq 0$, $\pi_{\mathfrak P}(\bar{\alpha})=c_0/(a_1c_0+a_2)$.
By Lemma~\ref{lm:pointsinequality},
\begin{align*}
Nn_0 T\left(\frac{c_0}{a_1c_0+a_2}\right) &=T(\pi_{\mathfrak P}(\bar{\alpha})^{Nn_0})\leq T(\pi_{\mathfrak P}(\gamma^{-1}))+T(\tilde{r}_2)\\
&=T(\pi_{\mathfrak P}(\gamma))+T(\tilde{r}_2) \leq T(\pi_{\mathfrak P}(\gamma))+\tilde{C},
\end{align*}
which together with (\ref{eq:betagamma}) gives
$$
N n_0T\left(\frac{c_0}{a_1c_0+a_2}\right)\leq (N+n) n_1T\left(c_1\right)+2\tilde{C}.
$$
Proposition~\ref{prop:heightproperty} implies that
\begin{align*}
\left(1-\frac{n}{N+n}\right)n_0T\left(\frac{c_0}{a_1c_0+a_2}\right)-\frac{2\tilde{C}}{N+n} &\leq \left(1-\frac{n}{N+n}\right)n_0T\left(c_0\right)-\frac{2\tilde{C}}{N+n}\\
&\leq n_1T(c_1).
\end{align*}
To prove the inequality in the opposite direction, consider
$$\tilde{D}=(N+n)n_0{\rm div}(\bar{\alpha})^{-}-Nn_1{\rm div}(\beta)^{-}.$$
Remark that $\deg(D^{+}+D^{-})=\deg(\tilde{D}^{+}+\tilde{D}^{-})$ and $T_{\bm \xi}(D)=T_{\bm \xi}(\tilde{D})$. We have the same bounds for elements in ${\mathfrak L}(\tilde{D})$. A similar argument then implies that
$$
n_1T(c_1)\leq \frac{N+n}{N}n_0T(c_0)+\frac{2\tilde{C}}{N}= \left(1+\frac{n}{N}\right)n_0T(c_0)+\frac{2\tilde{C}}{N}.
$$
Set $C=2\tilde{C}/N$. Then one gets the required inequalities.
\end{proof}
\section{Main results}
In this section, we always assume that $f(y,y')=\sum_{i=0}^d a_i(y)y'^i$ is irreducible over $k(t)$ and
$$
\ell={\rm m.s.index}(f)=\max_{i=0}^d \{ \deg(a_i)-2(d-i)\}>0.
$$
Pick $c\in k$ such that $a_0(c)\neq 0$.
Set $y=(cz+1)/z$. Then $y'=-z'/z^2$. Set
$$
b_i(z)=a_i((cz+1)/z)z^{\ell+2d-2i} (-1)^i
$$
where $i=0,\dots,d$. Then an easy calculation yields that
\begin{align*}
g(z,z')=\sum_{i=0}^d b_i(z)z'^i=z^{2d+\ell}f\left(\frac{cz+1}{z}, \frac{-z'}{z^2}\right).
\end{align*}
As $a_0(c)\neq 0$,
$
\deg\left(a_0((cz+1)/z)\,z^{\deg(a_0)}\right)=\deg(a_0).
$
This implies that $$\deg(b_0)=\ell+2d>2d.$$
Then ${\rm tdeg}(g)=2d+\ell$ because
$$
2d+\ell\leq {\rm tdeg}(g)=\max\{ \deg(b_i)+i\} \leq \max\{2d+\ell-i\}=2d+\ell.
$$
We claim that $g(z,z')$ is irreducible over $k(t)$. First of all, assume that $\ell=\deg(a_{i_0})-2(d-i_0)$ for some $0
\leq i_0 \leq d$. Then we have that
$$
b_{i_0}(0)=(-1)^{i_0}\cdot\mbox{the leading coefficient of $a_{i_0}(y)$}\neq 0.
$$
If $\gcd(b_0,\dots,b_d)\neq 1$ then the $b_i(z)$ have common zeroes, and none of the common zeroes is zero since $b_{i_0}(0)\neq 0$. It is easy to see that $(c\eta+1)/\eta$ is a common zero of all the $a_i(y)$ if $\eta$ is a common zero of all the $b_i$. This contradicts the fact that $\gcd(a_0,\dots,a_d)=1$. Secondly, if $g(z,z')$ had a factor with positive degree in $z'$ then $f(y,y')$ would have a factor with positive degree in $y'$, a contradiction. This proves our claim. Remark that $r(t)$ is a nontrivial rational solution of $g(z,z')=0$ if and only if $(cr(t)+1)/r(t)$ is a nontrivial rational solution of $f(y,y')=0$. The main result of this paper is the following theorem.
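To illustrate the transformation (our example, not taken from the text): let $f=y'-y^3$, so that $d=1$, $a_1=1$, $a_0=-y^3$ and $\ell=\max\{0-0,\,3-2\}=1>0$. Choosing $c=1$ (so $a_0(c)=-1\neq 0$), the substitution $y=(z+1)/z$ gives
$$
g(z,z')=z^{3}f\left(\frac{z+1}{z},\frac{-z'}{z^2}\right)=-zz'-(z+1)^3,
$$
with $b_1(z)=-z$, $b_0(z)=-(z+1)^3$, $\deg(b_0)=\ell+2d=3$ and ${\rm tdeg}(g)=2d+\ell=3$, as stated.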
\begin{theorem}
\label{thm:boundforsols}
Assume that $f(y,y')=0$ is a first order AODE with positive ${\rm m.s.index}$ and assume further that $f(y,y')$ is irreducible over $k(t)$. If $r(t)$ is a rational solution of $f(y,y')=0$, then
$$
\deg(r(t))\leq (54n^3+9n^2+2^{5n^2})^4 n^{5n^2+12}2^{11n^4+43n^2+34}T(f)
$$
where $n={\rm tdeg}(f)$.
\end{theorem}
\begin{proof}
We shall use the notations as above. Due to the above discussion, we only need to consider the differential equation $g(z,z')=0$. Denote $n={\rm tdeg}(f)$ and $d=\deg(f,y')$. One sees that $T(g)\leq T(f), d=\deg(g,z')$ and
$$
\deg(g,z)=2d+\ell={\rm tdeg}(g)\leq 3n.
$$
Suppose that
$$
g=h_1h_2\cdots h_m
$$
where $h_i$ is irreducible over ${\overline{k(t)}}$. Since $g$ is irreducible over $k(t)$, one has that all $h_i$ are conjugate to each other and then
\begin{align*}
\deg(h_i, z)=\deg(g,z)/m,\,\,\deg(h_i, z')=\deg(g,z')/m=d/m.
\end{align*}
By Corollary~\ref{cor:factor}, $T(h_i)\leq T(g)\leq T(f)$. Assume that $r(t)$ is a rational solution of $g(z,z')=0$ then $r(t)$ is a rational solution of all $h_i=0$. In particular, $h_1(r(t),r'(t))=0$. Denote $\tilde{n}={\rm tdeg}(h_1)$ and $\tilde{d}=\deg(h_1,z')$. Then
\begin{align*}
\tilde{n}& ={\rm tdeg}(g)/m=(2d+\ell)/m\leq 3n/m \\
\tilde{d}&=\deg(g,z')/m=d/m.
\end{align*}
Set $N=\tilde{n}^2$. By Theorem~\ref{thm:boundpoints1} and Remark~\ref{rem:heights}, one has that
$$
\frac{N}{N+\tilde{n}}\frac{\deg(g,z)}{\deg(g,z')} \deg(r(t))-\frac{C}{\tilde{d}}\leq \deg(r'(t))
$$
where
\begin{align*}
C=(2\tilde{n}^3+\tilde{n}^2+2^{s-2})^4\tilde{n}^{s+7}(\tilde{n}+1)^4 2^{\frac{s^2}{2}+\frac{15s}{2}+10}T(f).
\end{align*}
Note that $s$ is the number of quadratic transformations applied to transfer $h_1=0$ to an algebraic curve with only ordinary singularities. Due to Theorem 2 in Chapter 7 of \cite{fulton}, $s$ can be chosen to be an integer not greater than
$$(\tilde{n}-1)(\tilde{n}-2)/2\leq (3n-1)(3n-2)/2\leq 9n^2/2.$$
Remark that $\deg(r'(t))\leq 2\deg(r(t))$. Thus
\begin{equation}
\label{eq:leftinequality}
\left(\frac{N}{N+\tilde{n}}\frac{\deg(g,z)}{\deg(g,z')}-2\right)\deg(r(t))\leq \frac{C}{\tilde{d}}.
\end{equation}
As $m$ divides both $\deg(g,z)$ and $\deg(g,z')$, $m$ divides $\ell$. Set $\ell=m\bar{\ell}$. Then
\begin{align*}
\frac{N}{N+\tilde{n}}\frac{\deg(g,z)}{\deg(g,z')}-2=\frac{\tilde{n}^2(2d+\ell)}{(\tilde{n}^2+\tilde{n})d}-2
=\frac{\tilde{n}\ell-2d}{(\tilde{n}+1)d}&\geq \frac{\bar{\ell}\deg(g,z)-2d}{(\tilde{n}+1)d}\\
&\geq \frac{1}{(\tilde{n}+1)d}.
\end{align*}
This together with (\ref{eq:leftinequality}) implies that
\begin{align*}
\deg(r(t))&\leq m(\tilde{n}+1) C \\
&\leq 4n(2\tilde{n}^3+\tilde{n}^2+2^{s-2})^4\tilde{n}^{s+7}(\tilde{n}+1)^42^{\frac{s^2}{2}+\frac{15s}{2}+10}T(f)\\
&\leq n(54n^3+9n^2+2^{s-2})^4 (4n)^{s+11}2^{\frac{s^2}{2}+\frac{15s}{2}+12}T(f)\\
&\leq (54n^3+9n^2+2^{s-2})^4 n^{s+12} 2^{\frac{s^2}{2}+\frac{19s}{2}+34}T(f)\\
&\leq (54n^3+9n^2+2^{5n^2})^4 n^{5n^2+12}2^{11n^4+43n^2+34}T(f).
\end{align*}
The second inequality holds because $ m(\tilde{n}+1)\leq 3n+m\leq 4n$.
\end{proof}
\begin{remark}
Theorem~\ref{thm:boundforsols} implies that an autonomous first order AODE $f=0$ with positive ${\rm m.s.index}$ has no nontrivial rational solutions, because $T(f)=0$. In fact, suppose that $f=0$ has a nontrivial rational solution. Then it will have infinitely many rational solutions. By Corollary 4.6 of \cite{feng-feng}, $f=0$ has no movable singularities. However, as $f=0$ has positive ${\rm m.s.index}$, Fuchs' criterion implies that $f=0$ has movable singularities, a contradiction.
\end{remark}
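For a concrete instance (our illustration): $f=y'-y^3$ is autonomous with ${\rm m.s.index}(f)=1>0$ and $T(f)=0$; its only rational solution is $y=0$, all other solutions being $y(t)=\pm(c-2t)^{-1/2}$, which are not rational functions of $t$.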
In \cite{vo-grasegger-winkler1}, the authors developed two algorithms to compute rational solutions of maximally comparable first-order AODEs and first order quasi-linear AODEs respectively. Let us first recall the definition of maximally comparable first order AODEs. Suppose that $f=\sum_{i,j}a_{i,j}y^iy'^j$ is a differential polynomial over $k(t)$. Denote
$$
S(f)=\{(i,j)\in \N^2 \mid a_{i,j}\neq 0\}.
$$
If there is $(i_0,j_0)\in S(f)$ satisfying that $i_0+j_0\geq i+j$ and $i_0+2j_0>i+2j$ for every $(i,j)\in S(f)$, then we say that $f$ is maximally comparable. The following example shows that these algorithms cannot deal with all first order AODEs with positive ${\rm m.s.index}$.
\begin{example}
Let
$$
f=yy'^m+y^{2m+1}+t
$$
where $m\geq 1$. Then $S(f)=\{(1,m),(2m+1,0),(0,0)\}$. The only candidate for $(i_0,j_0)$ is $(2m+1,0)$, since $2m+1\geq 1+m$; but $2m+1+2\cdot 0=1+2\cdot m$, so the strict inequality fails and $f$ is not maximally comparable.
On the other hand, we have that
$
{\rm m.s.index}(f)=1>0.
$
\end{example}
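For the reader's convenience, the computation behind ${\rm m.s.index}(f)=1$ (our addition): here $d=m$, $a_m(y)=y$ and $a_0(y)=y^{2m+1}+t$, so
$$
{\rm m.s.index}(f)=\max\{\deg(a_m)-2(d-m),\,\deg(a_0)-2d\}=\max\{1,\,(2m+1)-2m\}=1.
$$
Hence Theorem~\ref{thm:boundforsols} applies to $f$ although $f$ is not maximally comparable.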
\section*{References}
\bibliographystyle{plain}
\section{Type derivation for \refToExample{Five}}\label{app:derivation}
{In \refToFigure{TypingFive} we
give the type derivation that shows that the expression ${\tt e}$ of
Example~\ref{ex:Five} is well-typed, where
$\Gamma_1=\TypeDec{{\tt c1}}{{\tt C}},\TypeDec{{\tt outer}}{{\tt C}}$,
$\Gamma_2=\TypeDec{{\tt c2}}{{\tt C}},\TypeDec{{\tt inner}}{\Type{\terminale{a}}{{\tt C}}}$, and
$\Gamma_3=\TypeDec{{\tt c3}}{{\tt C}},\TypeDec{{\tt r}}{{\tt C}}$ are the type contexts
corresponding to the top level, outer, and inner block, respectively.}
\begin{figure}[t]
{\small
\begin{center}
\begin{math}
\begin{array}{l}
{\cal D}_1:\hskip 0.4em{
\prooftree
{\prooftree
\begin{array}{c}
\TypeCheck{\Gamma}{{\tt c1}}{{\tt C}}{\{{\tt c1},\terminale{res}\}}\\
\TypeCheck{\Gamma}{{\tt c2}}{{\tt C}}{\{{\tt c2},\terminale{res}\}}
\end{array}
\justifies
\TypeCheck{\Gamma}{\MethCall{{\tt c2}}{{\tt mix}}{{\tt c1}}}{{\tt C}}{\{{\tt c1},{\tt c2},\terminale{res}\}}
\endprooftree
}
\justifies
\TypeCheck{\Gamma}{{\MethCall{\MethCall{{\tt c2}}{{\tt mix}}{{\tt c1}}}{{\tt clone}}{}}}{{\tt C}}{\{{\tt c1},{\tt c2}\}}
\endprooftree
}\hskip 1.5em
{\cal D}_2:\hskip 0.4em
{\prooftree
\begin{array}{c}
\TypeCheck{\Gamma}{{\tt r}}{{\tt C}}{\{{\tt r},\terminale{res}\}}\\
\TypeCheck{\Gamma}{{\tt c3}}{{\tt C}}{\{{\tt c3},\terminale{res}\}}
\end{array}\justifies
\TypeCheck{\Gamma}{{\MethCall{{\tt r}}{{\tt mix}}{{\tt c3}}}}{{\tt C}}{\{{\tt c3},{\tt r},\terminale{res}\}}
\endprooftree
} \\ \\ \\
{\cal D}_3:\hskip 0.4em\prooftree
\begin{array}{c}
\prooftree
\TypeCheck{\Gamma}{{\tt c3}}{{\tt C}}{\{{\tt c3},\terminale{res}\}}
\justifies
\TypeCheck{\Gamma}{{\ConstrCall{{\tt C}}{{\tt c3}}}}{{\tt C}}{\{{\tt c3},\terminale{res}\}}
\endprooftree
\hskip 1.5em
{\cal D}_1
\hskip 1.5em
{\cal D}_2
\end{array}
\justifies
\TypeCheck{{\SubstFun{\Gamma_1}{\Gamma_2}}}{\Block{\Dec{{\tt C}}{{\tt c3}}{\ConstrCall{{\tt C}}{{\tt c3}}}\,\Dec{{\tt C}}{{\tt r}}{\MethCall{\MethCall{{\tt c2}}{{\tt mix}}{{\tt c1}}}{{\tt clone}}{}}}{\MethCall{{\tt r}}{{\tt mix}}{{\tt c3}}}}{{\tt C}}{\{{\tt c1},{\tt c2}\}}
\endprooftree
\\ \\ \\
{\cal D}_4:\hskip 0.4em\prooftree
\prooftree
\TypeCheck{\SubstFun{\Gamma_1}{\Gamma_2}}{{\tt c2}}{{\tt C}}{\{{\tt c2},\terminale{res}\}}
\justifies
\TypeCheck{\SubstFun{\Gamma_1}{\Gamma_2}}{{\ConstrCall{{\tt C}}{{\tt c2}}}}{{\tt C}}{\{{\tt c2},\terminale{res}\}}
\endprooftree
\hskip 0.4em
{\cal D}_3
\hskip 0.4em
\prooftree
\begin{array}{c}
\TypeCheck{\SubstFun{\Gamma_1}{\Gamma_2}}{{\tt inner}}{{\tt C}}{\epsilon}\\
\TypeCheck{{\SubstFun{\Gamma_1}{\Gamma_2}}}{{\tt c2}}{{\tt C}}{\{{\tt c2},\terminale{res}\}}\\[0.5ex]
\end{array}\justifies
\TypeCheck{\SubstFun{\Gamma_1}{\Gamma_2}}{\MethCall{{\tt inner}}{{\tt mix}}{{\tt c2}}}{{\tt C}}\{{\tt c2},\terminale{res}\}
\endprooftree
\justifies
\TypeCheck{\Gamma_1}{\Block{\Dec{{\tt C}}{{\tt c2}}{\ConstrCall{{\tt C}}{{\tt c2}}}\,\Dec{\Type{\terminale{a}}{{\tt C}}}{{\tt inner}}{{{\tt e}^\texttt{i}}}}{\MethCall{{\tt inner}}{{\tt mix}}{{\tt c2}}}}{{\tt C}}{\{{\tt c1},\terminale{res}\}}
\endprooftree\\ \\ \\
{\cal D}:\hskip 0.4em\prooftree
\prooftree
\TypeCheck{\Gamma_1}{{\tt c1}}{{\tt C}}{\{{\tt c1},\terminale{res}\}}
\justifies
\TypeCheck{\Gamma_1}{\ConstrCall{{\tt C}}{{\tt c1}}}{{\tt C}}{\{{\tt c1},\terminale{res}\}}
\endprooftree
\hskip 1.5em
{\cal D}_4
\hskip 1.5em
\TypeCheck{\Gamma_1}{{\tt outer}}{{\tt C}}{\{{\tt outer},\terminale{res}\}}
\justifies
\TypeCheck{}{\Block{\Dec{{\tt C}}{{\tt c1}}{\ConstrCall{{\tt C}}{{\tt c1}}}\,\Dec{{\tt C}}{{\tt outer}}{{{\tt e}^\texttt{o}}}}{{\tt outer}}}{{\tt C}}{\epsilon}
\endprooftree\\ \\
\end{array}
\end{math}
\end{center}
}
\hrulefill
{\small
\begin{itemize}
\item ${\cal D}_3$ yields ${{\tt e}^\texttt{ia}}=\BlockLab{\Dec{{\tt C}}{{\tt c3}}{\ConstrCall{{\tt C}}{{\tt c3}}}\,\Dec{{\tt C}}{{\tt r}}{\MethCall{\MethCall{{\tt c2}}{{\tt mix}}{{\tt c1}}}{{\tt clone}}{}}}{\MethCall{{\tt r}}{{\tt mix}}{{\tt c3}}}{\{{\tt r},{\tt c3}\}}$
\item ${\cal D}_4$ yields ${{\tt e}^\texttt{oa}}=\BlockLab{\Dec{{\tt C}}{{\tt c2}}{\ConstrCall{{\tt C}}{{\tt c2}}}\,\Dec{\Type{\terminale{a}}{{\tt C}}}{{\tt inner}}{{{\tt e}^\texttt{i}}}}{\MethCall{{\tt inner}}{{\tt mix}}{{\tt c2}}}{\{{\tt c2}\}}$
\item ${\cal D}$ yields
${{\tt e}'=}\BlockLab{\Dec{{\tt C}}{{\tt c1}}{\ConstrCall{{\tt C}}{{\tt c1}}}\,\Dec{{\tt C}}{{\tt outer}}{{{\tt e}^\texttt{o}}}}{{\tt outer}}{\{{\tt outer}\}}$
\end{itemize}
}
\caption{Type derivation for \refToExample{Five}}\label{fig:TypingFive}
\end{figure}
Derivations ${\cal D}_1$ and ${\cal D}_2$ end with an application of rule
\rn{T-Invk}. Consider ${\cal D}_2$. The method ${\tt mix}$ produces sharing between
its receiver, parameter, and result. Then the call of ${\tt mix}$ with receiver
${\tt r}$ and parameter ${\tt c3}$ returns a result connected with both these
variables. The call of ${\tt mix}$ with receiver ${\tt c2}$ and parameter ${\tt c1}$ in the
derivation ${\cal D}_1$ does the same with ${\tt c2}$ and ${\tt c1}$. Instead, the call of
${\tt clone}$ does not produce any connection between its receiver and result. So the
call of ${\tt clone}$ with receiver $\MethCall{{\tt c2}}{{\tt mix}}{{\tt c1}}$ in the derivation
${\cal D}_1$ does not cause connections for $\terminale{res}$.
The type derivation ${\cal D}_3$ justifies the judgment
$\TypeCheck{\SubstFun{\Gamma_1}{\Gamma_2}}{{{\tt e}^\texttt{i}}}{{\tt C}}{\{{\tt c1},{\tt c2}\}}$ where ${{\tt e}^\texttt{i}}$
is the inner block (the initialization expression of ${\tt inner}$). {The effects of
the evaluation of initialization expressions and body are to mix the external
variables ${\tt c1}$ and ${\tt c2}$, and the local variables ${\tt r}$ and ${\tt c3}$.
Moreover, the result is only connected with the two local variables. Hence,
the inner block denotes a capsule, so it can be used as initialization
expression of the affine variable ${\tt inner}$.} The sharing relation resulting from
the evaluation of the declarations and the body (before removing the local
variables ${\tt r}$ and ${\tt c3}$) is represented by
$\{{\tt c1},{\tt c2}\}\,\{{\tt c3},{\tt r},\terminale{res}\}$.\label{sequence-two}
The annotation for the block, i.e., the set of local variables connected to the
result, is $\{{\tt r},{\tt c3}\}$, so, when applying the congruence relation, the
variables ${\tt r}$ and ${\tt c3}$ cannot be moved outside this block.
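In terms of the operations on sharing relations used above (our reconstruction of the capsule check): summing the effects of declarations and body gives the relation $\{{\tt c1},{\tt c2}\}\,\{{\tt c3},{\tt r},\terminale{res}\}$, and
$$
\Closure{\terminale{res}}{(\Remove{\{{\tt c1},{\tt c2}\}\,\{{\tt c3},{\tt r},\terminale{res}\}}{\{{\tt r},{\tt c3}\}})}=\{\terminale{res}\},
$$
so after discarding the local variables the result is connected with no external variable, as required for a capsule.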
The type derivation ${\cal D}_4$ justifies the judgment
{$\TypeCheck{{\Gamma_1}}{{{\tt e}^\texttt{o}}}{{\tt C}}{\{{\tt c1},\terminale{res}\}}$, where ${{\tt e}^\texttt{o}}$ is the outer
block (the initialization expression of ${\tt outer}$). The effects of the evaluation
of initialization expressions and body are to mix the external variable ${\tt c1}$
with the local variable ${\tt c2}$ (this effect propagates from the inner block).
Moreover, the result is connected with variable ${\tt c2}$. Hence, the result turns
out to be connected with the external variable ${\tt c1}$ as well. Therefore, this
block is not a capsule, and could not be used to initialize an affine variable.
Note that the variable ${\tt inner}$, being affine, is not in the domain of the
sharing relation. Indeed, it will be substituted with the result of the
evaluation of ${{\tt e}^\texttt{i}}$ and so it will disappear.}
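Spelling out the analogous computation for the outer block (again our reconstruction): the class of the inner block's result merges with the effect $\{{\tt c2},\terminale{res}\}$ of the body, giving $\{{\tt c1},{\tt c2},\terminale{res}\}$, and
$$
\Closure{\terminale{res}}{(\Remove{\{{\tt c1},{\tt c2},\terminale{res}\}}{\{{\tt c2}\}})}=\{{\tt c1},\terminale{res}\}\neq\{\terminale{res}\},
$$
so the result remains connected with the external variable ${\tt c1}$ and the capsule condition fails.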
Finally, ${\cal D}$ is the derivation for the expression ${\tt e}$. The block is a
closed expression, and closed expressions are capsules. The block is annotated
with the local variable ${\tt outer}$, which is connected with its result.
\section{Proofs}\label{app:proofs}
\noindent
{\bf Proposition \ref{prop:value}.}
{\it If $\metavariable{v}$ is a value, then there exists $\metavariable{v}'$ such that $\congruence{\metavariable{v}}{\metavariable{v}'}$ and $\metavariable{v}'$ is in canonical form.}
\begin{proof}
By structural induction on values.\\
\underline{Consider $\ConstrCall{\metavariable{C}}{\metavariable{vs}}$}.
By inductive hypotheses on $\metavariable{vs}$ we
have that $\congruence{\ConstrCall{\metavariable{C}}{\metavariable{vs}}}{\ConstrCall{\metavariable{C}}{\metavariable{vs}'}}$ for some
$\metavariable{vs}'$ in canonical form.\\
\PG{By induction on the number $n$ of $\metavariable{v}\in\metavariable{vs}'$ such that $\metavariable{v}$ is not a
variable, i.e., is a block value, we show that
$\congruence{\ConstrCall{\metavariable{C}}{\metavariable{vs}'}}{{\BlockLab{\metavariable{dvs}}{\ConstrCall{\metavariable{C}}{\metavariable{xs}}}{\X}}}$
for some $\metavariable{dvs}$, $\X$ and $\metavariable{xs}$.\\
If $n=0$, then
$\congruence{\ConstrCall{\metavariable{C}}{\metavariable{vs}'}}{\BlockLab{\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\metavariable{vs}'}}}{\metavariable{x}}{{\{\metavariable{x}\}}}}$
where $\metavariable{x}$ is a fresh variable using congruence rule \rn{new}. \\
If $n>0$, then
$\ConstrCall{\metavariable{C}}{\metavariable{vs}'}=\ConstrCall{\metavariable{C}}{\metavariable{xs},\BlockLab{\metavariable{dvs}}{\metavariable{x}}{\X},\metavariable{vs}''}$ where
$\metavariable{dvs}\neq\epsilon$ is in canonical form. We may assume, by $\alpha$-renaming, that
$(\FV{\metavariable{vs}''}\cup\{\metavariable{xs}\})\cap\dom{\metavariable{dvs}}=\emptyset$. Using rule \rn{val-ctx} we
have that $\congruence{\ConstrCall{\metavariable{C}}{\metavariable{vs}'}}{\BlockLab{\metavariable{dvs}}{\metavariable{v}'}{\X}}$ where
$\metavariable{v}'=\ConstrCall{\metavariable{C}}{\metavariable{xs},\metavariable{x},\metavariable{vs}''}$. Since $\metavariable{v}'$ has $n-1$ arguments which
are not variables, by the inductive hypothesis we have that
$\congruence{\ConstrCall{\metavariable{C}}{\metavariable{xs},\metavariable{x},\metavariable{vs}''}}{\BlockLab{\metavariable{dvs}'}{\ConstrCall{\metavariable{C}}{\metavariable{xs},\metavariable{x},\metavariable{ys}}}{\Y}}$.
Therefore $\congruence{\ConstrCall{\metavariable{C}}{\metavariable{vs}'}}{\BlockLab{\metavariable{dvs}}{\BlockLab{\metavariable{dvs}'}{\ConstrCall{\metavariable{C}}{\metavariable{xs},\metavariable{x},\metavariable{ys}}}{\Y}}{\X}}$.\\
We may assume, by $\alpha$-renaming, that
$(\FV{\metavariable{dvs}}\cup\dom{\metavariable{dvs}})\cap\dom{\metavariable{dvs}'}=\emptyset$.
Using congruence rule \rn{body}, we get
$\congruence{\ConstrCall{\metavariable{C}}{\metavariable{vs}'}}
{\BlockLab{\metavariable{dvs}\ \metavariable{dvs}'}{\ConstrCall{\metavariable{C}}{\metavariable{xs},\metavariable{x},\metavariable{ys}}}{\X\cup\Y}}$.
Applying congruence rule \rn{new} followed by \rn{body} we have that
$\congruence{\ConstrCall{\metavariable{C}}{\metavariable{vs}'}}
{\BlockLab{\metavariable{dvs}\ \metavariable{dvs}'\ \Dec{\metavariable{C}}{\metavariable{z}}{\ConstrCall{\metavariable{C}}{\metavariable{xs},\metavariable{x},\metavariable{ys}}}}{\metavariable{z}}{\X\cup\Y\cup\{\metavariable{z}\}}}$ with $\metavariable{z}$ a fresh variable.\\
\underline{Consider $\BlockLab{\metavariable{dvs}}{\metavariable{v}}{\X}$}.
By induction hypothesis on values $\congruence{\BlockLab{\metavariable{dvs}}{\metavariable{v}}{\X}}{\BlockLab{\metavariable{dvs}}{\metavariable{v}'}{\X}}$
with $\metavariable{v}'$ in canonical form.\\
By induction on the number $n$ of declarations $\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\metavariable{vs}}}\in\metavariable{dvs}$ which are not in canonical
form we show that $\congruence{\BlockLab{\metavariable{dvs}}{\metavariable{v}'}{\X}}{\BlockLab{\metavariable{dvs}'}{\metavariable{v}'}{\Y}}$ where all the $\metavariable{dvs}'$
are in canonical form. \\
If $n=0$, then $\metavariable{dvs}$ is in canonical form. We have two cases: either $\metavariable{v}'=\metavariable{x}$ or $\metavariable{v}'=\BlockLab{\metavariable{dvs}'}{\metavariable{y}}{\Y}$
with $\metavariable{dvs}'$ in canonical form.
In the first case $\congruence{\BlockLab{\metavariable{dvs}}{\metavariable{v}}{\X}}{\BlockLab{\metavariable{dvs}}{\metavariable{x}}{\X}}$ and we are done.
In the second case $\congruence{\BlockLab{\metavariable{dvs}}{\metavariable{v}}{\X}}{\BlockLab{\metavariable{dvs}}{\BlockLab{\metavariable{dvs}'}{\metavariable{y}}{\Y}}{\X}}$.
We may assume, by $\alpha$-renaming, that $\dom{\metavariable{dvs}'}\cap(\FV{\metavariable{dvs}}\cup\dom{\metavariable{dvs}})=\emptyset$,
so congruence rule \rn{body} can be applied to obtain
$\congruence{\BlockLab{\metavariable{dvs}}{\metavariable{v}}{\X}}{\BlockLab{\metavariable{dvs}\,\metavariable{dvs}'}{\BlockLab{}{\metavariable{y}}{\emptyset}}{\X\cup\Y}}$
and with rule \rn{{block-elim}} we obtain
$\congruence{\BlockLab{\metavariable{dvs}}{\metavariable{v}}{\X}}{\BlockLab{\metavariable{dvs}\,\metavariable{dvs}'}{{\metavariable{y}}}{\X\cup\Y}}$.\\
If $n>0$, then we may assume that
$\metavariable{dvs}=\metavariable{dvs}_1\,\Dec{\metavariable{C}}{\metavariable{z}}{\ConstrCall{\metavariable{C}}{\metavariable{vs}}}\,\metavariable{dvs}_2$ with $\metavariable{dvs}_1$ in
canonical form, the values in $\metavariable{vs}$ in canonical form, and some $\metavariable{v}\in\metavariable{vs}$
not a variable. As in the proof for constructor values, we can prove that
$\congruence{\ConstrCall{\metavariable{C}}{\metavariable{vs}}}{\BlockLab{\metavariable{dvs}'}{\ConstrCall{\metavariable{C}}{\metavariable{ys}}}{\Y}}$
for some $\Y$, $\metavariable{ys}$ and $\metavariable{dvs}'$ in canonical form. (We just omit the application of the
congruence rule \rn{new} from the proof.) Therefore
$\congruence{\BlockLab{\metavariable{dvs}}{\metavariable{v}}{\X}}{\BlockLab{\metavariable{dvs}_1\,\Dec{\metavariable{C}}{\metavariable{z}}{\BlockLab{\metavariable{dvs}'}{\ConstrCall{\metavariable{C}}{\metavariable{ys}}}{\Y}}\,\metavariable{dvs}_2}{\metavariable{v}}{\X}}$.
We may assume, by $\alpha$-renaming, that
$\dom{\metavariable{dvs}'}\cap(\FV{\metavariable{dvs}_1\,\metavariable{dvs}_2\,\metavariable{v}}\cup\dom{\metavariable{dvs}_1\,\metavariable{dvs}_2})=\emptyset$.
Therefore, by applying congruence rule \rn{dec} and \rn{block-elim} we get
\begin{center}
$\congruence{\BlockLab{\metavariable{dvs}_1\,\Dec{\metavariable{C}}{\metavariable{z}}{\BlockLab{\metavariable{dvs}'}{\ConstrCall{\metavariable{C}}{\metavariable{vs}}}{\Y}}\,\metavariable{dvs}_2}{\metavariable{v}}{\X}}{\metavariable{v}'}$
\end{center} where
$\metavariable{v}'=\BlockLab{\metavariable{dvs}_1\,\metavariable{dvs}'\,\Dec{\metavariable{C}}{\metavariable{z}}{\ConstrCall{\metavariable{C}}{\metavariable{ys}}}\,\metavariable{dvs}_2}{\metavariable{v}}{\X}$.
Since $\metavariable{dvs}'$ is in canonical form, the number of declarations which are not
in canonical form in $\metavariable{v}'$ is $n-1$, hence the thesis holds by the inductive
hypothesis.
}
\end{proof}
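For example (our illustration of the two rewriting steps used in this proof, with $\metavariable{z}$ fresh): for a constructor value with a single block argument,
$$
\congruence{\ConstrCall{\metavariable{C}}{\BlockLab{\metavariable{dvs}}{\metavariable{x}}{\X}}}{\BlockLab{\metavariable{dvs}}{\ConstrCall{\metavariable{C}}{\metavariable{x}}}{\X}}
$$
by rule \rn{val-ctx}, and then applying \rn{new} followed by \rn{body} yields the canonical form $\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{C}}{\metavariable{z}}{\ConstrCall{\metavariable{C}}{\metavariable{x}}}}{\metavariable{z}}{\X\cup\{\metavariable{z}\}}$.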
\PG{The following lemma shows that scope extrusion preserves types. In particular, annotations on blocks
associated to capsule variables prevent extrusion of declarations that may be connected to the result
of the block. The lemma is the main result needed to prove that congruence preserves typability.}
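For a concrete instance of the role of annotations (our illustration): in the declaration $\Dec{\Type{\terminale{a}}{{\tt C}}}{{\tt x}}{\BlockLab{\Dec{{\tt C}}{{\tt y}}{\ConstrCall{{\tt C}}{{\tt y}}}}{{\tt y}}{\{{\tt y}\}}}$ the local declaration of ${\tt y}$ belongs to the annotation of the block, so condition (iv) of the lemma below forbids extruding it: moving it outside would make the result of the capsule initializing the affine variable ${\tt x}$ reachable through ${\tt y}$.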
\begin{lemma}\label{lemma:extrusion}
Let
\begin{enumerate}[(i)]
\item $\metavariable{d}=\Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{dvs}_1\,\metavariable{ds}_2}{\metavariable{e}}{\X}}$ and
\item $\metavariable{ds}=\metavariable{dvs}_1\,\Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{ds}_2}{\metavariable{e}}{\Y}}$ and
\item $\dom{\metavariable{ds}_2}\cap\FV{\metavariable{dvs}_1}=\emptyset$ and
\item if $\mu=\terminale{a}$, then $D_1\cap\X=\emptyset$, where $D_1=\dom{\metavariable{dvs}_1}$.
\end{enumerate}
$\TypeCheckDecs{{\Gamma}[{\metavariable{x}{:}\Type{\mu}{\metavariable{C}}}]}{\metavariable{d}}{{\cal S}}$ if and only if
$\TypeCheckDecs{{\Gamma}[{\metavariable{x}{:}\Type{\mu}{\metavariable{C}}},\Gamma_{\metavariable{dvs}_1}]}{\metavariable{ds}}{{\cal S}'}$ where ${\cal S}=\Remove{{\cal S}'}{D_1}$.
\end{lemma}
\begin{proof}
Let $D_1=\dom{\metavariable{dvs}_1}$ and $D_2=\dom{\metavariable{ds}_2}$. \\
\underline{We first show the ``only if'' implication}.\\
Let $\TypeCheckDecs{\Gamma[{\metavariable{x}{:}\Type{\mu}{\metavariable{C}}}]}{\metavariable{d}}{{\cal S}}$ and
$\Gamma_1={\Gamma}[{\metavariable{x}{:}\Type{\mu}{\metavariable{C}}}][\Gamma_{\metavariable{dvs}_1},\Gamma_{\metavariable{ds}_2}]$. From \refToLemma{invBlock} we have that
\begin{enumerate} [(1)]
\item ${\cal S}=\SubstEqRel{{\cal S}_x}{\metavariable{x}}{\terminale{res}}$ where ${\cal S}_x=\Remove{({\cal S}_1+{\cal S}_2+{\cal S}_e)}{(D_1\cup D_2)}$
\item $\TypeCheckDecs{\Gamma_1}{\metavariable{dvs}_1}{{\cal S}_1}$ and $\TypeCheckDecs{\Gamma_1}{\metavariable{ds}_2}{{\cal S}_2}$
and $\TypeCheck{\Gamma_1}{\metavariable{e}}{\metavariable{C}}{{\cal S}_e}$
\item $\X=\Closure{\terminale{res}}{({\cal S}_1+{\cal S}_2+{\cal S}_e)}\cap(D_1\cup D_2)$
\item if $\mu=\terminale{a}$, then $\IsCapsule{{\cal S}_x}$
\end{enumerate}
By $\alpha$-renaming
we may assume that $\metavariable{x}\not\in(D_1\cup D_2)$, so, letting $\Gamma'_1={\Gamma}[{\metavariable{x}{:}\Type{\mu}{\metavariable{C}}},\Gamma_{\metavariable{dvs}_1}][\Gamma_{\metavariable{ds}_2}]$, we have $\Gamma_1=\Gamma'_1$. From (2) we derive
\begin{enumerate} [(a)]
\item $\TypeCheckDecs{\Gamma'_1}{\metavariable{ds}_2}{{\cal S}_2}$
and $\TypeCheck{\Gamma'_1}{\metavariable{e}}{\metavariable{C}}{{\cal S}_e}$
\end{enumerate}
and applying rule \rn{T-block} to (a)
\begin{enumerate} [(a)]\addtocounter{enumi}{1}
\item $\TypeCheck{{\Gamma}[{\metavariable{x}{:}\Type{\mu}{\metavariable{C}}},\Gamma_{\metavariable{dvs}_1}]}{\BlockLab{\metavariable{ds}_2}{\metavariable{e}}{\Y}}{\metavariable{C}}{{\cal S}'_x}$ where
\item ${\cal S}'_x=\Remove{({\cal S}_2+{\cal S}_e)}{D_2}$
\item $\Y=\Closure{\terminale{res}}{{({\cal S}_2+{\cal S}_e)}}\cap{{D_2}}$
\end{enumerate}
If $\mu=\terminale{a}$, from (iv) we get ${D_1}\cap(\Closure{\terminale{res}}{({\cal S}_1+{\cal S}_2+{\cal S}_e)}\cap(D_1\cup D_2))=\emptyset$.
Therefore, $\Closure{\terminale{res}}{{\cal S}_x}=\Closure{\terminale{res}}{({\cal S}_1+{\cal S}_2+{\cal S}_e)}\setminus(D_1\cup D_2)=\Closure{\terminale{res}}{({\cal S}_1+{\cal S}_2+{\cal S}_e)}\setminus{D_2}$. From (d)
$\Closure{\terminale{res}}{{\cal S}'_x}=\Closure{\terminale{res}}{({\cal S}_2+{\cal S}_e)}\setminus{D_2}\subseteq\Closure{\terminale{res}}{({\cal S}_1+{\cal S}_2+{\cal S}_e)}\setminus{D_2}$. Therefore from (4), $\IsCapsule{{\cal S}_x}$ and we get
that $\IsCapsule{{\cal S}'_x}$. From (b)
\begin{enumerate} [(a)]\addtocounter{enumi}{4}
\item $\TypeCheckDecs{{\Gamma}[{\metavariable{x}{:}\Type{\mu}{\metavariable{C}}},\Gamma_{\metavariable{dvs}_1}]}{\Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{ds}_2}{\metavariable{e}}{\Y}}}{\SubstEqRel{{\cal S}'_x}{\metavariable{x}}{\terminale{res}}}$
\end{enumerate}
From (iii) and \refToLemma{weakening}.2 and (2)
\begin{enumerate} [(a)]\addtocounter{enumi}{5}
\item $\TypeCheckDecs{{\Gamma}[{\metavariable{x}{:}\Type{\mu}{\metavariable{C}}},\Gamma_{\metavariable{dvs}_1}]}{\metavariable{dvs}_1}{{\cal S}_1}$
\end{enumerate}
Therefore\\
\centerline{$\TypeCheckDecs{{\Gamma}[{\metavariable{x}{:}\Type{\mu}{\metavariable{C}}},\Gamma_{\metavariable{dvs}_1}]}{\metavariable{dvs}_1\,\Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{ds}_2}{\metavariable{e}}{\X\setminus{D_1}}}}{{\cal S}'={\cal S}_1+\SubstEqRel{{\cal S}'_x}{\metavariable{x}}{\terminale{res}}}$.}\\
Let ${\cal S}''={\cal S}_{2}+{\cal S}_e$. From (iii) and \refToProp{invTyping1} we have that
$\Remove{{\cal S}_1}{D_2}={\cal S}_1$ and so by \refToProp{lessSrRel}.\ref{p3}
\begin{enumerate} [(a)]\addtocounter{enumi}{6}
\item $\Remove{({\cal S}_1+{\cal S}'')}{D_2}=\Remove{{\cal S}_1}{D_2}+\Remove{{\cal S}''}{D_2}={\cal S}_1+\Remove{{\cal S}''}{D_2}$
\end{enumerate}
Therefore\\
\centerline{$
\begin{array}{lcll}
{\cal S}&=& \SubstEqRel{(\Remove{({\cal S}_1+{\cal S}'')}{(D_1\cup D_2)})}{\metavariable{x}}{\terminale{res}}
\\
&=& \SubstEqRel{(\Remove{(\Remove{({\cal S}_1+{\cal S}'')}{D_2})}{D_1})}{\metavariable{x}}{\terminale{res}}& \\
&=& \SubstEqRel{(\Remove{({\cal S}_1+\Remove{{\cal S}''}{D_2})}{D_1})}{\metavariable{x}}{\terminale{res}}&\text{by (g)} \\
&=& \Remove{(\SubstEqRel{({\cal S}_1+\Remove{{\cal S}''}{D_2})}{\metavariable{x}}{\terminale{res}})}{D_1}&\text{since $\{\metavariable{x},\terminale{res}\}\cap D_1=\emptyset$} \\
&=& \Remove{{\cal S}_1+(\SubstEqRel{(\Remove{{\cal S}''}{D_2})}{\metavariable{x}}{\terminale{res}})}
{D_1}&\text{since $\Closure{\terminale{res}}{{\cal S}_1}=\{\terminale{res}\}$ and} \\
& & &
\quad\Closure{\metavariable{x}}{{\cal S}_1}=\{\metavariable{x}\} \\
&=& \Remove{{\cal S}_1+(\SubstEqRel{{\cal S}'_x}{\metavariable{x}}{\terminale{res}})}
{D_1}&\text{since ${\cal S}'_x=\Remove{{\cal S}''}{\metavariable{D}_2}$} \\
&=& \Remove{{\cal S}'}{D_1}
\end{array}
$}
\underline{We now show the ``if'' implication}.\\
Let $\TypeCheckDecs{\Gamma[{\metavariable{x}{:}\Type{\mu}{\metavariable{C}}},\Gamma_{\metavariable{dvs}_1}]}{\metavariable{ds}}{{\cal S}'}$ and
$\Gamma_1={{\Gamma}[{\metavariable{x}{:}\Type{\mu}{\metavariable{C}}},\Gamma_{\metavariable{dvs}_1}]}[\Gamma_{\metavariable{ds}_2}]$. From \refToLemma{invBlock} we have that
\begin{enumerate} [(1)]
\item ${\cal S}'={\cal S}_1+\SubstEqRel{{\cal S}_x}{\metavariable{x}}{\terminale{res}}$ where ${\cal S}_x=\Remove{({\cal S}_2+{\cal S}_e)}{D_2}$
\item $\TypeCheckDecs{{\Gamma}[{\metavariable{x}{:}\Type{\mu}{\metavariable{C}}},\Gamma_{\metavariable{dvs}_1}]}{\metavariable{dvs}_1}{{\cal S}_1}$
\item $\TypeCheckDecs{\Gamma_1}{\metavariable{ds}_2}{{\cal S}_2}$
and $\TypeCheck{\Gamma_1}{\metavariable{e}}{\metavariable{C}}{{\cal S}_e}$
\item $\Y=\Closure{\terminale{res}}{({\cal S}_2+{\cal S}_e)}\cap D_2$
\item if $\mu=\terminale{a}$, then $\IsCapsule{{\cal S}_x}$
\end{enumerate}
By $\alpha$-renaming
we may assume that $\metavariable{x}\not\in D_2$, so, letting $\Gamma'_1={\Gamma}[{\metavariable{x}{:}\Type{\mu}{\metavariable{C}}}][\Gamma_{\metavariable{dvs}_1},\Gamma_{\metavariable{ds}_2}]$, we have $\Gamma_1=\Gamma'_1$. From (3) we derive
\begin{enumerate} [(a)]
\item $\TypeCheckDecs{\Gamma'_1}{\metavariable{ds}_2}{{\cal S}_2}$
and $\TypeCheck{\Gamma'_1}{\metavariable{e}}{\metavariable{C}}{{\cal S}_e}$
\end{enumerate}
and, from (iii) and \refToLemma{weakening}.2 also
\begin{enumerate} [(a)]\addtocounter{enumi}{1}
\item $\TypeCheckDecs{\Gamma'_1}{\metavariable{dvs}_1}{{\cal S}_1}$
\end{enumerate}
Applying rule \rn{T-block} to (a) and (b) we have
\begin{enumerate} [(a)]\addtocounter{enumi}{2}
\item $\TypeCheck{{\Gamma}[{\metavariable{x}{:}\Type{\mu}{\metavariable{C}}}]}{\BlockLab{\metavariable{dvs}_1\,\metavariable{ds}_2}{\metavariable{e}}{\X}}{\metavariable{C}}{{\cal S}'_x}$ where
\item ${\cal S}'_x=\Remove{({\cal S}_1+{\cal S}_2+{\cal S}_e)}{(D_1\cup D_2)}$ and
\item $\X=\Closure{\terminale{res}}{({\cal S}_1+{\cal S}_2+{\cal S}_e)}\cap(D_1\cup D_2)$
\end{enumerate}
\PG{From (iii) and \refToProp{invTyping1} there are no $\Pair{\metavariable{y}}{\metavariable{y}'}\in{\cal S}_2$
with $\metavariable{y}\neq\metavariable{y}'$ such that either $\metavariable{y}\in{D_1}$ or $\metavariable{y}'\in{D_1}$.
Therefore $\Pair{\metavariable{z}}{\terminale{res}}\in({\cal S}_1+{\cal S}_2+{\cal S}_e)\setminus(D_1\cup D_2)$
and $\metavariable{z}\in{D_2}$ implies $\Pair{\metavariable{z}}{\terminale{res}}\in({\cal S}_2+{\cal S}_e)\setminus{D_1}$.
If $\mu=\terminale{a}$, then from (5) we have that $\IsCapsule{{\cal S}'_x}$.}
From \refToDef{typeBlock} we derive
\begin{enumerate} [(a)]\addtocounter{enumi}{5}
\item $\TypeCheckDecs{{\Gamma}[{\metavariable{x}{:}\Type{\mu}{\metavariable{C}}}]}{\Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{dvs}_1\,\metavariable{ds}_2}{\metavariable{e}}{\X}}}{\SubstEqRel{{\cal S}'_x}{\metavariable{x}}{\terminale{res}}}$
\end{enumerate}
As for the ``only if'' proof, we can show that ${\cal S}=\Remove{{\cal S}'}{D_1}$.\end{proof}
\medskip
\noindent{\bf Lemma \ref{lemma:congruence}.} (Congruence preserves types)
{\it Let $\metavariable{e}_1$ and $\metavariable{e}_2$ be annotated expressions.
If $\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}}$
and $\congruence{\metavariable{e}_1}{\metavariable{e}_2}$, then $\TypeCheck{\Gamma}{\metavariable{e}'_2}{\metavariable{C}}{{\cal S}}$ for some $\metavariable{e}'_2$ such that $\metavariable{e}_2'\EZ{\approx^-}\metavariable{e}_2$.
}
\begin{proof}
\PG{By cases on the congruence rule used. We consider the cases of rules \rn{dec} and \rn{val-ctx} with $\terminale{new}$, which are the most significant and show how scope extrusion preserves typing. The other cases are similar and simpler. In both cases we first show that typability of the left-hand side of $\congruence{}{}$ implies typability of the right-hand side with the same type and sharing relation, and then the converse.}
\underline{Rule \rn{{dec}}}. Let
\begin{itemize}
\item $\metavariable{e}_1=\BlockLab{\metavariable{dvs}\ \Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{dvs}_1\ \metavariable{ds}_2}{\metavariable{e}}{\X}}\ \metavariable{ds}'}{\metavariable{e}'}{\Y}$ and
\item$\metavariable{e}_2=\BlockLab{\metavariable{dvs}\ \metavariable{dvs}_1\ \Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{ds}_2}{\metavariable{e}}{\X'}\ \metavariable{ds}'}}{\metavariable{e}'}{\Y'}$.
\end{itemize}
where
\begin{enumerate} [(1)]
\item $\FV{\metavariable{dvs}_1}\cap\dom{\metavariable{ds}_2}=\emptyset$
\item $\FV{\metavariable{dvs}\,\metavariable{ds}'\,\metavariable{e}'}\cap\dom{\metavariable{dvs}_1}=\emptyset$
\item $\mbox{if }\mu=\terminale{a}\mbox{ then }\dom{\metavariable{dvs}_1}\cap\X=\emptyset$.
\end{enumerate}
Let $D_1=\dom{\metavariable{dvs}_1}$.\\
We show that \underline{$\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}}$ implies $\TypeCheck{\Gamma}{\metavariable{e}_2}{\metavariable{C}}{{\cal S}}$}
for some $\X'$ and $\Y'$.\\
Let $\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}}$, define $\Gamma_1=\SubstFun{\Gamma}{\Gamma_{\metavariable{dvs}},\metavariable{x}{:}\Type{\mu}{\metavariable{C}},\Gamma_{\metavariable{ds}'}}$,
and $\Z_d=\dom{\metavariable{dvs}}\cup\dom{\metavariable{ds}'}\cup\{\metavariable{x}\}$. From \refToLemma{invBlock} we have that
\begin{enumerate}[(a)]
\item ${\cal S}=\Remove{({\cal S}_d+{\cal S}_x+{\cal S}_e)}{\Z_d}$
\item $\Y=\Closure{\terminale{res}}{({\cal S}_d+{\cal S}_x+{\cal S}_e)}\cap\Z_d$
\item $\TypeCheck{\Gamma_1}{\metavariable{e}'}{\metavariable{C}}{{\cal S}_e}$ and $\TypeCheckDecs{\Gamma_1}{\metavariable{dvs}\ \metavariable{ds}'}{{\cal S}_d}$
\item $\TypeCheckDecs{\Gamma_1}{\Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{dvs}_1\ \metavariable{ds}_2}{\metavariable{e}}{\X}}}{{\cal S}_x}$
\end{enumerate}
From (d), (1), (3) and \refToLemma{extrusion}, letting $\Gamma'_1=\SubstFun{\Gamma}{\Gamma_{\metavariable{dvs}},\metavariable{x}{:}\Type{\mu}{\metavariable{C}}, \Gamma_{\metavariable{dvs}_1},\Gamma_{\metavariable{ds}'}}$, we have that
\begin{enumerate}[(a)]\addtocounter{enumi}{4}
\item $\TypeCheckDecs{\Gamma'_1}{\metavariable{dvs}_1\ \Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{ds}_2}{\metavariable{e}}{\X\setminus{D_1}}}}{{\cal S}'_x}$ where ${\cal S}_x=\Remove{{\cal S}'_x}{D_1}$
\end{enumerate}
From (2), (c) and \refToLemma{weakening}.1 we have
\begin{enumerate}[(a)]\addtocounter{enumi}{5}
\item $\TypeCheck{\Gamma'_1}{\metavariable{e}'}{\metavariable{C}}{{\cal S}_e}$ and $\TypeCheckDecs{\Gamma'_1}{\metavariable{dvs}\ \metavariable{ds}'}{{\cal S}_d}$
\end{enumerate}
From (e), (f) and rule \rn{T-Block}\\
\centerline{$
\TypeCheck{\Gamma}{\BlockLab{\metavariable{dvs}\ \metavariable{dvs}_1\ \Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{ds}_2}{\metavariable{e}}{\X\setminus{D_1}}}\ \metavariable{ds}'}{\metavariable{e}'}{\Y'}}{\metavariable{C}}{{\cal S}'}
$}\\
where ${\cal S}'=\Remove{({\cal S}_d+{\cal S}'_x+{\cal S}_e)}{(\Z_d\cup{D_1})}$
and $\Y'=\Closure{\terminale{res}}{({\cal S}_d+{\cal S}'_x+{\cal S}_e)}\cap(\Z_d\cup{D_1})$. \\
From (2), \refToProp{invTyping1} and \refToProp{lessSrRel}.\ref{p4} we have that
\begin{enumerate}[(a)]\addtocounter{enumi}{6}
\item $\Remove{({\cal S}'_x+{\cal S}_d+{\cal S}_e)}{D_1}={\cal S}_x+{\cal S}_d+{\cal S}_e$
\end{enumerate}
Therefore\\
\centerline{$
\begin{array}{lcll}
{\cal S}'&=& \Remove{(\Remove{({\cal S}'_x+{\cal S}_d+{\cal S}_e)}{D_1})}{\Z_d} &\text{by definition of $\setminus$} \\
&=& \Remove{({\cal S}_x+{\cal S}_d+{\cal S}_e)}{\Z_d} &\text{from (g)} \\
&=&{\cal S}
\end{array}
$}\\
\medskip\noindent
We show that \underline{$\TypeCheck{\Gamma}{\metavariable{e}_2}{\metavariable{C}}{{\cal S}}$ implies $\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}}$} for some $\X$ and $\Y$.\\
Let $\TypeCheck{\Gamma}{\metavariable{e}_2}{\metavariable{C}}{{\cal S}}$, define $\Gamma_1=\SubstFun{\Gamma}{\Gamma_{\metavariable{dvs}},\Gamma_{\metavariable{dvs}_1},\metavariable{x}{:}\Type{\mu}{\metavariable{C}},\Gamma_{\metavariable{ds}'}}$,
and $\Z_d=\dom{\metavariable{dvs}}\cup\dom{\metavariable{ds}'}\cup\{\metavariable{x}\}$. From \refToLemma{invBlock} we have that
\begin{enumerate}[(a)]
\item ${\cal S}=\Remove{({\cal S}_d+{\cal S}_{1}+{\cal S}_x+{\cal S}_e)}{(\Z_d\cup{D_1})}$
\item $\Y'=\Closure{\terminale{res}}{({\cal S}_d+{\cal S}_{1}+{\cal S}_x+{\cal S}_e)}\cap(\Z_d\cup{D_1})$
\item $\TypeCheck{\Gamma_1}{\metavariable{e}'}{\metavariable{C}}{{\cal S}_e}$ and $\TypeCheckDecs{\Gamma_1}{\metavariable{dvs}\ \metavariable{ds}'}{{\cal S}_d}$
\item $\TypeCheckDecs{\Gamma_1}{\metavariable{dvs}_1}{{\cal S}_1}$
\item $\TypeCheckDecs{\Gamma_1}{\Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{ds}_2}{\metavariable{e}}{\X'}}}{{\cal S}''_x}$
\end{enumerate}
From (d), (e) and \refToDef{typeBlock}, letting ${\cal S}'_x={\cal S}_1+{\cal S}''_x$, we have that $\TypeCheckDecs{\Gamma_1}{\metavariable{dvs}_1\, \Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{ds}_2}{\metavariable{e}}{\X'}}}{{\cal S}'_x}$. Therefore from \refToLemma{extrusion},
letting $\Gamma'_1=\SubstFun{\Gamma}{\Gamma_{\metavariable{dvs}},\metavariable{x}{:}\Type{\mu}{\metavariable{C}},\Gamma_{\metavariable{ds}'}}$, we have that
\begin{enumerate}[(a)]\addtocounter{enumi}{5}
\item $\TypeCheckDecs{\Gamma'_1}{\Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{dvs}_1\ \metavariable{ds}_2}{\metavariable{e}}{\X}}}{{\cal S}_x}$ where ${\cal S}_x=\Remove{{\cal S}'_x}{D_1}$ and $\Remove{\X}{D_1}=\X'$.
\end{enumerate}
From (2), (c) and \refToLemma{weakening}.2 we have $\TypeCheck{\Gamma'_1}{\metavariable{e}'}{\metavariable{C}}{{\cal S}_e}$ and $\TypeCheckDecs{\Gamma'_1}{\metavariable{dvs}\ \metavariable{ds}'}{{\cal S}_d}$. From rule \rn{T-block} we have that\\
\centerline{$
\TypeCheck{\Gamma}{\BlockLab{\metavariable{dvs}\ \Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{dvs}_1\ \metavariable{ds}_2}{\metavariable{e}}{\X}}\ \metavariable{ds}'}{\metavariable{e}'}{\Y}}{\metavariable{C}}{{\cal S}'}
$}\\
where $\Y=\Closure{\terminale{res}}{{\cal S}_x+{\cal S}_e+{\cal S}_d}\cap\Z_d$ and ${\cal S}'=\Remove{{\cal S}_x+{\cal S}_e+{\cal S}_d}{\Z_d}$. The proof that ${\cal S}'={\cal S}$ is as for the previous
implication.
\medskip
\underline{Rule \rn{val-ctx}} with $\terminale{new}$. Let
\begin{itemize}
\item $\metavariable{e}_1=\ConstrCall{\metavariable{C}}{\metavariable{v}_1,\dots,\metavariable{v}_n,\BlockLab{\metavariable{dvs}_1\ \metavariable{dvs}_2}{\metavariable{v}}{\X},\metavariable{v}_{n+1},\dots,\metavariable{v}_{n+m}}$ and
\item$\metavariable{e}_2=\BlockLab{\metavariable{dvs}_1}{\ConstrCall{\metavariable{C}}{\metavariable{v}_1,\dots,\metavariable{v}_n,\BlockLab{\metavariable{dvs}_2}{\metavariable{v}}{\X'},\metavariable{v}_{n+1},\dots,\metavariable{v}_{n+m}}}{\Y}$.
\end{itemize}
where
\begin{enumerate} [(1)]
\item $\FV{\metavariable{dvs}_1}\cap\dom{\metavariable{dvs}_2}=\emptyset$
\item $\FV{\metavariable{vs}\,\metavariable{vs}'}\cap\dom{\metavariable{dvs}_1}=\emptyset$
\end{enumerate}
Let $D_1=\dom{\metavariable{dvs}_1}$ and $D_2=\dom{\metavariable{dvs}_2}$.\\
We show that \underline{$\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}}$ implies $\TypeCheck{\Gamma}{\metavariable{e}_2}{\metavariable{C}}{{\cal S}}$} for some $\X'$ and $\Y$.\\
Let $\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}}$, define $\Gamma_1=\SubstFun{\Gamma}{\Gamma_{\metavariable{dvs}_1},\Gamma_{\metavariable{dvs}_2}}$. From rule \rn{T-new} and \refToLemma{invBlock} we have that
\begin{enumerate}[(a)]
\item ${\cal S}={\cal S}_{\metavariable{v}}+\Remove{({\cal S}'_1+{\cal S}'_2+{\cal S}')}{(D_1\cup D_2)}$ where ${\cal S}_{\metavariable{v}}=\sum\limits_{i=1}^{n+m}{\cal S}_i$
\item $\TypeCheck{\Gamma}{\metavariable{v}_i}{\metavariable{T}_i}{{\cal S}_i}$ for all $i$, $1\leq i\leq n+m$
\item $\TypeCheckDecs{\Gamma_1}{\metavariable{dvs}_1}{{\cal S}'_1}$
\item $\TypeCheckDecs{\Gamma_1}{\metavariable{dvs}_2}{{\cal S}'_2}$ and $\TypeCheck{\Gamma_1}{\metavariable{v}}{\metavariable{T}'}{{\cal S}'}$ and
\item
$\X=\Closure{\terminale{res}}{({\cal S}'_1+{\cal S}'_2+{\cal S}')}\cap(D_1\cup D_2)$
\end{enumerate}
where $\fields{\metavariable{C}}=\Field{\metavariable{T}_1}{\metavariable{f}_1}\dots\Field{\metavariable{T}_n}{\metavariable{f}_n}\Field{\metavariable{T}'}{\metavariable{f}'}\Field{\metavariable{T}_{n+1}}{\metavariable{f}_{n+1}}\dots\Field{\metavariable{T}_{n+m}}{\metavariable{f}_{n+m}}$.\\
By wellformedness of blocks $D_1\cap D_2=\emptyset$. Therefore $\Gamma_1={\Gamma}[\Gamma_{\metavariable{dvs}_1}][\Gamma_{\metavariable{dvs}_2}]$.
From \refToLemma{weakening}.2 and (d) we get $\TypeCheckDecs{\SubstFun{\Gamma}{\Gamma_{\metavariable{dvs}_2}}}{\metavariable{dvs}_2}{{\cal S}'_2}$ and $\TypeCheck{\SubstFun{\Gamma}{\Gamma_{\metavariable{dvs}_2}}}{\metavariable{v}}{\metavariable{T}'}{{\cal S}'}$ and by rule \rn{T-block}
\begin{enumerate}[(A)]
\item $\TypeCheck{\SubstFun{\Gamma}{\Gamma_{\metavariable{dvs}_1}}}{\BlockLab{\metavariable{dvs}_2}{\metavariable{v}}{\X'}}{\metavariable{T}'}{\Remove{({\cal S}'_2+{\cal S}')}{D_2}}$ where $\X'=\Closure{\terminale{res}}{({\cal S}'_2+{\cal S}')}\cap D_2$
\end{enumerate}
From (1), (c) and \refToLemma{weakening}.2 we get
\begin{enumerate}[(A)]\addtocounter{enumi}{1}
\item $\TypeCheckDecs{\SubstFun{\Gamma}{\Gamma_{\metavariable{dvs}_1}}}{\metavariable{dvs}_1}{{\cal S}'_1}$
\end{enumerate}
From (b), (2) and \refToLemma{weakening}.1 we get
\begin{enumerate}[(A)]\addtocounter{enumi}{2}
\item $\TypeCheck{\SubstFun{\Gamma}{\Gamma_{\metavariable{dvs}_1}}}{\metavariable{v}_i}{\metavariable{T}_i}{{\cal S}_i}$ for all $i$, $1\leq i\leq n+m$
\end{enumerate}
From (A), (B), (C) and rules \rn{T-New} and \rn{T-block} we get\\
\centerline{
$
\TypeCheck{\Gamma}{\metavariable{e}_2}{\metavariable{C}}{\Remove{(({\cal S}'_1+{\cal S}_{\metavariable{v}})+\Remove{({\cal S}'_2+{\cal S}')}{D_2})}{D_1}}
$
}
and $\Y=\Closure{\terminale{res}}{(({\cal S}'_1+{\cal S}_{\metavariable{v}})+\Remove{({\cal S}'_2+{\cal S}')}{D_2})}\cap D_1$.
By $\alpha$-congruence we may assume that $\FV{\metavariable{vs}\,\metavariable{vs}'}\cap(D_1\cup D_2)=\emptyset$ so
\begin{enumerate}[(A)]\addtocounter{enumi}{3}
\item ${\cal S}_{\metavariable{v}}+\Remove{({\cal S}'_1+{\cal S}'_2+{\cal S}')}{(D_1\cup D_2)}=
\Remove{({\cal S}_{\metavariable{v}}+{\cal S}'_1+{\cal S}'_2+{\cal S}')}{(D_1\cup D_2)}$.
\end{enumerate}
By (1) and $\FV{\metavariable{vs}\,\metavariable{vs}'}\cap D_2=\emptyset$ and \refToProp{invTyping1} we have that
$\Remove{({\cal S}_{\metavariable{v}}+{\cal S}'_1)}{D_2}={\cal S}_{\metavariable{v}}+{\cal S}'_1$, so from
\refToProp{lessSrRel}.\ref{p4} we have that
\begin{enumerate}[(A)]\addtocounter{enumi}{4}
\item $\Remove{({\cal S}_{\metavariable{v}}+{\cal S}'_1+{\cal S}'_2+{\cal S}')}{D_2}=({\cal S}_{\metavariable{v}}+{\cal S}'_1)+(\Remove{({\cal S}'_2+{\cal S}')}{D_2})$
\end{enumerate}
Therefore we have that\\
\centerline{
$
\begin{array}{lcll}
{\cal S}&=& \Remove{({\cal S}_{\metavariable{v}}+{\cal S}'_1+{\cal S}'_2+{\cal S}')}{(D_1\cup D_2)}& \text{by (D)}\\
&=& \Remove{(\Remove{({\cal S}_{\metavariable{v}}+{\cal S}'_1+{\cal S}'_2+{\cal S}')}{D_2})}{D_1}& \text{by definition of $\setminus$}\\
&=& \Remove{(({\cal S}_{\metavariable{v}}+{\cal S}'_1)+\Remove{({\cal S}'_2+{\cal S}')}{D_2})}{D_1}& \text{by (E)}\\
\end{array}
$
}
which proves that the typing of $\metavariable{e}_1$ and $\metavariable{e}_2$ produce the same sharing relation.
\medskip\noindent
We show that \underline{$\TypeCheck{\Gamma}{\metavariable{e}_2}{\metavariable{C}}{{\cal S}}$ implies $\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}}$} for some $\X$.\\
Let $\TypeCheck{\Gamma}{\metavariable{e}_2}{\metavariable{C}}{{\cal S}}$, define $\Gamma_1=\SubstFun{\Gamma,\Gamma_{\metavariable{dvs}_1}}{\Gamma_{\metavariable{dvs}_2}}$. From rule \rn{T-new} and \refToLemma{invBlock} we have that
\begin{enumerate}[(a)]
\item ${\cal S}=\Remove{{\cal S}_3}{D_1}$ where ${\cal S}_3={\cal S}_{\metavariable{v}}+{\cal S}'_1+\Remove{({\cal S}'_2+{\cal S}')}{D_2}$ and ${\cal S}_{\metavariable{v}}=\sum\limits_{i=1}^{n+m}{\cal S}_i$
\item $\TypeCheck{\Gamma[\Gamma_{\metavariable{dvs}_1}]}{\metavariable{v}_i}{\metavariable{T}_i}{{\cal S}_i}$ for all $i$, $1\leq i\leq n+m$
\item $\TypeCheckDecs{\Gamma[\Gamma_{\metavariable{dvs}_1}]}{\metavariable{dvs}_1}{{\cal S}'_1}$
\item $\TypeCheckDecs{\Gamma_1}{\metavariable{dvs}_2}{{\cal S}'_2}$ and $\TypeCheck{\Gamma_1}{\metavariable{v}}{\metavariable{T}'}{{\cal S}'}$ and
\item
$\X'=\Closure{\terminale{res}}{({\cal S}'_2+{\cal S}')}\cap D_2$ and $\Y=\Closure{\terminale{res}}{{\cal S}_3}\cap D_1$
\end{enumerate}
where $\fields{\metavariable{C}}=\Field{\metavariable{T}_1}{\metavariable{f}_1}\dots\Field{\metavariable{T}_n}{\metavariable{f}_n}\Field{\metavariable{T}'}{\metavariable{f}'}\Field{\metavariable{T}_{n+1}}{\metavariable{f}_{n+1}}\dots\Field{\metavariable{T}_{n+m}}{\metavariable{f}_{n+m}}$.\\
By well-formedness of blocks $D_1\cap D_2=\emptyset$. Therefore, letting $\Gamma_2=\Gamma[\Gamma_{\metavariable{dvs}_1},\Gamma_{\metavariable{dvs}_2}]$ we have that $\Gamma_1=\Gamma_2$.
From \refToLemma{weakening}.1 and (c) and (d) we get $\TypeCheckDecs{\Gamma_2}{\metavariable{dvs}_2}{{\cal S}'_2}$ and $\TypeCheck{\Gamma_2}{\metavariable{v}}{\metavariable{T}'}{{\cal S}'}$ and $\TypeCheckDecs{\Gamma_2}{\metavariable{dvs}_1}{{\cal S}'_1}$ and by rule \rn{T-block}
\begin{enumerate}[(A)]
\item $\TypeCheck{\Gamma}{\BlockLab{\metavariable{dvs}_1\ \metavariable{dvs}_2}{\metavariable{v}}{\X'}}{\metavariable{T}'}{\Remove{({\cal S}'_1+{\cal S}'_2+{\cal S}')}{(D_1\cup D_2)}}$ where $\X'=\Closure{\terminale{res}}{({\cal S}'_1+{\cal S}'_2+{\cal S}')}\cap (D_1\cup D_2)$
\end{enumerate}
From (1), (b) and \refToLemma{weakening}.2 we get
\begin{enumerate}[(A)]\addtocounter{enumi}{1}
\item $\TypeCheck{\Gamma}{\metavariable{v}_i}{\metavariable{T}_i}{{\cal S}_i}$ for all $i$, $1\leq i\leq n+m$
\end{enumerate}
From (A), (B) and rule \rn{T-new} we get\\
\centerline{
$
\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}_{\metavariable{v}}+\Remove{({\cal S}'_1+{\cal S}'_2+{\cal S}')}{(D_1\cup D_2)}}
$
}
and $\X=\Closure{\terminale{res}}{({\cal S}'_1+{\cal S}'_2+{\cal S}')}\cap(D_1\cup D_2)$.\\
The proof that ${\cal S}={\cal S}_{\metavariable{v}}+\Remove{({\cal S}'_1+{\cal S}'_2+{\cal S}')}{(D_1\cup D_2)}$ is as for the previous implication.
\end{proof}
The following lemma asserts that subexpressions of typable expressions are
themselves typable, and may be replaced with expressions that have the same
type and the same or possibly less sharing effects.
\noindent{\bf Lemma \ref{lemma:context}.} (Context)
{\it Let $\TypeCheck{\Gamma}{\Ctx{\metavariable{e}}}{\metavariable{C}}{{\cal S}}$, then
\begin{enumerate}
\item $\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}]}{{\metavariable{e}}}{\metavariable{D}}{{\cal S}_1}$ for some
$\metavariable{D}$ and ${\cal S}_1$,
\item if $\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}]}{{\metavariable{e}'}}{\metavariable{D}}{{\cal S}_2}$ where
$\Finer{{\cal S}_2}{{\cal S}_1}$ (${{\cal S}_2}={{\cal S}_1}$),
then $\TypeCheck{\Gamma}{\CtxP{\metavariable{e}'}}{\metavariable{C}}{{\cal S}'}$ for some ${\cal E}'$ such that
$\CtxP{\metavariable{e}'}\EZ{\approx^-}\Ctx{\metavariable{e}'}$ and
$\Finer{{\cal S}'}{{\cal S}}$ (${{\cal S}'}={{\cal S}}$).
\end{enumerate}
}
\begin{proof}
\begin{enumerate}
\item Easy, by induction on evaluation contexts.
\item Let $\TypeCheck{\Gamma}{\Ctx{\metavariable{e}}}{\metavariable{C}}{{\cal S}}$. By point 1. of this lemma
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}]}{{\metavariable{e}}}{\metavariable{D}}{{\cal S}_1}$ for
some $\metavariable{D}$ and ${\cal S}_1$. By induction on evaluation contexts.\\
If ${\cal{E}}=[\ ]$, then $\metavariable{C}=\metavariable{D}$ and ${\cal S}_1={\cal S}$.
The result is immediate.\\
If ${\cal{E}}=\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{x}}{{\cal{E}}'}\ \metavariable{ds}}{\metavariable{e}_b}{\X}$,
then $\TypeCheck{\Gamma}{\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{x}}{\CtxP{\metavariable{e}}}\ \metavariable{ds}}{\metavariable{e}_b}{\X}}{\metavariable{C}}{{\cal S}}$.
Let $\Gamma'=\Gamma_{\metavariable{dvs}},\TypeDec{\metavariable{x}}{\metavariable{T}},\Gamma_{\metavariable{ds}}$, from \refToLemma{invBlock} we have
\begin{enumerate}[(1)]
\item ${\cal S}=\Remove{({\cal S}_1+{\cal S}_2+{\cal S}_3)}{\dom{\Gamma'}}$,
\item $\TypeCheckDecs{\Gamma[\Gamma']}{\Dec{\metavariable{T}}{\metavariable{x}}{\CtxP{\metavariable{e}}}}{{\cal S}_1}$ where ${\cal S}_1=\SubstEqRel{{\cal S}_x}{\metavariable{x}}{\terminale{res}}$ and $\metavariable{T}=\Type{\mu}{\metavariable{C}_x}$ and $\TypeCheck{\Gamma[\Gamma']}{{\CtxP{\metavariable{e}}}}{\metavariable{C}_x}{{\cal S}_x}$,
\item $\TypeCheckDecs{\Gamma[\Gamma']}{\metavariable{dvs}\ \metavariable{ds}}{{\cal S}_2}$,
\item $\TypeCheck{\Gamma[\Gamma']}{\metavariable{e}_b}{}{{\cal S}_3}$ and
\item $\X=\Closure{\terminale{res}}{{\cal S}_1+{\cal S}_2+{\cal S}_3}\cap(\dom{\metavariable{dvs}}\cup\dom{\metavariable{ds}}\cup\{\metavariable{x}\})$
\end{enumerate}
From (2) and point 1. of this lemma $\TypeCheck{\Gamma[\Gamma'][\Gamma_{{\cal E}'}]}{{\metavariable{e}}}{\metavariable{D}}{{\cal S}_4}$ for some $\metavariable{D}$ and
${\cal S}_4$.
Let
$\TypeCheck{\Gamma[\Gamma'][\Gamma_{{\cal E}'}]}{{\metavariable{e}'}}{\metavariable{D}}{{\cal S}'_4}$
where $\Finer{{\cal S}'_4}{{\cal S}_4}$. From (2), by induction hypothesis on
${\cal E}'$, we have that
$\TypeCheck{\Gamma[\Gamma']}{{{\cal E}''[\metavariable{e}']}}{\metavariable{C}_x}{{\cal S}'_x}$
where ${\cal E}'[\metavariable{e}']\EZ{\approx^-}{\cal E}''[\metavariable{e}']$ and $\Finer{{\cal S}'_x}{{\cal S}_x}$.
Moreover,
from $\TypeCheck{\Gamma}{\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{x}}{\CtxP{\metavariable{e}}}\
\metavariable{ds}}{\metavariable{e}_b}{\X}}{\metavariable{C}}{{\cal S}}$, if $\mu=\terminale{a}$, then
$\IsCapsule{{\cal S}_x}$, and so also $\IsCapsule{{\cal S}'_x}$.
Therefore
\begin{enumerate}[(a)]
\item $\TypeCheckDecs{\Gamma[\Gamma']}{\Dec{\metavariable{T}}{\metavariable{x}}{\CtxS{\metavariable{e}'}}}{{\cal S}'_1}$ where ${\cal E}'[\metavariable{e}']\EZ{\approx^-}{\cal E}''[\metavariable{e}']$ and ${\cal S}'_1=\SubstEqRel{{\cal S}'_x}{\metavariable{x}}{\terminale{res}}$.
\end{enumerate}
From \refToProp{lessSrRel}.\ref{p2} and \ref{p3} we have that $\Finer{{\cal S}'_1}{{\cal S}_1}$.
Applying rule \rn{T-block} to (2), (3), (4) and (a) we have\\
\centerline{
$\TypeCheck{\Gamma}{\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{x}}{\CtxS{\metavariable{e}'}}\
\metavariable{ds}}{\metavariable{e}_b}{\X'}}{\metavariable{C}}{{\cal S}''}$}\\
where ${\cal S}''=\Remove{({\cal S}'_1+{\cal S}_2+{\cal S}_3)}{\dom{\Gamma'}}$ and $\X'=\Closure{\terminale{res}}{{\cal S}'_1+{\cal S}_2+{\cal S}_3}\cap(\dom{\metavariable{dvs}}\cup\dom{\metavariable{ds}}\cup\{\metavariable{x}\})$. From \refToProp{lessSrRel}.\ref{p2} and \ref{p3} we derive that
$\Finer{{\cal S}''}{{\cal S}}$. The case with equality is similar.\\
If
${\cal{E}}=\BlockLab{\metavariable{dvs}}{{\cal E}'}{\X}$, then the proof is similar to the previous one.
\end{enumerate}
\end{proof}
\noindent{\bf Lemma \ref{lemma:monotoneSharing}.}
{\it Let $\TypeCheck{\Gamma}{\Ctx{\metavariable{e}}}{\metavariable{C}}{{\cal S}}$ and $\TypeCheck{\Gamma}{\metavariable{e}}{\metavariable{D}}{{\cal S}'}$.
If $\Pair{\metavariable{x}}{\metavariable{y}}\in{\cal S}'$ with $\metavariable{x},\metavariable{y}\not\in\HB{{\cal{E}}}$ and $\metavariable{x},\metavariable{y}\neq\terminale{res}$,
then $\Pair{\metavariable{x}}{\metavariable{y}}\in{\cal S}$.}
\begin{proof}
By induction on ${\cal{E}}$. \\
For \underline{${\cal{E}}=[\ ]$} is obvious. \\
Let \underline{${\cal{E}}=\BlockLab{\metavariable{dvs}\ \Dec{\Type{\mu}{\metavariable{C}_1}}{\metavariable{z}}{{\cal E}'}\ \metavariable{ds}}{\metavariable{e}_b}{\X}$}.
From $\TypeCheck{\Gamma}{\Ctx{\metavariable{e}}}{\metavariable{C}}{{\cal S}}$ and \refToLemma{invBlock}, letting
$\Gamma'=\Gamma_{\metavariable{dvs}},\TypeDec{\metavariable{z}}{\Type{\mu}{\metavariable{C}_1}},\Gamma_{\metavariable{ds}}$, we have
\begin{enumerate}[(a)]
\item$\TypeCheck{\Gamma[\Gamma']}{{{\cal E}'[\metavariable{e}]}}{\metavariable{C}_1}{{\cal S}_1}$,
\item $\TypeCheckDecs{\Gamma[\Gamma']}{\metavariable{dvs}\ \metavariable{ds}}{{\cal S}_2}$, and
\item $\TypeCheck{\Gamma[\Gamma']}{\metavariable{e}_b}{}{{\cal S}_3}$.
\item ${\cal S}=\Remove{({\cal S}'_1+{\cal S}_2+{\cal S}_3)}{\dom{\Gamma'}}$ where ${\cal S}'_1=\Remove{({\cal S}_1+\{\metavariable{z},\terminale{res}\})}{\terminale{res}}$.
\end{enumerate}
Assume that $\Pair{\metavariable{x}}{\metavariable{y}}\in{\cal S}'$ with $\metavariable{x},\metavariable{y}\not\in\HB{{\cal{E}}}$ and $\metavariable{x},\metavariable{y}\neq\terminale{res}$. From
$\HB{{\cal E}'}\subseteq\HB{{\cal{E}}}$ and $\TypeCheck{\Gamma}{\metavariable{e}}{\metavariable{D}}{{\cal S}'}$, by induction hypothesis
on ${\cal E}'$ we derive that $\Pair{\metavariable{x}}{\metavariable{y}}\in{\cal S}_1$. Since $(\{\metavariable{z}\}\cup\dom{\Gamma'})\subseteq\HB{{\cal{E}}}$ we have that
$\metavariable{x},\metavariable{y}\not\in(\{\metavariable{z}\}\cup\dom{\Gamma'})$. Therefore $\Pair{\metavariable{x}}{\metavariable{y}}\in{\cal S}$.\\
Similar (and simpler) for \underline{${\cal{E}}=\BlockLab{\metavariable{dvs}}{{\cal E}'}{\X}$}.
\end{proof}
\noindent{\bf Lemma \ref{lemma:fieldAcc}.}
{\it
If $\TypeCheck{\Gamma}{\Ctx{\metavariable{e}_1}}{\metavariable{C}}{{\cal S}_1}$, $\TypeCheck{\Gamma}{\Ctx{\metavariable{e}_2}}{\metavariable{C}}{{\cal S}_2}$,
$\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{D}}{\{\metavariable{xs},\terminale{res}\}}$ and $\TypeCheck{\Gamma}{\metavariable{e}_2}{\metavariable{D}}{\{\metavariable{ys},\terminale{res}\}}$ with
$\{\metavariable{xs},\metavariable{ys}\}\cap\HB{{\cal{E}}}=\emptyset$. Then ${\cal S}_1+\{\metavariable{xs},\metavariable{ys}\}={\cal S}_2+\{\metavariable{xs},\metavariable{ys}\}$.
}
\begin{proof}
Let \underline{${\cal{E}}=[\ ]$}. Then $\{\metavariable{xs},\terminale{res}\}+\{\metavariable{xs},\metavariable{ys}\}=\{\metavariable{xs}, \metavariable{ys}, \terminale{res}\}$ and $\{\metavariable{ys},\terminale{res}\}+\{\metavariable{xs},\metavariable{ys}\}=\{\metavariable{xs}, \metavariable{ys}, \terminale{res}\}$. \\
Let \underline{${\cal{E}}=\BlockLab{\metavariable{dvs}\ \Dec{\Type{\mu}{\metavariable{C}_1}}{\metavariable{z}}{{\cal E}'}\ \metavariable{ds}}{\metavariable{e}}{\X}$},
$\TypeCheck{\Gamma}{\BlockLab{\metavariable{dvs}\ \Dec{\Type{\mu}{\metavariable{C}_1}}{\metavariable{z}}{{\cal E}'[\metavariable{e}_1]}\ \metavariable{ds}}{\metavariable{e}}{\X}}{\metavariable{C}}{{\cal S}_1}$,
$\TypeCheck{\Gamma}{\BlockLab{\metavariable{dvs}\ \Dec{\Type{\mu}{\metavariable{C}_1}}{\metavariable{z}}{{\cal E}'[\metavariable{e}_2]}\ \metavariable{ds}}{\metavariable{e}}{\X}}{\metavariable{C}}{{\cal S}_2}$,
$\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{D}}{\{\metavariable{xs},\terminale{res}\}}$, and
$\TypeCheck{\Gamma}{\metavariable{e}_2}{\metavariable{D}}{\{\metavariable{ys} ,\terminale{res}\}}$.
Let $\Gamma'=\Gamma_{\metavariable{dvs}},\TypeDec{\metavariable{z}}{\Type{\mu}{\metavariable{C}_1}},\Gamma_{\metavariable{ds}}$, from \refToLemma{invBlock}
\begin{enumerate}[(a)]
\item $\TypeCheck{\Gamma[\Gamma']}{{{\cal E}'[\metavariable{e}_1]}}{\metavariable{C}_1}{{\cal S}_3}$
and $\TypeCheck{\Gamma[\Gamma']}{{{\cal E}'[\metavariable{e}_2]}}{\metavariable{C}_1}{{\cal S}_4}$
\item $\TypeCheckDecs{\Gamma[\Gamma']}{\metavariable{dvs}\ \metavariable{ds}}{{\cal S}_d}$
\item $\TypeCheck{\Gamma[\Gamma']}{\metavariable{e}}{}{{\cal S}_e}$
\item ${\cal S}_1=\Remove{({\cal S}_z+{\cal S}_d+{\cal S}_e)}{\dom{\Gamma'}}$
and ${\cal S}_2=\Remove{({\cal S}'_z+{\cal S}_d+{\cal S}_e)}{\dom{\Gamma'}}$
where ${\cal S}_z=\SubstEqRel{{\cal S}_3}{\metavariable{z}}{\terminale{res}}$ and
${\cal S}'_z=\SubstEqRel{{\cal S}_4}{\metavariable{z}}{\terminale{res}}$.
\end{enumerate}
Let $\Y=\{\metavariable{xs},\metavariable{ys}\}$. By (a) and the induction hypothesis we have that
$\Sum{{\cal S}_3}{\Y}=\Sum{{\cal S}_4}{\Y}$. Therefore $\SubstEqRel{(\Sum{{\cal S}_3}{\Y})}{\metavariable{z}}{\terminale{res}}=\SubstEqRel{(\Sum{{\cal S}_4}{\Y})}{\metavariable{z}}{\terminale{res}}$. From $\Y\cap\HB{{\cal{E}}}=\emptyset$, we have that
for all $\metavariable{x}\in\metavariable{xs}$, $\metavariable{x}\not=\metavariable{z}$ and for all $\metavariable{y}\in\metavariable{ys}$, $\metavariable{y}\not=\metavariable{z}$ and so
$\Sum{(\SubstEqRel{{\cal S}_3}{\metavariable{z}}{\terminale{res}})}{\Y}=\Sum{(\SubstEqRel{{\cal S}_4}{\metavariable{z}}{\terminale{res}})}{\Y}$. Therefore
$\Sum{{\cal S}_z}{\Y}=\Sum{{\cal S}'_z}{\Y}$ and so also
${\cal S}_z+{\cal S}_d+{\cal S}_e+\Y={\cal S}'_z+{\cal S}_d+{\cal S}_e+\Y$,
which implies
$\Remove{({\cal S}_z+{\cal S}_d+{\cal S}_e+\Y)}{\dom{\Gamma'}}=\Remove{({\cal S}'_z+{\cal S}_d+{\cal S}_e+\Y)}{\dom{\Gamma'}}$.
Since $\Y\cap\HB{{\cal{E}}}=\emptyset$ we have that
$\Y\cap\dom{\Gamma'}=\emptyset$, and $\Remove{\Y}{\dom{\Gamma'}}=\Y$.
Therefore from \refToProp{lessSrRel}.\ref{p4} we have that
$\Remove{(({\cal S}_z+{\cal S}_d+{\cal S}_e)+\Y)}{\dom{\Gamma'}}=\Remove{({\cal S}_z+{\cal S}_d+{\cal S}_e)}{\dom{\Gamma'}}+\Y=\Remove{(({\cal S}'_z+{\cal S}_d+{\cal S}_e)+\Y)}{\dom{\Gamma'}}=\Remove{({\cal S}'_z+{\cal S}_d+{\cal S}_e)}{\dom{\Gamma'}}+\Y$ which implies ${\cal S}_1+\Y={\cal S}_2+\Y$.\\
Similar (and simpler) for \underline{${\cal{E}}=\BlockLab{\metavariable{dvs}}{{\cal E}'}{\X}$}.
\end{proof}
\noindent{\bf Theorem \ref{theo:subred}.} (Subject reduction)
{\it If $\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}}$ and $\reduce{\metavariable{e}_1}{\metavariable{e}_2}$, then
\begin{enumerate}
\item \PG{$\TypeCheck{\Gamma}{\metavariable{e}'_2}{\metavariable{C}}{{\cal S}'}$ for $\metavariable{e}'_2\EZ{\approx^-}\metavariable{e}_2$ and
$\Finer{{\cal S}'}{{\cal S}}$, and}
{\item for all $\metavariable{x}$ such that $\metavariable{e}_1=\Decctx{\metavariable{x}}{\metavariable{e}}$, \PG{$\metavariable{e}'_2=\DecctxP{\metavariable{x}}{\metavariable{e}'}$},
and $\TypeCheck{\TypeEnv{\decctx{\metavariable{x}}}}{\metavariable{e}}{\metavariable{D}}{{\cal S}_x}$ we have that:
$\TypeCheck{\TypeEnv{\decctxP{\metavariable{x}}}}{\metavariable{e}'}{\metavariable{D}}{{\cal S}'_x}$ and
$\Finer{({\cal S}'_x+{\cal S}_{\metavariable{dvs}'})}{({\cal S}_x+{\cal S}_{\metavariable{dvs}})}$
where $\metavariable{dvs}=\extractDec{\decctx{}}{\FV{\metavariable{e}}}$ and
$\metavariable{dvs}'=\extractDec{\decctxP{}}{\FV{\metavariable{e}'}}$.}
\end{enumerate}
}
\begin{proof}
\underline{Rule \rn{invk}}.
\begin{enumerate}
\item In this case $\rho=\MethCall{\metavariable{v}_0}{\metavariable{m}}{\metavariable{v}_1,..,\metavariable{v}_n}$ and
\begin{center}
$\metavariable{e}'=\Block{\Dec{\Type{\mu}{\metavariable{C}_0}}{\terminale{this}}{\metavariable{v}_0}\, \Dec{\Type{\mu_1}{\metavariable{C}_1}}{\metavariable{x}_1}{\metavariable{v}_1}..\Dec{\Type{\mu_n}{\metavariable{C}_n}}{\metavariable{x}_n}{\metavariable{v}_n}}{\metavariable{e}_b}$
\end{center}
where
$\method{\metavariable{C}_0}{\metavariable{m}}{=}\Method{\ReturnTypeNew{\metavariable{D}}{{\cal S}_b}}{\mu}{\Param{\Type{\mu_1}{\metavariable{C}_1}}{\metavariable{x}_1}\ldots\Param{\Type{\mu_n}{\metavariable{C}_n}}{\metavariable{x}_n}}{\metavariable{e}_b}$.
From \refToLemma{context}.1 we have that
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}]}{\MethCall{\metavariable{v}_0}{\metavariable{m}}{\metavariable{v}_1,..,\metavariable{v}_n}}{\metavariable{D}}{{\cal S}''}$
for some ${\cal S}''$. From typing rule \rn{T-invk}
\begin{enumerate} [(1)]
\item $\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}]}{\metavariable{v}_i}{\metavariable{C}_i}{{\cal S}_i}$
($0\leq i\leq n$)
\item $\forall\ {0 \leq i \leq n}\ \ \mu_i=\terminale{a}\Longrightarrow{\IsCapsule{{\cal S}_i}}$
\item ${\cal S}'_0=\SubstEqRel{{\cal S}_0}{\terminale{this}}{\terminale{res}}$
\item ${\cal S}'_i=\SubstEqRel{{\cal S}_i}{\metavariable{x}_i}{\terminale{res}}$ ($1\leq i\leq n$)
\item ${\cal S}''=\Remove{(\Sum{\sum\limits_{i=1}^{n}{\cal S}'_i}{{\cal S}_b})}{\{\terminale{this},\metavariable{x}_1,\ldots,\metavariable{x}_n\}}$
\end{enumerate}
From the fact that the class table is well-typed we have that
$\TypeCheck{\Gamma'}{\metavariable{e}_b}{\metavariable{D}}{{\cal S}_b}$ where
$\Gamma'=\TypeDec{\terminale{this}}{\Type{\mu}{\metavariable{C}_0}},\TypeDec{\metavariable{x}_1}{\Type{\mu_1}{\metavariable{C}_1}},\ldots,\TypeDec{\metavariable{x}_n}{\Type{\mu_n}{\metavariable{C}_n}}$.
Moreover, since we may assume that
$\{\terminale{this},\metavariable{x}_1,\ldots,\metavariable{x}_n\}\cap\dom{\Gamma[\Gamma_{{\cal{E}}}]}=\emptyset$,
from \refToLemma{weakening} we have that
\begin{enumerate}[(a)]
\item $\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}][\Gamma']}{\metavariable{e}_b}{\metavariable{D}}{{\cal S}_b}$, and
\item $\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}][\Gamma']}{\metavariable{v}_i}{\metavariable{C}_i}{{\cal S}_i}$ ($0\leq i\leq n$).
\end{enumerate}
Therefore by typing rule \rn{T-block}, (1)$\div$(5), (a) and (b) we have
that $\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}]}{\metavariable{e}'}{\metavariable{D}}{{\cal S}''}$. From
\refToLemma{context}.2 we derive $\TypeCheck{\Gamma}{\CtxP{\metavariable{e}'}}{\metavariable{C}}{{\cal S}}$ where $\CtxP{\metavariable{e}'}\EZ{\approx^-}\Ctx{\metavariable{e}'}$.
\item The result is proved as in the case of \rn{field-assign} just replacing
$\Finer{{\cal S}'_x}{{\cal S}_x}$ with
${{\cal S}'_x}={{\cal S}_x}$ since the sharing relation of the redex
is equal to the one of the block to which it reduces.
\end{enumerate}
\underline{Rule \rn{alias-elim}}.
\begin{enumerate}
\item In this case
\begin{enumerate} [(1)]
\item $\rho=\BlockLab{\metavariable{ds}' }{\metavariable{e}_b}{\X}$ where
$\metavariable{ds}'=\metavariable{dvs}\ \Dec{\metavariable{C}_1}{\metavariable{x}}{\metavariable{y}}\ \metavariable{ds}$
\item $\metavariable{e}'=\BlockLab{\metavariable{dvs}\ \Subst{\metavariable{ds}}{\metavariable{y}}{\metavariable{x}}}{\Subst{\metavariable{e}_b}{\metavariable{y}}{\metavariable{x}}}{\X\setminus\{\metavariable{x}\}}$
\end{enumerate}
From \refToLemma{context}.1 we have that
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}]}{\BlockLab{\metavariable{ds}'}{\metavariable{e}_b}{\X}}{\metavariable{D}}{{\cal S}_1}$
for some ${\cal S}_1$. Therefore from \refToLemma{invBlock} we have that
\begin{enumerate} [(a)]
\item $\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\metavariable{dvs}}{{\cal S}_2}$ for some ${\cal S}_2$,
\item $\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\Dec{\metavariable{C}_1}{\metavariable{x}}{\metavariable{y}}}{\{\metavariable{x},\metavariable{y}\}}$,
\item $\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\metavariable{ds}}{{\cal S}_3}$ for some ${\cal S}_3$,
\item $\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\metavariable{e}_b}{\metavariable{D}}{{\cal S}_4}$ for some ${\cal S}_4$,
\item ${\cal S}_1=\Remove{{\cal S}'_1}{\dom{\metavariable{ds}'}}$ where ${\cal S}'_1=\sum\limits_{i=2}^{4}{\cal S}_i+\{\metavariable{x},\metavariable{y}\}$
and $\X=\Closure{\terminale{res}}{{\cal S}'_1}\cap\dom{\metavariable{ds}'}$.
\end{enumerate}
Since $\metavariable{x}$ cannot be free in $\metavariable{dvs}$, from \refToLemma{weakening}.2 and (a)
we derive
\begin{enumerate} [(A)]
\item $\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}}][\Remove{\Gamma_{\metavariable{ds}'}}{\metavariable{x}}]}{\metavariable{dvs}}{\Remove{{\cal S}_2}{\metavariable{x}}}$.
\end{enumerate}
From \refToLemma{substitution}.1 and (c) and (d) we have that
\begin{enumerate} [(A)]\addtocounter{enumii}{2}
\item $\TypeCheckDecs{\Remove{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\metavariable{x}}}{\Subst{\metavariable{ds}}{\metavariable{y}}{\metavariable{x}}}{\Remove{{\cal S}_3}{\metavariable{x}}}$ and
\item $\TypeCheck{\Remove{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\metavariable{x}}}{\Subst{\metavariable{e}_b}{\metavariable{y}}{\metavariable{x}}}{\metavariable{D}}{\Remove{{\cal S}_4}{\metavariable{x}}}$.
\end{enumerate}
Moreover,
\begin{enumerate}[(A)]\addtocounter{enumii}{4}
\item let ${\cal S}''=\sum\limits_{i=2}^{4}(\Remove{{\cal S}_i}{\metavariable{x}})$,
\end{enumerate}
from (A), (C)$\div$(E) and rule \rn{T-block} we have that
\begin{center}
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\BlockLab{\metavariable{dvs}\ \Subst{\metavariable{ds}}{\metavariable{y}}{\metavariable{x}}}{\Subst{\metavariable{e}_b}{\metavariable{y}}{\metavariable{x}}}{\Y}}{\metavariable{D}}{\Remove{{\cal S}''}{\dom{\metavariable{dvs}\,\metavariable{ds}}}}$
\end{center}
where $\Y=\Closure{\terminale{res}}{{{\cal S}''}}\cap\dom{\metavariable{dvs}\,\metavariable{ds}}$. If
$\metavariable{x}\not\in\Closure{\terminale{res}}{{\cal S}'_1}$, then
$\Closure{\terminale{res}}{{\cal S}'_1}=\Closure{\terminale{res}}{{{\cal S}''}}$, and
since $\dom{\metavariable{dvs}\,\metavariable{ds}}\cup\{\metavariable{x}\}=\dom{\metavariable{ds}'}$ we have that $\X=\Y$. If
$\metavariable{x}\in\Closure{\terminale{res}}{{\cal S}'_1}$, then
$\Closure{\metavariable{x}}{{\cal S}'_1}=\Closure{\terminale{res}}{{\cal S}'_1}$ and from (e)
we have that
$\Closure{\metavariable{x}}{{\cal S}'_1}=\Closure{\terminale{res}}{{\cal S}'_1}=\Closure{\metavariable{y}}{{\cal S}'_1}$.
Therefore
$\Closure{\terminale{res}}{{\cal S}''}=\Closure{\terminale{res}}{{\cal S}'_1}\setminus\{\metavariable{x}\}$
and $\Y=\X\setminus\{\metavariable{x}\}$. From
$\Finer{\Remove{{\cal S}_i}{\metavariable{x}}}{{\cal S}_i}$ ($2\leq i\leq 4$) and
\refToProp{lessSrRel}.\ref{p2} we have that
$\Finer{{\cal S}''}{{\cal S}'_1}$. Therefore from
\refToProp{lessSrRel}.\ref{p3} we derive $\Finer{{\cal S}_2}{{\cal S}_1}$.
\item The result is proved as in the case of \rn{field-assign} since from
$\Finer{{\cal S}_2}{{\cal S}_1}$ by \refToLemma{context} we derive
$\Finer{{\cal S}'_x}{{\cal S}_x}$.
\end{enumerate}
\underline{Rule \rn{affine-elim}}.
\begin{enumerate}
\item In this case
\begin{enumerate} [(1)]
\item $\rho=\BlockLab{\metavariable{ds}' }{\metavariable{e}_b}{\X}$ where
$\metavariable{ds}'=\metavariable{dvs}\ \Dec{\Type{\terminale{a}}{\metavariable{C}_1}}{\metavariable{x}}{\metavariable{v}}\ \metavariable{ds}$
\item $\metavariable{e}'=\BlockLab{\metavariable{dvs}\ \Subst{\metavariable{ds}}{\EZ{\aux{gc}}(\metavariable{v})}{\metavariable{x}}}{\Subst{\metavariable{e}_b}{\EZ{\aux{gc}}(\metavariable{v})}{\metavariable{x}}}{\X\setminus\{\metavariable{x}\}}$
\end{enumerate}
From \refToLemma{context}.1 we have that
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}]}{\BlockLab{\metavariable{ds}'}{\metavariable{e}_b}{\X}}{\metavariable{D}}{{\cal S}_1}$
for some ${\cal S}_1$. Therefore from \refToLemma{invBlock} we
have that
\begin{enumerate} [(a)]
\item $\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\metavariable{dvs}}{{\cal S}_2}$
for some ${\cal S}_2$,
\item $\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\metavariable{v}}{\metavariable{C}_1}{{\cal S}_v}$
where $\IsCapsule{{\cal S}_v}$, therefore also
$\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\Dec{\Type{\terminale{a}}{\metavariable{C}_1}}{\metavariable{x}}{\metavariable{v}}}{{\cal S}_v}$
\item $\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\metavariable{ds}}{{\cal S}_3}$
for some ${\cal S}_3$,
\item $\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\metavariable{e}_b}{\metavariable{D}}{{\cal S}_4}$
for some ${\cal S}_4$,
\item ${\cal S}_1=\Remove{{\cal S}'_1}{\dom{\metavariable{ds}'}}$ where
${\cal S}'_1=\sum\limits_{i=2}^{4}{\cal S}_i+{\cal S}_v$ and
$\X=\Closure{\terminale{res}}{{\cal S}'_1}\cap\dom{\metavariable{ds}'}$.
\end{enumerate}
Let
\begin{enumerate} [(A)]\addtocounter{enumii}{1}
\item $\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\EZ{\aux{gc}}(\metavariable{v})}{\metavariable{C}_1}{{\cal S}'_v}$
\end{enumerate}
Then we also have $\IsCapsule{{\cal S}'_v}$.\\
From (B), the fact that $\Gamma_{\metavariable{ds}'}(\metavariable{x})=\Type{\terminale{a}}{\metavariable{C}_1}$, and
\refToLemma{sharingCapsule} we have that ${\cal S}'_v=\epsilon$. Since
we do not have forward references to unevaluated variables,
$\metavariable{x}$ cannot be free in $\metavariable{dvs}$ and from \refToLemma{weakening}.2 and (a) we derive
\begin{enumerate} [(A)]
\item $\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}}][\Remove{\Gamma_{\metavariable{ds}'}}{\metavariable{x}}]}{\metavariable{dvs}}{\Remove{{\cal S}_2}{\metavariable{x}}}$.
\end{enumerate}
From (B) with ${\cal S}'_v=\epsilon$, \refToLemma{substitution}.2 and (c)
and (d) we have that
\begin{enumerate} [(A)]\addtocounter{enumii}{2}
\item $\TypeCheckDecs{\Remove{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\metavariable{x}}}{\Subst{\metavariable{ds}}{\EZ{\aux{gc}}(\metavariable{v})}{\metavariable{x}}}{\Remove{{\cal S}_3}{\metavariable{x}}}$ and
\item $\TypeCheck{\Remove{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\metavariable{x}}}{\Subst{\metavariable{e}_b}{\EZ{\aux{gc}}(\metavariable{v})}{\metavariable{x}}}{\metavariable{D}}{\Remove{{\cal S}_4}{\metavariable{x}}}$.
\end{enumerate}
Moreover,
\begin{enumerate} [(A)]\addtocounter{enumii}{4}
\item let
${\cal S}''=\sum\limits_{i=2}^{4}(\Remove{{\cal S}_i}{\metavariable{x}})$,
\end{enumerate}
from (A), (C)$\div$(E) and rule \rn{T-block} we have that
\begin{center}
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}][\Gamma_{\metavariable{ds}'}]}{\BlockLab{\metavariable{dvs}\ \Subst{\metavariable{ds}}{\EZ{\aux{gc}}(\metavariable{v})}{\metavariable{x}}}{\Subst{\metavariable{e}_b}{\EZ{\aux{gc}}(\metavariable{v})}{\metavariable{x}}}{\Y}}{\metavariable{D}}{\Remove{{\cal S}''}{\dom{\metavariable{dvs}\,\metavariable{ds}}}}$
\end{center}
where $\Y=\Closure{\terminale{res}}{{{\cal S}''}}\cap\dom{\metavariable{dvs}\,\metavariable{ds}}$. Since
$\Closure{\metavariable{x}}{{\cal S}'_1}=\{\metavariable{x}\}$, we have that
$\metavariable{x}\not\in\Closure{\terminale{res}}{{\cal S}'_1}$. Moreover
$\dom{\metavariable{dvs}\,\metavariable{ds}}\cup\{\metavariable{x}\}=\dom{\metavariable{ds}'}$. Therefore we have that
$\X=\Remove{\X}{\metavariable{x}}=\Y$. From $\Finer{\Remove{{\cal S}_i}{\metavariable{x}}}{{\cal S}_i}$
($2\leq i\leq 4$) and \refToProp{lessSrRel}.\ref{p2} we get
$\Finer{{\cal S}''}{{\cal S}'_1}$.
\item The result is proved as in the case of \rn{field-assign} since from
$\Finer{{\cal S}_2}{{\cal S}_1}$ by \refToLemma{context} we derive
$\Finer{{\cal S}'_x}{{\cal S}_x}$.
\end{enumerate}
\end{proof}
\noindent{\bf Lemma \ref{lemma:decomposition}.} (Decomposition)
{\it If $\metavariable{e}$ is not a value, then there are
${\cal{E}}$ and $\rho$ such that $\congruence{\metavariable{e}}{\Ctx{\rho}}$.}
\begin{proof}
By structural induction on expressions. \\ If
\underline{$\FieldAssign{\metavariable{v}}{\metavariable{f}}{\metavariable{v}'}$}, then from \refToProp{value} we
have that $\metavariable{v}=\metavariable{x}$ or $\metavariable{v}=\BlockLab{\metavariable{dvs}}{\metavariable{x}}{\X}$ and $\metavariable{v}'=\metavariable{y}$ or
$\metavariable{v}'=\BlockLab{\metavariable{dvs}'}{\metavariable{y}}{\Y}$. If $\metavariable{v}=\BlockLab{\metavariable{dvs}}{\metavariable{x}}{\X}$ and
$\metavariable{v}'=\BlockLab{\metavariable{dvs}'}{\metavariable{y}}{\Y}$, we may assume, by $\alpha$-renaming, that
$\dom{\metavariable{dvs}}\cap\dom{\metavariable{dvs}'}=\emptyset$. From rule \rn{val-ctx} (applied twice)
$\congruence{\FieldAssign{\metavariable{v}}{\metavariable{f}}{\metavariable{v}'}}{\BlockLab{\metavariable{dvs}\
\metavariable{dvs}'}{\FieldAssign{\metavariable{x}}{\metavariable{f}}{\metavariable{y}}}{\X\cup\Y}}$. So
$\congruence{\FieldAssign{\metavariable{v}}{\metavariable{f}}{\metavariable{v}'}}{\Ctx{\FieldAssign{\metavariable{x}}{\metavariable{f}}{\metavariable{y}}}}$
where ${\cal{E}}=\BlockLab{\metavariable{dvs}\ \metavariable{dvs}'}{[\ ]}{\X\cup\Y}$. The other cases are
easier.\\ For field access the proof is similar. \\ Method call is a redex so
the result holds with ${\cal{E}}=[\ ]$.\\ If
\underline{$\BlockLab{\metavariable{ds}}{\metavariable{e}}{\X}$} is not a value, then either
\begin{enumerate}[(1)]
\item $\metavariable{ds}=\metavariable{dvs}\,\Dec{\metavariable{T}}{\metavariable{x}}{\metavariable{e}_1}\,\metavariable{ds}_1$ where $\metavariable{e}_1$ is not
$\ConstrCall{\metavariable{C}}{\metavariable{xs}}$ for some $\metavariable{C}$ and $\metavariable{xs}$ or
\item $\metavariable{ds}=\metavariable{dvs}$ and $\metavariable{e}$ is not a value.
\end{enumerate}
In \underline{case (1)}, either $\metavariable{e}_1$ is not a value or $\metavariable{e}_1$ is a value but not $\ConstrCall{\metavariable{C}}{\metavariable{xs}}$ for some $\metavariable{C}$
and $\metavariable{xs}$.\\
In case $\metavariable{e}_1$ is not a value, by induction hypothesis, there are ${\cal{E}}$, and $\rho$ such that
$\congruence{\metavariable{e}_1}{\Ctx{\rho}}$. Applying congruence rule \rn{reorder} of
\refToFigure{congruence} we have that
\begin{center}
$\congruence{\BlockLab{\metavariable{dvs}\,\metavariable{dvs}_1\Dec{\metavariable{T}}{\metavariable{x}}{\metavariable{e}_1}\,\metavariable{ds}_2}{\metavariable{e}}{\X}}{\BlockLab{\metavariable{dvs}\,\Dec{\metavariable{T}}{\metavariable{x}}{\metavariable{e}_1}\,\metavariable{ds}_1}{\metavariable{e}}{\X}}$
\end{center}
where $\metavariable{dvs}_1$ are all the evaluated declarations of $\metavariable{ds}_1$ and $\metavariable{ds}_2$ are the remaining ones. Therefore
$\congruence{\BlockLab{\metavariable{ds}}{\metavariable{e}}{\X}}{\CtxP{\rho}}$ where
${\cal E}'=\BlockLab{\metavariable{dvs}\,\metavariable{dvs}_1\Dec{\metavariable{T}}{\metavariable{x}}{{\cal{E}}}\,\metavariable{ds}_2}{\metavariable{e}}{\X}$.\\
In case $\metavariable{e}_1$ is a value but not $\ConstrCall{\metavariable{C}}{\metavariable{xs}}$ for some $\metavariable{C}$
and $\metavariable{xs}$, by \refToProp{value}, either $\congruence{\metavariable{e}_1}{\metavariable{y}}$ or
$\congruence{\metavariable{e}_1}{\BlockLab{\metavariable{dvs}'}{\metavariable{y}}{\Y}}$. If $\metavariable{T}=\Type{\terminale{a}}{\metavariable{D}}$ for
some $\metavariable{D}$, then the block is a redex. Otherwise, we consider the two cases. \\ If $\congruence{\metavariable{e}_1}{\metavariable{y}}$,
then $\BlockLab{\metavariable{ds}}{\metavariable{e}}{\X}$ is congruent to
$\BlockLab{\metavariable{dvs}\,\Dec{\metavariable{T}}{\metavariable{x}}{\metavariable{y}}\,\metavariable{ds}_1}{\metavariable{e}}{\X}$, which is a redex. \\ If
$\congruence{\metavariable{e}_1}{\BlockLab{\metavariable{dvs}'}{\metavariable{y}}{\Y}}$, then
$\congruence{\BlockLab{\metavariable{dvs}\,\Dec{\metavariable{T}}{\metavariable{x}}{\metavariable{e}_1}\,\metavariable{ds}_1}{\metavariable{e}}{\X}}{\BlockLab{\metavariable{dvs}\,\Dec{\metavariable{T}}{\metavariable{x}}{\BlockLab{\metavariable{dvs}'}{\metavariable{y}}{\Y}}\,\metavariable{ds}_1}{\metavariable{e}}{\X}}$.
Since $\metavariable{T}=\metavariable{D}$ for some $\metavariable{D}$, with $\alpha$-renaming of variables in $\dom{\metavariable{dvs}'}$, applying congruence rule \rn{dec} we have
\begin{center}
$\congruence{\BlockLab{\metavariable{dvs}\,\Dec{\metavariable{T}}{\metavariable{x}}{\metavariable{e}_1}\,\metavariable{ds}_1}{\metavariable{e}}{\X}}{\BlockLab{\metavariable{dvs}\,\metavariable{dvs}'\Dec{\metavariable{T}}{\metavariable{x}}{{\metavariable{y}}}\,\metavariable{ds}_1}{\metavariable{e}}{\X}}$
\end{center}
and the expression on the right is a redex.\\
In \underline{case (2)}, by
induction hypothesis, there are ${\cal{E}}$, and $\rho$ such that
$\congruence{\metavariable{e}}{\Ctx{\rho}}$. Therefore
$\congruence{\BlockLab{\metavariable{ds}}{\metavariable{e}}{\X}}{\CtxP{\rho}}$ where
${\cal E}'=\BlockLab{\metavariable{dvs}}{{\cal{E}}}{\X}$.
\end{proof}
\section{The calculus}\label{sect:calculus}
The calculus has a simplified syntax, defined in \refToFigure{calculus}, where
we assume that, except for right-hand sides of declarations and bodies of
blocks, subterms of a compound expression are only values. This simplification
can be easily obtained by a (type-driven) translation of the syntax of
\refToFigure{syntax} generating for each subterm {which is not a value} a local
declaration of the appropriate type. Moreover we omit primitive types.
\EZ{Finally, the syntax describes runtime terms, where blocks are annotated as described in the previous section.}
\begin{figure}[t]
\begin{grammatica}
\produzione{\metavariable{e}}{\metavariable{x}\,{\mid}\,\FieldAccess{\metavariable{v}}{\metavariable{f}}\,{\mid}\,\MethCall{\metavariable{v}}{\metavariable{m}}{\metavariable{vs}}\,{\mid}\,\FieldAssign{\metavariable{v}}{\metavariable{f}}{\metavariable{v}}\,{\mid}\,\ConstrCall{\metavariable{C}}{\metavariable{vs}}}{expression}\\
\seguitoproduzione{\,{\mid}\,\EZ{\BlockLab{\metavariable{ds}}{\metavariable{e}}{\X}}}{}\\
\produzione{\metavariable{d}}{\Dec{\metavariable{T}}{\metavariable{x}}{\metavariable{e}}}{declaration}\\ \\
\produzione{\metavariable{T}}{\Type{\mu}{\metavariable{C}}}{declaration type}\\
\produzione{\metavariable{v}}{{\metavariable{x}\mid\BlockLab{\metavariable{dvs}}{{\metavariable{v}}}{\X}\mid\ConstrCall{\metavariable{C}}{\metavariable{vs}}}}{value}\\
\produzione{\metavariable{dv}}{\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\metavariable{vs}}}}{evaluated declaration}\\
\end{grammatica}
\caption{Syntax of calculus, values, and evaluated declarations}\label{fig:calculus}
\end{figure}
A {\em value} is the result of the reduction of an expression, and is either a
variable (a reference to an object), or a block where the declarations are
evaluated (hence, correspond to a local store) and the body is in turn a
value\EZ{, or a constructor call where arguments are evaluated.}
A sequence $\metavariable{dvs}$ of \emph{evaluated declarations} plays the role of the store
in conventional models of imperative languages, that is, each $\metavariable{dv}$ can be seen
as an association of \EZ{a right-value to a reference.}
As anticipated in \refToSection{language}, mutual recursion among evaluated
declarations is allowed, whereas we do not allow references to variables on the
left-hand side of forward unevaluated declarations. E.g.,
\lstinline{C x= new C(y); C y= new C(x); x}{} is allowed, whereas
\lstinline{C x= new C(y); C y= x.f; x}{} is not.
That is, our calculus supports \emph{recursive object initialization},
whereas, e.g., in Java, we cannot directly create two mutually referring
objects as in the allowed example above, but we need to first initialize their
fields to \texttt{null}. However, to make recursive object initialization safe,
we should prevent access to an object which is not fully initialized yet, as in
the non-allowed example. Here we take a simplifying assumption, just requiring
initialization expressions to be values. More permissive assumptions can be
expressed by a sophisticated type system as in \cite{ServettoEtAl13}.
Semantics is defined by a \emph{congruence} relation, which captures structural
equivalence, and a \emph{reduction} relation, which models actual computation,
similarly to what happens, e.g., in $\pi$-calculus \cite{Milner99}.
The congruence relation, denoted by $\congruence{}{}$, is defined as the
smallest congruence satisfying the axioms in \refToFigure{congruence}. We
write $\FV{\metavariable{ds}}$ and $\FV{\metavariable{e}}$ for the free variables of a sequence of
declarations and of an expression, respectively, and $\Subst{\X}{\metavariable{y}}{\metavariable{x}}$,
$\Subst{\metavariable{ds}}{\metavariable{y}}{\metavariable{x}}$, and $\Subst{\metavariable{e}}{\metavariable{y}}{\metavariable{x}}$ for the capture-avoiding
variable substitution on a set of variables, a sequence of declarations, and an
expression, respectively, all defined in the standard way.
\begin{figure}[!htbp]
{
\begin{math}
\begin{array}{l}
{\NamedRuleOL{alpha}{{\congruence{\BlockLab{\metavariable{ds}\ \Dec{\metavariable{T}}{\metavariable{x}}{\metavariable{e}}\ \metavariable{ds}'}{\metavariable{e}'}{\X}}{\BlockLab{\Subst{\metavariable{ds}}{\metavariable{y}}{\metavariable{x}}\ \Dec{\metavariable{T}}{\metavariable{y}}{\Subst{\metavariable{e}}{\metavariable{y}}{\metavariable{x}}}\ \Subst{\metavariable{ds}'}{\metavariable{y}}{\metavariable{x}}}{\Subst{\metavariable{e}'}{\metavariable{y}}{\metavariable{x}}}{{\Subst{\X}{\metavariable{y}}{\metavariable{x}}}}}}}{}}
\\[3ex]
\NamedRuleOL{reorder}{\congruence{
\BlockLab{
\metavariable{ds}\ \Dec{\metavariable{C}}{\metavariable{x}}{{\ConstrCall{\metavariable{C}}{\EZ{\metavariable{vs}}}}}\ \metavariable{ds}'
}{\metavariable{e}}{\X}}{
\BlockLab{
\Dec{\metavariable{C}}{\metavariable{x}}{{\ConstrCall{\metavariable{C}}{\EZ{\metavariable{vs}}}}}\ \metavariable{ds}\ \metavariable{ds}'\ }{\metavariable{e}}{\X}}}{}
\\[3ex]
\NamedRuleOL{new}{\congruence{
\ConstrCall{\metavariable{C}}{\EZ{\metavariable{vs}}}
}{
\BlockLab{\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\EZ{\metavariable{vs}}}}}{\metavariable{x}}{{\{\metavariable{x}\}}}}
}{}
\\[3ex]
\NamedRuleOL{{block-elim}}{\congruence{{\BlockLab{}{\metavariable{e}}{\emptyset}}}{\metavariable{e}}}{}
\\[2ex]
\NamedRuleOL{{{dec}}}
{
\PG{
\begin{array}{l}
\BlockLab{\metavariable{dvs}\ \Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{dvs}_1\ \metavariable{ds}_2}{\metavariable{e}}{\X}}\ \metavariable{ds}'}{\metavariable{e}'}{\Y}\cong \\
\hskip 1.5em\BlockLab{\metavariable{dvs}\ \metavariable{dvs}_1\ \Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{\BlockLab{\metavariable{ds}_2}{\metavariable{e}}{\X'}}\ \metavariable{ds}'}{\metavariable{e}'}{{\Y'}}
\end{array}
}
}
{
\!\!\!\!\!\!\!\!\!\!\begin{array}{l}
\FV{\metavariable{dvs}_1}\cap\dom{\metavariable{ds}_2}=\emptyset\\
\FV{\metavariable{dvs}\,\metavariable{ds}'\,{\metavariable{e}'}}{\cap}\dom{\metavariable{dvs}_1}{=}\emptyset\\
\mu=\terminale{a}{\implies}\dom{\metavariable{dvs}_1}{\cap}\X{=}\emptyset
\end{array}}
\\[4ex]
\NamedRuleOL{{body}}
{\begin{array}{l}
{\BlockLab{\EZ{\metavariable{dvs}}}{\BlockLab{\EZ{\metavariable{dvs}_1}\ \metavariable{ds}_2}{\metavariable{e}}{{\X}}}{{\Y}}}\cong\\
\hskip 1.5em{\BlockLab{\EZ{\metavariable{dvs}}\ \EZ{\metavariable{dvs}_1}}{
\BlockLab{\metavariable{ds}_2}{\metavariable{e}}{{\X'}}}{{\Y'}}}
\end{array}
}
{
\begin{array}{l}
\FV{\EZ{\metavariable{dvs}_1}}\cap\dom{\metavariable{ds}_2}=\emptyset\\
\FV{\EZ{\metavariable{dvs}}}\cap\dom{\EZ{\metavariable{dvs}_1}}=\emptyset
\end{array}}
\\[4ex]
\NamedRuleOL{val-ctx}{
\begin{array}{l}
{\ValCtx{\BlockLab{{\metavariable{dvs}_1\,\metavariable{dvs}_2}}{\metavariable{v}}{\X}}}\cong\\
\hskip 1.5em{\BlockLab{{\metavariable{dvs}_1}}{\ValCtx{\BlockLab{{\metavariable{dvs}_2}}{\metavariable{v}}{\X'}}}{\Y}}
\end{array}}
{
\begin{array}{l}
\FV{{\metavariable{dvs}_1}}\cap\dom{{\metavariable{dvs}_2}}=\emptyset\\
\FV{{\cal{V}}}\cap\dom{{\metavariable{dvs}_1}}=\emptyset
\end{array}}
\end{array}
\end{math}
}
\caption{Congruence rules}
\label{fig:congruence}
\end{figure}
Rule \rn{alpha} is the usual $\alpha$-conversion. The condition
$\metavariable{x},\metavariable{y}\not\in\dom{\metavariable{ds}\,\metavariable{ds}'}$ is implicit by well-formedness of blocks. \\
Rule \rn{reorder} states that we can move evaluated declarations in an
arbitrary order. Note that, instead, $\metavariable{ds}$ and $\metavariable{ds}'$ cannot be swapped,
because this could change the order of side effects. \\
In rule \rn{new}, a constructor invocation can be seen as an elementary block
where a new object is allocated.\\
Rule \rn{{block-elim}} states that a block with no
declarations is equivalent to its body.\\
With the remaining rules we can move a sequence of declarations from a block to
the directly enclosing block, or conversely, as it happens with rules for
\emph{scope extension} in the $\pi$-calculus \cite{Milner99}.\\
In rules \rn{dec} and \rn{body}, the inner block is the right-hand
side of a declaration, or the body, respectively, of the enclosing block. The first two
side conditions ensure that moving the declarations outside the block causes
neither scope extrusion nor capture of free variables. More precisely: the first
prevents moving outside declarations which depend on local
variables of the inner block. The second prevents capturing free
variables of the enclosing block. Note that the second condition can be
obtained by $\alpha$-conversion of the inner block, but the first cannot.
Finally, the third side condition of rule \rn{dec} prevents, in case the
block {initializes} an affine variable, moving outside declarations of
variables that {will possibly be} connected to the result of the block.
Indeed, in this case we would get an ill-typed term. In case of a non-affine
declaration, instead, this is not a problem.\\
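As an illustration of rule \rn{dec}, assume, as in the examples of the next paragraphs, a class \lstinline{A} with an \lstinline{int} field and a class \lstinline{B} with an \lstinline{A} field \lstinline{f}. Then, the term (without annotations)
\begin{lstlisting}
A a= new A(0); B b= {A a1= new A(1); new B(a1)}; b.f
\end{lstlisting}
is congruent, by rule \rn{dec} followed by \rn{{block-elim}}, to
\begin{lstlisting}
A a= new A(0); A a1= new A(1); B b= new B(a1); b.f
\end{lstlisting}
since \lstinline{a1} is not free in the enclosing block; if the enclosing block used the name \lstinline{a1}, rule \rn{alpha} should be applied first.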
Rule \rn{val-ctx} handles the cases when the inner block is a subterm of a
field access, method invocation, field assignment or constructor invocation.
Note that in this case the inner block is necessarily a (block) value. To
express all such cases in a compact way, we define {\em value contexts}
${\cal{V}}$ in the following way:
\begin{center}
${\cal{V}}::=[\ ]\mid\FieldAccess{{\cal{V}}}{\metavariable{f}}\mid\FieldAssign{{\cal{V}}}{\metavariable{f}}{\metavariable{v}}\mid\FieldAssign{\metavariable{v}}{\metavariable{f}}{{\cal{V}}}\mid\ConstrCall{\metavariable{C}}{\metavariable{vs},{\cal{V}},\metavariable{vs}'}$
\end{center}
For instance, if ${\cal{V}}=\ConstrCall{\metavariable{C}}{\metavariable{vs},[\ ],\metavariable{vs}'}$, we get
{\small \begin{center}
${\ConstrCall{\metavariable{C}}{\metavariable{vs},\BlockLab{{\metavariable{dvs}_1\,\metavariable{dvs}_2}}{\metavariable{v}}{\X},\metavariable{vs}'}\cong
\BlockLab{\metavariable{dvs}_1}{{\ConstrCall{\metavariable{C}}{\metavariable{vs},\BlockLab{\metavariable{dvs}_2}{\metavariable{v}}{\X'},\metavariable{vs}'}}}{\Y}}$
\end{center}}
\vspace{-5pt}
As for rules \rn{dec} and \rn{body}, the first side condition prevents moving
outside a declaration in $\metavariable{dvs}_1$ which depends on local variables of the inner
block, and the second side condition prevents capturing free variables of
${\cal{V}}$\EZ{, defined in the standard way.}
The following definition introduces a simplified syntactical form for values
and evaluated declarations. \EZ{In this canonical form, a sequence of evaluated declarations (recall that its order is immaterial) can be seen as a \emph{store} which associates to references \emph{object states} of shape $\ConstrCall{\metavariable{C}}{\metavariable{xs}}$, where fields contain in turn references, and a value is a
variable (a reference to an object) possibly enclosed in a local store. }
\begin{mydefinition}\label{def:canonicalVal}\
\begin{enumerate}
\item A sequence of evaluated declarations $\metavariable{dvs}$ is \emph{in canonical form}
if, for all $\metavariable{dv}$ in $\metavariable{dvs}$, $\metavariable{dv}=\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\metavariable{xs}}}$ for
some $\metavariable{C}$, $\metavariable{x}$ and $\metavariable{xs}$.
\item A value $\metavariable{v}$ is \emph{in canonical form} if either $\metavariable{v}=\metavariable{x}$ for some
$\metavariable{x}$, or $\metavariable{v}=\BlockLab{\metavariable{dvs}}{\metavariable{x}}{\X}$ for some $\X$, $\metavariable{dvs}$, and $\metavariable{x}$,
with $\metavariable{dvs}\neq\epsilon$ in canonical form.
\end{enumerate}
\end{mydefinition}
The following proposition shows that congruence allows us to assume that values
are in canonical form.
\begin{myproposition}\label{prop:value}
If $\metavariable{v}$ is a value, then there exists $\metavariable{v}'$ such that $\congruence{\metavariable{v}}{\metavariable{v}'}$ and $\metavariable{v}'$ is in
canonical form.
\end{myproposition}
\begin{proof}
By structural induction on values, and for blocks by induction on the number of
declarations that are not in canonical form. The full proof is in
\ref{app:proofs}.
\end{proof}
\EZ{From now on}, unless otherwise stated, we assume that values and evaluated
declarations are in canonical form.
\EZ{Moreover, we also need to characterize values which are garbage-free, in the sense that they do not contain useless store.
To this end, we first inductively define $\connected{\metavariable{ds}}{\X}{\metavariable{x}}$, meaning that $\metavariable{x}$ is (transitively)
used by $\X$ through $\metavariable{ds}=\Dec{\metavariable{T}_1}{\metavariable{x}_1}{\metavariable{e}_1}\ldots\Dec{\metavariable{T}_n}{\metavariable{x}_n}{\metavariable{e}_n}$, by:
\begin{quote}
$\connected{\metavariable{ds}}{\X}{\metavariable{x}}$ if $\metavariable{x}\in\X$\\
$\connected{\metavariable{ds}}{\X}{\metavariable{x}}$ if $\metavariable{x}\in\FV{\metavariable{e}_i}$, for some $i\in 1..n$, and $\connected{\metavariable{ds}}{\X}{\metavariable{x}_i}$.
\end{quote}
Then, we write $\Reduct{\metavariable{ds}}{\X}$ for the subsequence of $\metavariable{ds}$ (transitively) used by $\X$, defined by: for all $i\in 1..n$,
$\Dec{\metavariable{T}_i}{\metavariable{x}_i}{\metavariable{e}_i}\in\Reduct{\metavariable{ds}}{\X}$ if
$\connected{\metavariable{ds}}{\X}{\metavariable{x}_i}$.\\
Finally, we define }
$\EZ{\aux{gc}}(\BlockLab{\metavariable{dvs}}{\metavariable{x}}{\X})=\BlockLab{\Reduct{\metavariable{dvs}}{\metavariable{x}}}{\metavariable{x}}{{\X}\cap{\dom{\Reduct{\metavariable{dvs}}{\metavariable{x}}}}}$, and we say that a value $\metavariable{v}$ is \emph{garbage-free} if either $\metavariable{v}=\metavariable{x}$ or $\metavariable{v}=\EZ{\aux{gc}}(\metavariable{v})$.\\
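For instance, assuming a class $\metavariable{C}$ with a single field of type $\metavariable{C}$, consider the value\\
\centerline{$\metavariable{v}=\BlockLab{\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\metavariable{x}}}\ \Dec{\metavariable{C}}{\metavariable{y}}{\ConstrCall{\metavariable{C}}{\metavariable{y}}}}{\metavariable{x}}{\{\metavariable{x}\}}$}\\
The declaration of $\metavariable{y}$ is not (transitively) used by $\{\metavariable{x}\}$, hence $\EZ{\aux{gc}}(\metavariable{v})=\BlockLab{\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\metavariable{x}}}}{\metavariable{x}}{\{\metavariable{x}\}}$. So $\metavariable{v}$ is not garbage-free, whereas $\EZ{\aux{gc}}(\metavariable{v})$ is.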
{\em Evaluation contexts}, defined below, express standard left-to-right
evaluation.
\begin{center}
${\cal{E}}::=[\ ]\mid\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{x}}{{\cal{E}}}\ \metavariable{ds}}{\metavariable{e}}{\X}\mid \BlockLab{\metavariable{dvs}}{{\cal{E}}}{\X}$
\end{center}
In the evaluation context $\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{x}}{{\cal{E}}}\ \metavariable{ds}}{\metavariable{e}}{\X}$
we assume that no declaration in $\metavariable{ds}$ is evaluated. \PG{This can always be
achieved by the congruence rule \rn{reorder}.}
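For instance, assuming classes \lstinline{A} and \lstinline{B} as in the examples below, the term (without annotations) \lstinline{A a= new A(0); B b= new B(a); A a1= b.f; a1}{} decomposes as $\Ctx{\FieldAccess{\texttt{b}}{\texttt{f}}}$ where ${\cal{E}}=\Block{\texttt{A a= new A(0); B b= new B(a); A a1= }[\ ]}{\texttt{a1}}$: the first two declarations are evaluated, and the hole is the right-hand side of the first unevaluated one.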
We introduce now some notations which will be used in reduction rules. We
write $\metavariable{dvs}(\metavariable{x})$ for {\em the declaration of $\metavariable{x}$ in $\metavariable{dvs}$}, if any (recall
that in well-formed blocks there are no multiple declarations for the same
variable). We write $\HB{{\cal{E}}}$ for the \emph{hole binders} of ${\cal{E}}$, that
is, the variables declared in blocks enclosing the context hole, defined by: \\
\indent$\bullet$ if ${\cal{E}}=\Block{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{x}}{{\cal E}'}\ \metavariable{ds}}{\metavariable{e}}$,
then $\HB{{\cal{E}}}=\dom{\metavariable{dvs}}\cup\{\metavariable{x}\}\cup\HB{{\cal E}'}\cup\dom{\metavariable{ds}}$\\
\indent $\bullet$ if ${\cal{E}}=\Block{\metavariable{dvs}}{{\cal E}'}$,
then $\HB{{\cal{E}}}=\dom{\metavariable{dvs}}\cup\HB{{\cal E}'}$\\
We write $\decCtx{{\cal{E}}}{\metavariable{x}}$ and $\extractDec{{\cal{E}}}{\metavariable{x}}$ for the {\em
sub-context {declaring} $\metavariable{x}$} and the {\em evaluated declaration of $\metavariable{x}$} extracted from
${\cal{E}}$, defined as follows:\\
\indent$\bullet$ let ${\cal{E}}=\Block{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{y}}{{\cal E}'}\ \metavariable{ds}}{\metavariable{e}}$\\
\indent\indent- if {$\metavariable{dvs}(\metavariable{x})=\metavariable{dv}$ and $\metavariable{x}\not\in\HB{{\cal E}'}$}, then
$\decCtx{{\cal{E}}}{\metavariable{x}}=\Block{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{y}}{[\ ]}\ \metavariable{ds}}{\metavariable{e}}$ and\\
\indent\indent\ \ $\extractDec{{\cal{E}}}{\metavariable{x}}={\metavariable{dv}}$\\
\indent\indent- else
$\decCtx{{\cal{E}}}{\metavariable{x}}=\Block{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{y}}{\decCtx{{\cal E}'}{\metavariable{x}}}\ \metavariable{ds}}{\metavariable{e}}$
and $\extractDec{{\cal{E}}}{\metavariable{x}}=\extractDec{{\cal E}'}{\metavariable{x}}$\\
\indent$\bullet$ let ${\cal{E}}=\Block{\metavariable{dvs}}{{\cal E}'}$\\
\indent\indent- if {$\metavariable{dvs}(\metavariable{x})=\metavariable{dv}$ and $\metavariable{x}\not\in\HB{{\cal E}'}$}, then
$\decCtx{{\cal{E}}}{\metavariable{x}}=\Block{\metavariable{dvs}}{[\ ]}$ and
$\extractDec{{\cal{E}}}{\metavariable{x}}={\metavariable{dv}}$\\
\indent\indent- else $\decCtx{{\cal{E}}}{\metavariable{x}}=\Block{\metavariable{dvs}}{\decCtx{{\cal E}'}{\metavariable{x}}}$, and
$\extractDec{{\cal{E}}}{\metavariable{x}}=\extractDec{{\cal E}'}{\metavariable{x}}$.\\
Note that $\decCtx{{\cal{E}}}{\metavariable{x}}$ and $\extractDec{{\cal{E}}}{\metavariable{x}}$ are not defined if \EZ{there is no evaluated declaration for
$\metavariable{x}$} in some block enclosing the context hole.
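For instance, assuming a class $\metavariable{C}$ with no fields, for ${\cal{E}}=\Block{\texttt{C x= new C()}}{\Block{\texttt{C y= new C()}}{[\ ]}}$ we have $\decCtx{{\cal{E}}}{\texttt{x}}=\Block{\texttt{C x= new C()}}{[\ ]}$ and $\extractDec{{\cal{E}}}{\texttt{x}}=\texttt{C x= new C()}$, since \texttt{x} has no declaration in the inner block enclosing the hole.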
Reduction rules are given in \refToFigure{reduction}.
\begin{figure}[!htbp]
\begin{small}
\begin{math}
\begin{array}{l}
\PG{\NamedRule{congr}{\reduce{\metavariable{e}'}{\metavariable{e}''}
}{
\reduce{{\MS{\metavariable{e}}}}{\metavariable{e}''}
}{ \begin{array}{l}
\congruence{\metavariable{e}}{\metavariable{e}'}
\end{array}
}}
\\[4ex]
{\NamedRuleOL{field-access}{\reduce{\Ctx{\FieldAccess{\metavariable{x}}{\metavariable{f}_i}}}{\Ctx{\metavariable{x}_i}}}{
\begin{array}{l}
\extractDec{{\cal{E}}}{\metavariable{x}}=\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\metavariable{x}_1,\ldots,\metavariable{x}_n}}\\
\fields{\metavariable{C}}=\Field{\metavariable{C}_1}{\metavariable{f}_1}\ldots\Field{\metavariable{C}_n}{\metavariable{f}_n}\\
i\in 1..n\\
{{\cal{E}}=\DecCtx{{\cal{E}}}{\metavariable{x}}{{\cal E}'}\ \wedge\ {\metavariable{x}_i\not\in\HB{{\cal E}'}}}
\end{array}
}}
\\[4ex]
{\NamedRuleOL{invk}{
\begin{array}{l}
{\Ctx{\MethCall{\metavariable{v}}{\metavariable{m}}{\metavariable{v}_1,..,\metavariable{v}_n}}}\longrightarrow\\
{\Ctx{\Block{\Dec{\Type{\mu}{\metavariable{C}}}{\terminale{this}}{\metavariable{v}}\, \Dec{\metavariable{T}_1}{\metavariable{x}_1}{\metavariable{v}_1}..\Dec{\metavariable{T}_n}{\metavariable{x}_n}{\metavariable{v}_n}}{\metavariable{e}}}}
\end{array}
}{
\begin{footnotesize}
\begin{array}{l}
\!\!\class{{\cal{E}}}{\metavariable{v}}=\metavariable{C}\\
\!\!\method{\metavariable{C}}{\metavariable{m}}{=}\FourTuple{\metavariable{D}}{\mu}{\Param{\metavariable{T}_1}{\metavariable{x}_1}..\Param{\metavariable{T}_n}{\metavariable{x}_n}}{\metavariable{e}}
\end{array}
\end{footnotesize}
}}
\\[4ex]
{\NamedRuleOL{field-assign}{
\reduce{\Ctx{\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}}}{{\UpdateCtx{\metavariable{y}}{\metavariable{x}}{i}{\metavariable{y}}}}
}{
\begin{array}{l}
\extractDec{{\cal{E}}}{\metavariable{x}}=\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\metavariable{x}_1,\ldots,\metavariable{x}_n}}\\
\fields{\metavariable{C}}=\Field{\metavariable{C}_1}{\metavariable{f}_1}\ldots\Field{\metavariable{C}_n}{\metavariable{f}_n}\\
i\in 1..n\\
{{\cal{E}}=\DecCtx{{\cal{E}}}{\metavariable{x}}{{\cal E}'}\ \wedge\ {\metavariable{y}\not\in\HB{{\cal E}'}}}\\
\end{array}}}
\\[6ex]
{\NamedRuleOL{alias-elim}{\reduce{\Ctx{\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{C}}{\metavariable{x}}{\metavariable{y}}\ \metavariable{ds} }{\metavariable{e}}{\X}}}{\Ctx{{\BlockLab{\metavariable{dvs}\ \Subst{\metavariable{ds}}{\metavariable{y}}{\metavariable{x}}}{\Subst{\metavariable{e}}{\metavariable{y}}{\metavariable{x}}}{\X\setminus\{\metavariable{x}\}}}}}}{}}
\\[4ex]
{\NamedRuleOL{affine-elim}{\reduce{\Ctx{\BlockLab{\metavariable{dvs}\ \Dec{\Type{\terminale{a}}{\metavariable{C}}}{\metavariable{x}}{\metavariable{v}}\ \metavariable{ds} }{\metavariable{e}}{\X}}}{\Ctx{{\BlockLab{\metavariable{dvs}\ \SubstVal{\metavariable{ds}}{\metavariable{v}'}{\metavariable{x}}}{\SubstVal{\metavariable{e}}{\metavariable{v}'}{\metavariable{x}}}{\PG{\X}}}}}}{\!\!\!\!\!\!\PG{\metavariable{v}'=\EZ{\aux{gc}}(\metavariable{v})}}}
\end{array}
\end{math}
\end{small}
\caption{{Reduction rules}}
\label{fig:reduction}
\end{figure}
Rule \rn{congr} can be used to reduce a term which
otherwise would be stuck, as it happens for the $\alpha$-rule in lambda calculus.
In rule \rn{field-access}, given a field access of shape
$\FieldAccess{\metavariable{x}}{\metavariable{f}}$, the first enclosing declaration for $\metavariable{x}$ is found
(through the auxiliary function \textsf{dec}). The fields of the class $\metavariable{C}$ of
$\metavariable{x}$ are retrieved from the class table. If $\metavariable{f}$ is actually the name of a
field of $\metavariable{C}$, say, the $i$-th, then the field access is reduced to the
reference $\metavariable{x}_i$ stored in this field. In the last side condition,
$\decCtx{{\cal{E}}}{\metavariable{x}}$ is the (necessarily defined) sub-context containing the
first enclosing declaration for $\metavariable{x}$, and the condition $\metavariable{x}_i\not\in\HB{{\cal E}'}$
ensures that there are no declarations for $\metavariable{x}_i$ in inner blocks (otherwise
$\metavariable{x}_i$ would be erroneously bound). This can always be obtained by rule
\rn{alpha} of \refToFigure{congruence}.
For instance, assuming a class table where class \lstinline{A} has an
\lstinline{int} field, and class \lstinline{B} has an \lstinline{A} field \lstinline{f},
without this side condition, the term (without annotations):
\begin{lstlisting}
A a= new A(0); B b= new B(a); {A a= new A(1); b.f}
\end{lstlisting}
{would reduce to}
\begin{lstlisting}
A a= new A(0); B b= new B(a); {A a= new A(1); a}
\end{lstlisting}
{whereas this reduction is forbidden, and by rule \rn{alpha} the term is
instead reduced to}
\begin{lstlisting}
A a= new A(0); B b= new B(a); {A a1= new A(1); a}
\end{lstlisting}
For this example:
$\decCtx{{\cal{E}}}{\texttt{b}}=\Block{\texttt{A a = new A(0); B b = new B(a)}}{[\ ]}$ and
${\cal E}'=\Block{\texttt{A a1 = new A(1)}}{[\ ]}$.\\
In rule \rn{{invk}}, the class $\metavariable{C}$ of the receiver $\metavariable{v}$ is found through the
auxiliary function \textit{class} defined by
\begin{quote}
$\class{{\cal{E}}}{\metavariable{x}}=\metavariable{C}$ if $\extractDec{{\cal{E}}}{\metavariable{x}}=\Dec{\Type{}{\metavariable{C}}}{\metavariable{x}}{\_}$\\
$\class{{\cal{E}}}{\BlockLab{\metavariable{dvs}}{\metavariable{x}}{\X}}=\metavariable{C}$ if $\metavariable{dvs}(\metavariable{x})=\Dec{\Type{}{\metavariable{C}}}{\metavariable{x}}{\_}$
\end{quote}
and method $\metavariable{m}$ of $\metavariable{C}$, if any, is retrieved from the class table. The call is
reduced to a block where declarations of the appropriate type for $\terminale{this}$ and the
parameters are initialized with the receiver and the arguments, respectively, and the body is the method body.
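As a sketch, if $\class{{\cal{E}}}{\metavariable{y}}=\metavariable{C}$ and $\method{\metavariable{C}}{\metavariable{m}}{=}\FourTuple{\metavariable{D}}{\mu}{\Param{\metavariable{T}_1}{\metavariable{x}_1}}{\metavariable{x}_1}$, that is, $\metavariable{m}$ simply returns its parameter, then $\Ctx{\MethCall{\metavariable{y}}{\metavariable{m}}{\metavariable{z}}}$ reduces to $\Ctx{\Block{\Dec{\Type{\mu}{\metavariable{C}}}{\terminale{this}}{\metavariable{y}}\ \Dec{\metavariable{T}_1}{\metavariable{x}_1}{\metavariable{z}}}{\metavariable{x}_1}}$, and the declarations of the resulting block are then eliminated by rules \rn{alias-elim} or \rn{affine-elim}, described below.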
In rule \rn{field-assign}, given a field assignment of shape
$\FieldAssign{\metavariable{x}}{\metavariable{f}}{\metavariable{y}}$, the first enclosing declaration for $\metavariable{x}$ is found
(through the auxiliary function \textsf{dec}). If $\metavariable{f}$ is actually the name of
a field of $\metavariable{C}$, say, the $i$-th, then this first enclosing declaration is
updated, by replacing the $i$-th constructor argument by $\metavariable{y}$, obtaining
$\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\metavariable{x}_1,\ldots,\metavariable{x}_{i-1},\metavariable{y},\metavariable{x}_{i+1},\ldots,\metavariable{x}_n}}$, as
expressed by the notation $\updateCtx{\metavariable{y}}{\metavariable{x}}{i}$ (the obvious formal
definition of which is omitted). As for rule \rn{field-access} we have the side
condition that $\metavariable{y}\not\in\HB{{\cal E}'}$. This side condition, requiring that there
are no inner declarations for the reference $\metavariable{y}$, prevents scope extrusion,
since if $\metavariable{y}\in\HB{{\cal E}'}$, $\updateCtx{\metavariable{y}}{\metavariable{x}}{i}$ would take $\metavariable{y}$
outside the scope of its definition. {The congruence {rules \rn{dec} and
\rn{body}} of \refToFigure{congruence} can be used to {\em correctly} move the
declaration of $\metavariable{y}$ outside its declaration block, as previously described.}
For example, without the side condition, the term (without annotations)
\begin{lstlisting}
A a= new A(0); B b= new B(a); {A a1= new A(1); b.f=a1}
\end{lstlisting}
would reduce to
\begin{lstlisting}
A a= new A(0); B b= new B(a1); {A a1= new A(1); a1}
\end{lstlisting}
The original term, however, is congruent to
\begin{lstlisting}
A a= new A(0); B b= new B(a); A a1= new A(1); b.f=a1
\end{lstlisting}
by applying rule \rn{body}, and then \rn{{block-elim}}. This term reduces
correctly to
\begin{lstlisting}
A a= new A(0); B b= new B(a1); A a1= new A(1); a1
\end{lstlisting}
The last two rules eliminate evaluated declarations from a block.\\
In rule \rn{alias-elim}, a (non-affine) variable $\metavariable{x}$ which is
initialized as an alias of another reference $\metavariable{y}$ is eliminated by replacing
all its occurrences. In rule \rn{affine-elim}, an affine variable is eliminated
by replacing its unique occurrence with the value associated to its
declaration, from which we remove garbage, that is, local declarations not
connected to the result, so that the substituted value is itself garbage-free.
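For instance (a minimal sketch, with the class table used above), rule
\rn{alias-elim} reduces the term (without annotations)
\begin{lstlisting}
A a= new A(0); A a1= a; B b= new B(a1); b
\end{lstlisting}
to
\begin{lstlisting}
A a= new A(0); B b= new B(a); b
\end{lstlisting}
by replacing all occurrences of \lstinline{a1} with \lstinline{a}, whereas rule
\rn{affine-elim} reduces
\begin{lstlisting}
A$^\terminale{a}$ a2= {A a= new A(1); a}; B b= new B(a2); b
\end{lstlisting}
to
\begin{lstlisting}
B b= new B({A a= new A(1); a}); b
\end{lstlisting}
by replacing the unique occurrence of \lstinline{a2} with its initialization
value, a closed block (hence a capsule).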
We conclude this section by briefly discussing how the reduction relation could actually be computed. Indeed, the definition in \refToFigure{reduction} is not fully algorithmic, since rule \rn{congr} can always be applied in a non-deterministic way, analogously to what happens, e.g., with $\alpha$-conversion in lambda calculus or structural rules in $\pi$-calculus.
However, again as is usually done in analogous cases, congruence is expected to be applied only when needed (since otherwise reduction would be stuck). All our congruence rules except for \rn{alpha} and \rn{reorder} are meant to be applied
from left to right (as a reduction). This is witnessed by the fact that values in canonical form do not match the left-hand side
of any of the previously mentioned congruence rules, and \refToProp{value} ensures that any value can be brought into this form.
\section{Conclusion}\label{sect:conclu}
We have presented a type and effect system which \emph{infers} sharing possibly
introduced by the evaluation of an expression. Sharing is directly represented
at the syntactic level as a relation among free variables. This representation
allows us to express in a natural way, and to generalize, widely-used notions
in literature, providing great expressivity, as shown by the examples of
\refToSection{examples}.
We adopt a non standard operational model, where store is encoded directly
in the language terms. In this model, sharing properties are directly
expressed at the syntactic level. Notably, in a subterm $\metavariable{e}$ of a program,
objects reachable from other parts of the program are simply those denoted by
free variables in $\metavariable{e}$, whereas local objects are those denoted by local
variables declared in $\metavariable{e}$. In our opinion, this offers a very intuitive and
simple understanding of the portion of memory only reachable from $\metavariable{e}$, since
it is encoded in $\metavariable{e}$ itself. Another advantage is that, the store being
encoded in terms, most proofs can be done by structural induction on
terms, as we have exploited in the Coq implementation mentioned
below.
On the other hand, a disadvantage of our model may be that it is
less familiar to people used to the conventional one, with a flat global store. Moreover, since
isolation is encoded by scoping, some care is needed to avoid scope extrusion
during reduction. For this reason, reduction is defined on typechecked terms,
where blocks have been annotated with the information of which local variables
will be connected to the result. In this way, it is possible to define a rather
sophisticated notion of congruence among terms, which allows declarations to be
moved out of a block only when this preserves well-typedness.
Besides standard type soundness, we proved that uniqueness and lent properties
inferred by the type system actually ensure the expected behaviour.
To focus on novelties and allow complete formal proofs, we illustrate our
type and effect system on a minimal language, only supporting the features
which are strictly necessary for significance. However, we expect that the
approach could be smoothly extended to typical constructs of
imperative/object-oriented languages, in the same way as other type systems
or type and effect systems (e.g., when effects are possibly thrown
exceptions \cite{AnconaLZ01}). We briefly discuss two key features below.
\begin{description}
\item[Inheritance] For a method redefined in an heir class, the returned
  sharing relation should be \emph{contained} in that of the parent, that is,
  \emph{fewer} connections should be possibly caused by the method,
  analogously to the requirement that possibly thrown exceptions should be a
  subset (modulo subtyping) of those of the parent; see the sketch after this list.
\item[Control flow] As it is customary in type systems, control flow
constructs are handled by taking the ``best common approximation'' of the
types obtained by the different paths. For instance, in the case of
if-then-else we would get the \emph{union} of the sharing relations of
the two branches (the same happens for possibly thrown exceptions).
\end{description}
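For the first point, here is a hypothetical sketch (the calculus itself does not
include inheritance; the heir class \lstinline{C2} and the \lstinline{extends}
syntax are assumed for illustration, while \lstinline{C} is the class of
\refToExample{Five}):
\begin{lstlisting}
class C2 extends C {
  C mix(C x)/*${\cal S} = \{\terminale{this},\terminale{res}\}$*/{ this.f } //OK: contained in the parent's $\{{\tt x},\terminale{this},\terminale{res}\}$
  C clone()/*${\cal S} = \{\terminale{this},\terminale{res}\}$*/{ this } //rejected: not contained in the parent's $\epsilon$
}
\end{lstlisting}
The second redefinition would turn a deep clone into a shallow one, breaking the
capsule guarantee relied upon by clients of \lstinline{C}.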
Note that, by including control flow constructs, we would get, again as is
customary in type systems (which are never complete for exactly this reason),
examples which are ill-typed but well-behaved at runtime. For instance, an
expression of shape:
\begin{lstlisting}
D y= new D(y);
C$^\terminale{a}$ z= {
D x= new D(x);
if (...) x.f = x else x.f = y;
x
};
z
\end{lstlisting}
will be ill-typed, since the sharing relation computed for the right-hand
side of the declaration of \lstinline{z} is $\{\texttt{y}, \terminale{res}\}$, that is, the
type and effect system conservatively assumes that this expression does
\emph{not} denote a capsule. However, if the guard of the conditional
evaluates to true, then the right-hand side of the declaration reduces to a
capsule.
We have implemented in Coq the type and effect system and the reduction rules.
The current code can be found at
\texttt{https://github.com/paola-giannini/sharing}. We did not include the
construct of method call since, as we can see from the typing and reduction
rules, it can be translated into a block. To complete the implementation of the operational semantics, we need an oriented version of the congruence relation on terms to be applied before reduction steps, as explained at the end of \refToSection{calculus}. Then, we plan to mechanize the full proof of soundness. As mentioned before, we argue that the Coq implementation nicely illustrates the advantages of our purely syntactic model, since proofs can be carried out inductively, without requiring more complicated techniques such as, e.g., bisimulation.
In further work, in addition to completing the formalization in Coq of the
proof of soundness, we will enrich the type system to also handle
\emph{immutable} references. A preliminary presentation of this extension is in \cite{GianniniSZ18}.
We also plan to formally state and prove
behavioural equivalence of the calculus with the conventional imperative
model. Intuitively, the reachability information which, in our calculus, is
directly encoded in terms, should be reconstructable from the
dependencies among references in a conventional model with a flat global
memory.
As a long term goal, it would be interesting to investigate (a form of)
Hoare logic on top of our model. We believe that the hierarchical structure of
our memory representation should help local reasoning, allowing specifications
and proofs to mention only the relevant portion, similarly to what is achieved
by separation logic \cite{Reynolds02}.
\EZ{\paragraph{Acknowledgement}
We are indebted to the anonymous SCP referees for the thorough job they did reviewing our paper and for their valuable suggestions. We also thank the OOPS'17 and FTfJP'17 referees for their helpful comments on preliminary versions. {Finally, we thank Isaac Oscar Gariano for his careful reading.}}
\section{Examples}\label{sect:examples}
In this section we illustrate the expressiveness of the type system by
programming examples, and we show a type derivation.
\begin{myexample}\label{ex:One}
Assume we have a class \lstinline{D} with a field \lstinline{f} of type \lstinline{D}, and
a class \lstinline{C} with two fields of type {\lstinline{D}}. Consider the
following closed expression ${\tt e}$:
\begin{lstlisting}
D y= new D(y);
D x= new D(x);
C$^\terminale{a}$ z= {D z2= new D(z2); D z1= (y.f= x); new C(z2,z2)};
z
\end{lstlisting}
The inner block (right-hand side of the declaration of \lstinline{z}{}) refers
to the external variables \lstinline{x} and \lstinline{y}, that is, they occur
free in the block. In particular, the execution of the block has the sharing
effect of connecting \lstinline{x} and \lstinline{y}. However, these variables
will \emph{not} be connected to the final result of the block, since the result
of the assignment will be only connected to a local variable which is not used
to build the final result, as more clearly shown by using the sequence
abbreviation: \lstinline@{D z2= new D(z2); y.f= x; new C(z2,z2)}@. Indeed, as
will be shown in the next section, the block reduces to
\lstinline@{D z2= new D(z2); new C(z2,z2)}@ which is a {closed block}. In
existing type systems {supporting the capsule notion} this example is either
ill-typed \cite{GordonEtAl12}, or can be typed by means of a rather tricky
\emph{swap} typing rule \cite{ServettoZucca15,GianniniEtAl16} which, roughly
speaking, temporarily changes, in a subterm, the set of variables which can
be freely used.
\end{myexample}
\vspace{-10pt}
\begin{myexample}
As a counterexample, consider the following {ill-typed} term
\begin{lstlisting}
D y= new D(y);
D x= new D(x);
C$^\terminale{a}$ z= {D z1= (y.f= x); new C(z1,z1)};
z
\end{lstlisting}
Here the inner block is not a capsule, since the local variable \lstinline{z1}
is initialized as an \emph{alias} of \lstinline{x}, hence connected to both
\lstinline{x} and \lstinline{y}{}. Indeed, the block reduces to
\lstinline@new C(x,x)@ which is not {closed}.
\end{myexample}
\begin{figure*}[ht]
{\small
\begin{center}
\begin{math}
\begin{array}{c}
{\cal D}_1:\hskip 1.5em{\prooftree
{\prooftree
\begin{array}{l}
\TypeCheck{\SubstFun{\Gamma_1}{\Gamma_2}}{{\tt x}}{{\tt D}}{\{{\tt x},\terminale{res}\}}\ \\
\TypeCheck{\SubstFun{\Gamma_1}{\Gamma_2}}{{\tt y}}{{\tt D}}{\{{\tt y},\terminale{res}\}}\
\end{array}
\justifies
\TypeCheck{\SubstFun{\Gamma_1}{\Gamma_2}}{\FieldAssign{{\tt y}}{{\tt f}}{{\tt x}}}{{\tt D}}{\{{\tt x},{\tt y},\terminale{res}\}}
\endprooftree}
\justifies
\TypeCheck{\SubstFun{\Gamma_1}{\Gamma_2}}{{\ConstrCall{{\tt D}}{\FieldAssign{{\tt y}}{{\tt f}}{{\tt x}}}}}{{\tt D}}{\{{\tt x},{\tt y},\terminale{res}\}}
\endprooftree}
\\[10ex]
{\cal D}_2:\hskip 0.4em{\prooftree
\prooftree
\TypeCheck{\SubstFun{\Gamma_1}{\Gamma_2}}{{\tt z2}}{{\tt D}}{\{{\tt z2},\terminale{res}\}}\
\justifies
\TypeCheck{\SubstFun{\Gamma_1}{\Gamma_2}}{{\ConstrCall{{\tt D}}{{\tt z2}}}}{{\tt D}}{\{{\tt z2},\terminale{res}\}}
\endprooftree
\hskip 0.4em
{\cal D}_1
\hskip 0.4em
\prooftree
\TypeCheck{\SubstFun{\Gamma_1}{\Gamma_2}}{{\tt z2}}{{\tt D}}{\{{\tt z2},\terminale{res}\}}\
\justifies
\TypeCheck{\SubstFun{\Gamma_1}{\Gamma_2}}{{\ConstrCall{{\tt C}}{{\tt z2},{\tt z2}}}}{{\tt C}}{\{{\tt z2},\terminale{res}\}}
\endprooftree
\justifies
\TypeCheck{{\Gamma_1}}{\Block{\Dec{{\tt D}}{{\tt z2}}{\ConstrCall{{\tt D}}{{\tt z2}}}\,\Dec{{\tt D}}{{\tt z1}}{\ConstrCall{{\tt D}}{\FieldAssign{{\tt y}}{{\tt f}}{{\tt x}}}}}{\ConstrCall{{\tt C}}{{\tt z2},{\tt z2}}}}{{\tt C}}{\{{\tt x},{\tt y}\}}
\endprooftree}
\\[10ex]
{\cal D}:\hskip 0.4em\prooftree
\prooftree
\TypeCheck{\Gamma_1}{{\tt y}}{{\tt D}}{\{{\tt y},\terminale{res}\}}\
\justifies
\TypeCheck{\Gamma_1}{\ConstrCall{{\tt D}}{{\tt y}}}{{\tt D}}{\{{\tt y},\terminale{res}\}}
\endprooftree
\hskip 0.4em
\prooftree
\TypeCheck{{\Gamma_1}}{{\tt x}}{{\tt D}}{\{{\tt x},\terminale{res}\}}\
\justifies
\TypeCheck{{\Gamma_1}}{{\ConstrCall{{\tt D}}{{\tt x}}}}{{\tt D}}{\{{\tt x},\terminale{res}\}}
\endprooftree
\hskip 1.5em
{\cal D}_2
\hskip 1.5em
\TypeCheck{\Gamma_1}{{\tt z}}{{\tt C}}{\epsilon}\
\justifies
\TypeCheck{}{\Block{\Dec{{{\tt D}}}{{\tt y}}{\ConstrCall{{\tt D}}{{\tt y}}}\,\Dec{{{\tt D}}}{{\tt x}}{\ConstrCall{{\tt D}}{{\tt x}}}\,\Dec{\Type{\terminale{a}}{{\tt C}}}{{\tt z}}{{{\tt e}^\texttt{i}}}}{{\tt z}}}{{\tt C}}{\epsilon}
\endprooftree\\ \\
\end{array}
\end{math}
\end{center}
}
\hrulefill
{\small
\begin{itemize}
\item ${\cal D}_2$ yields ${{\tt e}^\texttt{ia}}=\BlockLab{\Dec{{\tt D}}{{\tt z2}}{\ConstrCall{{\tt D}}{{\tt z2}}}\,\Dec{{\tt D}}{{\tt z1}}{\ConstrCall{{\tt D}}{\FieldAssign{{\tt y}}{{\tt f}}{{\tt x}}}}}{\ConstrCall{{\tt C}}{{\tt z2},{\tt z2}}}{\{{\tt z2}\}}$
\item ${\cal D}$ yields
${{\tt e}'=}\BlockLab{\Dec{{{\tt D}}}{{\tt y}}{\ConstrCall{{\tt D}}{{\tt y}}}\,\Dec{{{\tt D}}}{{\tt x}}{\ConstrCall{{\tt D}}{{\tt x}}}\,\Dec{\Type{\terminale{a}}{{\tt C}}}{{\tt z}}{{{\tt e}^\texttt{ia}}}}{{\tt z}}{\emptyset}$
\end{itemize}
}
\caption{Type derivation for \refToExample{One}}\label{fig:TypingOne}
\end{figure*}
\noindent
\emph{Type derivation for \refToExample{One}.} Let {$\Gamma_1=\TypeDec{{\tt y}}{{\tt D}},\TypeDec{{\tt x}}{{\tt D}},\TypeDec{{\tt z}}{\Type{\terminale{a}}{{\tt C}}}$, and $\Gamma_2=\TypeDec{{\tt z2}}{{\tt D}},\TypeDec{{\tt z1}}{{\tt D}}$}.
In \refToFigure{TypingOne} we give the type derivation that shows that
expression ${\tt e}$ {of Example~\ref{ex:One}} is well-typed. To save space we
omit the annotated expression produced by the derivation, and show the
annotations for the blocks at the bottom of the figure.
Consider the type derivation ${\cal D}_2$ {for the expression ${{\tt e}^\texttt{i}}$ which
initializes ${\tt z}$ (inner block)}. The effect of the declaration of ${\tt z2}$ is
the sharing relation $\{{\tt z2},\terminale{res}\}$ and the effect of the declaration of ${\tt z1}$
is the sharing relation $\{{\tt z1},{\tt x},{\tt y},\terminale{res}\}$. Before joining these sharing
relations, we remove $\terminale{res}$ from their domains, since the results of the two
expressions are not connected with each other. So the resulting sharing
relation is represented by $\{{\tt z1},{\tt x},{\tt y}\}$ (${\tt z2}$ is only connected with
itself). The effect of the evaluation of the body is the sharing relation
represented by $\{{\tt z2},\terminale{res}\}$. Therefore, before removing the local variables
${\tt z1}$ and ${\tt z2}$ from the sharing relations
we have the sharing relation $\{{\tt z1},{\tt x},{\tt y}\}\,\{{\tt z2},\terminale{res}\}$. The block is
annotated with $\{{\tt z2}\}$ (the local variable in the equivalence class of
$\terminale{res}$). Since, after removing the local variables,
$\Closure{\terminale{res}}{}=\{\terminale{res}\}$, ${{\tt e}^\texttt{i}}$ denotes a capsule, and may be used to
initialize an affine variable.\\
{${\cal D}$ is the type} derivation for the {whole} expression ${\tt e}$. Note that,
since ${\tt z}$ is an affine variable, the sharing relation of ${\tt z}$, which is
the body of the block, is the identity. In particular,
$\Closure{\terminale{res}}{}=\{\terminale{res}\}$, so the annotation of the block ${\tt e}$ is
$\emptyset$.
\begin{myexample}
We provide now a more realistic programming example, assuming a syntax enriched
by usual programming constructs. See the Conclusion for a discussion on how
to extend the type system to such constructs.
The class \lstinline@CustomerReader@ below models reading information about customers
out of a text file formatted as shown in the example:
\begin{lstlisting}
Bob
1 500 2 1300
Mark
42 8 99 100
\end{lstlisting}
In {odd} lines we have customer names, in {even} lines we have a shop history:
a sequence of product codes. The method \[email protected]@ takes a
\lstinline@Scanner@, assumed to be a class similar to the one in Java, for reading a
file and extracting different kinds of data.
\vspace{-5pt}
\begin{lstlisting}
class CustomerReader {
static Customer read(Scanner s)/*${\cal S} = \epsilon$*/{
Customer c=new Customer(s.nextLine())
while(s.hasNextNum()){
c.addShopHistory(s.nextNum())
}
return c
}
}
class Scanner {
String nextLine()/*${\cal S} = \epsilon$*/{...}
boolean hasNextNum()/*${\cal S} = \epsilon$*/{...}
int nextNum()/*${\cal S} = \epsilon$*/{...}
}
\end{lstlisting}
Here and in the following, we insert after method headers, as comments, their
sharing effects. In a real language, a library should declare sharing effects
of methods by some concrete syntax, as part of the type information available
to clients. In this example, \lstinline{CustomerReader.read}{} uses some
methods of class \lstinline{Scanner}{}. Such methods have no sharing effects,
as represented by the annotation ${\cal S}=\epsilon$. Note that for the
last two methods this is necessarily the case since they have no explicit
parameters and a primitive return type. For the first method, the only possible
effect could have been to mix $\terminale{this}$ with the result.
A \lstinline@Customer@ object is read from the file, and then its shop history is
added. Since methods invoked on the scanner introduce no sharing, we can
infer that the same holds for method \[email protected]@. In other words,
we can statically ensure that the data of the scanner are not mixed with the
result.
\label{open-capsule}
The following method \lstinline@update@ illustrates how we can ``open'' capsules,
modify their values and then recover the original capsule guarantee. The method
takes a customer, which is required to be a capsule by the fact that the
corresponding parameter is affine, and a scanner as before.
\vspace{-5pt}
\begin{lstlisting}
class CustomerReader {...//as before
static Customer update(Customer$^\terminale{a}$ old,Scanner s)/*${\cal S} = \epsilon$*/{
Customer c=old//we open the capsule `old'
while(s.hasNextNum()){
c.addShopHistory(s.nextNum())
}
return c
}
}
\end{lstlisting}
A method which has no sharing effects can use the pattern illustrated above:
one (or many) affine parameters are opened (that is, assigned to local
variables) and, in the end, the result is guaranteed to be a capsule again.
This mechanism is not possible in~\cite{Almeida97,ClarkeWrigstad03,DietlEtAl07}
and relies on destructive reads in~\cite{GordonEtAl12}.
A less restrictive version of method \lstinline{update} could take a non affine
\lstinline{Customer old}{} parameter, that is, not require that the old
customer is a capsule. In this case, the sharing effects would be ${\cal S}
=\{{\small \terminale{res},\texttt{old} }\}$. Hence, in a call
\lstinline{CustomerReader.update(c1,s)}{}, the connections of \lstinline{c1}{} would
be propagated to the result of the call. In other words, the method type is
``polymorphic'' with respect to sharing effects. Notably, the method will
return a capsule if invoked on a parameter which is a capsule.
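For instance (a sketch, assuming the non-affine variant of \lstinline{update}
just described, and a scanner \lstinline{s} in scope):
\begin{lstlisting}
Customer c1= CustomerReader.read(s);
Customer c2= CustomerReader.update(c1,s);//c2 connected to c1, by ${\cal S} = \{{\small \terminale{res},\texttt{old} }\}$
Customer$^\terminale{a}$ c3= CustomerReader.update(CustomerReader.read(s),s);//capsule argument, capsule result
\end{lstlisting}
In the last line, the argument \lstinline{CustomerReader.read(s)} is a capsule
(its sharing relation is $\epsilon$), so the result of the call is a capsule as
well, and can initialize the affine variable \lstinline{c3}.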
\end{myexample}
\vspace{-10pt}
\begin{myexample}\label{ex:Four}
The following method takes two teams \lstinline{t1}, \lstinline{t2}. Both teams
want to add a reserve player from their respective lists \lstinline{p1} and
\lstinline{p2}, assumed to be sorted with best players first. However, to keep
the game fair, the two reserve players can only be added if they have the same
skill level.
\vspace{-5pt}
\begin{lstlisting}
static void addPlayer(Team t1, Team t2, Players p1, Players p2)
/*${\cal S} = \{\texttt{t1},\texttt{p1}\}, \{\texttt{t2},\texttt{p2}\}$*/{
while(true){//could use recursion instead
if(p1.isEmpty()||p2.isEmpty()) {/*error*/}
if(p1.top().skill==p2.top().skill){
t1.add(p1.top());
t2.add(p2.top());
return;
}
else{
removeMoreSkilled(p1,p2);
}
  }
}
\end{lstlisting}
\vspace{-5pt}
The sharing effects express the fact that each team is only mixed with its list
of reserve players.
\end{myexample}
\begin{myexample}\label{ex:Five}
Finally, we provide a more involved example which illustrates the expressive
power of our approach. Assume we have a class ${\tt C}$ as follows:
\vspace{-5pt}
\begin{lstlisting}
class C {
C f;
C clone()/*${\cal S} = \epsilon$*/{...}
C mix(C x)/*${\cal S} = \{{\tt x},\terminale{this},\terminale{res}\}$*/{...}
}
\end{lstlisting}
\vspace{-5pt}
The method ${\tt clone}$ is expected to return a deep copy of the receiver. Indeed,
this method has no parameters, apart from the implicit non-affine\footnote{{We
assume $\terminale{this}$ to be non-affine unless explicitly indicated, e.g., by inserting
$\terminale{a}$ as first element of the list of parameters.} } parameter $\terminale{this}$,
and returns an object of class ${\tt C}$ which is \emph{not} connected to the
receiver, as specified by the fact that the sharing relation is the identity,
represented by $\epsilon$, where $\terminale{res}$ is not connected to $\terminale{this}$. Note that
a shallow clone method would be typed
\lstinline{C clone()/*${\cal S} = \{\terminale{this},\terminale{res}\}$*/}{}.
The method ${\tt mix}$ is expected to return a ``mix'' of the receiver with the
argument. Indeed, this method has, besides $\terminale{this}$, a parameter ${\tt x}$ of class
${\tt C}$, both non affine, returns an object of class ${\tt C}$ and its effects are
connecting ${\tt x}$ with $\terminale{this}$, and both with the result.
Consider now the following closed expression ${\tt e}$:
\vspace{-5pt}
\begin{lstlisting}
C c1= new C(c1);
C outer= {
C c2= new C(c2);
C$^\terminale{a}$ inner= {
C c3= new C(c3);
C r= c2.mix(c1).clone()
r.mix(c3)};
inner.mix(c2)};
outer
\end{lstlisting}
\vspace{-5pt}
The key line in this example is \lstinline{C r=c2.mix(c1).clone()}{}.\\
Thanks to the fact that \lstinline{clone}{} returns a capsule, we know that
\lstinline{r}{} will not be connected to the external variables
\lstinline{c1}{} and \lstinline{c2}{}, hence also the result of the block will
not be connected to \lstinline{c1}{} or \lstinline{c2}{}. However, the sharing
between \lstinline{c2}{} and \lstinline{c1}{} introduced by the
\lstinline{mix}{} call is traced, and prevents the \emph{outer} block {from
being} a capsule. The reader can check that, by replacing the declaration of
\lstinline{r}{} with \lstinline{C r=c2.mix(c2).clone()}{}, also the outer block
is a capsule, hence variable {\lstinline{outer}} could be declared affine.
{This example is particularly challenging as a test of the capability of a type
system to detect the capsule property.} For instance, the type system in
\cite{GordonEtAl12} does not discriminate between these two cases, and those in
\cite{ServettoZucca15,GianniniEtAl16} require rather tricky and
non-syntax-directed rules.
The type derivation for \refToExample{Five} can be found in the Appendix.
\end{myexample}
\section{Introduction}\label{sect:intro}
In the imperative programming paradigm, \emph{sharing} is the situation when a
portion of the store can be accessed through more than one reference, say $\metavariable{x}$
and $\metavariable{y}$, so that a change to $\metavariable{x}$ affects $\metavariable{y}$ as well. Unwanted sharing
relations are a common source of bugs: unless sharing is carefully controlled, changes
through a reference might propagate unexpectedly, objects may be observed in an
inconsistent state, and conflicting constraints on shared data may
inadvertently invalidate invariants. Preventing such errors is even more
important on the increasingly ubiquitous multi-core and many-core architectures.
For this reason, there is a huge amount of literature on type systems for
controlling sharing and interference, notably using type annotations to
restrict the usage of references, see \refToSection{related} for a survey.
In particular, it is very useful for a programmer to be able to rely on the
following properties of a reference $\metavariable{x}$.
\begin{myitemize}
\item \emph{Capsule} reference: $\metavariable{x}$ denotes an isolated portion of store,
that is, the subgraph reachable from $\metavariable{x}$ cannot be reached through other
references. This allows programmers (and static analysis) to identify state
that can be safely handled by a thread. In this paper we will use the
name \emph{capsule} for this property, to avoid confusion with many
variants in the literature
\cite{ClarkeWrigstad03,Almeida97,ServettoEtAl13a,Hogg91,DietlEtAl07,GordonEtAl12}.
\item \emph{Lent} reference \cite{ServettoZucca15,GianniniEtAl16}, also
called \emph{borrowed} \cite{Boyland01,NadenEtAl12}: the subgraph reachable
from $\metavariable{x}$ \EZ{can be manipulated by a client, but no sharing can be introduced through $\metavariable{x}$}. Typically,
borrowing can be employed to ensure that the capsule guarantee is not broken.
\end{myitemize}
In this paper, we propose a type and effect system which provides, in our
opinion, a very powerful, yet natural, way to express sharing. Notably, the two
above mentioned notions are smoothly included and generalized.
The distinguishing features are the following:
\begin{enumerate}
\item Rather than declaring type annotations, the type system \emph{infers}
sharing possibly introduced by the evaluation of an expression.
\item Sharing is \emph{directly represented at the syntactic level}, as an
equivalence relation among the free variables of the expression.
\item The calculus is \emph{pure} in the sense that reduction is defined
on language terms only, rather than requiring an auxiliary structure.
\end{enumerate}
We now describe these three features in more detail.
Given an expression $\metavariable{e}$, the type system computes a
\emph{sharing relation} ${\cal S}$ which is an equivalence relation on \EZ{a} set
containing the free variables of $\metavariable{e}$ and an additional, distinguished variable $\terminale{res}$ denoting the result of
$\metavariable{e}$. That two variables, say $\metavariable{x}$ and $\metavariable{y}$, are in the same equivalence class
in ${\cal S}$ means \EZ{that} the evaluation of $\metavariable{e}$ can possibly introduce sharing
between $\metavariable{x}$ and $\metavariable{y}$, that is, connect their reachable object graphs, so that
a modification of (a subobject of) $\metavariable{x}$ could affect $\metavariable{y}$ as well, or
conversely. For instance, evaluating the expression
$\Sequence{\FieldAssign{\metavariable{x}}{\metavariable{f}}{\metavariable{y}}}{\FieldAccess{\metavariable{z}}{\metavariable{f}}}$
introduces connections
\begin{myitemizeA}
\item between $\metavariable{x}$ and $\metavariable{y}$,
\item between {$\terminale{res}$ (the result)} and $\metavariable{z}$.
\end{myitemizeA}
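That is, the computed sharing relation has the two (non-singleton) equivalence
classes $\{\metavariable{x},\metavariable{y}\}$ and $\{\metavariable{z},\terminale{res}\}$.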
The \emph{capsule} notion becomes just a special case: an expression is a
\emph{capsule} iff its result will be disjoint from any free variable
(formally, $\terminale{res}$ is a singleton in ${\cal S}$). For instance, the
expression
$\Sequence{\FieldAssign{\metavariable{x}}{\metavariable{f}}{\metavariable{y}}}{\FieldAccess{\ConstrCall{\metavariable{C}}{\ConstrCall{\metavariable{D}}{}}}{\metavariable{f}}}$
is a capsule, whereas the previous expression is not.\footnote{{Note that our
notion is related to the whole reachable object graph. For instance, a
doubly-linked list whose elements can be arbitrarily aliased can be
externally unique~\cite{ClarkeWrigstad03} and properly follow an owners as
dominator strategy~\cite{ZibinEtAl10}, but is not a capsule.}}
The \emph{lent} notion also becomes a special case: a variable $\metavariable{x}$ is used as
lent in an expression if the evaluation of the expression will neither connect
$\metavariable{x}$ to any other variable, nor to the result (formally, $\metavariable{x}$ is a singleton in
${\cal S}$). {In other words, the evaluation of the expression does not
introduce sharing between $\metavariable{x}$ and other variables (including $\terminale{res}$).} For
instance $\metavariable{x}$ is lent in
$\Sequence{\FieldAssign{\metavariable{x}}{\metavariable{f}_1}{\metavariable{x}.\metavariable{f}_2}}{\FieldAccess{\metavariable{z}}{\metavariable{f}}}$. {In our
type system, this notion is generalized from singletons to arbitrary sets of
variables: for instance, in the previous example
$\Sequence{\FieldAssign{\metavariable{x}}{\metavariable{f}}{\metavariable{y}}}{\FieldAccess{\metavariable{z}}{\metavariable{f}}}$, the set
$\{\metavariable{x},\metavariable{y}\}$ is an equivalence class in ${\cal S}$ since the evaluation of
the expression does not introduce sharing between this set and other
variables (including $\terminale{res}$).}
Altogether, this direct representation at the syntactic level allows us to
express sharing in a natural way. Moreover, execution is modeled by a
\emph{pure} calculus, where store is encoded directly in language terms,
rather than by an auxiliary structure. Formally, this is achieved by the block
construct, introducing local variable declarations, which play the role of
store when evaluated. This operational semantics\footnote{Which is, of
course, expected to be behaviorally equivalent to the conventional semantics
where store is a global flat auxiliary structure, as we plan to formally state
and prove in further work.} will be informally introduced in
\refToSection{language}, and formalized in \refToSection{calculus}.
A preliminary presentation of the approach presented in this paper has been given in \cite{GianniniSZ17,GianniniSZ17a}.
The rest of the paper is organized as follows: in \refToSection{language} we
provide syntax and an informal execution model, in \refToSection{types} the
type system, and in \refToSection{examples} some examples. The operational
semantics of the calculus is presented in \refToSection{calculus}, and the main
results and proofs in \refToSection{results}. In \refToSection{related} we
discuss related work, and in \refToSection{conclu} we draw some conclusion and
highlight future work. \ref{app:derivation} contains a (rather complex) type
derivation. The proofs omitted from the main paper are in
\ref{app:proofs}.
\section{Language}\label{sect:language}
The syntax of the language is given in \refToFigure{syntax}. We assume sets
of \emph{variables} $\metavariable{x}, \metavariable{y}, \metavariable{z}$, \emph{class names} $\metavariable{C}, \metavariable{D}$, \emph{field
names} $\metavariable{f}$, and \emph{method names} $\metavariable{m}$. We adopt the convention that a
metavariable which ends in \metavariable{s} is implicitly defined as a
(possibly empty) sequence, {for example}, $\metavariable{ds}$ is defined by
$\produzioneinline{\metavariable{ds}}{\epsilon\mid \metavariable{d}\ \metavariable{ds}}$, where $\epsilon$ denotes
the empty sequence.
\begin{figure}[t]
{\small
\begin{grammatica}
\produzione{\metavariable{e}}{\metavariable{x}\mid\FieldAccess{\metavariable{e}}{\metavariable{f}}\mid\FieldAssign{\metavariable{e}}{\metavariable{f}}{\metavariable{e}'}\mid\ConstrCall{\metavariable{C}}{\metavariable{es}}\mid\Block{\metavariable{ds}}{\metavariable{e}}{\mid\MethCall{\metavariable{e}}{\metavariable{m}}{\metavariable{es}}}}{expression}\\
\produzione{\metavariable{d}}{\Dec{\metavariable{T}}{\metavariable{x}}{\metavariable{e}}}{declaration}\\
\produzione{\metavariable{T}}{\Type{\mu}{\metavariable{C}}\mid\terminale{int}}{declaration type}\\
\produzione{\mu}{\epsilon\mid\terminale{a}}{optional modifier}\\
\end{grammatica}
}
\caption{Syntax}\label{fig:syntax}
\end{figure}
The calculus is designed with an object-oriented flavour, inspired {by}
Featherweight Java \cite{IgarashiEtAl01}. This is only a presentation choice:
all the ideas and results of the paper could be easily rephrased in a
different imperative calculus, e.g., supporting data type constructors and
reference types. For the same reason, we omit features such as inheritance and
late binding, which are orthogonal to our focus.
An expression can be a variable (including the special variable $\terminale{this}$
denoting the receiver in a method body), a field access, a field assignment, a
constructor invocation, a block consisting of a sequence of declarations and a
body, {or a method invocation. In a block, a} declaration specifies a type, a
variable and an initialization expression. We assume that in well-formed blocks
there are no multiple declarations for the same variable, that is, $\metavariable{ds}$ can
be seen as a map from variables to expressions.
A declaration type is a class name with an optional modifier $\terminale{a}$,
{which, if present, indicates that} the variable is \emph{affine}. We also
include $\terminale{int}$ as an example of primitive type, but we do not formally
model related operators used in the examples, such as integer constants and
sum. An affine variable can occur at most once in its scope, and should be
initialized with a \emph{capsule}, that is, an isolated portion of store. In
this way, it can be used as a temporary reference, to ``move'' a capsule to
another location in the store, without introducing sharing. In the examples,
we generally omit the brackets of the outermost block, and abbreviate
$\Block{\Dec{\metavariable{T}}{\metavariable{x}}{\metavariable{e}}}{\metavariable{e}'}$ by $\Sequence{\metavariable{e}}{\metavariable{e}'}$ when $\metavariable{x}$ does not
occur free in $\metavariable{e}'$.
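For instance (a minimal sketch, with the classes \lstinline{C} and \lstinline{D}
used in \refToFigure{examplered} below), an affine variable can be used to move a
capsule into another location:
\begin{lstlisting}
C$^\terminale{a}$ x= {D z= new D(0); new C(z,z)}; //a closed block, hence a capsule
C y= x; //the unique occurrence of x: the capsule is moved, not shared
y
\end{lstlisting}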
{We turn our attention to the operational semantics now.}
\refToFigure{examplered} shows an example of a reduction sequence in the
calculus.
\begin{figure}[t]
\begin{lstlisting}[basicstyle=\ttfamily\scriptsize,backgroundcolor=\color{white}]
$\store{D z=new D(0); C x=new C(z,z);}$ $\ul{C y=x;}$ D w=new D(y.f1.f+1); x.f2=w; x $\longrightarrow$
$\store{D z=new D(0); C x=new C(z,z);}$ D w=new D($\ul{x.f1}$.f+1); x.f2=w; x $\longrightarrow$
$\store{D z=new D(0); C x=new C(z,z);}$ D w=new D($\ul{z.f}$+1); x.f2=w; x $\longrightarrow$
$\store{D z=new D(0); C x=new C(z,z);}$ D w=new D($\ul{0+1}$); x.f2=w; x $\longrightarrow$
$\store{D z=new D(0); C x=new C(z,z); D w=new D(1);}$ $\ul{x.f2=w}$; x $\longrightarrow$
$\store{D z=new D(0); C x=new C(z,w); D w=new D(1);}$ x
\end{lstlisting}
\caption{Example of reduction}\label{fig:examplered}
\end{figure}
The main idea is to use variable declarations to directly represent the store.
That is, a declared {(non affine)} variable is not replaced by its value, as in
standard \texttt{let}, but the association is kept and used when necessary{, as
it happens, with different aims and technical problems, in cyclic lambda
calculi \cite{AriolaFelleisen97,MaraistEtAl98}.}
In the figure, we emphasize at each step the declarations which can be seen
as the store (in grey) and the redex which is reduced (highlighted).
Assuming a program (class table) where class \lstinline{C} has two fields
\lstinline{f1} and \lstinline{f2} of type \lstinline{D}, and class
\lstinline{D} has an integer field \lstinline{f}, in the initial term in
\refToFigure{examplered} the first two declarations can be seen as a store
which associates to \lstinline{z} an object of class \lstinline{D} whose field
contains \lstinline{0}{}, and to \lstinline{x}{} an object of class
\lstinline{C}{} whose two fields contain (a reference to) the previous object.
The first reduction step eliminates an alias, by replacing occurrences of
\lstinline{y} by \lstinline{x}. The next three reduction steps compute
\lstinline{x.f1.f+1}{}, by performing two field accesses and one sum. The last
step performs a field assignment, modifying the current store. Finally, in
the last line we have a term which can no longer be reduced, consisting of a
store and the expression \texttt{x} which denotes a reference in the store,
taken as entry point. In other words, the final result of the evaluation is an
object of class \lstinline{C} whose fields contain (references to) two
objects of class \lstinline{D}, whose fields contain \lstinline{0} and
\lstinline{1}, respectively.
As usual, references in the store can be mutually recursive\footnote{However,
mutual recursion is not allowed between declarations which are {\emph{not
evaluated}}, e.g., {\lstinline{B x= new B(y.f); B y= new B(x.f); y}} is
ill-formed.}, as in the following example, where we assume a class
\lstinline{B} with a field of type \lstinline{B}.
\begin{lstlisting}
B x= new B(y); B y= new B(x); y
\end{lstlisting}
Again, this is a term which can no longer be reduced, consisting of a
store and the reference \texttt{y} as entry point. In other words, this term
can be seen as an object of class \lstinline{B} whose field contains (a
reference to) another object of class \lstinline{B}, whose field contains (a
reference to) the original object.
In the examples until now, the store is flat, as usually happens in models of
imperative languages. However, in our calculus, we are also able to represent
a hierarchical store, as shown in the example below, where we assume a class
\lstinline{A}{} with two fields of type \lstinline{B} and \lstinline{D},
respectively.
\begin{lstlisting}
D z= new D(0);
A w= {
B x= new B(y);
B y= new B(x);
A u= new A(x,z);
u};
w
\end{lstlisting}
Here, the store associates to \lstinline{w}{} a block introducing local
declarations, that is, in turn a store. The advantage of this representation
is that it models in a simple and natural way constraints about sharing among
objects, notably:
\vspace{3pt}
\begin{myitemize}
\item the fact that an object is not referenced from outside some enclosing
object is directly modeled by the block construct: for instance, the
object denoted by \lstinline{y}{} can only be reached through \lstinline{w}{};
\item conversely, the fact that an object does not refer to the outside is
modeled by the fact that the corresponding block is closed (that is, has no
free variables): for instance, the object denoted by \lstinline{w}{} is not
closed, since it refers to the external object \lstinline{z}{}.
\end{myitemize}
In other words, our calculus smoothly integrates memory representation with
shadowing and $\alpha$-conversion. However, there is a problem which needs to
be handled to keep this representation correct: reading (or, symmetrically,
updating) a field could cause scope extrusion. For instance, the term
\begin{lstlisting}
C y= {D z= new D(0); C x= new C(z,z); x}; y.f1
\end{lstlisting}
under a naive reduction strategy would reduce to the ill-formed term
\begin{lstlisting}
C y= {D z= new D(0); C x= new C(z,z); x}; z
\end{lstlisting}
To avoid this problem, the above reduction step is forbidden. However,
reduction is not stuck, since we can transform the above term into an
equivalent term where the inner block has been flattened, and get the following
correct reduction sequence:
\begin{lstlisting}
C y= {D z= new D(0); C x= new C(z,z); x}; y.f1 $\congruence{}{}$
D z= new D(0); C x= new C(z,z); C y= x; y.f1 $\longrightarrow$
D z= new D(0); C x= new C(z,z); x.f1 $\longrightarrow$
D z= new D(0); C x= new C(z,z); z $\congruence{}{}$
D z= new D(0); z
\end{lstlisting}
Formally, in addition to the reduction relation which models actual
computation, our operational semantics is defined by a \emph{congruence}
relation $\congruence{}{}$, which captures structural equivalence, as in
$\pi$-calculus \cite{Milner99}. Note also that in the final term the
declaration of \lstinline{x}{} can be removed (again by congruence), since it
is useless.
Moving declarations from a block to the directly enclosing block is not always
safe. For instance, in the following variant of the previous example
\begin{lstlisting}
C$^\terminale{a}$ y= {D z= new D(0); C x= new C(z,z); x}; y.f1
\end{lstlisting}
the affine variable is required to be initialized with a capsule, and this is
the case indeed, since the right-hand side of the declaration is a closed
block. However, by flattening the term:
\begin{lstlisting}
D z= new D(0); C x= new C(z,z); C$^\terminale{a}$ y= x; y.f1
\end{lstlisting}
this property would be lost, and we would get an ill-typed term. {Indeed, these
two terms are \emph{not} considered equivalent in our operational model.
Technically, this is obtained by detecting, during typechecking, which local
variables will be connected to the result of the block, as \lstinline{z} in
the example, and preventing such declarations from being moved out of a block which is the
initialization expression of an affine variable. }
In this case, reduction proceeds by replacing the (unique) occurrence of the
affine variable by its initialization expression, as shown below.
\begin{lstlisting}
C$^\terminale{a}$ y= {D z= new D(0); C x= new C(z,z); x}; y.f1 $\longrightarrow$
{D z= new D(0); C x= new C(z,z); x}.f1 $\congruence{}{}$
D z= new D(0); C x= new C(z,z); x.f1 $\longrightarrow$
D z= new D(0); C x= new C(z,z); z $\congruence{}{}$
D z= new D(0); z
\end{lstlisting}
\section{Related work}\label{sect:related}
\emph{Capsule and lent notions.} As mentioned, the capsule property has many
variants in the literature, such as \emph{isolated} \cite{GordonEtAl12},
\emph{uniqueness} \cite{Boyland10} and \emph{external
uniqueness}~\cite{ClarkeWrigstad03}, \emph{balloon}
\cite{Almeida97,ServettoEtAl13a}, \emph{island} \cite{DietlEtAl07}. The fact
that aliasing can be controlled by using \emph{lent} (\emph{borrowed})
references is well-known~\cite{Boyland01,NadenEtAl12}. However, before the
work in \cite{GordonEtAl12}, the capsule property was only detected in simple
situations, such as using a primitive deep clone operator, or composing
subexpressions with the same property.
The important novelty of the type system in \cite{GordonEtAl12} has been
\emph{recovery}, that is, the ability to detect properties (e.g., capsule or
immutability) by keeping into account not only the expression itself but the
way the surrounding context is used. In \cite{GordonEtAl12} an expression which
does not use external mutable references is recognized to be a capsule.
However, expressions which \emph{do} use external mutable references, but do
not introduce sharing between them and the final result, are not recognized to
be capsules. For instance, \refToExample{One} and \refToExample{Five} in
\refToSection{examples} would be ill-typed in \cite{GordonEtAl12}. Our type
system generalizes recovery by precisely computing sharing effects.
\noindent
\emph{Capabilities.}
In other proposals
{\cite{HallerOdersky10,HallerLoiko16,ClebschEtAl15,CastegrenWrigstad16,CastegrenW17}}
types are compositions of one or more \emph{capabilities}, and expose the union
of their operations. The modes of the capabilities in a type control how
resources of that type can be aliased.
By using capabilities it is possible to obtain an expressivity similar to
that of our type system, even though with different sharing notions and
syntactic constructs. For instance, the \emph{full encapsulation} notion in
\cite{HallerOdersky10}\footnote{{This paper includes a very good survey of work
in this area, notably explaining the difference between \emph{external
uniqueness}~\cite{ClarkeWrigstad03} and \emph{full encapsulation}.}}, apart
from the fact that sharing of immutable objects is not allowed, is equivalent
to the guarantee of our $\terminale{a}$ modifier. Their model has a higher
syntactic/logical overhead, due to the need to explicitly track regions. As for all work
before~\cite{GordonEtAl12}, objects need to be born \lstinline|@unique| (that is, with the capsule property), and the type
system permits manipulating data while preserving their uniqueness. With
recovery~\cite{GordonEtAl12} (as with our type and effect system), instead, we can forget
about uniqueness, use normal code designed to work on conventional shared
data, and then recover the aliasing encapsulation property.
\noindent
\emph{Ownership.}
A closely related stream of research is that on \emph{ownership} (see an overview
in~\cite{ClarkeEtAl13}), which, however, takes an opposite approach: the ownership
invariant, which can be expressed and proved, is expected to be guaranteed by
defensive cloning.
In our approach, instead, the capsule concept models an efficient
\emph{ownership transfer}. In other words, under ownership, when an object $\metavariable{x}$ is ``owned'' by
another object $\metavariable{y}$, it remains always true that $\metavariable{x}$ can only be accessed
through $\metavariable{y}$, whereas the capsule notion is more dynamic: a capsule can be
``opened'', that is, assigned to a standard reference and modified, and then we
can recover the original capsule guarantee. Cloning, if needed, becomes a
responsibility of the client. Other work in the literature supports ownership
transfer, see for example~\cite{MullerRudich07, ClarkeWrigstad03}. In the
literature it is however applied to uniqueness/external uniqueness, thus not
{the whole} reachable object graph is transferred.
\noindent
\emph{Full, deep and shallow interpretation.}
The literature on sharing control distinguishes three interpretations for properties of objects.
\begin{myitemize}
\item Full: the whole reachable object graph respects that property.
\item Shallow: only the object itself respects that property.
\item Deep: the reachable object graph is divided into two parts:
the first part is the one that is logically ``owned'' by the
object itself, while the second part is just ``referenced''.
In our approach, {as in \cite{Almeida97,GianniniEtAl16,GordonEtAl12,
ServettoEtAl13a,ServettoZucca15}}, properties have the \emph{full}
interpretation, in the sense that they are propagated to the whole reachable
object graph. In a deep interpretation, instead, {as in
\cite{Boyland10,NadenEtAl12,Reynolds02}, it is possible, for instance, to
reach a mutable object from an immutable object. In this sense, approaches
based on ownership, or where it is somehow possible to have any form of
``internal mutation'' are (only) deep, as in
\cite{CastegrenWrigstad16,Hogg91, BocchinoADAHKOSSV09,Turon17}. This also
includes \cite{ClarkeWrigstad03}, where an unique object can point to
arbitrarily shared objects, if they do not, in turn, point back to the unique
object itself.}
The advantage of the full interpretation is that libraries can declare strong
intentions in a coherent and uniform way, independently of the concrete
representation of the user input (that, with the use of interfaces, could be
unknown to the library). On the other hand, providing (only) full modifiers
means that we do not offer any language support for (as an example) an
immutable list of mutable objects.
\noindent
\emph{Destructive reads.}
Uniqueness can be enforced by destructive reads, i.e., assigning a copy of the
unique reference to a variable and destroying the original reference, see
\cite{GordonEtAl12,Boyland10}. Traditionally, borrowing/fractional
permissions~\cite{NadenEtAl12} are related to uniqueness in the opposite way:
a unique reference can be borrowed, it is possible to track when all borrowed
aliases are buried~\cite{Boyland01}, and then uniqueness can be recovered.
These techniques offer a sophisticated alternative to destructive reads. We
also wish to avoid destructive reads. In our work, we ensure uniqueness by
linearity, that is, by allowing at most one use of an $\terminale{a}$ reference.
\noindent
\emph{Alias analysis.}
Alias analysis is a fundamental static analysis, used in compilers and code
analysers. Algorithms such as Steensgaard's \cite{Steensgaard96}
infer equivalence classes of references that may alias. In \cite{DeD12} a
refined version of this algorithm is presented, performing a uniqueness analysis similar to
our detection of ``capsule'' values. However, the aim of our work is to design
a language in which annotations, such as the affine modifier, can be used by
programmers to enforce properties of their code; the inference system then checks
that such annotations are used correctly.
\noindent
\emph{Calculus.}
Finally, an important distinguishing feature of our work is that sharing can be
directly represented at the syntactic level as a relation among free variables,
thanks to the fact that the calculus is pure. Models of the imperative
paradigm as pure calculi were first proposed in
\cite{ServettoLindsay13,CapriccioliEtAl15}.
\section{Results}\label{sect:results}
In this section we present the main formal results on our calculus. First, we
show a canonical form theorem describing constraints on free variables of
well-typed \EZ{garbage-free} values. Then we prove subject reduction, stating that reduction
preserves the type, and may reduce the sharing effects. In addition, reduction
preserves an invariant on the store that allows us to prove that lent and
capsule references have the expected behaviour.
Finally, we prove progress, i.e., that well-typed expressions do not get
``stuck''.
First of all we extend the typing judgment to annotated expressions, and to
(annotated) sequences of declarations, as follows.
\begin{definition}\label{def:typeBlock}\
\begin{itemize}
\item \EZ{Given an annotated expression $\metavariable{e}$, $\erase{\metavariable{e}}$ is the expression
obtained by erasing the annotations from $\metavariable{e}$, and} $\TypeCheck{\Gamma}{\metavariable{e}}{\metavariable{C}}{{\cal S}}$ if
$\TypeCheckAnnotate{\Gamma}{\erase{\metavariable{e}}}{\metavariable{C}}{{\cal S}}{\metavariable{e}}$.
\item Given annotated expressions $\metavariable{e}_1$ and $\metavariable{e}_2$, we say that {\em $\metavariable{e}_1$ and $\metavariable{e}_2$ are equal up to annotations}, written $\metavariable{e}_1\approx^-\metavariable{e}_2$, if $\erase{\metavariable{e}_1}=\erase{\metavariable{e}_2}$.
\item Given
$\metavariable{ds}=\Dec{\Type{\mu_1}{\metavariable{C}_1}}{\metavariable{x}_1}{\metavariable{e}_1}\ldots\Dec{\Type{\mu_n}{\metavariable{C}_n}}{\metavariable{x}_n}{\metavariable{e}_n}$,
an (annotated) sequence of declarations, $\TypeCheckDecs{\Gamma}{\metavariable{ds}}{{\cal S}}$ if
\begin{itemize}
\item $\TypeCheck{\Gamma}{\metavariable{e}_i}{\metavariable{C}_i}{{\cal S}_i}$, for some
${\cal S}_i$ ($1\leq i\leq n$),
\item \PG{${\cal S}=\sum\limits_{i=1}^{n}(\SubstEqRel{{\cal S}_i}{\metavariable{x}_i}{\terminale{res}})$}, and
\item if $\mu_i{=}\terminale{a}$ then $\IsCapsule{{\cal S}_i}$ ($1\leq i\leq n$).
\end{itemize}
\end{itemize}
\end{definition}
Note that in the ${\cal S}$ derived for a sequence of declarations the equivalence class
of $\terminale{res}$ is a singleton, according to the fact that a sequence of declarations has no ``result''.
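For instance, for the declarations of the inner block of \refToExample{One}, that is,
$\metavariable{ds}=\Dec{{\tt D}}{{\tt z2}}{\ConstrCall{{\tt D}}{{\tt z2}}}\,\Dec{{\tt D}}{{\tt z1}}{\ConstrCall{{\tt D}}{\FieldAssign{{\tt y}}{{\tt f}}{{\tt x}}}}$,
we have ${\cal S}_1=\{{\tt z2},\terminale{res}\}$ and ${\cal S}_2=\{{\tt x},{\tt y},\terminale{res}\}$, hence
${\cal S}=\Sum{\SubstEqRel{{\cal S}_1}{{\tt z2}}{\terminale{res}}}{\SubstEqRel{{\cal S}_2}{{\tt z1}}{\terminale{res}}}$
has the equivalence classes $\{{\tt z2}\}$ and $\{{\tt z1},{\tt x},{\tt y}\}$, with
$\terminale{res}$ a singleton, as in the derivation ${\cal D}_2$ of \refToFigure{TypingOne}.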
As we can see from the typing rules of \refToFigure{typing}, if
the non-annotated expression $\metavariable{e}$ is typable, then there is a unique annotated expression $\metavariable{e}'$ such that
$\TypeCheck{\Gamma}{\metavariable{e}'}{\metavariable{C}}{{\cal S}}$ for some $\metavariable{C}$ and ${\cal S}$.
\paragraph{Canonical form theorem}
We now state a theorem describing constraints on the free variables
of well-typed garbage-free\ values\footnote{Recall that values are assumed to be in canonical
form.}. Notably, such free variables are either affine or connected to the result of the value. Therefore, a garbage-free\ capsule
value can contain only affine free variables.
In the following we use the underscore $\_$ for a type when the specific type
is irrelevant. Moreover, we will say that \emph{$\metavariable{x}$ is affine/non-affine}
in $\Gamma$ if $\Gamma(\metavariable{x})=\Type{\terminale{a}}{\metavariable{C}}$ or $\Gamma(\metavariable{x})=\Type{}{\metavariable{C}}$ for some $\metavariable{C}$,
respectively.
\begin{theorem}\label{theo:freevars}
\PG{If $\TypeCheck{\Gamma}{\metavariable{v}}{\metavariable{C}}{{\cal S}}$ where $\metavariable{v}$ is garbage-free\ and $\metavariable{y}\in\FV{\metavariable{v}}$, then:}
\begin{enumerate}
\item if {$\metavariable{y}$ is non-affine in $\Gamma$}, then $\Pair{\metavariable{y}}{\terminale{res}}\in{\cal S}$
\item if $\IsCapsule{{\cal S}}$, then {$\metavariable{y}$ is affine in $\Gamma$}.
\end{enumerate}
\end{theorem}
\begin{proof}
\PG{1. By cases on the shape of canonical values.
\begin{itemize}
\item If \underline{$\metavariable{v}=\metavariable{x}$}, then the only free variable in $\metavariable{v}$
is $\metavariable{x}$ itself. Since $\metavariable{x}$ is non-affine in $\Gamma$, then the judgment $\TypeCheck{\Gamma}{\metavariable{x}}{\metavariable{C}}{{\cal S}}$
is derived by rule \rn{t-var}, hence $\Pair{\metavariable{x}}{\terminale{res}}\in{\cal S}$.
\item If \underline{$\metavariable{v}=\BlockLab{\metavariable{dvs}}{\metavariable{x}}{\X}$}, since $\metavariable{dvs}$ is in
canonical form and garbage-free\ we have
$\metavariable{dvs}=\DecP{\metavariable{C}_1}{\metavariable{x}_1}{\ConstrCall{\metavariable{C}_1}{\metavariable{xs}_1}}\ldots\DecP{\metavariable{C}_n}{\metavariable{x}_n}{\ConstrCall{\metavariable{C}_n}{\metavariable{xs}_n}}$
and $\connected{\metavariable{dvs}}{\metavariable{x}}{\metavariable{x}_i}$ for all $i\in 1..n$. The judgment
$\TypeCheck{\Gamma}{\metavariable{v}}{\_}{{\cal S}}$ is derived
by rule \rn{t-block} with premises derived by \rn{t-new} for each declaration (which in turn
have premises that are derived with rules \rn{t-var} or \rn{t-affine-var})
and rule \rn{t-var} to derive a type for the body ($\metavariable{x}=\metavariable{x}_i$ for some $1{\leq}i{\leq}n$).
Hence, letting $\Gamma'=\TypeDec{\metavariable{x}_1}{\Type{}{\metavariable{C}_1}},\ldots,\TypeDec{\metavariable{x}_n}{\Type{}{\metavariable{C}_n}}$, we have
\begin{itemize}
\item $\TypeCheck{\SubstFun{\Gamma}{\Gamma'}}{\ConstrCall{\metavariable{C}_i}{\MS{\metavariable{xs}_i}}}{\metavariable{C}_i}{\{\metavariable{xs}'_i,\terminale{res}\}}$ $(1\leq i\leq n)$ where $\metavariable{xs}'_i$ are the non-affine variables in $\metavariable{xs}_i$
\item $\TypeCheck{\SubstFun{\Gamma}{\Gamma'}}{\metavariable{x}}{\metavariable{C}}{\{\metavariable{x},\terminale{res}\}}$
\item ${\cal S}_i=\{\metavariable{xs}'_i,\metavariable{x}_i\}$
\item ${\cal S}=\Remove{{\cal S}'}{\dom{\Gamma'}}$, with ${\cal S}'=\Sum{\sum\limits_{i=1}^{n}{\cal S}_i}{\{\metavariable{x},\terminale{res}\}}$
\end{itemize}
From \refToProp{lessSrRel}.\ref{p1} and $\connected{\metavariable{dvs}}{\metavariable{x}}{\metavariable{x}_i}$ for all $i\in 1..n$, we have
that ${\cal S}'$ has a unique equivalence class
$\bigcup_{1\leq i\leq n}\{\metavariable{xs}'_i,\metavariable{x}_i\}\cup\{\terminale{res}\}$.
If $\metavariable{y}\in\FV{\metavariable{v}}$, then $\metavariable{y}\in\metavariable{xs}_j$ for some $j\in\{1,\dots,n\}$ and $\metavariable{y}\not\in\{\metavariable{x}_1,\dots,\metavariable{x}_n\}$. From
the fact that $\metavariable{y}$ is not affine we have that $\metavariable{y}\in\metavariable{xs}'_j$. Therefore $\Pair{\metavariable{y}}{\terminale{res}}\in{\cal S}$.
\end{itemize}
}
2. If $\metavariable{y}$ is free in $\metavariable{v}$ and $\metavariable{y}$ is non-affine in $\Gamma$, then by the previous point we would have
$\Pair{\metavariable{y}}{\terminale{res}}\in{\cal S}$, contradicting $\IsCapsule{{\cal S}}$. Hence,
$\Gamma(\metavariable{y})=\Type{\terminale{a}}{\metavariable{C}}$.
\end{proof}
\EZ{The following lemma is a corollary of the canonical form theorem.}
\begin{lemma}\label{lemma:sharingCapsule}
If $\TypeCheck{\Gamma}{\metavariable{v}}{\metavariable{C}}{{\cal S}}$ where $\metavariable{v}$ is garbage-free, and
$\IsCapsule{{\cal S}}$, then ${\cal S}=\epsilon$.
\end{lemma}
\begin{proof}
\EZ{Assume ${\cal S}\neq\epsilon$. Then, from \refToProp{invTyping1}, there would be a free variable in $\metavariable{v}$ non-affine in $\Gamma$, but this is impossible by \refToTheorem{freevars}.2. }
\end{proof}
\paragraph{Subject reduction} To show subject reduction, we need some preliminary lemmas.
The following lemma states that typing essentially depends only on the free
variables of the expression. We denote by $\Remove{\Gamma}{\metavariable{x}}$ the type environment obtained by removing the
type association for $\metavariable{x}$ from $\Gamma$, if any.
\begin{lemma}{\rm (Weakening)}\label{lemma:weakening}
Let $\TypeCheck{\Gamma}{\metavariable{e}}{\metavariable{C}}{{\cal S}}$. If $\metavariable{x}\not\in\FV{\metavariable{e}}$, then
\begin{enumerate}
\item $\TypeCheck{\SubstFun{\Gamma}{\metavariable{x}{:}\metavariable{T}}}{\metavariable{e}}{\metavariable{C}}{\Sum{{\cal S}}{\{\metavariable{x}\}}}$ for all $\metavariable{T}$, and
\item $\TypeCheck{\Remove{\Gamma}{\metavariable{x}}}{\metavariable{e}}{\metavariable{C}}{\Remove{{\cal S}}{\metavariable{x}}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
By induction on derivations.
\end{proof}
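For instance, by rule \rn{t-var} we have
$\TypeCheck{\TypeDec{{\tt x}}{{\tt C}}}{{\tt x}}{{\tt C}}{\{{\tt x},\terminale{res}\}}$, and point 1 of the
lemma gives
$\TypeCheck{\TypeDec{{\tt x}}{{\tt C}},\TypeDec{{\tt y}}{{\tt D}}}{{\tt x}}{{\tt C}}{\Sum{\{{\tt x},\terminale{res}\}}{\{{\tt y}\}}}$,
where the added variable ${\tt y}$ stays in a singleton equivalence class.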
The following lemma states the dependency between the type and sharing relation derived for
a block \EZ{and} the ones derived for its declarations and body.
\begin{lemma}{\rm (Inversion for blocks)}\label{lemma:invBlock}
If $\TypeCheck{\Gamma}{\BlockLab{\metavariable{ds}}{\metavariable{e}}{\X}}{\metavariable{C}}{{\cal S}}$, then
\begin{itemize}
\item $\TypeCheckDecs{\Gamma}{\metavariable{ds}}{{\cal S}_{\metavariable{ds}}}$ for some ${\cal S}_{\metavariable{ds}}$
\item $\TypeCheck{\Gamma}{{\metavariable{e}}}{\metavariable{C}}{{\cal S}_{\metavariable{e}}}$ for some ${\cal S}_{\metavariable{e}}$
\item ${\cal S}=\Remove{(\Sum{{\cal S}_{\metavariable{ds}}}{{\cal S}_{\metavariable{e}}})}{\dom{\metavariable{ds}}}$ and $\X=\Closure{\terminale{res}}{(\Sum{{\cal S}_{\metavariable{ds}}}{{\cal S}_{\metavariable{e}}})}\cap\dom{\metavariable{ds}}$.
\end{itemize}
\end{lemma}
\begin{proof}
By rule \rn{T-Block} and definition of the type judgement for declarations.
\end{proof}
The following lemma asserts that congruent expressions have the same type and
sharing effects. Regarding annotations, which are uniquely determined by the
type derivation: if one of the two expressions is well-typed, so that its annotations are those derived
from its typing, then the annotations of the other are also uniquely determined.
\begin{lemma}{\rm (Congruence preserves types)}\label{lemma:congruence}
Let $\metavariable{e}_1$ and $\metavariable{e}_2$ be annotated expressions.
If $\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}}$
and $\congruence{\metavariable{e}_1}{\metavariable{e}_2}$, then $\TypeCheck{\Gamma}{\metavariable{e}'_2}{\metavariable{C}}{{\cal S}}$ for some $\metavariable{e}'_2$ such that $\metavariable{e}_2'\approx^-\metavariable{e}_2$.\end{lemma}
\begin{proof}
The proof is in~\ref{app:proofs}.
\end{proof}
In the following, when
we have $\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}}$ and $\congruence{\metavariable{e}_1}{\metavariable{e}_2}$ we assume
$\TypeCheck{\Gamma}{\metavariable{e}_2}{\metavariable{C}}{{\cal S}}$, that is, we pick the term with the right annotations.
\bigskip
The {\em type environment extracted from $\metavariable{ds}$}, denoted $\TypeEnv{\metavariable{ds}}$,
is defined by:
\begin{center}
$\Gamma_{\metavariable{ds}}=\TypeDec{\metavariable{x}_1}{\metavariable{T}_1},\ldots,\TypeDec{\metavariable{x}_n}{\metavariable{T}_n}$ if $\metavariable{ds}=\Dec{\metavariable{T}_1}{\metavariable{x}_1}{\metavariable{e}_1}\ldots\Dec{\metavariable{T}_n}{\metavariable{x}_n}{\metavariable{e}_n}$.
\end{center}
Given an evaluation context ${\cal{E}}$, the {\em type
environment extracted from ${\cal{E}}$}, denoted $\Gamma_{{\cal{E}}}$, is defined by:
\begin{itemize}
\item $\Gamma_{[\ ]}$ is the empty type environment,
\item $\Gamma_{\Block{\metavariable{dvs}\,\Dec{\metavariable{T}}{\metavariable{x}}{{\cal{E}}}\ \metavariable{ds}}{\metavariable{e}}}=\SubstFun{(\Gamma_{\metavariable{dvs}\, \metavariable{ds}})[\TypeDec{\metavariable{x}}{\metavariable{T}}]}{\Gamma_{{\cal{E}}}}$ and
\item $\Gamma_{{\Block{\metavariable{dvs}}{{\cal{E}}}}}=\SubstFun{\Gamma_{\metavariable{dvs}}}{\Gamma_{{\cal{E}}}}$.
\end{itemize}
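For instance, assuming a class $\metavariable{C}$ with a single field of type $\metavariable{C}$, for the evaluation context ${\cal{E}}=\Block{\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\metavariable{x}}}}{[\ ]}$ the first and third clauses give $\Gamma_{{\cal{E}}}=\TypeDec{\metavariable{x}}{\metavariable{C}}$.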
The following lemma asserts that subexpressions of typable expressions are
themselves typable, and may be replaced with expressions that have the same
type and the same or possibly less sharing effects. Annotations may change
as an effect of the reduced sharing effects, since the equivalence class of
$\terminale{res}$ in the reduced sharing relations may contain fewer variables.
\begin{lemma}{\rm (Context)}\label{lemma:context}
Let $\TypeCheck{\Gamma}{\Ctx{\metavariable{e}}}{\metavariable{C}}{{\cal S}}$, then
\begin{enumerate}
\item $\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}]}{{\metavariable{e}}}{\metavariable{D}}{{\cal S}_1}$ for some
$\metavariable{D}$ and ${\cal S}_1$,
\item if $\TypeCheck{\Gamma[\Gamma_{{\cal{E}}}]}{{\metavariable{e}'}}{\metavariable{D}}{{\cal S}_2}$ where
$\Finer{{\cal S}_2}{{\cal S}_1}$ (resp.\ ${{\cal S}_2}={{\cal S}_1}$),
then $\TypeCheck{\Gamma}{\CtxP{\metavariable{e}'}}{\metavariable{C}}{{\cal S}'}$ for some ${\cal E}'$ such that
$\CtxP{\metavariable{e}'}\approx^-\Ctx{\metavariable{e}'}$ and
$\Finer{{\cal S}'}{{\cal S}}$ (resp.\ ${{\cal S}'}={{\cal S}}$).
\end{enumerate}
\end{lemma}
\begin{proof}
The proof is in~\ref{app:proofs}.
\end{proof}
The following lemma is used to prove that the elimination rules, namely \rn{alias-elim} and \rn{affine-elim},
do not introduce sharing. In particular, for \rn{alias-elim} a non-affine variable $\metavariable{x}$ is
substituted with a non-affine variable $\metavariable{y}$ which is in the equivalence class of $\metavariable{x}$ in the sharing relation
${\cal S}$, so that there is no newly produced connection. For \rn{affine-elim}, an affine
variable is substituted with a capsule value, so also in this case there is no newly produced connection.
\begin{lemma} {\rm (Substitution)}\label{lemma:substitution}
\begin{enumerate}
\item
If $\TypeCheck{\Gamma,\metavariable{x}{:}\metavariable{D},\metavariable{y}{:}\metavariable{D}}{\metavariable{e}}{\metavariable{C}}{{\cal S}}$, then
$\TypeCheck{\Gamma}{\Subst{\metavariable{e}}{\metavariable{y}}{\metavariable{x}}}{\metavariable{C}}{\Remove{{\cal S}}{\metavariable{x}}}$.
\item Let $\TypeCheck{\Gamma,\metavariable{x}{:}\Type{\terminale{a}}{\metavariable{D}}}{\metavariable{e}}{\metavariable{C}}{{\cal S}}$. If
$\TypeCheck{\Gamma}{\metavariable{v}}{\metavariable{D}}{\epsilon}$, then
$\TypeCheck{\Gamma}{\SubstVal{\metavariable{e}}{\metavariable{v}}{\metavariable{x}}}{\metavariable{C}}{{\cal S}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
By induction on type derivation. For point 1. we use \refToProp{lessSrRel}.\ref{p5}.
\end{proof}
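As a small illustration of point 1., taking ${\cal S}=\{\metavariable{x},\metavariable{y},\metavariable{z}\}$: from $\TypeCheck{\Gamma,\metavariable{x}{:}\metavariable{D},\metavariable{y}{:}\metavariable{D}}{\metavariable{e}}{\metavariable{C}}{\{\metavariable{x},\metavariable{y},\metavariable{z}\}}$ we get $\TypeCheck{\Gamma}{\Subst{\metavariable{e}}{\metavariable{y}}{\metavariable{x}}}{\metavariable{C}}{\{\metavariable{y},\metavariable{z}\}}$, where the connection between $\metavariable{y}$ and $\metavariable{z}$ is preserved, $\metavariable{x}$ disappears, and no new connection is introduced.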
The previous lemma can be easily extended to the type checking judgement for declarations
$\TypeCheckDecs{\Gamma}{\metavariable{ds}}{{\cal S}}$.
The following lemma asserts that the sharing relation of a subexpression is finer than the
sharing relation of the expression that contains it.
\begin{lemma}\label{lemma:monotoneSharing}
Let $\TypeCheck{\Gamma}{\Ctx{e}}{\metavariable{C}}{{\cal S}}$ and $\TypeCheck{\Gamma}{\metavariable{e}}{\metavariable{D}}{{\cal S}'}$.
If $\Pair{\metavariable{x}}{\metavariable{y}}\in{\cal S}'$ with $\metavariable{x},\metavariable{y}\not\in\HB{{\cal{E}}}$ and $\metavariable{x},\metavariable{y}\neq\terminale{res}$,
then $\Pair{\metavariable{x}}{\metavariable{y}}\in{\cal S}$.
\end{lemma}
\begin{proof}
The proof is in~\ref{app:proofs}.
\end{proof}
The following lemma states a technical property needed to prove that sharing
relations are preserved when we reduce a field access redex $\FieldAccess{\metavariable{x}}{\metavariable{f}}$ to the reference $\metavariable{y}$ stored in the field. Recall that a set of variables $\X$ stands for the sharing relation where $\X$ is an equivalence class and all the other equivalence classes are singletons.
\begin{lemma}\label{lemma:fieldAcc}
If $\TypeCheck{\Gamma}{\Ctx{e_1}}{\metavariable{C}}{{\cal S}_1}$, $\TypeCheck{\Gamma}{\Ctx{e_2}}{\metavariable{C}}{{\cal S}_2}$,
$\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{D}}{\{\metavariable{x},\terminale{res}\}}$ and $\TypeCheck{\Gamma}{\metavariable{e}_2}{\metavariable{D}}{\{\metavariable{y},\terminale{res}\}}$ with
$\{\metavariable{x},\metavariable{y}\}\cap\HB{{\cal{E}}}=\emptyset$. Then ${\cal S}_1+\{\metavariable{x},\metavariable{y}\}={\cal S}_2+\{\metavariable{x},\metavariable{y}\}$.
\end{lemma}
\begin{proof}
The proof is in~\ref{app:proofs}.
\end{proof}
As already mentioned, the subject reduction theorem states that in a reduction step $\reduce{\metavariable{e}_1}{\metavariable{e}_2}$:
\begin{enumerate}[(1)]
\item $\metavariable{e}_2$ preserves the type of $\metavariable{e}_1$, and has less or equal sharing effects.
\item For each variable declaration $\Dec{\metavariable{x}}{\_}{\metavariable{e}}$ occurring in $\metavariable{e}_1$ and reduced to $\Dec{\metavariable{x}}{\_}{\metavariable{e}'}$ in $\metavariable{e}_2$, $\metavariable{e}'$ preserves the type of $\metavariable{e}$.
Moreover, ``$\metavariable{e}'$ inside $\metavariable{e}_2$'' has less or equal sharing effects than ``$\metavariable{e}$ inside $\metavariable{e}_1$'', where such sharing effects are those of the initialization expression, plus the connections existing in the store (sequence of evaluated declarations) currently available in the enclosing expression.
\end{enumerate}
Invariant (2) corresponds, in a sense, to the invariant on store which we would have in a conventional calculus, and allows us to express and prove the expected properties of lent and capsule references.
Note that there is no
guarantee that the sharing effects of the initialization expression alone are reduced, since a
new free variable could be introduced in the expression as an effect of field
access.
To formally express invariant (2), we need some notations and definitions. First of all, we need to trace the reduction of right-hand sides of declarations. To simplify the notation, we assume in the following that expressions contain at most one declaration for a variable (no shadowing, as can always be obtained by alpha-conversion). We define \emph{declaration contexts $\decctx{\mu\hspace{.015cm}\x}$} by:
\begin{quote}
\begin{grammatica}
\produzione{\decctx{\mu\hspace{.015cm}\x}}
{\BlockLab{\metavariable{dvs}\ \Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{[\ ]}\ \metavariable{ds}}{\metavariable{e}}{\X}
\mid\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{y}}{\decctx{\mu\hspace{.015cm}\x}}\ \metavariable{ds}}{\metavariable{e}}{\X}
\mid\BlockLab{\metavariable{dvs}}{\decctx{\mu\hspace{.015cm}\x}}{\X}
}{}
\end{grammatica}
\end{quote}
That is, in $\Decctx{\mu\hspace{.015cm}\x}{\metavariable{e}}$ the expression $\metavariable{e}$ occurs as the right-hand side
of the (unique) declaration for reference $\metavariable{x}$, which has qualifier $\mu$. Since declaration contexts are a subset of evaluation contexts, the same assumptions and definitions hold.
To lighten the notation we
write simply $\decctx{\metavariable{x}}$ when the modifier is not relevant, and $\decctx{}$ when not even the variable is relevant.
We now define the sharing relation $\induced{\decctx{\metavariable{x}}}$ induced by the store (sequence of evaluated declarations) enclosing the hole in $\decctx{\metavariable{x}}$. To this end, we first inductively define $\extractAllDec{\decctx{\metavariable{x}}}$:
\begin{itemize}
\item $\extractAllDec{\BlockLab{\metavariable{dvs}\ \Dec{\Type{\mu}{\metavariable{C}}}{\metavariable{x}}{[\ ]}\ \metavariable{ds}}{\metavariable{e}}{\X}}=\metavariable{dvs}$
\item $\extractAllDec{\BlockLab{\metavariable{dvs}}{\decctx{\metavariable{x}}}{\X}}=\extractAllDec{\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{y}}{\decctx{\metavariable{x}}}\ \metavariable{ds}}{\metavariable{e}}{\X}}=\metavariable{dvs}\ \extractAllDec{\decctx{\metavariable{x}}}$
\end{itemize}
Then, if $\extractAllDec{\decctx{\metavariable{x}}}=\Dec{\metavariable{C}_1}{\metavariable{x}_1}{\ConstrCall{\metavariable{C}_1}{\metavariable{xs}_1}}\ \cdots \Dec{\metavariable{C}_n}{\metavariable{x}_n}{\ConstrCall{\metavariable{C}_n}{\metavariable{xs}_n}}$,
we define
$\induced{\decctx{\metavariable{x}}}=\X_1+\cdots+\X_n$, where $\X_i=\{\metavariable{x}_i,\metavariable{xs}_i\}$ ($1\leq i\leq n$).
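For instance, assuming a class $\metavariable{D}$ with no fields, if $\extractAllDec{\decctx{\metavariable{x}}}=\Dec{\metavariable{C}}{\metavariable{x}_1}{\ConstrCall{\metavariable{C}}{\metavariable{x}_2}}\ \Dec{\metavariable{D}}{\metavariable{x}_3}{\ConstrCall{\metavariable{D}}{}}$, then $\induced{\decctx{\metavariable{x}}}=\{\metavariable{x}_1,\metavariable{x}_2\}+\{\metavariable{x}_3\}$, the sharing relation where $\metavariable{x}_1$ and $\metavariable{x}_2$ are connected, since the object bound to $\metavariable{x}_1$ refers to the one bound to $\metavariable{x}_2$, whereas $\metavariable{x}_3$ is only connected to itself.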
To prove invariant (2) of the subject reduction theorem we first need to show that it holds for
the congruence relation.
\begin{lemma}\label{lemma:congrSR}
If $\TypeCheck{\Gamma}{\Decctx{\metavariable{x}}{\metavariable{e}}}{\metavariable{C}}{{\cal S}}$ and $\TypeCheck{\Gamma[\TypeEnv{\decctx{\metavariable{x}}}]}{\metavariable{e}}{\metavariable{D}}{{\cal S}_x}$
and $\congruence{\Decctx{\metavariable{x}}{\metavariable{e}}}{\metavariable{e}_1}$ where $\metavariable{e}_1=\DecctxP{\metavariable{x}}{\metavariable{e}'}$ for some $\decctxP{\metavariable{x}}$ and $\metavariable{e}'$, then
$\TypeCheck{\Gamma[\TypeEnv{\decctxP{\metavariable{x}}}]}{\metavariable{e}'}{\metavariable{D}}{{\cal S}'_x}$
and $\induced{\decctx{\metavariable{x}}}+{\cal S}_{x}=\induced{\decctxP{\metavariable{x}}}+{\cal S}'_{x}$.
\end{lemma}
\begin{proof}
By induction on $\decctx{\metavariable{x}}$.\\
\underline{Case $\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{x}}{[\ ]}\ \metavariable{ds}}{\metavariable{e}_b}{\X}$}.
Then $\TypeCheck{\Gamma}{\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{x}}{\metavariable{e}}\ \metavariable{ds}}{\metavariable{e}_b}{\X}}{\metavariable{C}}{{\cal S}}$ and
$\TypeEnv{\decctx{\metavariable{x}}}=\Gamma_{\metavariable{dvs}},\TypeDec{\metavariable{x}}{\metavariable{T}},\Gamma_{\metavariable{ds}}$. Let
$\congruence{\Decctx{\metavariable{x}}{\metavariable{e}}}{\metavariable{e}_1}$, by \refToLemma{congruence} we have
$\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}}$.\\
Consider how $\metavariable{e}_1$ could be obtained from $\Decctx{\metavariable{x}}{\metavariable{e}}$ by applying congruence rules to its subexpressions.\\
Rule \rn{reorder} applied to the block $\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{x}}{\metavariable{e}}\ \metavariable{ds}}{\metavariable{e}_b}{\X}$
does not modify $\metavariable{e}$ or ${\cal S}_{\metavariable{dvs}}$. In particular, since no declaration following $[\ ]$ can be evaluated, no
declaration of $\metavariable{dvs}$ can be moved after the one of $\metavariable{x}$, and no declaration in $\metavariable{ds}$ can be moved. So
$\metavariable{e}_1=\BlockLab{\metavariable{dvs}'\ \Dec{\metavariable{T}}{\metavariable{x}}{\metavariable{e}}\ \metavariable{ds}}{\metavariable{e}_b}{\X}$ where $\metavariable{dvs}'$ is a reordering of $\metavariable{dvs}$. So the result is obvious.\\
Rule \rn{body} applied to the block $\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{x}}{\metavariable{e}}\ \metavariable{ds}}{\metavariable{e}_b}{\X}$ can only
modify the structure of the block by ``inserting curly brackets'' between declarations. For example
$\metavariable{e}_1$ could be $\BlockLab{\metavariable{dvs}_1} {\BlockLab{\metavariable{dvs}_2\ \Dec{\metavariable{T}}{\metavariable{x}}{\metavariable{e}}\ \metavariable{ds}}{\metavariable{e}_b}{\Y}}{\X}$
where $\metavariable{dvs}=\metavariable{dvs}_1\,\metavariable{dvs}_2$ and $\FV{\metavariable{dvs}_1}\cap\dom{\metavariable{dvs}_2}=\emptyset$. Again in this case
$\metavariable{e}_1=\DecctxP{\metavariable{x}}{\metavariable{e}}$ for some $\decctxP{\metavariable{x}}$ such that $\induced{\decctx{\metavariable{x}}}=\induced{\decctxP{\metavariable{x}}}$ and therefore the result holds.\\
If $\metavariable{e}_1$ is obtained from $\Decctx{\metavariable{x}}{\metavariable{e}}$ by applying the congruence rules in $\metavariable{ds}$, $\metavariable{e}_b$ or $\metavariable{e}$, then
$\metavariable{e}_1=\DecctxP{\metavariable{x}}{\metavariable{e}'}$ with $\decctxP{\metavariable{x}}=\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{x}}{[\ ]}\ \metavariable{ds}'}{\metavariable{e}'_b}{\X}$ and $\congruence{\metavariable{e}}{\metavariable{e}'}$. By \refToLemma{congruence} and
$\TypeCheck{\Gamma[\TypeEnv{\decctx{\metavariable{x}}}]}{\metavariable{e}}{\metavariable{D}}{{\cal S}_x}$ we have $\TypeCheck{\Gamma[\TypeEnv{\decctx{\metavariable{x}}}]}{\metavariable{e}'}{\metavariable{D}}{{\cal S}_x}$. It is easy to see that $\FV{\metavariable{e}}=\FV{\metavariable{e}'}$, since congruent expressions have the same set of free variables.
So again the result holds.\\
Since the congruence has to produce an expression of the shape of $\decctxP{\metavariable{x}}$, no rule can be applied to $\metavariable{dvs}$.\\
Finally, we may have that $\DecctxP{\metavariable{x}}{\metavariable{e}'}$ is obtained by application of rule \rn{dec} to the declaration of $\metavariable{x}$. There are two
cases
\begin{enumerate}[(1)]
\item $\Decctx{\metavariable{x}}{\metavariable{e}}=\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{x}}{\BlockLab{\metavariable{dvs}_1\ \metavariable{ds}_2}{\metavariable{e}_b}{\X}}\ \metavariable{ds}'}{\metavariable{e}'_b}{\Y}$ and \\
$\DecctxP{\metavariable{x}}{\metavariable{e}'}=\BlockLab{\metavariable{dvs}\ \metavariable{dvs}_1\ \Dec{\metavariable{T}}{\metavariable{x}}{\BlockLab{\metavariable{ds}_2}{\metavariable{e}_b}{\X'}}\ \metavariable{ds}'}{\metavariable{e}'_b}{\Y'}$
or
\item $\Decctx{\metavariable{x}}{\metavariable{e}}=\BlockLab{\metavariable{dvs}\ \metavariable{dvs}_1\ \Dec{\metavariable{T}}{\metavariable{x}}{\BlockLab{\metavariable{ds}_2}{\metavariable{e}_b}{\X'}}\ \metavariable{ds}'}{\metavariable{e}'_b}{\Y'}$ and
\\
$\DecctxP{\metavariable{x}}{\metavariable{e}'}=\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{x}}{\BlockLab{\metavariable{dvs}_1\ \metavariable{ds}_2}{\metavariable{e}_b}{\X}}\ \metavariable{ds}'}{\metavariable{e}'_b}{\Y}$
\end{enumerate}
where in both cases
\begin{enumerate}[(1)]\addtocounter{enumi}{2}
\item $\FV{\metavariable{dvs}_1}\cap\dom{\metavariable{ds}_2}=\emptyset$ and $\FV{\metavariable{dvs}\,\metavariable{ds}'\,\metavariable{e}'_b}\cap\dom{\metavariable{dvs}_1}=\emptyset$ and
\item $\mu=\terminale{a}$ implies $\dom{\metavariable{dvs}_1}\cap\X=\emptyset$
\end{enumerate}
From (3) we have that, if $\Gamma'=\Gamma[\Gamma_{\metavariable{dvs}},\Gamma_{\metavariable{ds}'}][\Gamma_{\metavariable{dvs}_1},\Gamma_{\metavariable{ds}_2}]$
and $\Gamma''=\Gamma[\Gamma_{\metavariable{dvs}},\Gamma_{\metavariable{ds}'},\Gamma_{\metavariable{dvs}_1}][\Gamma_{\metavariable{ds}_2}]$, then $\Gamma'=\Gamma''$.\\
From $\TypeCheck{\Gamma}{\Decctx{\metavariable{x}}{\metavariable{e}}}{\metavariable{C}}{{\cal S}}$ and \refToLemma{invBlock} we have that
\begin{enumerate}[(a)]
\item $\TypeCheck{\Gamma[\Gamma_{\metavariable{dvs}},\Gamma_{\metavariable{ds}'}]}{\BlockLab{\metavariable{dvs}_1\ \metavariable{ds}_2}{\metavariable{e}_b}{\X}}{\metavariable{T}}{{\cal S}_x}$
and $\TypeCheckDecs{\Gamma[\Gamma_{\metavariable{dvs}},\Gamma_{\metavariable{ds}'}]}{\metavariable{dvs}}{{\cal S}_d}$ where
\item ${\cal S}_x={\cal S}_1+{\cal S}_2+{\cal S}_e$
\item $\TypeCheckDecs{\Gamma'}{\metavariable{dvs}_1}{{\cal S}_1}$ and $\TypeCheckDecs{\Gamma'}{\metavariable{ds}_2}{{\cal S}_2}$ and $\TypeCheck{\Gamma'}{\metavariable{e}_b}{\_}{{\cal S}_e}$
\end{enumerate}
From $\congruence{\Decctx{\metavariable{x}}{\metavariable{e}}}{\DecctxP{\metavariable{x}}{\metavariable{e}'}}$ and \refToLemma{congruence} we have that
$\TypeCheck{\Gamma}{\DecctxP{\metavariable{x}}{\metavariable{e}'}}{\metavariable{C}}{{\cal S}}$.
From \refToLemma{invBlock}
\begin{enumerate}[(a)]\addtocounter{enumi}{3}
\item $\TypeCheck{\Gamma[\Gamma_{\metavariable{dvs}},\Gamma_{\metavariable{ds}'},\Gamma_{\metavariable{dvs}_1}]}{\BlockLab{\metavariable{ds}_2}{\metavariable{e}_b}{\X'}}{\metavariable{D}}{{\cal S}'_x}$ and\\
$\TypeCheckDecs{\Gamma[\Gamma_{\metavariable{dvs}},\Gamma_{\metavariable{ds}'},\Gamma_{\metavariable{dvs}_1}]}{\metavariable{dvs}}{{\cal S}'_d}$ and $\TypeCheckDecs{\Gamma[\Gamma_{\metavariable{dvs}},\Gamma_{\metavariable{ds}'},\Gamma_{\metavariable{dvs}_1}]}{\metavariable{dvs}_1}{{\cal S}'_1}$ where
\item ${\cal S}'_x={\cal S}'_2+{\cal S}'_e$
\item $\TypeCheckDecs{\Gamma''}{\metavariable{ds}_2}{{\cal S}'_2}$ and
$\TypeCheck{\Gamma''}{\metavariable{e}'_b}{\_}{{\cal S}'_e}$
\end{enumerate}
By \refToLemma{weakening}.1 and 2 we have that ${\cal S}'_1={\cal S}_1$ and
${\cal S}'_2={\cal S}_2$ and ${\cal S}'_e={\cal S}_e$ and ${\cal S}'_d={\cal S}_d$.\\
From the fact that forward references can be done only to evaluated declarations, and there are none in $\metavariable{ds}'$,
we have that $\metavariable{dvs}$ and $\metavariable{dvs}_1$ cannot contain free variables in $\dom{\metavariable{ds}'}$. Therefore
${\cal S}_d={\cal S}_{\metavariable{dvs}}$. Moreover, from (3), $\metavariable{dvs}_1$ cannot contain free variables in $\dom{\metavariable{ds}_2}$, hence
${\cal S}_1={\cal S}_{\metavariable{dvs}_1}$. So we have that\\
\centerline{
$
\begin{array}{lcll}
\induced{\decctx{\metavariable{x}}}+{\cal S}_{x}&=&{\cal S}_{\metavariable{dvs}}+({\cal S}_1+{\cal S}_2+{\cal S}_e)&\text{by definition of $\induced{\decctx{\metavariable{x}}}$ and ${\cal S}_{x}$}\\
&=&{\cal S}_{\metavariable{dvs}}+{\cal S}_{\metavariable{dvs}_1}+({\cal S}_2+{\cal S}_e)\\
&=&\induced{\decctxP{\metavariable{x}}}+({\cal S}_2+{\cal S}_e)&\text{by definition of $\induced{\decctxP{\metavariable{x}}}$}\\
&=&\induced{\decctxP{\metavariable{x}}}+{\cal S}'_{x}&\text{by definition of ${\cal S}'_x$}
\end{array}
$
}
This proves the result for both cases (1) and (2).
\medskip\noindent
The \underline{cases $\BlockLab{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{y}}{\decctx{\metavariable{x}}}\ \metavariable{ds}}{\metavariable{e}_b}{\X}$ and $\BlockLab{\metavariable{dvs}}{\decctx{\metavariable{x}}}{\X}$}
are proved using the inductive hypothesis and a case analysis on the congruence rule used for the block, as in the previous case. \\
\end{proof}
\begin{theorem}{\rm (Subject reduction)}\label{theo:subred}
If $\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}}$ and $\reduce{\metavariable{e}_1}{\metavariable{e}_2}$, then
\begin{enumerate}
\item $\TypeCheck{\Gamma}{\metavariable{e}'_2}{\metavariable{C}}{{\cal S}'}$ where $\metavariable{e}_2'\approx^-\metavariable{e}_2$
and $\Finer{{\cal S}'}{{\cal S}}$, and
{\item if $\metavariable{e}_1=\Decctx{\metavariable{x}}{\metavariable{e}}$, $\metavariable{e}_2=\DecctxP{\metavariable{x}}{\metavariable{e}'}$,
and $\TypeCheck{\TypeEnv{\decctx{\metavariable{x}}}}{\metavariable{e}}{\metavariable{D}}{{\cal S}_x}$, then
$\TypeCheck{\TypeEnv{\decctxP{\metavariable{x}}}}{\metavariable{e}''}{\metavariable{D}}{{\cal S}'_x}$ where $\metavariable{e}''\approx^-\metavariable{e}'$ and
$\Finer{(\induced{\decctxP{\metavariable{x}}}+{\cal S}'_{x})}{(\induced{\decctx{\metavariable{x}}}+{\cal S}_{x})}$.}
\end{enumerate}
\end{theorem}
\begin{proof}
By induction on the derivation of $\reduce{\metavariable{e}_1}{\metavariable{e}_2}$ with a case analysis on the last rule of \refToFigure{reduction} used for $\reduce{\Ctx{\rho}}{\CtxP{\metavariable{e}'}}$.
We show the most interesting cases, which are \rn{congr}, \rn{field-access} and
\rn{field-assign}, and just sketch the ones for \rn{alias-elim} and
\rn{affine-elim}. The proof of the remaining cases is in~\ref{app:proofs}.
\underline{Rule \rn{congr}}.
In this case
\begin{itemize}
\item $\congruence{\metavariable{e}_1}{\metavariable{e}'_1}$
\item $\reduce{\metavariable{e}'_1}{\metavariable{e}_2}$
\end{itemize}
If $\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}}$ and
$\congruence{\metavariable{e}_1}{\metavariable{e}'_1}$, from \refToLemma{congruence},
$\TypeCheck{\Gamma}{\metavariable{e}'_1}{\metavariable{C}}{{\cal S}}$.
\begin{enumerate}
\item
By induction hypothesis on $\metavariable{e}'_1$ we have that
$\TypeCheck{\Gamma}{\metavariable{e}'_2}{\metavariable{C}}{{\cal S}'}$ where $\metavariable{e}_2'\approx^-\metavariable{e}_2$ and $\Finer{{\cal S}'}{{\cal S}}$.
\item If $\metavariable{e}_1=\Decctx{\metavariable{y}}{\metavariable{e}}$ and $\congruence{\metavariable{e}_1}{\metavariable{e}'_1}$, then $\metavariable{e}'_1=\DecctxS{\metavariable{y}}{\metavariable{e}''}$
for some $\decctxS{\metavariable{y}}$. Let $\metavariable{e}_2=\DecctxP{\metavariable{y}}{\metavariable{e}'}$,
from $\TypeCheck{\TypeEnv{\decctx{\metavariable{y}}}}{\metavariable{e}}{\metavariable{D}}{{\cal S}_y}$ and
\refToLemma{congrSR}, we have that
$\TypeCheck{\TypeEnv{\decctxS{\metavariable{y}}}}{\metavariable{e}''}{\metavariable{D}}{{\cal S}''_y}$ and
${(\induced{\decctx{\metavariable{y}}}+{\cal S}_{y})}={(\induced{\decctxS{\metavariable{y}}}+{\cal S}''_{y})}$.
By induction hypothesis $\TypeCheck{\TypeEnv{\decctxP{\metavariable{y}}}}{\metavariable{e}''}{\metavariable{D}}{{\cal S}'_y}$ where $\metavariable{e}''\approx^-\metavariable{e}'$ and
$\Finer{(\induced{\decctxP{\metavariable{y}}}+{\cal S}'_{y})}{(\induced{\decctxS{\metavariable{y}}}+{\cal S}''_{y})}$.
Therefore
$\Finer{(\induced{\decctxP{\metavariable{y}}}+{\cal S}'_{y})}{(\induced{\decctx{\metavariable{y}}}+{\cal S}_{y})}$.
\end{enumerate}
\medskip
\underline{Rule \rn{field-access}}.
\begin{enumerate}
\item In this case
\begin{itemize}
\item $\metavariable{e}_1 = {\cal{E}}[\rho]$ and $\metavariable{e}_2 = {\cal{E}}'[\metavariable{e}']$
\item ${\cal E}'={\cal{E}}=\DecCtx{{\cal{E}}}{\metavariable{x}}{{\cal{E}}_1}$ for some ${\cal{E}}_1$
\item $\extractDec{{\cal{E}}}{\metavariable{x}}=\metavariable{dv}_x=\Dec{\metavariable{D}}{\metavariable{x}}{\ConstrCall{\metavariable{D}}{\metavariable{x}_1,\ldots,\metavariable{x}_n}}$
\item $\rho=\FieldAccess{\metavariable{x}}{\metavariable{f}_i}$ and $\metavariable{e}'=\metavariable{x}_i$ with $\metavariable{x}_i\not\in\HB{{\cal{E}}_1}$.
\end{itemize}
By definition of $\decCtx{{\cal{E}}}{\metavariable{x}}$, for some ${\cal{E}}_2$
\begin{enumerate}[(1)]
\item either $\decCtx{{\cal{E}}}{\metavariable{x}}={\cal{E}}_2[\Block{\metavariable{ds}'}{\metavariable{e}_b}]$ with
$\metavariable{ds}'=\metavariable{dvs}\ \metavariable{dv}_x\ \Dec{\metavariable{T}}{\metavariable{z}}{{\cal{E}}_1[\FieldAccess{\metavariable{x}}{\metavariable{f}_i}]}\ \metavariable{ds}$
\item or $\decCtx{{\cal{E}}}{\metavariable{x}}={\cal{E}}_2[\Block{\metavariable{dvs}\ \metavariable{dv}_x}{{\cal{E}}_1}]$.
\end{enumerate}
and $\metavariable{x}\not\in\HB{{\cal{E}}_1}$. \\
We consider only case (1) since the proof for the other case is similar and simpler.\\
From \refToLemma{context}.1 we have that
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}_2}]}{\Block{\metavariable{ds}'}{\metavariable{e}_b}}{\metavariable{C}'}{{\cal S}_1}$
for some $\metavariable{C}'$ and ${\cal S}_1$. From typing rule \rn{T-block},
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}'}]}{{\cal{E}}_1[\FieldAccess{\metavariable{x}}{\metavariable{f}_i}]}{\metavariable{D}'}{{\cal S}_2}$
for some $\metavariable{D}'$ and ${\cal S}_2$. From \refToLemma{context}.1 and typing
rules \rn{t-field-access} and \rn{t-var} we have that
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}'}][\Gamma_{{\cal{E}}_1}]}{\FieldAccess{\metavariable{x}}{\metavariable{f}_i}}{\metavariable{D}_i}{\{\metavariable{x},\terminale{res}\}}$ and
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}'}][\Gamma_{{\cal{E}}_1}]}{\metavariable{x}_i}{\metavariable{D}_i}{\{\metavariable{x}_i,\terminale{res}\}}$.
(Note that neither $\metavariable{x}$ nor $\metavariable{x}_i$ can be forward references to non-evaluated
declarations and therefore they must be defined without the
$\terminale{a}$ modifier.) From \refToLemma{fieldAcc} we have that, if
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}'}]}{{\cal{E}}_1[\metavariable{x}_i]}{\metavariable{D}'}{{\cal S}'_2}$,
then ${\cal S}_2+\{\metavariable{x},\metavariable{x}_i\}={\cal S}'_2+\{\metavariable{x},\metavariable{x}_i\}$. Since
$\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}'}]}{\metavariable{dv}_x}{\{\metavariable{x},\metavariable{x}_1,\ldots,\metavariable{x}_n,\terminale{res}\}}$,
we have that:\\
$\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}'}]}{\Dec{\metavariable{T}}{\metavariable{z}}{{\cal{E}}_1[\FieldAccess{\metavariable{x}}{\metavariable{f}_i}]}\ \metavariable{dv}_x}{{\cal S}_3}$ implies
$\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}''}]}{\Dec{\metavariable{T}}{\metavariable{z}}{{\cal{E}}_1[\metavariable{x}_i]}\ \metavariable{dv}_x}{{\cal S}_3}$ where
$\metavariable{ds}''=\metavariable{dvs}\ \metavariable{dv}_x\ \Dec{\metavariable{T}}{\metavariable{z}}{{\cal{E}}_1[\metavariable{x}_i]}\ \metavariable{ds}$. Therefore
\begin{center}
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}_2}]}{\Block{\metavariable{dvs}\ \metavariable{dv}_x\ \Dec{\metavariable{T}}{\metavariable{z}}{{\cal{E}}_1[\metavariable{x}_i]}\ \metavariable{ds}}{\metavariable{e}_b}}{\metavariable{C}'}{{\cal S}_1}$
\end{center}
and from \refToLemma{context}.2 we derive $\TypeCheck{\Gamma}{\CtxP{\metavariable{e}''}}{\metavariable{C}}{{\cal S}}$ where $\Ctx{\metavariable{e}'}\approx^-\CtxP{\metavariable{e}''}$.
\item Let $\metavariable{e}_1=\Decctx{\metavariable{y}}{\metavariable{e}}$, $\metavariable{e}_2=\DecctxP{\metavariable{y}}{\metavariable{e}'}$, and
$\reduce{\metavariable{e}_1=\Ctx{\FieldAccess{\metavariable{x}}{\metavariable{f}_i}}}{\metavariable{e}_2=\CtxP{\metavariable{x}_i}}$. If the
redex $\FieldAccess{\metavariable{x}}{\metavariable{f}_i}$ is not a subexpression of $\metavariable{e}$ then
$\metavariable{e}=\metavariable{e}'$, and since $\TypeEnv{\decctx{\metavariable{y}}}=\TypeEnv{\decctxP{\metavariable{y}}}$, the
result is obvious. If $\FieldAccess{\metavariable{x}}{\metavariable{f}_i}$ is a subexpression of $\metavariable{e}$,
then from \refToLemma{decomposition} for some ${\cal E}''$ we have that
$\metavariable{e}={\cal E}''[\FieldAccess{\metavariable{x}}{\metavariable{f}_i}]$ and $\metavariable{e}'={\cal E}''[{\metavariable{x}_i}]$. From
$\TypeCheck{\TypeEnv{\decctx{\metavariable{y}}}}{{\cal E}''[\FieldAccess{\metavariable{x}}{\metavariable{f}_i}]}{\metavariable{D}}{{\cal S}_x}$
and $\TypeCheck{\TypeEnv{\decctx{\metavariable{y}}}}{{\cal E}''[{\metavariable{x}_i}]}{\metavariable{D}}{{\cal S}'_x}$,
and \refToLemma{context}.1 we have that
$\TypeCheck{\Gamma[\TypeEnv{\decctx{\metavariable{y}}}][\Gamma_{{\cal E}''}]}{\FieldAccess{\metavariable{x}}{\metavariable{f}_i}}{\metavariable{D}_i}{\{\metavariable{x},\terminale{res}\}}$
and
$\TypeCheck{\Gamma[\TypeEnv{\decctx{\metavariable{y}}}][\Gamma_{{\cal E}''}]}{\metavariable{x}_i}{\metavariable{D}_i}{\{\metavariable{x}_i,\terminale{res}\}}$.
If $\metavariable{dv}_x\in\extractAllDec{{\cal E}''}$ then ${\cal S}_x={\cal S}'_x$, otherwise
$\metavariable{dv}_x\in\extractAllDec{\decctx{\metavariable{y}}}$. In both cases
$\induced{\decctx{\metavariable{y}}}+{\cal S}_x=\induced{\decctx{\metavariable{y}}}+{\cal S}'_x$.
\end{enumerate}
\underline{Rule \rn{field-assign}}.
\begin{enumerate}
\item In this case
\begin{itemize}
\item ${\cal{E}}=\DecCtx{{\cal{E}}}{\metavariable{x}}{{\cal{E}}_1}$ for some ${\cal{E}}_1$,
\item ${\cal{E}}'=\UpdateCtxX{\metavariable{y}}{\metavariable{x}}{i}{{\cal{E}}_1}$ since $\metavariable{x}\not\in\HB{{\cal{E}}_1}$
\item $\extractDec{{\cal{E}}}{\metavariable{x}}=\metavariable{dv}_x=\Dec{\metavariable{D}}{\metavariable{x}}{\ConstrCall{\metavariable{D}}{\metavariable{x}_1,\ldots,\metavariable{x}_n}}$
\item $\rho=\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}$ and $\metavariable{e}'=\metavariable{y}$ with $\metavariable{y}\not\in\HB{{\cal{E}}_1}$.
\end{itemize}
As for the case of field access, $\decCtx{{\cal{E}}}{\metavariable{x}}$ has either shape (1) or (2), with
$\metavariable{y}\not\in\HB{{\cal{E}}_1}$. We consider only case (1).\\
From \refToLemma{context}.1 we have that
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}_2}]}{\Block{\metavariable{ds}'}{\metavariable{e}_b}}{\metavariable{C}'}{{\cal S}_1}$
for some $\metavariable{C}'$ and ${\cal S}_1$. From typing rule \rn{T-block} we have that
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}'}]}{{\cal{E}}_1[\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}]}{\metavariable{D}'}{{\cal S}_2}$
for some $\metavariable{D}'$ and ${\cal S}_2$. From \refToLemma{context}.1, typing
rules \rn{t-field-assign} and \rn{t-var} we have that
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}'}][\Gamma_{{\cal{E}}_1}]}{\metavariable{x}}{\metavariable{D}}{\{\metavariable{x},\terminale{res}\}}$,
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}'}][\Gamma_{{\cal{E}}_1}]}{\metavariable{y}}{\metavariable{D}_i}{\{\metavariable{y},\terminale{res}\}}$, and
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}'}][\Gamma_{{\cal{E}}_1}]}{{\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}}}{\metavariable{D}_i}{\{\metavariable{x},\metavariable{y},\terminale{res}\}}$
where $\fields{\metavariable{D}}{=}\Field{\metavariable{D}_1}{\metavariable{f}_1}\ldots\Field{\metavariable{D}_n}{\metavariable{f}_n}$ and
$\Finer{\{\metavariable{y},\terminale{res}\}}{\{\metavariable{x},\metavariable{y},\terminale{res}\}}$.\\
From $\TypeCheck{\Gamma[\Gamma_{{\cal{E}}_2}]}{\Block{\metavariable{ds}'}{\metavariable{e}_b}}{\metavariable{C}'}{{\cal S}_1}$
and \refToLemma{invBlock}, we have that
\begin{enumerate}[(a)]
\item $\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}'}]}{\Dec{\metavariable{T}}{\metavariable{z}}{{\cal{E}}_1[\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}]}}{{\cal S}_3}$,
\item $\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}'}]}{\metavariable{dv}_x}{{\cal S}_x}$
where ${\cal S}_x=\{\metavariable{x},\metavariable{x}_1,\ldots,\metavariable{x}_n\}$,
\item $\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}'}]}{\metavariable{dvs}\ \metavariable{ds}}{{\cal S}_4}$, and
\item $\TypeCheck{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}'}]}{\metavariable{e}_b}{\metavariable{C}'}{{\cal S}_5}$.
\end{enumerate}
Let $\metavariable{dv}'_x=\Dec{\metavariable{D}}{\metavariable{x}}{\ConstrCall{\metavariable{D}}{\metavariable{x}_1,\ldots,\metavariable{x}_{i-1},\metavariable{y},\metavariable{x}_{i+1},\ldots,\metavariable{x}_n}}$
and $\metavariable{ds}''=\metavariable{dvs}\ \metavariable{dv}'_x\ \Dec{\metavariable{T}}{\metavariable{z}}{{\cal{E}}_1[\metavariable{y}]}\ \metavariable{ds}$. We have that
$\Gamma_{\metavariable{ds}'}=\Gamma_{\metavariable{ds}''}$. Therefore
\begin{enumerate}[(A)]
\item $\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}''}]}{\Dec{\metavariable{T}}{\metavariable{z}}{{\cal{E}}_1[\metavariable{y}]}}{{\cal S}'_3}$,
\item $\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}''}]}{\metavariable{dv}'_x}{{\cal S}_{x'}}$
where ${\cal S}_{x'}=\{\metavariable{x},\metavariable{x}_1,\ldots,\metavariable{x}_{i-1},\metavariable{y},\metavariable{x}_{i+1},\ldots,\metavariable{x}_n\}$,
\item $\TypeCheckDecs{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}''}]}{\metavariable{dvs}\ \metavariable{ds}}{{\cal S}_4}$, and
\item $\TypeCheck{\Gamma[\Gamma_{{\cal{E}}_2}][\Gamma_{\metavariable{ds}''}]}{\metavariable{e}_b}{\metavariable{C}'}{{\cal S}_5}$.
\end{enumerate}
From typing rules \rn{T-field-assign} and \rn{t-var}, and
\refToLemma{fieldAcc} we get
${\cal S}_3+\{\metavariable{x},\metavariable{y}\}={\cal S}'_3+\{\metavariable{x},\metavariable{y}\}$. Moreover, from (a),
typing rule \rn{T-field-assign}, and \refToLemma{monotoneSharing} we have
that $\metavariable{x}$ and $\metavariable{y}$ are in the same equivalence class in ${\cal S}_3$,
i.e., $\Closure{\metavariable{x}}{{\cal S}_3}=\Closure{\metavariable{y}}{{\cal S}_3}$.
Therefore,
$\Finer{{\cal S}'_3+{\cal S}_{x'}}{{\cal S}_3+{\cal S}_{x}}$.
From (A)--(D), typing rule \rn{T-block} and \refToLemma{context}.2 we get
\begin{center}
$\TypeCheck{\Gamma[\Gamma_{{\cal{E}}_2}]}{\Block{\metavariable{dvs}\ \metavariable{dv}'_x\ \Dec{\metavariable{T}}{\metavariable{z}}{{\cal{E}}_1[\metavariable{y}]}\ \metavariable{ds}}{\metavariable{e}_b}}{\metavariable{C}'}{{\cal S}'_1}$
\end{center}
where $\Finer{{\cal S}'_1}{{\cal S}_1}$. From \refToLemma{context}.2
we derive that $\TypeCheck{\Gamma}{{\cal{E}}_2[\Block{\metavariable{dvs}\ \metavariable{dv}'_x\
\Dec{\metavariable{T}}{\metavariable{z}}{{\cal{E}}_1[\metavariable{y}]}\ \metavariable{ds}}{\metavariable{e}_b}]}{\metavariable{C}}{{\cal S}'}$ where
$\Finer{{\cal S}'}{{\cal S}}$. Consider
$\UpdateCtxX{\metavariable{y}}{\metavariable{x}}{i}{{\cal{E}}_1}={\cal{E}}_2[\Block{\metavariable{dvs}\ \metavariable{dv}'_x\
\Dec{\metavariable{T}}{\metavariable{z}}{{\cal{E}}_1}\ \metavariable{ds}}{\metavariable{e}_b}]$, we have that
$\TypeCheck{\Gamma}{\UpdateCtxX{\metavariable{y}}{\metavariable{x}}{i}{{\cal{E}}_1[\metavariable{y}]}}{\metavariable{C}}{{\cal S}'}$,
which proves the result.
\item Let $\metavariable{e}_1=\Decctx{\metavariable{y}}{\metavariable{e}}$, $\metavariable{e}_2=\DecctxP{\metavariable{y}}{\metavariable{e}'}$, and
$\reduce{\metavariable{e}_1=\Ctx{\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}}}{\metavariable{e}_2=\CtxP{\metavariable{y}}}$. If the
redex is not a subexpression of $\metavariable{e}$ then $\metavariable{e}=\metavariable{e}'$, and since
$\TypeEnv{\decctx{\metavariable{y}}}=\TypeEnv{\decctxP{\metavariable{y}}}$, the result is obvious. If
$\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}$ is a subexpression of $\metavariable{e}$, then from
\refToLemma{decomposition} for some ${\cal E}''$ we have that
$\metavariable{e}={\cal E}''[\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}]$ and $\metavariable{e}'={\cal E}''[{\metavariable{y}}]$. From
$\TypeCheck{\TypeEnv{\decctx{\metavariable{y}}}}{{\cal E}''[\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}]}{\metavariable{D}}{{\cal S}_x}$,
$\TypeCheck{\TypeEnv{\decctx{\metavariable{y}}}}{{\cal E}''[{\metavariable{y}}]}{\metavariable{D}}{{\cal S}'_x}$, and
\refToLemma{context}.1 we have that
$\TypeCheck{\Gamma[\TypeEnv{\decctx{\metavariable{y}}}][\Gamma_{{\cal E}''}]}{\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}}{\metavariable{D}_i}{\{\metavariable{x},\metavariable{y},\terminale{res}\}}$ and
$\TypeCheck{\Gamma[\TypeEnv{\decctx{\metavariable{y}}}][\Gamma_{{\cal E}''}]}{\metavariable{y}}{\metavariable{D}_i}{\{\metavariable{y},\terminale{res}\}}$.
From $\Finer{\{\metavariable{y},\terminale{res}\}}{\{\metavariable{x},\metavariable{y},\terminale{res}\}}$, and \refToLemma{context}.2 we
derive that $\Finer{{\cal S}'_x}{{\cal S}_x}$. Therefore, from
\refToProp{lessSrRel}.\ref{p2}, $\Finer{\induced{\decctx{\metavariable{y}}}+{\cal S}'_x}{\induced{\decctx{\metavariable{y}}}+{\cal S}_x}$.
\end{enumerate}
\underline{Rule \rn{alias-elim}}.
Clause 1. is proved using \refToLemma{substitution}.1. Clause 2. is proved as
in the case of \rn{field-assign}.
\underline{Rule \rn{affine-elim}}.
Clause 1. is proved using \refToLemma{substitution}.2 and
\refToLemma{sharingCapsule}. Clause 2. is proved as in the case of
\rn{field-assign}.
\end{proof}
\paragraph{Lent and capsule properties}
We can now formally express the lent and capsule notions, informally
described in the Introduction.
Informally, a reference $\metavariable{x}$ is (used as) lent if no sharing can be introduced through $\metavariable{x}$. Formally, if $\TypeCheck{\Gamma}{\metavariable{e}}{\_}{{\cal S}}$, then $\metavariable{x}$ is \emph{lent in $\metavariable{e}$} if $\Closure{\metavariable{x}}{{\cal S}}=\{\metavariable{x}\}$. The notion can be generalized to a set of references $\metavariable{xs}$, that is, $\metavariable{xs}$ is \emph{lent in $\metavariable{e}$} if, for each $\metavariable{x}\in\metavariable{xs}$, $\Closure{\metavariable{x}}{{\cal S}}\subseteq \metavariable{xs}$.
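For instance, if $\TypeCheck{\Gamma}{\metavariable{e}}{\_}{{\cal S}}$ with ${\cal S}=\{\metavariable{y},\metavariable{z}\}$, then $\metavariable{x}$ is lent in $\metavariable{e}$, since its equivalence class is the singleton $\{\metavariable{x}\}$, whereas neither $\metavariable{y}$ nor $\metavariable{z}$ alone is lent; the set $\{\metavariable{y},\metavariable{z}\}$, however, is lent in $\metavariable{e}$.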
Consider now an expression in a declaration context $\Decctx{\metavariable{x}}{\metavariable{e}}$, and a reference $\metavariable{y}$. The portion of store connected to $\metavariable{y}$ before the evaluation of $\metavariable{e}$ is $\Closure{\metavariable{y}}{\induced{\decctx{\metavariable{x}}}}$.
Now, if the expression $\metavariable{e}$ can access such portion of store only through lent references, then the following two properties are ensured by the evaluation of $\metavariable{e}$:
\begin{enumerate}
\item such portion of store remains isolated from others
\item such portion of store cannot be part of the final result of $\metavariable{e}$.
\end{enumerate}
This is formally expressed by the following theorem.
\begin{theorem}{\rm (Lent)}\label{theo:lent}
If $\IsWellTyped{\Decctx{}{\metavariable{e}}}$,
$\TypeCheck{\TypeEnv{\decctx{}}}{\metavariable{e}}{\_}{{\cal S}}$, and $\metavariable{ys}=\Closure{\metavariable{y}}{\induced{\decctx{}}}$ is lent in $\metavariable{e}$, then:
\begin{enumerate}
\item if $\Decctx{}{\metavariable{e}}\longrightarrow\DecctxP{}{\metavariable{e}'}$ then $\TypeCheck{\TypeEnv{\decctxP{}}}{\metavariable{e}''}{\_}{{\cal S}'}$ where
$\metavariable{e}''\approx^-\metavariable{e}'$ and $\Closure{\metavariable{y}}{(\induced{\decctxP{}}+{\cal S}')}\subseteq\metavariable{ys}$
\item if $\Decctx{}{\metavariable{e}}\longrightarrow^\star\DecctxP{}{\metavariable{v}}$, then $\metavariable{y}\not\in\FV{\aux{gc}(\metavariable{v})}$.
\end{enumerate}
\end{theorem}
\begin{proof}\
\begin{enumerate}
\item Since $\metavariable{ys}$ is lent in $\metavariable{e}$, $\Closure{\metavariable{y}}{{\cal S}}\subseteq\metavariable{ys}$, hence $\Closure{\metavariable{y}}{(\induced{\decctx{}}+{\cal S})}=\Closure{\metavariable{y}}{\induced{\decctx{}}}$.
From
\refToTheorem{subred}.2, since
$\TypeCheck{\TypeEnv{\decctx{}}}{\metavariable{e}}{\_}{{\cal S}}$, we have that
$\TypeCheck{\TypeEnv{\decctxP{}}}{\metavariable{e}''}{\_}{{\cal S}'}$ where
$\metavariable{e}''\approx^-\metavariable{e}'$ and $\Finer{(\induced{\decctxP{}}+{\cal S}')}{(\induced{\decctx{}}+{\cal S})}$, hence
$\Closure{\metavariable{y}}{(\induced{\decctxP{}}+{\cal S}')}\subseteq\Closure{\metavariable{y}}{(\induced{\decctx{}}+{\cal S})}=\Closure{\metavariable{y}}{\induced{\decctx{}}}\EZ{=\metavariable{ys}}$.
\item
By induction on the number $n$ of steps of the reduction
$\Decctx{}{\metavariable{e}}\longrightarrow^\star\DecctxP{}{\metavariable{v}}$. \\
For $n=0$, we have $\TypeCheck{\TypeEnv{\decctx{}}}{\metavariable{v}}{\_}{{\cal S}}$. If
we had $\metavariable{y}\in\FV{\aux{gc}(\metavariable{v})}$, then, from \refToTheorem{freevars}.1, we
would have either $\Pair{\metavariable{y}}{\terminale{res}}\in{\cal S}$ or $\metavariable{y}$ affine in $\TypeEnv{\decctx{}}$. However, $\Pair{\metavariable{y}}{\terminale{res}}\in{\cal S}$ contradicts the hypothesis that $\metavariable{ys}$ is lent in $\metavariable{e}$, and it is easy to see from the definition that in $\decctx{}$ there are no affine variable declarations.\\
Let $\Decctx{}{\metavariable{e}} \reduce{} \DecctxP{}{\metavariable{e}'}$ and
$\DecctxP{}{\metavariable{e}''}\longrightarrow^\star\DecctxS{}{\metavariable{v}}$, where
$\metavariable{e}''\approx^-\metavariable{e}'$, in $n$ steps. By point 1. of this theorem we have $\TypeCheck{\TypeEnv{\decctxP{}}}{\metavariable{e}''}{\_}{{\cal S}'}$ and $\Closure{\metavariable{y}}{(\induced{\decctxP{}}+{\cal S}')}\subseteq\metavariable{ys}$, hence $\Closure{\metavariable{y}}{{\cal S}'}\subseteq\metavariable{ys}$, that is, $\metavariable{ys}$ is lent in $\metavariable{e}''$ (note that being lent does not depend on the annotations), and by inductive
hypothesis on $\TypeCheck{\TypeEnv{\decctxP{}}}{\metavariable{e}''}{\_}{{\cal S}'}$ we
derive the thesis.
\end{enumerate}
\end{proof}
Informally, a capsule is a reachable object graph which is an isolated portion
of store, that is, it does not contain nodes reachable from the outside. In
our calculus, a reachable object subgraph is a value $\metavariable{v}$, and nodes reachable
from the outside are free variables; hence the condition of being a capsule can
be formally expressed by requiring that $\metavariable{v}$ has no free variables.
The following theorem states that the right-hand side of a capsule declaration actually reduces to a closed portion of store.
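For instance, assuming a class $\metavariable{C}$ with a single field of type $\metavariable{C}$, the value $\Block{\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\metavariable{x}}}}{\metavariable{x}}$ is closed, hence it denotes a capsule, whereas $\Block{\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\metavariable{y}}}}{\metavariable{x}}$ has the free variable $\metavariable{y}$, so the portion of store it denotes may be reached from the outside through $\metavariable{y}$.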
\begin{theorem}{\rm (Capsule)}\label{theo:capsule}
If $\IsWellTyped{\Decctx{{\tt a}\hspace{.015cm}\x}{\metavariable{e}}}$,
$\TypeCheck{\TypeEnv{\decctx{{\tt a}\hspace{.015cm}\x}}}{\metavariable{e}}{\_}{{\cal S}}$ with
$\IsCapsule{{\cal S}}$, and
$\Decctx{{\tt a}\hspace{.015cm}\x}{\metavariable{e}}\longrightarrow^\star\DecctxP{{\tt a}\hspace{.015cm}\x}{\metavariable{v}}$, then
$\FV{\aux{gc}(\metavariable{v})}=\emptyset$.
\end{theorem}
{\begin{proof}
By induction on the number $n$ of steps of the reduction
$\Decctx{{\tt a}\hspace{.015cm}\x}{\metavariable{e}}\longrightarrow^\star\DecctxP{{\tt a}\hspace{.015cm}\x}{\metavariable{v}}$. \\
For $n=0$, we have $\TypeCheck{\TypeEnv{\decctx{}}}{\metavariable{v}}{\_}{{\cal S}}$.
From \refToTheorem{freevars}.2, $\IsCapsule{{\cal S}}$ implies $\metavariable{y}$ affine in $\TypeEnv{\decctx{}}$ for each $\metavariable{y}\in\FV{\metavariable{v}}$. However,
there are no affine variable declarations in $\decctx{{\tt a}\hspace{.015cm}\x}$.\\
Let $\Decctx{{\tt a}\hspace{.015cm}\x}{\metavariable{e}} \reduce{} \DecctxP{{\tt a}\hspace{.015cm}\x}{\metavariable{e}'}$ and
$\DecctxP{{\tt a}\hspace{.015cm}\x}{\metavariable{e}''}\longrightarrow^\star\DecctxS{{\tt a}\hspace{.015cm}\x}{\metavariable{v}}$, where
$\metavariable{e}''\approx^-\metavariable{e}'$, in $n$ steps. From
\refToTheorem{subred}.2, since
$\TypeCheck{\TypeEnv{\decctx{}}}{\metavariable{e}}{\_}{{\cal S}}$, and
$\Closure{\terminale{res}}{{\cal S}}=\emptyset$, we have
$\TypeCheck{\TypeEnv{\decctxP{}}}{\metavariable{e}''}{\_}{{\cal S}'}$, and
$\Closure{\terminale{res}}{{\cal S}'}{=\emptyset}$. By induction hypothesis on
$\TypeCheck{\TypeEnv{\decctxP{}}}{\metavariable{e}''}{\_}{{\cal S}'}$ we derive the
result.
\end{proof}}
\paragraph{Progress} Closed expressions are not ``stuck'', that is, they
either are values or can be reduced.
To prove the theorem we introduce the set of \emph{redexes},
and we show that expressions can be decomposed in an evaluation
context filled with a redex. Therefore an expression either is a value or it
matches the left-hand side of exactly one reduction rule.
\begin{definition}
Redexes, $\rho$, are defined by:
{\small \begin{center}
$
\begin{array}{c}
\rho ::=\FieldAccess{\metavariable{x}}{\metavariable{f}}\ \mid\ \MethCall{\metavariable{v}}{\metavariable{m}}{\metavariable{vs}}\ \mid\
\FieldAssign{\metavariable{x}}{\metavariable{f}}{\metavariable{y}}
\mid\ \BlockLab{\metavariable{dvs}\ \Dec{\metavariable{C}}{\metavariable{x}}{\metavariable{y}}\ \metavariable{ds} }{\metavariable{e}}{\X}\ \mid\ \BlockLab{\metavariable{dvs}\ \Dec{\Type{\terminale{a}}{\metavariable{C}}}{\metavariable{x}}{\metavariable{v}}\ \metavariable{ds} }{\metavariable{e}}{\X}
\end{array}
$
\end{center}
}
\end{definition}
\begin{lemma}{\rm (Decomposition)}\label{lemma:decomposition}
If
$\metavariable{e}$ is not a value, then there are
${\cal{E}}$ and $\rho$ such that $\congruence{\metavariable{e}}{\Ctx{\rho}}$.
\end{lemma}
\begin{proof}
The proof is in~\ref{app:proofs}.
\end{proof}
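For instance, assuming a class $\metavariable{C}$ with a single field $\metavariable{f}$ of type $\metavariable{C}$, the expression $\Block{\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\metavariable{x}}}}{\FieldAccess{\metavariable{x}}{\metavariable{f}}}$ is not a value, and it decomposes as $\Ctx{\rho}$ with ${\cal{E}}=\Block{\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\metavariable{x}}}}{[\ ]}$ and $\rho=\FieldAccess{\metavariable{x}}{\metavariable{f}}$, so that only rule \rn{field-access} applies.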
{We write $\reduce{\metavariable{e}}{}$ for $\reduce{\metavariable{e}}{\metavariable{e}'}$ for some $\metavariable{e}'$,
$\TypeCheckGround{\metavariable{e}}{\metavariable{C}}{{\cal S}}$ for
$\TypeCheck{\emptyset}{\metavariable{e}}{\metavariable{C}}{{\cal S}}$, and $\IsWellTyped{\metavariable{e}}$ for
$\TypeCheckGround{\metavariable{e}}{\metavariable{C}}{\epsilon}$ for some $\metavariable{C}$ (note that an expression
with no free variables has the identity as sharing effects).}
\begin{theorem}{\rm (Progress)}\label{theo:progress}
If $\IsWellTyped{\metavariable{e}}$, and $\metavariable{e}$ is not a value, then
$\reduce{\metavariable{e}}{}$.
\end{theorem}
\begin{proof}
By \refToLemma{decomposition}, if $\metavariable{e}$ is not a value, then
$\congruence{\metavariable{e}}{\Ctx{\rho}}$. {By rule \rn{congr}, it is enough to show the
thesis for $\Ctx{\rho}$. For all $\rho$, except field access and field
assignment, we have that the corresponding reduction rule is applicable, since
either there are no side conditions (cases \rn{alias-elim} and
\rn{affine-elim}), or the side conditions can be easily shown to hold (case
\rn{invk}).}
In the proof for field access and field assignment, we use the following auxiliary
notation. Given an evaluation context ${\cal{E}}$, the context $\HoleCtx{{\cal{E}}}$,
the outermost block of the evaluation context, is defined by
\begin{itemize}
\item $\HoleCtx{{\cal{E}}}=\Block{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{y}}{[\ ]}\ \metavariable{ds}}{\metavariable{e}}$ if ${\cal{E}}=\Block{\metavariable{dvs}\ \Dec{\metavariable{T}}{\metavariable{y}}{{\cal E}'}\ \metavariable{ds}}{\metavariable{e}}$ and
\item $\HoleCtx{{\cal{E}}}=\Block{\metavariable{dvs}}{[\ ]}$ if ${\cal{E}}=\Block{\metavariable{dvs}}{{\cal E}'}$.
\end{itemize}
If \underline{$\rho$ is $\FieldAccess{\metavariable{x}}{\metavariable{f}_i}$}, from
\refToLemma{context}.1, rule \rn{t-field-access}, and rule \rn{t-var} of
\refToFigure{typing}, we have that
$\TypeCheck{\Gamma_{{\cal{E}}}}{{\FieldAccess{\metavariable{x}}{\metavariable{f}_i}}}{\metavariable{C}_i}{\{\metavariable{x},\terminale{res}\}}$ where
$\fields{\metavariable{C}}=\Field{\metavariable{C}_1}{\metavariable{f}_1}\ldots\Field{\metavariable{C}_n}{\metavariable{f}_n}$. So, we have that $\extractDec{{\cal{E}}}{\metavariable{x}}=\Dec{\metavariable{C}}{\metavariable{x}}{\ConstrCall{\metavariable{C}}{\metavariable{x}_1,\ldots,\metavariable{x}_n}}$,
$\decCtx{{\cal{E}}}{\metavariable{x}}$ is defined, and, for some ${\cal{E}}'$,
${\cal{E}}=\DecCtx{{\cal{E}}}{\metavariable{x}}{{\cal E}'}$. If $\metavariable{x}_i\not\in\HB{{\cal E}'}$, then rule
\rn{field-access} is applicable. \\
Otherwise, since $\metavariable{x}_i\in\HB{{\cal E}'}$, $\decCtx{{\cal E}'}{\metavariable{x}_i}$ is
defined, and
$\CtxP{\FieldAccess{\metavariable{x}}{\metavariable{f}_i}}={\cal{E}}_1[\HoleCtx{{\decCtx{{\cal{E}}'}{\metavariable{x}_i}}}[{\cal{E}}_2[\FieldAccess{\metavariable{x}}{\metavariable{f}_i}]]]$
for some ${\cal{E}}_1$ and ${\cal{E}}_2$ such that
${\HoleCtx{{\decCtx{{\cal{E}}'}{\metavariable{x}_i}}}[{\cal{E}}_2[\FieldAccess{\metavariable{x}}{\metavariable{f}_i}]]}=\BlockLab{\metavariable{dvs}\,\Dec{\metavariable{C}_i}{\metavariable{x}_i}{\metavariable{v}}\,\metavariable{ds}}{\metavariable{e}}{\X}$.
Using rule \rn{alpha} of \refToFigure{congruence} we have that
\begin{quote}
$\congruence{\BlockLab{\metavariable{dvs}\,\Dec{\metavariable{C}_i}{\metavariable{x}_i}{\metavariable{v}}\,\metavariable{ds}}{\metavariable{e}}{\X}}{\BlockLab{\Subst{\metavariable{dvs}}{\metavariable{y}}{\metavariable{x}_i}\ \Dec{\metavariable{C}_i}{\metavariable{y}}{\Subst{\metavariable{v}}{\metavariable{y}}{\metavariable{x}_i}}\ \Subst{\metavariable{ds}}{\metavariable{y}}{\metavariable{x}_i}}{\Subst{\metavariable{e}}{\metavariable{y}}{\metavariable{x}_i}}{{\Subst{\X}{\metavariable{y}}{\metavariable{x}_i}}}}$
\end{quote}
where $\metavariable{y}$ can be chosen such that $\metavariable{y}\not\in\HB{{\cal{E}}_2}$. Therefore
$\congruence{\DecCtx{{\cal{E}}}{\metavariable{x}}{\CtxP{\FieldAccess{\metavariable{x}}{\metavariable{f}_i}}}}{\DecCtx{{\cal{E}}}{\metavariable{x}}{{\cal{E}}_3[\FieldAccess{\metavariable{x}}{\metavariable{f}_i}]}}$
where $\metavariable{x}_i\not\in\HB{{\cal{E}}_3}$. So
$\reduce{\DecCtx{{\cal{E}}}{\metavariable{x}}{{\cal{E}}_3[{\FieldAccess{\metavariable{x}}{\metavariable{f}_i}}]}}{\metavariable{e}_2}$ by applying rule \rn{field-access}.
\bigskip
If \underline{$\rho$ is $\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}$}, from
\refToLemma{context}.1, rule \rn{t-field-assign}, and rule \rn{t-var} of
\refToFigure{typing}, we have that
$\TypeCheck{\Gamma_{{\cal{E}}}}{{\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}}}{\metavariable{C}_i}{\{\metavariable{x},\metavariable{y},\terminale{res}\}}$
where $\fields{\metavariable{C}}=\Field{\metavariable{C}_1}{\metavariable{f}_1}\ldots\Field{\metavariable{C}_n}{\metavariable{f}_n}$. So, we have
that $\decCtx{{\cal{E}}}{\metavariable{x}}$ is defined, and ${\cal{E}}=\DecCtx{{\cal{E}}}{\metavariable{x}}{{\cal E}'}$ for
some ${\cal E}'$. Therefore, for some ${\cal{E}}_1'$,
${\cal{E}}={\cal{E}}_1'[\HoleCtx{\decCtx{{\cal{E}}}{\metavariable{x}}}[{{\cal E}'}]]$. If
$\metavariable{y}\not\in\HB{{\cal E}'}$, then rule \rn{field-assign} is applicable. \\
Otherwise, since $\metavariable{y}\in\HB{{\cal E}'}$, $\decCtx{{\cal E}'}{\metavariable{y}}$ is defined, and
$\CtxP{\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}}={\cal{E}}_1[\HoleCtx{{\decCtx{{\cal{E}}'}{\metavariable{y}}}}[{\cal{E}}_2[\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}]]]$
for some ${\cal{E}}_1$ and ${\cal{E}}_2$ such that
$\HoleCtx{{\decCtx{{\cal{E}}'}{\metavariable{y}}}}[{\cal{E}}_2[\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}]]=\BlockLab{\metavariable{dvs}\,\Dec{\metavariable{C}_i}{\metavariable{y}}{\metavariable{v}}\,\metavariable{ds}}{\metavariable{e}}{}$
where $\metavariable{dvs}=\Reduct{(\metavariable{dvs}\, \metavariable{ds})}{\FV{\metavariable{v}}}$ are all the declarations connected to the free variables of $\metavariable{v}$, hence to be extruded together with the declaration of $\metavariable{y}$.\\
By induction on the
number $n>0$ of blocks from which we have to extrude the declaration of $\metavariable{y}$.
Let $\metavariable{dvs}_1=\metavariable{dvs}\,\Dec{\metavariable{C}_i}{\metavariable{y}}{\metavariable{v}}$.
If $n>1$, then for some ${\cal{E}}''\neq[\ ]$, either
\begin{enumerate}[(a)]
\item $\HoleCtx{\decCtx{{\cal{E}}}{\metavariable{x}}}[{{\cal E}'[\rho]}]=\BlockLab{\metavariable{dvs}'\ \Dec{\metavariable{C}}{\metavariable{x}}{\metavariable{v}'}}{{\cal{E}}''[\BlockLab{\metavariable{dvs}_1\,\metavariable{ds}}{\metavariable{e}}{}]}{}$ or
\item
$\HoleCtx{\decCtx{{\cal{E}}}{\metavariable{x}}}[{{\cal E}'[\rho]}]=\BlockLab{\metavariable{dvs}'\ \Dec{\metavariable{C}}{\metavariable{x}}{\metavariable{v}'}\ \Dec{\metavariable{T}}{\metavariable{z}}{{\cal{E}}''[\BlockLab{\metavariable{dvs}_1\,\metavariable{ds}}{\metavariable{e}}{\X}]}\ \metavariable{ds}'}{\metavariable{e}'}{}$,
\end{enumerate}
For (a), by induction hypothesis we have that
\begin{quote}
$\congruence{\BlockLab{\metavariable{dvs}'\ \Dec{\metavariable{C}}{\metavariable{x}}{\metavariable{v}'}}{{\cal{E}}''[\BlockLab{\metavariable{dvs}_1\,\metavariable{ds}}{\metavariable{e}}{}]}{}}{\BlockLab{\metavariable{dvs}'\ \Dec{\metavariable{C}}{\metavariable{x}}{\metavariable{v}'}}{\BlockLab{\metavariable{dvs}'_1\,\metavariable{ds}'}{\metavariable{e}'}{}}{}}$
\end{quote}
for some $\metavariable{ds}'$, $\metavariable{e}'$, and $\metavariable{dvs}'_1$ containing $\Dec{\metavariable{C}_i}{\metavariable{y}}{\metavariable{v}}$, which are the declarations that have been extruded.
Let $\metavariable{ds}'=\metavariable{dvs}_2\,\metavariable{ds}_2$ where $\metavariable{ds}_2$ are non-evaluated declarations, and let $\metavariable{ds}_2=\metavariable{dvs}'_2\,\metavariable{ds}'_2 $
where $\metavariable{dvs}'_2=\Reduct{\metavariable{dvs}_2}{\FV{\metavariable{dvs}'_1}}$.
Since we cannot have forward references to unevaluated variables, $\FV{\metavariable{dvs}'_2\,\metavariable{dvs}_1'}\cap\dom{\metavariable{ds}'_2 }=\emptyset$.
Therefore, applying rule \rn{body} of \refToFigure{congruence}, we have that
\begin{quote}
$\BlockLab{\metavariable{dvs}'\ \Dec{\metavariable{C}}{\metavariable{x}}{\metavariable{v}'}}{\BlockLab{\metavariable{dvs}'_1\,\metavariable{ds}'}{\metavariable{e}'}{}}{}\cong\BlockLab{\metavariable{dvs}'\ \Dec{\metavariable{C}}{\metavariable{x}}{\metavariable{v}'}\ \metavariable{dvs}'_1\ \metavariable{dvs}'_2}{\BlockLab{\metavariable{ds}'_2}{\metavariable{e}'}{}}{}$.
\end{quote}
For (b), by induction hypothesis we have that
\begin{quote}
\begin{tabular}{l}
$\BlockLab{\metavariable{dvs}'\ \Dec{\metavariable{C}}{\metavariable{x}}{\metavariable{v}'}\ \Dec{\metavariable{T}}{\metavariable{z}}{{\cal{E}}''[\BlockLab{\metavariable{dvs}_1\,\metavariable{ds}}{\metavariable{e}}{\X}]}\ \metavariable{ds}'}{\metavariable{e}'}{}\cong$\\
\hskip 1.5em$\BlockLab{\metavariable{dvs}'\ \Dec{\metavariable{C}}{\metavariable{x}}{\metavariable{v}'}\ \Dec{\metavariable{T}}{\metavariable{z}}{\BlockLab{\metavariable{dvs}'_1\,\metavariable{ds}'}{\metavariable{e}'}{\Y}}\ \metavariable{ds}''}{\metavariable{e}''}{}$
\end{tabular}
\end{quote}
for some $\metavariable{ds}'$, $\metavariable{ds}''$, $\metavariable{e}'$, $\metavariable{e}''$, $\Y$ and $\metavariable{dvs}'_1$ containing $\Dec{\metavariable{C}_i}{\metavariable{y}}{\metavariable{v}}$, which are the declarations that have been extruded.
Let $\metavariable{ds}'=\metavariable{dvs}_2\,\metavariable{ds}_2$ where $\metavariable{ds}_2$ are non-evaluated declarations, and let $\metavariable{ds}_2=\metavariable{dvs}'_2\,\metavariable{ds}'_2 $
where $\metavariable{dvs}'_2=\Reduct{\metavariable{dvs}_2}{\FV{{\metavariable{dvs}'_1}}}$.
Since we cannot have forward references to unevaluated variables, $\FV{\metavariable{dvs}'_2\,\metavariable{dvs}_1'}\cap\dom{\metavariable{ds}'_2 }=\emptyset$. \\
From the fact that the term is well typed, we have that $\Y=\Closure{\terminale{res}}{{\cal S}}\cap(\dom{\metavariable{dvs}'_1}\cup\dom{\metavariable{ds}'})$
for some ${\cal S}$ such that $(\dom{\metavariable{dvs}'_2}\cup\dom{\metavariable{dvs}'_1})\subseteq\Closure{\metavariable{y}}{{\cal S}}$ and from \refToLemma{monotoneSharing}
and $\TypeCheck{\Gamma_{{\cal{E}}}}{{\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}}}{\metavariable{C}_i}{\{\metavariable{x},\metavariable{y},\terminale{res}\}}$ also $\metavariable{x}\in\Closure{\metavariable{y}}{{\cal S}}$.
Moreover, letting $\Z=\dom{\metavariable{dvs}'_1}\cup\dom{\metavariable{ds}'}$, the relation $\Remove{{\cal S}}{\Z}$ is the sharing relation associated with the inner block.\\
In order to apply congruence rule \rn{dec} we have to prove that if $\metavariable{T}=\Type{\mu}{\metavariable{D}}$ and $\mu=\terminale{a}$, then $(\dom{\metavariable{dvs}'_2}\cup\dom{\metavariable{dvs}_1'})\cap\Y=\emptyset$. \\
If $\metavariable{y}\not\in\Closure{\terminale{res}}{{\cal S}}$, then $\Closure{\terminale{res}}{{\cal S}}\cap(\dom{\metavariable{dvs}'_2}\cup\dom{\metavariable{dvs}_1'})=\emptyset$.\\
If $\metavariable{y}\in\Closure{\terminale{res}}{{\cal S}}$, then, since $\metavariable{x}\in\Closure{\metavariable{y}}{{\cal S}}$ we have that $\metavariable{x}\in\Closure{\terminale{res}}{{\cal S}}$.
Therefore $\Closure{\terminale{res}}{{\cal S}}\setminus\Z\not=\emptyset$ and $\mu$ cannot be $\terminale{a}$. \\
Applying rule \rn{dec} of
\refToFigure{congruence} we get
\begin{quote}
\begin{tabular}{l}
$\BlockLab{\metavariable{dvs}'\ \Dec{\EZ{\metavariable{C}}}{\metavariable{x}}{\metavariable{v}'}\ \Dec{\metavariable{T}}{\metavariable{z}}{\BlockLab{\metavariable{dvs}'_1\,\metavariable{ds}'}{\metavariable{e}'}{\Y}}\ \metavariable{ds}''}{\metavariable{e}''}{}\cong$\\
\hskip 1.5em$\BlockLab{\metavariable{dvs}'\ \Dec{\EZ{\metavariable{C}}}{\metavariable{x}}{\metavariable{v}'}\ \metavariable{dvs}'_1\ \metavariable{dvs}'_2\ \Dec{\metavariable{T}}{\metavariable{z}}{\BlockLab{\metavariable{ds}'_2}{\metavariable{e}'}{\Y'}}\ \metavariable{ds}''}{\metavariable{e}''}{}$.
\end{tabular}
\end{quote}
Therefore
$\congruence{\DecCtx{{\cal{E}}}{\metavariable{x}}{\CtxP{\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}}}}{\DecCtx{{\cal{E}}}{\metavariable{x}}{{\cal{E}}_3[\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}]}}$
for some ${\cal{E}}_3$ such that $\metavariable{y}\not\in\HB{{\cal{E}}_3}$. So
$\reduce{{\cal{E}}_1'[\HoleCtx{\decCtx{{\cal{E}}}{\metavariable{x}}}[{{\cal{E}}_3[{\FieldAssign{\metavariable{x}}{\metavariable{f}_i}{\metavariable{y}}}]}]]}{\metavariable{e}_2}$
by applying rule \rn{field-assign}.
\end{proof}
\section{Type system}\label{sect:types}
In this section we introduce the {type and effect} system for the language. We
use $\X,\Y$ to range over sets of variables.
A {\em sharing relation ${\cal S}$} on a set of variables $X$ is an
equivalence relation on $X$. As usual $\Closure{\metavariable{x}}{{\cal S}}$ denotes the
{\em equivalence class of $\metavariable{x}$ in ${\cal S}$}. We will call the
elements $\Pair{\metavariable{x}}{\metavariable{y}}$ of a sharing relation \emph{connections}, and say
that $\metavariable{x}$ and $\metavariable{y}$ are \emph{connected}. The intuitive meaning is that, if
$\metavariable{x}$ and $\metavariable{y}$ are connected, then their reachable graphs in the store are
possibly shared (that is, not disjoint), hence a modification of the
reachable graph of $\metavariable{x}$ could affect $\metavariable{y}$ as well, or conversely.
We use the following notations on sharing relations; a small executable sketch of these operations is given after the list:
\begin{itemize}
\item A sequence
of subsets of $X$, say, $\X_1\ldots\X_n$, represents the smallest
equivalence relation on $X$ containing the connections $\Pair{\metavariable{x}}{\metavariable{y}}$, for all $\metavariable{x},\metavariable{y}$ belonging to the same $\X_i$. So, $\epsilon$
represents the identity relation on any set of variables. \EZ{Note that this representation is deliberately ambiguous as to the
domain of the defined equivalence: any common superset of the $\X_i$
will do.}
\item $\Sum{{\cal S}_1}{{\cal S}_2}$ is the
smallest equivalence relation containing ${\cal S}_1$ and ${\cal S}_2$.
It is easy to show that sum is commutative and associative. \PG{With $\Sum{{\cal S}}{\X}$ we denote the
sum of ${\cal S}$ with the sharing relation containing the connections $\Pair{\metavariable{x}}{\metavariable{y}}$, for all $\metavariable{x},\metavariable{y}\in\X$.}
\item $\Remove{{\cal S}}{\X}$ is the sharing relation obtained by ``removing'' $\X$ from
${\cal S}$, that is, the smallest equivalence relation containing the
connections $\Pair{\metavariable{x}}{\metavariable{y}}$, for all $\Pair{\metavariable{x}}{\metavariable{y}}\in{\cal S}$ such
that $\metavariable{x},\metavariable{y}\not\in\X$. $\Remove{{\cal S}}{\metavariable{y}}$ stands for
$\Remove{{\cal S}}{\{\metavariable{y}\}}$. It is easy to see that
$\Remove{{\cal S}}{(\X\cup\Y)}=\Remove{(\Remove{{\cal S}}{\X})}{\Y}$.
\item \EZ{$\SubstEqRel{{\cal S}}{\metavariable{y}}{\metavariable{x}}$ is the sharing relation obtained by ``replacing'' $\metavariable{x}$ by $\metavariable{y}$ in ${\cal S}$, that is, the smallest equivalence relation containing the connections:\\
$\Pair{\metavariable{z}}{\metavariable{z}'}$, for all $\Pair{\metavariable{z}}{\metavariable{z}'}\in{\cal S}$, $\metavariable{z}\neq\metavariable{x}, \metavariable{z}'\neq\metavariable{x}$\\
$\Pair{\metavariable{y}}{\metavariable{z}}$, for all $\Pair{\metavariable{x}}{\metavariable{z}}\in{\cal S}$.}
\item {\em ${\cal S}_1$ has less (or equal) sharing effects
than ${\cal S}_2$}, dubbed $\Finer{{\cal S}_1}{{\cal S}_2}$, if, for all
$\metavariable{x}$,
$\Closure{\metavariable{x}}{{\cal S}_1}\subseteq\Closure{\metavariable{x}}{{\cal S}_2}$.
\end{itemize}
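To make these operations concrete, the following small Python sketch (an illustration only, not part of the formal development) represents a sharing relation by the set of its non-trivial equivalence classes, so that $\epsilon$ is the empty set, and implements $\Closure{\metavariable{x}}{{\cal S}}$, $\Sum{{\cal S}_1}{{\cal S}_2}$, $\Remove{{\cal S}}{\X}$ and $\SubstEqRel{{\cal S}}{\metavariable{y}}{\metavariable{x}}$ directly from the definitions above; variables are plain strings.
\begin{verbatim}
# A sharing relation is the set of its non-trivial equivalence
# classes (frozensets); singleton classes are left implicit, so the
# empty set plays the role of the identity relation (epsilon).

def closure(S, x):
    # the equivalence class [x]_S
    for block in S:
        if x in block:
            return block
    return frozenset({x})            # x is a singleton class

def ssum(S1, S2):
    # S1 + S2: smallest equivalence relation containing both,
    # obtained by repeatedly merging overlapping classes
    blocks = [set(b) for b in S1 | S2]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return {frozenset(b) for b in blocks if len(b) > 1}

def remove(S, X):
    # S \ X: drop the variables in X from every class
    return {frozenset(b - set(X)) for b in S if len(b - set(X)) > 1}

def subst(S, y, x):
    # S[y/x]: replace x by y, then re-close (y may merge classes)
    renamed = {frozenset({y if z == x else z for z in b}) for b in S}
    return ssum(renamed, set())
\end{verbatim}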
The following proposition asserts some properties of sharing relations.
\begin{proposition}\label{prop:lessSrRel}\
\begin{enumerate}
\item \label{p1} Let $\metavariable{x}\neq\metavariable{y}$. Then $\Pair{\metavariable{x}}{\metavariable{y}}\in\sum\limits_{i=1}^{n}{\cal S}_i$ if and only if
there are sequences $i_1\dots i_{k-1}$ ($1\leq i_h\leq n$ for all $h$) and $\metavariable{z}_1\dots\metavariable{z}_k$ ($k> 1$) such that $\metavariable{x}=\metavariable{z}_1$ and $\metavariable{y}=\metavariable{z}_k$
and $\Pair{\metavariable{z}_j}{\metavariable{z}_{j+1}}\in{\cal S}_{i_{j}}$ and $i_j\neq i_{j+1}$ and $\metavariable{z}_j\neq \metavariable{z}_{j+1}$ for $1\leq j\leq (k-1)$.
\item \label{p2} $\Finer{{\cal S}_1}{{\cal S}_2}$ implies
$\Finer{\Sum{{\cal S}}{{\cal S}_1}}{\Sum{{\cal S}}{{\cal S}_2}}$ for all ${\cal S}$.
\item \label{p3} $\Finer{{\cal S}_1}{{\cal S}_2}$ implies $\Finer{\Remove{{\cal S}_1}{\X}}{\Remove{{\cal S}_2}{\X}}$ for all $\X$, and $\Finer{\SubstEqRel{{\cal S}_1}{\metavariable{y}}{\metavariable{x}}}{\SubstEqRel{{\cal S}_2}{\metavariable{y}}{\metavariable{x}}}$ for all $\metavariable{x}$, $\metavariable{y}$.
\item If \label{p4}$\Remove{{\cal S}_1}{\X}={{\cal S}_1}$, then
$\Remove{(\Sum{{\cal S}_1}{{\cal S}_2})}{\X}=\Sum{\Remove{{\cal S}_1}{\X}}{\Remove{{\cal S}_2}{\X}}$.
\item If \label{p5}$\metavariable{y}\in\Closure{\metavariable{x}}{{\cal S}}$, then ${\cal S}[\metavariable{y}/\metavariable{x}]=\Remove{{\cal S}}{\metavariable{x}}$.
\end{enumerate}
Since ${\cal S}+\epsilon={\cal S}$ and $\Finer{\epsilon}{{\cal S}}$
for all ${\cal S}$, from 2. we have that
$\Finer{{\cal S}}{\Sum{{\cal S}}{{\cal S}'}}$ for all ${\cal S}$
and ${\cal S}'$.
\end{proposition}
\begin{proof}\
\begin{enumerate}
\item From the fact that $\sum\limits_{i=1}^{n}{\cal S}_i$ is the transitive closure of
$\bigcup_{1\leq i\leq n}\{\Pair{\metavariable{x}}{\metavariable{y}}\ |\ \Pair{\metavariable{x}}{\metavariable{y}}\in{\cal S}_i\}$.
\item From 1. and the fact that for all $\metavariable{z}$ and $\metavariable{z}'$, if $\Pair{\metavariable{z}}{\metavariable{z}'}\in{{\cal S}_1}$ then $\Pair{\metavariable{z}}{\metavariable{z}'}\in{{\cal S}_2}$.
\item Let $\Pair{\metavariable{z}}{\metavariable{z}'}\in\Remove{{\cal S}_1}{\X}$ with $\metavariable{z}\neq\metavariable{z}'$. Then $\Pair{\metavariable{z}}{\metavariable{z}'}\in{{\cal S}_1}$ and $\metavariable{z},\metavariable{z}'\not\in\X$. From
$\Finer{{\cal S}_1}{{\cal S}_2}$, then $\Pair{\metavariable{z}}{\metavariable{z}'}\in{{\cal S}_2}$ and so also $\Pair{\metavariable{z}}{\metavariable{z}'}\in\Remove{{\cal S}_2}{\X}$.
\\
\PG{Let $\Pair{\metavariable{z}}{\metavariable{z}'}$ be such that $\Pair{\metavariable{z}}{\metavariable{z}'}\in{\SubstEqRel{{\cal S}_1}{\metavariable{y}}{\metavariable{x}}}$ and $\metavariable{z}\neq\metavariable{z}'$.
Then $\metavariable{z}\neq\metavariable{x}$ and $\metavariable{z}'\neq\metavariable{x}$. If $\Pair{\metavariable{z}}{\metavariable{z}'}\in{{{\cal S}_1}}$, then
$\Pair{\metavariable{z}}{\metavariable{z}'}\in{{{\cal S}_2}}$ and so also $\Pair{\metavariable{z}}{\metavariable{z}'}\in{\SubstEqRel{{\cal S}_2}{\metavariable{y}}{\metavariable{x}}}$.
If $\Pair{\metavariable{z}}{\metavariable{z}'}\not\in{{{\cal S}_1}}$, then there are pairs $\Pair{\metavariable{z}}{\metavariable{x}}$ and $\Pair{\metavariable{z}'}{\metavariable{y}}$
such that $\Pair{\metavariable{z}}{\metavariable{x}}\in{{{\cal S}_1}}$ and $\Pair{\metavariable{y}}{\metavariable{z}'}\in{{{\cal S}_1}}$. Therefore
$\Pair{\metavariable{z}}{\metavariable{x}}\in{{{\cal S}_2}}$ and $\Pair{\metavariable{y}}{\metavariable{z}'}\in{{{\cal S}_2}}$ and so
$\Pair{\metavariable{z}}{\metavariable{z}'}\in{\SubstEqRel{{\cal S}_2}{\metavariable{y}}{\metavariable{x}}}$.}
\item
From 2. and 3. we have that $\Finer{\Remove{{\cal S}_1}{\X}}{\Remove{(\Sum{{\cal S}_1}{{\cal S}_2})}{\X}}$ and $\Finer{\Remove{{\cal S}_2}{\X}}{\Remove{(\Sum{{\cal S}_1}{{\cal S}_2})}{\X}}$. By definition of $+$ we
derive
$\Finer{\Sum{\Remove{{\cal S}_1}{\X}}{\Remove{{\cal S}_2}{\X}}}{\Remove{(\Sum{{\cal S}_1}{{\cal S}_2})}{\X}}$.\\
We now prove that $\Finer{\Remove{(\Sum{{\cal S}_1}{{\cal S}_2})}{\X}}{\Sum{\Remove{{\cal S}_1}{\X}}{\Remove{{\cal S}_2}{\X}}}$.
Let $\Pair{\metavariable{x}}{\metavariable{y}}\in\Remove{(\Sum{{\cal S}_1}{{\cal S}_2})}{\X}$ with $\metavariable{x}\neq\metavariable{y}$. Then $\Pair{\metavariable{x}}{\metavariable{y}}\in\Sum{{\cal S}_1}{{\cal S}_2}$
and $\metavariable{x},\metavariable{y}\not\in\X$.
By 1., there are sequences $i_1\dots i_{k-1}$ and $\metavariable{z}_1\dots\metavariable{z}_k$ ($k> 1$) such that $\metavariable{x}=\metavariable{z}_1$ and $\metavariable{y}=\metavariable{z}_k$
and $\Pair{\metavariable{z}_j}{\metavariable{z}_{j+1}}\in{\cal S}_{i_{j}}$ and $i_j\neq i_{j+1}$ and $\metavariable{z}_j\neq \metavariable{z}_{j+1}$ for $1\leq j\leq (k-1)$.
The fact $i_j\neq i_{j+1}$ implies that the sequence $i_1\dots i_{k-1}$ alternates between $1$ and $2$. So for
any $j$, $2\leq j\leq (k-1)$, $\Pair{\metavariable{z}_{j-1}}{\metavariable{z}_{j}}\in{\cal S}_{1}$ or
$\Pair{\metavariable{z}_{j}}{\metavariable{z}_{j+1}}\in{\cal S}_{1}$. Since ${\cal S}_{1}=\Remove{{\cal S}_{1}}{\X}$, in either case
$\metavariable{z}_j \not\in\X$. So no element of $\metavariable{z}_1\dots\metavariable{z}_k$ is in $\X$, and thus, for any $j$, $1\leq j\leq (k-1)$,
$\Pair{\metavariable{z}_{j}}{\metavariable{z}_{j+1}}\in{\cal S}_{i_j}$ implies $\Pair{\metavariable{z}_{j}}{\metavariable{z}_{j+1}}\in\Remove{{\cal S}_{i_j}}{\X}$.
By 1. we have that $\Pair{\metavariable{x}}{\metavariable{y}}\in\Sum{\Remove{{\cal S}_1}{\X}}{\Remove{{\cal S}_2}{\X}}$.
\item Let $\Pair{\metavariable{z}}{\metavariable{z}'}\in{\Remove{{\cal S}}{\metavariable{x}}}$ with $\metavariable{z}\neq\metavariable{z}'$. Then $\Pair{\metavariable{z}}{\metavariable{z}'}\in{\cal S}$ with $\metavariable{z}\neq\metavariable{x}$ and $\metavariable{z}'\neq\metavariable{x}$, and so
$\Pair{\metavariable{z}}{\metavariable{z}'}\in{\cal S}[\metavariable{y}/\metavariable{x}]$. So $\Remove{{\cal S}}{\metavariable{x}}\subseteq{\cal S}[\metavariable{y}/\metavariable{x}]$. \\
To show ${\cal S}[\metavariable{y}/\metavariable{x}]\subseteq\Remove{{\cal S}}{\metavariable{x}}$, first observe that there cannot be $\Pair{\metavariable{z}}{\metavariable{z}'}\in{\cal S}[\metavariable{y}/\metavariable{x}]$ such that $\metavariable{z}\neq\metavariable{z}'$ and
either $\metavariable{z}=\metavariable{x}$ or $\metavariable{z}'=\metavariable{x}$.\\
Let $\Pair{\metavariable{z}}{\metavariable{z}'}\in{\cal S}[\metavariable{y}/\metavariable{x}]$ with $\metavariable{z}\neq\metavariable{z}'$. If $\metavariable{z}\neq\metavariable{y}$ and $\metavariable{z}'\neq\metavariable{y}$, then, by definition of ${\cal S}[\metavariable{y}/\metavariable{x}]$,
we get $\Pair{\metavariable{z}}{\metavariable{z}'}\in\Remove{{\cal S}}{\metavariable{x}}$. If $\metavariable{z}=\metavariable{y}$, there are
two cases: either $\Pair{\metavariable{y}}{\metavariable{z}'}\in{\cal S}$, or $\Pair{\metavariable{x}}{\metavariable{z}'}\in{\cal S}$. In the first case $\Pair{\metavariable{y}}{\metavariable{z}'}\in\Remove{{\cal S}}{\metavariable{x}}$.
In the second, from $\metavariable{y}\in\Closure{\metavariable{x}}{{\cal S}}$ we get that $\Pair{\metavariable{x}}{\metavariable{z}'}\in{\cal S}$ implies
$\Pair{\metavariable{y}}{\metavariable{z}'}\in{\cal S}$, and so also $\Pair{\metavariable{y}}{\metavariable{z}'}\in\Remove{{\cal S}}{\metavariable{x}}$. The case $\metavariable{z}'=\metavariable{y}$ is similar. Therefore ${\cal S}[\metavariable{y}/\metavariable{x}]\subseteq\Remove{{\cal S}}{\metavariable{x}}$.
\end{enumerate}
\end{proof}
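As a quick sanity check, item \ref{p4} can be exercised with the sharing-relation sketch above (again, an illustration only); note that the hypothesis $\Remove{{\cal S}_1}{\X}={\cal S}_1$ is what makes the equality hold.
\begin{verbatim}
# Item 4 on a small instance: S1 does not mention w, so S1\{w} = S1.
S1 = {frozenset({"u", "v"})}
S2 = {frozenset({"v", "w", "t"})}
X  = {"w"}
assert remove(ssum(S1, S2), X) == ssum(remove(S1, X), remove(S2, X))
# both sides equal {frozenset({"u", "v", "t"})}
\end{verbatim}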
The typing judgment has shape
\begin{center}
$\TypeCheckAnnotate{\Gamma}{\metavariable{e}}{\metavariable{T}}{{\cal S}}{\metavariable{e}'}$
\end{center}
where $\Gamma$ is a \emph{type environment}, that is, an assignment of types
to variables, written $\TypeDec{\metavariable{x}_1}{\metavariable{T}_1},\ldots,\TypeDec{\metavariable{x}_n}{\metavariable{T}_n}$, \EZ{$\metavariable{T}$ is a type\footnote{\EZ{Note that types of shape $\Type{\terminale{a}}{\metavariable{C}}$ only occur as declaration types and in $\Gamma$, whereas they are never assigned to expressions. However, we use the same metavariable for simplicity.}}}, ${\cal S}$ is a sharing relation, and $\metavariable{e}'$ is an \emph{annotated expression}.
\PG{The sharing relation ${\cal S}$ is defined on the variables in $\Gamma$ plus a distinguished variable $\terminale{res}$ for ``result''.}
The intuitive
meaning is that ${\cal S}$ represents the connections among the free
variables of $\metavariable{e}$ possibly introduced by the evaluation of the expression, and
the variables in $\Closure{\terminale{res}}{{\cal S}}$ are the ones that {will be}
possibly connected to the result of the expression.
We write $\IsCapsule{{\cal S}}$ if
$\Closure{\terminale{res}}{{\cal S}}=\{\terminale{res}\}$. If $\TypeCheckAnnotate{\Gamma}{\metavariable{e}}{\metavariable{T}}{{\cal S}}{\metavariable{e}'}$ with $\IsCapsule{{\cal S}}$, then the expression $\metavariable{e}$
denotes a \emph{capsule}, that is, reduces to an isolated portion of store\EZ{, as will be formally shown in \refToSection{results} (\refToTheorem{capsule})}. An
affine variable will never be connected to another, nor to $\terminale{res}$, since it is
initialized with a capsule and used only once. Analogously, a variable of a
primitive type will never be connected to another.
Moreover, during typechecking expressions are annotated. The syntax of {\em
annotated expressions} is given by:
\begin{center}
$
\begin{array}{lcl}
\metavariable{e}& ::= & \metavariable{x}\mid\FieldAccess{\metavariable{e}}{\metavariable{f}}\mid{\MethCall{{\metavariable{e}}}{\metavariable{m}}{{\metavariable{e}_1}, \ldots, {\metavariable{e}_n}}}\mid\FieldAssign{\metavariable{e}}{\metavariable{f}}{\metavariable{e}'}\mid\ConstrCall{\metavariable{C}}{\metavariable{es}}\mid\BlockLab{\metavariable{ds}}{\metavariable{e}}{\X}
\end{array}
$
\end{center}
\PG{where $\X\subseteq\dom{\metavariable{ds}}$.} We use the same metavariable of source expressions for simplicity. As we
can see, the only difference is that blocks are annotated by {a} set $\X$ of variables.
In an annotated block obtained as output of typechecking, $\X$ will be the local variables declared in the block
(possibly) connected with the result of the body, see rule \rn{t-block}. Such
annotations, as we will see in the next section, are used to define
the congruence relation among terms. \EZ{Notably, we can move local store from a block to
the directly enclosing block, or conversely, as happens with the rules for
\emph{scope extension} in the $\pi$-calculus \cite{Milner99}.
However, this is not allowed if such a block initializes an affine variable declaration and we would move outside
variables} possibly connected to the result of the block. Indeed, this would make the term ill-typed, as shown in the last
example of \refToSection{language}.
The class table is abstractly modeled by the following functions:
\begin{itemize}
\item $\fields{\metavariable{C}}$ gives, for each declared class $\metavariable{C}$, the sequence
$\Field{\metavariable{T}_1}{\metavariable{f}_1}\ldots\Field{\metavariable{T}_n}{\metavariable{f}_n}$ of its fields
declarations, with $\metavariable{T}_i$ either class name or primitive type.\footnote{That
is, the $\terminale{a}$ modifier, denoting a temporary reference, makes no sense for fields.}
\item $\method{\metavariable{C}}{\metavariable{m}}$ gives, for each method $\metavariable{m}$ declared in class $\metavariable{C}$, the tuple \\
$\Method{\ReturnTypeNew{\metavariable{T}}{{\cal S}}}{\mu}{\Param{\metavariable{T}_1}{\metavariable{x}_1}\ldots\Param{\metavariable{T}_n}{\metavariable{x}_n}}{\metavariable{e}}$
consisting of its return type paired with the resulting sharing relation, optional
$\terminale{a}$ modifier for $\terminale{this}$, parameters, and body.
\end{itemize}
We assume a well-typed class table, that is, method bodies are expected to be
well-typed with respect to method types. Formally, if \\
$\method{\metavariable{C}}{\metavariable{m}}=\Method{\ReturnTypeNew{\metavariable{T}}{{\cal S}}}{\mu}{\Param{\metavariable{T}_1}{\metavariable{x}_1}\ldots\Param{\metavariable{T}_n}{\metavariable{x}_n}}{\metavariable{e}}$,
then it should be
\begin{itemize}
\item $\TypeCheckAnnotate{\Gamma}{\metavariable{e}}{\metavariable{T}}{{\cal S}}{\metavariable{e}'}$, with
\item $\Gamma=\TypeDec{\terminale{this}}{\Type{\mu}{\metavariable{C}}},\TypeDec{\metavariable{x}_1}{\metavariable{T}_1},\ldots,\TypeDec{\metavariable{x}_n}{\metavariable{T}_n}$.
\end{itemize}
Note that the ${\cal S}$ effects in the return type of a method can be
inferred by typechecking the body for a non-recursive method. Recursion
could be handled by a global fixed-point inference to find effects across
methods. Alternatively, and also to support interfaces, (some) effect
annotations in method return types could be supplied by the programmer,
likely in a simpler form, e.g., using the \emph{capsule} modifier. In this
case, typechecking the body should check conformance to its declared
interface. Still, the fixed-point inference scheme would be useful in porting
over code-bases, and might help to identify how effective the type system is in
practice. We leave this matter to further work.
The typing rules are given in \refToFigure{typing}.
\begin{figure}[t]
\begin{small}
\begin{math}
\begin{array}{l}
\NamedRule{t-var}{}{\TypeCheckAnnotate{\Gamma}{\metavariable{x}}{\metavariable{C}}{\{\metavariable{x},\terminale{res}\}}{\metavariable{x}}}{
\Gamma(\metavariable{x})=\metavariable{C}
}\hskip 0.4em
\NamedRule{t-affine-var}{}{\TypeCheckAnnotate{\Gamma}{\metavariable{x}}{\metavariable{C}}{\epsilon}{\metavariable{x}}}{
\Gamma(\metavariable{x})=\metavariable{T}\\
\metavariable{T}=\Type{\terminale{a}}{\metavariable{C}}\mid\terminale{int}
}
\\[5ex]
\NamedRule{t-field-access}{\TypeCheckAnnotate{\Gamma}{\metavariable{e}}{\metavariable{C}}{{\cal S}}{\metavariable{e}'}}{\TypeCheckAnnotate{\Gamma}{\FieldAccess{\metavariable{e}}{\metavariable{f}_i}}{\metavariable{T}_i}{{\cal S}}{\FieldAccess{\metavariable{e}'}{\metavariable{f}_i}}}{
\fields{\metavariable{C}}=\Field{\metavariable{T}_1}{\metavariable{f}_1}\ldots\Field{\metavariable{T}_n}{\metavariable{f}_n}\\
i\in 1..n}
\\[4ex]
\NamedRule{t-field-assign}{
\TypeCheckAnnotate{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}_1}{\metavariable{e}'_1}
\hskip 1.5em
\TypeCheckAnnotate{\Gamma}{\metavariable{e}_2}{\metavariable{T}_i}{{\cal S}_2}{\metavariable{e}'_2}
}{
\TypeCheckAnnotate{\Gamma}{\FieldAssign{\metavariable{e}_1}{\metavariable{f}_i}{\metavariable{e}_2}
}{\MS{\metavariable{T}_i}}{\Sum{{\cal S}_1}{{\cal S}_2}}{\FieldAssign{\metavariable{e}'_1}{\metavariable{f}_i}{\metavariable{e}'_2}}
}
{\fields{\metavariable{C}}=\Field{\metavariable{T}_1}{\metavariable{f}_1}\ldots\Field{\metavariable{T}_n}{\metavariable{f}_n}\\
i\in 1..n
}
\\[4ex]
\NamedRule{t-new}{ \TypeCheckAnnotate{\Gamma}{\metavariable{e}_i}{\metavariable{T}_i}{{\cal S}_i}{\metavariable{e}'_i}
\hskip 1.5em
1{\leq}i{\leq}n}
{
\TypeCheckAnnotate{\Gamma}{\ConstrCall{\metavariable{C}}{\metavariable{e}_1,\ldots,\metavariable{e}_n}}
{\metavariable{C}}
{\sum\limits_{i=1}^{n}{\cal S}_i}
{\ConstrCall{\metavariable{C}}{\metavariable{e}'_1,\ldots,\metavariable{e}'_n}}
}
{
\fields{\metavariable{C}}{=}\Field{\metavariable{T}_1}{\metavariable{f}_1}\ldots\Field{\metavariable{T}_n}{\metavariable{f}_n}
}
\\[4ex]
\NamedRule{t-block}{
\begin{array}{l}
\TypeCheckAnnotate{\SubstFun{\Gamma}{\Gamma'}}{\metavariable{e}_i}{\metavariable{T}_i}{{\cal S}_i}{\metavariable{e}'_i}\hskip 0.4em 1{\leq}i{\leq}n\\
\TypeCheckAnnotate{\SubstFun{\Gamma}{\Gamma'}}{\metavariable{e}}{\metavariable{T}}{{\cal S}'}{\metavariable{e}'}
\end{array}
}
{
\begin{array}{l}
\TypeCheckAnnotate{\Gamma}{\Block{\DecP{\metavariable{T}_1}{\metavariable{x}_1}{\metavariable{e}_1}\ldots\DecP{\metavariable{T}_n}{\metavariable{x}_n}{\metavariable{e}_n}}{\metavariable{e}}}{\metavariable{T}}
{{\Remove{{\cal S}}{\dom{\Gamma'}}}}{}\\
\quad\quad\BlockLab{\DecP{\metavariable{T}_1}{\metavariable{x}_1}{\metavariable{e}'_1}\ldots\DecP{\metavariable{T}_n}{\metavariable{x}_n}{\metavariable{e}'_n}}{\metavariable{e}'}
{{\Closure{\terminale{res}}{{\cal S}}}\cap\dom{\Gamma'}}
\end{array}
}
{
\begin{array}{l}
\Gamma'=\TypeDec{\metavariable{x}_1}{\metavariable{T}_1},\ldots,\TypeDec{\metavariable{x}_n}{\metavariable{T}_n}\\
\forall {1{\leq}i{\leq}n}\ \ \metavariable{T}_i{=}\Type{\terminale{a}}{\metavariable{C}_i}\Longrightarrow\\
\ \ \ {\IsCapsule{{\cal S}_i}}\wedge \metavariable{x}_i\ \mbox{affine}\\
\EZ{{\cal S}'_i=\SubstEqRel{{\cal S}_i}{\metavariable{x}_i}{\terminale{res}}}\\
\EZ{{\cal S}=\Sum{\sum\limits_{i=1}^{n}{\cal S}'_i}{{\cal S}'}}
\end{array}
}
\\[9ex]
\NamedRule{{t-invk}}{
\begin{array}{l}
\TypeCheckAnnotate{\Gamma}{\metavariable{e}_0}{\metavariable{C}}{{\cal S}_0}{\metavariable{e}'_0}\\
\TypeCheckAnnotate{\Gamma}{\metavariable{e}_i}{\metavariable{T}_i}{{\cal S}_i}{\metavariable{e}'_i}\hskip 1.5em
0{\leq}i{\leq}n
\end{array}
}
{\begin{array}{l}
\TypeCheckAnnotate{\Gamma}{\MethCall{\metavariable{e}_0}{\metavariable{m}}{\metavariable{e}_1,\ldots,\metavariable{e}_n}}{\metavariable{T}}{{\cal S}}{}\\
\quad\quad\quad\quad\quad\quad{\MethCall{{\metavariable{e}'_0}}{\metavariable{m}}{{\metavariable{e}'_1},\ldots,{\metavariable{e}'_n}}}
\end{array}}
{\begin{array}{l}
\method{\metavariable{C}}{\metavariable{m}}{=}\Method{\ReturnTypeNew{\metavariable{T}}{{\cal S}'}}{\mu}{\Param{\metavariable{T}_1}{\metavariable{x}_1}\ldots\Param{\metavariable{T}_n}{\metavariable{x}_n}}{\metavariable{e}}\\
\mu=\terminale{a}\Longrightarrow\IsCapsule{{\cal S}_0}\\
\forall {1{\leq}i{\leq}n}\ \metavariable{T}_i{=}\Type{\terminale{a}}{\metavariable{C}_i}\Longrightarrow{\IsCapsule{{\cal S}_i}}\\
\EZ{{\cal S}'_0=\SubstEqRel{{\cal S}_0}{\terminale{this}}{\terminale{res}}}\hskip 1.5em\EZ{{\cal S}'_i=\SubstEqRel{{\cal S}_i}{\metavariable{x}_i}{\terminale{res}}}\\
{\cal S}=\Remove{(\Sum{\sum\limits_{i={0}}^{n}{\cal S}'_i}{{\cal S}'})}{\{\terminale{this},\metavariable{x}_1,\ldots,\metavariable{x}_n\}}
\end{array}}
\\[11ex]
\end{array}
\end{math}
\end{small}
\caption{Typing rules}\label{fig:typing}
\end{figure}
In rule \rn{t-var}, the evaluation of a variable (if neither affine nor of
a primitive type) connects the result of the expression with the variable
itself. In rule {\rn{t-affine-var}}, the evaluation of an affine variable does
not introduce any connection, so the resulting sharing relation is the identity
relation. Indeed, affine variables are temporary references and will be
substituted with capsules. The same happens for variables of primitive
types.\\
In rule \rn{t-field-access}, the connections introduced by a field access are
those introduced by the evaluation of the receiver.\\
In rule \rn{t-field-assign}, the connections introduced by a
field assignment are those introduced by the evaluation of the two expressions
(${\cal S}_1$ and ${\cal S}_2$). Since both ${\cal S}_1$ and
${\cal S}_2$ contain the variable $\terminale{res}$, the equivalence class of this
variable in the resulting sharing relation is, as expected, the (transitive
closure of the) union of the two equivalence classes. For instance, given the
assignment $\FieldAssign{\metavariable{e}}{\metavariable{f}}{\metavariable{e}'}$, if the evaluation of $\metavariable{e}$ connects $\metavariable{y}$
with $\metavariable{z}$ and $\metavariable{x}$ with its result, and the evaluation of $\metavariable{e}'$ connects $\metavariable{y}'$
with $\metavariable{z}'$ and $\metavariable{x}'$ with its result, then the evaluation of the field
assignment connects $\metavariable{y}$ with $\metavariable{z}$, $\metavariable{y}'$ with $\metavariable{z}'$, and {both $\metavariable{x}$ and $\metavariable{x}'$
with the result}. \\
In rule \rn{t-new}, the connections introduced by a constructor
invocation are those introduced by the evaluation of the arguments. As for
field assignment, the equivalence class of $\terminale{res}$ in the resulting sharing
relation is, as expected, the (transitive closure of the) union of the
equivalence classes of $\terminale{res}$ in the sharing relations of the arguments of
the constructor. \\
In rule \rn{t-block}, the initialization expressions and the body of the block
are typechecked in the current type {environment}, enriched by the association
to local variables of their declaration types. We denote by
$\SubstFun{\Gamma}{\Gamma'}$ the type environment which to a variable $\metavariable{x}$ assigns
$\Gamma'(\metavariable{x})$ if this is defined, and $\Gamma(\metavariable{x})$ otherwise. If a local variable is
affine, then its initialization expression is required to denote a capsule. Moreover, the variable can
occur at most once in its scope, as abbreviated by the side condition ``$\metavariable{x}_i$
affine''.\footnote{{In our case the affinity requirement can be simply
expressed as syntactic well-formedness condition, rather than by context rules,
as in \EZ{affine} type systems.}} The connections
introduced by a block are obtained modifying those introduced by the evaluation
of the initialization expressions {(${\cal S}_i, 1{\leq}i{\leq}n$)} plus
those introduced by the evaluation of the body {${\cal S}'$}. More
precisely\EZ{, for each declared variable, the connections of the result of the initialization expression are transformed in connections to the variable itself. Finally, we remove from the resulting sharing relation the local variables.}
The block is annotated with the subset of local variables which are in the
sharing relation ${\cal S}$ with the result of the block.\\
In rule \rn{t-invk}, the typing of $\MethCall{\metavariable{e}_0}{\metavariable{m}}{\metavariable{e}_1,\ldots,\metavariable{e}_n}$ is
similar to the typing of the block
$\Block{\Dec{\EZ{\Type{\mu}{\metavariable{C}}}}{\terminale{this}}{\metavariable{e}_0}\,\Dec{\metavariable{T}_1}{\metavariable{x}_1}{\metavariable{e}_1}\ldots\Dec{\metavariable{T}_n}{\metavariable{x}_n}{\metavariable{e}_n}}{\metavariable{e}}$.
For
instance, assume that method $\metavariable{m}$ has parameters $\metavariable{x}$ and $\metavariable{y}$, and the
evaluation of its body possibly connects $\metavariable{x}$ with $\terminale{this}$, and $\metavariable{y}$ with the
result, i.e., the sharing relation associated to the method is
${\cal S}'=\{\metavariable{x},\terminale{this}\}\{\metavariable{y},\terminale{res}\}$. Then, the evaluation of the method
call $\MethCall{\metavariable{z}}{\metavariable{m}}{\metavariable{x}',\metavariable{y}'}$, possibly connects $\metavariable{x}'$ with $\metavariable{z}$, and
$\metavariable{y}'$ with the result of the expression, i.e., has sharing effects
$\{\metavariable{x}',\metavariable{z}\}\{\metavariable{y}',\terminale{res}\}$.
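As a cross-check, this computation can be replayed with the sharing-relation sketch given after the definitions at the beginning of this section, writing \texttt{xp} and \texttt{yp} for $\metavariable{x}'$ and $\metavariable{y}'$ (an illustration only):
\begin{verbatim}
# Rule (t-invk) on the example: method effects {x,this}{y,res},
# call z.m(xp, yp); receiver and arguments are typed by (t-var).
Sm = {frozenset({"x", "this"}), frozenset({"y", "res"})}
S0 = subst({frozenset({"z", "res"})},  "this", "res")  # S0[this/res]
S1 = subst({frozenset({"xp", "res"})}, "x", "res")     # S1[x/res]
S2 = subst({frozenset({"yp", "res"})}, "y", "res")     # S2[y/res]
S  = remove(ssum(ssum(ssum(S0, S1), S2), Sm), {"this", "x", "y"})
assert S == {frozenset({"z", "xp"}), frozenset({"yp", "res"})}
\end{verbatim}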
Finally, note that primitive types are used in the standard way. For
instance, in the premise of rule \rn{t-new} the types of constructor
arguments could be primitive types, whereas in rule \rn{t-invk} the type
of the receiver could not.
\PG{The following proposition formalizes some properties of the \EZ{typing judgment}. Notably,
if two different variables are
in sharing relation, then they must have a reference type and cannot be affine. This is
true also for variables in sharing relation with the result of an expression.}
So affine
variables are always singletons in the sharing relation. In the following proposition we omit the annotations of terms,
which are irrelevant.
\begin{proposition}\label{prop:invTyping1}
Let $\TypeCheck{\Gamma}{\metavariable{e}}{\metavariable{D}}{{\cal S}}$. If $\Pair{\metavariable{x}}{\metavariable{y}}\in{\cal S}$ and $\metavariable{x}\neq\metavariable{y}$, then
\begin{itemize}
\item if $\metavariable{x}\neq\terminale{res}$ and $\metavariable{y}\neq\terminale{res}$, then $\Gamma(\metavariable{x})=\PG{\metavariable{C}}$ and $\Gamma(\metavariable{y})=\PG{\metavariable{C}'}$ (for some $\metavariable{C}$ and $\metavariable{C}'$) and $\EZ{\metavariable{x},\metavariable{y}\in}\FV{\metavariable{e}}$.
\item if $\metavariable{x}=\terminale{res}$ (resp.\ $\metavariable{y}=\terminale{res}$), then $\Gamma(\metavariable{y})={\metavariable{C}}$ (resp.\ $\Gamma(\metavariable{x})={\metavariable{C}}$) for some $\metavariable{C}$, and $\metavariable{y}\in\FV{\metavariable{e}}$ (resp.\ $\metavariable{x}\in\FV{\metavariable{e}}$).
\end{itemize}
\end{proposition}
\begin{proof}
The proof is by induction on the type derivation $\TypeCheck{\Gamma}{\metavariable{e}}{\EZ{\metavariable{D}}}{{\cal S}}$.
Consider the last \EZ{typing} rule used in the type derivation.\\
\underline{Rule \rn{T-Var}}. In this case $\metavariable{e}=\metavariable{x}$, $\Gamma(\metavariable{x})=\metavariable{D}$, and $\FV{\metavariable{e}}=\{\metavariable{x}\}$. The only non-trivial equivalence
class is $\{\metavariable{x},\terminale{res}\}$. Therefore the result holds. \\
\underline{Rule \rn{T-Affine-Var}}. In this case $\metavariable{e}=\metavariable{x}$ and ${\cal S}$ is the identity. Therefore there is no $\Pair{\metavariable{x}}{\metavariable{y}}\in{\cal S}$ such that $\metavariable{x}\neq\metavariable{y}$ and the result holds trivially. \\
\underline{Rule \rn{T-Field-Access}}. In this case $\metavariable{e}=\FieldAccess{\metavariable{e}_1}{\metavariable{f}}$ and the result derives by induction hypothesis on $\metavariable{e}_1$.\\
\underline{Rule \rn{T-Field-Assign}}. In this case $\metavariable{e}=\FieldAssign{\metavariable{e}_1}{\metavariable{f}_i}{\metavariable{e}_2}$ and ${\cal S}={\cal S}_1+{\cal S}_2$,
with $\TypeCheck{\Gamma}{\metavariable{e}_1}{\metavariable{C}}{{\cal S}_1}$ and $\TypeCheck{\Gamma}{\metavariable{e}_2}{\metavariable{T}_i}{{\cal S}_2}$. By induction hypothesis, we have that
if $\Pair{\metavariable{z}}{\metavariable{z}'}\in{\cal S}_i$ and $\metavariable{z}\neq\metavariable{z}'$
\begin{enumerate} [(1)]
\item if $\metavariable{z}\neq\terminale{res}$ and $\metavariable{z}'\neq\terminale{res}$, then $\Gamma(\metavariable{z})=\metavariable{C}$ and $\Gamma(\metavariable{z}')=\metavariable{C}'$ (for some $\metavariable{C}$ and $\metavariable{C}'$) and $\{\metavariable{z},\metavariable{z}'\}\subseteq\FV{\metavariable{e}_h}$
for some $h\in\{1,2\}$
\item if either $\metavariable{z}=\terminale{res}$ or $\metavariable{z}'=\terminale{res}$, then $\Gamma(\metavariable{z}')={\metavariable{C}}$ or $\Gamma(\metavariable{z})={\metavariable{C}}$ (for some $\metavariable{C}$) and $\metavariable{z}'\in\FV{\metavariable{e}_h}$
or $\metavariable{z}\in\FV{\metavariable{e}_h}$
for some $h\in\{1,2\}$
\end{enumerate}
If $\Pair{\metavariable{x}}{\metavariable{y}}\in{\cal S}$ and $\metavariable{x}\neq\metavariable{y}$, then, by \refToProp{lessSrRel}.\ref{p1}, there are sequences $i_1\dots i_{k-1}$ and $\metavariable{z}_1\dots\metavariable{z}_k$ ($k> 1$) such that $\metavariable{x}=\metavariable{z}_1$ and $\metavariable{y}=\metavariable{z}_k$
and $\Pair{\metavariable{z}_j}{\metavariable{z}_{j+1}}\in{\cal S}_{i_{j}}$ and $i_j\neq i_{j+1}$ and $\metavariable{z}_j\neq \metavariable{z}_{j+1}$ for $1\leq j\leq (k-1)$.
The fact $i_j\neq i_{j+1}$ implies that the sequence $i_1\dots i_{k-1}$ alternates between $1$ and $2$.
So, for
any $j$, $1\leq j\leq (k-1)$, either $\Pair{\metavariable{z}_j}{\metavariable{z}_{j+1}}\in{\cal S}_{1}$ or
$\Pair{\metavariable{z}_j}{\metavariable{z}_{j+1}}\in{\cal S}_{2}$.
In both cases, by inductive hypotheses (1) and (2) on ${\cal S}_1$ or ${\cal S}_2$
we have that for all $i$, $1\leq i\leq (k-1)$
\begin{enumerate} [(a)]\addtocounter{enumi}{2}
\item if $\metavariable{z}_i\neq\terminale{res}$ and $\metavariable{z}_{i+1}\neq\terminale{res}$, then $\Gamma(\metavariable{z}_i)=\metavariable{C}$ and $\Gamma(\metavariable{z}_{i+1})=\metavariable{C}'$ (for some $\metavariable{C}$ and $\metavariable{C}'$) and $\{\metavariable{z}_i,\metavariable{z}_{i+1}\}\subseteq\FV{\metavariable{e}_h}$ ($h=1$ or $h=2$)
\item if $\metavariable{z}_{i+1}=\terminale{res}$ or $\metavariable{z}_{i}=\terminale{res}$, then $\Gamma(\metavariable{z}_i)=\metavariable{C}$ or $\Gamma(\metavariable{z}_{i+1})=\metavariable{C}$ (for some $\metavariable{C}$) and $\metavariable{z}_i\in\FV{\metavariable{e}_h}$
or $\metavariable{z}_{i+1}\in\FV{\metavariable{e}_h}$ ($h=1$ or $h=2$)
\end{enumerate}
By transitivity of equality we have that, for all $i$, $1\leq i\leq k$,
if $\metavariable{z}_i\neq\terminale{res}$, then $\Gamma(\metavariable{z}_i)=\metavariable{C}$ and if there is $j$, $1\leq j\leq k$,
such that $\metavariable{z}_j=\terminale{res}$ also $\metavariable{C}=\metavariable{D}$. Moreover $\{\metavariable{z}_i\ |\ \metavariable{z}_i\neq\terminale{res}\ \ 1\leq i\leq k\}\subseteq\FV{\metavariable{e}_1}\cup\FV{\metavariable{e}_2}=\FV{\metavariable{e}}$.
Therefore the result holds.\\
\PG{\underline{Rule \rn{T-Block}}. In this case $\metavariable{e}=\Block{\DecP{\metavariable{T}_1}{\metavariable{x}_1}{\metavariable{e}_1}\ldots\DecP{\metavariable{T}_n}{\metavariable{x}_n}{\metavariable{e}_n}}{\metavariable{e}_0}$. \\
Let
$\Gamma'=\SubstFun{\Gamma}{\TypeDec{\metavariable{x}_1}{\metavariable{T}_1},\ldots,\TypeDec{\metavariable{x}_n}{\metavariable{T}_n}}$. We have that
\begin{enumerate} [(a)]
\item ${\cal S}=\Remove{({\sum\limits_{i=0}^{n}{\cal S}'_i})}{\X}$ where $\X=\{\metavariable{x}_1,\ldots,\metavariable{x}_n\}$
\item ${\cal S}'_i=\SubstEqRel{{\cal S}_i}{\metavariable{x}_i}{\terminale{res}}$ ($1{\leq}i{\leq}n$)
\item $\TypeCheck{\Gamma'}{\metavariable{e}_i}{\metavariable{T}_i}{{\cal S}_i}$ ($1\leq i\leq n$)
\item $\TypeCheck{\Gamma'}{\metavariable{e}_0}{\metavariable{T}}{{\cal S}'_0}$
\item if $\metavariable{T}_i{=}\Type{\terminale{a}}{\metavariable{C}_i}$, then $\Closure{\terminale{res}}{{\cal S}_i}=\{\terminale{res}\}$ ($1{\leq}i{\leq}n$)
\end{enumerate}
By induction hypotheses on (c), we have that for all $i$, $1\leq i\leq n$,
if $\Pair{\metavariable{z}}{\metavariable{z}'}\in{\cal S}_i$ and $\metavariable{z}\neq\metavariable{z}'$
\begin{enumerate} [(1)]
\item if $\metavariable{z}\neq\terminale{res}$ and $\metavariable{z}'\neq\terminale{res}$, then $\Gamma'(\metavariable{z})=\metavariable{C}$ and $\Gamma'(\metavariable{z}')=\metavariable{C}'$ (for some $\metavariable{C}$ and $\metavariable{C}'$) and $\{\metavariable{z},\metavariable{z}'\}\subseteq\FV{\metavariable{e}_i}$
\item if either $\metavariable{z}=\terminale{res}$ or $\metavariable{z}'=\terminale{res}$, then $\Gamma'(\metavariable{z}')={\metavariable{C}}$ or $\Gamma'(\metavariable{z})={\metavariable{C}}$ (for some $\metavariable{C}$) and $\metavariable{z}'\in\FV{\metavariable{e}_i}$
or $\metavariable{z}\in\FV{\metavariable{e}_i}$
\end{enumerate}
By induction hypotheses on (d), we have that
if $\Pair{\metavariable{z}}{\metavariable{z}'}\in{\cal S}'_0$ and $\metavariable{z}\neq\metavariable{z}'$
\begin{enumerate} [(1)]\addtocounter{enumi}{2}
\item if $\metavariable{z}\neq\terminale{res}$ and $\metavariable{z}'\neq\terminale{res}$, then $\Gamma'(\metavariable{z})=\metavariable{C}$ and $\Gamma'(\metavariable{z}')=\metavariable{C}'$ (for some $\metavariable{C}$ and $\metavariable{C}'$) and $\{\metavariable{z},\metavariable{z}'\}\subseteq\FV{\metavariable{e}_0}$
\item if either $\metavariable{z}=\terminale{res}$ or $\metavariable{z}'=\terminale{res}$, then $\Gamma'(\metavariable{z}')={\metavariable{C}}$ or $\Gamma'(\metavariable{z})={\metavariable{C}}$ (for some $\metavariable{C}$) and $\metavariable{z}'\in\FV{\metavariable{e}_0}$
or $\metavariable{z}\in\FV{\metavariable{e}_0}$
\end{enumerate}
Observe that, for all $i$, $1\leq i\leq n$, if $\metavariable{z}\neq\metavariable{z}'$ and $\Pair{\metavariable{z}}{\metavariable{z}'}\in{\cal S}'_i$, then $\metavariable{z},\metavariable{z}'\neq\terminale{res}$ and
either $\Pair{\metavariable{z}}{\metavariable{z}'}\in{\cal S}_i$ or $\Pair{\metavariable{z}}{\metavariable{x}_i}\in{\cal S}_i$ and $\Pair{\metavariable{z}'}{\terminale{res}}\in{\cal S}_i$.
Therefore by (1) and (2) we have that
\begin{enumerate} [(1)]\addtocounter{enumi}{4}
\item if $\metavariable{z}\neq\metavariable{z}'$ and $\Pair{\metavariable{z}}{\metavariable{z}'}\in{\cal S}'_i$, then $\metavariable{z},\metavariable{z}'\neq\terminale{res}$ and $\Gamma'(\metavariable{z})=\metavariable{C}$ and
$\Gamma'(\metavariable{z}')=\metavariable{C}'$ (for some $\metavariable{C}$ and $\metavariable{C}'$) and $\{\metavariable{z},\metavariable{z}'\}\subseteq\FV{\metavariable{e}_i}$
\end{enumerate}
Let $\metavariable{x}\neq\metavariable{y}$ and $\Pair{\metavariable{x}}{\metavariable{y}}\in{\cal S}$. Then ${\Pair{\metavariable{x}}{\metavariable{y}}\in\sum\limits_{i=0}^{n}{\cal S}'_i}$ and
$\metavariable{x},\metavariable{y}\not\in\X$.
By \refToProp{lessSrRel}.\ref{p1},
there are sequences $i_1\dots i_{k-1}$ \PG{($0\leq i_h\leq n$ for all $h$)} and $\metavariable{z}_1\dots\metavariable{z}_k$ ($k> 1$) such that $\metavariable{x}=\metavariable{z}_1$ and $\metavariable{y}=\metavariable{z}_k$
\begin{enumerate} [(A)]\addtocounter{enumi}{1}
\item $\Pair{\metavariable{z}_j}{\metavariable{z}_{j+1}}\in{\cal S}'_{i_{j}}$ and $i_j\neq i_{j+1}$ and $\metavariable{z}_j\neq \metavariable{z}_{j+1}$ for $1\leq j\leq (k-1)$.
\end{enumerate}
By (5) and (3) (the induction hypothesis on ${\cal S}'_{0}$), for all $j$, $1\leq j\leq k-1$
\begin{enumerate} [(A)]
\item if $\metavariable{z}_j\neq\terminale{res}$ and $\metavariable{z}_{j+1}\neq\terminale{res}$, then $\Gamma(\metavariable{z}_j)=\metavariable{C}$ and $\Gamma(\metavariable{z}_{j+1})=\metavariable{C}'$ (for some $\metavariable{C}$ and $\metavariable{C}'$) and $\{\metavariable{z}_j,\metavariable{z}_{j+1}\}\subseteq\FV{\metavariable{e}_{i_j}}$
\end{enumerate}
By (5) and (4) (the induction hypothesis on ${\cal S}'_{0}$), for all $j$, $1\leq j\leq k-1$
\begin{enumerate} [(A)]\addtocounter{enumi}{1}
\item if $\metavariable{z}_{j+1}=\terminale{res}$ or $\metavariable{z}_{j}=\terminale{res}$, then $\Gamma(\metavariable{z}_j)=\metavariable{C}$ or $\Gamma(\metavariable{z}_{j+1})=\metavariable{C}$ (for some $\metavariable{C}$) and $\metavariable{z}_j\in\FV{\metavariable{e}_{i_j}}$
or $\metavariable{z}_{j+1}\in\FV{\metavariable{e}_{i_j}}$.
\end{enumerate}
Finally, by transitivity of equality we have that, for all $i$, $1\leq i\leq k$,
if $\metavariable{z}_i\neq\terminale{res}$, then $\Gamma(\metavariable{z}_i)=\metavariable{C}$ and if there is $j$, $1\leq j\leq k$,
such that $\metavariable{z}_j=\terminale{res}$ also $\metavariable{C}=\metavariable{D}$. Moreover $\{\metavariable{z}_i\ |\ \metavariable{z}_i\neq\terminale{res}\ \ 1\leq i\leq k\}\subseteq\bigcup_{0\leq i\leq n}(\FV{\metavariable{e}_i}\setminus\X)=\FV{\metavariable{e}}$.
Therefore, if $\Pair{\metavariable{x}}{\metavariable{y}}\in{\cal S}$ and $\metavariable{x}\neq\metavariable{y}$ we get
\begin{itemize}
\item if $\metavariable{x}\neq\terminale{res}$ and $\metavariable{y}\neq\terminale{res}$, then $\Gamma(\metavariable{x})={\metavariable{C}}$ and $\Gamma(\metavariable{y})=\PG{\metavariable{C}'}$ (for some $\metavariable{C}$ and $\metavariable{C}'$) and $\metavariable{x},\metavariable{y}\in\FV{\metavariable{e}}$,
\item if $\metavariable{y}=\terminale{res}$ or $\metavariable{x}=\terminale{res}$, then $\Gamma(\metavariable{x})={\metavariable{C}}$ or $\Gamma(\metavariable{y})={\metavariable{C}}$ (for some $\metavariable{C}$) and $\metavariable{x}\in\FV{\metavariable{e}}$ or $\metavariable{y}\in\FV{\metavariable{e}}$.
\end{itemize}
}
The proofs for \underline{rules \rn{T-Invk} and \rn{T-New}} are similar.
\end{proof}
|
1,116,691,500,698 | arxiv | \section{Introduction}\label{s:intro}
Nuclear rings in barred-spiral galaxies often exhibit strong star formation activity (e.g.,
\citealt{but96,ken97,kna06,maz08,san10,maz11,hsi11,van11, oni15}). They are mostly circular, with ellipticity of $e\sim0-0.4$. They are thought to form due to nonlinear interactions of gas with an underlying non-axisymmetric stellar bar potential (e.g.,
\citealt{com85,but86,shl90,kna95,com01,com10}). Recent hydrodynamic
simulations show that the inflowing gas driven inward by the bar torque tends to gather at the location of centrifugal barrier, well inside the inner Lindblad resonance, where the centrifugal force on the gas balances the external gravity \citep{kim12b,kim12c,ks12,li15}. This predicts that nuclear rings are smaller in size in galaxies with stronger bars and/or lower pattern speeds, overall consistent with observational results of \citet{com10}.
One of the important issues regarding nuclear rings is what determines the star formation rate (SFR) in them. Observations indicate that the ring SFRs vary widely in the range of $\sim0.1$--$10\Mrate$ from galaxy to galaxy, with a smaller value corresponding to a more strongly barred galaxy \citep{maz08,com10}, although the total gas mass in each ring is almost constant at $\sim(1$--$6) \times 10^8{\rm\,M_\odot}$ (e.g., \citealt{but00,ben02,she05,sch06}). By analyzing photometric H$\alpha$ data of 22 nuclear rings, \citet{maz08} found that about half of their sample possesses an azimuthal age gradient of young star clusters in such a way that older ones are located systematically farther away from the contact points between a ring and dust lanes, while other rings do not show a noticeable age gradient (see also, e.g., \citealt{bok08,ryd10,bra12}).
To explain the spatial distributions of ages of young star clusters,
\citet{bok08} proposed two scenarios of star formation: the ``popcorn'' model in which star formation takes place in dense clumps randomly distributed along a nuclear ring, and the ``pearls on a string'' model where star formation occurs preferentially near the contact points. Since star clusters age as they orbit about the galaxy center, the pearls-on-a-string model naturally explains the presence of an azimuthal age gradient, while clusters with different ages are well mixed in the popcorn model (see also, e.g.,
\citealt{ryd01,ryd10,all06,san10,van13}). The most important factor
that determines the dominating type of star formation appears to be the mass inflow rate $\dot{M}$ to the ring along the dust lanes
\citep{seo13,seo14}. When $\dot{M}$ is less than a critical value
$\dot{M}_{c}$, all inflowing gas can be consumed at the contact points, and star formation occurs in the pearls-on-a-string fashion. When $\dot{M}> \dot{M}_{c}$, on the other hand, the inflowing gas overflows the contact points and is transferred into other parts of the ring, resulting in popcorn-style star formation when it becomes
gravitationally unstable. \citet{seo13} found numerically
$\dot{M}_c\sim 1\Mrate$ for typical nuclear rings, although it depends rather sensitively on the gas sound speed as well as the ring size.
The above consideration implicitly assumes that nuclear rings
undergoing star formation in the pearls-on-a-string manner are
gravitationally stable, while those with popcorn-type star formation
are globally unstable. However, this has yet to be tested
theoretically. Although several authors studied gravitational
instability of ring-like systems (e.g., \citealt{goo88,elm94,chr97,had11}), it is difficult to apply their
results directly to nuclear rings because of the approximations made in these studies. For example, \citet{goo88} analyzed a linear stability of shearing accretion rings (or tori) to gravitational perturbations, but their models were limited to incompressible gas without any motion along the vertical direction parallel to the rotation axis (see also \citealt{luy90,and97}). For magnetized compressible rings, \citet{elm94} showed that a ring with density larger than $0.6\kappa^2/G$ is gravitationally unstable, with $\kappa$ and $G$ referring to the epicycle frequency and the gravitational constant, respectively. However, this result was based on the local approximation that treated the ring as a thin uniform cylinder without considering its internal structure.
On the other hand, \citet{had11} and \citet{had14} analyzed stability
of polytropic rings with index $n=1.5$ by solving the linearized
equations as an initial-value rather than an eigenvalue problem, and found several unstable modes with the azimuthal mode number $m\leq 4$. \citet{chr97} instead ran two-dimensional nonlinear simulations of galaxy rings using the equations integrated along the vertical
direction. Using an adiabatic equation of state, they found that
massive slender rings are highly unstable to gravitating modes with $m$ as large as 18. However, these linear or nonlinear initial-value
approaches did not search all unstable modes systematically as functions of $m$ and rotation frequency $\Omega_0$.
Is a ring with given physical quantities (such as mass, size, sound
speed, rotation speed) gravitationally stable or not? What is the most dominant mode if it is unstable? How fast does it grow? To address these questions, we in this paper perform a linear stability analysis of nuclear rings, assuming that they are slender and isothermal. We will find full dispersion relations of gravitationally unstable modes as well as the critical angular frequencies for stability. We will then apply the results to observed nuclear rings to check the presence or absence of an azimuthal age gradient of young star clusters is really consistent with stability properties of the rings. We will also run three-dimensional numerical simulations and compare the results with those of our linear stability analysis.
Stability analysis of any system requires setting up its initial
equilibrium a priori. Due to their complicated geometry, finding
equilibrium configurations of isothermal rings is a non-trivial task.
In a pioneering work, \citet{ost64b} treated the effects of rotation
and the curvature as perturbing forces to otherwise non-rotating
infinite cylinders, and obtained approximate expressions for density
distributions of polytropic or isothermal rings in axisymmetric
equilibrium. To determine the equilibrium structure of a
slowly-rotating, spheroid-like body, \citet{ost68} developed a
self-consistent field (SCF) method that solves the Poisson equation as well as the equation for steady equilibrium, alternately and
iteratively. \citet{eri81} used a similar iteration method to find a
ring-like equilibrium sequence of incompressible bodies as a function
of $\Omega_0$. \citet{hac86a,hac86b} extended the original SCF method
of \citet{ost68} to make it work even for rapidly-rotating, ring-like
polytropes in two or three dimensions. In this paper, we shall modify
the SCF technique of \citet{hac86a} to find equilibrium sequences of
rigidly-rotating isothermal bodies. This will allow us to explore the
effects of compressibility on the internal structures of rings.
The remainder of this paper is organized as follows. In Section
\ref{s:eql}, we describe our SCF method used to construct isothermal
bodies in steady equilibrium. In Section \ref{s:seq}, we present the
equilibrium sequences of rigidly-rotating isothermal objects, together with test results for incompressible bodies and Bonnor-Ebert spheres. We will also show that the density profiles of slender rings can well be approximated by those of infinite isothermal cylinders. In Section \ref{s:GI}, we perform a linear stability analysis of slender isothermal rings to obtain the dispersion relations as well as the critical angular frequencies, and present the results of numerical simulations. In Section \ref{s:sum}, we summarize and
conclude this work with applications to observed nuclear rings.
\section{SCF Method}\label{s:eql}
\subsection{Equilibrium Equations}
\begin{figure*}
\hspace{0.5cm}\includegraphics[angle=0, width=17cm]{fig01.pdf}
\caption{Shapes of the meridional cross section of axially-symmetric incompressible bodies in
(a) spheroid-like configurations and (b) ring-like configurations.
\label{f:incomp_shape}}
\end{figure*}
In this section, we explore equilibrium sequences of rotating,
isothermal bodies in the presence of both self-gravity and external
gravity. These bodies can take a spheroid-like or ring-like
configuration when the total angular momentum is small or large. We
assume that equilibrium bodies are rotating rigidly at angular
frequency $\Omega_0$ about its symmetry axis that is aligned in the
$\hat{z}$-direction. The equation of steady equilibrium then reads
\be\label{e:HSE0}
\cs^2 \nabla \ln \rho + \nabla \Phi_{\rm eff} = 0,
\ee
where $\rho$ is the density, $\cs$ is the isothermal speed of sound,
and $\Phi_{\rm eff}$ is the effective
potential defined by
\be\label{e:effP}
\Phi_{\rm eff} = \Phi_e + \Phi_s - \frac{1}{2}\Omega_0^2 R^2,
\ee
with $R$ being the cylindrical radial distance from the rotation axis. In Equation \eqref{e:effP}, $\Phi_e$ represents the external
gravitational potential, while $\Phi_s$ is the self-gravitational
potential satisfying the Poisson equation
\be\label{e:pos0}
\nabla^2\Phi_s = 4\pi G\rho.
\ee
For nuclear rings in barred galaxies, $\Phi_e$ is provided mainly by a dark halo as well as a stellar disk and a bulge. The last term in
Equation \eqref{e:effP} is the centrifugal potential.
We assume $\nabla \Phi_e = \Omega_e^2 \mathbf{R}$, so that the external gravity alone can make a body rotate at constant angular frequency $\Omega_e$. For nuclear rings, this approximation is valid if rings are geometrically thin. Equation \eqref{e:HSE0} is then integrated to yield
\be\label{e:HSE1}
\cs^2 \ln \rho + \Phi_s - \frac{1}{2}
\Omega_s^2 R^2 = C,
\ee
where $C$ is constant, and
\be
\Omega_s^2 \equiv \Omega_0^2 - \Omega_e^2.
\ee
Note that $\Omega_s$ corresponds to the equilibrium angular frequency
of an \emph{isolated}, self-gravitating ring in the absence of the
external gravity. Our aim is to obtain $\rho$ satisfying Equations
\eqref{e:pos0} and \eqref{e:HSE1} simultaneously.
To obtain equilibrium structure of slowly-rotating stars, \citet{ost68}
introduced an efficient SCF method that solves Equations \eqref{e:pos0}
and \eqref{e:HSE1} alternately and iteratively. In the SCF method,
one first takes a trial distribution for $\rho$ and solves the Poisson equation to find $\Phi_s$, which in turn yields new $\rho$ from Equation \eqref{e:HSE1}. Calculations are repeated until the trial and new density distributions agree within a tolerance. \citet{hac86a} extended the original SCF method to make it suitable for rapidly rotating polytropes. Here, we closely follow Hachisu's method to determine isothermal equilibria.
Following \citet{hac86a}, we let $\rho_{c}$ and $R_{\rm A}$ denote the
maximum density and the maximum radial extent in the equatorial plane
of an equilibrium object, respectively. We introduce the following
dimensionless variables:
$ \widehat \rho \equiv {\rho}/{\rho_{c}},$
$ \widehat R \equiv {R}/{R_{\rm A}},$
$ {\widehat \Phi}_s \equiv {\Phi_s}/{(GR_{\rm A}^2\rho_{c})},$ and
$ {\widehat \Omega}_s \equiv {\Omega_s}/{(G\rho_{c})^{1/2}}.$
Then, Equation \eqref{e:HSE1} reduces to
\be\label{e:HSE2}
\alpha \ln \widehat \rho = \widehat {C} - {\widehat \Phi}_s + \frac{1}{2}
{\widehat \Omega}_s^2 \widehat R^2,
\ee
where $\widehat {C}$ is a dimensionless constant and
\be\label{e:alp}
\alpha \equiv {\cs^2}/({GR_{\rm A}^2\rho_{c}})
\ee
measures the relative importance of the thermal to gravitational
potential energies.
In the SCF method, it is crucial to solve the Poisson equation accurately and efficiently. For spheroid-like configurations, it is customary to employ a multipole expansion technique on spherical polar coordinates. On the other hand, it is more efficient to utilize toroidal coordinates for ring-like configurations, especially for slender rings. In Appendix \ref{a:pairs}, we describe the methods to find $\Phi$ for given $\rho$ both in spherical and toroidal coordinates.
\subsection{Boundary Conditions}\label{s:bd}
In the case of a polytropic equation of state, an object in equilibrium achieves vanishing density at a finite radius, and is thus self-truncated. On the other hand, an isothermal object in steady equilibrium would extend to infinite distance without an external medium. In reality, gas clouds or rings are usually in pressure equilibrium with their surrounding hot rarefied medium that provides a confining external pressure $P_{\rm ext}$. Fixing $P_{\rm ext}$ is equivalent to choosing $\widehat {C}$ in Equation \eqref{e:HSE2}, or to placing the boundaries where $\rho=P_{\rm ext}/\cs^2$.
Following \citet{hac86a}, we let (positive) $R_{\rm B}$ denote the radial
distance of the boundary along the $z$-axis for spheroid-like
configurations. For ring-like configurations, $R_{\rm B}$ takes the
negative of the radial distance to the inner boundary in the equatorial plane. Then, Equation \eqref{e:HSE2} requires
\be\label{e:Omg}
{\widehat \Omega}_s^2= \left\{\begin{array}{l}
2[{\widehat \Phi}_s (1, \pi/2) - {\widehat \Phi}_s (\widehat R_{\rm B}, 0)], \\
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \textrm{for spheroid-like bodies}, \\
2[{\widehat \Phi}_s (1, \pi/2) - {\widehat \Phi}_s (-\widehat R_{\rm B}, {\pi}/{2})]
/ {(1- \widehat R_{\rm B} ^2)},
\\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\textrm{for ring-like bodies},
\end{array}\right.
\ee
where
\be\label{e:rB}
\hatR_{\rm B} \equiv R_{\rm B} /R_{\rm A},
\ee
with $0<\hatR_{\rm B}\leq 1$ for spheroid-like bodies and $-1<\hatR_{\rm B}<0$
for ring-like bodies. Since ${\widehat \Omega}_s$ depends on $\hatR_{\rm B}$ through Equation \eqref{e:Omg}, an isothermal equilibrium can be completely specified by two parameters: $\alpha$ and $\hatR_{\rm B}$.
Since $R_{\rm A}$ is defined to be the maximum radial extent in the
meridional plane, the existence of an equilibrium demands
${\widehat \Phi}_s-{\widehat \Omega}_s^2\widehat R^2/2$ in Equation \eqref{e:HSE2} to be an
increasing function of $\widehat R$ near $\widehat R=1$: the boundary should
otherwise retreat to a smaller radius where the thermal pressure is
equal to $P_{\rm ext}$. Since the potential minimum occurs inside
$\widehat R=1$, this requires that the equilibrium should be sufficiently
self-gravitating and/or have small enough ${\widehat \Omega}_s$. As will be
presented in Section \ref{s:iso}, an isothermal equilibrium turns out to be nonexistent for fairly small $|R_{\rm B}|$ because
self-gravity is not strong enough or the angular frequency is too large to form gravitationally bound objects.
\subsection{Computation Method}
In Appendix \ref{a:comp}, we compare the results based on the potential expansions in the spherical and toroidal coordinates for ring-like equilibria, and show that the two methods agree with each other when $\widehat R_{\rm B}\gtrsim -0.86$, while the multipole expansion in the spherical coordinates overestimates ${\widehat \Omega}_s^2$ for smaller $\widehat R_{\rm B}$. When we present the results in Section \ref{s:seq}, therefore, we employ the multipole expansion with $l_{\rm max}=10$ in the toroidal coordinates for flattened equilibria with $\widehat R_{\rm B}\leq -0.8$, while adopting the spherical multipole expansion with $l_{\rm max}=50$ for any other equilibria.
As a domain of computation, we consider a meridional cross-section of
an equilibrium body, and divide it into $N_r\times N_a$ cells.
Here, $N_r$ and $N_a$ refer respectively to the mesh numbers over $0 \leq \widehat r \leq 1.2$ in the $r$-direction and $0\leq \theta \leq \pi/2$ in the $\theta$-direction of the spherical coordinates, or
over $2.5\leq \sigma \leq 9.0$ in the $\sigma$-direction and over
$0\leq\tau\leq \pi$ in the $\tau$-direction of the toroidal coordinates.
Initially, we take $\widehat \rho= 1$ when $\widehat r \leq 1$ and $\widehat \rho=0$ otherwise for spheroid-like configurations, and $\widehat \rho= 1$ when $\hatR_{\rm B} \leq \widehat r \sin\theta \leq 1$ and $\widehat \rho=0$ otherwise for ring-like configurations in spherical coordinates. When we use the toroidal coordinates, we set the focal length equal to $\widehat{a}\equiv a/R_{\rm A} = (1-\widehat R_{\rm B})/2$, and take $\widehat \rho=1$ in the regions with $(x-a)^2+z^2 \leq a^2$ and $\widehat \rho=0$ otherwise.
We then calculate $\rho_l$ from Equation
\eqref{e:exp1} or Equation \eqref{e:rho_to}, and $\Phi$ using Equation \eqref{e:pos1} or Equation \eqref{e:gpot_toroidal} based on
Gaussian and Newton-Cotes quadratures (e.g., \citealt{pre88}). Next, we calculate ${\widehat \Omega}_s^2$ from Equation \eqref{e:Omg} and then update $\widehat \rho$ from Equation \eqref{e:HSE2}. In each iteration on the toroidal mesh, $\widehat{a}$ is set to move to the location of the maximum density. We repeat the calculations using the updated density until the relative difference in ${\widehat \Omega}_s^2$ from two successive iterations is smaller than $10^{-6}$. We employ $N_r\times N_a$=$1024\times512$ cells, and it typically takes less than 20 iterations to obtain a converged solution.
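For concreteness, the following Python sketch outlines one way to implement this iteration for the spheroid-like case on a spherical $(\widehat r,\theta)$ grid, in units with $G=\rho_{c}=R_{\rm A}=1$ so that $\alpha=\cs^2$. It is only a minimal illustration of Equations \eqref{e:HSE2} and \eqref{e:Omg}, not the code used for the results below: the grid sizes, the parameter values, the trapezoidal quadrature, the convergence test on the density (rather than on ${\widehat \Omega}_s^2$), and the bare even-$l$ Legendre expansion for the self-gravity are all simplifying choices.
\begin{verbatim}
# Minimal SCF sketch (spheroid-like case, spherical grid).
# Units: G = rho_c = R_A = 1, so that alpha = cs^2.
import numpy as np
from scipy.special import eval_legendre

Nr, Na, lmax = 256, 128, 50           # grid sizes, multipole cutoff
alpha, rB = 0.5, 0.7                  # illustrative (alpha, R_B/R_A)
r  = np.linspace(1e-4, 1.2, Nr)       # radial grid
th = np.linspace(0.0, 0.5*np.pi, Na)  # polar angle (equatorial symmetry)
mu = np.cos(th)
R2 = (r[:, None]*np.sin(th)[None, :])**2   # cylindrical radius squared

def poisson(rho):
    # Self-gravity from an even-l Legendre multipole expansion.
    phi = np.zeros_like(rho)
    for l in range(0, lmax + 1, 2):
        Pl = eval_legendre(l, mu)
        rho_l = (2*l + 1)*np.trapz(rho*Pl*np.sin(th), th, axis=1)
        fin  = np.array([np.trapz(rho_l[:i+1]*r[:i+1]**(l+2), r[:i+1])
                         for i in range(Nr)])
        fout = np.array([np.trapz(rho_l[i:]*r[i:]**(1-l), r[i:])
                         for i in range(Nr)])
        phi += (-4*np.pi/(2*l + 1)) \
               * (fin/r**(l+1) + fout*r**l)[:, None]*Pl[None, :]
    return phi

rho = np.where(r[:, None] <= 1.0, 1.0, 0.0)*np.ones((Nr, Na))  # trial
iA, iB = np.argmin(np.abs(r - 1.0)), np.argmin(np.abs(r - rB))
for it in range(100):
    phi = poisson(rho)
    om2 = 2.0*(phi[iA, -1] - phi[iB, 0])    # spheroid-like boundary condition
    phieff = phi - 0.5*om2*R2
    C = phieff[r <= 1.0].min()              # normalizes max(rho) to 1
    rho_new = np.exp((C - phieff)/alpha)    # isothermal equilibrium
    rho_new[phieff > phieff[iA, -1]] = 0.0  # truncate at the P_ext surface
    rho_new[r > 1.0, :] = 0.0               # R_A is the outer edge
    if np.max(np.abs(rho_new - rho)) < 1e-6:
        break
    rho = rho_new
\end{verbatim}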
Once we find an equilibrium configuration $\rho(r,\theta)$ in the spherical coordinates, it is straightforward to calculate its volume $V$, mass $M$, angular momentum $J$, kinetic energy $T$, and gravitational potential energy $W$ via
\be
M = 2\pi \int \rho r^2 \sin \theta dr d\theta,
\ee
\be
J = 2\pi \int \rho\Omega r^4 \sin^3 \theta dr d\theta,
\ee
\be
T = \pi \int \rho\Omega^2 r^4 \sin^3\theta dr d\theta,
\ee
and
\be
W = \pi \int \rho \Phi_s r^2 \sin \theta dr d\theta.
\ee
Note that $T=J\Omega/2$ for rigidly-rotating bodies. These quantities
will be evaluated and used to draw the equilibrium sequences of
isothermal rings in Section \ref{s:seq}.
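On the discrete mesh, these integrals reduce to weighted sums. A minimal sketch with placeholder fields (in units $G=\rho_{c}=R_{\rm A}=1$) is given below; since the computational domain covers only $0\leq\theta\leq\pi/2$, a factor of 2 accounts for the reflection symmetry about the equatorial plane. For the uniform unit sphere it returns $\widehat M\simeq4\pi/3=4.189$, the first entry of Table \ref{t:incomp}.
\begin{verbatim}
import numpy as np

Nr, Na = 1024, 512
r  = (np.arange(Nr) + 0.5)*1.2/Nr           # 0 <= r <= 1.2
th = (np.arange(Na) + 0.5)*(np.pi/2)/Na     # 0 <= theta <= pi/2
dr, dth = 1.2/Nr, (np.pi/2)/Na
R, TH = np.meshgrid(r, th, indexing="ij")

rho   = np.where(R <= 1.0, 1.0, 0.0)        # placeholder density
Omega = 0.0                                 # placeholder frequency
Phis  = np.zeros_like(R)                    # placeholder potential

w  = rho*R**2*np.sin(TH)*dr*dth             # mass element / (2 pi)
s2 = (R*np.sin(TH))**2                      # cylindrical radius squared
M = 2*2*np.pi*np.sum(w)                     # leading 2: upper half only
J = 2*2*np.pi*Omega*np.sum(w*s2)
T = 2*np.pi*Omega**2*np.sum(w*s2)
W = 2*np.pi*np.sum(w*Phis)
print(M)                                    # ~4 pi/3 for the unit sphere
\end{verbatim}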
\section{Equilibrium Configurations}\label{s:seq}
\subsection{Incompressible Bodies}\label{s:incomp}
\citet{eri81} found that an incompressible body in axial symmetry takes the form of a Maclaurin spheroid when rotating slowly, and bifurcates into a one-ring sequence when the total angular momentum exceeds a critical value (see also \citealt{cha67,bar71,hac86a}). As a test of our SCF method, we first apply it to construct equilibrium
configurations in the incompressible limit, which is attained by taking $\alpha\gg 1$. By comparing our results with \citet{eri81} for the Maclaurin spheroids and incompressible rings, we can check the accuracy of our SCF method.
\begin{figure}
\includegraphics[angle=0, width=8.5cm]{fig02.pdf}
\caption{Dependence on the normalized angular momentum
$j^2 = J^2/(4\pi G M^{10/3}{\langle\rho\rangle}^{-1/3})$ of (a) the normalized
angular velocity $\omega_s^2 = \Omega_s^2/(4\pi G {\langle\rho\rangle})$ and
(b) the energy ratio $T/|W|$. The black and blue solid lines are our
results with $\alpha=10^5$ for spheroid-like and ring-like
equilibria, respectively. The black dashed lines correspond to
the Maclaurin spheroid sequence, while
the red dotted lines, adopted from Table 1 of \citet{eri81}, are
for the hamburgers or the one-ring sequence.
The filled stars at $j^2=2.233\times 10^{-2}$ indicate the bifurcation point from the Maclaurin sequence, while the open circles at
$j^2=2.183\times 10^{-2}$ correspond to $\widehat R_{\rm B}=0$.
The insets zoom in the regions around the bifurcation point.
\label{f:incomp_Omg}}
\end{figure}
\begin{deluxetable}{rlllll}
\tablecaption{Properties of Axially-symmetric Incompressible Bodies in Steady Equilibrium\label{t:incomp}}
\tablewidth{0pt} %
\tablehead{ \colhead{$\hatR_{\rm B}$} & \colhead{${\widehat \Omega}_s^2$}
& \colhead{$\widehat M$} & \colhead{$\widehat J$}
& \colhead{$\widehat T$}
& \colhead{$-\widehat W$} }
\startdata
1.0 & 0.000E$+$0 & 4.189E$+$0 & 0.000E$+$0 & 0.000E$+$0 & 1.054E$+1$ \\
0.9 & 3.263E$-$1 & 3.767E$+$0 & 8.599E$-$1 & 2.456E$-$1 & 8.811E$+0$ \\
0.8 & 6.307E$-$1 & 3.349E$+$0 & 1.063E$+$0 & 4.221E$-$1 & 7.220E$+0$ \\
0.7 & 9.082E$-$1 & 2.927E$+$0 & 1.115E$+$0 & 5.311E$-$1 & 5.729E$+0$ \\
0.6 & 1.140E$+$0 & 2.510E$+$0 & 1.071E$+$0 & 5.718E$-$1 & 4.385E$+0$ \\
0.5 & 1.314E$+$0 & 2.093E$+$0 & 9.581E$-$1 & 5.491E$-$1 & 3.179E$+0$ \\
0.4 & 1.405E$+$0 & 1.671E$+$0 & 7.910E$-$1 & 4.688E$-$1 & 2.121E$+0$ \\
0.3 & 1.380E$+$0 & 1.254E$+$0 & 5.880E$-$1 & 3.454E$-$1 & 1.253E$+0$ \\
0.2 & 1.192E$+$0 & 8.389E$-$1 & 3.660E$-$1 & 1.998E$-$1 & 5.902E$-1$ \\
0.1 & 1.014E$+$0 & 7.885E$-$1 & 3.609E$-$1 & 1.817E$-$1 & 5.012E$-1$ \\
0.0 & 1.008E$+$0 & 8.504E$-$1 & 3.998E$-$1 & 2.007E$-$1 & 5.726E$-1$ \\
$-0.1$ & 1.019E$+$0 & 8.796E$-$1 & 4.162E$-$1 & 2.100E$-$1 & 6.103E$-1$ \\
$-0.2$ & 9.845E$-$1 & 9.403E$-$1 & 4.528E$-$1 & 2.246E$-$1 & 6.844E$-1$ \\
$-0.3$ & 8.743E$-$1 & 9.397E$-$1 & 4.537E$-$1 & 2.121E$-$1 & 6.703E$-1$ \\
$-0.4$ & 7.189E$-$1 & 8.615E$-$1 & 4.076E$-$1 & 1.728E$-$1 & 5.558E$-1$ \\
$-0.5$ & 5.459E$-$1 & 7.182E$-$1 & 3.235E$-$1 & 1.195E$-$1 & 3.845E$-1$ \\
$-0.6$ & 3.794E$-$1 & 5.366E$-$1 & 2.214E$-$1 & 6.818E$-$2 & 2.162E$-1$ \\
$-0.7$ & 2.321E$-$1 & 3.439E$-$1 & 1.224E$-$1 & 2.947E$-$2 & 9.076E$-2$ \\
$-0.8$ & 1.130E$-$1 & 1.696E$-$1 & 4.661E$-$2 & 7.835E$-$3 & 2.311E$-2$ \\
$-0.9$ & 3.204E$-$2 & 4.654E$-$2 & 7.537E$-$3 & 6.745E$-$4 &
1.911E$-3$
\enddata
\tablecomments{The number behind E indicates the exponent of the
power of 10.}
\end{deluxetable}
Figure \ref{f:incomp_shape} plots the boundaries in the meridional
plane of axially-symmetric incompressible bodies in (a) spheroid-like
and (b) ring-like equilibrium for some selected values of $\hatR_{\rm B}$.
For all cases, $\alpha=10^5$ is taken. For $0.158 \lesssim \hatR_{\rm B}
\leq 1$, an equilibrium is exactly a Maclaurin spheroid with an
ellipticity $e = (1-\hatR_{\rm B}^2)^{1/2}$. The cases with $0 < \hatR_{\rm B}
\lesssim 0.158$ result in concave ``hamburgers'' that are somewhat more flared at intermediate $\widehat R$ $(\sim 0.6$--$0.8)$ than at the symmetry axis (e.g., \citealt{eri81,hac86a}). When $\hatR_{\rm B} <0$, on the other hand, equilibrium bodies take the form of rotating rings (or tori), with a larger $|\hatR_{\rm B}|$ corresponding to a more slender ring. Table \ref{t:incomp} lists the dimensionless quantities ${\widehat \Omega}_s^2$, $\widehat M = M/(R_{\rm A}^3\rho_{c})$, $\widehat J = J/(G^{1/2}R_{\rm A}^5\rho_{c}^{3/2})$, $\widehat T = T/(GR_{\rm A}^5\rho_{c}^2)$, and $\widehat W = W/(GR_{\rm A}^5\rho_{c}^2)$ for incompressible bodies.
\begin{figure}
\includegraphics[angle=0, width=8.5cm]{fig03.pdf}
\caption{Density distributions of non-rotating isothermal spheres as functions of the dimensionless radius (a) $\widehat r$ and (b)
$\xi = (4\pi/\alpha)^{1/2} \widehat r$ for
$\alpha=1$, $0.1$, and $0.01$.
All the cases are truncated at $\widehat r=1$.
The dashed line in (b) represents
an infinite isothermal sphere
(without pressure truncation).
Note that $\rho$ is independent of $\alpha$ except for the truncation radius. Smaller $\alpha$ corresponds to truncation at
smaller external pressure.
\label{f:isosphere}}
\vspace{0.2cm}
\end{figure}
\begin{figure*}
\hspace{0.5cm}\includegraphics[angle=0, width=17.cm]{fig04.pdf}
\caption{Equilibrium density distributions on the meridional plane of isothermal bodies with
$\hatR_{\rm B}=1.0$, 0.8, 0.6, 0.4, 0.0, $-0.2$, $-0.4$, $-0.6$, and $-0.8$ from top to bottom.
The left, middle, and right columns correspond to $\alpha=1$, 0.1, and 0.01,
respectively. Colorbars label $\log \rho/\rho_{c}$.
\label{f:isocontour}}
\end{figure*}
Figure \ref{f:incomp_Omg} plots (a) the square of the normalized
angular velocity $\omega_s^2 \equiv \Omega_s^2/(4\pi G{\langle\rho\rangle})$ and (b) the ratio of the kinetic to gravitational potential energy $t\equiv T/|W|$ as functions of the square of the normalized angular momentum $j^2\equiv J^2/(4\pi G M^{10/3} {\langle\rho\rangle}^{-1/3})$. Here, ${\langle\rho\rangle}$ denotes the volume-averaged density. The black and blue solid lines are the spheroid-like and ring-like equilibria, respectively, that we obtain by taking $\alpha=10^5$. The black dashed lines plot the theoretical predictions
\be
\omega_s^2= \frac{(1-e^2)^{1/2}}{2e^2}
\left[(3-2e^2)\frac{\sin^{-1}e}{e} - 3(1-e^2)^{1/2}\right],
\ee
\be
t = \frac{3}{2e^2} - 1 - \frac{3(1-e^2)^{1/2}}{2e\sin^{-1}e},
\ee
and
\be
j^2 = \frac{4 \omega_s^2}{25} \left(\frac{3}{4\pi}\right)^{4/3}
(1-e^2)^{-2/3},
\ee
of the Maclaurin spheroid sequence (e.g., Eqs.~(3.234), (3.236), and
(3.239) of \citealt{lan99}), which are in excellent agreement with our results in the small-$j^2$ regime. The filled stars at
$j^2=2.233\times10^{-2}$ (occurring at $\hatR_{\rm B}=0.158$) mark the
bifurcation point where the Maclaurin sequence branches out into the
concave hamburgers and then into the one-ring sequence after the open circles at $j^2=2.183\times 10^{-2}$ (or $\hatR_{\rm B}=0$). The insets zoom in on the regions with $0.020 \leq j^2 \leq 0.024$ around the bifurcation point for clarity. For comparison, we also plot the
$\omega_s^2$--$j^2$ and $T/|W|$--$j^2$ relationships adopted from Table 1 of \citet{eri81} as the red dotted lines, which are slightly
($\sim1$--$2$\%) different from our results (see also \citealt{hac86a}). These discrepancies are presumably due to the
insufficient resolution used by these authors in solving the Poisson
equation.\footnote{\citet{hac86a} employed $N_r\times N_a$=$257\times277$ cells and truncated the multipole expansion at $l_{\rm max}=16$ in his SCF calculations.}
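These parametric relations are straightforward to evaluate numerically; the short Python sketch below reproduces the dashed reference curves in Figure \ref{f:incomp_Omg}. As a quick check, the maximum of $\omega_s^2$ along the sequence comes out as $\simeq0.1123$, i.e., the classical value $\Omega^2/(\pi G\rho)\simeq0.449$ attained at $e\simeq0.93$.
\begin{verbatim}
import numpy as np

# Maclaurin sequence, parametrized by the eccentricity e.
e = np.linspace(1e-4, 0.9999, 20000)
f = np.sqrt(1 - e**2)
omega2 = f/(2*e**2)*((3 - 2*e**2)*np.arcsin(e)/e - 3*f)
t  = 1.5/e**2 - 1 - 3*f/(2*e*np.arcsin(e))      # T/|W|
j2 = 0.16*omega2*(3/(4*np.pi))**(4/3)*f**(-4/3)

i = np.argmax(omega2)
print(f"max omega_s^2 = {omega2[i]:.4f} at e = {e[i]:.3f}")
\end{verbatim}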
\begin{figure*}[!t]
\hspace{0.5cm}\includegraphics[angle=0, width=17cm]{fig05.pdf}
\caption{Equilibrium density profiles of (a) spheroid-like objects with $\hatR_{\rm B}=0.6$ and (b) ring-like objects with $\hatR_{\rm B}=-0.4$. The cases with $\alpha=1$, 0.1, and 0.01 are shown as black, red, and blue curves, respectively. The solid and dotted lines are along the $\widehat R$- and $\widehat z$-axis, respectively. In (b), the solid curves are shifted horizontally to make the maximum density occur at zero in the abscissa.
\label{f:isoprofile}}
\end{figure*}
\subsection{Isothermal Objects}\label{s:iso}
\begin{figure*}
\hspace{0.5cm}\includegraphics[angle=0, width=17cm]{fig06.pdf}
\caption{Dependence on $\hatR_{\rm B}$ of (a) the angular frequency ${\widehat \Omega}_s^2$, (b)
the total mass $\widehat M$, (c) the average density ${\langle\widehat\rho\rangle}$, and (d) the kinetic energy $\widehat T$ (thick lines) and the gravitational potential energy $|\widehat W|$ (thin lines) for isothermal equilibria with $\alpha=1$, 0.1, and 0.01. Filled circles mark the boundaries of the ranges of $\hatR_{\rm B}$ for the existence of isothermal equilibria. The incompressible cases with $\alpha=10^5$ are compared as dotted lines.
\label{f:isoRB}}
\end{figure*}
We now present density distributions of isothermal objects in
axisymmetric equilibrium. We first consider non-rotating isothermal
spheres truncated by an external pressure. We then explore how rotation changes equilibrium structures.
\subsubsection{Bonnor-Ebert Spheres}
\begin{deluxetable*}{rllllll}
\tabletypesize{\small}
\tablecaption{Properties of Axially-symmetric Isothermal Bodies in Steady Equilibrium\label{t:iso}}
\tablewidth{0pt} %
\tablehead{ \colhead{$\hatR_{\rm B}$} & \colhead{${\widehat \Omega}_s^2$}
& \colhead{$\widehat M$} & \colhead{${\langle\widehat\rho\rangle}$} & \colhead{$\widehat J$}
& \colhead{$\widehat T$}
& \colhead{$-\widehat W$} }
\startdata
& & & $\alpha=1$ & & & \\
\hline
1.0 & 0.000E$+$0 & 1.806E$+$0 & 4.310E$-$1 & 0.000E$+$0 & 0.000E$+$0 & 2.140E$+$0 \\
0.9 & 1.861E$-$1 & 1.735E$+$0 & 4.611E$-$1 & 2.566E$-$1 & 5.536E$-$2 & 2.028E$+$0 \\
0.8 & 3.805E$-$1 & 1.657E$+$0 & 4.964E$-$1 & 3.551E$-$1 & 1.095E$-$1 & 1.900E$+$0 \\
0.7 & 5.826E$-$1 & 1.564E$+$0 & 5.392E$-$1 & 4.209E$-$1 & 1.606E$-$1 & 1.746E$+$0 \\
0.6 & 7.810E$-$1 & 1.455E$+$0 & 5.901E$-$1 & 4.599E$-$1 & 2.032E$-$1 & 1.561E$+$0 \\
0.5 & 9.653E$-$1 & 1.322E$+$0 & 6.512E$-$1 & 4.709E$-$1 & 2.313E$-$1 & 1.335E$+$0 \\
0.4 & 1.111E$+$0 & 1.152E$+$0 & 7.243E$-$1 & 4.453E$-$1 & 2.347E$-$1 & 1.054E$+$0 \\
0.3 & 1.171E$+$0 & 9.247E$-$1 & 8.065E$-$1 & 3.672E$-$1 & 1.987E$-$1 & 7.133E$-$1 \\
0.1 & 9.806E$-$1 & 7.472E$-$1 & 9.465E$-$1 & 3.316E$-$1 & 1.642E$-$1 & 4.531E$-$1 \\
0.0 & 9.681E$-$1 & 8.052E$-$1 & 9.350E$-$1 & 3.662E$-$1 & 1.802E$-$1 & 5.162E$-$1 \\
$-$0.1 & 9.728E$-$1 & 8.281E$-$1 & 9.305E$-$1 & 3.781E$-$1 & 1.865E$-$1 & 5.440E$-$1 \\
$-$0.2 & 9.272E$-$1 & 8.744E$-$1 & 9.202E$-$1 & 4.044E$-$1 & 1.947E$-$1 & 5.950E$-$1 \\
$-$0.3 & 8.176E$-$1 & 8.696E$-$1 & 9.173E$-$1 & 4.030E$-$1 & 1.822E$-$1 & 5.767E$-$1 \\
$-$0.4 & 6.726E$-$1 & 7.999E$-$1 & 9.232E$-$1 & 3.644E$-$1 & 1.494E$-$1 & 4.811E$-$1 \\
$-$0.5 & 5.156E$-$1 & 6.745E$-$1 & 9.360E$-$1 & 2.946E$-$1 & 1.058E$-$1 & 3.402E$-$1 \\
$-$0.6 & 3.633E$-$1 & 5.119E$-$1 & 9.523E$-$1 & 2.064E$-$1 & 6.222E$-$2 & 1.971E$-$1 \\
$-$0.7 & 2.257E$-$1 & 3.338E$-$1 & 9.700E$-$1 & 1.171E$-$1 & 2.782E$-$2 & 8.563E$-$2 \\
$-$0.8 & 1.115E$-$1 & 1.671E$-$1 & 9.854E$-$1 & 4.562E$-$2 & 7.617E$-$3 & 2.246E$-$2 \\
$-$0.9 & 3.193E$-$2 & 4.636E$-$2 & 9.961E$-$1 & 7.494E$-$3 & 6.695E$-$4 & 1.897E$-$3 \\
\hline
& & & $\alpha=0.1$ & & & \\
\hline
1.0 & 0.000E$+$0 & 2.490E$-$1 & 5.941E$-$2 & 0.000E$+$0 & 0.000E$+$0 & 5.219E$-$2 \\
0.9 & 3.749E$-$2 & 2.480E$-$1 & 6.607E$-$2 & 1.141E$-$2 & 1.104E$-$3 & 5.258E$-$2 \\
0.8 & 8.080E$-$2 & 2.467E$-$1 & 7.483E$-$2 & 1.696E$-$2 & 2.410E$-$3 & 5.292E$-$2 \\
0.7 & 1.319E$-$1 & 2.440E$-$1 & 8.718E$-$2 & 2.177E$-$2 & 3.954E$-$3 & 5.295E$-$2 \\
0.6 & 1.912E$-$1 & 2.385E$-$1 & 1.053E$-$1 & 2.573E$-$2 & 5.625E$-$3 & 5.212E$-$2 \\
$-$0.2 & 5.916E$-$1 & 4.694E$-$1 & 4.925E$-$1 & 1.546E$-$1 & 5.944E$-$2 & 1.831E$-$1 \\
$-$0.3 & 5.060E$-$1 & 4.825E$-$1 & 4.910E$-$1 & 1.645E$-$1 & 5.852E$-$2 & 1.861E$-$1 \\
$-$0.4 & 4.242E$-$1 & 4.684E$-$1 & 5.224E$-$1 & 1.632E$-$1 & 5.316E$-$2 & 1.709E$-$1 \\
$-$0.5 & 3.439E$-$1 & 4.281E$-$1 & 5.802E$-$1 & 1.499E$-$1 & 4.394E$-$2 & 1.407E$-$1 \\
$-$0.6 & 2.632E$-$1 & 3.596E$-$1 & 6.605E$-$1 & 1.224E$-$1 & 3.141E$-$2 & 9.898E$-$2 \\
$-$0.7 & 1.809E$-$1 & 2.632E$-$1 & 7.617E$-$1 & 8.246E$-$2 & 1.754E$-$2 & 5.379E$-$2 \\
$-$0.8 & 9.929E$-$2 & 1.478E$-$1 & 8.706E$-$1 & 3.804E$-$2 & 5.993E$-$3 & 1.764E$-$2 \\
$-$0.9 & 3.093E$-$2 & 4.481E$-$2 & 9.627E$-$1 & 7.130E$-$3 & 6.270E$-$4 & 1.774E$-$3 \\
\hline
& & &$\alpha=0.01$ & & & \\
\hline
1.0 & 0.000E$+$0 & 2.018E$-$2 & 4.831E$-$3 & 0.000E$+$0 & 0.000E$+$0 & 4.390E$-$4 \\
0.9 & 3.318E$-$3 & 2.006E$-$2 & 5.349E$-$3 & 2.255E$-$4 & 6.494E$-$6 & 4.400E$-$4 \\
0.8 & 7.235E$-$3 & 1.984E$-$2 & 6.058E$-$3 & 3.313E$-$4 & 1.409E$-$5 & 4.391E$-$4 \\
0.7 & 1.199E$-$2 & 1.946E$-$2 & 7.109E$-$3 & 4.155E$-$4 & 2.275E$-$5 & 4.344E$-$4 \\
0.6 & 1.779E$-$2 & 1.862E$-$2 & 8.857E$-$3 & 4.593E$-$4 & 3.064E$-$5 & 4.188E$-$4 \\
$-$0.4 & 1.084E$-$1 & 7.682E$-$2 & 9.693E$-$2 & 1.105E$-$2 & 1.819E$-$3 & 5.609E$-$3 \\
$-$0.5 & 8.838E$-$2 & 8.399E$-$2 & 1.134E$-$1 & 1.363E$-$2 & 2.027E$-$3 & 6.225E$-$3 \\
$-$0.6 & 7.534E$-$2 & 8.618E$-$2 & 1.551E$-$1 & 1.509E$-$2 & 2.071E$-$3 & 6.293E$-$3 \\
$-$0.7 & 6.356E$-$2 & 8.293E$-$2 & 2.362E$-$1 & 1.517E$-$2 & 1.913E$-$3 & 5.714E$-$3 \\
$-$0.8 & 4.852E$-$2 & 6.856E$-$2 & 4.028E$-$1 & 1.229E$-$2 & 1.354E$-$3 & 3.937E$-$3 \\
$-$0.9 & 2.391E$-$2 & 3.357E$-$2 & 7.206E$-$1 & 4.664E$-$3 & 3.583E$-$4 & 1.005E$-$3
\enddata
\tablecomments{The number behind E indicates the exponent of the
power of 10.}
\end{deluxetable*}
Consider non-rotating, self-gravitating isothermal spheres with
$\Omega_s=0$, namely Bonnor-Ebert spheres, in hydrostatic equilibrium. Equations \eqref{e:pos0} and \eqref{e:HSE1} are then combined to yield
\be\label{e:LE}
\frac{1}{\xi^2} \frac{d}{d\xi}
\left(\xi^2\frac{d\psi}{d\xi}\right) = \exp(-\psi),
\ee
where $\psi=(\Phi_s-C)/\cs^2$ and $\xi= \left({4\pi
G\rho_{c}}/{\cs^2}\right)^{1/2} r = \left({4\pi}/{\alpha}\right)^{1/2}
\widehat r$ are the dimensionless potential and radius, respectively.
Equation \eqref{e:LE} can be solved subject to the proper boundary
conditions, $\psi=d\psi/d\xi=0$ at $\xi=0$, to give the density
distribution $\rho=\rho_{c} \exp({-\psi})$, which shall be compared with the results of our SCF method.
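Equation \eqref{e:LE} is a standard initial value problem once the singular point $\xi=0$ is offset slightly; a compact SciPy sketch is given below. The resulting $\rho(\xi)$ is the dashed curve in Figure \ref{f:isosphere}(b), against which the truncated SCF profiles can be checked.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Isothermal Lane-Emden equation (Eq. e:LE):
# psi'' + (2/xi) psi' = exp(-psi), with psi(0) = psi'(0) = 0.
def rhs(xi, y):
    psi, dpsi = y
    return [dpsi, np.exp(-psi) - 2*dpsi/xi]

alpha = 0.1
xi_max = np.sqrt(4*np.pi/alpha)       # truncation radius r_hat = 1
sol = solve_ivp(rhs, [1e-8, xi_max], [0.0, 0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)
xi = np.linspace(1e-8, xi_max, 400)
rho = np.exp(-sol.sol(xi)[0])         # rho/rho_c = exp(-psi)
print(f"boundary contrast rho(R_A)/rho_c = {rho[-1]:.4e}")
\end{verbatim}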
As a second test, we apply our SCF method to obtain density
distributions of isothermal spheres by setting $\hatR_{\rm B}=1$. Figure
\ref{f:isosphere}(a) plots the resulting density profiles for
$\alpha=1$, 0.1, and 0.01 as functions of $\widehat r$: all the cases are
truncated at $\widehat r=1$. Note that $\rho$ varies more steeply for
smaller $\alpha$ in order to compensate for a smaller sound speed in
balancing self-gravity. Note also that when drawn against $\xi$, as
shown in Figure \ref{f:isosphere}(b), all the curves lie well (within
$1\%$) on the inner parts of the solution of Equation \eqref{e:LE}
plotted as a dashed line, confirming the accuracy of our SCF method. A smaller $\alpha$ corresponds to a larger truncation radius in $\xi$.
\subsubsection{Rigidly-rotating Isothermal Equilibria}
\begin{figure}
\includegraphics[angle=0, width=8.5cm]{fig07.pdf}
\caption{Dependence on the angular momentum $j^2$ of (a) the
angular velocity $\omega_s^2$ and
(b) the energy ratio $T/|W|$ for isothermal equilibria with $\alpha=1$, 0.1, and 0.01.
The green shade in (a) represents the regions where
spheroid-like isothermal equilibria can exist.
\label{f:isoJ}}
\end{figure}
\begin{figure*}
\includegraphics[angle=0, width=18cm]{fig08.pdf}
\caption{Dependence on $\hatR_{\rm B}$ of (a) the major axis $\widehat R_0$ and
minor axis $\widehat \eta_0$, and (b) the angular frequency ${\widehat \Omega}_s^2$ of ring-like equilibria for $\alpha=10^5$ (purple), 1 (black), 0.1 (red), and 0.01 (blue). Note that $\widehat R_0$ and $\widehat \eta_0$ for $\alpha=1$ are almost identical to those with $\alpha=10^5$. In (b), the solid lines give the results of our SCF method, while the dotted and dashed lines draw the analytic expressions of \citet{ost64b} for
incompressible and infinitely-extended isothermal rings, respectively.
\label{f:sleOmg}}
\end{figure*}
By taking $\hatR_{\rm B}$ less than unity, we obtain the equilibrium densities of isothermal objects in rigid rotation. Figure \ref{f:isocontour}
presents the resulting density distributions on the meridional plane
for such equilibria with differing $\hatR_{\rm B}$. The left, middle, and
right columns are for $\alpha=1$, 0.1, and 0.01, respectively. Figure
\ref{f:isoprofile} plots the exemplary density profiles along the
$\widehat R$-axis (solid lines) and the $\widehat z$-axis (dotted lines) for the spheroid-like configurations with $\hatR_{\rm B}=0.6$ and the ring-like configurations with $\hatR_{\rm B}=-0.4$. Clearly, an equilibrium is more centrally concentrated for smaller $\alpha$. The vertical extent of an equilibrium body is smaller than the horizontal extent for both spheroid-like and ring-like objects. Unlike Bonnor-Ebert spheres, whose density distributions are independent of $\alpha$ when expressed in terms of $\xi$, we find that the density profiles of rotating isothermal objects with different $\alpha$ along the $\widehat z$- or $\widehat R$-axis cannot be expressed by a single function of $\left({4\pi}/{\alpha}\right)^{1/2} \widehat z$ or
$\left({4\pi}/{\alpha}\right)^{1/2} \widehat R$.
Figure \ref{f:isoRB} plots the variations of the square of the angular velocity ${\widehat \Omega}_s^2$, the total mass $\widehat M$, the averaged density ${\langle\widehat\rho\rangle}$, the kinetic energy $\widehat T$, and the gravitational potential energy $\widehat W$ as functions of $\hatR_{\rm B}$ for isothermal equilibria with $\alpha=1$, 0.1, and 0.01, in comparison with the incompressible cases. Both ${\widehat \Omega}_s$ and $\widehat M$ increase as $|\hatR_{\rm B}|$ decreases from unity. For spheroid-like configurations, this is because an equilibrium body becomes more flattened and occupies a smaller volume as it rotates faster. On the other hand, ring-like configurations attain a larger volume and mass with decreasing $|\hatR_{\rm B}|$, and thus require a larger ${\widehat \Omega}_s$ to balance self-gravity. Note, however, that $\widehat M$ is not a monotonically decreasing function of $\hatR_{\rm B}$ for ring-like configurations due to the complicated dependence of their volume on $\hatR_{\rm B}$ (e.g., Fig.~\ref{f:incomp_shape}b). For $\alpha=0.1$, for example, $\widehat M$ increases as $\hatR_{\rm B}$ moves away from $-1$, is maximized at $\hatR_{\rm B}=-0.30$, and starts to decrease afterwards. Obviously, ${\langle\widehat\rho\rangle}=1$ for incompressible configurations due to uniform density. Overall, ${\langle\widehat\rho\rangle}$ increases with decreasing $\hatR_{\rm B}$ and tends to unity as $\hatR_{\rm B}$ approaches $-1$. The dependence of $\widehat T$ and $\widehat W$ on $\hatR_{\rm B}$ is closely related to that of ${\widehat \Omega}_s^2$ and $\widehat M$, respectively. For $\alpha \lesssim 0.1$, $\widehat M$ and $\widehat W$ are insensitive to $\hatR_{\rm B}\gtrsim0.6$ since the density in the outer parts is very small. All the quantities are smaller with smaller $\alpha$ due to a stronger density concentration: these values are also listed in Table \ref{t:iso} for some selected values of $\hatR_{\rm B}$.
The dependencies of ${\widehat \Omega}_s$ and $\widehat M$ on $\hatR_{\rm B}$ make an
equilibrium sequence with fixed $\alpha\;(\leq 1)$ cease to exist for
an intermediate range of $\hatR_{\rm B}$: $0.13 < \hatR_{\rm B} < 0.27 $ for
$\alpha=1$, $-0.14 <\hatR_{\rm B} < 0.51$ for $\alpha=0.1$, and $-0.40 <
\hatR_{\rm B} < 0.59$ for $\alpha=0.01$, with the corresponding boundaries
marked by filled circles in Figure \ref{f:isoRB}. This is unlike the
incompressible bodies for which any value of $0\leq |\hatR_{\rm B}| \leq 1$ readily yields an equilibrium. As mentioned in Section \ref{s:bd}, the presence of a steady equilibrium requires large enough $|{\widehat \Phi}_s|$ and/or small enough ${\widehat \Omega}_s$ in order for $\rho$ to be a decreasing function of $\widehat R$ near $\widehat R=1$ (see Eq.~\eqref{e:HSE2}). The absence of an isothermal equilibrium for intermediate $\hatR_{\rm B}$ results from the fact that the self-gravitational potential is too weak to overcome the centrifugal potential in establishing gravitationally bound objects.
Figure \ref{f:isoJ} plots the square of the normalized angular
velocity $\omega_s^2$ as well as the energy ratio $t=T/|W|$ as
functions of the square of the normalized angular momentum $j^2$ for
isothermal equilibria with $\alpha=1$, 0.1, and 0.01. The
incompressible cases with $\alpha=10^5$ are compared as dashed lines.
Comparison of Figure \ref{f:isoJ} with Figures 10 and 11 of
\citet{hac86a} reveals that the $\omega_s^2$--$j^2$ and $t$--$j^2$
relationships of isothermal objects with $\alpha \sim 0.01$--1 are very close to those of polytropes with index $n\sim 0.1$--1.5.
\citet{hac86a} found that spheroid-like polytropic equilibria can be
possible only in the region $\omega_s^2+5j^2<0.185$ when $j^2<0.02$.
Our results suggest that spheroid-like isothermal equilibria can exist in the shaded region in Figure \ref{f:isoJ}(a), which is bounded by
\be
\omega_s^2+ 4.12 j^2 = 0.172,
\ee
and the incompressible $\omega_s^2$--$j^2$ relation.
\subsection{Slender Rings}
Here we focus on the properties of slender isothermal rings whose minor axis is much shorter than the major axis. Density distributions of such rings can be obtained by taking $\hatR_{\rm B}$ close to $-1$ in our SCF method. Using a perturbation analysis, \citet{ost64b} derived
approximate expressions for the density and angular frequency of both
polytropic and isothermal rings in steady equilibrium. Our objective in this subsection is to compare the results of our SCF method with
\citet{ost64b}.
For a ring-like configuration with $\hatR_{\rm B}<0$, we define its major
axis $R_0$ and minor axis $\eta_0$ as
\be\label{e:axis}
R_0 = {\mathcal V}/(2\pi {\mathcal A}),\;\;\;\text{and}
\;\;\;\eta_0=({\mathcal A}/\pi)^{1/2},
\ee
where ${\mathcal V}$ and ${\mathcal A}$ refer to the total volume and the meridional cross section, respectively, occupied by the equilibrium body. We plot in Figure \ref{f:sleOmg}(a) as solid lines
$\widehat R_0=R_0/R_{\rm A}$ and $\widehat \eta_0= \eta_0/R_{\rm A}$ together with
$\widehat R_0+\widehat \eta_0$ resulting from the SCF method as functions of
$\hatR_{\rm B}$ for $\alpha=10^5$, 1, 0.1, and 0.01. Note that $\widehat \eta_0
\simeq (1+\hatR_{\rm B})/2$ and $\widehat R_0 \simeq (1-\hatR_{\rm B})/2$, resulting
in $\widehat R_0 + \widehat \eta_0 \simeq 1$, as expected, for $\hatR_{\rm B} \lesssim -0.6$. This indicates that $R_0$ and $\eta_0$ defined in Equation \eqref{e:axis} describe the real major and minor axes of slender rings reasonably well.
Using a perturbation analysis, \citet{ost64b} showed that an
incompressible, slender ring in the absence of external gravity should obey
\be\label{e:Ostinc}
{\widehat \Omega}_s^2= \frac{M}{2\pi R_0^3\rho_{c}} \left[
\ln \left(\frac{8R_0}{\eta_0}\right) - \frac{5}{4} \right].
\ee
For an isothermal ring of infinite extent (without pressure
truncation), he also derived
\be\label{e:Ostiso}
{\widehat \Omega}_s^2= \frac{M_\infty}{2\pi R_0^3\rho_{c}} \left[
\ln \left(\frac{8R_0}{\eta_{1/2}}\right) - 2 \right],
\ee
which is valid when $\eta_{1/2}/R_0\ll1$. Here, $M_\infty$ denotes the total mass and $\eta_{1/2}=[M_\infty/(2\pi^2R_0 \rho_{c})]^{1/2}$ is the half-mass radius. Figure \ref{f:sleOmg}(b) plots Equation
\eqref{e:Ostinc} as dotted lines, which are in good agreement with the results of our SCF method, shown as solid lines, for $\alpha \gtrsim 1$ at $\hatR_{\rm B} \lesssim -0.4$ and even for $\alpha$ as small as 0.01 at $\hatR_{\rm B}\lesssim -0.8$. Figure \ref{f:sleOmg}(b) also plots Equation \eqref{e:Ostiso} as dashed lines after taking $M=M_\infty$, which matches our SCF results well only for $\alpha=0.01$. The discrepancy between Equation \eqref{e:Ostiso} and our results is in fact expected since the isothermal rings considered in the present paper are truncated by external pressure. Since rings with $\alpha=0.01$ are highly concentrated, however, Equation \eqref{e:Ostiso} can still be a good approximation for truncated slender rings.
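Both expressions are trivial to evaluate given the global quantities returned by the SCF method. As a sanity check, the sketch below evaluates Equation \eqref{e:Ostinc} for the incompressible $\hatR_{\rm B}=-0.8$ ring of Table \ref{t:incomp} (for which $\widehat R_0\simeq0.9$ and $\widehat \eta_0\simeq0.1$) and recovers the tabulated ${\widehat \Omega}_s^2=1.130\times10^{-1}$ to within about $1\%$.
\begin{verbatim}
import numpy as np

# Ostriker (1964) angular frequencies; units G = rho_c = 1.
def omg2_inc(M, R0, eta0):                   # Eq. (e:Ostinc)
    return M/(2*np.pi*R0**3)*(np.log(8*R0/eta0) - 1.25)

def omg2_iso(M, R0):                         # Eq. (e:Ostiso)
    eta_half = np.sqrt(M/(2*np.pi**2*R0))    # half-mass radius
    return M/(2*np.pi*R0**3)*(np.log(8*R0/eta_half) - 2.0)

# incompressible ring with R_B = -0.8 from the table above:
print(omg2_inc(M=1.696e-1, R0=0.9, eta0=0.1))    # ~1.12e-1
\end{verbatim}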
As the bottom panels of Figure \ref{f:isocontour} show, the density
distributions of slender rings at the meridional plane appear almost
circularly symmetric with respect to the point $(R,z)=(R_0, 0)$.
\citet{ost64b} showed that the meridional density distribution is, to
the zeroth order in $\eta_{1/2}$, given by
\be\label{e:rhosr}
\rho_{\rm sr} = \frac{\rho_{c}}{[1+ (\eta/H)^2/8]^2},
\ee
where $\eta$ denotes the distance from the density maximum and
\be
H\equiv {\cs}/{(4\pi G \rho_{c})^{1/2}}
\ee
is the characteristic ring thickness. Note that Equation
\eqref{e:rhosr} is also the solution for non-rotating isothermal
cylinders of infinite length along its symmetry axis (e.g.,
\citealt{ost64a,nag87, inu92}). Figure \ref{f:slprofile}(a) compares
Equation \eqref{e:rhosr} (black) with the density profiles from the SCF method for slender rings with $\hatR_{\rm B}=-0.8$ (or with $\widehat \eta_0=0.1$) along the radial (blue) and vertical (red) directions from the density maximum. The cases with $\alpha=1$, 0.1, and 0.01 are shown as dashed, dotted, and solid lines, respectively. The relative errors, $\rho_{\rm sr}/\rho-1$, given in Figure \ref{f:slprofile}(b) are only a few percent in most of the dense regions, demonstrating that Equation \eqref{e:rhosr} is a good approximation to the true density distributions of slender isothermal rings.
\section{Gravitational Instability of Slender Rings}\label{s:GI}
We now analyze gravitational instability of an isothermal ring with
$\eta_0/R_0\ll1$. As a background density distribution, we take
\be\label{e:den0}
\rho_0 =
\begin{cases}
\rho_{\rm sr} ,\;\; &\textrm{for} \;\eta \leq \eta_0, \\
0, \;\; &\textrm{otherwise}.
\end{cases}
\ee
The ring is rotating at an angular frequency $\Omega_0$ mostly due
to the external gravity, such that the initial velocity is given by
${\bf v}_0 = R\Omega_0 {\bf e}_\phi$, where ${\bf e}_\phi$ is the unit vector in the $\phi$-direction.
\subsection{Perturbation Equations}\label{e:ptbeq}
The basic hydrodynamic equations governing evolution of isothermal gas read
\be\label{e:con}
\frac{\partial \rho}{\partial t} + \nabla\cdot (\rho {\bf v}) =0,
\ee
\be\label{e:mom}
\frac{\partial {\bf v}}{\partial t} + {\bf v}\cdot \nabla {\bf v} =
-\frac{\cs^2}{\rho} \nabla\rho - \Omega_e^2 {\bf R} - \nabla \Phi_s,
\ee
together with Equation \eqref{e:pos0}.
To analyze the linear stability of a ring, it is convenient to introduce the new curvilinear coordinates $(\eta, \lambda, \phi)$, as depicted in Figure \ref{f:coord}. The new coordinates are related to the Cartesian coordinate system $(x, y, z)$ through
\be\label{e:coord}
\left(
\begin{array}{c}
x \\ y \\ z
\end{array} \right)
= \left(
\begin{array}{c}
(R_0 + \eta \cos\lambda) \cos\phi \\
(R_0 + \eta \cos\lambda) \sin\phi \\
\eta \sin\lambda
\end{array} \right).
\ee
The coordinate $\eta$ is the distance from a reference circle of radius $R_0$ located in the horizontal plane, while $\lambda$ is the polar angle measured from the horizontal plane; $\phi$ is the usual
cylindrical azimuthal angle. The new coordinates are orthogonal, and
reduce to the usual spherical polar coordinates $(\eta, \pi/2- \lambda, \phi)$ in the limit of $R_0/\eta \ll 1$.
Appendix \ref{a:eqn} derives gas dynamical equations for slender rings in the new curvilinear coordinates. Fluid variables are in general three-dimensional, and finding dispersion relations of
three-dimensional perturbations applied to a ring is a daunting task. However, the gravitationally unstable modes we seek in the present work are dominated by the velocity components in the $\eta$- and
$\phi$-directions, and involve little gas motion in the
$\lambda$-direction. For simplicity, therefore, we take $v_\lambda=0$
and assume that all physical quantities are independent of $\lambda$.
We also take $\cos\lambda=1$, which allows us to fully capture the
rotational effect of the ring material around the symmetry axis.
We note, however, that our method, by construction, is unable to handle shear that may exist in the rotation of the ring. In Section \ref{s:sim}, we will use direct numerical simulations to show that shear does not significantly affect gravitational instabilities of slender rings.
Under these circumstances, Equations \eqref{e:con2}--\eqref{e:pos2} can be simplified to
\be\label{e:con3}
\frac{\partial \rho}{\partial t} + \frac{1}{\eta}\frac{\partial (\eta \rho v_\eta)}{\partial \eta}
+ \frac{1}{R_0} \frac{\partial (\rho v_\phi) }{\partial \phi} =0,
\ee
\be\label{e:meta3}
\frac{\partial v_\eta}{\partial t} +
\left( v_\eta \frac{\partial}{\partial \eta} + \frac{v_\phi}{R_0}\frac{\partial}{\partial \phi} \right) v_\eta
- \frac{v_\phi^2}{R_0}
= -\frac{\cs^2}{\rho}\frac{\partial \rho}{\partial \eta} - \Omega_e^2 R_0 - \frac{\partial \Phi_s}{\partial \eta},
\ee
\be\label{e:mphi3}
\frac{\partial v_\phi}{\partial t} +
\left( v_\eta \frac{\partial}{\partial \eta} + \frac{v_\phi}{R_0}\frac{\partial}{\partial \phi} \right) v_\phi
+ \frac{v_\eta v_\phi}{R_0}
= -\frac{\cs^2}{R_0\rho}\frac{\partial \rho}{\partial \phi} -
\frac{1}{R_0} \frac{\partial \Phi_s}{\partial \phi},
\ee
\be\label{e:pos3}
\frac{\partial^2\Phi_s}{\partial \eta^2}
+ \frac{1}{\eta} \frac{\partial\Phi_s}{\partial \eta}
+ \frac{1}{R_0^2} \frac{\partial^2\Phi_s}{\partial \phi^2} = 4\pi G\rho.
\ee
The initial density distribution (i.e., Eq.\ [\ref{e:den0}]) satisfies Equations \eqref{e:meta3} and \eqref{e:pos3} as long as
$\Omega_s^2 \ll \Omega_0^2$, a condition easily met for nuclear rings.
We now consider small-amplitude perturbations applied to the initial equilibrium. Denoting the background quantities and the perturbations using the subscripts ``0'' and ``1'', respectively, we linearize Equations \eqref{e:con3}--\eqref{e:pos3} to obtain
\be\label{e:lcon}
\left( \frac{\partial }{\partial t} + \Omega_0 \frac{\partial}{\partial \phi} \right) \rho_1
+ \frac{1}{\eta}\frac{\partial (\eta \rho_0 v_{\eta1})}{\partial \eta}
+ \frac{\rho_0}{R_0} \frac{\partial v_{\phi1} }{\partial \phi} =0,
\ee
\be\label{e:lmeta}
\left( \frac{\partial }{\partial t} + \Omega_0 \frac{\partial}{\partial \phi} \right) v_{\eta1}
-2 \Omega_0 v_{\phi1}
= -\frac{\partial\chi_1 }{\partial \eta},
\ee
\be\label{e:lmphi}
\left( \frac{\partial }{\partial t} + \Omega_0 \frac{\partial}{\partial \phi} \right) v_{\phi1}
+ 2 \Omega_0 v_{\eta1}
= -\frac{1}{R_0}\frac{\partial\chi_1 }{\partial \phi},
\ee
\be\label{e:lpos}
\frac{\partial^2\Phi_{s1}}{\partial \eta^2}
+ \frac{1}{\eta} \frac{\partial\Phi_{s1}}{\partial \eta}
+ \frac{1}{R_0^2} \frac{\partial^2\Phi_{s1}}{\partial \phi^2} = 4\pi
G\rho_1,
\ee
where
\be
\chi_1 \equiv \cs^2\frac{\rho_1}{\rho_0} + \Phi_{s1}.
\ee
We assume that any perturbation, $q_1$, varies in space and time as
\be\label{e:ptb}
q_1 (\eta, \phi, t) = q_1(\eta) \exp( im\phi - i\omega t ),
\ee
with $\omega$ and $m$ being the frequency and azimuthal mode number of the perturbations, respectively. Substituting Equation \eqref{e:ptb} into Equations \eqref{e:lcon}--\eqref{e:lpos},
one obtains
\be\label{e:lcon1}
\frac{d v_{\eta1}}{d\eta} + \frac{d\ln(\eta\rho_0)}{d\eta} v_{\eta1}
= i \omega_D \frac{\rho_1}{\rho_0} - i\frac{m}{R_0} v_{\phi1},
\ee
\be\label{e:lmeta1}
v_{\eta1} = \frac{i}{\omega_D^2 - 4\Omega_0^2}
\left(\frac{ 2m\Omega_0}{R_0}\chi_1 -\omega_D \frac{d\chi_1}{d\eta} \right),
\ee
\be\label{e:lmphi1}
v_{\phi1} = \frac{1}{\omega_D^2 - 4\Omega_0^2}
\left(\frac{m\omega_D}{R_0}\chi_1 - 2\Omega_0 \frac{d\chi_1}{d\eta} \right),
\ee
\be\label{e:lpos1}
\frac{d^2\Phi_{s1}}{d \eta^2}
+ \frac{1}{\eta} \frac{d\Phi_{s1}}{d \eta}
- \frac{m^2}{R_0^2} \Phi_{s1} = 4\pi G\rho_1,
\ee
where $\omega_D \equiv \omega - m\Omega_0$ is the Doppler-shifted
frequency. Eliminating $v_{\eta1}$ and $v_{\phi1}$ in favor of $\chi_1$ from Equations \eqref{e:lcon1}--\eqref{e:lmphi1}, one finally obtains
\be\label{e:lcon2}
\begin{split}
\frac{d^2\chi_1}{d\eta^2} +
\frac{d\ln(\eta\rho_0)}{d\eta} \frac{d\chi_1}{d\eta} -
\left[ \frac{2\Omega_0}{\omega_D} \frac{m}{R_0} \frac{d\ln(\eta\rho_0)}{d\eta}
+ \frac{m^2}{R_0^2} \right] \chi_1 \\
= -(\omega_D^2-4\Omega_0^2) \frac{\rho_1}{\rho_0}.
\end{split}
\ee
Equations \eqref{e:lpos1} and \eqref{e:lcon2} constitute a set of
coupled equations that can be solved simultaneously for eigenvalue
$\omega$ subject to proper boundary conditions.
\begin{figure}
\includegraphics[angle=0, width=8.5cm]{fig09.pdf}
\caption{
(a) Equilibrium density profiles of slender rings with $\hatR_{\rm B}=-0.8$ along the radial (blue) and vertical (red) directions measured from the density maximum for $\alpha=1$ (dotted), 0.1 (dashed), and 0.01 (solid), compared to the respective approximate solutions $\rho_{\rm sr}$ (black) given by Equation \eqref{e:rhosr}. (b) Relative errors, $\rho_{\rm sr}/\rho-1$, of the approximate solutions to the SCF results.
\label{f:slprofile}}
\end{figure}
\subsection{Boundary Conditions}\label{e:bdcon}
Since Equations \eqref{e:lpos1} and \eqref{e:lcon2} are second-order
differential equations, we need to have five constraints in order to
determine $\omega$ unambiguously. Since this is a linear problem, we
are free to choose the amplitude of one variable arbitrarily. Two
conditions come from the inner boundary by the requirements
\be\label{e:bd1}
\left.\frac{d\chi_1}{d\eta}\right|_{\eta=0} =
\left.\frac{d\Phi_{s1}}{d\eta}\right|_{\eta=0} =0,
\ee
for regular solutions at $\eta=0$.
The remaining two conditions can be obtained from the outer boundary.
Perturbations given in Equation \eqref{e:ptb} also disturb the ring
surface to
\be\label{e:bdsurf}
\eta = \eta_0 + \eta_1 \exp(im\phi - i\omega t ),
\ee
with amplitude $\eta_1$, implying that
\be\label{e:velbd}
v_{\eta1} (\eta_0) = -i\omega \eta_1.
\ee
The pressure equilibrium at the disturbed surface, $\cs^2
\rho_0(\eta_0)+P_1(\eta_0)=P_{\rm ext}$, with $P_1$ representing the
perturbed pressure (e.g., \citealt{nag87}), requires
\be\label{e:3rdbd}
\left.\frac{d\rho_0}{d\eta}\right|_{\eta_0} \eta_1 + \rho_1(\eta_0) = 0,
\ee
which is a third boundary condition.
\begin{figure}
\includegraphics[angle=0, width=8.5cm]{fig10.pdf}
\caption{Schematic geometry of a ring with the major axis $R_0$ and the minor axis $\eta_0$. The coordinates $\eta$, $\lambda$, and $\phi$ measure the distance from the reference circle of radius $R_0$, the polar angle measured from the horizontal plane, and the usual cylindrical azimuthal angle, respectively.\label{f:coord}}
\end{figure}
To derive a fourth boundary condition, we follow \citet{gol65} to
assume that the perturbed mass near the outer boundary is restricted to a thin annulus such that $\rho_1 =\rho_0\eta_1\delta (\eta-\eta_0)$ (see also \citealt{nag87,kimjg}). Integrating Equation \eqref{e:lpos1} from $\eta=\eta_0$ to $\eta=\eta_0 + \eta_1$, one obtains
\be\label{e:pbd}
\frac{d\Phi_{s1}^+}{d\eta} -
\frac{d\Phi_{s1}^-}{d\eta} = 4\pi G\rho_0\eta_1\;\;\;\text{at}\;\;\;
\eta=\eta_0,
\ee
where the superscripts ``$+$'' and ``$-$'' indicate the potentials evaluated just outside and inside the ring surface, respectively. Assuming that the region outside the ring is filled with an extremely hot, tenuous gas, $\Phi_{s1}^+$ should satisfy Equation \eqref{e:lpos} with $\rho_1=0$. The regular solution at infinity can be expressed as
\be\label{e:extp}
\Phi_{s1}^+ = A K_0 \left( \frac{m}{R_0} \eta \right),\;\;\;\text{for}\;\;\;\eta/\eta_0 \geq 1,
\ee
where $A$ is a constant and $K_n$ is the modified Bessel function
of the second kind of order $n$. The condition that the gravitational potential
should be continuous across the surface gives $A=
\Phi_{s1}(\eta_0)/K_0(m\eta_0/R_0)$. Plugging Equation \eqref{e:extp}
into Equation \eqref{e:pbd} gives
\be\label{e:4thbd}
\frac{d\Phi_{s1}}{d\eta} + 4\pi G\rho_0\eta_1 = -\frac{m}{R_0}\frac{K_1(m\eta_0/R_0)}{K_0(m\eta_0/R_0)} \Phi_{s1}
\;\;\;\text{at}\;\;\; \eta=\eta_0.
\ee
Equations \eqref{e:bd1}, \eqref{e:3rdbd}, and \eqref{e:4thbd} are our
complete set of the boundary conditions.
\subsection{Method of Computation}\label{e:mtd}
By writing Equations \eqref{e:lpos1} and \eqref{e:lcon2} into a
dimensionless form, one can see that the problem of finding the
dimensionless eigenvalue $\widehat \omega \equiv \omega/(G\rho_{c})^{1/2}$ is
completely specified by four dimensionless parameters, $\eta_0/R_0$,
$R_0/H$, ${\widehat \Omega}_0 \equiv \Omega_0/(G\rho_{c})^{1/2}$, and $m$. Nuclear rings typically have $\eta_0\sim 0.1\kpc$, $R_0\sim1\kpc$,
$M\sim4\times 10^8{\rm\,M_\odot}$, and $\Omega_0\sim 100$--$200
\kms\;\kpc^{-1}$. The corresponding mean density is ${\langle\rho\rangle} =
M/(2\pi^2\eta_0^2R_0) \sim 1\times 10^{-22} \rm\;g\;cm^{-3}$, and the
characteristic ring thickness is
\be
H = 35 \pc\;\left(\frac{\cs}{10 \kms}\right)
\left( \frac{\rho_{c}}{1\times 10^{-22} \rm\;g\;cm^{-3}} \right)^{-1/2}.
\ee
We therefore choose $\eta_0/R_0=0.1$ and $R_0/H=30$ as our standard set of parameters, but also vary $R_0/H$ and ${\widehat \Omega}_0$ to explore the effects of ring thickness and rotation on the ring stability. For slender rings with $\eta_0/R_0=0.1$, $R_0/H$ is related to $\alpha$ through $\alpha \simeq 10.4 \left({R_0}/{H}\right) ^{-2}$.
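The numerical coefficient quoted above follows directly from the definitions involved: the relation $\xi=(4\pi/\alpha)^{1/2}\widehat r$ in Section \ref{s:iso} implies $\alpha=\cs^2/(G\rho_{c}R_{\rm A}^2)$, so that $\alpha = 4\pi (H/R_{\rm A})^2$, and with $R_{\rm A}=R_0+\eta_0=1.1R_0$ for $\eta_0/R_0=0.1$,
\be
\alpha = \frac{4\pi}{(1.1)^2} \left(\frac{R_0}{H}\right)^{-2} \simeq 10.4 \left(\frac{R_0}{H}\right)^{-2}.
\ee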
\begin{figure*}
\includegraphics[angle=0, width=18cm]{fig11.pdf}
\caption{(a)--(c)
Imaginary parts of the eigenfrequencies of the unstable modes for
various values of the angular frequency ${\widehat \Omega}_0$ in the rings with
$R_0/H=10$, 30, and 60, and (d) the real parts of the unstable
eigenfrequencies for the rings with $R_0/H=30$. For all cases, the
ratio of the ring minor to major axes is fixed to $\eta_0/R_0=0.1$. A
ring with larger $R_0/H$ (or smaller $\alpha$) is more unstable, and
thus has a larger growth rate and a larger unstable range of $m$.
Rotation tends to reduce the growth rate.
\label{f:rdisp_all}}
\end{figure*}
As a normalization condition, we take ${\rm Re}(\chi_1)={\rm
Im}(\chi_1)=1$ at the outer boundary. For given $m$ and ${\widehat \Omega}_0$, we first choose two trial values for $\widehat \omega$ and $\Phi_{s1}$ at
$\eta=\eta_0$, and calculate $d\chi_1/d\eta$ and $d\Phi_{s1}/d\eta$ at the outer boundary from Equations \eqref{e:3rdbd} and \eqref{e:4thbd}. Next, we integrate Equations \eqref{e:lpos1} and \eqref{e:lcon2} from $\eta=\eta_0$ to $\eta=0$ and check if the two conditions in Equation \eqref{e:bd1} are satisfied. If not, we vary $\Phi_{s1}(\eta_0)$ and $\widehat \omega$ one by one and repeat the calculations until the inner boundary conditions are fulfilled within tolerance.
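In practice, the shooting procedure amounts to root finding for the four real numbers $[{\rm Re}\,\widehat \omega, {\rm Im}\,\widehat \omega, {\rm Re}\,\Phi_{s1}(\eta_0), {\rm Im}\,\Phi_{s1}(\eta_0)]$. The Python sketch below mirrors it with $G=\rho_{c}=R_0=1$ and the background taken from Equation \eqref{e:rhosr}; the inward integration is stopped at a small but finite radius, the normalization $\chi_1(\eta_0)=1$ replaces the Re/Im choice quoted above (any nonzero complex constant is equivalent by linearity), and the initial guess is ours. Convergence of \texttt{fsolve} is not guaranteed for arbitrary parameters, so some experimentation with the guess may be needed.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve
from scipy.special import k0, k1

# Units: G = rho_c = R0 = 1; eta0/R0 = 0.1 and R0/H = 30.
eta0, H, m, Omg0 = 0.1, 1/30, 10, 0.0
cs2 = 4*np.pi*H**2                    # from H = cs/(4 pi G rho_c)^(1/2)
rho0    = lambda e: 1/(1 + e**2/(8*H**2))**2          # Eq. (e:rhosr)
dlnrho0 = lambda e: -(e/(2*H**2))/(1 + e**2/(8*H**2))

def rhs(eta, y, omega):               # Eqs. (e:lpos1) and (e:lcon2)
    chi, dchi, phi, dphi = y
    om_d = omega - m*Omg0
    dln = 1/eta + dlnrho0(eta)        # d ln(eta rho0)/d eta
    rr = (chi - phi)/cs2              # rho1/rho0
    d2chi = (-dln*dchi + (2*Omg0*m/om_d*dln + m**2)*chi
             - (om_d**2 - 4*Omg0**2)*rr)
    d2phi = -dphi/eta + m**2*phi + 4*np.pi*rho0(eta)*rr
    return [dchi, d2chi, dphi, d2phi]

def resid(p):
    omega, phi_b = p[0] + 1j*p[1], p[2] + 1j*p[3]
    om_d, chi_b = omega - m*Omg0, 1.0 + 0.0j     # normalization
    eta1 = -(chi_b - phi_b)/(cs2*dlnrho0(eta0))  # Eq. (e:3rdbd)
    # d chi/d eta at eta0 from Eqs. (e:lmeta1) and (e:velbd):
    dchi_b = (2*m*Omg0*chi_b + omega*(om_d**2 - 4*Omg0**2)*eta1)/om_d
    dphi_b = (-m*k1(m*eta0)/k0(m*eta0)*phi_b
              - 4*np.pi*rho0(eta0)*eta1)         # Eq. (e:4thbd)
    y0 = np.array([chi_b, dchi_b, phi_b, dphi_b], dtype=complex)
    s = solve_ivp(rhs, [eta0, 1e-3], y0, args=(omega,), rtol=1e-8)
    return [s.y[1, -1].real, s.y[1, -1].imag,    # Eq. (e:bd1)
            s.y[3, -1].real, s.y[3, -1].imag]

p = fsolve(resid, [0.0, 1.0, -0.5, 0.0])  # guess Im(omega) ~ 1; cf. Fig. 11
print("omega_hat =", complex(p[0], p[1]))
\end{verbatim}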
\subsection{Dispersion Relations}\label{s:result}
Figure \ref{f:rdisp_all}(a)--(c) plots the imaginary parts of
eigenfrequencies for gravitationally unstable modes for isothermal
rings with $R_0/H=10$, 30, and 60 (or $\alpha=1.04\times 10^{-1}$,
$1.15\times10^{-2}$, and $2.88\times10^{-3}$) as functions of the
azimuthal mode number $m$ and the rotational angular frequency
${\widehat \Omega}_0$. For all cases, the ring thickness is fixed to
$\eta_0/R_0=0.1$. When ${\widehat \Omega}_0=0$, the maximum growth rates are
Im$({\widehat \omega}_{\rm max})=0.88$, 1.01, and 1.19, which are achieved at $m_{\rm
max}=6$, 10, and 18, for the rings with $R_0/H=10$, 30, and 60,
respectively. Note that the dispersion relations with ${\widehat \Omega}_0=0$ are identical to those of axisymmetric modes in an infinite isothermal cylinder, presented by \citet{nag87}, as long as $m/R_0$ is replaced with the vertical wavenumber $k$. Without rotation, $\widehat \omega$ is always an imaginary number, corresponding to pure instability. When ${\widehat \Omega}_0\neq 0$, however, eigenfrequencies are complex numbers with the real parts almost linearly proportional to $m$, corresponding to overstability, as exemplified in Figure \ref{f:rdisp_all}(d) for $R_0/H=30$. This is a generic property of any instability occurring in a non-static background medium (e.g., \citealt{mat94,kim14a,kim15}).
\begin{figure}
\includegraphics[angle=0, width=8.5cm]{fig12.pdf}
\caption{Critical angular
frequencies as a function of $\alpha$, with the
upper-right region corresponding to stable configurations.
The red solid line with dots is the results of our full stability analysis, while the blue dashed line draws Equation \eqref{e:lcrit} from the local dispersion relation. The critical frequency from the Toomre condition is given as a horizontal dotted line for comparison. See text for details.
\label{f:crit_cal}}
\end{figure}
It is apparent that a ring with larger $R_0/H$ (or smaller $\alpha$) is more unstable owing to a smaller sound speed and/or a larger ring mass. Overall, rotation tends to stabilize the instability, reducing both the growth rate and the unstable range of $m$, although their dependence on ${\widehat \Omega}_0$ is not simple. When $R_0/H=10$ (or 30), the reduction of the growth rate due to rotation is larger at larger (or smaller) $m$, shifting $m_{\rm max}$ to a smaller (or larger) value as ${\widehat \Omega}_0$ increases. In the case of $R_0/H=60$, on the other hand, rotation simply reduces the growth rate, without much affecting the unstable range of $m$. The instability is completely suppressed when ${\widehat \Omega}_0\gtrsim 0.81$, 0.64, and 3.70 for $R_0/H=10$, 30, and 60, respectively.
Figure \ref{f:crit_cal} plots the critical angular frequency ${\widehat \Omega}_{0,\rm crit}$ against $\alpha$ as the solid line, with the upper-right region corresponding to the stable regime. Note that ${\widehat \Omega}_{0,\rm crit}$ is almost constant at $\sim 0.7$ for $\alpha\gtrsim0.01$ and increases rapidly as $\alpha$ decreases.
It is interesting to compare our results for ${\widehat \Omega}_{0,\rm crit}$ with those from other simple estimates. The Toomre stability parameter $Q_T=\kappa_0\cs/(\pi G \Sigma_0)$ has usually been invoked to judge whether a flattened system under consideration is gravitationally stable or not. For a ring, it is unclear how to choose $\Sigma_0$ since the ring surface density varies with $R$. If we simply take $\Sigma_0=2\rho_{c} H$, about half of the maximum surface density $\Sigma_{\rm max}=2\int_0^\infty\rho_{\rm sr} d\eta = 2^{1/2}\pi \rho_{c} H$ across the ring center, the Toomre condition for marginal stability $Q_T=1$ corresponds to ${\widehat \Omega}_{0,\rm crit} = (\pi/4)^{1/2}\approx 0.89$, independent of $\alpha$, which is plotted as the horizontal dotted line in Figure \ref{f:crit_cal}. The critical angular frequency from the Toomre condition is close to the results of our stability analysis for $\alpha \gtrsim 0.01$, but deviates considerably for smaller $\alpha$. Strictly speaking, the Toomre condition is valid only for disks that are infinitesimally thin in the vertical direction but infinitely extended in the horizontal direction. Even the thin-disk gravity underestimates the self-gravity of highly concentrated rings with $\alpha\lesssim 0.01$.
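For reference, the number quoted above follows in one step: with the epicyclic frequency $\kappa_0=2\Omega_0$ for rigid rotation and $\cs/H=(4\pi G\rho_{c})^{1/2}$ from the definition of $H$,
\be
Q_T = \frac{2\Omega_0\cs}{2\pi G \rho_{c} H} = \frac{2\Omega_0}{(\pi G\rho_{c})^{1/2}},
\ee
so that marginal stability, $Q_T=1$, gives ${\widehat \Omega}_{0,\rm crit}=\Omega_{0,\rm crit}/(G\rho_{c})^{1/2}=(\pi/4)^{1/2}$.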
\citet{elm94} presented a local dispersion relation for gravitational instability of nuclear rings by treating them as being thin and locally cylindrical. In Appendix \ref{a:wkb}, we solve Equations \eqref{e:lpos1} and \eqref{e:lcon2} for local waves that vary very rapidly in the azimuthal direction (i.e., $m/R_0 \gg |d\ln(\eta\rho_0)/d\eta|$) but remain constant in the $\eta$-direction (i.e., $d\chi_1/d\eta=0$). The resulting dispersion relation (Eq.~\eqref{e:ldisp}) is the same as the one given in \citet{elm94} (in the absence of magnetic fields and gas accretion). The critical angular frequency for local waves is then given by
\be\label{e:lcrit}
{\widehat \Omega}_{0,\rm crit}^2 = \max_m \left\{ \pi \left[ 1 - \frac{m\eta_0}{R_0} K_1\left( \frac{m\eta_0}{R_0} \right) \right] - \frac{\alpha}{4} m^2 \right\},
\ee
which is plotted in Figure \ref{f:crit_cal} as the blue dashed line for $\eta_0/R_0=0.1$. Although ${\widehat \Omega}_{0,\rm crit}$ from Equation \eqref{e:lcrit} is similar to the results of the full analysis for $\alpha\sim 3\times 10^{-2}$, it underestimates the latter considerably for $\alpha \lesssim 5\times 10^{-3}$.
This is because rings with smaller $\alpha$ are increasingly more centrally concentrated, so that the approximations of constant $\rho_0$ and $\chi_1$ over $\eta$ become invalid.\footnote{The critical density $\rho_{\rm crit}=0.6\kappa^2/G$ of \citet{elm94}, or equivalently ${\widehat \Omega}_{0,\rm crit}= 0.65$ in our notation, mentioned in Section \ref{s:intro} was based on the assumption of $m \eta_0/R_0 \lesssim 1$, which conflicts with the local approximation employed in the derivation of Equation \eqref{e:lcrit}. In view of Figure \ref{f:rdisp_all}, it cannot capture the most unstable modes for small $\alpha$, either.} For $\alpha \propto (H/R_0)^2 \rightarrow 0$, rings can be approximated as strongly concentrated cylinders for which motions along the symmetry axis do not affect their gravitational instability much. Very large values of ${\widehat \Omega}_0$ are required to stabilize such small-$\alpha$ rings.
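Equation \eqref{e:lcrit} is easily evaluated numerically, as in the sketch below: it returns ${\widehat \Omega}_{0,\rm crit}\approx0.7$ at $\alpha=3\times10^{-2}$, close to the full result, but only $\approx1.5$ at $\alpha=3\times10^{-3}$, where the full analysis gives $\approx3.7$.
\begin{verbatim}
import numpy as np
from scipy.special import k1

def omg0_crit_local(alpha, eta0_over_R0=0.1, m_max=200):
    """Critical angular frequency from Eq. (e:lcrit)."""
    m = np.arange(1, m_max + 1)
    x = m*eta0_over_R0
    f = np.pi*(1 - x*k1(x)) - 0.25*alpha*m**2
    return np.sqrt(max(f.max(), 0.0))

for alpha in (3e-2, 3e-3):
    print(alpha, omg0_crit_local(alpha))
\end{verbatim}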
We thus conclude that both the Toomre condition and the local results
cannot adequately describe the critical angular frequencies of nuclear rings, especially when $\alpha$ is very small. In Section \ref{s:dis}, we will apply the stability curve derived from the full stability analysis to observed rings in real galaxies.
\begin{figure*}
\hspace{1cm}
\includegraphics[angle=0, width=16cm]{fig13.pdf}
\vspace{-0.2cm}
\caption{Snapshots of the projected density $\Sigma=\int\rho dz$ onto the equatorial plane of the
rigidly-rotating ($q=0$) model with $R_0/H=30$, $\eta_0/R_0=0.1$, and ${\widehat \Omega}_0=0.30$ at $\tau=0.0$, 5.0, 7.0, and 8.8. As a result of gravitational instability, the ring fragments into 11 clumps.
\label{f:proj}}
\end{figure*}
\begin{figure}
\includegraphics[angle=0, width=8.5cm]{fig14.pdf}
\caption{Temporal evolution of the amplitudes of even-$m$ Fourier modes for the $q=0$ model shown in Figure \ref{f:proj}. The dashed-line segment indicates a slope of $0.81$, very close to the growth rates of the $m=8$--$12$ modes, consistent with the results of the linear stability analysis.
\label{f:evol}}
\end{figure}
\subsection{Nonlinear Simulations}\label{s:sim}
To check the results of our linear stability analysis as well as to explore the effect of differential rotation, we conduct direct numerical simulations of the gravitational instability of a slender isothermal ring using the GIZMO code \citep{hop15,hop16}. GIZMO is a second-order accurate magnetohydrodynamics code based on a Lagrangian, mesh-free, Godunov-type method, and conserves mass, momentum, and energy almost exactly. In our calculations, the basic
equations of hydrodynamics are solved by the Meshless Finite-Mass (MFM) method, which is known to conserve angular momentum very well.
To obtain an initial particle distribution, we first use the SCF method to construct the equilibrium density distribution of the rigidly-rotating ring with $R_0/H=30$ and $\eta_0/R_0=0.1$ in the absence of external gravity. The rotation frequency of this ring is found to be ${\widehat \Omega}_s=0.22$. The initial particle positions are then sampled by a rejection technique that uses Halton's quasi-random sequences instead of the usual pseudo-random numbers in order to reduce Poisson noise (e.g., \citealt{pre88}). We then impose a radially-varying external gravity ${\widehat \Omega}_e^2 (R) {\mathbf{\widehat R}}$ to boost the angular velocity of the ring particles according to
\be\label{e:diffOmg}
{\widehat \Omega}_0=({\widehat \Omega}_s^2+{\widehat \Omega}_e^2)^{1/2}= 0.30 \left(\frac{R}{R_0}\right)^{-q}.
\ee
Here, $q$ is the rate of shear in the ring rotation, such that
$q=0$ and $1$ correspond to rigid-body and flat rotation, respectively. We vary $q$ from 0 to 1.5 to study the effect of shear on the growth of gravitational instability.
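For reference, the sketch below shows how such an initial particle distribution can be drawn with SciPy's Halton generator, using Equation \eqref{e:rhosr} as the target density; the oversampling factor and the acceptance weight $\propto\rho R$ (the cylindrical volume element) are our choices, not a description of the actual GIZMO setup.
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

# Quasi-random rejection sampling of ring particles; R0 = rho_c = 1.
eta0, H, N = 0.1, 1/30, 500_000
u = qmc.Halton(d=4, seed=0).random(5*N)     # oversample, then reject
R   = 1 - eta0 + 2*eta0*u[:, 0]
z   = -eta0 + 2*eta0*u[:, 1]
phi = 2*np.pi*u[:, 2]
eta = np.hypot(R - 1, z)                    # distance from ring center
rho = np.where(eta <= eta0, 1/(1 + eta**2/(8*H**2))**2, 0.0)
keep = u[:, 3] < rho*R/(1 + eta0)           # accept with prob ~ rho R
x, y, zp = (R*np.cos(phi))[keep][:N], (R*np.sin(phi))[keep][:N], z[keep][:N]
print(f"accepted {keep.sum()} of {5*N} trial points")
\end{verbatim}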
\begin{figure*}
\hspace{1.5cm}\includegraphics[angle=0, width=15cm]{fig15.pdf}
\caption{ Snapshots of the projected density (left)
at $\tau=8.8$, $9.0$, and $9.1$, and the temporal variations of the amplitudes of even-$m$ Fourier modes (right) in models with $q=0.5$, $1.0$, and $1.5$ from top to bottom. The dashed-line segment in each panel indicates a slope of $0.81$, which describes the growth rates of dominant modes fairly well for all models. The number of clumps produced is 11, 10, and 10 from top to bottom.
\label{f:allq}}
\end{figure*}
To represent a hot tenuous external medium, we also distribute low-mass particles/cells outside the ring but inside a cylindrical volume with radius $R/R_0 = 2$ and height $h/R_0 = 2$.
We let the external medium follow the rotation law given in Equation \eqref{e:diffOmg}, and adjust its density distribution to balance the centrifugal force and gravity of the ring. We ensure pressure equilibrium between the ring and the external medium, which has a density 100 times lower than that of the ring at the contact surfaces. The number of particles/cells for the ring and external medium is $5\times10^5$ each.
At the very initial phase of evolution ($\tau\equiv t(G\rho_{c})^{1/2} \lesssim 0.25$), we iron out any residual Poisson sampling noise by introducing an artificial damping force $f \propto -v_\eta \widehat \eta$. The system thus evolves from a smooth, steady equilibrium state, without undergoing violent expansion or contraction, and gradually picks up gravitationally unstable modes.
Figure \ref{f:proj} plots snapshots of the projected density $\Sigma=\int\rho dz$ onto the equatorial plane at $\tau=0$, 5, 7, and 8.8 for a rigidly-rotating model with $q=0$. Defining the line density as ${\mathcal L} \equiv \int \Sigma dR$, we calculate the amplitude ${\mathcal L} _m$ of each azimuthal mode $m$ via Fourier transform at each time. Figure \ref{f:evol} displays ${\mathcal L}_m$ as functions of $\tau$ for even-$m$ modes in the $q=0$ model. Note that the modes with $m=8$--$12$ dominate during the linear growth phase ($4\lesssim \tau\lesssim 6$), resulting in 11 clumps at the end of the run. The growth rates of these modes are all similar at $\sim (0.80$--$0.82) (G\rho_{c})^{1/2}$, as indicated by the dashed-line segment, consistent with the linear results shown in Figure \ref{f:rdisp_all}(b). This indirectly confirms that the assumptions made in our stability analysis are quite reasonable.
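The amplitudes ${\mathcal L}_m$ can be extracted from the particle data with a few lines of \texttt{numpy}, as sketched below for equal-mass particles; the bin number is our choice, and ${\mathcal L}$ is measured here per unit azimuthal angle, which only changes the overall normalization.
\begin{verbatim}
import numpy as np

def mode_amplitudes(x, y, m_max=16, nbins=256):
    """|L_m| of the azimuthal line density (particle mass = 1)."""
    phi = np.arctan2(y, x)
    counts, _ = np.histogram(phi, bins=nbins, range=(-np.pi, np.pi))
    L = counts/(2*np.pi/nbins)          # line density L(phi)
    Lm = 2*np.abs(np.fft.rfft(L))/nbins # one-sided amplitudes
    Lm[0] /= 2                          # m = 0 term is the mean
    return Lm[:m_max + 1]
\end{verbatim}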
The left panels of Figure \ref{f:allq} plot the snapshots of surface density in the equatorial plane at the end of the runs (at $\tau=8.8$, $9.0$, and $9.1$) for models with $q=0.5$, $1.0$, and $1.5$ from top to bottom, while the right panels give the temporal evolution of ${\mathcal L} _m$ for even-$m$ modes. In the $q=0.5$ model, the $m=8$ and 10 modes dominate almost equally, while the models with $q=1.0$ and 1.5 are dominated by the $m=10$ and $m=8$ mode, respectively. Note that the growth rates of the dominant modes in all models are very close to $0.81 (G\rho_{c})^{1/2}$, marked by the dashed-line segment in each panel. The number of clumps produced as a result of gravitational instability is 10 or 11,
insensitive to $q$, demonstrating that shear does not affect the character of gravitational instability of slender rings.
\section{Summary \& Discussion}\label{s:sum}
\subsection{Summary}
Nuclear rings at centers of barred galaxies exhibit strong star
formation activities. They are thought to undergo gravitational
instability when sufficiently massive. To study their equilibrium
properties and stability to gravitational perturbations, we approximate nuclear rings as isothermal objects. We modify the SCF method of \citet{hac86a} to make it suitable for an isothermal equation of state, and construct equilibrium sequences of rigidly-rotating, self-gravitating, isothermal bodies. A steady equilibrium is uniquely specified by two dimensionless parameters: $\alpha$ and $\widehat R_{\rm B}$ (see Eqs.~[\ref{e:alp}] and [\ref{e:rB}]). The former is the measure of the thermal energy relative to gravitational potential energy of an equilibrium body, while the latter corresponds to the ellipticity for spheroid-like configurations or the thickness for ring-like configurations. We take a convention that $\widehat R_{\rm B}$ is positive (or negative) for spheroid-like (or ring-like) objects.
To test our SCF method, we first apply it to the case of rotating
incompressible bodies, and confirm that our method is able to reproduce the Maclaurin spheroid sequence when $0.158 \leq \widehat R_{\rm B} \leq 1$. With improved resolution, our method gives more accurate results than those obtained by \citet{eri81} and \citet{hac86a} for the concave hamburger sequence with $0 \lesssim \widehat R_{\rm B} < 0.158$. Our method also successfully reproduces isothermal Bonnor-Ebert spheres, with smaller $\alpha$ corresponding to a higher degree of central density concentration.
We then use our SCF method to obtain the density distributions of
rotating isothermal equilibria on the meridional plane, as illustrated in Figure \ref{f:isocontour}. We calculate the dependence on $\widehat R_{\rm B}$ of various dimensionless quantities such as the rotational angular frequency ${\widehat \Omega}_s$, the total mass $\widehat M$, the mean density ${\langle\widehat\rho\rangle}$, the total kinetic energy $\widehat T$, and the gravitational potential energy $\widehat W$. These values are tabulated in Table \ref{t:iso} and given graphically in Figure \ref{f:isoRB}. We find that an equilibrium density profile is more centrally concentrated for smaller $\alpha$. Unlike the incompressible bodies, not all values of $\widehat R_{\rm B}$ result in an isothermal equilibrium configuration. Spheroid-like equilibria exist only for $\widehat R_{\rm B,1} \leq \widehat R_{\rm B} \leq 1$, while ring-like (or hamburger-like) configurations are possible only for $-1< \widehat R_{\rm B}<\widehat R_{\rm B,2}$: otherwise, the centrifugal potential is too
large to form gravitationally bound objects. The critical $\widehat R_{\rm B}$
values are found to be $\widehat R_{\rm B,1}=0.27$, 0.51, and 0.59, and
$\widehat R_{\rm B,2}=0.13$, $-0.14$, and $-0.40$ for $\alpha=1$, $0.1$, and
$0.01$, respectively.
In general, ${\widehat \Omega}_s$ is a decreasing function of $|\widehat R_{\rm B}|$. This
is naturally expected for spheroid-like configurations since faster
rotation leads to a more flattened equilibrium. As $\widehat R_{\rm B}$ approaches $-1$, on the other hand, ring-like configurations become less massive and thus require a weaker centrifugal force to balance self-gravity. Due to stronger central concentration, ${\widehat \Omega}_s$, $\widehat M$, ${\langle\widehat\rho\rangle}$, $\widehat T$, and $|\widehat W|$ all become smaller as $\alpha$ decreases. For $\alpha< 0.1$, $\widehat M$ and $\widehat W$ are insensitive to $\widehat R_{\rm B}\gtrsim0.6$ for spheroid-like equilibria since the density in the outer parts becomes vanishingly small. For a given value of the normalized angular momentum $j$, the normalized angular frequency $\omega_s$ becomes smaller with decreasing $\alpha$, although the energy ratio $T/|W|$ is insensitive to $\alpha$.
\begin{deluxetable*}{rcccccccc}[!t]
\tablecaption{Properties of Observed Nuclear Rings\label{t:obsring}}
\tablewidth{0pt} %
\tablehead{ & \colhead{$R_1$}
& & \colhead{$v_{\rm rot}$}
& \colhead{$M_g$}
&
&
&
& \\
\colhead{Galaxy} & \colhead{(kpc)}
& \colhead{$e$} & \colhead{(km s$^{-1}$)}
& \colhead{$(10^7{\rm\,M_\odot})$}
& \colhead{Age Grad.}
& \colhead{$\alpha$}
& \colhead{${\widehat \Omega}_0$}
& \colhead{Ref.} \\
\colhead{(1)} & \colhead{(2)}
& \colhead{(3)} & \colhead{(4)}
& \colhead{(5)}
& \colhead{(6)}
& \colhead{(7)}
& \colhead{(8)}
& \colhead{(9)} }
\startdata
NGC 473 & 1.69 & 0.06 & 125 & 40 & Yes & 1.69E$-2$ & 1.71 & \\
NGC 613 & 0.40 & 0.26 & 115 & 40 & ? & 3.93E$-3$ & 0.76 & \\
NGC 1097 & 0.97 & 0.32 & 220 & 140 & No & 2.70E$-3$ & 1.20 & (a),(b) \\
NGC 1300 & 0.40 & 0.15 & 155 & 40 & No & 3.98E$-3$ & 1.03 & \\
NGC 1343 & 1.97 & 0.30 & 80 & 40 & Yes & 1.92E$-2$ & 1.17 & \\
NGC 1530 & 1.20 & 0.80 & 180 & 40 & Yes & 9.30E$-3$ & 1.82 & \\
NGC 2903 & 0.16 & 0.32 & 60 & 35 & ? & 1.78E$-3$ & 0.27 & (c) \\
NGC 3351 & 0.15 & 0.11 & 120 & 31 & ? & 1.89E$-3$ & 0.55 & (c) \\
NGC 4303 & 0.35 & 0.11 & 90 & 42 & No & 3.32E$-3$ & 0.54 & \\
NGC 4314 & 0.56 & 0.31 & 160 & 21 & Yes & 1.04E$-2$ & 1.71 & (d),(e) \\
NGC 4321 & 0.87 & 0.32 & 170 & 51 & Yes & 6.64E$-3$ & 1.45 & \\
NGC 5248 & 0.65 & 0.20 & 150 & 42 & Yes & 6.13E$-3$ & 1.23 & \\
NGC 5728 & 1.10 & 0.23 & 180 & 40 & Yes & 1.09E$-2$ & 1.97 & \\
NGC 5905 & 0.39 & 0.14 & 150 & 40 & ? & 3.88E$-3$ & 0.98 & \\
NGC 5953 & 1.00 & 0.43 & 150 & 40 & ? & 9.50E$-3$ & 1.54 & \\
NGC 6951 & 0.56 & 0.17 & 160 & 40 & Yes & 5.56E$-3$ & 1.25 & \\
NGC 7552 & 0.34 & 0.15 & 150 & 40 & No & 3.38E$-3$ & 0.92 & (f) \\
NGC 7716 & 1.20 & 0.04 & 150 & 40 & No & 1.20E$-2$ & 1.72 & \\
NGC IC14 & 0.68 & 0.36 & 204 & 40 & Yes & 6.57E$-3$ & 1.74 &
\enddata
\tablecomments{
Columns (2) and (3) give the semi-major axis and ellipticity of nuclear rings adopted from \citet{com10}. Column (4) is the rotational velocity adopted from \citet{maz08} or from the references given in Column (9). Column (5) is the total gas mass in the ring from \citet{she05} or from references in Column (9); we take $M_g = 4\times 10^8{\rm\,M_\odot}$ if no information is available. Column (6) indicates the age distribution: ``Yes'' and ``No'' for the presence and absence of an azimuthal age gradient, respectively, and ``?'' for uncertain cases, adopted from \citet{all06}, \citet{maz08}, \citet{san10}, and \citet{bra12}. Columns (7) and (8) give $\alpha$ and ${\widehat \Omega}_0$ calculated by Equations \eqref{e:obs_alp} and \eqref{e:obs_Omg}. Column (9) gives the references for $v_{\rm rot}$ or $M_g$: (a) \citet{oni15}; (b) \citet{hsi11}; (c) \citet{maz11}; (d) \citet{gar91}; (e) \citet{ben96}; (f) \citet{bra12}.}
\end{deluxetable*}
The density distribution of finite slender rings obtained by our SCF
method for $\widehat R_{\rm B} \lesssim -0.6$ is found to be well approximated by
Equation \eqref{e:rhosr}, which is also the solution for static,
isothermal cylinders of infinite extent. This indicates that the
rotation and the geometrical curvature effect are insignificant in
determining an equilibrium for rings with the major axis $R_0$ much
longer than the minor axis $\eta_0$. The equilibrium angular frequency for isothermal slender rings with $\alpha\gtrsim 0.1$ is well described by Equation \eqref{e:Ostinc} applicable to truncated incompressible rings \citep{ost64b}.
To explore gravitational instability of nuclear rings, we calculate the growth rates of nonaxisymmetric modes with azimuthal mode number $m$ by assuming that the rings are slender with $\eta_0/R_0=0.1$, and that perturbations are independent of the polar angle $\lambda$ in the meridional plane. In the absence of rotation, the resulting dispersion relations are the same as those of axisymmetric modes for an infinite isothermal cylinder studied by \citet{nag87} if $m/R_0$ is taken equal to the wavenumber in the direction along the cylinder (see Fig.~\ref{f:rdisp_all}). Only large-scale modes can be gravitationally unstable, and the unstable range of $m$ as well as the maximum growth rate increase with decreasing $\alpha$.
Rotation tends to stabilize the gravitational instability, reducing both the growth rates and the unstable ranges of $m$. The instability is completely suppressed when ${\widehat \Omega}_0$ exceeds the critical value that is relatively constant at $\sim 0.7$ for $\alpha\gtrsim0.01$ and increases rapidly with decreasing $\alpha$ (see Fig.~\ref{f:crit_cal}). The simple estimates of the critical angular frequencies from the Toomre condition as well as the local dispersion relation are smaller than the results of our full stability analysis for $\alpha \lesssim 5\times 10^{-3}$ due to underestimation of self-gravity at the ring centers. Shear turns out to be unimportant for the gravitational instability of rings as long as they are slender.
\subsection{Discussion}\label{s:dis}
\citet{maz08} analyzed photometric data of a sample of nuclear rings to estimate the ages of H$\alpha$-emitting star clusters and found that about half of their sample contains an age gradient of star clusters along the azimuthal direction. Since nuclear rings with an age gradient are thought to be gravitationally stable and form stars preferentially at the contact points, it is interesting to apply the results of our linear stability analysis to the observed rings to determine whether they are indeed stable.
Table \ref{t:obsring} lists the properties of 19 observed nuclear rings in galaxies with a noticeable bar, compiled from the literature where the information on the presence/absence of an age gradient is
available.\footnote{Most of the galaxies listed in Table
\ref{t:obsring} except for NGC 1097, NGC 2903, NGC 3351, NGC 4321, and NGC 7552 are adopted from \citet{maz08}: NGC 1097 is from \citet{san10}, NGC 2903 and NGC 3351 from \citet{maz11}, NGC 4321 from \citet{all06}, and NGC 7552 from \citet{bra12}.} Column (1) lists each galaxy name. Columns (2) and (3) give the semi-major axis and ellipticity of nuclear rings adopted from \citet{com10}. Column (4) lists the rotational velocity $v_{\rm rot}$ adopted from \citet{maz08} or from the references given in Column (9). Column (5) lists the total gas mass $M_g$ in the ring from \citet{she05} or the references in Column (9) only for the galaxies with available data; we otherwise take $M_g=4\times10^8{\rm\,M_\odot}$ as a reference value. Column (6) indicates the presence or absence of an azimuthal age gradient of star clusters adopted from \citet{maz08}, \citet{all06}, \citet{san10}, and \citet{bra12}: a question mark is used when it is difficult to characterize the age distribution. Columns (7) and (8) give $\alpha$ and ${\widehat \Omega}_0$ calculated by
\be\label{e:obs_alp}
\alpha = 0.01 \left(\frac{c_s}{10\kms}\right)^2 \left(\frac{M_g}{4\times10^8{\rm\,M_\odot}}\right)^{-1}
\left(\frac{R_0}{1\kpc}\right),
\ee
and
\be\label{e:obs_Omg}
{\widehat \Omega}_0 = 2.1 \left(\frac{v_{\rm rot}}{200\kms}\right) \left(\frac{M_g}{4\times10^8{\rm\,M_\odot}}\right)^{-0.5}
\left(\frac{R_0}{1\kpc}\right)^{0.5},
\ee
after taking $R_0=R_1(1-e^2)^{1/4}$, corresponding to the geometric mean of the major and minor axes of eccentric rings, $\cs=10\kms$, and $\eta_0=R_0/10$. We replace $\rho_c$ with ${\langle\rho\rangle}$ since the ring central density is difficult to constrain observationally.
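As a quick consistency check of Equations \eqref{e:obs_alp} and \eqref{e:obs_Omg}, the short Python sketch below (ours, purely illustrative; the function name is not from any published code) reproduces Columns (7) and (8) of Table \ref{t:obsring} from Columns (2)--(5), here for NGC 4314:
\begin{verbatim}
# Evaluate alpha and Omega_0-hat of Eqs. (obs_alp) and (obs_Omg),
# with c_s = 10 km/s and R_0 = R_1 (1 - e^2)^{1/4} as in the text.
def ring_parameters(R1_kpc, e, vrot_kms, Mg_1e7Msun, cs_kms=10.0):
    R0 = R1_kpc * (1.0 - e**2)**0.25       # geometric-mean ring radius
    mg = Mg_1e7Msun * 1.0e7 / 4.0e8        # M_g in units of 4e8 Msun
    alpha  = 0.01 * (cs_kms / 10.0)**2 / mg * R0
    Omega0 = 2.1 * (vrot_kms / 200.0) * mg**-0.5 * R0**0.5
    return alpha, Omega0

# NGC 4314: R1 = 0.56 kpc, e = 0.31, v_rot = 160 km/s, M_g = 21e7 Msun
print(ring_parameters(0.56, 0.31, 160.0, 21.0))
# -> (0.0104, 1.71), matching the tabulated 1.04E-2 and 1.71
\end{verbatim}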
In Figure \ref{f:crit_obs}, we plot ${\widehat \Omega}_0$ against $\alpha$ for
the rings listed in Table \ref{t:obsring} using various symbols, with
numbers indicating galaxy names. Overall, rings with larger $\alpha$ tend to have larger ${\widehat \Omega}_0$. Blue circles represent rings with an azimuthal age gradient, while red diamonds are for those with no age gradient. Rings for which the age distribution cannot be judged are marked by star symbols. It is apparent that all rings with an azimuthal age gradient are located in the stable regime, while all rings with no age gradient, except NGC 7716, correspond to unstable configurations. These results are consistent with the two modes of star formation proposed by \citet{bok08}: rings that are sufficiently massive or rotating sufficiently slowly form stars in the popcorn style caused by gravitational instability, and thus do not show an apparent age gradient. On the other hand, star formation in stable rings may occur preferentially at the contact points to exhibit an azimuthal age gradient of star clusters, like pearls on a string.
The ring models we have considered so far ignored the effects of
magnetic fields that are pervasive in galaxies (e.g.,
\citealt{bec96,fle11}). In spiral galaxies, the presence of toroidal
magnetic fields is known to play a destabilizing role in forming giant clouds inside spiral arms where tension forces from bent field lines resist the stabilizing effect of the Coriolis force (e.g.,
\citealt{bal88,kim01,kim02,kim06}). In addition, magnetic fields
are likely to reduce the degree of central density concentration by
exerting pressure forces. It will be interesting to study how magnetic fields embedded in nuclear rings change the critical angular
frequencies for gravitational instability compared to those of
unmagnetized rings.
\begin{figure}
\includegraphics[angle=0, width=8.5cm]{fig16.pdf}
\caption{Distributions of
$\alpha$ and ${\widehat \Omega}_0$ of the observed nuclear rings listed in Table \ref{t:obsring}. Blue circles and red diamonds represent rings with and without an azimuthal age gradient, respectively, while rings with uncertain age distributions are indicated by star symbols.
\label{f:crit_obs}}
\end{figure}
\acknowledgments
We acknowledge a helpful report from the referee, and are grateful to Dr.~Hsi-An Pan for the information on NGC 4321. This work was supported by the National Research Foundation of Korea
(NRF) grant, No.~2008-0060544, funded by the Korea government (MSIP).
The computation of this work was supported by the Supercomputing
Center/Korea Institute of Science and Technology Information with
supercomputing resources including technical support (KSC-2015-C3-027).
\section{Introduction}
There are three major motivations for investigating pure neutron
systems with external fields (``neutron drops'') using {\it ab initio}
approaches. First, neutron drops provide a very simple model of
neutron-rich nuclei in which the core is approximated as an external
well acting on valence
neutrons~\cite{Chang:2004a,Pieper:2005,Gandolfi:2006,Gandolfi:2008}.
Second, {\it ab initio} solutions for neutrons trapped by an external
potential can be used as data for calibrating model effective
Hamiltonians and Energy Density Functionals
(EDFs)~\cite{Bogner:2011kp,Gandolfi:2010za}, particularly for very
neutron-rich systems as occur in astrophysical environments like
neutron stars. Third, these results may serve as useful benchmarks
for testing other many-body methods.
These motivations are further supported by the advent of new
experimental facilities to probe the extremes of neutron-rich nuclei,
to map out the neutron drip line and to inform models of nuclear
astrophysical processes~\cite{erler2012limits}. Traditional energy
density functionals~\cite{bender2003self} are obtained by fitting
measured properties of stable and near-stable nuclei. Extrapolations
using these traditional EDFs to regions of extreme isospin are
sensitive to their less controlled features that result in large
variations in the predictions. Beyond these experimental vistas, we
desire control over properties of EDFs for low-density neutron
systems, since those properties are important for the processes in the
inner crust of neutron stars. It is our long-term aim to provide {\it
ab initio} calculations of trapped neutrons that address these
motivations. We report here the results with currently available
approaches that serve as a basis for long-term efforts that will
employ improved microscopic Hamiltonians and many-body methods.
We adopt two nucleon-nucleon ($N\!N$) interactions which fit
scattering data and deuteron properties with high accuracy, namely the
local Argonne $v^\prime_8$ (AV8$^\prime$)~\cite{Pudliner:1997ck} and
the nonlocal JISP16~\cite{Shirokov07}. As shown by accurate
calculations~\cite{Tucson,Urbana,Fadd-TNI,Pieper:2001,Pudliner:1997ck,Hayes:2003ni,Navratil:2007we,Maris:2011as,Roth:2011ar},
local $N\!N$ interactions are not sufficient to describe accurately
the properties of light nuclei. Even the ground-state of the simplest
three-body problem, the triton, is significantly underbound.
Different models of three-nucleon interactions (TNIs) have been
proposed to build a non-relativistic Hamiltonian that reproduces
experimental results such as ground-state energies, density profiles,
and rms radii of light nuclei~\cite{Pieper:2001}. TNIs from
meson-exchange theory are modeled through an operator structure with
parameters that are fit to experimental nuclear energies. The Urbana
IX TNI (UIX) was fit to $^3$H and nuclear matter saturation, but it
typically underbinds heavier nuclei~\cite{Pudliner:1997ck}. Other TNI
forms, namely Illinois forces, are fit to light
nuclei~\cite{Pieper:2001}. The most recent is the Illinois-7
(IL7)~\cite{Pieper:2008} which reproduces nuclear energies up to
$A=12$ with an rms error of 600 keV. However, the three-pion rings
included in IL7 give a strong overbinding of pure
neutron-matter~\cite{Sarsa:2003}. In light of the different data
selected for tuning these TNIs it is useful to observe their
similarities and differences with neutron drops.
JISP16 is a phenomenological nonlocal $N\!N$-interaction written as a
finite matrix in a harmonic oscillator (HO) basis for each of the
$N\!N$ partial waves. It is constructed to reproduce the available
$N\!N$ scattering data using the $J$-matrix inverse scattering
approach. In addition, phase-equivalent transformations have been
used to modify its off-shell properties in order to achieve a good
description of selected states in light nuclei~\cite{Shirokov07}. It
gives a good description of most narrow states in light nuclei up to
about $A=12$~\cite{Maris:2008ax,Maris:2009bx,Cockrell:2012vd}.
However it tends to overbind heavier $N=Z$ nuclei ($^{16}$O is
overbound by about 15\%), but tends to underbind as one moves away
from the valley of stability.
In this paper we analyze the ground-states and several excited states
of neutron drops for four Hamiltonians: AV8$^\prime$ without TNI,
AV8$^\prime$+UIX, AV8$^\prime$+IL7, and JISP16. We examine possible
sub-shell closures and the spin-orbit splittings of odd systems near
closed HO shells. We also compare results for the neutron matter
equations of state using AV8$^\prime$ with and without TNIs that could
be useful to calibrate bulk and gradient terms of Skyrme forces. The
techniques we use are based on two quantum Monte Carlo (QMC)
techniques for the Argonne interactions and on the No-Core Full
Configuration (NCFC) for the nonlocal interaction JISP16. We provide
quantified uncertainties where feasible.
The Green's function Monte Carlo (GFMC) provides accurate energies,
radii, and other properties of nuclei up to $A=12$ with the Argonne
interactions~\cite{Pieper:2008}; currently it can be used for systems
of up to 16 neutrons. The Auxiliary Field Diffusion Monte Carlo
(AFDMC)~\cite{Schmidt:1999} has similar statistical accuracy as GFMC
in computing energies of systems of neutrons and
can be implemented for larger systems, up to more than 100
neutrons~\cite{Gandolfi:2009,Gandolfi:2010za}. A comparison between AFDMC
and GFMC results (obtained with the same Hamiltonian) suggests
that the systematic uncertainties are of the order of a few
percent. Improving the trial wave function used in AFDMC
to include pairing, for example, could further reduce these differences.
For JISP16 we expand the neutron drop wave functions in a HO basis.
For any finite truncation of the basis, this provides us with a strict
upper limit on the total energy of the system. Exact results are
obtained by considering the limit of a complete (infinite-dimensional)
HO basis --- which we refer to as No-Core Full Configuration
(NCFC)~\cite{Maris:2008ax}. We can obtain the total energies for
systems up to $A=14$ nucleons to within a percent by a simple
extrapolation to the complete basis from a series of successive finite
truncations. The extrapolation of other observables such as radii and
densities is not as straightforward, but for small enough systems we
can simply consider a large enough basis space in order to obtain
converged results. Note that in a single run, at a fixed truncation,
we not only obtain the ground state energy, but also the low-lying
spectrum of the system.
The plan of the paper is the following: in Sec.~\ref{sec:hamiltonians}
we describe various Hamiltonians we consider in this work. Then, in
Sec.~\ref{sec:QMC} we briefly review the different Monte Carlo
many-body techniques used to solve for the neutron drops.
Sec.~\ref{sec:NCFC} presents an overview of the NCFC approach and
Sec.~\ref{sec:results} presents our main results for finite neutron
drops. We present our results for neutron matter in
Sec.~\ref{sec:neutronmatter} and our conclusions in
Sec.~\ref{sec:conclusion}.
\section{Hamiltonians
\label{sec:hamiltonians}}
We adopt non-relativistic Hamiltonians with the following general
form:
\begin{equation}
H=-\sum_i{\hbar^2\over2m}\nabla_i^2
+\sum_i U_{\hbox{\scriptsize ext}}(r_i)+\sum_{i<j}v_{ij}+\sum_{i<j<k}V_{ijk} \,.
\end{equation}
Systems consisting of only neutrons are not expected to be self-bound.
Therefore it is necessary to include an external well
$U_{\hbox{\scriptsize ext}}(r)$ in the Hamiltonian to have a confined
system. We consider both harmonic oscillator (HO) wells and a
Woods-Saxon (WS) potential.
The HO wells have the form
\begin{equation}
U_{HO}(r)=\frac{1}{2}m\Omega^2r^2 \,.
\end{equation}
This potential is useful due to its simplicity and the fact that the
ground state may be driven to arbitrary low density (i.e. with
arbitrarily weak external harmonic potential strength) or arbitrarily
large particle number. The convergence of both the Monte Carlo and
the configuration interaction methods is improved due to the lack of
any low-lying states with long-range tails in the wave function. This
feature enables the application of our results to tests of EDFs over a range
from moderately low to rather high densities. Most of our results are
for HO wells. Woods-Saxon wells have been used in other calculations
of properties of neutron drops. In particular neutron drops with a WS
well and the Argonne $v_{18}$ $N\!N$ and Illinois-2 potentials have
been shown to provide a good description of oxygen
isotopes~\cite{Pieper:2005}. The WS form is
\begin{equation}
U_{WS}(r)=\frac{U_0}{1+e^{(r-R)/a}} \,,
\label{eq:ws-well}
\end{equation}
where we have used $a=1.1$~fm, $U_0=-35.5$~MeV and $R=3$~fm, that is,
the same parameters as in Ref.~\cite{Chang:2004a}.
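For concreteness, both wells can be coded in a few lines; the following Python sketch (ours, not from any published code, using the parameter values quoted above) evaluates $U_{\hbox{\scriptsize ext}}$ in MeV for $r$ in fm:
\begin{verbatim}
import numpy as np

HBARC = 197.327   # MeV fm
MN    = 939.565   # neutron mass in MeV

def u_ho(r_fm, hw_mev=10.0):
    # (1/2) m Omega^2 r^2, parametrized by hbar*Omega in MeV
    return 0.5 * MN * (hw_mev / HBARC)**2 * r_fm**2

def u_ws(r_fm, u0=-35.5, big_r=3.0, a=1.1):
    # Woods-Saxon well of Eq. (ws-well)
    return u0 / (1.0 + np.exp((r_fm - big_r) / a))
\end{verbatim}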
In addition to the total energy, $E = \langle H \rangle$, we also
calculate other observables, such as the external energy $\langle
U_{\hbox{\scriptsize ext}} \rangle$, the internal energy
$E_{\hbox{\scriptsize int}} = \langle H \rangle - \langle
U_{\hbox{\scriptsize ext}} \rangle$, and the rms radius $r$. Note
that for a HO well, the external energy $\langle U_{\hbox{\scriptsize
ext}} \rangle$ is proportional to $r^2$, and thus the quantities
$E$, $E_{\hbox{\scriptsize int}}$, and $r$ are not independent
observables in a HO well; however, in a WS well they are independent.
\subsection{Argonne $N\!N$ interaction and three-body forces
\label{sec:TNIs}}
One of the $N\!N$ potentials we adopt here is the Argonne
AV8$^\prime$~\cite{Pudliner:1997ck,Wiringa:2002}. It is a simplified
version of the Argonne AV18~\cite{Wiringa:1994wb}, with the advantage
that it can be exactly included in both GFMC and AFDMC algorithms
without treating any part perturbatively. Other non-local operators
appearing in AV18 must be included as a perturbation of AV8$^\prime$
in QMC calculations~\cite{Pudliner:1997ck}. These perturbative
corrections can be accurately computed within GFMC, but not in
AFDMC. Thus we consider AV8$^\prime$ to facilitate comparisons of the
two different QMC methods. The difference between the binding
energies from AV8$^\prime$ and AV18 is very small in light nuclei, and
much smaller in pure neutron systems; about 0.06 MeV per
neutron~\cite{Pieper:2001}.
The Argonne AV8$^\prime$ is a sum of eight operators:
\begin{equation}
v_{ij}=\sum_p v_p(r_{ij})O_{ij}^p
\end{equation}
in which the $v_p(r_{ij})$ depend on the distance between the nucleons
$i$ and $j$, and $O_{ij}^p$ are operators. Their form is
\begin{equation}
\label{eq:vop}
O_{ij}^{p=1,8}=(1,\vec\sigma_i\cdot\vec\sigma_j,S_{ij},\vec
L_{ij}\cdot\vec S_{ij}) \times(1,\vec\tau_i\cdot\vec\tau_j)
\end{equation}
with $S_{ij}$ the tensor operator, $\vec L_{ij}$ the relative angular
momentum and $\vec S_{ij}$ the total spin. The $v_p(r)$ parts are fit to
reproduce the S and P partial waves as well as the $^3D_1$ wave and
its coupling to $^3S_1$ of the full AV18
potential~\cite{Wiringa:1994wb}.
In this paper we consider the AV8$^\prime$ alone and combined with two
different three-body forces:
the Urbana-IX (UIX)~\cite{Pudliner:1995wk}
and the Illinois-7 (IL7)~\cite{Pieper:2008}. Just like for the $N\!N$
interaction, the TNIs are sums of several operators:
\begin{eqnarray}
V_{ijk} &=& A_{2\pi}^{PW} O^{2\pi,PW}_{ijk}
+ A_{2\pi}^{SW} O^{2\pi,SW}_{ijk}
+ A_{3\pi}^{\Delta R} O^{3\pi,\Delta R}_{ijk}
\nonumber \\ && {}
+A_R O^R_{ijk} + A_R^{T=3/2} O^{R,T=3/2}_{ijk} \,.
\end{eqnarray}
Both TNIs include the Fujita-Miyazawa operator $O^{2\pi,PW}$ and a
phenomenological part $O^R$, while only IL7 has the
$O^{2\pi,SW}_{ijk}$ ,$O^{3\pi,\Delta R}_{ijk}$, and
$O^{R,T=3/2}_{ijk}$ terms. In the Fujita-Miyazawa
term~\cite{Fujita:1957} two pions are exchanged between the three
nucleons with the creation of an intermediate excited state. The
phenomenological part $O^R$ has no spin or isospin dependence. The
additional IL7 terms involve exchanges of two or three pions and a
pure $T=3/2$ repulsion. A full description of the operators is given
in Refs.~\cite{Pieper:2001,Pieper:2008}.
The $A_{2\pi}^{PW}$ and $A_R$ parameters of UIX are determined by
reproducing the binding energy of $^3$H and nuclear
matter~\cite{Pudliner:1995wk}. The UIX model has been used to
investigate properties of neutron matter (see for example
Refs.~\cite{Akmal:1998,Gandolfi:2009,Gandolfi:2012} and references
therein). The resulting equation of state will support neutron stars
larger than two solar masses.
The Illinois forces are more sophisticated than UIX. The $A$
coefficients are determined by fits to binding energies of light
nuclei~\cite{Pieper:2001,Pieper:2008}. The Illinois forces give a
good description of properties of nuclei up to $A=12$, including both
ground states and excited states, however three-pion operators are
very attractive in pure neutron systems and they overbind neutron
matter at large densities~\cite{Sarsa:2003}. In this work, we
consider IL7 as described in~\cite{Pieper:2008}.
\subsection{JISP16}
The JISP16 $N\!N$ interaction is determined by inverse scattering
techniques from the $np$ phase shifts and is, therefore, charge
symmetric. JISP16 is available in a relative HO
basis~\cite{Shirokov07} and can be written as a sum over partial waves
\begin{equation}
{\hat V} = \sum_{S, {\cal J}, T} {\cal P}_{S, {\cal J}, T}
\sum_{n, \ell, n^\prime, \ell^\prime}
\;| n \ell\rangle
\
A^{S, {\cal J}, T}_{n\ell,n^\prime \ell^\prime}
\
\langle n^\prime \ell^\prime |
\end{equation}
where $\hbar \Omega = 40$ MeV and ${\vec {\cal J}} = {\vec \ell} +
{\vec s}$. The HO relative coordinate wave function is written
$\langle r | n \ell \rangle = {\cal R}_{n \ell} (r) $. A small number
of coefficients $\{ A^{S, {\cal J}, T}_{n\ell,n^\prime \ell^\prime}\}$
are sufficient to describe the phase shifts in each partial wave.
Note that the JISP16 interaction is non-local and its off-shell
properties have been tuned by phase-shift equivalent transformations
to produce good properties of light nuclei. For example, JISP16 is
tuned in the $^3S_1$--$^3D_1$ channel to give a high precision
description of the deuteron's properties. Other channels are tuned to
provide good descriptions of $^3$H binding, the low-lying spectra of
$^6$Li and the binding energy of $^{16}$O~\cite{Shirokov07}. After
its initial introduction, it was realized that the $^{16}$O energy was
not fully converged and JISP16 overbinds $^{16}$O by about 15 to 20
MeV~\cite{Maris:2008ax}. With these off-shell tunings to nuclei with
$A \geq 3$ one may view JISP16 as simulating, to some approximation,
what would appear as $N\!N\!N$ interaction contributions (as well as
higher-body interactions) in alternative formulations of the nuclear
Hamiltonian.
\section{Quantum Monte Carlo methods
\label{sec:QMC}}
Both of our QMC methods use diffusion Monte Carlo to project the
lowest-energy eigenstate out of a trial wave function $\Psi_T$ by a
propagation in imaginary time $\tau$:
\begin{equation}
\Psi(\tau)=e^{-(H-E_T)\tau}\Psi_T \,,
\label{eq:qmc-prop}
\end{equation}
where $E_T$ is a normalization factor. In the $\tau\rightarrow\infty$
limit the only component of $\Psi_T$ that survives is the
lowest-energy one not orthogonal to $\Psi_T$:
\begin{equation}
\Psi_0=\lim_{\tau\rightarrow\infty}\Psi(\tau) \,.
\end{equation}
The evolution in imaginary time is performed by using Monte Carlo
integration to evaluate
\begin{equation}
\Psi(R,\tau)=\int dR^\prime G(R,R^\prime,\tau) \Psi_T(R^\prime) \,,
\end{equation}
where $G(R,R^\prime,\tau)$ is an approximation of the many-body
Green's function of the Hamiltonian, and $R$ and $R^\prime$ are the
positions of all $A$ nucleons: $R\equiv (\vec r_1,\dots,\vec r_A)$.
The exact form of $G(R,R^\prime,\tau)$ is unknown, but it can be
accurately approximated as a product of many
$G(R,R^\prime,\Delta\tau)$ for a small time step, $\Delta\tau$.
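As a purely pedagogical illustration of this imaginary-time projection (a toy model, not GFMC or AFDMC themselves), the following Python sketch applies many short-time steps of $e^{-(H-E_T)\Delta\tau}$ to a single particle in a one-dimensional harmonic well, for which the exact ground-state energy is $0.5$ in units of $\hbar=m=\omega=1$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
dtau, nwalk, nstep = 0.01, 4000, 4000
x   = rng.normal(size=nwalk)   # walker positions sampling Psi_T
e_t = 0.0                      # trial energy E_T

for step in range(nstep):
    # kinetic part of G(R,R',dtau): Gaussian diffusion
    x = x + rng.normal(scale=np.sqrt(dtau), size=x.size)
    # potential part: branching weights exp(-dtau*(V - E_T))
    w = np.exp(-dtau * (0.5 * x**2 - e_t))
    x = x[rng.choice(x.size, size=nwalk, p=w / w.sum())]
    # population control: steer E_T toward the growth energy
    e_t += 0.01 * np.log(nwalk / w.sum())

print(e_t)   # -> approximately 0.5
\end{verbatim}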
The main difference between GFMC and AFDMC is in their representations
of $\Psi$ and the structure of the initial $\Psi_T$.
\subsection{GFMC method and trial wavefunction}
GFMC uses a complete spin-isospin representation of the many-body wave
function; $\Psi(R,\tau)$ is written as a vector with $2^A N(T)$
complex components. Here the $2^A$ allows for all possible nucleon
spin up or down combinations and $N(T)$ is the number of
proton-neutron combinations with the desired isospin. In the case of
neutron drops $N(T)=1$. The exponential growth of the vector size
with the number of neutrons currently limits GFMC calculations of
neutron systems to $N \leq 16$.
Calculations of nuclei with realistic interactions face a sign problem
eventually as the trial wave function is propagated to large $\tau$.
To deal with this problem, we use the constrained path algorithm to
obtain configurations with the largest possible overlap with the
ground state~\cite{Wiringa:2000}. This method is similar to the fixed
node approximation in that it is exact in the limit of an exact
constraint, and it is stable to large imaginary time; however it does
not provide an upper bound. We then extend the propagation without
constraint for as long as possible to obtain the energy and
ground-state properties. The constraint and the
convergence properties of GFMC are discussed in~\cite{Wiringa:2000},
Figure 3 in that paper shows some convergence results for neutron
drops.
The trial wave functions used in our GFMC calculations for neutron
drops in HO wells are somewhat simplified from the ones described in
Ref.~\cite{Pudliner:1997ck} for nuclei:
\begin{eqnarray}
|\Psi_T\rangle &=& \left[ {\cal S}\prod_{i<j}(1+U_{ij}) \right]
|\Psi_J\rangle \ ,
\label{eq:psit}
\\
|\Psi_J\rangle &=& \left[ \prod_{i<j}f_c(r_{ij}) \right]
|\Phi_N(JMTT_{3})\rangle \ .
\label{eq:jastrow}
\end{eqnarray}
The non-central $U_{ij}$ and associated central $f_c$ are optimal
correlations for neutron matter of the form described in
Ref.~\cite{Pieper:1992}. For drops with $N \leq 8$, the $\Phi_N$ is
expanded in an $LS$ basis of $s$- and $p$-shell oscillator functions
as described in Ref.~\cite{Pudliner:1997ck}. For $N \geq 8$ we use a
``BCS'' ansatz $\Phi_{\hbox{\scriptsize BCS}}$ of the form introduced in
Refs.~\cite{Carlson:2003,Chang:2004}. For $N=8$ the two forms give
very similar variational energies.
The BCS pairing is important, particularly for low-density systems and
when trying to calculate even-odd staggering of the energies. In this
paper we consider $\Phi_{\hbox{\scriptsize BCS}}$ for only $J=0$
states or, in the case of odd $N$, for states in which the total $J$
is carried by a single neutron. Such $\Phi_{\hbox{\scriptsize BCS}}$
are written using correlated pairs of neutrons with total spin $S=0$
and a $L=0$ spatial wave function $\phi_{ij}$ expanded in 0$s$, 0$p$,
1$s$, and 0$d$ single-neutron wave functions:
\begin{eqnarray}
\phi_{ij} &=& \beta_{0s} \phi_{0s}(r_i)\phi_{0s}(r_j) + \beta_{0p} \phi_{0p}(r_i)\phi_{0p}(r_j)P_1(\hat{r}_{ij}) \nonumber \\
&& {} + \beta_{1s} \phi_{1s}(r_i)\phi_{1s}(r_j) \nonumber \\
&& {} + \beta_{0d} \phi_{0d}(r_i)\phi_{0d}(r_j)P_2(\hat{r}_{ij}) \ .
\label{eq:bcs-phi}
\end{eqnarray}
The $\beta_{nl}$ are variational parameters; only their ratios are relevant.
For even $N$,
\begin{eqnarray}
\Phi_{\hbox{\scriptsize BCS}}(J=0) = \sum_{P} | \phi_{ij} | ~~| s_1s_2\cdots s_N \rangle \ ,
\label{eq:bcs-even-n}
\end{eqnarray}
where the sum is over all partitions of $N$ neutrons into $N/2$ spin-up neutrons and
$N/2$ spin-down neutrons, $i$ is from the set of spin-up neutrons and $j$ is from
the set of spin-down neutrons, $| \phi_{ij} |$ is a determinant,
and $| s_1s_2\cdots s_N \rangle$ is the spin-vector component with the given spins.
For odd $N$ and $s_{1/2}$ or $d_{5/2}$ states we have
\begin{equation}
\Phi_{\hbox{\scriptsize BCS}}(M\!=\!J) = {\cal A} \left[ \phi_{L,S,J,M} ({\bf r}_{N},\sigma_N) \ \Phi_{\hbox{\scriptsize BCS,N-1}}(J\!=\!0) \right] \ ,
\label{eq:bcs-odd-n}
\end{equation}
which is also a sum over partitions of determinants.
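To make the construction concrete, a schematic numpy evaluation of the pair orbital of Equation~(\ref{eq:bcs-phi}) and of the determinant $|\phi_{ij}|$ entering Equation~(\ref{eq:bcs-even-n}) for a single spin partition might look as follows (illustrative only: the oscillator length, the $\beta_{nl}$, and all normalizations are arbitrary here, and the full wave function also sums over partitions):
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre, genlaguerre

B = 2.0   # oscillator length in fm (illustrative)

def r_nl(n, l, r):
    # unnormalized HO radial function r^l L_n^{l+1/2}(r^2/b^2) e^{-r^2/2b^2}
    x = (r / B)**2
    return r**l * genlaguerre(n, l + 0.5)(x) * np.exp(-0.5 * x)

def pair_orbital(ri, rj, beta=(100.0, 100.0, 1.0, 1.0)):
    # phi_ij of Eq. (bcs-phi); P_l taken at the angle between r_i and r_j
    b0s, b0p, b1s, b0d = beta
    di, dj = np.linalg.norm(ri), np.linalg.norm(rj)
    c = np.dot(ri, rj) / (di * dj)
    return (b0s * r_nl(0, 0, di) * r_nl(0, 0, dj)
            + b0p * r_nl(0, 1, di) * r_nl(0, 1, dj) * eval_legendre(1, c)
            + b1s * r_nl(1, 0, di) * r_nl(1, 0, dj)
            + b0d * r_nl(0, 2, di) * r_nl(0, 2, dj) * eval_legendre(2, c))

def bcs_term(r_up, r_dn):
    # determinant |phi_ij| for one partition into up and down neutrons
    mat = [[pair_orbital(ri, rj) for rj in r_dn] for ri in r_up]
    return np.linalg.det(np.array(mat))

rng = np.random.default_rng(0)
print(bcs_term(rng.normal(size=(6, 3)), rng.normal(size=(6, 3))))  # N = 12
\end{verbatim}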
\begin{figure}
\center\includegraphics[height=0.99\columnwidth,angle=270]{0d-1s-10mev_contours}
\caption{(color online) Contours of VMC energies for 12 neutrons in a
10-MeV HO well with AV8$^\prime$+UIX versus the 0$d$ and 1$s$
coefficients in the BCS wave function. The dots show the cases that
were computed using GFMC.
\label{fig:bcs_contours}}
\end{figure}
Figure~\ref{fig:bcs_contours} shows the variational Monte Carlo (VMC)
energies computed for systems of 12 neutrons with different BCS
parameters for the 0$d$ and 1$s$ pairs; the $\beta_{0s}$ and $\beta_{0p}$
are fixed at 100 each so the $^8$n core is almost full.
The optimized choice is used for the GFMC calculations.
The total VMC or GFMC energy is not very sensitive to the choice of
parameters (the contour interval is only 0.08\%), although the effects on pairing can be larger.
A detailed description of the algorithm as well as the importance sampling
technique (constrained propagation) used to reduce the variance can be found in
Refs.~\cite{Pudliner:1997ck,Wiringa:2000}.
The VMC energies computed with $\Psi_{T}$ or even just $\Psi_J$ are much
closer to the final GFMC energies than is the case for real nuclei.
The values using $\Psi_J$ are typically less than 10\% above the
GFMC values and often display better convergence as the number of unconstrained
steps (see Ref.~\cite{Wiringa:2000}) is increased.
\subsection{GFMC numerical convergence and error estimate}
QMC calculations have an easily quantified statistical error arising
from the Monte Carlo method. It is not so straightforward to
determine the magnitude of the systematic errors arising from
approximations made in the constrained-path implementation. These
have been extensively discussed for GFMC in
Refs.~\cite{Pudliner:1997ck,Wiringa:2000}.
Quantities other than the energy are usually evaluated by combining
mixed and variational estimates:
\begin{equation}
\langle \Psi_0 | {\cal O} | \Psi_0 \rangle =
2 \langle \Psi_0 | {\cal O} | \Psi_T \rangle -
\langle \Psi_T | {\cal O} | \Psi_T \rangle.
\end{equation}
No extrapolation for the energy is required since the ground state is
an eigenstate of the Hamiltonian and the propagation commutes with the
Hamiltonian. This linear extrapolation is usually very accurate if
the calculation begins with a good trial wave function. It is also possible to
estimate other observables by forward walking techniques or by adding
a small perturbation $\epsilon {\cal O}$ to the Hamiltonian and
computing the difference between the energy of the original and
perturbed Hamiltonian. We have not pursued these methods for the
present paper. Hence the internal energy results as well as other
quantities reported below are extrapolated and potentially not as
accurate as the full energy.
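In practice the extrapolation above is a one-liner; for instance, with hypothetical values (not from our calculations) of a mixed estimate of $3.10$~fm and a variational estimate of $3.16$~fm for an rms radius:
\begin{verbatim}
def extrapolated(o_mixed, o_vmc):
    # <Psi0|O|Psi0> ~ 2 <Psi0|O|PsiT> - <PsiT|O|PsiT>,
    # accurate to second order in the difference Psi0 - PsiT
    return 2.0 * o_mixed - o_vmc

print(extrapolated(3.10, 3.16))   # -> 3.04 fm
\end{verbatim}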
\subsection{AFDMC method and trial wave function
\label{sec:afdmc_wf}}
The presence of spin operators in the Hamiltonian requires a summation
of all possible good spin states in the wave function. In AFDMC the
spin states are sampled using Monte Carlo
techniques~\cite{Schmidt:1999}. This sampling is performed by
reducing the quadratic dependence of spin operators in the exponential
of Eq.~(\ref{eq:qmc-prop}) to a linear form by means of a
Hubbard-Stratonovich
transformation. The effect of an exponential of a linear combination
of spin operators consists of a rotation of the spinor for each
neutron during the propagation, and this permits the use of a much
simpler basis in the trial wave function. The result is that the wave
function is less accurate, but it can be rapidly evaluated. Since
both positions and spins can be sampled, the AFDMC method can be used
to solve for the ground state of much larger systems than GFMC -- more
than one-hundred neutrons currently may be solved with AFDMC.
More detailed explanations of the AFDMC method and how to include the
full $N\!N$ interactions and TNIs in the propagator can be found in
Refs.~\cite{Sarsa:2003,Pederiva:2004,Gandolfi:2009},
where the constrained-path approximation used to control the fermion
sign problem is also discussed.
The AFDMC method projects out the lowest energy state with the same
symmetry as the trial wave function from which the projection is
started. The trial wave function used in the AFDMC algorithm has the
form:
\begin{eqnarray}
|\Psi_T (R,S)\rangle=\left[\prod_{i<j}f(r_{ij})\right]
|\Phi_{JM}(R,S)\rangle \,,
\label{eq:jasafdmc}
\end{eqnarray}
where $S\equiv (s_1,\dots ,s_N)$.
The spin assignments $s_i$ consist of the spinor components, namely
\begin{equation}
s_i \equiv \left(\begin{array}{c}
u_i \\ d_i
\end{array}\right)=u_i |\uparrow\rangle + d_i |\downarrow\rangle \ ,
\end{equation}
where $u_i$ and $d_i$ are complex numbers. The Jastrow function
$f(r)$ is the solution of a Schr\"odinger-like equation for $f(r<d)$,
\begin{equation}
-\frac{\hbar^2}{m}\nabla^2f(r)+\alpha v_c(r)f(r)=\lambda f(r) \,,
\end{equation}
where $v_c(r)$ is the spin-independent part of the nucleon-nucleon
interaction, and the healing distance $d$ and $\alpha$ are variational
parameters. For distances $r\ge d$ we impose $f(r)=1$. In our case, the only role of the Jastrow part is to reduce the overlap of nucleons, thereby reducing the energy variance.
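A minimal shooting solution of this Schr\"odinger-like equation, written for $u(r)=rf(r)$ with the healing conditions $u(d)=d$ and $u^\prime(d)=1$, is sketched below in Python; the central potential used here is a toy Gaussian core-plus-attraction, not the actual spin-independent part of AV8$^\prime$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

H2M = 197.327**2 / 939.565   # hbar^2/m in MeV fm^2

def v_c(r):   # toy central potential (MeV): soft core plus attraction
    return 200.0 * np.exp(-(r / 0.8)**2) - 60.0 * np.exp(-(r / 1.6)**2)

def u0(lam, d=2.5, alpha=1.0):
    # integrate u'' = (alpha v_c - lam) u / (hbar^2/m) inward from r = d
    rhs = lambda r, y: [y[1], (alpha * v_c(r) - lam) * y[0] / H2M]
    return solve_ivp(rhs, (d, 1e-6), [d, 1.0], rtol=1e-10).y[0, -1]

# tune lambda so that u(0) = 0, i.e. f = u/r is regular at the origin
grid = np.linspace(-100.0, 100.0, 201)
vals = [u0(g) for g in grid]
i = next(k for k in range(200) if vals[k] * vals[k + 1] < 0)
print(brentq(u0, grid[i], grid[i + 1]))   # eigenvalue lambda in MeV
\end{verbatim}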
The antisymmetric part of the wave function is
\begin{eqnarray}
\Phi_{JM}(R,S) = \Big[\sum D\{\phi_\alpha(\vec r_i,s_i)\}\Big]_{J,M} \,,
\end{eqnarray}
where $\alpha=\{n,l,j,m_j\}$ is the set of quantum numbers of
single-particle orbitals, and the summation of determinants gives a
trial wave function that is an eigenstate of $J^2$ and $M$. The
single-particle basis is given by
\begin{eqnarray}
\phi_\alpha(\vec r_i,s_i)=\Phi_{nlj}(r_i)\left[Y_{l,m_l}(\hat r_i)\xi_{s,m_s}(s_i)\right]_{j,m_j} \,.
\end{eqnarray}
The radial components $\Phi_{nlj}$ are obtained by solving the
Hartree--Fock problem with the Skyrme force SKM~\cite{Pethick:1995},
$Y_{l,m_l}$ are spherical harmonics, and $\xi_{s,m_s}$ are spinors in
the usual up-down basis. For each $(J,M)$ set of quantum numbers
there are several combinations of single-particle orbitals. We
typically perform several simulations to identify the ground-state and
order of excited states. It is also possible to include a BCS pairing
term in the trial wave function in AFDMC, as has been done for GFMC.
This would allow more accurate treatment of pairing in very
low-density systems, and is currently under development.
For neutron matter calculations we change the antisymmetric part of
the wave function to be the ground state of the Fermi gas, built from
a set of plane waves. The infinite uniform system is simulated by a
cubic periodic box of volume $L^3$ according to the density of the
system. The momentum vectors in this box are
\begin{equation}
\vec k_\alpha=\frac{2\pi}{L}(n_{\alpha x},n_{\alpha y},n_{\alpha z}) \,,
\end{equation}
where $\alpha$ labels the quantum state and $n_x$, $n_y$ and $n_z$ are
integer numbers describing the state. The single-particle orbitals
are given by
\begin{equation}
\phi_\alpha(\vec r,s)=e^{i\vec k_\alpha\cdot\vec r}
\xi_{s,m_s,\alpha}(s) \,.
\end{equation}
Again, it is possible to generalize the neutron matter calculations to
include BCS pairing in the trial state~\cite{Gandolfi:2008b,Gandolfi:2009b}.
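The closed-shell numbers in this periodic basis follow from the degeneracies of the $|\vec n|^2$ shells (two spin states per $\vec k$); a few lines of Python (ours, for illustration) confirm that $N=66$ closes the fifth shell:
\begin{verbatim}
from collections import Counter
from itertools import product

shells = Counter(nx*nx + ny*ny + nz*nz
                 for nx, ny, nz in product(range(-4, 5), repeat=3))
total = 0
for n2 in sorted(shells)[:6]:
    total += 2 * shells[n2]          # factor 2 for spin
    print(n2, 2 * shells[n2], total)
# cumulative occupations 2, 14, 38, 54, 66, 114: N = 66 is closed
\end{verbatim}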
\subsection{AFDMC numerical convergence and error estimate
\label{sec:afdmc_conv}}
AFDMC is very similar in concept to GFMC and convergence and error
estimates are also similar. The statistical errors are easily
evaluated and controllable. AFDMC also uses a constrained-path method
to circumvent the fermion sign problem, and hence the results depend
to some degree on the choice of trial function.
Although it is possible to do some unconstrained propagation with
AFDMC, it is more limited than GFMC because the sampling of the spins
introduces a fermion sign problem earlier. Uncertainties can also be
addressed by choosing several different initial trial states. We find
the constrained path results to be reasonably accurate when compared
with GFMC for small systems.
\begin{table}
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{rr|dd|dd}
& & \multicolumn{1}{c}{GFMC}
& \multicolumn{1}{c|}{AFDMC} & \multicolumn{2}{c}{Difference}\\
& & \multicolumn{1}{c}{(MeV)}
& \multicolumn{1}{c|}{(MeV)}
& \multicolumn{1}{c}{(MeV)}
& \multicolumn{1}{c}{ \% } \\
\hline
$N$ & $J^\pi$ & \multicolumn{4}{c}{5 MeV HO well}\\
\hline
8 & 0$^+$ & 67.00(1) & 67.1(1) & $0.1(1)$ & $0.1$ \\
9 & $\frac{1}{2}^+$ & 80.90(4) & 81.2(1) & $0.3(1)$ & $0.4$ \\
9 & $\frac{5}{2}^+$ & 81.20(3) & 81.9(1) & $0.7(1)$ & $0.9$ \\
10 & 0$^+$ & 92.1(1) & 94.6(1) & $2.5(2)$ & $2.7$ \\
11 & $\frac{5}{2}^+$ & 106.3(1) & 108.0(1) & $1.7(2)$ & $1.6$ \\
11 & $\frac{1}{2}^+$ & 105.9(1) & 108.4(1) & $2.5(2)$ & $2.3$ \\
12 & 0$^+$ & 118.1(1) & 121.1(1) & $3.0(2)$ & $2.5$ \\
13 & $\frac{5}{2}^+$ & 131.5(1) & 135.7(2) & $4.2(3)$ & $3.1$ \\
13 & $\frac{1}{2}^+$ & 130.8(1) & 134.1(2) & $3.3(3)$ & $2.5$ \\
14 & 0$^+$ & 142.2(2) & 146.7(1) & $4.5(3)$ & $3.1$ \\
\hline
$N$ & $J^\pi$ & \multicolumn{4}{c}{10 MeV HO well}\\
\hline
3 & $\frac{3}{2}^-$ & 45.5(0) & 45.0(1) & $-0.5(1)$ & $-1.1$ \\
3 & $\frac{1}{2}^-$ & 46.70(1) & 46.7(1) & $ 0.0(1)$ & $ 0.0$ \\
4 & 0$^+$ & 62.00(1) & 62.9(1) & $ 0.9(1)$ & $ 1.4$ \\
5 & $\frac{3}{2}^-$ & 83.00(1) & 82.9(1) & $-0.1(1)$ & $-0.1$ \\
5 & $\frac{1}{2}^-$ & 84.00(2) & 83.7(1) & $-0.3(1)$ & $-0.3$ \\
6 & 0$^+$ & 98.90(2) & 98.4(1) & $-0.5(1)$ & $-0.5$ \\
7 & $\frac{1}{2}^-$ & 118.9(0) & 118.0(1) & $-0.9(1)$ & $-0.7$ \\
7 & $\frac{3}{2}^-$ & 121.1(0) & 120.6(1) & $-0.5(1)$ & $-0.4$ \\
8 & 0$^+$ & 135.8(0) & 134.7(1) & $-1.1(1)$ & $-0.8$ \\
9 & $\frac{1}{2}^+$ & 163.7(1) & 163.5(1) & $-0.2(2)$ & $-0.1$ \\
9 & $\frac{5}{2}^+$ & 163.2(1) & 162.5(1) & $-0.7(2)$ & $-0.4$ \\
10 & 0$^+$ & 188.1(6) & 188.5(1) & $ 0.4(7)$ & $ 0.2$ \\
11 & $\frac{5}{2}^+$ & 217.0(3) & 216.7(1) & $-0.3(4)$ & $-0.1$ \\
11 & $\frac{1}{2}^+$ & 216.1(3) & 216.6(2) & $ 0.5(5)$ & $ 0.2$ \\
12 & 0$^+$ & 242.0(6) & 240.8(1) & $-1.2(7)$ & $-0.5$ \\
13 & $\frac{5}{2}^+$ & 267.6(6) & 266.3(2) & $-1.3(8)$ & $-0.5$ \\
13 & $\frac{1}{2}^+$ & 268.0(5) & 267.2(2) & $-0.8(7)$ & $-0.3$ \\
14 & 0$^+$ & 291.9(2) & 291.2(2) & $-0.7(4)$ & $-0.2$ \\
\end{tabular}
\end{ruledtabular}
\caption{Comparison of GFMC and AFDMC total energies for neutron drops
in 5 MeV and 10 MeV wells with AV8$^\prime$+UIX. Statistical errors
due to the Monte Carlo sampling are given in brackets.
\label{tab:gfmc-afdmc}}
\end{table}
In Table~\ref{tab:gfmc-afdmc} we compare the GFMC and AFDMC total
energy results for neutron drops in 5 and 10 MeV HO wells. Overall
the agreement is of the order of a few percent or better. Statistical
errors due to sampling can be made arbitrarily small; these are about
0.2\% or less for the AFDMC and GFMC calculations.
For the lower densities generated by the 5 MeV well the GFMC energies
are significantly lower than the AFDMC energies, by up to 3\%. A
plausible explanation for these discrepancies is the fact that we have
not incorporated BCS pairing in the AFDMC calculation. Indeed, the
differences are nearly zero for the closed shell at $N=8$ (where
pairing does not play a role) and grow as we go towards the middle of
the shell, $N=14$. We are therefore pursuing the inclusion of BCS
pairing in the AFDMC calculations of neutron drops in order to improve
the results at low densities. On the other hand, for the 10 MeV well
the GFMC and AFDMC results are all within 1\% of each other, with the
AFDMC typically lower than the GFMC energies.
Systematic errors in calculations of neutron matter are similar in
spirit. The trial wave function can affect results for the energy at
low densities where pairing is important. At larger densities,
though, pairing provides a very small fraction of the total energy of
the system and calculations are much less sensitive to the choice of
the trial state.
In addition to the Monte Carlo errors and the dependence on the choice
of the trial state, we have to consider finite-size effects for
calculations of neutron matter. We enforce periodic boundary
conditions and fix the number of neutrons to be a closed shell in the
periodic free-particle basis. In order to reduce finite-size effects
we performed simulations with 66 neutrons. Free fermions for $N=66$
provide a kinetic energy very close to the infinite limit. Any
possible finite-size effect due to the truncation of the potential
energy is properly taken into account by considering several replicas
of the simulation box as described in
Refs.~\cite{Sarsa:2003,Gandolfi:2009}. Typically these uncertainties
are very small for bulk properties like the ground-state energy.
\section{No Core Full Configuration method
\label{sec:NCFC}}
The NCFC method is based on a series of no-core configuration
interaction calculations with increasing basis dimensions. In this
approach the wave function for the $N$ neutrons is expanded in an
$N$-body basis of Slater determinants of single-particle states, and
the many-body Schr\"odinger equation becomes a large sparse matrix
eigenvalue problem. We obtain the lowest eigenstates of this matrix
iteratively. In a complete basis, this method would give exact
results for a given input interaction $V$. However, practical
calculations can only be done in a finite-dimensional truncation of a
complete basis. We perform a series of calculations until we reach
numerical convergence in a sufficiently large basis space, or we
employ a simple extrapolation~\cite{Maris:2008ax} to the complete
basis.
\subsection{Description of basis space}
Our choice for the basis is the harmonic oscillator (HO) basis so
there are two basis space parameters, the HO energy $\hbar\omega$ and
the many-body basis space cutoff $N_{\hbox{\scriptsize max}}$. The
cutoff parameter $N_{\hbox{\scriptsize max}}$ is defined as the
maximum number of total oscillator quanta allowed in the many-body
basis space above the minimum for that number of neutrons. Numerical
convergence is defined as independence of both basis space parameters
$N_{\hbox{\scriptsize max}}$ and $\hbar\omega$, within evaluated
uncertainties. Note that the basis space parameter $\hbar\omega$ is
not necessarily the same as that of the HO well $\hbar\Omega$ that
confines the neutrons.
We employ a many-body basis in the so-called $M$-scheme: the many-body
basis states are Slater determinants in a HO basis, limited by the
imposed symmetries --- parity and total angular momentum projection
($M$), as well as by $N_{\hbox{\scriptsize max}}$. Each
single-particle HO state has its orbital and spin angular momenta
coupled to good total angular momentum, $j$, and magnetic projection,
$m$. Here we only consider natural-parity states, and utilize $M=0$
for an even number of neutrons, and $M=\frac{1}{2}$ for an odd number
of neutrons. In this scheme a single calculation gives the entire
spectrum for that parity and $N_{\hbox{\scriptsize max}}$.
The NCFC approach satisfies the variational principle and guarantees
uniform convergence from above to the exact eigenvalue with
increasing $N_{\hbox{\scriptsize max}}$. That is, the results for the
energy of the lowest state of each spin and parity, at any
$N_{\hbox{\scriptsize max}}$ truncation, are strict upper bounds on
the exact converged answers and the convergence is monotonic with
increasing $N_{\hbox{\scriptsize max}}$.
The challenge for this approach is that the matrix dimension grows
nearly exponentially with increasing $N_{\hbox{\scriptsize max}}$.
The calculations presented here have been performed with the code
MFDn~\cite{DBLP:conf/sc/SternbergNYMVSL08,DBLP:journals/procedia/MarisSVNY10,DBLP:conf/europar/AktulgaYNMV12}
which has been demonstrated to scale to over 200,000 cores. For small
neutron drops we are able to achieve converged results to within a
fraction of a percent by using a sufficiently large basis space, at
least for neutrons in a HO well of 10 MeV and above. In order to
achieve converged NCFC results directly (i.e. without extrapolation)
for more than 10 neutrons using JISP16, we would need to obtain
eigenstates of matrices that are beyond the reach of present
technologies. However, for up to 22 neutrons, we can utilize a
sequence of results obtained with $N_{\hbox{\scriptsize max}}$ values
that are currently accessible, in order to extrapolate to the infinite
or complete basis space limit.
\subsection{NCFC numerical convergence and error estimate}
We carefully investigate the dependence of the results on the basis
space parameters, $N_{\hbox{\scriptsize max}}$ and $\hbar\omega$. Our
goal is to achieve independence of both of these parameters as that is
a signal for convergence --- the result that would be obtained from
solving the same problem in a complete basis. For the total energy,
$\langle H \rangle$, the guarantee of monotonic convergence from above
to the exact total energy facilitates our choice of extrapolating
function.
We use an extrapolation method that was found to be reliable in light
nuclei: a constant plus an exponential in
$N_{\hbox{\scriptsize max}}$~\cite{Forssen:2008qp,Maris:2008ax}.
That is, for each set of three successive $N_{\hbox{\scriptsize max}}$
values at fixed $\hbar\Omega$, we fit the ground state energy with
three adjustable parameters using the relation
\begin{equation}
E_{gs}(N_{\hbox{\scriptsize max}}) = a \exp(-c\,N_{\hbox{\scriptsize max}}) + E_{gs}(\infty) \,.
\label{extreq}
\end{equation}
Under the assumption that the convergence is indeed exponential,
such an extrapolation should get more accurate as
$N_{\hbox{\scriptsize max}}$ increases; we use the difference between
the extrapolated results from two consecutive sets of three
$N_{\hbox{\scriptsize max}}$ values as an estimate of the numerical
uncertainty associated with the extrapolation.
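For three successive, equally spaced $N_{\hbox{\scriptsize max}}$ values the three-parameter fit of Equation~(\ref{extreq}) has a closed-form solution, the familiar Shanks formula; a Python sketch with made-up energies (for illustration only, not our data):
\begin{verbatim}
def e_infinity(e1, e2, e3):
    # exact solution of e_i = a q^i + E(inf) through three
    # equally spaced points (Shanks transformation)
    return (e1 * e3 - e2 * e2) / (e1 + e3 - 2.0 * e2)

# hypothetical energies at Nmax = 6, 8, 10 with increments -6, -2 MeV:
print(e_infinity(190.0, 184.0, 182.0))   # -> 181.0 MeV
\end{verbatim}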
\begin{figure}[t]
\center\includegraphics[width=0.9\columnwidth]{res_JISP16_N10_total}
\caption{(color online)
Ground state energy for 10 neutrons in a HO well of 10 MeV (top)
and 20 MeV (bottom) for a series of finite basis space calculations
with JISP16. Note that our results in the 20 MeV well converge much
more rapidly than in the 10 MeV well. The symbols represent
extrapolated results as discussed in the text and the error bars
signify our uncertainty estimate at that basis space $\hbar\omega$;
the yellow band represents our final result including numerical
error estimates.
\label{fig:NCFCconvergence}}
\end{figure}
\begin{figure*}[t]
\center{
\includegraphics[width=0.9\columnwidth]{res_JISP16_N10_radius}\qquad
\includegraphics[width=0.9\columnwidth]{res_JISP16_N10_internal}}
\caption{(color online)
The rms radius (left) and internal energy (right) for 10 neutrons in
a HO well of 10 MeV (top) and 20 MeV (bottom) for a series of finite
basis space calculations with JISP16. Note that our results in the
20 MeV well converge much more rapidly than in the 10 MeV well. The
yellow band represents our best estimate (including an error
estimate) for the infinite basis space result.
\label{fig:NCFCconv_other}}
\end{figure*}
For a reasonable range of basis space parameters $\hbar\omega$, this
assumption appears to be valid, as is evident from
Fig.~\ref{fig:NCFCconvergence}. In this figure we show the ground
state energies for 10 neutrons in a 10 MeV and 20 MeV HO well for a
series of finite bases, as well as the extrapolations with their
uncertainties indicated by error bars. The error bars on the
extrapolations using calculations up to $N_{\hbox{\scriptsize max}} =
10$ are all smaller than the corresponding error bars from
calculations up to $N_{\hbox{\scriptsize max}} = 8$; and the error
bars on the extrapolations using calculations up to
$N_{\hbox{\scriptsize max}} = 12$ are all smaller than the
corresponding error bars from $N_{\hbox{\scriptsize max}} = 10$.
Furthermore, we see that the dependence on the basis space
$\hbar\omega$ decreases with increasing $N_{\hbox{\scriptsize max}}$,
and that the extrapolated results for a given $N_{\hbox{\scriptsize
max}}$ agree within each other's error bars. Our total error
estimate is based on a 5 MeV region in $\hbar\omega$ which has the
smallest error bars and minimal $\hbar\omega$ dependence.
In order to perform this extrapolation to the infinite basis space,
we need finite basis space calculations up to
$N_{\hbox{\scriptsize max}}=8$ or higher. Above 22 neutrons, our
calculations are limited to $N_{\hbox{\scriptsize max}}=4$, so we only
have variational upper bounds for the total energy of these larger
neutron drops. For the 10 MeV HO well, our results are not yet
converged at this basis space, but for the 20 MeV HO well, these
upper bounds are likely to be within a few percent of the converged
results.
The internal energies and the rms radii do not converge monotonically
with $N_{\hbox{\scriptsize max}}$, in contrast to the total energy.
Currently, we do not have a reliable method to perform the
extrapolation to the infinite basis space for these observables. The
rms radius seems to converge from above for small basis space
parameters $\hbar\omega$, but from below for large basis space
parameters, see the left panels of Fig.~\ref{fig:NCFCconv_other}.
Hence there is a ``sweet spot'' in the basis space parameter
$\hbar\omega$ for which the radius and equivalently, the external
energy $\langle U_{\hbox{\scriptsize ext}}\rangle$, is approximately
independent of $N_{\hbox{\scriptsize max}}$. We use our results in
this region as an estimate of the infinite basis space result, with
error bars based on the residual $N_{\hbox{\scriptsize max}}$ and
$\hbar\omega$ dependence in a window around the ``sweet spot'', as
indicated by the yellow band in the left panels of
Fig.~\ref{fig:NCFCconv_other}.
The internal energy appears to converge from above, at least for
$\hbar\omega$ values in the region that is optimal for the total and
external energies. However, we have not been able to find a robust
convergence pattern; e.g. an exponential extrapolation does not work
very well, as can be seen in the top right panel of
Fig.~\ref{fig:NCFCconv_other}. The most reliable estimate for the
internal energy in the infinite basis space appears to be the
difference between the (extrapolated) total energy and the external
energy based on the convergence at the ``sweet spot'' explained above,
$E_{\hbox{\scriptsize int}} = \langle H \rangle - \langle
U_{\hbox{\scriptsize ext}} \rangle$. This is depicted by the yellow
band in the right panels of Fig.~\ref{fig:NCFCconv_other}.
In order to make a meaningful estimate of the converged (NCFC)
results for the rms radius and the external and internal energies,
we need to perform a set of calculations for a range of basis space
parameters $\hbar\omega$ at least up to
$N_{\hbox{\scriptsize max}}=10$. We therefore give these results only
up to 14 neutrons.
\section{Results for neutron drops
\label{sec:results}}
\subsection{Total energy}
\begin{table*}
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{rr|dd|dd|dd|d}
& & \multicolumn{2}{c|}{ 5 MeV HO well}
& \multicolumn{2}{c|}{10 MeV HO well}
& \multicolumn{2}{c|}{20 MeV HO well}
& \multicolumn{1}{c}{WS well} \\
\hline
$N$ & $J^\pi$ & \multicolumn{1}{c}{AV8$^\prime$+UIX} & \multicolumn{1}{c|}{JISP16}
& \multicolumn{1}{c}{AV8$^\prime$+UIX} & \multicolumn{1}{c|}{JISP16}
& \multicolumn{1}{c}{AV8$^\prime$+UIX} & \multicolumn{1}{c|}{JISP16}
& \multicolumn{1}{c}{AV8$^\prime$+UIX} \\
\hline
3 & $\frac{1}{2}^-$ & 22.89 & 22.73(1) & 46.69(1) & 46.512 & 97.1(1) & 98.094 & \\
3 & $\frac{3}{2}^-$ & 22.61 & 22.40(1) & 45.48(0) & 44.833 & 91.7(1) & 90.915 & \\
4 & $0^+$ & 29.99 & 29.69(1) & 62.04(1) & 60.842 & 131.1(1) & 126.31 & \\
5 & $\frac{1}{2}^-$ & 41.22(1) & 40.65(15)& 84.02(2) & 82.86(2) & 175.2(1) & 173.00 & \\
5 & $\frac{3}{2}^-$ & 41.02 & 40.4(2) & 82.97(1) & 80.68(2) & 169.5(1) & 162.71 & \\
6 & $0^+$ & 48.52(1) & 47.6(2) & 98.95(2) & 95.74(3) & 205.8(2) & 193.64 & $-$80.6\hphantom{(1)} \\
7 & $\frac{1}{2}^-$ & 59.17(1) & 57.9(2) & 118.9(0) & 115.67(5) & 246.4(2) & 237.11 & $-$90.9(1) \\
7 & $\frac{3}{2}^-$ & 59.73(1) & 58.5(2) & 121.1(0) & 118.9(1) & 254.7(2) & 249.85 & $-$88.6(1) \\
8 & $0^+$ & 67.01(1) & 65.4(3) & 135.8(0) & 132.5(1) & 287.4(2) & 278.32(1)&$-$103.9(1) \\
9 & $\frac{1}{2}^+$ & 80.92(4) & 78.9(1.5)& 163.7(1) & 159.6(4) & 349.8(2) & 334.32(1)&$-$107.8(1)\\
9 & $\frac{3}{2}^+$ & & 80.0(1.5)& & 162.8(6) & 354.5(2) & 344.42(1)& \\
9 & $\frac{5}{2}^+$ & 81.20(3) & 79.3(1.5)& 163.2(1) & 159.4(4) & 343.9(2) & 331.15(1)&$-$106.6(1) \\
10 & $0^+$ & 92.14(8) & 90. (1.5)& 188.1(6) & 182.1(5) & 400.5(2) & 380.41(1)& $-$113.4(1) \\
11 & $\frac{1}{2}^+$ & 105.9(1) & & 216.1(3) & 208.4(1.0)& & 434.38(5)& $-$115.9(2) \\
11 & $\frac{3}{2}^+$ & & & & 208.0(1.0)& & 430.10(4)\\
11 & $\frac{5}{2}^+$ & 106.3(1) & & 217.0(3) & 207.9(1.0)& & 430.41(4)& $-$116.9(2) \\
12 & $0^+$ & 118.1(1) & 116.(6) & 242.(1) & 230.0(1.0)& 509.1(4) & 477.05(5)& $-$123.6(3) \\
13 & $\frac{1}{2}^+$ & 130.8(1) & & 268.0(1.0)&255.8(1.0)& & 529.07(6)& $-$125.0(3) \\
13 & $\frac{3}{2}^+$ & & & & 256.3(1.0)& & 528.74(6)\\
13 & $\frac{5}{2}^+$ & 131.5(1) & & 267.6(6) & 255.7(1.0)& & 524.77(6)& $-$125.9(3) \\
14 & $0^+$ & 142.2(2) & 140.(10) & 291.9(2) & 277.5(1.4)& & 569.3(1) & $-$131.6(7) \\
15 & $\frac{1}{2}^+$ & 160.1(1) & & 316.3(2) & 303.(5) & & 619.4(6) \\
15 & $\frac{3}{2}^+$ & 159.1(2) & & 320.2(2) & & & \\
15 & $\frac{5}{2}^+$ & 160.0(1) & & 317.0(2) & & & & $-$139.3(3) \\
16 & $0^+$ & 171.6(1) & & 341.5(2) & 326.(6) & 730.3(3) & 667.7(6) & $-$142.4(7) \\
17 & $\frac{1}{2}^+$ & 185.5(2) & & 368.8(3) & & & 725.1(8) & \\
17 & $\frac{3}{2}^+$ & 183.9(2) & & 366.5(3) & 352.(5) & & & $-$148.8(2) \\
17 & $\frac{5}{2}^+$ & 184.9(2) & & 371.1(2) & & & & \\
18 & $0^+$ & 195.6(2) & & 392.6(3) & 377.(7) & & 781.(1) & $-$155.1(4) \\
19 & $\frac{1}{2}^+$ & 209.4(2) & & 420.1(2) & 407.(7) & 919.0(3) & 850.(2) & \\
19 & $\frac{3}{2}^+$ & 208.4(2) & & 417.9(3) & 403.(7) & 914.9(4) & 838.(1) & $-$159.6(3) \\
19 & $\frac{5}{2}^+$ & 210.0(3) & & 422.1(2) & 408.(8) & 926.7(4) & 855.(2) & \\
20 & $0^+$ & 219.9(3) & & 441.7(4) & 430.(10) & 976.0(4) & 894.(2) & $-$165.0(1) \\
21 & $\frac{7}{2}^-$ & & & 476.8(4) & 465.(25) & & 956.(4) & \\
22 & $0^+$ & 254.(1) & & 510.5(5) & 495.(25) & 1123.3(7) & 1018.(5)& \\
24 & $0^+$ & 289.1(7) & & 578.9(5) & \le 596. & 1268.(1) & \le 1144. & \\
26 & $0^+$ & 324.0(8) & & 645.0(6) & \le 660. & & \le 1266. & \\
28 & $0^+$ & 355.(1) & & 707.6(7) & \le 723. & 1551.(1) & \le 1379. & \\
30 & $0^+$ & 390.0(8) & & 776.0(9) & \le 786. & & \le 1499. & \\
32 & $0^+$ & 422.(1) & & 843.5(9) & \le 847. & & \le 1614. & \\
34 & $0^+$ & 453.(1) & & 909.9(9) & \le 914. & & \le 1750. & \\
36 & $0^+$ & 486.(1) & & 982.7(8) & \le 986. & & \le 1895. & \\
38 & $0^+$ & 514.(1) & &1046.4(8) & \le 1057. & & \le 2037. & \\
40 & $0^+$ & 546.(1) & &1114.3(9) & \le 1128. & & \le 2177. & \\
42 & $0^+$ & 591.(1) & &1197.1(8) & \le 1206. & & \le 2320. & \\
44 & $0^+$ & & &1278.(1) & & & \le 2473. & \\
\end{tabular}
\end{ruledtabular}
\caption{Total energy with AV8$^\prime$+UIX and JISP16 for several
different confining wells. Error bars for the AV8$^\prime$+UIX
results are statistical only; in addition we expect systematic
errors of a few percent, as discussed in Sec.~\ref{sec:afdmc_conv}.
Error bars for the JISP16 results are total error estimates. (Errors
that are not shown are less than 1 in the last digit.)
\label{tab:totalE}}
\end{table*}
In Table~\ref{tab:totalE} we present the principal results of this
study: the total energies for neutron drops confined in 5 MeV, 10 MeV,
and 20 MeV HO wells with the AV8$^\prime$+UIX and JISP16 potentials.
These HO wells are convenient for {\it ab initio} calculations because one
can probe very low to very high densities with a simple asymptotic
form of the wave function and an arbitrary number of neutrons can be
bound in the well. In order to provide a very different probe of
density functionals in the extreme isospin limit, we also include
select results in a WS well with the AV8$^\prime$+UIX potential.
We show the lowest 0$^+$ energy for even $N$ and lowest values for
several $J^\pi$ for odd $N$. We present results only for natural
parity states. The AV8$^\prime$+UIX values up to $N$=14 were computed
by GFMC while the larger drops were computed using AFDMC, with the
exception of the 20 MeV HO well results. There are no results from
GFMC available for the 20 MeV HO well, due to the strong fermion sign
problem with that external field strength; those results were all
obtained using AFDMC.
The JISP16 values were all computed by NCFC. There are no results
from NCFC available in the 5 MeV trap above 14 neutrons due to poor
convergence with available computer resources. Above 22 neutrons we
only provide strict upper bounds; for the 20 MeV HO well we expect the
converged energies to be within a few percent of these upper bounds.
\begin{figure}
\begin{center}
\includegraphics[width=0.99\columnwidth]{res_Etotal_scaled}
\caption{(color online)
Energy of the lowest neutron drop states confined in a HO well with
$\hbar\Omega=10$ MeV (top) and $\hbar\Omega=20$ MeV (bottom) as a
function of the number of neutrons. Results for AV8$^\prime$ (plus
TNI) were obtained using AFDMC, with MC statistical error bars
as well as a band indicating the 1\% systematic uncertainty
discussed in Sec.~\ref{sec:afdmc_conv};
results for JISP16 are obtained from NCFC with error bars reflecting
the total numerical uncertainty, and strict upper bounds obtained
with NCSM in finite basis spaces. Note the pronounced dips at the
expected HO magic numbers $N=2$, $8$, and $20$.
\label{fig:gs_scaled}}
\end{center}
\end{figure}
Figure~\ref{fig:gs_scaled} shows the energies of $N$ neutrons in two
different HO wells, scaled by $\hbar\Omega\,N^{(4/3)}$; for odd $N$
only the lowest energy found is shown.
The scaling by $\hbar\Omega\,N^{(4/3)}$ is motivated by the expected
results in the local density approximation. The factor $N^{4/3}$ comes
from the traditional scaling of $N$ times the increase in potential
energy per particle, which arises because the radius of the system
grows with particle number as $N^{1/3}$. In addition to the AV8$^\prime$+UIX
and JISP16 values presented in Table~\ref{tab:totalE}, we also show
results for AV8$^\prime$ without any TNI and AV8$^\prime$+IL7. All
interactions show a very pronounced peak for three neutrons, and dips
at the expected HO magic numbers, $N=2$, $8$, $20$, and $40$. The
dips at the HO magic numbers are expected due to the HO nature of the
confining well.
With an equation of state of the form
$E = \xi \frac{\hbar^2}{2m} k_F^2$ with $k_F = [ 3 \pi^2 \rho ]^{1/3}$,
the energy is given by the Thomas--Fermi expression:
$E_{TF} = \xi^{1/2} \hbar\Omega\,(3N)^{(4/3)}/4$. For free fermions
($\xi =1$) the Thomas--Fermi results would be a horizontal line at
$3^{4/3}/4 \approx 1.081$, for a unitary Fermi gas with $\xi = 0.4$
the Thomas--Fermi results would be a horizontal line at 0.684.
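These constants can be understood from a heuristic shell-filling
estimate (a sketch, not the full Thomas--Fermi integral): a dispersion
$\xi\,\hbar^2k^2/2m$ acts like an effective mass $m/\xi$, rescaling the
oscillator frequency to $\sqrt{\xi}\,\Omega$, and filling HO shells
with spin degeneracy 2 up to the shell $n_F$ gives
\[
N \simeq \frac{n_F^3}{3}, \qquad
E \simeq \frac{n_F^4}{4}\,\xi^{1/2}\hbar\Omega
= \frac{\xi^{1/2}\hbar\Omega\,(3N)^{4/3}}{4},
\]
so that $E/(\hbar\Omega\,N^{4/3}) = \xi^{1/2}\,3^{4/3}/4 \approx
1.081\,\xi^{1/2}$, reproducing the two values above.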
The calculated results are all below the free Fermi gas (even for the
case of three neutrons) since the interaction is attractive.
All our results are above the unitary Fermi gas because
there are significant finite-range corrections for neutron matter.
In addition repulsive gradient terms in the density functional are
required to reproduce the ab-initio results~\cite{Gandolfi:2010za}. A detailed investigation
of these effects is being pursued.
From Fig.~\ref{fig:gs_scaled} it is evident that adding UIX to
AV8$^\prime$ increases the energies of neutron drops, whereas IL7
decreases the energies. These results were expected; the two-pion
part of UIX is attractive in the isospin $T=1/2$ triples that appear
in nuclei. However neutron drops contain only $T=3/2$ triples for
which the two-pion part is very small~\cite{Pudliner96,Gandolfi:2012};
this leaves only the repulsive central part of UIX. On the other hand,
IL7 contains the three-pion term that is strongly attractive in
$T=3/2$ triplets~\cite{Pieper:2001}.
The energies with the nonlocal 2-body interaction JISP16
are generally below the AV8$^\prime$ results, but above the
AV8$^\prime$+IL7 ground state energies. In fact, in the 10 MeV
HO trap, the JISP16 results are nearly identical to those with
AV8$^\prime$+IL7 up to about 12 neutrons; as the number of
neutrons increases, the results with JISP16 deviate more and more from
the AV8$^\prime$+IL7 results. In the 20 MeV HO trap, for which
we have more accurate results with JISP16, the results with JISP16 and
with AV8$^\prime$ without TNI are quite similar, even in the
$sd$-shell and beyond. The trend of the upper bounds obtained with
JISP16 follows the trend of the AV8$^\prime$ ground state energies
through the $sd$-shell and into the $pf$-shell, both in the 10 MeV and
in the 20 MeV trap.
As discussed in Sec.~\ref{sec:TNIs} and in
Sec.~\ref{sec:neutronmatter} below, recent studies of the neutron
star mass-radius relationship~\cite{Steiner:2012,Gandolfi:2012} suggest that,
at least at higher densities, the AV8$^\prime$ +
UIX interactions gives a reasonable neutron matter equation of state.
The requirement of a two-solar mass neutron star
implies a repulsive three-neutron interaction at moderate and high densities.
On the other hand, AV8$^\prime$+IL7 gives a much
better description of the ground state energies, spectra, and other
observables for light nuclei (up to $A=12$) than either AV8$^\prime$
or AV8$^\prime$+UIX. This may be why the results
with AV8$^\prime$+IL7 and with JISP16 (which also gives a good
description of light nuclei) are quite similar below 12 neutrons.
However, none of these interactions have been fit to any data beyond
the $p$-shell, and it is unclear which of these interactions is more
realistic for the neutron drops in the $N=8$ to $N=40$ range.
At larger densities AV8$^\prime$+IL7 is too attractive as discussed below.
In the 10 MeV well, the dips in the energies at $N=16$ and $N=32$
suggest subshell closure with AV8$^\prime$+IL7, but not with
AV8$^\prime$+UIX, while the results for AV8$^\prime$ show a hint of
subshell closure at $N=32$. The IL7 TNI does provide a larger
spin-orbit splitting than the UIX three-nucleon interaction.
Similarly, the energies with JISP16
suggest subshell closure at $N=16$ and $N=32$ in the 20 MeV well.
The JISP16 results in the 10 MeV well are not quite accurate enough to
draw firm conclusions regarding subshell closure, and we have
insufficient results for AV8$^\prime$ in the 20 MeV well.
Somewhat surprisingly, there is no indication of subshell closure at
$N=28$. In other words, these results seem to suggest closure of the
combined $f_\frac{7}{2}$ and $p_\frac{3}{2}$ subshell at $N=32$,
rather than closure of just the $f_\frac{7}{2}$ at $N=28$. Note that
the closure of the combined $d_\frac{5}{2}$ and $s_\frac{1}{2}$
subshell at $N=16$ corresponds to the recently discovered subshell
closure at $^{24}$O~\cite{ClosedO24}.
\subsection{Energy differences}
\begin{figure}[thb]
\center\includegraphics[width=0.99\columnwidth]{res_single_diffs}
\caption{(color online) Single energy differences in a 10 MeV (top)
and 20 MeV (bottom) HO well. Results of different Hamiltonians are
compared. AFDMC and GFMC error bars are statistical only; NCFC
error bars reflect the total numerical uncertainty. Horizontal line
segments indicate energy differences expected from pure HO energies.
\label{fig:sdiffs}}
\end{figure}
In Fig.~\ref{fig:sdiffs} we show the difference in total energy
between neutron drops with $N$ and with $N-1$ neutrons. We clearly
see the effect of the HO shells: jumps at 2, 8, and 20 neutrons, at
which the next neutron has to go to the next HO shell. Without
interactions between the neutrons, we would still have this shell
structure, but within each shell, all single energy differences would
be equal, as indicated by the solid reference lines in
Fig.~\ref{fig:sdiffs}. That is, the gross feature of shell structure
arises from the confining well and is evident in the plot of the
single differences as a jump in the calculated energy differences as
one goes from one shell to the next.
\begin{figure}[thb]
\center\includegraphics[width=0.99\columnwidth]{res_double_diffs}
\caption{(color online)
Double energy differences
$\Delta(N)=(-1)^{N+1} [E(N) - \frac{1}{2}\left(E(N-1) + E(N+1)\right)]$
in a 10 MeV (top) and 20 MeV (bottom) HO well. Results of different
Hamiltonians are compared. AFDMC and GFMC error bars are
statistical only; the NCFC error bars are omitted for the 10 MeV HO
well, because they would cover the entire vertical range for 12
neutrons and above, though a significant part of the NCFC
numerical error is systematic, and cancels between neighboring
neutron drops; for completeness, we did include the total numerical
uncertainty for the NCFC results in the 20 MeV HO well.
\label{fig:ddiffs}}
\end{figure}
The detailed fluctuations within a shell are entirely due to the
neutron interactions. The most prominent feature is the neutron
pairing, in particular in the $p$-shell and also in the (beginning) of
the $sd$-shell. This effect can be seen more clearly by looking at the
double difference in total energy
$\Delta(N)=(-1)^{N+1} [E(N) - \frac{1}{2}\left(E(N-1) + E(N+1)\right)]$,
see Fig.~\ref{fig:ddiffs}. The phase $(-1)^{N+1}$ is included to make
the pairing positive definite in the standard BCS theory. Without
interactions, the double differences would be zero, except at the
magic numbers 2, 8, and 20.
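For reference, a minimal Python sketch of these single and double
differences (with statistical errors combined in quadrature and assumed
uncorrelated, which overestimates the uncertainty when errors of
neighboring drops are correlated) is:
\begin{verbatim}
import math

# E maps the neutron number N to a pair (energy, error)

def single_diff(E, N):
    # S(N) = E(N) - E(N-1)
    return E[N][0] - E[N - 1][0]

def double_diff(E, N):
    # Delta(N) = (-1)**(N+1) * [E(N) - (E(N-1) + E(N+1))/2]
    val = E[N][0] - 0.5 * (E[N - 1][0] + E[N + 1][0])
    err = math.sqrt(E[N][1]**2
                    + 0.25 * (E[N - 1][1]**2 + E[N + 1][1]**2))
    return (-1)**(N + 1) * val, err
\end{verbatim}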
Overall, the pairing seems to decrease as $N$ increases, except for
the closed (sub)shells. Note that the pairing in nuclei also
decreases for larger nuclei~\cite{Bertsch:2012}. On the other hand,
the numerical uncertainties increase with $N$, preventing us from
obtaining meaningful results for the pairing beyond 22 neutrons using
the NCFC approach. AFDMC calculations of the pairing gaps will be
more reliable once BCS correlations have been included in the trial
state and this is being pursued. For all methods we expect that the
error in neighboring neutron drops is correlated, resulting in a
reduced error in calculations of energy differences and pairing.
Despite the numerical uncertainties, there are some features that are
likely to be robust in Fig.~\ref{fig:ddiffs}. As expected, the peaks
in the double difference $\Delta(N)$ at the magic numbers 8 and 20
stand out for all of the interactions for which we have results, in
particular for the 10 MeV HO well. In addition, our results suggest
subshell closure at $N=16$ for AV8$^\prime$ without TNIs, with
AV8$^\prime$+IL7, and with JISP16, but not with AV8$^\prime$+UIX.
This closed subshell corresponds to the recently discovered subshell
closure at $^{24}$O~\cite{ClosedO24}, in which TNIs play a crucial
role.
In addition to the closure at $N=16$, we see evidence for subshell
closure at $N=6$ (the $p_\frac{3}{2}$) in the 20 MeV HO well both with
AV8$^\prime$+IL7 and with JISP16, but not in the 10 MeV HO well. We
do not have sufficient data yet to examine the expected closure of the
$f_\frac{7}{2}$ at $N=28$, which was not evident in the plots of the
total energy (see Fig.~\ref{fig:gs_scaled}), nor for a more detailed
analysis of the closure at $N=32$ suggested in
Fig.~\ref{fig:gs_scaled}.
\subsection{Level splittings}
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.99\columnwidth]{splittings_p-shell}
\includegraphics[width=0.99\columnwidth]{splittings_sd-shell}
\caption{(color online)
Spin-orbit splitting in the $p$-shell (top)
and level splittings in the $sd$-shell (bottom),
as a function of external field strength.
Results of different Hamiltonians are compared.
Inset: blowup of the $s_\frac{1}{2}$ and $d_\frac{5}{2}$ levels
for the 5 MeV and 10 MeV HO wells.
\label{fig:splittings}}
\end{center}
\end{figure}
If we look at the single-particle and single-hole states at the
beginning and end of the $p$-shell, see Fig.~\ref{fig:splittings},
we find that the spin-orbit splitting between the $\frac{1}{2}^-$ and
$\frac{3}{2}^-$ increases with $\hbar\Omega$ for all interactions.
For three neutrons, the splitting between these levels is almost the
same for JISP16 and AV8$^\prime$+IL7; however, for seven neutrons the
splitting is significantly enhanced with IL7. On the other hand,
AV8$^\prime$ without TNI and AV8$^\prime$+UIX have almost the same
splitting.
The systematic increase in level splittings with increasing
$\hbar\Omega$ can be understood as follows: With increased
$\hbar\Omega$, the radial shape is increasingly constrained by the HO
potential and the associated gaussian falloff of the radial densities
in the surface region. This increase in level splittings with
$\hbar\Omega$ may then be interpreted as a consequence of the
increased density gradient in the surface region.
In the $sd$-shell the splitting between the $d_\frac{5}{2}$ and
$s_\frac{1}{2}$ levels (solid lines in Fig.~\ref{fig:splittings}) is
much smaller than the splitting between these two levels and the
$d_\frac{3}{2}$ level, in particular for AV8$^\prime$+IL7. This
confirms the subshell closure at $N=16$ that was evident from the
pairing, see Fig.~\ref{fig:ddiffs}. It is also in apparent agreement
with the observation in known nuclei that the subshell closure at 16
neutrons (both $d_\frac{5}{2}$ and $s_\frac{1}{2}$ levels filled) is
much stronger than the subshell closure at 14 neutrons (only the
$d_\frac{5}{2}$ level filled).
Furthermore, notice that in the case of 9 neutrons the level ordering
can change as the strength of the HO well increases: in the weakest
well of 5 MeV (i.e., at very low density), the $s_\frac{1}{2}$ is
slightly below the $d_\frac{5}{2}$ level, but as $\hbar\Omega$
increases, the $d_\frac{5}{2}$ becomes the lowest level.
Interestingly, this happens both with JISP16 and with
AV8$^\prime$+IL7, whereas with AV8$^\prime$+UIX and with AV8$^\prime$
(without TNIs) the $d_\frac{5}{2}$ and $s_\frac{1}{2}$ are basically
degenerate for the 5 MeV HO well.
In the $pf$-shell we find qualitatively similar results with JISP16: a
large spin-orbit splitting between the $f_\frac{7}{2}$ and
$f_\frac{5}{2}$ levels and between the $p_\frac{3}{2}$ and
$p_\frac{1}{2}$ levels, a smaller splitting between the
$p_\frac{1}{2}$ and $f_\frac{5}{2}$ levels, and an even smaller
splitting between the $f_\frac{7}{2}$ and $p_\frac{3}{2}$ levels. All
of these level splittings increase significantly with the strength of
the HO well: at $\hbar\Omega = 5$~MeV, the splittings are almost
negligible, less than an MeV, and within the numerical uncertainty.
On the other hand, at $\hbar\Omega = 20$~MeV (the largest value that
we have considered) the spin-orbit splittings are of the order of
ten(s) of MeV.
\subsection{Internal energies and radii}
\begin{table*}
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{rr|dd|dd|dd|dd|dd|dd}
& & \multicolumn{4}{c|}{ 5 MeV HO well}
& \multicolumn{4}{c|}{10 MeV HO well}
& \multicolumn{2}{c|}{20 MeV HO well}
& \multicolumn{2}{c}{WS well} \\
\hline
$N$ & $J^\pi$ & \multicolumn{2}{c|}{AV8$^\prime$+UIX} & \multicolumn{2}{c|}{JISP16}
& \multicolumn{2}{c|}{AV8$^\prime$+UIX} & \multicolumn{2}{c|}{JISP16}
& \multicolumn{2}{c|}{JISP16} & \multicolumn{2}{c}{AV8$^\prime$+UIX} \\
\hline
3 & $\frac{1}{2}^-$ & 11.5 & 3.56 & 11.2 & 3.58 & 22.3 & 2.60 & 22.02 & 2.60 & 44.31 & 1.928(1)\\
3 & $\frac{3}{2}^-$ & 11.3 & 3.52 & 11.2 & 3.51 & 22.3 & 2.53 & 22.26 & 2.50 & 43.80 & 1.805(1)\\
4 & $0^+$ & 14.8 & 3.55 & 14.4 & 3.56 & 29.1 & 2.61 & 29.10 & 2.57 & 58.92 & 1.869(1)\\
5 & $\frac{1}{2}^-$ & 20.6 & 3.70 & 20.1(2) & 3.70(2) & 40.1 & 2.70 & 39.8 & 2.67 & 79.5 & 1.969(2)\\
5 & $\frac{3}{2}^-$ & 20.6 & 3.68 & 20.3(2) & 3.65(2) & 40.0 & 2.70 & 40.5 & 2.58 & 78.5 & 1.869(2)\\
6 & $0^+$ & 24.2 & 3.67 & 23.8(4) & 3.63(3) & 47.3 & 2.67 & 47.8 & 2.58 & 92.8 & 1.867(1)& 46.9 & 2.70 \\
7 & $\frac{1}{2}^-$ & 29.7 & 3.74 & 29.7(5) & 3.66(3) & 56.8 & 2.71 & 57.3(2)& 2.63 & 111.3(1)& 1.930(2)& 55.6 & 2.75 \\
7 & $\frac{3}{2}^-$ & 29.9 & 3.76 & 29.7(5) & 3.70(3) & 57.2 & 2.75 & 56.9(2)& 2.71 & 113.2(1)& 2.012(2)& 55.2 & 2.81 \\
8 & $0^+$ & 35.0 & 3.64 & 33.2(6) & 3.65(3) & 64.4 & 2.72 & 63.5(2)& 2.68(1)& 126.2(1)& 1.986(2)& 63.2 & 2.75 \\
9 & $\frac{1}{2}^+$ & 41.9 & 3.79 & 40.(2) & 3.77(5) & 77.9 & 2.81 & 76.5(1.)& 2.77(2)& 152.7(2)& 2.045(2)& 70.5 & 3.03 \\
9 & $\frac{3}{2}^+$ & & & 41.(2) & 3.82(5) & & & 77.1(1.)& 2.81(3)& 155.2(2)& 2.088(2)&\\
9 & $\frac{5}{2}^+$ & 42.2 & 3.79 & 41.(2) & 3.78(5) & 78.3 & 2.80 & 77.4(1.)& 2.75(2)& 153.2(2)& 2.024(2)& 73.6 & 2.93 \\
10 & $0^+$ & 46.7 & 3.88 & 45.(2) & 3.85(10)& 88. & 2.89 & 87.0(1.)& 2.81(2)& 176.0(3)& 2.059(2)& 78. & 3.11 \\
11 & $\frac{1}{2}^+$ & 53.7 & 3.97 & & & 99. & 2.97 & 100.(2.) & 2.86(3)& 201.7(8)& 2.095(4)& 89. & 3.21 \\
11 & $\frac{3}{2}^+$ & & & & & & & 101.(2.) & 2.84(3)& 202.1(5)& 2.074(3)&\\
11 & $\frac{5}{2}^+$ & 53.2 & 4.00 & & & 102. & 2.95 & 100.(2.) & 2.85(3)& 202.1(5)& 2.075(3)& 87. & 3.23 \\
12 & $0^+$ & 59.4 & 4.03 & & & 110. & 3.02 & 110.(2) & 2.88(3)& 224.0(5)& 2.090(3)& 97. & 3.20 \\
13 & $\frac{1}{2}^+$ & 65.5 & 4.08 & & & 121. & 3.06 & 123.(2) & 2.91(3)& 248.5(6)& 2.116(3)&106. & 3.29 \\
13 & $\frac{3}{2}^+$ & & & & & & & 123.(2) & 2.91(3)& 249.5(6)& 2.111(3)& \\
13 & $\frac{5}{2}^+$ & 65.9 & 4.09 & & & 120. & 3.06 & 124.(2) & 2.90(3)& 249.9(6)& 2.094(3)&103. & 3.37 \\
14 & $0^+$ & 71.1 & 4.10 & & & 132. & 3.08 & 134.(3) & 2.92(4)& 271.8(7)& 2.099(3)&115. & 3.31 \\
\end{tabular}
\end{ruledtabular}
\caption{
Internal energies (in MeV) and rms radii (in fm) of neutron drops
with various external potentials and a selected set of Hamiltonians.
The results are plotted in Figs.~\ref{fig:int} and \ref{fig:radii}.
The particular many-body method used to produce these results
depends on the external field, the number of neutrons and the
Hamiltonian as described in the text.
\label{tab:intErad}}
\end{table*}
In Table~\ref{tab:intErad} we list our results for the internal energy
$E_{\hbox{\scriptsize int}} =
\langle H \rangle - \langle U_{\hbox{\scriptsize ext}} \rangle$,
as well as for the rms radii for systems up to 14 neutrons
in a HO well with JISP16 and with AV8$^\prime$+UIX,
and in a WS well with AV8$^\prime$+UIX. Note that
for neutron drops in a HO well the radius is directly related to the
external energy, $\langle U_{\hbox{\scriptsize ext}} \rangle =
\frac{1}{2} m \omega^2 \langle r^2 \rangle$
for a HO external field. Overall, the internal energy is typically
slightly less than half of the total energy (see
Table~\ref{tab:totalE} for comparison),
and $\langle U_{\hbox{\scriptsize ext}} \rangle$ is slightly more than
half the total energy. This is to be expected, since the total energy
scales approximately as $\hbar\Omega\,N^{(4/3)} \propto \rho^{2/3}$,
and for all cases where the equation of state is proportional to
$\rho^{2/3}$, the virial theorem will give equal internal energies and
one-body potential energies, each one-half of the total energy.
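As a concrete numerical cross-check (a sketch, assuming that
$\langle r^2 \rangle$ in the expression above denotes the sum over
neutrons, i.e., $N$ times the squared per-neutron rms radius), the
tabulated internal energies and radii indeed reproduce the total
energies:
\begin{verbatim}
# sketch: E_tot ~= E_int + <U_ext>, with
# <U_ext> = (1/2) m Omega^2 N r_rms^2  (per-neutron rms radius)
hbarc = 197.327    # MeV fm
m_n   = 939.565    # MeV (neutron mass)
hw    = 10.0       # hbar*Omega in MeV

def u_ext(N, r_rms):
    return hw**2 * N * r_rms**2 / (2.0 * hbarc**2 / m_n)

# N = 11 (1/2+), AV8'+UIX, 10 MeV well: E_int = 99 MeV,
# r_rms = 2.97 fm from the internal-energy table
print(99.0 + u_ext(11, 2.97))  # ~216.1 MeV, cf. 216.1(3) above
\end{verbatim}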
\begin{figure}
\center\includegraphics[width=0.99\columnwidth]{internal_scaled}
\caption{(color online)
Internal energy for up to 14 neutrons in a HO trap with
AV8$^\prime$+UIX and with JISP16. For details see
Table~\ref{tab:intErad}.
\label{fig:int}}
\end{figure}
In Fig.~\ref{fig:int} we show the internal energy
$E_{\hbox{\scriptsize int}}$, scaled by $\hbar\Omega\,N^{(4/3)}$, of
the lowest $J=0$ and $J=\frac{1}{2}$ states for up to 14 neutrons in a
HO well. In the 10 MeV trap the JISP16 and the AV8$^\prime$+UIX
results are rather close to each other, significantly closer than
the total energies shown in Fig.~\ref{fig:gs_scaled}. Apparently, the
larger differences observed in Fig.~\ref{fig:gs_scaled} arise
primarily from differences in their
$\langle U_{\hbox{\scriptsize ext}} \rangle$ energy shifts. Indeed,
the corresponding rms radii,
and thus $\langle U_{\hbox{\scriptsize ext}} \rangle$, start to deviate
from each other above $N=10$, see Fig.~\ref{fig:radii}.
\begin{figure}
\begin{center}
\includegraphics[width=0.99\columnwidth]{radius}
\caption{(color online)
Radii for the lowest $J$ states up to 14 neutrons in a HO trap with
AV8$^\prime$+UIX and with JISP16. For details see
Table~\ref{tab:intErad}.
\label{fig:radii}}
\end{center}
\end{figure}
The two interactions also give quite similar internal energy results
in the 5 MeV trap as seen in Fig.~\ref{fig:int}, given the rather
large error bars of the NCFC results, and the corresponding radii are
almost identical, at least up to 10 neutrons.
Table~\ref{tab:intErad} shows that both the internal energies and the
rms radii in the WS well are of the same order as those in the 10 MeV HO
trap, even though the total energies are very different. In fact, the
rms radii are nearly identical for the three $p$-shell neutron drops
($N=6$, $7$, and $8$), but in the $sd$-shell there are significant
differences between the HO and the WS radii.
We note that the internal energy in Fig.~\ref{fig:int} displays a similar
odd-even effect due to pairing as we observed in the total energy in
Fig.~\ref{fig:gs_scaled}. The radii of the $J=0$ and $J=\frac{1}{2}$
states also show an odd-even effect, but only in the $p$-shell, for
three to seven neutrons; there is no significant odd-even effect for
these states above $N=8$ in Fig.~\ref{fig:radii}. Also note that the
radii of states with different $J$ in odd neutron systems are slightly
different, in particular in the $p$-shell, and more so with JISP16
than with AV8$^\prime$ + UIX, as can be seen in
Table~\ref{tab:intErad}.
\subsection{Densities}
\begin{figure}
\begin{center}
\includegraphics[width=0.99\columnwidth]{density_Combined_N8}
\includegraphics[width=0.99\columnwidth]{density_Combined_N14}
\caption{(color online)
Radial density distributions for 8 (top) and 14 (bottom) neutrons in
different HO traps with JISP16 and with AV8$^\prime$+UIX.
\label{densities}}
\end{center}
\end{figure}
We present a sample set of radial density distributions computed with
JISP16 and with the AV8$^\prime$+UIX Hamiltonian in
Fig.~\ref{densities}. The band thickness of the JISP16 results,
obtained with NCFC, is our best estimate of the total numerical
uncertainty in these densities; the AV8$^\prime$+UIX results were
calculated with GFMC, and the error bars correspond to the statistical
errors in the GFMC approach. Given the HO nature of the trap, all
density distributions fall like gaussians at distances sufficiently
far from the origin.
The various densities for 8 neutrons (top panel of
Fig.~\ref{densities}, closed $p$-shell) are quite similar for the two
different interactions. The only difference is that the central
densities, below 1 fm, are about 10\% to 20\% higher with JISP16 than
with AV8$^\prime$+UIX, but the shape is essentially the same, and
above 2 to 3~fm the densities are practically on top of each other.
This could be expected from the similar rms radii for these cases
shown in Table~\ref{tab:intErad}. As the HO trap strength increases,
the density distribution gets compressed, the rms radius decreases,
and the central density increases, as one would expect. The radial
shape, a slight dip at $r=0$, is typical for the closed $p$-shell, and
is qualitatively the same for weak and strong HO traps.
However, the shape of the density profile for 14 neutrons is somewhat
different for the two interactions; furthermore, the shape seems to
depend on the strength of the HO trap, at least for JISP16. With
JISP16 in the 20 MeV trap (dashed curve in the bottom panel of
Fig.~\ref{densities}), the density has a clear dip at the center, and
peaks at a distance of about 1~fm from the center. In fact, the shape
of this density is rather similar to that of 8 neutrons in a HO trap.
On the other hand, in the 10 MeV trap there is no evidence for such a
dip; rather, within the estimated numerical accuracy, the density
seems to fall off monotonically from a central value of about 0.11 to
0.12 fm$^{-3}$ with JISP16. In contrast, the densities obtained
with the AV8$^\prime$+UIX Hamiltonian for 14 neutrons seem to be
slightly enhanced in the central region: in the 10 MeV trap the
central density with AV8$^\prime$+UIX is about 20\% higher
than with JISP16.
This difference between the density profiles of 14 neutrons in a HO
trap with JISP16 and AV8$^\prime$+UIX could well be related to the
presence (JISP16) and absence (AV8$^\prime$+UIX) of sub-shell closure
for 16 neutrons; and both are likely to be related to differences in
spin-orbit splittings. It would be interesting to compare these
densities with those obtained with other realistic potentials, and in
particular to investigate the effect of different 3-body forces on the
density profiles as well as on the spin-orbit splittings and sub-shell
closures.
\section{Neutron matter
\label{sec:neutronmatter}}
The equation of state of neutron matter is important to properly fix
the bulk term of Skyrme-type EDFs. We report the AFDMC results for
the energy per neutron as a function of the density in
Table~\ref{tab:eos} and display them in Fig.~\ref{fig:eos}. For the
AFDMC Quantum Monte Carlo calculations, the results are for a system
of 66 neutrons with periodic boundary conditions. The calculation is
very similar to those of the neutron drops, except the single-particle
orbitals in the trial wave function are replaced by plane waves that
respect the periodic boundary condition, as described in
Sec.~\ref{sec:afdmc_wf}. More details can be found in
Refs.~\cite{Gandolfi:2009,Sarsa:2003}. The energy corrections due to
finite-size effects arising from such a simulation are expected to be
extremely small compared to the bulk energies considered here.
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{eos}
\caption{(color online)
Equation of state of neutron matter as a function of the density for
different Hamiltonians.
\label{fig:eos}}
\end{center}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{c|ccc}
\hline
$\rho$ [fm$^{-3}$] & AV8$^\prime$ & AV8$^\prime$+UIX & AV8$^\prime$+IL7 \\
\hline
0.04 & 6.55(1) & 6.79(1) & 6.42(1) \\
0.05 & 7.36(1) & 7.73(1) & 7.11(1) \\
0.06 & 8.11(1) & 8.65(1) & 7.77(1) \\
0.07 & 8.80(1) & 9.57(1) & 8.26(1) \\
0.08 & 9.47(1) & 10.49(1) & 8.75(2) \\
0.09 & 10.12(1) & 11.40(1) & 9.14(2) \\
0.10 & 10.75(1) & 12.39(1) & 9.50(2) \\
0.11 & 11.37(1) & 13.39(1) & 9.78(2) \\
0.12 & 12.00(1) & 14.42(1) & 10.03(2) \\
0.13 & 12.64(1) & 15.52(1) & 10.27(2) \\
0.14 & 13.21(1) & 16.66(1) & 10.41(2) \\
0.15 & 13.84(2) & 17.87(2) & 10.54(3) \\
0.16 & 14.47(2) & 19.10(2) & 10.62(3) \\
\hline
\end{tabular}
\caption{Equation of state of neutron matter as a function of the
density for various Hamiltonians.
\label{tab:eos}}
\end{center}
\end{table}
The effect of TNI is important in the equation of state of neutron
matter beyond half nuclear matter saturation density ($\rho =
0.08~{\rm fm}^{-3}$), as is clear in Fig.~\ref{fig:eos}. The two
different TNIs added to AV8$^\prime$ have opposite effects: UIX is
repulsive, while IL7 is attractive. This is in agreement with the
trend in neutron drops shown in Fig.~\ref{fig:gs_scaled}. Our earlier
discussion of the effects of the different terms in UIX and IL7 apply
equally to the differences observed here. Furthermore, in moderately
large neutron drops ($N > 12$) we have seen that the trend with JISP16
is similar to the trend with AV8$^\prime$ without TNI. We therefore
expect that the equation of state with JISP16 will be similar to that
of AV8$^\prime$ without TNIs.
The equation of state of pure neutron matter with the AV8$^\prime$ NN
interaction alone is rather soft at high densities. It is only
marginally compatible with the recently-observed two-solar mass
neutron
star~\cite{Demorest:2010,Gandolfi:2012,Akmal:1998,Steiner:2012}. The
relevant three-neutron force must be, in aggregate, repulsive at high
densities~\cite{Akmal:1998}. While the three-pion terms in IL7 (and
in $\chi$PT forces) are certainly present, they are far too attractive
in the IL7 model alone. It is possible to make the net three-neutron
force repulsive by adjusting the relative contributions of its short-
and long-range terms, as found in recent studies of three-neutron
interactions and the neutron star mass-radius
relation~\cite{Gandolfi:2012}.
\section{Conclusions
\label{sec:conclusion}}
We have computed the properties of neutron drops confined by external
harmonic oscillator (HO) and Woods Saxon (WS) traps using a variety of
realistic nucleon-nucleon and nucleon-nucleon plus three-nucleon
interactions (TNIs). The combination of results with HO and WS wells
should prove useful in separating bulk and gradient (surface) effects,
and testing the general form of the density functional. The pairing
and spin-orbit splittings may also have very different behavior.
We have employed currently available state-of-the-art many-body
methods to obtain these results and we have quantified the
uncertainties in the results. We observe characteristic features such
as pairing and subshell closures in qualitative agreement with
expectations. Differences in results for the same systems are
attributable in large part to differences in the interactions. We
present and interpret significant sensitivity of some observables to
the TNI. The radii of the neutron drops appear to be rather robust,
that is, approximately independent of the interaction employed.
The results we obtain for the neutron equation of state as a function
of density follow trends seen in the neutron drop results as a
function of the external HO trap. This is significant since the
neutron drops have quantified uncertainties on their total and
internal energies as well as their rms radii, while it is more
difficult to quantify the uncertainty in the calculated neutron matter
equation of state.
We anticipate that results for these extreme and idealized systems may
serve as guides to experiments on very neutron-rich nuclei. We also
hope these results will inform developments of improved energy-density
functionals~\cite{Gandolfi:2010za,Kortelainen:2011ft}.
\begin{acknowledgments}
We thank G. F. Bertsch, S. Bogner, A. Bulgac, F. Coester,
J. Dobaczewski, W. Nazarewicz, S. Reddy, A. Shirokov and R. B. Wiringa
for valuable discussions.
This work is supported by the U.S. DOE SciDAC program
through the NUCLEI collaboration,
by the U.S. DOE Grants
DE-SC-0008485 (SciDAC/NUCLEI),
DE-FG02-87ER40371,
and by the U.S. DOE Office of Nuclear Physics under Contracts
DE-AC02-06CH11357,
and DE-AC52-06NA25396.
This work is also supported by
the U.S. NSF Grant 0904782,
and by the LANL LDRD program.
We thank the Institute for Nuclear Theory at the University of
Washington for its hospitality and the DOE for partial support
during various stages of this work.
Computer time was made available by Argonne's LCRC,
the Argonne Mathematics and Computer Science Division,
Los Alamos Institutional Computing,
the National Energy Research Scientific Computing Center (NERSC),
which is supported by the DOE Office of Science under Contract
DE-AC02-05CH11231,
and by an INCITE award, Nuclear Structure and Nuclear Reactions,
from the DOE Office of Advanced Scientific Computing.
This research used resources of the Oak Ridge Leadership Computing
Facility at ORNL, which is supported by the DOE Office of Science
under Contract DE-AC05-00OR22725,
and of the Argonne Leadership Computing Facility at ANL, which is
supported by the DOE Office of Science under Contract
DE-AC02-06CH11357.
\end{acknowledgments}
\section*{Appendix: Plots for {\boldmath $Z$} and {\boldmath $W$} Events}
\input{zcomp.tex}
\input{wcomp.tex}
\section{Backgrounds}
\label{sec:background}
There are three significant backgrounds in the $\ensuremath{W \rightarrow e \nu}$ sample, whose shapes need to be added to the fast MC templates before comparing to the data distributions:
\begin{itemize}
\item $Z\to ee$ events in which one electron is not detected in a poorly instrumented region of the detector.
\item Multijet events (MJ) in which a jet is misidentified as an electron and $\,\ensuremath{{\slash\kern-.7emE}_{T}}$ arises from misreconstruction.
\item $W\to \tau\nu \to e\nu\nu\nu$ events.
\end{itemize}
\noindent
The $\ensuremath{Z \rightarrow ee}$ component is estimated directly from the $\ensuremath{W \rightarrow e \nu}$ data sample, the MJ component using a matrix method, and the $W\to\tau\nu$ component from simulation. The subsections below provide a detailed description of their determination.
\subsection{$\ensuremath{Z \rightarrow ee}$ Background}
\label{sec:zee_bkgd}
$Z\rightarrow ee$ events are present in the $W\rightarrow e\nu$ sample when there is substantial $\ensuremath{{\slash\kern-.7emE}_{T}}$ from mismeasurement of energy. We estimate the $\ensuremath{Z \rightarrow ee}$ contamination directly from the $W\rightarrow e \nu$ sample, selecting events that pass the full $W$ sample selection, modified to require an additional reconstructed cluster indicating that the selected event is likely a $Z$ boson decay. Most often the second cluster is in the inter-cryostat region (ICR), which is outside the electron acceptance in this analysis and has poor sampling of the event energy flow since the ICD is not included in $\ensuremath{{\slash\kern-.7emE}_{T}}$ reconstruction. The $\ensuremath{Z \rightarrow ee}$ background from events where neither electron is in the ICR is negligible.
Since we cannot directly identify electrons in the ICR, we estimate the number of $\ensuremath{Z \rightarrow ee}$ events using electrons reconstructed as jets in this region and electron track candidates. The jet is required to have a matched track such that the invariant mass of this track and the electron is consistent with the $Z$ boson mass. To estimate the absolute number of $\ensuremath{Z \rightarrow ee}$ events in the $\ensuremath{W \rightarrow e \nu}$ sample, we count the number of candidates passing the $W$ plus the additional jet selection ($N(e,\text{jet})$) and use:
\begin{equation}
N({\ensuremath{Z \rightarrow ee}} \hspace{1mm} \mbox{background}) = {N(e,\text{jet})\over {\epsilon'_{\text{jet}} \times A(e,\text{trk}) }},
\label{eqn:zeebkgdformula1}
\end{equation}
where $\epsilon'_{\text{jet}} = \epsilon_{\text{jet}} \times A(e,\text{jet})/A(e,\text{trk})$ is the relative efficiency to find a jet given the presence of a matching track and $A(e, \text{trk})$ is the track acceptance in the invariant mass window $70 < m_{e,\text{trk}} < 110\,\text{GeV}$, both measured in data control samples. The fraction of $\ensuremath{Z \rightarrow ee}$ background events in the $\ensuremath{W \rightarrow e \nu}$ candidate sample is found to be (1.08$\pm$0.02)\%. The uncertainty is dominated by the precision with which the efficiency $\epsilon'_{\text{jet}}$ is determined and by the limited number of jet objects reconstructed in the ICR consistent within the $\ensuremath{Z \rightarrow ee}$ mass window.
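As a purely illustrative (hypothetical) example of Eq.~(\ref{eqn:zeebkgdformula1}): with $N(e,\text{jet})=5000$, $\epsilon'_{\text{jet}}=0.5$, and $A(e,\text{trk})=0.9$, one would infer $5000/(0.5\times 0.9)\approx 1.1\times 10^{4}$ $\ensuremath{Z \rightarrow ee}$ background events in the $\ensuremath{W \rightarrow e \nu}$ candidate sample.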
\subsection{Multijet Background}
\label{sec:mj_bkgd}
The MJ background is determined using a loose sample obtained by only requiring that the matched track is within $0.05$ in $\Delta \eta$ and within $0.05$ in $\Delta \phi$ from the EM cluster (Sec.~\ref{sec:elreconstruct}), instead of using the standard track matching, which contains track quality requirements (Sec.~\ref{sec:eventselection}). This sample contains all events satisfying the standard selection requirements, but has a significantly higher contamination from MJ background than the standard sample. The probabilities for electron candidates in $\ensuremath{W \rightarrow e \nu}$ events ($\epsilon_e$) and in MJ events ($\epsilon_f$) to pass the complete matching requirements, given that they already satisfy the loose match requirement, are determined in control samples. The probability for real electrons is determined from $\ensuremath{Z \rightarrow ee}$ data using the tag-and-probe method, and the probability for electron candidates in MJ events is determined from dijet events in data. They are parametrized as a function of electron $p_T$ and can be seen in Figs.~\ref{bkgd:eff} and~\ref{bkgd:fr}. The loose sample event yield, $N_L$, the standard sample event yield, $N$, and the two probabilities are then used to determine the MJ background yield in each bin $i$ of a distribution by solving the system of equations
\begin{equation}
\begin{split}
N_{L}^{(i)} & = N_{W}^{(i)} + N_{\rm MJ}^{(i)}, \\
N^{(i)} & = \epsilon_e^{(i)} N_{W}^{(i)} + \epsilon_f^{(i)} N_{\rm MJ}^{(i)},
\end{split}
\end{equation}
for the MJ background, given by $\epsilon_f N_{\rm MJ}$. The contribution from MJ events is found to be $(1.02\pm0.06)$\% of the selected $W\to e\nu$ candidate sample. The uncertainty is dominated by the precision with which the tight track match efficiency is determined.
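The two equations are readily inverted in each bin; a minimal Python
sketch of this matrix method (with hypothetical yields and
probabilities, for illustration only) reads:
\begin{verbatim}
def mj_background(N_loose, N_tight, eps_e, eps_f):
    # Solve  N_loose = N_W + N_MJ
    #        N_tight = eps_e*N_W + eps_f*N_MJ
    # for the MJ yield in the tight (standard) sample.
    N_MJ = (eps_e * N_loose - N_tight) / (eps_e - eps_f)
    return eps_f * N_MJ

# hypothetical inputs for one pT bin:
print(mj_background(N_loose=11000.0, N_tight=10000.0,
                    eps_e=0.95, eps_f=0.15))  # -> ~84 events
\end{verbatim}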
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig37.eps}
\caption{[color online] Tight track match efficiency as a function of the electron $\ensuremath{p_T^e}$ measured relative to the loose track match requirement.\label{bkgd:eff}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig38.eps}
\caption{[color online] Probability of a jet object that passes the loose track match requirement to pass the tight track match requirement.\label{bkgd:fr}}
\end{figure}
\subsection{$W\to\tau\nu$ Background}
\label{sec:tau_bkgd}
The $W\to\tau\nu\to e\nu\nu\nu$ contribution is determined from a simulation of the process using {\sc resbos} for event generation, {\sc tauola}~\cite{b-tauola_1, b-tauola_2, b-tauola_3, b-tauola_4} for $\tau$ lepton decay, and fast MC for detector simulation. Because the electrons arise from a secondary decay, their momenta are lower than for electrons from $\ensuremath{W \rightarrow e \nu}$ decays and their distribution is broader. The background contribution from $W\to\tau\nu$ decays is found to be $(1.668 \pm 0.004)$\%, with the uncertainty dominated by the uncertainty in the $\tau\rightarrow e\nu\nu$ branching ratio~\cite{PDG2012}. The uncertainty in the $M_W$ measurement arising from incorporating the $W\to\tau\nu\to e\nu\nu\nu$ events as background instead of a $M_W$ dependent signal is small.
Propagated $M_W$ uncertainties are at most 1 MeV for both MJ and $W\to \tau \nu$ backgrounds for all three observables, and 1~MeV, 2~MeV, and 1~MeV for the $m_T$, $p^e_T$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ observables for the $Z\to ee$ background. Distributions of the three background contributions are shown in Fig.~\ref{f:bkg}.
\begin{figure*}
\centering
\includegraphics[keepaspectratio,width=\textwidth]{fig39.eps}
\caption{[color online] The (a) $\ensuremath{m_T}$, (b) $\ensuremath{p_T^e}$, and (c) $\ensuremath{{\slash\kern-.7emE}_{T}}$ distributions for the three backgrounds $\ensuremath{Z \rightarrow ee}$ (red), multijet (black) and $W\to\tau\nu$ (blue) with absolute
normalization.\label{f:bkg}}
\end{figure*}
\section{Combination}
\label{sec:combination}
The measurements from the three observables are correlated. Correlation matrices for the $W$ boson data sample statistical uncertainties, the electron energy scale, the recoil scale and resolution, and the PDFs are determined using ensemble tests and standard uncertainty propagation. The resulting correlation matrices are shown in Table~\ref{t-corr}. The other model uncertainties besides PDF listed in Table~\ref{t:syst} are assumed to have a 100\% correlation among the $\ensuremath{m_T}$, $\ensuremath{p_T^e}$ and $\ensuremath{{\slash\kern-.7emE}_{T}}$ fits. The electron energy scale uncertainty is shown to also be fully correlated among the three results. The different sources of uncertainty are assumed to be uncorrelated with each other.
\def\hphantom{00}{\hphantom{00}}
\begin{table}[hbtp]
\caption{Correlation matrices for the $W$ boson statistical, recoil scale and resolution, and the PDF uncertainties determined for the $4.3\,\text{fb}^{-1}$ data sample. The correlation matrices use the same ordering as in Eq.~\ref{e:cmdef}.\label{t-corr}}
\begin{tabular}{lc}\hline\hline
Source & \hphantom{00} 4.3~fb$^{-1}$ Correlation Matrices \hphantom{00} \T\B\\ \hline
$W$ boson statistics &
$\left(
\begin{array}{ccc}
1 & 0.658 & 0.744 \\
0.658 & 1 & 0.436 \\
0.744 & 0.436 & 1 \\
\end{array}
\right)$
\rule{0pt}{7.0ex} \\
\hphantom{SPACE} & \\
Recoil scale and resolution &
$\left(
\begin{array}{ccc}
1 & 0.754 & 0.571 \\
0.754 & 1 & 0.128 \\
0.571 & 0.128 & 1 \\
\end{array}
\right)$
\\
\hphantom{SPACE} & \\
PDF &
$\left(
\begin{array}{ccc}
1 & 0.990 & 1 \\
0.990 & 1 & 0.988 \\
1 & 0.988 & 1\\
\end{array}
\right)$
\rule[-6.0ex]{0pt}{0pt}\\\hline\hline
\end{tabular}
\end{table}
The total correlation matrix including all uncertainties is
\begin{equation}
\bordermatrix{
& \ensuremath{m_T} & \ensuremath{p_T^e} & \ensuremath{{\slash\kern-.7emE}_{T}} \cr
\ensuremath{m_T} & 1 & 0.89 & -0.86 \cr
\ensuremath{p_T^e} & 0.89 & 1 & -0.75 \cr
\ensuremath{{\slash\kern-.7emE}_{T}} & -0.86 & -0.75 & 1}.
\label{e:cmdef}
\end{equation}
The three measurements can be combined using the BLUE method~\cite{blue_1, blue_2}. Using the correlation matrices from Table~\ref{t-corr} and the uncertainties from Tables~\ref{t:answ} and \ref{t:syst}, we find weights of 1.08, 0.11, and -0.19 for the $\ensuremath{m_T}$, $\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ measurements, respectively. The negative weight for the $\ensuremath{{\slash\kern-.7emE}_{T}}$ measurement arises from the large correlation it has with the other measurements, as well as its relatively larger uncertainty. The values of the correlations between the $\ensuremath{{\slash\kern-.7emE}_{T}}$ measurement and the other two receive large contribution from the assumed 100\% correlation in the $W$ production and decay model uncertainties. Because of the relatively larger uncertainty, the inclusion of the $\ensuremath{{\slash\kern-.7emE}_{T}}$ measurement in the combination would not modify the final uncertainty. Thus, we choose to combine only the $\ensuremath{m_T}$ and the $\ensuremath{p_T^e}$ measurements, which despite being strongly correlated, have similar systematic uncertainties. With this choice, the weights for the combination are 0.87 and 0.13 for the $\ensuremath{m_T}$ and $\ensuremath{p_T^e}$ measurements, respectively. We obtain:
\begin{equation}
\begin{split}
M_W &= 80.367 \pm 0.013\thinspace \text{(stat)} \pm 0.022\thinspace \text{(syst)\ GeV}\\
&= 80.367 \pm 0.026\, \text{GeV}.
\end{split}
\end{equation}
The $\chi^2$ probability of this combination is 2.8\%. The inclusion of the $\ensuremath{{\slash\kern-.7emE}_{T}}$ measurement would give a negligible change in the average value of $M_W$. This result is combined with an earlier D0 measurement~\cite{OurPRL} to give the new D0 Run~II result of
\begin{equation}
M_W = 80.375 \pm 0.023\ \text{GeV}.
\end{equation}
For the combination of this new measurement and the measurement in
Ref.~\cite{OurPRL}, the production model uncertainties are treated as
fully correlated between the two measurements, and all other uncertainties,
dominated by statistics, are assumed to be uncorrelated.
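For two correlated measurements the BLUE weights have a closed form,
$w = \Sigma^{-1}u/(u^T\Sigma^{-1}u)$ with $u=(1,1)^T$; a small Python
sketch (with illustrative values and uncertainties, not the exact
inputs of Tables~\ref{t:answ} and \ref{t:syst}) reads:
\begin{verbatim}
import numpy as np

def blue2(values, sigmas, rho):
    # 2x2 covariance with correlation rho between the measurements
    c = rho * sigmas[0] * sigmas[1]
    cov = np.array([[sigmas[0]**2, c], [c, sigmas[1]**2]])
    winv = np.linalg.solve(cov, np.ones(2))   # Sigma^{-1} u
    w = winv / winv.sum()                     # BLUE weights
    comb = float(w @ np.asarray(values))
    sigma = 1.0 / np.sqrt(winv.sum())         # combined uncertainty
    return comb, sigma, w

# illustrative inputs only (GeV): m_T and pT_e results with a
# correlation of 0.89
print(blue2([80.371, 80.343], [0.026, 0.028], 0.89))
\end{verbatim}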
\section{Results}
\label{sec:results}
Figure~\ref{fig:zfinal} shows the agreement between data and fast MC in fitting the invariant mass distribution of $\ensuremath{Z \rightarrow ee}$ events. For an input value $M_Z = 91.188\,\text{GeV}$ used in the fast MC tuning, the value returned by the post-tuning fit was $91.193 \pm 0.017\,\text{(stat)}\,\text{GeV}$. Figure~\ref{fig:fits} shows comparisons of the data to the fast MC for the distributions we use to measure $M_W$, including the fitting ranges used. The fitting ranges are determined by minimizing the sum in quadrature of the PDF and the expected statistical uncertainties from pseudo-experiments, which are the uncertainties most sensitive to the choice of fitting range.
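Schematically (a sketch with hypothetical uncertainty maps, not the
actual pseudo-experiment machinery), the choice of fit window amounts
to:
\begin{verbatim}
def best_fit_range(windows, sigma_stat, sigma_pdf):
    # windows: candidate (lo, hi) fit ranges; sigma_stat, sigma_pdf:
    # dicts mapping a window to its uncertainty; pick the window
    # minimizing the quadrature sum of the two uncertainties
    return min(windows,
               key=lambda w: (sigma_stat[w]**2
                              + sigma_pdf[w]**2)**0.5)
\end{verbatim}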
\begin{figure}[b]
\includegraphics[width=\linewidth]{fig40.eps}
\caption{[color online] (a) The dielectron invariant mass distribution in $\ensuremath{Z \rightarrow ee}$ data compared to the fast MC and (b) the $\chi$ values, where $\chi_i = \Delta N_i/\sigma_i$ for each bin in the distribution. $\Delta N_i$ is the difference between the number of events for data and fast MC and $\sigma_i$ is the statistical uncertainty in bin $i$.
\label{fig:zfinal}}
\end{figure}
\begin{figure*}[hbpt]
\includegraphics[width=0.32\linewidth]{fig41a.eps}
\includegraphics[width=0.32\linewidth]{fig41b.eps}
\includegraphics[width=0.32\linewidth]{fig41c.eps}
\caption{[color online] Distributions of (a) $\ensuremath{m_T}$, (b) $\ensuremath{p_T^e}$, and (c) $\ensuremath{{\slash\kern-.7emE}_{T}}$ for data and fast MC with backgrounds. The $\chi$ values are shown below each distribution, where $\chi_i = \Delta N_i/\sigma_i$. $\Delta N_i$ is the difference between the number of events for data and fast MC and $\sigma_i$ is the statistical uncertainty in bin $i$. The fit ranges are indicated by the double-ended horizontal arrows.\label{fig:fits}}
\end{figure*}
\section{Conclusions}
\label{sec:conclusions}
We have presented a detailed description of the $W$ boson mass measurement using the $\ensuremath{W \rightarrow e \nu}$ mode and $4.3$~fb$^{-1}$ of D0 integrated luminosity recorded between 2006 and 2009. Three measurements are performed, using three kinematic variables $\ensuremath{m_T}$, $\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$. The $\ensuremath{m_T}$ and $\ensuremath{p_T^e}$ measurements are combined to give the result
\begin{equation}
\begin{split}\nonumber
M_W & = 80.367 \pm 0.013\thinspace \text{(stat)} \pm 0.022\thinspace \text{(syst)}\, \text{GeV} \\
& = 80.367 \pm 0.026\,\text{GeV}.\nonumber
\end{split}
\end{equation}
This result is combined with an earlier D0 measurement based on $1\,\text{fb}^{-1}$ of data and similar analysis techniques to give
\begin{equation}\nonumber
M_W = 80.375 \pm 0.023\, \text{GeV}.
\end{equation}
This measurement is in agreement with other measurements, and its precision equals that of the world average computed prior to this paper and the most recent CDF measurements~\cite{combD0CDF}.
Figure~\ref{fig:higgs12} shows this combined measurement, the world average top quark mass measurement~\cite{Aaltonen:2012ra}, and the consistency among these and a Higgs boson mass of $M_H = 125.7\,\text{GeV}$.
\begin{figure}
\includegraphics[width=\linewidth]{fig50.eps}
\caption{[color online] The D0 Run II measurement of $M_W$ shown with the world-average mass of the top quark $m_t$~\cite{Aaltonen:2012ra} at 68\% C.L. by area. The new world-average for $M_W$~\cite{combD0CDF} is also shown. The thin blue band is the prediction of $M_W$ in the Standard Model given by Eq.~\ref{eq:SMWmassPred}, assuming $M_H=125.7\pm 0.4\,\text{GeV}$.\label{fig:higgs12}}
\end{figure}
\section{Consistency Checks}
\label{sec:checks}
In this section we present consistency checks of the analysis. Two kinds of checks are performed. For the first, we vary the fit ranges shown in Table~\ref{t:answ} used in the final $M_W$ fits. For the second, we determine the $W$ and $Z$ boson masses for many different subsets of the data. We then determine whether the ratio of the $W$ boson mass to the $Z$ boson mass is stable. The subsets are defined using variables that are {\em a priori} considered to be difficult to describe or which have critical impact on the result. After consideration of the systematic uncertainties and their correlations, we find that each of these consistency checks shows good agreement among the subsets of data.
\subsection{Fitting Range}
To study the impact of the fit ranges shown in Fig.~\ref{fig:fits} and used to
determine $M_W$, the $M_W$ measurements are repeated by changing the range.
Figure~\ref{f:range} shows the variation resulting from these tests applied to
the $\ensuremath{m_T}$ distribution. The result is stable within the uncertainty as the
fit range is varied. Similar studies of the fit ranges for $\ensuremath{p_T^e}$ and $\ensuremath{{\slash\kern-.7emE}_{T}}$ also show stable results.
\begin{figure}[ht]
\includegraphics[width=0.94\linewidth]{fig42.eps}
\caption{Variations in $M_W$ determined from fits to the $\ensuremath{m_T}$ spectrum as the fit range is changed. (a) Impact of varying the lower edge of the $m_T$ fit range, and (b) the impact of varying the upper edge. For each of the variations the differences between the result from the varied range and the result from the nominal range are shown. The uncertainties represent the statistical uncertainties of the varied range fits.\label{f:range}}
\end{figure}
\subsection{Instantaneous Luminosity}
We divide the $W$ and $Z$ boson samples into four subsets of different instantaneous luminosity per bunch using the same criteria as for the parametrization of the electron identification efficiencies (Sec.~\ref{sec:DataHack}) and for the tuning of the absolute EM energy scale (Sec.~\ref{sec:elec_energy}). The ratios of the $W$~boson mass to the $Z$~boson mass measurements are shown in Fig.~\ref{fig:CheckLumi}.
\begin{figure}[ht]
\centering
\includegraphics [width=0.94\linewidth] {fig43.eps}
\caption{
[color online] The measured $M_W/M_Z$, separately for the $\ensuremath{m_T}$, $\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ observables and in four bins of instantaneous luminosity, in units of $36\times 10^{30}\,\text{cm}^{-2}\text{s}^{-1}$. The error bars for each observable represent the statistical uncertainty due to limited size of the $W$~boson sample. The yellow bands indicate the contribution from the $Z$~boson statistics, which is fully correlated for the three observables. The three vertical lines with hashed bands indicate the results from the three observables for the full data sample. When systematic uncertainties are considered, the measured $M_W/M_Z$ values are consistent.}
\label{fig:CheckLumi}
\end{figure}
\subsection{Data-Taking Period}
We divide the data into four data-taking periods. The first two and the last two are separated by a one-month accelerator shutdown. Each half is divided into two periods with equal integrated luminosities. The results are given in Fig.~\ref{fig:CheckTime}.
\begin{figure}[ht]
\centering
\includegraphics [width=0.94\linewidth] {fig44.eps}
\caption{
[color online] The measured ratio $M_W/M_Z$, separately for the $\ensuremath{m_T}$, $\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ observables and for four data-taking periods. The uncertainties for each observable represent the combined statistical uncertainty due to limited $W$~statistics and $Z$~statistics. The three vertical lines with hashed uncertainties indicate the results from the three observables for the full data sample.}
\label{fig:CheckTime}
\end{figure}
\subsection{Electron {\boldmath $\eta_{\rm det}$}}
We divide the data into five samples as defined in Table~\ref{table:StandardEtaBins}. This is the same categorization that is used in the determination of the $\eta_{\rm det}$~dependence of the EM energy scale (Sec.~\ref{sec:EMscaleEtaAdj}). The measured $W$~boson mass for each of the five sub-samples is shown in Fig.~\ref{fig:CheckEta}. We do not show the mass ratio because we apply an explicit $\eta_{\rm det}$ dependent calibration and there are two electrons in each $\ensuremath{Z \rightarrow ee}$ event.
The electron energy scale in a single $\eta_{\rm det}$ region is determined from $\ensuremath{Z \rightarrow ee}$ events in which one decay electron is in the given region while the other can be anywhere else in the CC. Therefore, there are systematic anti-correlations between the $\eta_{\rm det}$ bins, whose precise values are difficult to calculate since it would involve a simultaneous 10-dimensional fit for the parameters in the electron energy response model (Sec.~\ref{sec:elec_energy}) in each of the five sub-samples. If this simultaneous determination could be performed, we could calculate the probability that a disagreement at least as extreme as the one observed in the data would happen assuming a common value for $M_W$ across the five $\eta_{\rm det}$ bins. In the absence of the exact value, a lower bound on this probability can be given assuming no correlation between the five bins and that the systematic uncertainty in each bin scales as $\sqrt{5}\times 16\,\text{MeV}$. With these assumptions, considering the electron energy scale and PDF systematic uncertainties together with the statistical uncertainties, we find lower bounds on the probability of 35\%, 26\%, and 81\% for the $\ensuremath{m_T}$, $\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ fits, respectively, which shows consistency among the $\eta_{\rm det}$ regions.
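This lower-bound estimate can be sketched in a few lines of Python
(hypothetical inputs; a simplified version that includes only the
statistical and inflated scale uncertainties, not the PDF term):
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def consistency_prob(masses, stat, syst=np.sqrt(5) * 0.016):
    # masses, stat in GeV, one entry per eta_det bin; the scale
    # systematic is inflated to sqrt(5) x 16 MeV per bin
    m = np.asarray(masses)
    s2 = np.asarray(stat)**2 + syst**2
    mean = np.sum(m / s2) / np.sum(1.0 / s2)
    x2 = np.sum((m - mean)**2 / s2)
    return chi2.sf(x2, df=len(m) - 1)
\end{verbatim}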
\begin{figure}[ht]
\centering
\includegraphics [width=0.94\linewidth] {fig45.eps}
\caption{
[color online] Measured $M_W$ from the $\ensuremath{m_T}$, $\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ observables, separately for five different regions in electron~$|\eta_{\rm det}|$. The error bars for each observable represent the statistical uncertainty of the $W$~boson sample. The three vertical lines with hashed bands indicate the results from the three observables for the full data sample. When systematic uncertainties are considered, the measured $M_W$ values are consistent.}
\label{fig:CheckEta}
\end{figure}
\subsection{Hadronic Recoil {\boldmath \ensuremath{u_{\parallel}}}}
We split the $W$~boson sample into a sample with $\ensuremath{u_{\parallel}} < 0$ and a sample with $\ensuremath{u_{\parallel}} > 0$. There is no equivalent splitting for the $Z$~boson sample because the two electrons from each $Z$~boson decay are reconstructed in approximately opposite directions in the transverse plane. We therefore show only the $M_W$ fits in Fig.~\ref{fig:CheckUpar}.
\begin{figure}[ht]
\centering
\includegraphics [width=0.94\linewidth] {fig46.eps}
\caption{[color online] The measured $M_W$ from the $\ensuremath{m_T}$, $\ensuremath{p_T^e}$ and $\ensuremath{{\slash\kern-.7emE}_{T}}$ observables, separately for positive and negative~$\ensuremath{u_{\parallel}}$. The three vertical lines with hashed bands indicate the results from the three observables for the full data sample.
}
\label{fig:CheckUpar}
\end{figure}
\subsection{Electron {\boldmath $\phi_{\text{mod}}$} Fiducial Requirement}
The nominal requirement, $0.1 \leq \phi_{\mathrm{mod}} \leq 0.9$, removes 10\% of the phase space at each edge of each CC EM~module (Sec.~\ref{sec:eff_phimod}). We also study four tighter versions of the requirement, namely $0.125 \leq \phi_{\mathrm{mod}} \leq 0.875$, $0.15 \leq \phi_{\mathrm{mod}} \leq 0.85$, $0.2 \leq \phi_{\mathrm{mod}} \leq 0.8$, and $0.25 \leq \phi_{\mathrm{mod}} \leq 0.75$, which remove 12.5\%, 15\%, 20\% and 25\%, respectively, of the acceptance at each edge of each CC EM~module. The effects of these variations are summarized in Fig.~\ref{fig:CheckPhiMod}. The measured $M_W/M_Z$ values are consistent for all variations.
\begin{figure}[ht]
\centering
\includegraphics [width=0.94\linewidth] {fig47.eps}
\caption{
[color online] The measured ratio $M_W/M_Z$, separately for the $\ensuremath{m_T}$,
$\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ observables and for four $\phi_{\mathrm{mod}}$ selection variations. The numbers in parenthesis indicate which fraction of the CC~EM~module around its center is included in the electron fiducial region. The three vertical lines with hashed bands indicate the results from the three observables for the full data sample.
}
\label{fig:CheckPhiMod}
\end{figure}
\subsection{Hadronic Recoil {\boldmath $u_T$} Requirement}
The nominal requirement of $u_T < 15$~GeV is changed to $u_T < 10$~GeV and $u_T < 20$~GeV. The effects of these variations are summarized in Fig.~\ref{fig:CheckUt}. We find that, for both variations of the maximum $u_T$ requirement, the measured values of $M_W$ are consistent with the nominal one.
\begin{figure}[ht]
\centering
\includegraphics [width=0.94\linewidth] {fig48.eps}
\caption{
[color online] The measured ratio $M_W/M_Z$, separately for the $\ensuremath{m_T}$, $\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ observables and for two $u_T$ variations. The three vertical lines with hashed bands indicate the results from the three observables with the nominal $u_T$ requirement.
}
\label{fig:CheckUt}
\end{figure}
\subsection{Hadronic Recoil {\boldmath $\phi$}}
The last division is based on recoil~$\phi$. We divide the data sample into eight subsets, as defined in Fig.~\ref{fig:CheckPhiRecoil}. The results for the ratio of the $W$ boson mass to the $Z$ boson mass are shown in the same figure. The measured $M_W/M_Z$ values are consistent for all regions.
\begin{figure}[ht]
\centering
\includegraphics [width=0.94\linewidth] {fig49.eps}
\caption{
[color online] The measured ratio $M_W/M_Z$, separately for the $\ensuremath{m_T}$, $\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ observables and for eight bins in recoil~$\phi$.
}
\label{fig:CheckPhiRecoil}
\end{figure}
\section{Data Reconstruction}
\label{sec:dat}
The data sample for this Run~IIb measurement includes data with a total integrated luminosity of $4.3\, {\rm fb}^{-1}$ taken between June 2006 and June 2009. Figure~\ref{fig:lumiprofiles} compares the instantaneous luminosity ($L$) profile of this Run~IIb measurement with the profile of our previous measurement~\cite{OurPRL} using data recorded from 2002--2006 (Run~IIa). Here and in the rest of the paper, instantaneous luminosity is given as a multiple of $36\times 10^{30}\,\text{cm}^{-2}s^{-1}$ since there were 36 $p\bar{p}$ bunch crossings per turn in the Tevatron Collider. The Run~IIb instantaneous luminosity is significantly higher and much of the effort in preparing this measurement is dedicated to dealing with the multiple interactions (pileup) and calorimeter gain variation resulting from the high beam intensity in Run~IIb. For an instantaneous luminosity of $8\times 36\times 10^{30}\,\text{cm}^{-2}s^{-1}$, we expect an average of 10 simultaneous inelastic interactions per bunch crossing.
\begin{figure}[hbpt]
\centering
\includegraphics [width=\linewidth] {fig04.eps}
\caption{[color online] Instantaneous luminosity profiles for Run~IIa and Run~IIb. The instantaneous luminosity is given as a multiple of $36\times 10^{30}\,\text{cm}^{-2}s^{-1}$ since there were 36 $p\bar{p}$ bunch crossings per turn in the Tevatron Collider.}
\label{fig:lumiprofiles}
\end{figure}
This high instantaneous luminosity results in extra $p{\overline p}$ interactions in the same beam crossing as the event of interest. We measure the effect of this pileup by collecting $p{\overline p}$ interactions in random beam crossings which are labeled zero-bias (ZB) events. There are also extra interactions not due to the hard parton-parton scattering of interest coming from spectator partons in the same $p{\overline p}$ collision as the hard scattering. These extra interactions are studied using minimum-bias (MB) events, selected by requiring a coincidence between luminosity-monitor scintillation counters. The selection requires zero or one reconstructed primary (hard collision) vertex. The number of multiple interactions accompanying an event of interest scales with instantaneous luminosity, while the contribution of spectator partons is independent of it.
\subsection{Electron Reconstruction}
\label{sec:elreconstruct}
The measured EM energy associated with an electron ($E^{\rm unc}_{\rm EM}$) in the central calorimeter is the sum of the energies in all EM cells whose centers lie in a cone of radius \mbox{$\Delta R = \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2}=0.2$} centered on the tower with the highest transverse energy. The definition of the electron energy reconstruction cone (13 towers) is shown in Fig.~\ref{fig:ewindow}. The total uncorrected energy $E^{\rm unc}_{\rm tot}(\Delta R)$ is the sum of the energies in all cells within a given cone of size $\Delta R$ centered on the central tower, over all layers of the calorimeter, including the hadronic calorimeter layers.
\begin{figure}[htbp]
\includegraphics[width=0.8\linewidth]{fig05.eps}
\caption{The 13 calorimeter towers defined as the electron reconstruction cone. The cone is centered on the tower with the highest transverse energy. A circle of radius $\Delta R=0.2$ is shown for comparison.}
\label{fig:ewindow}
\end{figure}
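For concreteness, the cone sum just described can be written as a short sketch (illustrative only; the cell container, the field names, and the \texttt{delta\_r} helper are assumptions of this example, not the D0 reconstruction code):
\begin{verbatim}
import math

def delta_r(eta1, phi1, eta2, phi2):
    # Distance in (eta, phi) space; the phi difference is
    # wrapped into [-pi, pi].
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def em_energy_in_cone(em_cells, seed_eta, seed_phi, cone=0.2):
    # Sum the uncorrected EM cell energies whose centers lie
    # within Delta R < cone of the highest-E_T tower (the seed).
    return sum(c["E"] for c in em_cells
               if delta_r(c["eta"], c["phi"],
                          seed_eta, seed_phi) < cone)
\end{verbatim}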
The identification of this cluster of EM energy as a candidate true electron is based on the following four parameters (a schematic computation of these quantities is sketched after the list):
\begin{itemize}
\item{\bf EM fraction:} A true electron will deposit nearly all of its energy in the EM layers of the calorimeter. Therefore the EM fraction
\begin{equation}
f_{\text{EM}} \equiv {E^{\rm unc}_{\rm EM}(\Delta R<0.2) \over E^{\rm unc}_{\rm tot}(\Delta R<0.2)}
\end{equation}
is expected to be close to 1.
\item{\bf Isolation:} In an electron shower most of the energy is deposited in a narrow cone with little energy around it. Therefore
\begin{equation}
f_{\text{iso}} \equiv { {E^{\rm unc}_{\rm tot}(\Delta R<0.4) - E^{\rm unc}_{\rm EM}(\Delta R<0.2)} \over {E^{\rm unc}_{\rm EM}(\Delta R<0.2)}}
\end{equation}
is expected to be close to zero. Isolation provides discrimination against hadronic showers, which tend to be wider.
\item{\bf HMatrix:} The transverse and longitudinal shapes of an electron shower are well-modeled by MC simulations. Therefore, it is possible to determine a multivariate likelihood based on a set of variables whose correlations and variances allow the discrimination of electron showers. The variables used are:
\begin{itemize}
\item {\bf HMatrix7} (used in the CC) is built from the following variables: EM fractions in layers 1, 2, 3, 4, shower transverse width in the $\phi$ direction, $\log(E^{\rm unc}_{\rm tot})$, and $z_V$ (the production vertex $z$ coordinate).
\item {\bf HMatrix8} (used in the EC) is built from the same variables as HMatrix7 plus the shower width in the direction perpendicular to the beam in the plane of the third layer of the calorimeter (EM3).
\end{itemize}
The inverse of the likelihood covariance matrix is used to determine a $\chi_{\text{HM}}^2$ value for an EM cluster, which should be small if the cluster results from an electron shower~\cite{HMatrix_1, HMatrix_2}.
\item{\bf Track match:} A track is reconstructed from SMT and CFT hits and is required to have $p_T>10$ GeV. It is considered to be matched with an EM cluster if it is within $0.05$ in $\Delta \eta$ and within $0.05$ in $\Delta \phi$. Here, $\Delta\eta$ and $\Delta\phi$ are the distances between the cluster centroid, as determined by its cells in EM3 with weights proportional to the logarithm of the cell energy, and the extrapolation of the track to this layer of the calorimeter. The quality of the match is determined by
\begin{equation}
\chi_{\rm TM}^2 \equiv {\Bigl({{\Delta \phi} \over {\sigma_\phi}}\Bigr)^2
+ \Bigl({{\Delta \eta} \over {\sigma_\eta}}\Bigr)^2},
\end{equation}
where $\sigma_\phi$ and $\sigma_\eta$ are the measured resolutions of $\Delta\phi$ and $\Delta\eta$.
\end{itemize}
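The four identification quantities above can be computed as in the following minimal sketch (toy inputs; the argument names, the resolutions, and the inverse covariance matrix \texttt{cov\_inv} are placeholders, not the D0 software interfaces):
\begin{verbatim}
import numpy as np

def electron_id_quantities(E_em_02, E_tot_02, E_tot_04,
                           deta, dphi, sigma_eta, sigma_phi,
                           x, x_mean, cov_inv):
    f_em = E_em_02 / E_tot_02               # EM fraction, ~1
    f_iso = (E_tot_04 - E_em_02) / E_em_02  # isolation, ~0
    chi2_tm = (dphi / sigma_phi) ** 2 + (deta / sigma_eta) ** 2
    d = np.asarray(x) - np.asarray(x_mean)  # HMatrix inputs
    chi2_hm = float(d @ cov_inv @ d)        # covariance chi^2
    return f_em, f_iso, chi2_tm, chi2_hm
\end{verbatim}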
In the initial reconstruction, electromagnetic clusters are required to have transverse energy $E^{\rm unc}_T>1.5$ GeV and EM fraction $f_{\rm EM}>0.9$. If the cluster has a track matched to it, it is considered a candidate electron.
The energy of an electron $E^{e,{\rm unc}}$ is defined as the sum of the energies in all four electromagnetic calorimeter (EM1 to EM4) and first fine hadronic layer (FH1) cells in the 13 towers of the electron cone (Fig.~\ref{fig:ewindow}) centered on the tower with the highest transverse energy:
\begin{equation}
{E}^{e,{\rm unc}} = \sum_{i}E^{\rm unc}_{i}.
\end{equation}
\noindent
The FH1 layer is included to more fully contain the electromagnetic shower. The corrected electron energy $E^e$ is defined by applying the energy loss correction (Sec.~\ref{sec:deadmat}).
In this analysis, the direction of the electron is always taken to be the direction of the matched track:
\begin{eqnarray*}
\theta^e &=& \theta_{\rm track}, \\
\phi^e &=& \phi_{\rm track}.
\end{eqnarray*}
\noindent The track direction is determined with a resolution of $0.002\,\text{rad}$ in $\theta$ and $0.0004\,\text{rad}$ in $\phi$, which have a negligible impact on this measurement. The momentum of the electron, neglecting its mass, is given by
\begin{displaymath}
\vec{p}^{\,e} = E^e \begin{pmatrix}
\sin \theta^e \cos \phi^e \\
\sin \theta^e \sin \phi^e \\
\cos \theta^e
\end{pmatrix},
\end{displaymath}
and the transverse energy of the electron is defined as $E^e_T = E^e \sin \theta^e$. Corresponding to this definition, the uncorrected transverse energy of the electron is given by $E^{e,{\rm unc}}_T = E^{e,{\rm unc}}\sin \theta^e$.
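A minimal sketch of this construction follows (function and argument names are illustrative; the electron mass is neglected as in the text):
\begin{verbatim}
import math

def electron_kinematics(E_e, theta_track, phi_track):
    # Electron momentum from the corrected energy and the
    # matched-track direction; returns (px, py, pz, E_T).
    st = math.sin(theta_track)
    px = E_e * st * math.cos(phi_track)
    py = E_e * st * math.sin(phi_track)
    pz = E_e * math.cos(theta_track)
    return px, py, pz, E_e * st
\end{verbatim}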
\subsection{Vertex Reconstruction}
\label{sec:vertex}
The coordinate of the $W$ boson production vertex along the beam line, $z_V$, is determined either using the standard D0 vertex algorithm (which uses a Kalman filter algorithm~\cite{ref:Ariel}), or is taken as the point of closest approach of the electron track to the beam line if this electron track vertex position differs by more than 2 cm from the point selected by the vertex algorithm. For $Z$ boson events, $z_V$ is taken to be the average of the two points of closest approach of the electron tracks.
\subsection{Uncorrected Missing {\boldmath $E_T$} and Recoil Reconstruction}
The uncorrected missing energy vector in the transverse plane is calculated by taking the vector sum
\begin{equation}
\ensuremath{\vec{\slash\kern-.7emE}_{T}}^{\,{\rm unc}} = -\sum_{i}E^{\rm unc}_{i} \sin\theta_{i} \left( \begin{array}{c}
\cos\phi_{i} \\
\sin\phi_{i} \end{array} \right)
= - \sum_{i}\vec{E}^{\,i\,{\rm unc}}_{T},
\end{equation}
\noindent
where the sum runs over all calorimeter cells that were read out except cells in the coarse hadronic calorimeter and ICD. Here, the $E^{\rm unc}_{i}$ are cell energies, and $\phi_{i}$ and $\theta_{i}$ are the azimuthal and polar angle of the center of the cell $i$ with respect to the vertex.
The recoil transverse momentum $\vec{u}_T$ for $W/Z$ boson events is calculated from the uncorrected $\ensuremath{\vec{\slash\kern-.7emE}_{T}}$ and the uncorrected electron transverse momenta:
\begin{equation}
\ensuremath{\vec{u}_T}^{\,{\rm unc}}\,=\,-\,\ensuremath{\vec{\slash\kern-.7emE}_{T}}^{\,{\rm unc}}\,-\,\sum_{e}{\vec{p}_{T}^{\,e\,{\rm unc}}}.
\end{equation}
The average energy deposition in the calorimeter cells away from the electron cluster is usually small. A hadronic energy scale correction would therefore depend on the specific details of the readout noise and suppression algorithms. Since these details are not correlated with the $W/Z$ event and vary with the run conditions, we choose not to apply a hadronic energy scale correction to the recoil $p_T$, and thus:
\begin{equation}
\ensuremath{\vec{u}_T} \equiv \ensuremath{\vec{u}_T}^{\,{\rm unc}}.
\end{equation}
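A schematic implementation of these two sums is given below (the cell and electron containers and their field names are assumptions of this illustration; the coarse hadronic and ICD cells are assumed to be excluded upstream):
\begin{verbatim}
import math

def met_and_recoil_unc(cells, electron_pts):
    # Uncorrected missing E_T from the cell sum, and the recoil
    # u_T = -MET - sum(pT^e); each cell carries E, theta, phi.
    mex = -sum(c["E"] * math.sin(c["theta"]) * math.cos(c["phi"])
               for c in cells)
    mey = -sum(c["E"] * math.sin(c["theta"]) * math.sin(c["phi"])
               for c in cells)
    ux = -mex - sum(px for px, py in electron_pts)
    uy = -mey - sum(py for px, py in electron_pts)
    return (mex, mey), (ux, uy)
\end{verbatim}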
\subsection{SET Reconstruction}
The scalar sum of the transverse energies of all calorimeter cells is defined as:
\begin{equation}
\mathrm{SET} = \sum_{i}E_{i}^{\rm unc} \sin\theta_{i},
\end{equation}
excluding cells inside the electron reconstruction cluster, in the coarse hadronic calorimeter, and in the ICD.
\subsection{Corrected {\boldmath $\slashed{E}_T$} Reconstruction}
The corrected $\ensuremath{\vec{\slash\kern-.7emE}_{T}}$ is calculated from $\vec{u}_T$ and corrected $\vec{p}_T^{\,e}$. For $W\rightarrow e\nu$ events,
\begin{equation}
\ensuremath{\vec{\slash\kern-.7emE}_{T}} = -\vec{u}_T-\vec{p}_T^{\,e},
\end{equation}
and for $Z\rightarrow ee$ events,
\begin{equation}
\ensuremath{\vec{\slash\kern-.7emE}_{T}} = -\vec{u}_T-\vec{p}_T^{\,e_1}-\vec{p}_T^{\,e_2}.
\end{equation}
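Both the SET and the corrected $\ensuremath{{\slash\kern-.7emE}_{T}}$ definitions are simple enough to state as a sketch (illustrative names; the electron cone, coarse hadronic, and ICD exclusions for the SET are assumed to be applied before the call, and one or two electrons are passed for $W$ or $Z$ events, respectively):
\begin{verbatim}
import math

def set_scalar(cells):
    # Scalar E_T sum over the remaining calorimeter cells.
    return sum(c["E"] * math.sin(c["theta"]) for c in cells)

def corrected_met(u_t, electron_pts):
    # Corrected MET = -u_T - sum of corrected electron pT vectors.
    mex = -u_t[0] - sum(px for px, py in electron_pts)
    mey = -u_t[1] - sum(py for px, py in electron_pts)
    return mex, mey
\end{verbatim}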
\subsection{Event Selection}
\label{sec:eventselection}
We select $\ensuremath{Z \rightarrow ee}$ and $\ensuremath{W \rightarrow e \nu}$ candidate events using the decay electrons and the $\ensuremath{{\slash\kern-.7emE}_{T}}$. The vertex is required to be within $|z_V| < 60\,\text{cm}$. The following electron requirements are applied to the reconstructed electron with the highest $p_T$ for $\ensuremath{W \rightarrow e \nu}$ candidate events and the two electrons with the highest $p_T$ for $\ensuremath{Z \rightarrow ee}$ candidate events.
\begin{itemize}
\item $f_{\text{EM}}>0.9$, $f_{\rm iso}<0.15$.
\item HMatrix7$\ <12$ in CC and HMatrix8$\ <20$ in EC (the EC electrons are used for tag and probe studies).
\item Regions near the edges of a calorimeter EM module in $\phi$ are excluded, see Sec.~\ref{sec:eff_phimod}.
\item $\ensuremath{p_T^e}>25$ GeV.
\item The associated track must have $p_{T}>10$ GeV, a track match with a probability of $P(\chi_{\rm TM}^2)>0.01$ (see Sec.~\ref{sec:elreconstruct}), and at least one SMT hit. No requirement is made on the number of CFT hits.
\end{itemize}
$Z \rightarrow ee$ candidate events are selected by requiring:
\begin{itemize}
\item At least one electron passes the trigger requirements of all three trigger levels.
\item Electron $|\eta_{\text{det}}|<1.05$,
except for studies of the electron efficiency, where one electron can be in the EC region $1.5<|\eta_{\text{det}}|<2.5$.
\item $u_{T}<15$ GeV.
\item $70<m_{ee}<110$ GeV.
\end{itemize}
$W \rightarrow e\nu$ candidate events are selected by requiring (a schematic of the combined selection is sketched after this list):
\begin{itemize}
\item The electron must pass the trigger requirements of all three trigger levels.
\item $\ensuremath{{\slash\kern-.7emE}_{T}} > 25$ GeV.
\item Electron $|\eta_{\text{det}}|<1.05$.
\item $u_{T}<15$ GeV.
\item $50<m_{T}<200$ GeV.
\end{itemize}
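The full $W$ selection just listed can be summarized as a single predicate (a schematic sketch; the event dictionary and its field names are invented for this illustration and do not correspond to the D0 software, and the $Z$ selection is analogous with two electrons and the $m_{ee}$ window):
\begin{verbatim}
def passes_w_selection(ev):
    # Schematic W -> e nu selection with the thresholds above.
    e = ev["electron"]
    return (ev["trigger_ok"] and abs(ev["z_vtx"]) < 60.0
            and e["f_em"] > 0.9 and e["f_iso"] < 0.15
            and e["hmatrix7"] < 12 and abs(e["eta_det"]) < 1.05
            and e["pt"] > 25.0 and e["track_pt"] > 10.0
            and e["p_chi2_tm"] > 0.01 and e["n_smt_hits"] >= 1
            and ev["met"] > 25.0 and ev["u_t"] < 15.0
            and 50.0 < ev["m_t"] < 200.0)
\end{verbatim}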
After the selections, 54,512 candidate $\ensuremath{Z \rightarrow ee}$ events remain with both electrons in the CC, which we use to determine the EM calibration, and 1,677,489 candidate $\ensuremath{W \rightarrow e \nu}$ events remain that are used to determine $M_W$.
\section{The D0 Detector}
\label{sec:det}
The D0 detector~\cite{RunIdetector} was built for Run~I of the Fermilab Tevatron Collider and upgraded~\cite{RunIIdetector} for Run~II to the configuration relevant to the measurements described here. It contains central tracking, calorimeter and muon subdetector systems. The silicon microstrip tracker (SMT) detector located near the $p{\bar p}$ interaction point covers $|\eta_{\rm det}|<3$. The central fiber tracker (CFT) surrounds the SMT and provides complete coverage out to $ |\eta_{\rm det}| \approx 1.7$. A $1.9\, {\rm T}$ solenoid surrounds the central tracking system and gives a typical transverse momentum resolution of $10\%$--$16\%$ for tracks of $p_T = 40\,\text{GeV}$~\cite{muonid}.
Three uranium liquid-argon calorimeters measure particle energies. The central calorimeter (CC) covers $|\eta_{\rm det}|<1.1$ and two end calorimeters (EC) extend the coverage to $|\eta_{\rm det}| \approx 4.2$. The CC is segmented in depth into eight layers. The first four layers are used primarily to measure the energies of photons and electrons and are collectively called the electromagnetic (EM) calorimeter. The remaining four layers (three fine hadronic (FH) layers and one coarse hadronic (CH) layer), along with the first four, are used to measure the energies of hadrons. Most layers are segmented into $0.1 \times 0.1$ regions (cells) in $(\eta , \phi)$ space. The third layer of the EM calorimeter is segmented into $0.05 \times 0.05$ regions. Between the central and end cryostats, the inter-cryostat detector (ICD) provides sampling of particles in the range $1.1 < | \eta_{\rm det} | < 1.4$ using scintillator pads. The calorimeter system is completed with central and forward preshower detectors located just before the central and forward cryostats up to $| \eta_{\rm det} | = 2.5$. Figure~\ref{fig:cal_quadratic_view} shows a cross sectional $r$-$z$ view of one quarter of the D0 detector, showing the calorimeter $\eta$ and depth segmentation, which indicates how the calorimeter system forms projective towers of size $0.1 \times 0.1$ in $(\eta , \phi)$ space.
\begin{figure}[ht]
\centering
\includegraphics [scale=0.65] {fig03.eps}
\caption{Side view of one quadrant of the D0 detector, not showing the muon subdetector system. The calorimeter segmentation and tower definition are shown in both CC and EC. The lines extending from the center of the calorimeter denote the pseudorapidity ($\eta_{\rm det}$) coverage of cells and projected towers. The solenoid and tracking detectors are shown in the inner part of the detector.}
\label{fig:cal_quadratic_view}
\end{figure}
Muon trajectories are identified and measured outside the calorimeter system using a system of proportional drift tube chambers, scintillation counters, and toroidal iron magnets.
The luminosity of $p {\bar p}$ collisions is monitored using two sets of 24 wedge-shaped scintillation counters, each placed on the face of one of the end calorimeters. These counters are used to detect inelastic non-diffractive collisions~\cite{lumipaper}.
The D0 calorimeter is read out by a total of 47,032 electronic channels. The electronic pedestal is measured frequently for each channel using special calorimeter pedestal runs during the quiet time between stores when there is no beam in the Tevatron. The energy measured for each channel in collider data is the energy recorded minus the pedestal. The calorimeter readout uses zero suppression to avoid reading out noise. If $\sigma_{\text{PED}}$ is defined as the root-mean-square variation of the pedestal of each channel about its mean, the criterion deciding whether or not to read out a channel is expressed in terms of its $\sigma_{\text{PED}}$. Normally, in zero-suppressed data, a cell is read out by the D0 electronic system only if its energy differs from the pedestal by more than 1.5$\sigma_{\text{PED}}$. The D0 electronic system also records data in which all the channels are read out with no zero suppression. The D0 event reconstruction requires an energy deposit in a cell to exceed the pedestal by at least 4.0$\sigma_{\text{PED}}$ if it is to be considered the central cell of an energy cluster. An adjacent cell with energy exceeding its pedestal by at least 2.5$\sigma_{\text{PED}}$ is considered to be part of this same cluster. Cells with energy less than 2.5$\sigma_{\text{PED}}$ above pedestal are not considered for reconstruction in normal (zero-suppressed) collider data.
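The threshold logic can be sketched as follows (illustrative only; the cell fields and the toy adjacency in tower indices are assumptions made for this example):
\begin{verbatim}
def is_adjacent(c1, c2):
    # Toy adjacency: neighboring cells in (ieta, iphi) indices.
    return max(abs(c1["ieta"] - c2["ieta"]),
               abs(c1["iphi"] - c2["iphi"])) == 1

def readout_and_cluster(cells):
    # Zero suppression: read out cells above 1.5 sigma_PED;
    # seed clusters at 4.0 sigma_PED; attach adjacent cells
    # above 2.5 sigma_PED.
    read_out = [c for c in cells if c["E"] > 1.5 * c["sigma_ped"]]
    seeds = [c for c in read_out if c["E"] > 4.0 * c["sigma_ped"]]
    clusters = []
    for seed in seeds:
        members = [c for c in read_out
                   if c["E"] > 2.5 * c["sigma_ped"]
                   and is_adjacent(c, seed)]
        clusters.append([seed] + members)
    return clusters
\end{verbatim}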
Events are selected for this analysis if they pass a single electron trigger requirement in the CC. In this way, trigger and other efficiencies can be measured with $Z\to ee$ events using the tag and probe method: if the tag electron is required to satisfy the trigger, the probe electron is considered to be unbiased. Each trigger is a combination of requirements at three trigger levels (L1, L2, L3). At each succeeding level the trigger uses more detailed detector information and becomes more precise.
The trigger towers in the calorimeter are $0.2\times 0.2$ in $(\eta , \phi)$ space. The triggers used in this analysis require, at the L1 trigger level, at least one EM object, defined by two neighboring trigger towers~\cite{L1trigger}. The EM object must satisfy $E_T^{\text{L1}} > 19\,\text{GeV}$ and $|\eta^{\text{L1}}|<3.2$. Two different L2 trigger level requirements are used, depending on the period the data were taken. An early version of the trigger, {\em v15}, requires the EM object to be isolated if $19 < E_T^{\text{L1}} < 22\,\text{GeV}$, but makes no requirement above $22\,\text{GeV}$. A later version, {\em v16}, requires a more complex likelihood criterion based on the energy distribution in the L1-triggered EM trigger towers and in their neighboring towers if $19 < E_T^{\text{L1}} < 25\,\text{GeV}$, but no requirement above $25\,\text{GeV}$. At the L3 trigger level, the EM objects must satisfy $E^{\text{L3}}_T>25\,\text{GeV}$, $|\eta^{\text{L3}}|<3.6$, and a shower shape requirement. At higher instantaneous luminosities, the L3 threshold is increased to $E^{\text{L3}}_T>27\,\text{GeV}$ to cope with the higher trigger rate. For the trigger with the L3 threshold $E^{\text{L3}}_T>27\,\text{GeV}$, only the L2 likelihood criterion is used.
\section{Uninstrumented Material Correction to the Electron Response}
\label{sec:deadmat}
Figure~\ref{fig:Material} shows an overview of the material in front of the CC cryostat. An electron traveling from the interaction point to the CC at normal incidence encounters about 3.7 radiation lengths ($X_0$) of material before reaching the first active layer of liquid argon: $0.2\,X_0$ in the inner detector, $0.9\,X_0$ in the solenoid, $0.3\,X_0$ in the preshower detector plus $1.0\,X_0$ in the associated lead, and $1.3\,X_0$ in the cryostat walls plus related support structures. As a consequence of the uninstrumented material in front of the CC, the measured response to incident electron energy has a significant non-linear dependence on the true energy and the angle of impact. In this section we describe the derivation of the corrections to the electron response that are applied to data to account for the uninstrumented material. This correction is derived from a simulation of the detector response to electrons in which the shower description has been improved relative to the standard {\sc geant}3~\cite{GEANT3} description and the amount of uninstrumented material has been tuned. The uninstrumented material tuning was derived with the $\ensuremath{Z \rightarrow ee}$ data sample of the $1\, {\rm fb}^{-1}$ (Run~IIa) analysis~\cite{OurPRL} and re-validated for this analysis (Sec.~\ref{sec:EMfracValidation}). A comprehensive account of the calibration method can be found in~\cite{JanHabilitation, RclsaPhD}.
\begin{figure}[hbpt]
\centering
\includegraphics [width=\linewidth] {fig06.eps}
\caption{Overview of the material in front of the CC. This drawing shows a cross sectional view of the central tracking system in the $x$~-~$z$~plane. Also shown are the locations of the solenoid, the preshower detectors, luminosity monitor, and the calorimeters.}
\label{fig:Material}
\end{figure}
\subsection{Improvements in the Simulation of Electromagnetic Showers}
\label{sec:improve_shower}
Because of the large amount of material preceding the active layers of the calorimeter, a precise simulation of the electromagnetic shower is needed to ensure acceptable understanding of the electron energy reconstruction as a function of true energy and angle of incidence. Several improvements are needed to the standard {\sc geant}3 simulation to have a good description of the energy deposition and depth of the shower.
To improve the transport algorithm for low energy particles in the shower, we configure {\sc geant}3 to evaluate steps as small as $10^{-7}\, {\rm cm}$ in the tracking of particles~\cite{molierefootnote}. We also force the maximum step length to be smaller than $10^{-1}\, {\rm cm}$. These modifications are chosen so that the Moli\`ere theory of multiple scattering is guaranteed to be valid in our simulation~\cite{moliere, molierebethe}.
The standard {\sc geant}3 parametrizations for bremsstrahlung and pair creation cross sections in matter are also insufficiently precise. We replace these with tables of cross sections from~\cite{Seltzer198595} and~\cite{hubbell:1023}, respectively.
Finally, the low energy cut-off for the explicit simulation of $\delta$-rays is lowered from 1 MeV to 10 keV. This is necessary to obtain an adequate description of the local energy deposition of low energy electrons and photons, especially near the uranium--liquid argon boundary~\cite{slowgeantfootnote}.
\subsection{Observables Used for Tuning the Simulation}
\label{sec:observable_mat_tune}
To estimate the contribution of uninstrumented material, we exploit the segmentation of the calorimeter readout by studying the EM layer energy fractions, {\em i.e.}, the fraction of the measured electron energy deposited in each one of the layers EM1, EM2, EM3, EM4, and FH1. The depositions in EM4 and FH1 give contributions that are negligible in the tuning procedure.
\begin{figure}[hbpt]
\centering
\includegraphics [width=\linewidth] {fig07a.eps}
\includegraphics [width=\linewidth] {fig07b.eps}
\caption{The average shower energy deposition profile (along the shower axis) of electrons with $E=45\,\text{GeV}$ simulated using the GFlash parametrization~\cite{GFlash}. The depth of each readout section of the central calorimeter is indicated for an electron with (a) normal incidence $\eta=0$ and with (b) non-normal incidence $\eta=1$.}
\label{fig:ShowerAverage}
\end{figure}
Electrons produced at different angles cross different amounts of uninstrumented material and the fraction of energy deposited in each layer will therefore be different, as shown in Fig.~\ref{fig:ShowerAverage}. We split the $\ensuremath{Z \rightarrow ee}$ event sample into categories based on electron $\eta$. We define five bins of $|\eta|$ used as a measure of the angle of incidence on the uninstrumented material. The definition of the bins is given in Table \ref{table:StandardEtaBins}. We classify a $\ensuremath{Z \rightarrow ee}$ event into one of 15~distinct categories shown in Table~\ref{table:StandardEtaCategories} according to the $|\eta|$~bins of the two electrons. We do not distinguish between the leading and the subleading electron transverse momentum to avoid consideration of the calorimeter energy corrections that we are trying to determine.
\begin{table}
\begin{center}
\caption{Definition of bins in electron $|\eta|$ used for uninstrumented material studies.}
\label{table:StandardEtaBins}
\begin{tabular}{c|c}\hline\hline
Bin Number & $\eta$ range\\\hline
Bin 0 & $|\eta| < 0.2$ \\
Bin 1 & $0.2 \le |\eta| < 0.4$ \\
Bin 2 & $0.4 \le |\eta| < 0.6$ \\
Bin 3 & $0.6 \le |\eta| < 0.8$ \\
Bin 4 & $0.8 \le |\eta|$ \\\hline\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Definition of the di-electron $\eta$ categories for \ensuremath{Z \rightarrow ee}\ events.}
\label{table:StandardEtaCategories}
\begin{tabular}{c|c}\hline\hline
Category & Combination of electron $\eta$ bins\\\hline
10 & 0, 0\\
11 & 0, 1\\
12 & 0, 2\\
13 & 0, 3\\
14 & 0, 4\\
15 & 1, 1\\
16 & 1, 2\\
17 & 1, 3\\
18 & 1, 4\\
19 & 2, 2\\
20 & 2, 3\\
21 & 2, 4\\
22 & 3, 3\\
23 & 3, 4\\
24 & 4, 4\\\hline\hline
\end{tabular}
\end{center}
\end{table}
We compare the mean of the EM layer energy fraction distribution for each layer in each category between $Z\rightarrow ee$ data and the full MC simulation using the improved shower simulation described in Sec.~\ref{sec:improve_shower}. As can be seen in Fig.~\ref{fig:EMFDMnoMat}, the agreement between the EM fraction from electrons produced at different angles is poor in each layer. The differences arise from inadequacies in the D0 material model included in the full MC.
\begin{figure}[hbpt]
\includegraphics [width=\linewidth] {fig08.eps}
\caption{[color online] The ratio of data to simulation for the means of the EM layer energy fraction distributions in \ensuremath{Z \rightarrow ee}\ events for each of the first three EM layers and each of the 15~$\eta$~categories shown before the correction described in Sec.~\ref{sec:SystElecNonLin} is applied. Each of the three horizontal lines indicates the result of a fit of a common constant to the 15~data points from a given EM layer.}
\label{fig:EMFDMnoMat}
\end{figure}
\subsection{Improvements in the D0 Material Model}
\label{sec:SystElecNonLin}
\begin{figure}[hbpt]
\centering
\includegraphics [width=\linewidth] {fig09.eps}
\caption{[color online] Fit for $nX_0$, the amount of uninstrumented material (in radiation lengths) added to the nominal material in the improved simulation of the D0 detector. The solid and dotted vertical lines show the best fit and one standard deviation uncertainties for $nX_0$. This fit is performed with the $\ensuremath{Z \rightarrow ee}$ data sample from our $1\, {\rm fb}^{-1}$ measurement~\cite{OurPRL}.}
\label{fig:FitnX0}
\end{figure}
As shown in Fig.~\ref{fig:EMFDMnoMat}, the data have a higher deposition in EM1 than the MC, so additional uninstrumented material must be added in front of the calorimeter to the detector model in the full MC. We choose a relatively low atomic number material, copper, and add it to the simulation inside the solenoid. The shape of the copper is a cylindrical shell with the same axis as the solenoid and uniform thickness. Along the $z$ direction, it extends over the length of the solenoid. The shape of the missing material is driven by the observation that the materials in front of the central calorimeter have a geometry that is close to cylindrical.
We use the improved {\sc geant}3 model described in Sec.~\ref{sec:improve_shower} to simulate the electrons from $Z\rightarrow ee$ events. For these events, the thickness of the additional copper material is varied. We then build a parametrized model of the mean EM layer energy fractions and the fluctuations around the average as a function of the copper thickness. As shown in Fig.~\ref{fig:EMFDMnoMat}, we fit the ratio of the mean EM layer energy fraction in data to that in MC as a function of the $\ensuremath{Z \rightarrow ee}$ event category to a constant for each of the first through third EM layers. We then form a total $\chi^2$ from the sum of the individual $\chi^2$ values from the three layer fits:
\begin{equation}
\chi^2 = \sum_{{\rm layer}(i)}\sum_{{\rm categ}(j)}\left[\frac{f^{\rm EML}_{ij} - \bar{f}^{\,\rm EML}_i}{\sigma_{ij}^{\rm EML}}\right]^2,
\end{equation}
where $f^{\rm EML}_{ij}$ (and $\sigma^{\rm EML}_{ij}$) are the data/full MC ratios of the mean EM layer energy fraction deposited by electrons with category $j$ at layer $i$ (and associated uncertainty), and $\bar{f}^{\,\rm EML}_i$ is the mean value of $f^{\rm EML}_{ij}$ for layer $i$. This is shown as a function of the thickness of the additional copper material in Fig.~\ref{fig:FitnX0}. The thickness of the cylinder is given as a multiple $nX_0$ of the thickness of one radiation length $X_0$ of copper. This figure also shows the parabolic fit giving the minimum $\chi^2$ corresponding to the final thickness used in our tuned simulation, $nX_0 = 0.1633 \pm 0.0095$. Because of the small energy deposit in EM4, we do not include it in our fits.
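The $\chi^2$ scan and its parabolic minimum can be sketched in a few lines (a minimal illustration, assuming the ratios and uncertainties are arranged as $3\times 15$ arrays; \texttt{numpy} is used for the fit):
\begin{verbatim}
import numpy as np

def total_chi2(f, f_bar, sigma):
    # chi^2 over layers i and categories j: f and sigma are
    # (3, 15) arrays of data/MC ratios and uncertainties;
    # f_bar holds the per-layer fitted constants.
    return float((((f - f_bar[:, None]) / sigma) ** 2).sum())

def fit_minimum(nx0_points, chi2_points):
    # Parabolic fit chi^2(nX0): the vertex gives the best-fit
    # thickness; chi^2 rises by 1 at best +/- sigma.
    a, b, c = np.polyfit(nx0_points, chi2_points, 2)
    best = -b / (2 * a)
    sigma = 1.0 / np.sqrt(a)
    return best, sigma
\end{verbatim}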
As a cross check, we repeat the fit for $nX_0$ separately for each of the three layers. The results are summarized in Fig.~\ref{fig:PerLayerCheck}. Good agreement is found between the overall fit and the results of the individual layers. The ratio of mean EM layer energy fraction in data to that in full MC after adding the missing material is shown in Fig.~\ref{fig:EMFDMtuned}. We interpret the deviations from unity as layer-intercalibration gain factors, which are applied during data reconstruction to have agreement with the detailed simulation.
\begin{figure}[hbpt]
\centering
\includegraphics [width=\linewidth] {fig10.eps}
\caption{[color online] Stability check: results of the fit for $nX_{0}$, performed separately for each of the three layers (EM1, EM2, and EM3). The result of the combined fit is also shown for comparison.}
\label{fig:PerLayerCheck}
\end{figure}
\begin{figure}[hbpt]
\includegraphics [width=\linewidth] {fig11.eps}
\caption{[color online] The ratio of data to simulation for the means of the EM layer energy fraction distributions in \ensuremath{Z \rightarrow ee}\ events for each of the first three EM layers and each of the 15~$\eta$~categories shown after the correction described in Sec.~\ref{sec:SystElecNonLin} is applied. Each of the three horizontal lines indicates the result of a fit of a common constant to the 15~data points from a given EM layer.}
\label{fig:EMFDMtuned}
\end{figure}
Figure~\ref{fig:EMFDMW} shows the data/full MC ratio of the mean EM layer energy fraction for electrons from $W$ boson decays, using the same binning as in Table~\ref{table:StandardEtaBins}, after adding to the simulation the copper cylinder with thickness derived above and the layer intercalibration factors. Because of the larger number of $\ensuremath{W \rightarrow e \nu}$ events, it is possible to see non-statistical deviations from unity. These systematic deviations are an indication that the assumption of a cylindrical shape for the missing material is not perfect. Nevertheless, the mean values of the ratio across the central calorimeter are consistent with unity in EM1, EM2, and EM3.
\begin{figure}[hbpt]
\includegraphics [width=\linewidth] {fig12.eps}
\caption{[color online] The data/full MC ratios for the means of the EM layer energy fraction distributions in $W\rightarrow e\nu$ events for the (a) EM1, (b) EM2, and (c) EM3 layers. The ratio is shown in five electron~$\eta$~bins. The thick horizontal lines indicate the average ratio across the central calorimeter and the yellow band represents the systematic and statistical uncertainty in the mean.}
\label{fig:EMFDMW}
\end{figure}
Figure~\ref{fig:EMFDMW_mean} shows the mean values of the data/full MC ratio of the mean EM layer energy fraction for electrons from $W$ decays, together with the relative contributions to its uncertainty from the $W$ sample size, from the $Z$ sample size (through the uncertainty in the thickness of the copper cylinder added to the simulation), and from the limited number of full MC events simulated with the improvements described in Sec.~\ref{sec:improve_shower}.
\begin{figure}[hbpt]
\includegraphics [width=\linewidth] {fig13.eps}
\caption{[color online] The mean data/full MC ratio for the means of the EM layer energy fractions, for electrons from $\ensuremath{W \rightarrow e \nu}$ decays, in each of the three first layers of the EM calorimeter. The innermost error bar (red) indicates the uncertainties from the $W$ boson sample size. The middle error bar (green) indicates the quadrature sum of the uncertainty from the $W$ boson sample size with the one from the $Z$ boson sample size, determined from the uncertainty in the thickness of the added material. Finally, the outermost error bar (blue) represents the quadrature sum of the two previous uncertainties with the one arising from the limited number of full MC events. In all three layers, the ratio is consistent with unity when all uncertainties are considered.}
\label{fig:EMFDMW_mean}
\end{figure}
The precision of the measurement of the material in front of the calorimeter contributes directly to the energy measurement of the electron and therefore to the $W$ boson mass. Our measurement of $M_W$ depends critically on the assumption that the calibration made at the $Z$ boson mass is valid at the $W$ boson mass scale. A mismeasured material distribution would be the primary source of a non-linearity in this scaling. The uncertainty on the $W$ boson mass arising from the material tune is derived by varying the additional material by $\pm 1$ standard deviation (shown in Fig.~\ref{fig:FitnX0}) and recalibrating the EM calorimeter for each variation. We build fast MC models of the response that account for the combined effect of the material variation and of the recalibration procedure.
The fast MC models resulting from the $\pm 1$ standard deviation variations in the additional material are used to generate $W$ boson events. The $m_T$, $\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ distributions from these events are fit to templates generated with the standard parametrization, and the resulting $M_W$ is compared to the input mass. We find shifts of 4 MeV using the $m_T$ distribution for the fit, 6 MeV using the $\ensuremath{p_T^e}$ distribution, and 7 MeV using the $\ensuremath{{\slash\kern-.7emE}_{T}}$ distribution.
\subsection{Energy Loss Corrections}
\label{sec:QElossCorr}
The average electron energy loss is recovered with correction functions determined using full MC samples of single-energy electrons with incident energies from 1 GeV to 135 GeV and applying the improvements described above. The precision of the corrections is therefore limited by the statistical precision of the full MC sample. As will be discussed in Sec.~\ref{sec:elec_energy}, the final tuning of the electron energy response using $\ensuremath{Z \rightarrow ee}$ events from the data fixes some imperfections in the energy-loss parametrization, for example, a global scale shift in the energy-loss function.
Because of the difference between the $Z$ and $W$ boson masses, the electrons from $\ensuremath{Z \rightarrow ee}$ decays populate one band in $E^e$ versus $\eta$ space and electrons from $W\to e\nu$ populate another band (see Fig.~\ref{fig:e_vs_eta_W_and_Z}). If the energy dependence of the energy loss correction is not correctly derived, the energy scale tuned on $\ensuremath{Z \rightarrow ee}$ events will be slightly incorrect when applied to $\ensuremath{W \rightarrow e \nu}$ events. To estimate this effect, we calculate the mean difference between reconstructed and true electron energies, divide it by the true energy for electrons from $\ensuremath{W \rightarrow e \nu}$ events, and subtract the same quantity calculated using electrons from $\ensuremath{Z \rightarrow ee}$ decays. The difference between the two averages reflects the imperfection of the energy loss corrections that cannot be corrected by the final tuning in the fast MC. The result is shown in Fig.~\ref{fig:EscaleMistakeBottomLine}. In order to estimate a systematic uncertainty for this imperfection in the energy loss corrections, we translate the difference between the corrections in $\ensuremath{W \rightarrow e \nu}$ and $\ensuremath{Z \rightarrow ee}$ events as an electron energy shift in fast MC pseudo-experiments. After propagating the shift to the $W$ boson mass, we assign an uncertainty of 4 MeV for the fit with the $m_T$, $p_T^e$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ observables.
\begin{figure}[htbp]
\includegraphics[width=\linewidth]{fig14.eps}
\caption{[color online] The mean electron energy versus $\eta$ for electrons from $W$ boson (black solid line) and $Z$ boson (red dashes) events. The thin lines indicate the one standard deviation bands of the energy distributions versus $\eta$. \label{fig:e_vs_eta_W_and_Z}}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics [width=\linewidth] {fig15.eps}
\caption{(a) The true energy spectrum for electrons in simulated $W$ boson events that pass the full selection after reconstruction, and (b) the mean ratio of measured minus true energy to the true energy for electrons from $\ensuremath{Z \rightarrow ee}$ events minus the same quantity for electrons from $\ensuremath{W \rightarrow e \nu}$ events as a function of true electron energy.}
\label{fig:EscaleMistakeBottomLine}
\end{figure}
\subsection{Validation of Analysis for {\boldmath $4.3\,\text{fb}^{-1}$} Data Set}
\label{sec:EMfracValidation}
The uninstrumented material correction presented here is derived with the $\ensuremath{Z \rightarrow ee}$ data sample of our $1\, {\rm fb}^{-1}$ analysis (Run~IIa). It is used again here, for the analysis of the Run~IIb data corresponding to $4.3\, {\rm fb}^{-1}$, because the distribution of EM layer energy fractions is essentially identical to the distribution of EM layer energy fractions in the Run~IIa measurement. There are two differences between the running conditions during Run~IIa and Run~IIb relevant to EM showers:
\begin{itemize}
\item Increased pileup in Run~IIb.
\item Insertion of an inner silicon tracking layer (L0) between Run~IIa and Run~IIb ($\approx$ 0.003 $X_0$).
\end{itemize}
The inclusion of L0 represents a small contribution to the total amount of uninstrumented material when compared to the CFT, solenoid, CPS, and cryostat, all of which remained unchanged throughout Run~II.
Figure~\ref{fig:EMpileupab} shows the contribution from extra $p\overline{p}$ interactions and noise to the mean EM layer energy fractions in $Z\rightarrow ee$ events, estimated separately for Run~IIa and Run~IIb. Figure~\ref{fig:EMcmpab} shows the EM layer energy fraction distributions in $Z \rightarrow ee$ data for Run~IIa and Run~IIb, after correcting the Run~IIb data by the Run~IIa/Run~IIb ratio from Fig.~\ref{fig:EMpileupab}. The differences between Run~IIb and Run~IIa EM layer energy fractions are compatible with statistical fluctuations from the size of the $\ensuremath{Z \rightarrow ee}$ data sample, with $\chi^2$ of 13.5, 23.0 and 22.0 for 15 degrees of freedom in EM1, EM2, and EM3, respectively.
\begin{figure}[hbpt]
\centering
\includegraphics[width=\linewidth]{fig16.eps}
\caption{[color online] Each line represents the ratio of the mean EM layer energy fractions simulated with zero-bias overlay to the same sample simulated without overlay. It represents the contribution from extra $p\overline{p}$~interactions and noise to the mean EM layer energy fractions, which is determined separately for the Run~IIa sample (dotted lines) and the Run~IIb sample (continuous lines). The ratio between the continous to the dashed line is used as a correction factor to the EM layer energy fractions measured in Run~IIa when comparing them to the Run~IIb fractions (Fig.~\ref{fig:EMcmpab}).
\label{fig:EMpileupab}}
\end{figure}
\begin{figure}[hbpt]
\centering
\includegraphics[width=\linewidth]{fig17.eps}
\caption{[color online] Ratio of the means of the EM layer energy fraction distributions in \ensuremath{Z \rightarrow ee}\ events between the Run~IIa analysis and the present Run~IIb analysis, separately for each of the four EM layers and each of the 15 standard $\eta$~categories.
\label{fig:EMcmpab}}
\end{figure}
\section{Generators for Full and Fast Simulation}
\label{sec:gener}
The initial step in constructing templates for extracting the $W$ boson mass is simulation of vector boson production and decay kinematics. The complete list of event generators used in this analysis is shown in Table~\ref{tab:generators}. We use the {\sc resbos}~\cite{resbos,resbos1,resbos2} program coupled with the {\sc CTEQ6.6} NLO parton distribution functions (PDFs)~\cite{cteq66} as the standard event generator. {\sc resbos} provides a good description of the dominant QCD effects, namely the emission of multiple gluons, that influence the shape of the boson~$p_T$ distribution at low boson~$p_T$. The $W$ boson $p_T$ spectrum has a significant impact on the generated $\ensuremath{p_T^e}$ and $\ensuremath{p_T^\nu}$ spectra. Its accurate description is an important ingredient of the $M_W$ measurement.
The dominant effect from EW corrections on the $M_W$ measurement arises from radiation of a single photon from the final state charged lepton. This process is simulated by combining {\sc resbos} with {\sc photos}~\cite{photos}.
\begin{table}
\begin{center}
\caption {{\label{tab:generators}} Event generators for $W$ boson and $Z$ boson
processes used in this analysis. {\sc pythia} is
used for the full MC closure test and for estimating PDF uncertainties.
{\sc wgrad} and {\sc zgrad} are used only for estimation of QED theory
uncertainty.}
\begin{tabular}{c | c c c}\hline\hline
{Tool} & {Process} & {QCD} & EW \\ \hline
{{\sc resbos}} & $W$,$Z$ & NLO & - \\
{{\sc pythia}} & $W$,$Z$ & LO & QED FSR\\\hline
{{\sc wgrad}} & $W$ & LO & complete $\mathcal{O}(\alpha)$ \\
{{\sc zgrad}} & $Z$ & LO & complete $\mathcal{O}(\alpha)$ \\
{{\sc photos}} & & & QED FSR \\\hline\hline
\end{tabular}
\end{center}
\end{table}
\subsection{QCD Corrections and Boson {\boldmath $p_T$}}
\label{sec:bosonpT}
{\sc resbos} uses the triple differential cross section $d^3\sigma/dp_T\,dy\,dM$ for $Z/\gamma^{*}$ and $W$ boson production, where $p_T$ is the boson transverse momentum, $y=\frac{1}{2}\ln[(E+p_z)/(E-p_z)]$ is the boson rapidity, and $M$ is the boson mass. The triple differential cross section is tabulated on a grid for discrete values of $p_T$, $y$, and $M$. They are calculated using the CSS $p_T$ resummation technique~\cite{b-css_1, b-css_2} for low boson $p_T$ matched to a fixed order calculation at high $p_T$. The resummation is performed in impact parameter space with Sudakov exponents calculated to NNLL precision and Wilson coefficients calculated to NLO precision. At large impact parameters, the perturbative calculation is modified by a phenomenological non-perturbative factor. In this measurement, we use the BNLY~\cite{g2} parametrization for the non-perturbative factor which is a function of three variables, $g_1$, $g_2$, and $g_3$.
The observed boson $p_T$ spectrum in this measurement is mostly sensitive to $g_2$ and has very limited sensitivity to the other non-perturbative parameters and scales in the cross section. Therefore, we take the uncertainty in $g_2$ as representative of the boson production model uncertainty. We use the world average for $g_2$~\cite{g2}, and the uncertainty is propagated using pseudo-experiments generated by varying $g_2$ within its quoted uncertainty. We find uncertainties of 2~MeV, 5~MeV, and 2~MeV for the $\ensuremath{m_T}$, $\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ fits, respectively.
\subsection{Electroweak Corrections}
\label{sec:EWcorrections}
In our fast MC, care is taken to model the EW corrections to $W$ boson production and decay as well as the detector response to the emitted photons. The most important correction is the real emission of final state photons, since it takes away some of the energy of the electron, and the invariant mass of the electron and neutrino will be smaller than the $W$ boson invariant mass, biasing the measurement.
As discussed above, we use {\sc photos} to simulate the leading effects of real photon emission. To estimate the uncertainties from this modeling, we explore the difference between the shower simulation done by {\sc photos} and the EW NLO calculation available in {\sc wgrad}~\cite{wgrad} and {\sc zgrad}~\cite{zgrad}. In the shower simulation done by {\sc PHOTOS}, a final state radiation (FSR) emission probability kernel is introduced that is accurate only in the collinear limit. In the NLO simulation done by {\sc wgrad} and {\sc zgrad}, all one-loop real and virtual contributions are considered, including interference terms, but a soft and a collinear cutoff are introduced to avoid infrared divergencies. {\sc wgrad} and {\sc zgrad} cannot be used to measure $M_W$, since they do not include higher-order QCD corrections, but are adequate to estimate the purely EW uncertainties.
{\sc wgrad} allows both shower and EW NLO calculations. We generate pseudo-experiments using both options and fit them against templates prepared with {\sc photos}. The difference of the fitted $M_W$ is taken as a measure of the uncertainty and is found to be 5 MeV for the $m_T$, $p_T^e$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ fits.
To estimate the uncertainty in the EW NLO calculation itself, we study the dependence of the measured $M_W$ on the soft and collinear cutoffs. No variation is observed by changing the collinear cutoff, but a non-negligible effect is seen when varying the soft cutoff. We consider the difference between the cutoff at $10$ MeV and at $800$ MeV as an estimation of the uncertainty due to higher-order corrections~\cite{wgradnote}. We find shifts of 2~MeV, 1~MeV, and 3~MeV for the $m_T$, $p_T^e$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ fits, respectively.
Finally, an experimental scale is also present in the FSR simulation: the radius of the cone used as the boundary between photons whose energy is detected as part of the electron cluster or as part of the unclustered recoil. The simulation uses the value $\Delta R=0.3$ as standard, and we vary it by the size of a cell of the D0 calorimeter, between $\Delta R = 0.2$ and 0.4, to estimate the uncertainty coming from this experimentally introduced scale. We find uncertainties of 1~MeV, 1~MeV, 5~MeV for the $\ensuremath{m_T}$, $\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ fits, respectively.
\subsection{Parton Distribution Functions}
\label{sec:PDF}
The $M_W$ measurement is sensitive to PDF variations because of the limited detector acceptance and the kinematic selection thresholds applied to the decay products. In the ideal case of full pseudorapidity acceptance by the detector and no kinematic cuts, the lack of knowledge of the PDFs would introduce a negligible uncertainty on $M_W$. We determine the systematic uncertainty arising from the PDFs using {\sc pythia} and the CTEQ6.1 PDF set~\cite{cteq}, which is available at LO. We generate pseudo-experiments using the 40 members of the CTEQ6.1 error set, each of which corresponds to a one-sided uncorrelated variation of the effective $\chi^2_{\text{eff}}$ used for the PDF determination. The variation adopted in the CTEQ6.1 error set corresponds to $\Delta\chi^2_{\text{eff}} = 100$. Studies from the CTEQ collaboration show that a 90\% C.L. can be achieved with $\Delta\chi^2_{\text{eff}}$ between 100 and 225, depending on the specific experiment in the global analysis~\cite{cteq_1, cteq_2}.
The pseudo-experiments from each of the 40 members of the error set are compared to mass templates generated using the nominal set. Following the CTEQ prescription, we take the average of the two-sided variation $|M^{+}-M^{-}|/2$ as the estimate of the uncertainty for each uncorrelated combination of the PDF parameters. The total uncertainty is determined with the prescription:
\begin{eqnarray}
\Delta M_W =\frac{1}{1.64}\sqrt{\sum_{i=1}^{20} \left( \frac{M^{+}_{i}-M^{-}_{i}}{2} \right)^{2}},
\end{eqnarray}
where the factor $1/1.64$ brings the coverage of the uncertainty to 68\% C.L.
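Expressed as code, the prescription is a direct transcription of the equation above (a sketch; the inputs are the 40 fitted masses grouped into 20 $\pm$ eigenvector pairs):
\begin{verbatim}
import numpy as np

def pdf_uncertainty(m_plus, m_minus):
    # CTEQ prescription: average the two-sided variations in
    # quadrature; 1/1.64 rescales 90% C.L. to 68% C.L.
    m_plus = np.asarray(m_plus)
    m_minus = np.asarray(m_minus)
    return np.sqrt((((m_plus - m_minus) / 2.0) ** 2).sum()) / 1.64
\end{verbatim}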
The final PDF uncertainties are found to be 11~MeV, 11~MeV, and 14~MeV for the $\ensuremath{m_T}$, $\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ methods. These values are slightly larger than those in our Run IIa measurement~\cite{OurPRL}, which uses exactly the same prescription, due to the deterioration of our hadronic recoil resolution at higher luminosity.
\section{Introduction}
\label{sec:intro}
The 1983 observation of the $W$ and $Z$ vector bosons~\cite{CERN83_1, CERN83_2, CERN83_3, CERN83_4} provided important evidence for the electroweak (EW) sector of the standard model (SM) of particle physics~\cite{GSW_1, GSW_2, GSW_3}. Increasingly precise measurements of the vector boson masses and properties compiled over the course of the following 30 years have verified the structure of the electroweak theory; the $Z$ boson mass, for example, is now known with a precision of $2.1\,\text{MeV}$, corresponding to 2 parts in $10^5$~\cite{LEPEWWG08}. The theory has been further confirmed by the recent discovery of the Higgs boson with mass $125.7\,\text{GeV}$.
In the electroweak theory, for a given renormalization scheme~\cite{RenScheme}, there is a well-defined relationship between the EW boson masses, coupling constants, and the other EW parameters arising from radiative corrections. Precise measurements of these observables provide tests of this relationship and constrain the size of additional corrections from unobserved fields. In the {\it on-shell} scheme~\cite{OnShell}, the SM relationship can be written as
\begin{equation}
\label{eq:SMWmassPred}
M_W^2 \left(1-\frac{M_W^2}{M_Z^2}\right) = \left(\frac{\pi\alpha}{\sqrt{2} G_F }\right)(1+\Delta r),
\end{equation}
where $M_W$ and $M_Z$ are the masses of the $W$ and $Z$ bosons, $G_F$ is the Fermi constant, and $\alpha$ is the fine structure constant at zero momentum. The quantity $\Delta r$ contains all radiative corrections including the running of $\alpha$ and of the SM $\rho$ parameter~\cite{PDG2012}. The renormalization of the $\rho$ parameter includes a large contribution from the virtual top (with mass $m_t$) and $b$ quark loop, whose one-loop effect can be written~\cite{tbLoop_1, tbLoop_2} as
\begin{equation}
\Delta r_{tb} \approx \frac{-3G_F M_W^2 m_t^2}{8\sqrt{2}\pi^2(M_Z^2 - M_W^2)},
\end{equation}
after neglecting terms of order $m_b^2/m_t^2$, where $m_b$ is the $b$-quark mass. The value of $\Delta r$ also depends logarithmically on the Higgs boson mass.
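Plugging in round numbers gives a feel for the size of this one-loop term (the inputs below are approximate illustrative values, not the analysis inputs):
\begin{verbatim}
import math

# Illustrative inputs (approximate values).
G_F = 1.1664e-5                       # GeV^-2
M_W, M_Z, m_t = 80.4, 91.19, 173.0    # GeV

dr_tb = -3 * G_F * M_W**2 * m_t**2 / (
    8 * math.sqrt(2) * math.pi**2 * (M_Z**2 - M_W**2))
print(f"Delta r_tb ~ {dr_tb:.4f}")    # roughly -0.03
\end{verbatim}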
Using Eq.~\ref{eq:SMWmassPred} as a prediction of the $W$ boson mass, the theoretical uncertainty on the $W$ mass arising from higher-order corrections to $\Delta r$ is estimated to be 4 {$\rm MeV$} using the complete two-loop SM prediction~\cite{SMWmass}. This can be compared with the world-average uncertainty on the measured value of the $W$ boson mass of 23 {$\rm MeV$} before the measurement reported here and the recent result from the CDF Collaboration~\cite{CDFnew}. This 23 {$\rm MeV$} uncertainty results from a compilation of measurements from the four LEP experiments (ALEPH~\cite{AlephW}, DELPHI~\cite{DelphiW}, L3~\cite{L3W}, and OPAL~\cite{OpalW}) and from the D0~\cite{D0W_1, D0W_2, D0W_3} and CDF~\cite{CDFW} Collaborations at the Tevatron, including the earlier CDF~\cite{CDFNewW_1, CDFNewW_2} and D0~\cite{OurPRL} Run~II results.
The $W$ boson mass, and to some extent the top quark mass, are the limiting factors in our ability to tighten the constraints on new physics that couples to the EW sector. Improving the measurement of $M_W$ is, therefore, an important contribution to our understanding of the electroweak interaction.
This article presents the details of a previously published measurement of the $W$ boson mass~\cite{OurNewPRL} using data taken with the D0 detector during the 2006 -- 2009 Fermilab $p\bar{p}$ Tevatron run, \textit{i.e.}, during part of the Tevatron Run~IIb, with a total integrated luminosity of 4.3 fb$^{-1}$, and the combination of that measurement with our previous result~\cite{OurPRL} based on 1.0 fb$^{-1}$ of integrated luminosity collected in 2002 -- 2006 (Run~IIa). Both results were obtained using similar analysis techniques.
\section{Determination of the Fast Simulation Parameters}
\label{sec:param}
As described in Sec.~\ref{sec:eventcharacteristics}, $W(Z)$ events are characterized by the measurements of the electron(s) and the hadronic recoil in the event. Our fast simulation is designed to reproduce these measurements and their correlations starting from the four-vectors provided by an event generator (Sec.~\ref{sec:gener}). The simulation consists of four parts: (1) simulation of the vertex $z$ coordinate, (2) simulation of the electron reconstruction and identification efficiency, (3) simulation of the electron energy measurement, and (4) simulation of the hadronic recoil energy measurement. The vertex $z$ coordinate is needed to predict the detector regions with which the electrons interact when computing efficiencies and reconstructed energy. In our fast simulation, photons within the electron energy reconstruction cone (Fig.~\ref{fig:ewindow}) of a parent electron are merged back into the electron, treating the resulting electron plus photons system as the reconstructed electron. This procedure takes into account the reconstruction inefficiency induced by the photons as well as the probability of low energy photons to reach the calorimeter. Photons far from electrons are reconstructed as part of the recoil system and are so described in our fast simulation.
Here, we describe the models used in the fast MC to simulate data and full MC. Separate tunes are required for data and for full MC because our full MC does not describe our data with an accuracy sufficient to measure $M_W$. We perform the full measurement of $M_W$ twice: once using as input full MC and once with data. By treating the full MC events as data and using the same parametrized detector model, but with different parameters, we validate our experimental procedure. In our full MC measurement, we obtain a difference of our measured $Z$ mass from the input mass of $-3 \pm 4$ MeV and a difference of our measured $W$ mass from the input mass of $-2 \pm 5$ MeV from the fit to the $m_T$ distribution, $-2 \pm 5$ MeV from the fit to the $p^e_T$ distribution, and $+5 \pm 6$ MeV from the fit to the $\ensuremath{{\slash\kern-.7emE}_{T}}$ distribution. These uncertainties are statistical, reflecting the size of the full MC sample.
\subsection{Vertex Parametrization}
We select only events with vertex position $|z_V|<60$~cm and electrons with $|\eta_{\rm det}|<1.05$ for the final analysis. Since the electron $\eta_{\rm det}$ depends on the electron $\eta$ and the vertex position, we need a model that can be used in the fast MC to predict the vertex distribution. The beam shape is modeled as a product of a Gaussian bunch length with a Lorentzian shape set by the accelerator $\beta^*$ functions in both transverse directions. The parameters are determined from fits to the vertex distribution for randomly triggered beam crossings.
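The text above fixes only the general form of the vertex model; a schematic (unnormalized) density consistent with that description is sketched below, with the caveat that the exact functional form used in the analysis is not reproduced here and all parameter names are illustrative:
\begin{verbatim}
import math

def vertex_z_density(z, sigma_z, beta_x, beta_y, z0=0.0):
    # Gaussian bunch-length factor times a Lorentzian-like
    # factor from the beta* functions in the two transverse
    # directions (schematic form only).
    gauss = math.exp(-0.5 * ((z - z0) / sigma_z) ** 2)
    hourglass = 1.0 / math.sqrt(
        (1 + ((z - z0) / beta_x) ** 2)
        * (1 + ((z - z0) / beta_y) ** 2))
    return gauss * hourglass
\end{verbatim}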
\subsection{Electron Efficiency Parametrization}
\label{sec:eff}
The $\ensuremath{m_T}$, $\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ distributions are modified by inefficiencies in electron identification that depend on the event kinematics and on the hadronic environment. These effects introduce biases in the measured $M_W$ which must be accounted for. We accomplish this by building an efficiency model in the fast MC that reproduces the effects of these inefficiencies. In this section we discuss the components of the fast MC model used to predict the combined electron reconstruction and identification efficiencies. We begin by giving an overview of the model, then discuss each of the components, and end with a discussion of the model validation.
The efficiency model begins by describing the effect of FSR photons in the electron reconstruction and identification efficiency. Then, from $\ensuremath{Z \rightarrow ee}$ data, we determine the effect of known sources of inefficiency, such as those arising from the trigger system or from the HMatrix, isolation, EM fraction and track matching requirements. The collective effect from other sources of inefficiency, such as pileup, is modeled using full MC simulation. In the last step, data control samples are used again to provide final corrections to the full MC model. The final corrections are small because the full MC used as reference has collider data zero-bias events added to the simulated hard scatter. These zero-bias events are added to the low-level channel information without zero-suppression, allowing for the modeling of the impact of hadronic energy in the reconstruction and identification of electrons, which is the leading source of inefficiency in a high instantaneous luminosity environment.
The electron identification efficiency model must be multi-dimensional and must depend on all quantities that introduce biases in the reconstructed $M_W$. In the ideal case, a single multi-dimensional efficiency would depend on all necessary variables and automatically include all correlations. However, the \ensuremath{Z \rightarrow ee}\ control data sample is not large enough to establish a model by binning the efficiency in all relevant variables to derive a single function. Many of the dependencies are, however, largely uncorrelated with each other, and our full MC program can describe parts of the efficiency reasonably well.
The overall efficiency $\epsilon$ can be written as a product of several terms:
\begin{widetext}
\begin{eqnarray}
\epsilon &=& \epsilon_{\rm trig}(p_T^e)
\, \times \, \epsilon_{\rm FSR}(X,\Delta R,\eta,E^e)
\, \times \, \epsilon_{\rm trk}(z_V,\eta,p_T^e)
\, \times \, \epsilon_{\rm EMID}(\eta_{\rm det},p_T^e)
\, \times \, \epsilon_{\phi_{\mathrm{mod}}}(\phi_{\mathrm{mod}}) \nonumber \\
& & \, \times \, \epsilon_{\phi}(\phi^e)
\, \times \, \epsilon_{\mathrm{had}}(\text{SET},p_T^e,\eta_{\rm det},L,\ensuremath{u_{\parallel}})
\, \times \, R_1(\text{SET},L)
\, \times \, R_2(\ensuremath{u_{\parallel}}),
\label{e-effi}
\end{eqnarray}
\end{widetext}
in which $\epsilon_{\rm trig}$ measures the trigger efficiency for recording events in the sample, $\epsilon_{\rm FSR}$ the efficiency arising from radiated photons, $\epsilon_{\rm trk}$ the efficiency of the track selection requirement, $\epsilon_{\rm EMID}$ the efficiency of the calorimetric requirements used in the electron selection, and $\epsilon_{\phi_{\mathrm{mod}}}$ the efficiency loss caused by the calorimeter module boundaries. The efficiency $\epsilon_\phi$ models the electron $\phi$ dependent efficiency and $\epsilon_{\mathrm{had}}$ the effect on electron finding arising from hadronic activity in the event. The term $\epsilon_{\mathrm{had}}$ also describes the effect of multiple $\ensuremath{p\overline{p}}$ interactions on the electron identification.
Finally, $R_1(\text{SET}, L)$ is introduced to account for imperfections in the efficiency description in full MC at high instantaneous luminosity, especially the one related to track matching, while $R_2(u_{\parallel})$ is introduced to describe the fine details of the hard recoil (see Sec.~\ref{smearing_model}) in the electron identification and reconstruction efficiency that were not fully described by the hadronic energy dependent efficiency. The correction $R_1$ is derived from a comparison of the efficiency in data and full MC, while $R_2$ is derived from a comparison of the efficiency between data and fast MC in which all previously determined efficiencies are applied to the fast MC. We describe each of these efficiencies in the following sections. The overall normalization of the total efficiency does not enter this analysis because the fast MC yields are always normalized to the data or full MC yield.
\subsubsection{Trigger Efficiency}
\label{sec:TriggerEff}
Events used in this analysis must satisfy one of the single-electron triggers described in Sec.~\ref{sec:det}. For this analysis, a one-to-one correspondence between a run period and a specific trigger is enforced. To achieve correspondence, we choose the lowest $p_T$ threshold unprescaled trigger available for the period. The efficiency for any of these three triggers will be less than unity near the threshold because the measured energy differs between the trigger system and the offline reconstruction program. The efficiency modeling these effects, $\epsilon_{\rm trig}$, is thus a function of electron $p_T$.
A tag and probe method is used with data $\ensuremath{Z \rightarrow ee}$ candidate events to measure the trigger efficiency as a function of $p^e_T$. We require one electron (the tag) in a $\ensuremath{Z \rightarrow ee}$ event to pass all selection requirements including the trigger. The other electron (the probe) is initially required to pass the full selection except a requirement regarding the trigger. The efficiency is then determined from the rate of electrons passing the trigger whose efficiency is being measured. For this efficiency determination, we allow the tag electron to be in the EC to gain statistics, but the probe electron must be in the CC.
The resulting measured efficiency is shown in Fig.~\ref{fig:trigeff} for each of the three triggers. When simulating the trigger in the fast MC, a mix of the three efficiencies is used such that each replicates the frequency in data as determined from the integrated luminosity exposure for each trigger. This efficiency is only used when using the fast MC to simulate events for comparison to collider data. It does not apply to the full MC analysis.
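As an illustration of the tag and probe counting (not part of the analysis code, which counts $\ensuremath{Z \rightarrow ee}$ probes per bin), a minimal Python sketch of a binned pass fraction with binomial uncertainties is given below; the binning, the sigmoid turn-on of the toy, and all identifiers are assumptions for illustration only:
\begin{verbatim}
import numpy as np

def tag_and_probe_efficiency(pt_probes, pt_passing, edges):
    # Binned pass fraction with binomial uncertainties.
    n_all, _ = np.histogram(pt_probes, bins=edges)
    n_pass, _ = np.histogram(pt_passing, bins=edges)
    eff = np.divide(n_pass, n_all,
                    out=np.zeros(len(n_all)), where=n_all > 0)
    err = np.sqrt(np.maximum(eff * (1 - eff), 0)
                  / np.maximum(n_all, 1))
    return eff, err

# Toy usage with an assumed sigmoid turn-on near threshold.
rng = np.random.default_rng(1)
pt = rng.uniform(20.0, 60.0, 100000)
p_fire = 1.0 / (1.0 + np.exp(-(pt - 27.0) / 1.5))
passing = pt[rng.uniform(size=pt.size) < p_fire]
eff, err = tag_and_probe_efficiency(pt, passing,
                                    np.linspace(20, 60, 21))
\end{verbatim}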
\begin{figure}[htbp]
\includegraphics[width=\linewidth]{fig18.eps}
\caption{Trigger efficiency as a function of $p^e_T$ for the three triggers
used.} \label{fig:trigeff}
\end{figure}
\subsubsection{FSR Efficiency}
\label{sec:elec_fsr}
Radiated photons (FSR) close to or inside the electron reconstruction cone will affect the electron identification efficiency because of isolation, shower shape and track matching requirements. To account for these effects, we introduce an electron efficiency $\epsilon_{\rm FSR}(X,\Delta R, \eta, E^e)$. Here, $X$ is the fraction of the electron energy carried by the photon and $ \Delta R = \sqrt{\left[\phi(e)\,-\,\phi(\gamma)\right]^2\,+\,\left[\eta(e)\,-\,\eta(\gamma)\right]^2} $ measures the separation between the electron and photon.
The parametrization is derived by studying the electron reconstruction efficiency using two full MC samples: one with single electrons having the kinematics of those from $W\to e\nu$ decay that are accompanied by FSR photons, and a second sample that includes exactly the same events as the first one, except that the FSR photon energy has been added to the energy of the electron, and the photons themselves are removed. Both samples have zero-bias event overlay, and the same zero-bias event is overlaid on a given $W$ boson event in each of the two samples. The ratio of the electron yields in the first sample to that in the second sample defines this efficiency. The efficiency is determined in bins of the four variables, $X$, $\Delta R$, $\eta$, and $E^e$. Figure~\ref{fig:eideff_gammafrac} shows examples of the electron reconstruction efficiency versus $X$ in twelve $\Delta R$ bins. The shapes of these efficiencies as functions of $X$ and $\Delta R$ are primarily a combination of effects of the photon distorting the cluster shower shape and cluster centroid position, causing the EMID or track match requirements to fail, and of the photon carrying sufficient energy that the electron fails either the track or cluster $p_T$ requirement.
The efficiency in the first three $\Delta R$ bins is mainly driven by the track matching requirement and, to a lesser extent, by the shower shape requirement. While the photon is still close enough to the electron for most of its energy to be deposited in the same reconstruction cone, the shower shape at large values of $X$ becomes too different from that expected for a single electron, and, more importantly, the calorimeter-based estimate of the cluster position deviates significantly from the track-based expectation. In intermediate $\Delta R$ bins, the photon is in the region that interferes with the cluster isolation requirement. The peak at intermediate values of $X$ separates the regimes in which the cluster is reconstructed around the electron or around the photon. In the last three $\Delta R$ bins, the photon is far away from the electron cone and does not directly interfere with the electron reconstruction.
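As a sketch of how such a binned yield ratio can be formed from the two matched full MC samples (a simplification showing only the $(X,\Delta R)$ binning; the analysis also bins in $\eta$ and $E^e$, and all identifiers are hypothetical):
\begin{verbatim}
import numpy as np

def fsr_efficiency_map(x_pass, dr_pass, x_all, dr_all,
                       x_edges, dr_edges):
    # Yield ratio of the FSR sample (electron reconstructed and
    # identified) to the matched photon-merged reference sample,
    # per (X, dR) bin.
    num, _, _ = np.histogram2d(x_pass, dr_pass,
                               bins=[x_edges, dr_edges])
    den, _, _ = np.histogram2d(x_all, dr_all,
                               bins=[x_edges, dr_edges])
    return np.divide(num, den, out=np.zeros_like(num),
                     where=den > 0)
\end{verbatim}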
\begin{figure*}[t]
\includegraphics [width=0.75\textwidth] {fig19.eps}
\caption{Electron identification efficiency for electrons accompanied by FSR determined from full MC as a function of the fraction of the energy carried by the photon. Each pane corresponds to a different $\Delta R$ region, and the distributions are integrated over $\eta$ and $E^e$.}
\label{fig:eideff_gammafrac}
\end{figure*}
\subsubsection{Track-Matching Efficiency}
\label{sec:eff_track}
The track-matching efficiency $\epsilon_{\rm trk}(z_V,\eta,p_T)$ is
described as a product of two efficiencies, one expressed as a
function of $z_V$ and $\eta$ and the second expressed as a function of
$p_T$ and $\eta$. The first of these is derived using the
tag and probe method applied to $\ensuremath{Z \rightarrow ee}$ candidate events. The probe electron
is initially required to pass all selections except the tracking requirements.
The resulting efficiency is shown in Fig.~\ref{fig:tight_trk_eff}. Because
this is derived for both variables simultaneously, the correlations are
automatically included. The second function describes the $p_T$ dependence and
correlation with $\eta$ of the track requirements. Because of the limited size of the $Z$ boson sample, the dependence is derived from
full MC. It is modeled with an $\eta$-dependent logarithmic function $\epsilon_{\rm trk}(z_V,\eta,p_T) = \epsilon_{\rm trk}(z_V,\eta)\times \left[p_0(\eta) + p_1(\eta)\log(p_T)\right]$, and shown in Fig.~\ref{fig:tight_trk_eff_ptdep} for different $\eta$ regions. It is interpreted as a perturbation over the efficiency $\epsilon_{\rm trk}(z_V,\eta)$, without changing the relative normalization in each $\eta$ region.
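A minimal sketch of this factorized efficiency, assuming the per-$\eta$ functions are available as callables (all names and the toy values are hypothetical):
\begin{verbatim}
import numpy as np

def eps_trk(zv, eta, pt, eps_zv_eta, p0, p1):
    # Factorized track-matching efficiency:
    #   eps(zv, eta) x [p0(eta) + p1(eta) log(pt)],
    # with the pT-dependent factor normalized to unity at 45 GeV
    # so that it does not change the per-eta normalization.
    pert = p0(eta) + p1(eta) * np.log(pt)
    norm = p0(eta) + p1(eta) * np.log(45.0)
    return eps_zv_eta(zv, eta) * pert / norm

# Toy usage with flat lookups.
print(eps_trk(0.0, 0.5, 40.0,
              lambda zv, eta: 0.92,
              lambda eta: 0.9, lambda eta: 0.02))
\end{verbatim}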
\begin{figure}[htp]
\includegraphics [width=\linewidth] {fig20.eps}
\caption{Track-matching efficiency as a function of $z_{V}$ and $\eta$ in data. The efficiency is proportional to the area of the boxes.}
\label{fig:tight_trk_eff}
\end{figure}
\begin{figure}[htp]
\includegraphics[width=\linewidth]{fig21.eps}
\caption{The $\ensuremath{p_T^e}$-dependent perturbation over the track-matching efficiency in 11 bins of $\eta$ in full MC. The slopes change because electrons with higher energy are more easily matched to the calorimeter cluster. This efficiency perturbation is normalized at $p^e_T=45$ GeV. The total track-matching efficiency is the product of the efficiencies shown in Fig.~\ref{fig:tight_trk_eff} and~Fig.~\ref{fig:tight_trk_eff_ptdep}.}
\label{fig:tight_trk_eff_ptdep}
\end{figure}
\subsubsection{EM Identification Efficiency}
The efficiency accounting for the EM cluster finding, HMatrix, isolation, and EM fraction requirements is derived from $\ensuremath{Z \rightarrow ee}$ data, again using the tag and probe method. For this determination, the probe object is a track that passes the tracking requirements, and the invariant mass of the track and tag electron is required to be consistent with that of a $Z$ boson.
\subsubsection{Electron $\phi_{\mathrm{mod}}$ Efficiency}
\label{sec:eff_phimod}
The D0 calorimeter has 32 EM modules in the CC region. Each module is two cells wide, hence has a width of $2\pi/32\approx 0.2$~radian in $\phi$. Between any two adjacent modules, there is an uninstrumented region (crack) of width of $\approx 0.02$~radian in $\phi$. An intra-module $\phi$ variable $\phi_{\mathrm{mod}}$ is defined as the fractional part of $32\phi/2\pi$. This variable measures the angular position within any module as a fraction of module width (with $0 \leq \phi_{\mathrm{mod}} \leq 1$). Each of the EM1, EM2, and EM4 layers in an EM module consists of two readout cells. The central value $\phi_{\mathrm{mod}}=0.5$ corresponds to the inter-cell boundary in $\phi$ and values close to 0 and 1 are the module edges. The EM3 layer is segmented twice as finely in both $\eta$ and $\phi$ (0.05 radian wide). The $\phi_{\mathrm{mod}}$ values at 0.25, 0.5, and 0.75 correspond to the inter-cell boundaries of EM3.
Because of cell boundaries inside and uninstrumented regions outside an EM module, the reconstructed electron cluster center $\phi^{\rm EM}$ is biased away from these regions. Figure~\ref{fig:PhiModBias} shows the $\phi^{\rm EM}_{\mathrm{mod}}$ shift, $\phi^{\rm EM}_{\mathrm{mod}} - \phi^{\rm trk}_{\mathrm{mod}}$, as a function of $\phi^{\rm trk}_{\mathrm{mod}}$, which is calculated from the track $\phi$ extrapolated to the depth of EM3. Since $\phi^{\rm trk}_{\mathrm{mod}}$ is unbiased by uninstrumented calorimeter regions, we see a strong tendency for $\phi^{\rm EM}_{\mathrm{mod}}$ to move away from these regions, resulting in a bias in the EM cluster center. We also observe direction biases near the inter-cell boundaries, with a complicated structure that arises from the sharing of shower energy among the EM3 cells used to define the calorimeter-based $\phi$ measurement.
The $\phi_{\mathrm{mod}}$ efficiency is derived using the tag and probe method applied to $Z$ candidate events and is shown in Fig.~\ref{fig:PhiModEff}. The efficiency variation with $\phi^{\rm trk}_{\mathrm{mod}}$ is small except near the edges. We therefore apply a fiducial requirement, $0.1\leq \phi^{\rm trk}_{\mathrm{mod}} \leq 0.9$, restricting the analysis to the region of stable $\phi_{\mathrm{mod}}$ efficiency.
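The $\phi_{\mathrm{mod}}$ definition and the fiducial requirement can be stated compactly; a minimal Python sketch (identifiers are ours):
\begin{verbatim}
import numpy as np

def phi_mod(phi):
    # Fractional part of 32*phi/(2*pi): position within a module.
    return np.mod(32.0 * phi / (2.0 * np.pi), 1.0)

def in_phi_fiducial(phi_trk):
    # Fiducial requirement 0.1 <= phi_mod <= 0.9 applied to the
    # extrapolated track position.
    pm = phi_mod(phi_trk)
    return (pm >= 0.1) & (pm <= 0.9)
\end{verbatim}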
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{fig22.eps}
\caption{Average difference between $\phi^{\rm EM}$ and $\phi^{\rm trk}$
in module units as a function of $\phi^{\rm trk}_{\mathrm{mod}}$ extrapolated to EM3.}
\label{fig:PhiModBias}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{fig23.eps}
\caption{Dependence of the electron reconstruction efficiency on the extrapolated
track $\phi_{\mathrm{mod}}$. }
\label{fig:PhiModEff}
\end{figure}
\subsubsection{Hadronic Energy Dependent Electron Efficiency}
\label{sec:eff_set}
The efficiencies described thus far are directly related to kinematic properties of the electron and radiated photons. Indirect effects arising from the presence of hadrons in the same event have been accounted for through the presence of the recoil and additional $\ensuremath{p\overline{p}}$ interactions in events used to derive the efficiencies, but the independent effects of the hadronic energy are not specifically studied. The hadronic energy dependent electron efficiency model accounts for the EM cluster reconstruction efficiency which is strongly affected by the presence of hadronic energy near the electron in the calorimeter cells. It also collectively describes any residual $\ensuremath{p_T^e}$ and SET dependency of the electron reconstruction and identification efficiency.
The hadronic efficiency $\epsilon_{\mathrm{had}}(\mathrm{SET},L,\ensuremath{u_{\parallel}},p_T^e,\eta_{\rm det})$ depends on five variables, the first three being direct measures of the hadronic energy. The instantaneous luminosity of the full MC event is taken from the zero-bias event overlaid on the hard scatter. The use of SET accounts for the impact of energy from additional interactions, and the use of $\ensuremath{u_{\parallel}}$ accounts for the magnitude of the hard hadron recoil energy and its orientation with respect to the electron. The $p_T^e$ dependence arises because higher energy electrons are less affected by a fixed amount of nearby hadronic energy than lower energy electrons. Finally, the use of instantaneous luminosity $L$ accounts for the different behavior of the calorimeter read out at different instantaneous luminosity regimes.
The efficiency is derived in a multi-step process and uses the zero-bias-event SET and true $p_T$ of the electron in both full MC and fast MC as observables. These variables are chosen because they are not modified during the fast simulation and, thus, provide robust observables to describe the cluster reconstruction efficiency and its dependence on the hadronic energy, especially in a high instantaneous luminosity environment. The first step is to create a version of the fast MC in which the zero-bias-event SET and electron true $p_T$ distributions are reweighted to agree with the full MC distributions. This provides a high-statistics target model for the fast MC.
In the next step, we compare the number of events in the original and reweighted fast MC in bins of $u_{\parallel}$, $\ensuremath{p_T^e}$, $\eta_{\text{det}}$, and $L$. Their ratio is taken as the initial estimate of the efficiency. In each bin, we compare the distribution of $\text{SET}/\ensuremath{p_T^e}$ between the original and the reweighted fast MC. The ratio is smoothed using a polynomial function and the average value is shifted to one, so that it can be interpreted as a perturbation over the initial estimate from the full MC. The hadronic efficiency is then the product of the initial estimate and the $\text{SET}/\ensuremath{p_T^e}$ perturbation in each bin.
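A simplified sketch of the $\text{SET}/\ensuremath{p_T^e}$ perturbation step in a single bin is given below; the direction of the ratio and the polynomial degree are assumptions for illustration:
\begin{verbatim}
import numpy as np

def set_over_pt_perturbation(r_orig, r_rewt, edges, deg=3):
    # Ratio of the original to the reweighted fast MC SET/pT
    # spectra in one (u_par, pT, eta_det, L) bin, smoothed with
    # a polynomial and shifted to an average of one, so that it
    # acts as a perturbation about the initial estimate.
    h_orig, _ = np.histogram(r_orig, bins=edges, density=True)
    h_rewt, _ = np.histogram(r_rewt, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    ratio = np.divide(h_orig, h_rewt,
                      out=np.ones_like(h_orig),
                      where=h_rewt > 0)
    smooth = np.polyval(np.polyfit(centers, ratio, deg),
                        centers)
    return smooth / smooth.mean()
\end{verbatim}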
\subsubsection{Electron $\phi$ Efficiency}
\label{sec:eff_phi}
The reconstructed electron $\phi$ distribution in $\ensuremath{W \rightarrow e \nu}$ events is not uniform. Once the $\phi_{\mathrm{mod}}$ induced effects are incorporated, we attribute the remaining overall $\phi$ dependence to small-scale imperfections in the detector, primarily inefficient tracker regions and calorimeter cells, which have no significant effect on the electron energy scale. This efficiency is determined by dividing the $\phi$ distribution in data or full MC by that from the corresponding fast MC after including all other fast MC efficiencies. Figure~\ref{fig:elec_phi_eff} shows this efficiency for data $\ensuremath{W \rightarrow e \nu}$ events with the maximum efficiency value normalized to one.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{fig24.eps}
\caption{The ratio data/fast MC of electron yield in $\ensuremath{W \rightarrow e \nu}$ events as a function of electron $\phi$ after all other efficiencies have been applied to the fast MC. The ratio is used as a final efficiency correction. The maximum efficiency value is normalized to one.}
\label{fig:elec_phi_eff}
\end{figure}
\subsubsection{Monte Carlo Validation}
\label{sec:eff_valid}
We validate our parametrized model derived from full MC using the generator level information by studying the efficiency as a function of the variables that are used to parametrize it. Figures~\ref{fig:TrueEptEff} and \ref{fig:DetEtaEff} show the comparison of the total electron efficiency, except for the trigger efficiency whose effect is not included in the full MC simulation, as a function of true $p_T^e$ and $\eta_{\rm det}$ in full MC and fast MC. For electrons in $Z\rightarrow ee$ events and in $W \rightarrow e\nu$ events, we observe that our efficiency model in fast MC accurately reproduces the efficiency in full MC.
\begin{figure}[htbp]
\includegraphics [width=\linewidth] {fig25a.eps}
\includegraphics [width=\linewidth] {fig25b.eps}
\caption{[color online] The reconstruction and identification efficiency as a function of true $p_T^e$ in full MC and fast MC for electrons in (a) $Z\rightarrow ee$ and (b) $W\rightarrow e\nu$ events. In $\ensuremath{Z \rightarrow ee}$ events, when the probed electron has high $\ensuremath{p_T^e}$, the other electron in the event is soft. When the soft electron is not properly identified, the $\ensuremath{Z \rightarrow ee}$ event is not identified either. Thus, we observe a drop in the identification efficiency of $\ensuremath{Z \rightarrow ee}$ events with high $\ensuremath{p_T^e}$ electrons, but not in $\ensuremath{W \rightarrow e \nu}$ events.}
\label{fig:TrueEptEff}
\end{figure}
\begin{figure}[htbp]
\includegraphics [width=\linewidth] {fig26a.eps}
\includegraphics [width=\linewidth] {fig26b.eps}
\caption{[color online] The reconstruction and identification efficiency as a function of $\eta_{\rm det}$ in full MC and fast MC for electrons in (a) $\ensuremath{Z \rightarrow ee}$ and (b) $\ensuremath{W \rightarrow e \nu}$ events.}
\label{fig:DetEtaEff}
\end{figure}
We conclude that the full MC electron reconstruction and identification efficiency is well described by the parametrized model. This validates the strategy adopted for the derivation of the hadronic energy dependent efficiency (Sec.~\ref{sec:eff_set}).
\subsubsection{Residual Efficiency Corrections\label{sec:DataHack}}
The efficiencies discussed thus far assume that the full MC can be used to accurately describe the efficiency dependencies. After applying the above efficiencies to the fast MC, we compare full MC or fast MC and data to derive two independent residual efficiency corrections $R_1(\mathrm{SET},L)$ and $R_2(\ensuremath{u_{\parallel}})$.
The correction $R_1(\mathrm{SET},L)$ is derived by measuring the electron identification efficiency as a function of SET and $L$ in both $\ensuremath{Z \rightarrow ee}$ data and full MC. The ratio of data and full MC efficiency defines this correction. The ratios are shown in Fig.~\ref{fig:EffRatioSETLumi} for projections on the SET and $L$ axes. The correction $R_1(\mathrm{SET},L)$ is needed only for data analysis as a correction to the efficiency $\epsilon_{\mathrm{had}}$ derived previously by comparing full MC and fast MC.
To determine $R_1(\mathrm{SET},L)$, rather than directly counting the number of probe electrons, we study the $m_{ee}$ distribution in bins of the variables used to parametrize the correction. Two $m_{ee}$ distributions are used, from a loose sample, when the probe electron is not required to satisfy the selection under study, and from a tight sample, when the probe electron is required to satisfy all the selection requirements. The $\ensuremath{Z \rightarrow ee}$ yield in each distribution is determined by fitting the distribution to $\ensuremath{Z \rightarrow ee}$ signal and background components.
The second residual efficiency correction, $R_2(\ensuremath{u_{\parallel}})$, addresses imperfections in the $\ensuremath{u_{\parallel}}$ dependency of the efficiency model. This is derived using the same technique of measuring the $m_{ee}$ distribution in bins of $\ensuremath{u_{\parallel}}$, but taking the ratio of the efficiencies calculated in data to those derived in fast MC $\ensuremath{Z \rightarrow ee}$ events.
\begin{figure}[htbp]
\includegraphics[width=\linewidth]{fig27.eps}
\caption{[color online] Electron efficiency correction (data/full MC) as a function of (a) SET and (b) instantaneous luminosity. The contribution of each electron selection requirement to the efficiency correction is shown by the ratios derived from samples in which only the HMatrix (blue), only the loose track match (red), or only the tight track match (magenta) criterion is used (see Sec.~\ref{sec:eventselection} for the definition of each criterion).}
\label{fig:EffRatioSETLumi}
\end{figure}
\subsubsection[Systematic Uncertainty]{Systematic Uncertainty due to Efficiencies}
\label{sec:sys_eff}
The most significant efficiency-related uncertainty results from the adjustment of the final residual efficiency correction $R_2$. The resulting uncertainties on $M_W$ are 1~MeV, 2~MeV, and 5~MeV, respectively, for the $m_T$, $p^e_T$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ methods.
\subsection{Electron Response Parameterization}
\label{sec:pmcs_elec}
The electron response model comprises three components: the response model, the resolution model and the underlying energy model. A short introduction to the model is provided here, and detailed descriptions are then given in each of the following subsections.
The response model describes the average reconstructed electron energy for a
given electron true energy. We build a parametrized model for the contribution of radiated photons to the reconstructed electron cluster energy, since the energy from these photons is included in the electron energy after reconstruction. We correct for residual luminosity and $\eta$ dependencies of the response that are not described by the data calibration. Finally, we use $\ensuremath{Z \rightarrow ee}$ data and the measured value of the $Z$ boson mass~\cite{LEPZ_1, LEPZ_2, LEPZ_3, LEPZ_4} to calibrate the absolute energy scale.
The resolution model describes the fluctuations in the reconstructed electron energy. The EM calorimeter sampling resolution is modeled from a full MC sample that includes the improvements described in Sec.~\ref{sec:improve_shower}. This allows a detailed description of the dependence of the sampling term on the amount of uninstrumented material upstream of the calorimeter, as well as the energy and angular dependencies that it creates. We use $\ensuremath{Z \rightarrow ee}$ data and the measured value of the $Z$ boson width~\cite{LEPZ_W_1, LEPZ_W_2, LEPZ_W_3, LEPZ_W_4} to calibrate the constant term of the resolution. Most of the noise fluctuations come from fluctuations in the underlying hadronic energy inside the electron reconstruction cone. We therefore do not include an explicit noise term in the resolution model.
The underlying energy model describes the average contribution of hadrons to the electron's reconstructed energy and its fluctuations.
\subsubsection{Photon Radiation Effects}
\label{sec:pmcs_elec_fsr}
Photon radiation from the $W$ boson decay electron can bias the $M_W$ measurement when the energy from radiated photons is not included in the reconstructed electron energy. This occurs if the radiated photon is separated from the electron and its energy is not counted in the electron energy, or if the photon shower is absorbed totally or partially by uninstrumented material in front of the calorimeter.
The radiated photons arise either from FSR or from the interaction of the electron with material in front of the calorimeter (bremsstrahlung). The bremsstrahlung energy loss is corrected in full MC by the electron energy loss correction (see Sec.~\ref{sec:deadmat}). The FSR energy loss is modeled in fast MC.
The average contribution of FSR photons to the electron's reconstructed energy (for the same $\Delta R$ bins as Fig.~\ref{fig:eideff_gammafrac}) is given in Fig.~\ref{fig:eidloss_gammafrac}. The fraction of energy carried by the photon is denoted by $X$. The vertical axis is the ratio $\kappa$, defined as the negative of the ratio of the difference of the reconstructed electron energy without and with FSR to the same difference for the true MC electron energy:
\begin{equation}
\kappa = - \frac{\displaystyle E^{e}_{\rm reco}\lbrack \mathrm{no}\,\mathrm{FSR} \rbrack -E^{e}_{\rm reco}\lbrack \mathrm{with}\,\mathrm{FSR} \rbrack}{\displaystyle E^{e}_{\rm true}\lbrack \mathrm{no}\, \mathrm{FSR} \rbrack -E^{e}_{\rm true}\lbrack \mathrm{with}\,\mathrm{FSR} \rbrack}
\end{equation}
\begin{figure*}[htp]
\includegraphics [scale=0.75] {fig28.eps}
\caption{Electron energy correction determined from full MC as a function of the fraction of the energy carried by the photon.}
\label{fig:eidloss_gammafrac}
\end{figure*}
At high $\Delta R$ we expect $\kappa = -1$ because the photon is well separated from the electron and does not contribute to the reconstructed electron energy. At low $\Delta R$, we expect negative values of $\kappa$ due to losses in the uninstrumented material, an effect that decreases as $X$ increases. At intermediate $\Delta R$ and large values of $X$, $\kappa\approx 0$ since here the EM cluster is reconstructed around the photon. The final FSR energy loss parametrization is performed as a function of the same variables as the FSR efficiency: $\Delta R$, $X$, $\eta_{\rm det}$ and $E^{e}$.
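For concreteness, a one-line Python transcription of the definition of $\kappa$ (identifiers are ours):
\begin{verbatim}
def kappa(e_reco_nofsr, e_reco_fsr, e_true_nofsr, e_true_fsr):
    # Reconstructed-energy difference (no FSR minus with FSR)
    # over the true-energy difference, with an overall minus sign.
    return (-(e_reco_nofsr - e_reco_fsr)
            / (e_true_nofsr - e_true_fsr))
\end{verbatim}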
\subsubsection{Dependence of the Calibration on the Instantaneous Luminosity}
\label{sec:gainlumi}
The $M_W$ measurement explores a much higher instantaneous luminosity regime than our previous measurement (Fig.~\ref{fig:lumiprofiles}), and we observe a significant dependence of the energy response on the instantaneous luminosity. Two opposite effects contribute to the change in energy response. The first is the extra energy in the calorimeter due to additional $p\bar{p}$ interactions, which causes an apparent increase in the response. The extra energy is correctly accounted for in the full simulation by overlaying data zero-bias events that have the same time and luminosity profile as the collider data and in the fast simulation by a parametrized model that will be described in Sec.~\ref{sec:ewindow}.
The second effect is due to a drop in the high voltage (HV) applied across the LAr gap that collects the ionization charge, causing an apparent reduction in the energy response if not corrected. The loss of HV occurs across the resistive coat on the signal boards~\cite{RunIdetector} that are used to deliver the HV to the LAr gaps. The resistivity of this coat was measured {\em in situ}, at the temperature of liquid argon, to be of the order of 180 M$\Omega$ per square, with a large spread from one board to another. Whenever large currents flow through this coat, as is the case in high instantaneous luminosity operations, a sizable HV drop occurs and the ionization charge collected is reduced. In the CC, the detector modules extend across the full $\eta_{\rm det}$ range and the HV is delivered from the edges, at $\eta_{\rm det} = \pm 1.2$, making the drop most pronounced at the center ($\eta_{\rm det}=0$).
The average current from each calorimeter cell as a function of instantaneous luminosity is determined using the energy deposited in zero-bias events. Using a simple resistive circuit model of the calorimeter HV distribution, the current is translated into an $\eta_{\rm det}$ and luminosity dependent model of the HV drop. We use measurements of the electron drift velocity as a function of the electric field~\cite{bib:Walkowiak} and the cell geometry to determine the fractional loss in response. A final overall correction is derived from the instantaneous luminosity dependence of the $m_{ee}$ peak position measured in data.
We simulate single electrons at different energies, angles and luminosities, both with and without the tuned model of luminosity dependence, to parameterize the response change for electrons as a function of instantaneous luminosity and $\eta_{\rm det}$. For electrons at normal incidence, where the effect is maximal, the fractional change in response at an instantaneous luminosity of $L=120\times 10^{30}\, {\rm cm}^{-2}{\rm s}^{-1}$ is 0.42\%. A possible dependence on electron energy has been considered and found to be negligible.
\subsubsection{Dependence of the Calibration on electron {\boldmath $\eta$}}
\label{sec:EMscaleEtaAdj}
The procedure used to calibrate the EM calorimeter includes an equalization of the energy response of towers at different $\eta$ values. This procedure adjusts the gains until the position of the $Z$ boson mass peak in data is the same for any combination of $\eta$ values of the two electrons in a \ensuremath{Z \rightarrow ee}\ event. This procedure does not account for the $\eta$ dependence of the underlying energy flow, which implies that the reconstructed $Z$ boson mass has a small residual $\eta$ dependence. This is a small effect, but we take it into account in the measurement of $M_W$ by simulating this dependence in fast MC.
To derive an $\eta_{\rm det}$-dependent correction to the electron energy scale, we split our sample of CC-CC \ensuremath{Z \rightarrow ee}\ events into 15~categories as defined in Table~\ref{table:StandardEtaCategories} (Sec.~\ref{sec:observable_mat_tune}). We use our standard procedures to fit for the $Z$~boson mass, separately for each category. These procedures use $m_{ee}$ templates produced using fast MC, in which the effect of the underlying energy is included. The results of these mass fits are summarized in Fig.~\ref{fig:EMscaleEtaAdj}. We define one relative gain constant for each $|\eta_{\rm det}|$~bin (Table~\ref{table:StandardEtaBins}) and we translate the 15~mass values from Fig.~\ref{fig:EMscaleEtaAdj} into the values of the 5~relative gain constants. The world average value~\cite{LEPZ_1, LEPZ_2, LEPZ_3, LEPZ_4} of the $Z$~boson mass is used to translate energies into per-electron relative gains.
The results of the translation are shown in Fig.~\ref{fig:EMscaleEtaAdj}. They are used in fast MC for the simulation of the $\eta_{\rm det}$~nonuniformity in the calorimeter gains.
\begin{figure}[hbpt]
\centering
\includegraphics[width=\linewidth]{fig29a.eps}\\
\includegraphics[width=\linewidth]{fig29b.eps}
\caption{(a) Result of the $Z$ boson mass fit per $\eta_{\rm det}$ category prior to applying $\eta$-dependent corrections (Table~\ref{table:StandardEtaCategories}). (b) Result of the translation into one relative gain constant per $\eta_{\rm det}$ bin.
\label{fig:EMscaleEtaAdj}}
\end{figure}
\subsubsection{Energy Response and Resolution}
\label{sec:elec_energy}
The reconstructed electron energy $E$ is simulated as:
\begin{equation}
\begin{split}
E=R_{\rm EM}(E_{0}, \eta_{\text{det}}, L)\,&\otimes\, \sigma_{\rm EM}(E_{0}, \eta) \\ &+ \Delta E(\text{SET}, L, \ensuremath{p_T^e}, \eta_{\text{det}}, u_{\parallel}),
\label{eqn:elec_energy1}
\end{split}
\end{equation}
where $E_0$ is the electron energy after the FSR simulation, $R_{\rm EM}\otimes \sigma_{\rm EM}$ is distributed as a Gaussian with mean given by the energy response $R_{\rm EM}$ and width given by the energy resolution $\sigma_{\rm EM}$. The term $\Delta E$ describes the deposition of energy from hadronic showers inside the electron reconstruction cone.
The resolution of the EM calorimeter $\sigma_{\rm EM}$ is modeled as:
\begin{equation}
\frac{\sigma_{\rm EM}} {E_0} = \sqrt{C^{2}_{\rm EM} + \frac{S^{2}_{\rm EM}}{E_0} + \frac{N^{2}_{\rm EM}}{E_0^{2}} }\ \ ,
\end{equation}
in which $C_{\rm EM}$, $S_{\rm EM}$ and $N_{\rm EM}$ correspond to the constant, sampling and noise terms, respectively. Owing to the uninstrumented material in front of the calorimeter, the sampling term parameter $S_{\rm EM}$ depends on electron energy and incident angle, and is parametrized as:
\begin{equation}
S_{\rm EM} = S_0\exp\left[S_1\left(\frac{1}{\sin\theta} - 1\right)\right] +\frac{(S_2\eta+S_3)}{\sqrt{E_0}},
\label{eq:resolution_model}
\end{equation}
where
\begin{eqnarray}
S_0 &=& 0.15294\pm 0.00005\,{\rm GeV}^{1/2},\nonumber\\
S_1 &=& 1.543\pm 0.007,\nonumber\\
S_2 &=& -0.025\pm 0.001\,{\rm GeV},\nonumber\\
S_3 &=& 0.172\pm 0.002\,{\rm GeV}.
\label{eq:resolution_parameters}
\end{eqnarray}
The values of the smearing parameters $S_{0}$ to $S_{3}$ are determined from the improved simulation of the D0 detector, as discussed in Sec.~\ref{sec:deadmat}. The uncertainties quoted in the parameters are determined by propagating the uncertainty in the thickness of the cylinder added to the full MC simulation, which comes from the limited size of the $\ensuremath{Z \rightarrow ee}$ sample used in the tuning procedure. Figure~\ref{fig:samp_res} shows the electron energy sampling resolution $S_{\rm EM}/\sqrt{E_0}$ for four different values of electron $\eta$. The strong energy and angular dependencies in Eq.~\ref{eq:resolution_model} are caused by the energy lost in the uninstrumented material before the calorimeter.
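As a numerical illustration, the resolution model can be evaluated directly from the quoted central values. The Python sketch below (with the constant term anticipated from Sec.~\ref{sec:ConstantTerm} and $N_{\rm EM}=0$) gives a fractional resolution of about 3.3\% for a 45 GeV electron at normal incidence; identifiers are ours:
\begin{verbatim}
import numpy as np

# Central values of the sampling-term parameters quoted above.
S0, S1, S2, S3 = 0.15294, 1.543, -0.025, 0.172

def sampling_term(e0, eta):
    # S_EM of the sampling model; theta from
    # eta = -ln tan(theta/2).
    theta = 2.0 * np.arctan(np.exp(-eta))
    return (S0 * np.exp(S1 * (1.0 / np.sin(theta) - 1.0))
            + (S2 * eta + S3) / np.sqrt(e0))

def fractional_resolution(e0, eta, c_em=0.01997, n_em=0.0):
    # sigma_EM / E0 with the fitted constant term and N_EM = 0.
    s = sampling_term(e0, eta)
    return np.sqrt(c_em**2 + s**2 / e0 + (n_em / e0)**2)

# e.g. a 45 GeV electron at normal incidence (eta = 0).
print(fractional_resolution(45.0, 0.0))   # about 0.033
\end{verbatim}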
\begin{figure}[hbpt]
\centering
\includegraphics[width=\linewidth]{fig30.eps}\\
\caption{Sampling contribution $S_{\rm EM}/\sqrt{E_0}$ to the electron fractional energy resolution as a function of the electron energy $E_0$ for four different electron incident angles $\eta=0.0$, $0.5$, $0.75$, and $1.0$. The strong energy and angular dependencies are caused by the energy lost in the uninstrumented material before the calorimeter.
\label{fig:samp_res}}
\end{figure}
The value of $N_{\rm EM}$ is set to zero since the contribution of the noise term is small at the energies of electrons from $W$~boson and $Z$~boson decay and since the most important source of noise is already described by the fluctuations of $\Delta E$. The extraction of $C_{\rm EM}$ from the width of the $Z$~boson mass peak is discussed in Sec.~\ref{sec:ConstantTerm}.
In the vicinity of the $\phi$-module boundaries of the central calorimeter, the modeling of the electron energy response and resolution in fast MC is modified compared to the description above. The Gaussian resolution model is modified to include a {\it lossy tail} given by a Crystal Ball function~\cite{CrystalBall} for $\phi_{\text{mod}} < 0.2$ and $\phi_{\text{mod}} > 0.8$. In the same range, the loss in average response is modeled by a simple linear function. The parameters for both response and resolution modifications near the module boundary are determined from template fits to $\ensuremath{Z \rightarrow ee}$ data.
The energy response for electrons in Eq.~(\ref{eqn:elec_energy1}) is modeled as:
\begin{equation}
\begin{split}
R_{\rm EM} = F_{\rm \eta-eq}(\eta_{\rm det}) &\times F_{\rm HV-loss}(L, \eta_{\rm det})\\ & \times \left(\alpha ( E_{0} - \overline{E_0}) + \beta + \overline{E_0} \right)
\label{eq:response}
\end{split}
\end{equation}
where $F_{\rm HV-loss}(L, \eta_{\rm det})$ implements the model of the luminosity dependence of the calorimeter gains due to the HV~loss that is discussed in Sec.~\ref{sec:gainlumi}, and $F_{\rm \eta-eq}(\eta_{\rm det})$ describes the $\eta$ nonuniformity discussed in Sec.~\ref{sec:EMscaleEtaAdj}. The parameters $\alpha$ and $\beta$ are referred to as scale and offset, and $\overline{E_0}=43$~GeV is a reference value for the energy of electrons in $\ensuremath{Z \rightarrow ee}$ events. The values of $\alpha$ and $\beta$ are determined from $\ensuremath{Z \rightarrow ee}$ events in collider data. The constant $\overline{E_0}$ is introduced to reduce the correlation between the parameters $\alpha$ and $\beta$, improving the stability of the numerical evaluation of the covariance matrix of the simultaneous fit for $\alpha$ and $\beta$.
The determination of the parameters of the energy response of the calorimeter to electrons is one of the most important steps in the measurement of $M_W$. The scale and offset cannot be distinguished from one another to the precision required using only the $Z$~boson mass distribution. However, the different electron energies from $Z$~boson decays can be exploited to constrain the energy dependence of the energy response. The measured $m_{ee}$ is calculated from:
\begin{equation}
m_{ee}=\sqrt{2 E^{e_1} E^{e_2}(1-\cos\omega)},
\end{equation}
where $\omega$ is the opening angle between the two electrons.
Substituting and expanding in a Taylor series with $\beta \ll E^{e_1} + E^{e_2}$ gives (ignoring $\overline{E_0}$)
\begin{equation}
m_{ee} = \alpha m^0_{ee} + \beta f_Z^0 + \mathcal{O}(\beta^2),
\label{eq:fder}
\end{equation}
where $f_Z^0$ is a kinematic variable defined as:
\begin{equation}
\label{eq:fzdef}
f_Z^0 = \frac{(E_0^{e_1}+E_0^{e_2})(1-\cos\omega)}{m^0_{ee}}
\end{equation}
in which quantities with a zero subscript or superscript are calculated with all corrections except the $\alpha$ and $\beta$ correction.
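For completeness, the expansion behind Eq.~(\ref{eq:fder}) can be made explicit: substituting $E^{e_i} = \alpha E_0^{e_i} + \beta$ into $m_{ee}^2 = 2E^{e_1}E^{e_2}(1-\cos\omega)$ and ignoring $\overline{E_0}$ gives
\begin{equation}\nonumber
m_{ee}^2 = \alpha^2 \left(m^0_{ee}\right)^2 + 2\alpha\beta\, m^0_{ee}\, f_Z^0 + \mathcal{O}(\beta^2),
\end{equation}
so that taking the square root and expanding to first order in $\beta$ reproduces $m_{ee} \approx \alpha\, m^0_{ee} + \beta f_Z^0$.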
Equation~(\ref{eq:fder}) relates the observed $m_{ee}$ to the scale, offset, and true energies of the electrons. By varying both the scale and offset, templates of the two-dimensional distribution of $m_{ee}$ versus $f_Z$ in the fast MC are compared to the equivalent distribution in data. The final values of $\alpha$ and $\beta$ are those that maximize the likelihood formed during the comparison.
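A toy least-squares analogue of this fit, based directly on the linearized relation of Eq.~(\ref{eq:fder}) rather than on templates, is sketched below; all numbers are illustrative pseudo-data, not analysis values:
\begin{verbatim}
import numpy as np

def fit_scale_offset(m_ee, m0_ee, fz0):
    # Least-squares fit of m_ee ~ alpha*m0_ee + beta*fz0.
    # The analysis instead compares 2D (m_ee, f_Z) templates
    # and maximizes a likelihood.
    design = np.column_stack([m0_ee, fz0])
    coeffs, *_ = np.linalg.lstsq(design, m_ee, rcond=None)
    return coeffs  # (alpha, beta)

# Toy pseudo-data with alpha = 1.02 and beta = 0.15 GeV.
rng = np.random.default_rng(2)
m0 = rng.normal(91.19, 2.5, 50000)
fz = rng.normal(2.0, 0.3, 50000)      # assumed f_Z spectrum
m = 1.02 * m0 + 0.15 * fz + rng.normal(0.0, 1.0, m0.size)
print(fit_scale_offset(m, m0, fz))
\end{verbatim}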
The scale and offset are determined separately in different bins of instantaneous luminosity, which is expressed in units of $36\times10^{30}\, {\rm cm}^{-2}{\rm s}^{-1}$. Table~\ref{tab:EMscaleFits} summarizes the scale and offset parameters, along with the correlation coefficients from the fits. The fit results are shown in Fig.~\ref{fig:EMscalePlots}. The results from the fits for each of the bins in instantaneous luminosity agree well with each other. This shows that our model of the underlying energy flow into the electron cone (Sec.~\ref{sec:ewindow}) and the model of the luminosity-dependence of the calorimeter gains (Sec.~\ref{sec:gainlumi}) are correctly accounting for the luminosity dependence of the detector response to electrons. Rather than defining one luminosity-averaged set of parameters for the scale and offset, we use the different values per bin in luminosity, because there is no loss in statistical power, {\em i.e.}, the systematic uncertainty on $M_W$ due to the electron energy scale is not increased by splitting into luminosity bins.
\begin{table*}[htp]
\begin{center}
\caption{Results of the fits for electron energy scale and offset to the collider data.
\label{tab:EMscaleFits}}
\begin{tabular}{c|c|c|c|c}\hline\hline
& $0<L<2$ & $2<L<4$ & $4<L<6$ & $L>6$ \\ \hline
$\alpha$ & $1.0237 \pm 0.0043$ & $1.0164 \pm 0.0030$ & $1.0181 \pm 0.0047$ & $1.0300 \pm 0.0074$ \\
$\beta$ (GeV) & $0.129 \pm 0.032$ & $0.188 \pm 0.022$ & $0.208 \pm 0.034$ & $0.158 \pm 0.053$ \\
Correlation & $-0.796$ & $-0.786$ & $-0.783$ & $-0.764$ \\\hline\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{fig31.eps}
\caption{[color online] Central values and one standard deviation contours of the fits for electron energy scale and offset to the collider data. Instantaneous luminosity is given in units of $36\times10^{30}\, {\rm cm}^{-2}{\rm s}^{-1}$.}
\label{fig:EMscalePlots}
\end{figure}
The dominant systematic uncertainty on our measurement of $M_W$ is the precision with which we measure the mean electron energy response. The uncertainties in the energy scale and offset are individually large but they are highly correlated. We propagate the correlated uncertainties in the scale and offset parameters to our measurement of $M_W$ and obtain uncertainties of 16 MeV on $M_W$ using the $m_T$ and $\ensuremath{{\slash\kern-.7emE}_{T}}$ distributions and 17 MeV for the $p^e_T$ distribution.
\subsubsection{Determination of the Constant Term}
\label{sec:ConstantTerm}
For data, unlike the full MC, the constant term $C_{\rm EM}$ is also important. It arises primarily from residual channel-to-channel calibration differences and describes an energy-independent contribution to the fractional energy resolution. Thus, its main impact is felt at high electron energies where the sampling term is suppressed by its approximate $1/\sqrt{E}$ behavior. The value of $C_{\rm EM}$ is extracted from the width of the $Z$~boson mass peak with the sampling term modeled as described above. The value of $C_{\rm EM}$ is determined using template fitting to the $m_{ee}$ distribution. The best fit value for $C_{\rm EM}$ is
\begin{eqnarray*}
C_{\rm EM} & = & (1.997 \pm 0.073)\% ,
\end{eqnarray*}
which is in good agreement with our determination in Run~IIa and with the Run~II design goal of 2\%.
In order to propagate the uncertainty from the electron energy resolution model to $M_W$, we use fast MC pseudo-experiments in which we vary the sampling resolution function parameters by their uncertainty (Eq.~\ref{eq:resolution_parameters}). For each of these fast MC pseudo-experiments, we fit the constant term to account for the correlation between the two components of the resolution model. Using the procedure described in Sec.~\ref{sec:syst}, we estimate the uncertainty to be 2~MeV for the $M_W$ measurement using $m_T$ and $p^e_T$ and 3~MeV for $\ensuremath{{\slash\kern-.7emE}_{T}}$.
\subsubsection{Electron Cone Effects}
\label{sec:ewindow}
To reconstruct an electron, we must define an electron reconstruction cone (Fig.~\ref{fig:ewindow}). The energy in this cone arises not only from the electron, but also from hadronic recoil, spectator parton interactions, and additional $p\bar{p}$ collisions. There are also effects from the suppression of electronic noise. These bias both the reconstructed electron energy and the reconstructed recoil energy. Extra energy is given to the electron from the recoil and it is excluded from the reconstruction of $u_T$. The additional energy added to the electron cone is denoted by $\Delta E$ (Eq.~\ref{eqn:elec_energy1}, Sec.~\ref{sec:elec_energy}), while the additional transverse energy subtracted from the recoil in the electron cone is denoted by $\Delta u_{\parallel }$ (Eq.~\ref{eqn:duparadef}) in Sec.~\ref{sec:recoil}.
The value of $\Delta u_{\parallel }$ is not equal to $\Delta E \sin\theta^e$ for two reasons:
\begin{itemize}
\item The energy loss due to uninstrumented material in front of the calorimeter is corrected for the electron, but not for the recoil.
\item Zero suppression has different effects near a large concentrated energy ($\Delta E$) compared to a small diffuse background energy ($\Delta u_{\parallel }$).
\end{itemize}
To study electron cone effects, we construct a $\Delta u_{\parallel }$ library by recording the energy deposition in random cones from $W\rightarrow e\nu$ events in collider data and in full MC. These random electron reconstruction cones are selected in such a way as to avoid any electron energy contribution. Events in this library sample the same luminosity profile as the data used to measure $M_W$. For each electron in the fast MC simulation, we simulate its $\Delta u_{\parallel}$ by selecting a random cone from the library based on the electron's $\eta$, $\eta_{\text{det}}$, and $\ensuremath{u_{\parallel}}$, as well as on the event's SET and luminosity.
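A sketch of this library lookup in the fast MC is given below; the key structure and all identifiers are assumptions for illustration, with the actual binning described above:
\begin{verbatim}
import numpy as np

def draw_delta_u_par(library, key, rng):
    # Draw one recorded random-cone transverse energy from the
    # Delta u_par library. `library` is an assumed mapping from
    # the binned (eta, eta_det, u_par, SET, L) key to an array
    # of cone energies recorded in data or full MC.
    cones = library[key]
    return cones[rng.integers(len(cones))]

# Toy usage with a one-bin library.
rng = np.random.default_rng(3)
lib = {(0, 0, 1, 2, 1): np.array([0.8, 1.1, 0.4, 1.7])}
print(draw_delta_u_par(lib, (0, 0, 1, 2, 1), rng))
\end{verbatim}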
To model the change in the electron energy $\Delta E$ associated with a given $\Delta u_{\parallel}$, we perform a dedicated full MC simulation in which we separate the electron and FSR photon energies from the energies of the hadronic recoil particles in each cell, and generate three $W \rightarrow e\nu$ full MC samples based on the same full detector simulation of each $W\rightarrow e\nu$ event:
\begin{itemize}
\item {\it Electron only}: contains only the electron and FSR photons.
\item {\it No electron}: contains everything except the electron and FSR photons, {\it i.e.}, the hard recoil, spectator parton interactions, and additional $p\bar{p}$ interactions.
\item {\it Full sample}: contains the complete event.
\end{itemize}
For a given reconstructed electron cluster in the {\it full sample}, the value of $\Delta u_{\parallel}$ can be determined as the sum, in the sample with {\it no electron}, of the energies over the calorimeter cells that compose the cluster. The value of $\Delta E$ corresponding to this $\Delta u_{\parallel }$ is determined from the difference between the reconstructed electron energy in the {\it full sample} and that in the sample with {\it electron only}. The relationship between $\Delta u_{\parallel}$ and $\Delta E$ depends strongly on SET, $L$, $p^e_T$, $\eta_{\rm det}$, and $u_{\parallel }$, and these variables are used to parametrize the model in the fast MC. Figure~\ref{fig:compare_mean_de_set_lumi_deteta_upara} shows the comparison between the $\Delta E$ distribution in full MC and the one in fast MC, which uses the $\Delta u_{\parallel}$ library and the parametrized model for the relationship with $\Delta E$.
\begin{figure*}[ht]
\includegraphics[width=\linewidth]{fig32.eps}
\caption{[color online] Mean $\Delta E$ as a function of (a) SET, (b) instantaneous luminosity, (c) $\eta_{\rm det}$, and (d) $u_{\parallel }$ comparing full and fast MC. }
\label{fig:compare_mean_de_set_lumi_deteta_upara}
\end{figure*}
The $\Delta u_{\parallel}$ library determined in the studies of the electron cone effects also provides information about the recoil system, as discussed in Sec.~\ref{sec:recoil}. We show the dependence of the mean $\Delta u_{\parallel }$ ($\langle \Delta u_{\parallel } \rangle $) on $L$ in Fig.~\ref{fig:dupara_combo}(a) for various bins of SET. In a given bin of SET, there is almost no dependence of $\langle \Delta u_{\parallel } \rangle$ on $L$, while, for the full SET range, the strong dependence on $L$ comes only from the correlation between SET and $L$. We show the dependence of $\langle \Delta u_{\parallel } \rangle$ on $u_{\parallel }$ in Fig.~\ref{fig:dupara_combo}(b) for various bins of SET. In a given bin of SET, $\langle \Delta u_{\parallel } \rangle$ always increases with increasing $u_{\parallel }$ as the recoil gets closer to the electron cone. Our interpretation is that, at fixed SET, the soft recoil component is fixed and we can study the hard recoil contribution, which is controlled by $u_{\parallel }$. For the full SET range, the dip around $u_{\parallel} \approx 0$ occurs because, in a high pileup environment, a small $u_{\parallel}$ almost always implies small SET.
\begin{figure}[htbp]
\includegraphics[width=\linewidth]{fig33.eps}
\caption{[color online] (a) Mean $\Delta u_{\parallel }$ as a function of $L$ separately for sub-samples with different SET.
Instantaneous luminosity $L$ is given in units of $36 \times 10^{30}\,{\rm cm}^{-2}{\rm s}^{-1}$. (b) Mean $\Delta u_{\parallel }$ as a function of $u_{\parallel }$ separately for various bins of SET.}
\label{fig:dupara_combo}
\end{figure}
\subsection{Hadronic Recoil Parameterization}
\label{sec:recoil}
The hadronic recoil simulation in the fast MC uses a multi-component model that can be decomposed into:
\begin{equation}
\vec{u}_T = \vec{u}^{\rm ~HARD}_T + \vec{u}^{\rm ~SOFT}_T + \vec{u}^{\rm ~ELEC}_T + \vec{u}^{\rm ~FSR}_T,
\label{eqn:utdef}
\end{equation}
\noindent
where $\vec{u}_T^{\rm ~HARD}$ is the dominant part of the recoil balancing the vector boson, $\vec{u}_T^{\rm ~SOFT}$ describes the zero-bias and minimum-bias contribution, $\vec{u}^{\rm ~ELEC}_T$ models the hadronic energy in the electron cone and electron energy leakage out of the cone, and $\vec{u}^{\rm ~FSR}_T$ is the out-of-cone electron FSR contribution. The contribution of out-of-cone photons to the recoil transverse momentum, $\vec{u}^{\rm ~FSR}_T$, is parametrized as a function of the photon pseudorapidity and energy, derived from a dedicated full MC simulation. The third component, $\vec{u}^{\rm ~ELEC}_T$, is defined as:
\begin{equation}
\vec{u}_T^{\rm ~ELEC}= -\Delta u_{\parallel} \widehat{p_T^e} + \vec{p}_T^{\rm~LEAK},
\label{eqn:duparadef}
\end{equation}
where $\widehat{p_T^e}$ is a unit vector in the direction of $\vec{p}_T^{\,e}$ and $\Delta u_{\parallel}$ is discussed in Sec.~\ref{sec:ewindow}. The value of $\vec{p}_T^{\rm~LEAK}$, which describes the energy leakage from the electron reconstruction cone due to calorimeter shower development, is determined using single-electron full MC as an $\eta^e$-dependent fraction of $\ensuremath{p_T^e}$. Figure~\ref{fig:leakage} shows the fraction of electron showers that leak outside the reconstruction cone and the fraction of their energy that is added to the recoil system. The electron shower leakage is parametrized independently for electrons with and without in-cone FSR, since the photon shower contributes to the total energy leaked.
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.5]{fig34.eps}
\caption{(a) The fraction of electron showers that leak outside the reconstruction cone in the CC, and (b) the fraction of the electron transverse momentum that is added to the recoil system for clusters without in-cone FSR photons, both as a function of electron $\eta$. }
\label{fig:leakage}
\end{figure}
\subsubsection{Hard and Soft Recoil Models}
\label{smearing_model}
The hard recoil model is derived from a special sample of $Z\rightarrow \nu\nu$ full MC events generated with {\sc pythia} without simulation of multiple parton interactions and without overlay of zero-bias events. The generated events are processed through the full chain of the detector simulation and reconstruction. Since the neutrinos escape undetected, all the energy measured in the detector can be attributed to the recoil alone. To obtain kinematics similar to $Z\rightarrow ee$ events, both neutrinos from a $Z$ boson decay are required to have $|\eta|<1.3$.
The model simulates the magnitude (${u}_T^{\nu\nu}$) and direction ($\phi$) of the reconstructed hard recoil as a function of the negative of the generator-level transverse momentum of the vector boson, $\vec{p}_T^{\,V}$. The model is parametrized using two variables, the relative transverse momentum
\begin{equation}
R = \frac{u^{\nu\nu}_T-p_T^{V}}{p_T^{V}},
\end{equation}
and the angular resolution
\begin{equation}
\displaystyle \Delta\phi = \phi(\vec{u}^{\,\nu\nu}_T)-\phi(\vec{p}_T^{\,V}), \mbox{with}\ \
(|\Delta \phi|<\pi).
\end{equation}
\noindent
The $Z\rightarrow \nu\nu$ sample is divided into 32 bins of $p_{T}^V$. For each
bin the distribution of $R$ versus $\Delta\phi$ is smoothed to obtain a continuous probability
density $P(R,\Delta\phi)$. The smoothing function is a product of a
log-normal distribution in $R$ with a normal distribution in $\Delta\phi$. Two
examples of such probability density functions are shown in
Fig.~\ref{fig:fit:combo} for $4.5 < p_T^V < 5.0\,\text{GeV}$ and for $18 < p_T^V < 20\,\text{GeV}$.
The correlation between $R$ and $\Delta\phi$ is described by assuming that the mean of the log-normal distribution
has a linear dependence on $\Delta\phi$. The smoothing fits are shown in
Fig.~\ref{fig:fit:combo} as colored contours. From
these, the simulated $R$ and $\Delta\phi$ values for a fast MC event are chosen by randomly sampling the probability density corresponding to the
boson $p_T$.
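A sketch of this sampling step for one event is given below; the use of a shifted log-normal (so that $R$ can be negative) and all parameter names are our assumptions about the smoothed density:
\begin{verbatim}
import numpy as np

def sample_hard_recoil(pt_v, mu0, mu1, sig_r, sig_phi,
                       shift, rng):
    # Draw (R, dphi) from the smoothed density of the relevant
    # pT^V bin: dphi is normal, and R follows a shifted
    # log-normal whose mean depends linearly on dphi.
    dphi = rng.normal(0.0, sig_phi)
    r = np.exp(rng.normal(mu0 + mu1 * dphi, sig_r)) - shift
    u_t = (1.0 + r) * pt_v   # reconstructed recoil magnitude
    return u_t, dphi
\end{verbatim}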
\begin{figure}[hbtp]
\centering
\includegraphics [scale=0.4, angle=270] {fig35.eps}
\caption{[color online] The distribution of the recoil relative transverse momentum and $\Delta\phi$ resolutions for
full MC (boxes) and fit (contours) for (a) $4.5 < p_T^V < 5.0\,\text{GeV}$ and for (b) $18 < p_T^V < 20\,\text{GeV}$.}
\label{fig:fit:combo}
\end{figure}
The hard recoil model described thus far applies to full MC $Z\to\nu\nu$ events. To correct for imperfections in the simulation, additional smearing parameters are introduced and applied to the component $u^{\nu\nu}_\parallel = u^{\nu\nu}_T\cos(\Delta\phi)$ in the direction of $\vec{p}_T^{\,V}$ to give the corrected recoil denoted by $u_{\parallel}^{\rm HARD}$:
\begin{eqnarray}
u_{\parallel}^{\rm HARD}/p_T^V = (r_0+r_1 e^{-p_T^V / \tau_{\rm HAD}}) (\overline{R}(p_T^V) + 1)\nonumber \\
+ \sigma_0 (u^{\nu\nu}_\parallel/p_T^V - \overline{R}(p_T^V) - 1).
\end{eqnarray}
The perpendicular component
\begin{equation}
\nonumber u^{\rm HARD}_\perp = u^{\nu\nu}_T\sin(\Delta\phi)
\end{equation}
remains unmodified. The mean values $\overline{R}(p_T^V) = \langle (u^{\nu\nu}_\parallel - p_T^V)\big/ p_T^V \rangle$ are determined from the smoothed distributions for $(R,\Delta\phi)$. The smearing parameters $r_0$, $r_1$, $\tau_{\rm HAD}$, and $\sigma_0$ are determined as described below.
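A direct Python transcription of this correction is given below; identifiers are ours, the numerical parameter values in the example are the fitted central values of Sec.~\ref{fitting_results}, and the value of $\overline{R}$ is illustrative:
\begin{verbatim}
import numpy as np

def corrected_u_parallel(u_nn_par, pt_v, r_bar,
                         r0, r1, tau_had, sigma0):
    # Rescale the mean response and inflate the fluctuations
    # about it by sigma0. r_bar is Rbar(pT^V) from the smoothed
    # (R, dphi) distributions.
    mean_resp = ((r0 + r1 * np.exp(-pt_v / tau_had))
                 * (r_bar + 1.0))
    fluct = sigma0 * (u_nn_par / pt_v - r_bar - 1.0)
    return (mean_resp + fluct) * pt_v

# Fitted central values from the fit-results section below.
print(corrected_u_parallel(9.0, 10.0, -0.05,
                           1.047, 2.07, 2.51, 1.238))
\end{verbatim}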
The soft recoil is modeled from the measured recoils in collider data minimum-bias and zero-bias events. In addition to being selected by the minimum-bias trigger, the minimum-bias events are required to have zero or one reconstructed primary vertex. The zero-bias events are sampled to give the instantaneous luminosity distribution observed in the data. We create lists of the magnitude and direction of the recoil in the minimum-bias and zero-bias events, and for a given fast MC event, the simulated soft recoil is created by taking one $\vec{u}_T$ value from each of the minimum-bias and zero-bias lists and combining them to give the soft recoil
\begin{equation}
\vec{u}_{T}^{\rm SOFT} = \sqrt{\alpha_{\rm MB}}\ \vec{u}_T^{\rm MB} + \vec{u}_T^{\rm ZB},
\end{equation}
where $\alpha_{\rm MB}$ is a parameter that controls the soft recoil resolution.
We determine values for the five parameters $r_0$, $r_1$, $\tau_{\rm HAD}$, $\sigma_0$ and $\alpha_{\rm MB}$ by fits comparing data (or full MC) to the fast MC simulation using a method first used by the UA2 collaboration \cite{UA2}. The momentum imbalance between the $p_T$ of the dielectron system and the recoil $u_T$ in $Z\to ee$ events is projected on the bisector $\hat\eta$ of the electron and positron directions
\begin{equation}
\eta_{\text{imb}} \equiv (\vec{p}_T^{\ ee}+\vec{u}_T)\cdot\hat{\eta}
\end{equation}
as shown in Fig.~\ref{fig:eta_upara}. The bisector is chosen to reduce the dependence between the electron energy scale and the hadronic recoil, because the bisector is independent of fluctuations in the measured electron energies. The $\eta_{\text{imb}}$ distributions are made in bins of reconstructed $p_T^{ee}$ for both data (or full MC) and fast MC. The five parameters are determined by constructing separate fast MC samples with varying values of the parameters and finding the parameter values that minimize the $\chi^2$ difference between the mean (as functions of $r_0$, $r_1$ and $\tau_{\rm HAD}$) and RMS (as functions of $\sigma_0$ and $\alpha_{\rm MB}$) of $\eta_{\text{imb}}$ for data and fast MC distributions. The fits using the mean and the RMS are performed independently.
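The bisector projection is compactly expressed as follows (a sketch with 2-component transverse vectors; identifiers are ours):
\begin{verbatim}
import numpy as np

def eta_imbalance(pt_e1, pt_e2, u_t):
    # Project the momentum imbalance pT^ee + uT on the bisector
    # of the two electron directions in the transverse plane.
    eta_hat = (pt_e1 / np.linalg.norm(pt_e1)
               + pt_e2 / np.linalg.norm(pt_e2))
    eta_hat /= np.linalg.norm(eta_hat)
    return float(np.dot(pt_e1 + pt_e2 + u_t, eta_hat))

# A perfectly balanced toy event gives zero imbalance.
e1 = np.array([30.0, 5.0])
e2 = np.array([-20.0, 8.0])
print(eta_imbalance(e1, e2, -(e1 + e2)))
\end{verbatim}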
\subsubsection{Fit Results}
\label{fitting_results}
The results from the minimization of the mean $\eta_{\text{imb}}$ as a function of $p_T^{ee}$ for collider data are
\begin{eqnarray}\nonumber
r_0 &=&1.047\pm 0.008,\\\nonumber
r_1 &=&2.07 \pm 0.39,\\\nonumber
\tau_{\rm HAD} &=&2.51\pm 0.32\ {\rm GeV},
\end{eqnarray}
and the results from the minimization of the RMS are
\begin{eqnarray}\nonumber
\sigma_0 &=&1.238 \pm 0.040,\\\nonumber
\alpha_{\rm MB} &=&0.633 \pm 0.064.
\end{eqnarray}
The corresponding two correlation matrices are:
\begin{equation}\nonumber
\bordermatrix{
& r_0 & r_1 & \tau_{\rm HAD}\cr
r_0 & 1 & 0.30 & -0.49 \cr
r_1 & 0.30 & 1 & -0.90 \cr
\tau_{\rm HAD} & -0.49 & -0.90 & 1},
\end{equation}
and
\begin{equation}\nonumber
\bordermatrix{
& \sigma_0 & \alpha_{\rm MB}\cr
\sigma_0 & 1 & -0.68 \cr
\alpha_{\rm MB} & -0.68 & 1 }.
\end{equation}
\noindent Figure~\ref{fig:imbal.combo} shows the comparison of the mean and the width of the $\eta_{\text{imb}}$ momentum imbalance distributions between data and fast MC for the ten different $p_{T}^{ee}$ bins. The quantity $\chi$ is defined as the difference between data and fast MC divided by the uncertainty in the data for each bin.
\begin{figure*}[htbp]
\includegraphics [width=0.8\linewidth] {fig36.eps}
\caption{[color online] Data and fast MC comparison of the (a) mean and (c) width of the $\eta_{\text{imb}}$ for the ten different bins in $p_{T}^{Z}$. The $\chi$ value per $p_{T}^{Z}$ bin for the (b) mean and (d) width of the $\eta_{\text{imb}}$.}
\label{fig:imbal.combo}
\end{figure*}
\subsubsection{Recoil Modeling Systematic Uncertainties}
\label{sec:recoiluncertainty}
The size of the $\ensuremath{Z \rightarrow ee}$ sample determines the statistical precision of the five smearing parameters. We use pseudo-experiments, as described in Sec.~\ref{sec:syst}, to propagate their uncertainties to the measured $M_W$ and determine the recoil modeling systematic uncertainty. We find uncertainties of $5$~MeV, $6$~MeV and $14$~MeV for the $m_T$, $p_T^e$ and $\ensuremath{{\slash\kern-.7emE}_{T}}$ results.
\section{Measurement Strategy}
\label{sec:strat}
\subsection{Conventions}
A momentum vector $\vec{p}$, in the D0 standard coordinate system, is represented using a right-handed Cartesian coordinate system, $p_x$, $p_y$, $p_z$ where ${\hat z}$ is the direction of the proton beam and ${\hat y}$ points upward. It is convenient to use a cylindrical coordinate system in which the same vector is given by the magnitude of its components perpendicular to the beam (transverse momentum) $p_T$, its azimuthal angle $\phi$, and $p_z$. In spherical coordinates, the polar angle $\theta$ is sometimes replaced by the pseudorapidity $\eta\,=\,-\ln\tan[\theta/2]$. In this paper, by {\it electron} we mean electron or positron unless specifically noted. When referring to instrumental effects, it is sometimes convenient to define $\eta_{\rm det}$ as the pseudorapidity of the particle determined as if it had been produced at the center of the calorimeter.
\subsection{{\boldmath $W$} and {\boldmath $Z$} Boson Production and Decay}
$W$ and $Z$ bosons are produced at the Tevatron predominantly through valence quark-antiquark annihilation with a smaller contribution involving the sea. Gluons may be radiated from quarks in the initial state. These gluons usually have lower transverse momentum than the boson (soft gluons) but could be energetic enough to give rise to hadron jets. Consequently, the transverse momentum of the boson is typically small compared to its mass, but has a long tail extending to large $p_T$ associated with events having jets. Spectator partons in the proton and antiproton, which remain after the hard annihilation, hadronize into low-$p_T$ hadrons. Since the transverse momentum vectors of the initial proton and antiproton are zero, the sum of the transverse momenta of the recoiling particles must balance the transverse momentum of the boson.
We measure the decays of the $W$ boson in the electron channel \ensuremath{W \rightarrow e \nu}\ and at the same time measure $Z\to ee$ decays that provide an important calibration sample. The size of the $\ensuremath{Z \rightarrow ee}$ sample is limited by its relatively small branching fraction $\text{BR}(\ensuremath{Z \rightarrow ee})/\text{BR}(\ensuremath{W \rightarrow e \nu}) = 0.31$. The electrons typically have transverse momenta of about half the mass of the decaying boson and are well-isolated in the calorimeter. Isolated high-$p_T$ electrons are dominantly produced by $W$ and $Z$ decays and allow us to select a clean sample of $W$ and $Z$ boson events. The D0 calorimeter (Sec.~\ref{sec:det}) is well-suited for a precise measurement of electron energies, providing a single-electron energy resolution of 4.5\%, determined using the angular and energy spectra of electrons from $W$ boson decay averaged over the electron geometric acceptance and momentum distribution in this analysis. The small tracking volume of the D0 detector limits the momentum resolution of tracks, therefore we do not use $W\to \mu\nu$ decays in this measurement.
\subsection{Event Characteristics}
\label{sec:eventcharacteristics}
For the process $\ensuremath{p\overline{p}} \rightarrow W + X \rightarrow e + \nu + X$, we select the electron by requiring $|\eta_{\rm det}|<1.05$ and use all other particles detected up to $|\eta_{\rm det}| \lesssim 4.2$ for the hadronic recoil measurement. We cannot detect recoil particles with $|\eta_{\rm det}| \gtrsim 4.2$, but their transverse momenta are generally small and can be neglected in the recoil system transverse momentum, $\ensuremath{\vec{u}_T}$.
A candidate $W$ boson event is characterized by a measurement of the electron momentum $\vec{p}^{\,e}$ and $\ensuremath{\vec{u}_T}$. The neutrino escapes undetected but the magnitude and direction of its transverse momentum are inferred from the event's missing transverse energy, $\,\ensuremath{\vec{\slash\kern-.7emE}_{T}} \equiv -(\vec{p}^{\,e}_T + \ensuremath{\vec{u}_T})$. The signature of a $W \rightarrow e \nu$ decay is therefore an isolated high-$p_{T}$ electron and large $\ensuremath{{\slash\kern-.7emE}_{T}}$.
The signature of $Z \rightarrow ee$ decays consists of two isolated high-$p_{T}$ electrons. In a manner similar to candidate $W$ boson events, a candidate $Z$ boson event is characterized by a measurement of the two electron momenta and $\ensuremath{\vec{u}_T}$.
\subsection{Mass Measurement Strategy}
\label{sec:measurementstrategy}
Since we cannot reconstruct the longitudinal component of the neutrino momentum, we must resort to variables different from the invariant mass. To measure the $W$ boson mass, we use the following three kinematic variables: the $W$ boson transverse mass, \ensuremath{m_T}, the electron transverse momentum, $p_T^{e}$, and the neutrino transverse momentum, $p_T^{\nu}$ ($\ensuremath{{\slash\kern-.7emE}_{T}}$).
The $W$ boson transverse mass is calculated with the formula
\begin{equation}
\ensuremath{m_T} = \sqrt{2{p_T^e}p_T^{\nu}(1-\cos(\phi^e-\phi^{\nu}))},
\end{equation}
where $\phi^e$ and $\phi^{\nu}$ are the azimuthal angles of $\vec{p}^{\,e}_T$ and $\ensuremath{\vec{\slash\kern-.7emE}_{T}}$, respectively.
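For concreteness, the formula translates directly into code (a sketch; variable names are ours):
\begin{verbatim}
import numpy as np

def transverse_mass(pt_e, pt_nu, phi_e, phi_nu):
    """W-boson transverse mass from the electron pT, the missing ET,
    and their azimuthal angles."""
    return np.sqrt(2.0 * pt_e * pt_nu * (1.0 - np.cos(phi_e - phi_nu)))
\end{verbatim}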
The $\ensuremath{m_T}$ and $p_T^e$ measurements provide a powerful cross-check because of their different systematic uncertainties. The shape of the $\ensuremath{m_T}$ distribution is dominated by the detector resolution (mainly the resolution due to the recoil system energy measurement), while the $\ensuremath{p_T^e}$ spectrum is affected by the transverse momentum of the $W$ boson, as well as by the recoil system and initial state radiation transverse momenta. This is illustrated in Fig.~\ref{fig:shape}.
\begin{figure}[hbpt]
\includegraphics[width=\linewidth]{fig01a.eps}
\includegraphics[width=\linewidth]{fig01b.eps}
\caption{[color online] The (a) $p_{T}^{e}$ and (b) $\ensuremath{m_T}$ spectra for simulated $W$ bosons without detector resolution effects and $W$ boson transverse momentum $p_{T}^{W}=0$ (solid line), with the natural $p_{T}^{W}$ spectrum at the Tevatron (shaded area), and with the natural $p_{T}^{W}$ distribution and all detector resolution effects included (points). All curves are normalized to unit area.\label{fig:shape}}
\end{figure}
The $p_T^{\nu}$ measurement is sensitive to the same systematic uncertainties as both \ensuremath{m_T}~and $p_T^e$ and has poorer experimental resolution, but this measurement is still useful for a cross-check. These measurements are not fully correlated so we can combine them to improve precision.
The shapes of the distributions of these variables cannot be calculated analytically because of the various complex detector acceptance and resolution effects. The measurement of $M_W$ is obtained by a comparison of the spectra of the three different measurement variables with templates generated from a highly detailed Monte Carlo (MC) simulation with a series of $M_W$ hypotheses. This requires high statistics templates ($\approx\!10^9$ events) to characterize the different systematic uncertainties while ensuring that statistical fluctuations from the MC simulation are negligible. The detailed detector simulation (full MC) is too slow to generate many samples of this size, and it also does not reproduce the detector performance well enough to measure $M_W$ precisely. To generate appropriate templates, a parametrized MC simulation (fast MC) has been developed to generate large samples on a reasonable time scale and to provide a detailed description of the detector performance. Here, $\ensuremath{Z \rightarrow ee}$ events are used to determine the parameters, since both electrons from $Z$ boson decays are well-measured by the calorimeter and the $Z$ boson properties are well known. This allows a determination of the fast MC parameters, including details of the hadronic recoil system, from the data itself. Since the $Z$ boson mass is known with high precision~\cite{LEPZ_1, LEPZ_2, LEPZ_3, LEPZ_4}, its value can be used to calibrate the energy scale of the electromagnetic (EM) part of the calorimeter. Care must be taken to ensure that the calibrations using the $Z$ boson are valid at the lower average energy of the electrons from $W$ boson decay. Once this has been established, the $M_W$ measurement is effectively a measurement of the ratio of $W$ boson and $Z$ boson masses.
A binned log-likelihood comparing collider data and simulated event distributions (a template) is computed for each of the $\ensuremath{m_T}$, $\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ observables. The log-likelihoods are calculated using the Poisson probability for bin $i$ with $m_i$ expected events from the template to have $n_i$ observed events from the data distribution. The total log-likelihood is formed from the sum over all bins:
\begin{equation}
-\ln{\mathcal{L}} =\sum_{i=1}^N \left[- n_i \ln{m_i} + m_i + \ln(n_i!) \right].
\label{eq:ll}
\end{equation}
Templates are generated for different hypothetical $M_W$ values in $10\,\text{MeV}$ intervals. This procedure gives a mass-dependent likelihood for each of the $m_T$, $p_T^e$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ distributions. We then measure $M_W$ using \textsc{Minuit}~\cite{minuit} by finding the $M_W$ value that maximizes the mass-dependent likelihood. The determination is performed separately for each of the three observables after the likelihood distribution is interpolated between the values at each discrete input mass.
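A schematic version of this binned likelihood scan is sketched below; the parabolic interpolation merely stands in for the \textsc{Minuit}-based interpolation actually used, and all names and structure are illustrative:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln  # ln(n!) = gammaln(n + 1)

def neg_log_likelihood(n_obs, m_template):
    """Binned Poisson -ln L between data counts and a template."""
    m = np.clip(m_template, 1e-12, None)  # guard against empty bins
    return np.sum(-n_obs * np.log(m) + m + gammaln(n_obs + 1.0))

def fit_mass(n_obs, templates, mass_grid):
    """templates[k] is the spectrum predicted for mass_grid[k]
    (hypotheses spaced by 10 MeV); assumes an interior minimum."""
    nll = np.array([neg_log_likelihood(n_obs, t) for t in templates])
    k = np.argmin(nll)
    # Parabolic interpolation around the discrete minimum.
    a, b, _c = np.polyfit(mass_grid[k-1:k+2], nll[k-1:k+2], 2)
    return -b / (2.0 * a)
\end{verbatim}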
Given the precision achievable in this analysis, we use a technique to avoid any bias which could arise from the knowledge of the current world average. To eliminate such bias, a blinded analysis procedure has been developed. The code that provides the template fits uses an unknown but recoverable offset in an interval of $[-2,\ 2]~{\rm GeV}$ around the $M_W$ value with which the templates have been generated. It therefore reports true differences between different mass fits, allowing systematic studies, while keeping the measured $M_W$ value unknown. The same offset is applied to the result of the fit to $\ensuremath{m_T}$, $\ensuremath{p_T^e}$, and $\ensuremath{{\slash\kern-.7emE}_{T}}$ so that the relative agreement between the three observables is known before unblinding. Hence, the $M_W$ measurement reported in~\cite{OurNewPRL} and described here was reviewed and approved by the D0 Collaboration based on the studies performed before the resulting value for $M_W$ was known.
\subsection{Systematic Uncertainties\label{sec:syst}}
The systematic uncertainties are determined using a large ensemble of pseudo-experiments simulated with the fast MC. Pseudo-experiments are generated in which a given parameter is varied independently in steps of multiples of $\pm 0.5 \sigma$ (where $\sigma$ is the one standard deviation uncertainty for the parameter under study) while holding all other parameters constant. For each variation, $M_W$ is determined using the standard fit comparing the distribution for each pseudo-experiment to that of the unmodified template(s). This yields a value $M_{W_i}$ for each variation $\delta_i$. The set of $(\delta_i,\ M_{W_i})$ pairs is fitted to a straight line. For all systematic uncertainties in this measurement, we verify that the linear regime assumed by this procedure is a valid approximation. The slope of the line determines the systematic uncertainty in $M_W$ as in the usual error propagation:
\begin{equation}
\sigma_{M_W}^2(X) = {\left(\frac{\partial M_W}{\partial X}\right)}^2 \sigma_X^2,
\label{eq:mwapprox}
\end{equation}
where $\sigma_X$ is the uncertainty in the determination of the parameter $X$ in the simulation.
This equation does not include correlations. In many cases we can safely assume that the parameters are uncorrelated. When this is not true, the correlations are taken into account by diagonalizing the covariance matrix prior to the propagation of the uncertainty to the $M_W$. The diagonalization defines uncorrelated parameters and uncertainties. The above procedure is applied to the uncorrelated uncertainties to determine the uncertainties on the measured $W$ boson mass.
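For a single uncorrelated parameter, the propagation then amounts to a one-dimensional straight-line fit, as in this sketch (illustrative only):
\begin{verbatim}
import numpy as np

def systematic_uncertainty(deltas, mw_values, sigma_x):
    """Slope of M_W versus the parameter variation, times sigma_X.

    deltas    : variations delta_i applied to the parameter
    mw_values : M_W fitted for each pseudo-experiment variation
    sigma_x   : one-standard-deviation uncertainty of the parameter
    """
    slope, _intercept = np.polyfit(deltas, mw_values, 1)  # dM_W/dX
    return abs(slope) * sigma_x
\end{verbatim}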
\subsection{Additional Kinematic Variables}
\label{ssec:kinematics}
In $\ensuremath{Z \rightarrow ee}$ decays, the di-electron momentum is given by \mbox{$\vec{p}_{ee}=\vec{p}^{\,e_1}+\vec{p}^{\,e_2}$} and the di-electron invariant mass is \mbox{$m_{ee}=\sqrt{2E^{e_1}E^{e_2}(1-\cos\omega)}$}, where $\omega$ is the opening angle between the two electrons. When tuning the simulation and making comparisons with $\ensuremath{Z \rightarrow ee}$ data, it is useful to define a coordinate system, first introduced by the UA2 experiment~\cite{UA2}, in the plane transverse to the beams that depends only on the electron directions, not on electron energies. We call the axis along the inner bisector of the two electrons the $\eta$ axis and the axis perpendicular to that, in the $(\vec{p}^{\,e_1}, \vec{p}^{\,e_2})$ plane, the $\xi$ axis. Figure~\ref{fig:eta_upara}(a) illustrates these definitions.
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=0.3\textwidth]{fig02a.eps}
\includegraphics[width=0.3\textwidth]{fig02b.eps}
\end{center}
\caption{[color online] (a) Definition of $\eta$ and $\xi$ axes for $\ensuremath{Z \rightarrow ee}$ events. (b)
Definition of $\ensuremath{u_{\parallel}}$ and $\ensuremath{u_{\perp}}$. The variable $\ensuremath{u_{\parallel}}$ is negative when opposite to the electron direction.}
\label{fig:eta_upara}
\end{figure}
For $\ensuremath{W \rightarrow e \nu}$ decays, useful quantities are the projection of the recoil system transverse momentum onto the electron direction:
\begin{equation}
\ensuremath{u_{\parallel}} = \vec{u}_T \cdot \hat{p}_T^e,
\end{equation}
and the projection on the direction perpendicular to the electron:
\begin{equation}
\ensuremath{u_{\perp}} = \vec{u}_T \cdot (\hat{p}_T^e \times \hat{z}),
\end{equation}
where $\hat{z}$ is a unit vector in the $z$ direction. Figure~\ref{fig:eta_upara}(b) illustrates these definitions for $W$ boson events, but the definitions also apply to each electron from $Z \to ee$ events.
The two variables $u_{\parallel}$ and $\ensuremath{u_{\perp}}$ are useful to study the correlation between the recoil system and the electron direction. Another variable, the scalar sum of all transverse energies (SET) measured by the calorimeter except those energies associated with electrons or with potential noise, reflects the total hadronic activity in the calorimeter.
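These projections are again elementary to compute; a single-event sketch (our conventions, not the analysis code):
\begin{verbatim}
import numpy as np

def recoil_projections(u_t, pt_e):
    """u_par and u_perp of the recoil with respect to the electron.

    u_t, pt_e: 2D transverse vectors, np.array([px, py]).
    """
    e_hat = pt_e / np.linalg.norm(pt_e)
    u_par = np.dot(u_t, e_hat)
    # In 2D, e_hat x z_hat reduces to the vector (e_y, -e_x).
    u_perp = u_t[0] * e_hat[1] - u_t[1] * e_hat[0]
    return u_par, u_perp
\end{verbatim}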
\section{INTRODUCTION}
High-speed, autonomous navigation is predicated on the ability to reason about the environment for effective, collision-free path planning. Existing approaches operate on current sensor readings to update an occupancy map corresponding to an internal representation of where obstacles exist in the environment. These occupancy maps are then used by planning algorithms to generate a collision-free path to a target goal. One of the limitations of this approach is that the planning horizon is limited to the field of view (FOV) of the sensor.
On the other hand, behavioral neuroscience and biological psychology point to the potential role of prediction for navigation in animals and humans. In particular, the hippocampus appears to exhibit neuronal structures as well as firing sequences that could support not only mapping but also predictive mapping capabilities~\cite{buckner2010role}.
Indeed, humans continuously make predictions of what to expect based on past experiences. This allows us to adjust our control policy in real time depending on how closely our observations match our predictions~\cite{doi:10.1152/jn.00036.2017}. The advantage is most evident while running along a hallway approaching a T-intersection. Even though we cannot see the left or right paths, we generally assume the straight lines will continue and we can predict how the hallway will appear as we turn the corner. Because of this prediction, we do not adjust our running speed unless our prediction is inaccurate. Following this intuition, we believe future predictions of occupancy maps can enable risk-sensitive control policies for mobile and aerial vehicles. By being able to predict occupancy maps, we can enable faster navigation as the planning horizon can extend beyond the sensor's limited FOV.
This concept is similar to image completion, a problem for which multiple solutions have been suggested in the past~\cite{Bertalmio:2000:II:344779.344972,Chan02mathematicalmodels}. We take an alternate approach, leveraging the fact that structural information from the observed geometry of the world can help us make useful predictions about the environment. Deep neural networks have significant advantages over other approaches when used for image completion or image generation~\cite{DBLP:journals/corr/YehCLHD16}. One of the most promising innovations in deep learning has been the development of autoencoder networks and generative adversarial networks (GANs)~\cite{NIPS2014_5423}. These networks are trained via a minimax game between opposing generative and discriminative networks, and are capable of encoding a latent representation of images that can be used to generate new examples from latent space.
\begin{figure}[t]
\centering
\includegraphics[width=.95\columnwidth]{images/sample_pred.png}
\caption{Two samples of predicted images. The left image is the input based on sensor readings, the middle image is the predicted expanded occupancy map (1.50x), and the right image is the ground truth.}
\label{fig:sample_pred}
\end{figure}
In this paper, we demonstrate the ability to generate future predictions of occupancy maps without an explicit model using a variety of neural network architectures, with two examples shown in Fig.~\ref{fig:sample_pred}.
The main contributions of this work include:
\begin{itemize}
\item A dataset consisting of simulated and physical occupancy maps that can be used to train and validate neural networks for predicting occupancy maps,
\item A framework to evaluate the performance and accuracy of different neural network architectures,
\item Qualitative and quantitative analysis of the prediction capabilities, performance and accuracy of various network architectures,
\item Validation of our approach using occupancy maps generated by a physical LIDAR sensor.
\end{itemize}
\section{RELATED WORK}
\subsection*{Model Predictive Control}
High-speed navigation has been an active area of research primarily focusing on trajectory optimization, path planning and state estimation. Several papers have investigated model predictive control (MPC) techniques for navigation, including~\cite{5152468,6580446}; however, these approaches typically model the vehicle dynamics to predict vehicle motion, not the environment.
\subsection*{Deep Learning for Generative Models}
Deep neural networks have been used in a number of promising ways to achieve high performance in domains such as vision, speech and more recently in robotics manipulation~\cite{finn2017deep,levine2016learning}.
Oh et al. used feedforward and recurrent neural networks to perform action-conditional video prediction using Atari games with promising results~\cite{NIPS2015_5859}. These have also been used in image completion, e.g., by Ulyanov et al.~\cite{DBLP:journals/corr/abs-1711-10925}. In addition, GANs have demonstrated a promising method for image generation~\cite{NIPS2014_5423}. Isola et al. proposed an approach for training conditional GANs which create one image from another image~\cite{pix2pix2017}.
\subsection*{Deep Learning for Navigation}
More recently, several papers have described approaches to combine elements of deep neural networks with autonomous navigation. These include using deep neural networks for model predictive control~\cite{finn2017deep}. Tamar et al. proposed Value Iteration Networks, which embed a planner inside a deep neural net architecture~\cite{tamar2016value}. Several papers investigate the use of deep reinforcement learning to develop collision-free planning without the need for an internal map; however, these are still restricted by the sensor's FOV~\cite{DBLP:journals/corr/TaiPL17, DBLP:journals/corr/abs-1709-10489}.
While each of these papers makes promising contributions to its respective field, none of the prior works uses neural networks, and in particular GANs, to generate predictions of future occupancy maps, nor do they focus on extending the planning horizon beyond the sensor's FOV.
\section{PROPOSED ARCHITECTURES}
The goal of our network architecture is to learn a function that maps an input occupancy map to an expanded occupancy map that extends beyond the FOV of the sensor. More formally, we are learning the function
\[
f : x \rightarrow y_i
\]
\noindent where \begin{math}x\end{math} represents the state, in this case the input occupancy map as an image, \begin{math}y_i\end{math} represents the output occupancy map, and \begin{math}i\in\mathbb{R}\end{math} represents the percent increase of the expanded occupancy map. Components of the function $f$ include an encoding function $f_{enc}(x)\rightarrow h\in \mathcal{H}$, which maps the state space (input occupancy maps) to a hidden state, and a decoding function $f_{dec}(h) \rightarrow y_i$, which maps the hidden state to an expanded, predicted occupancy map.
In our experiments, we compare several different neural network architectures including:
\noindent
(A) a feedforward network based on a U-Net architecture (\textbf{unet\_ff}) \\
(B) a feedforward network based on the ResNet architecture (\textbf{resnet\_ff}) \\
(C) a GAN using the feedforward network from (A) as the generative network (\textbf{gan})\\
A 5-layer multilayer perceptron was also evaluated as a baseline, however in our testing, it did not produce reliable predictions.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{images/simulated_maps.pdf}
\caption{Simulated ground truth maps with white representing free space, black is occupied, and gray is unknown. Four trajectories were used for training (depicted in solid red), and two paths were used as the test set (dotted black).}
\label{fig:simulated_paths}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=.95\textwidth]{images/good.png}
\caption{This figure describes the input data, the predicted images and the ground truth for each of the neural networks evaluated on the simulated dataset across an expanding prediction window from 1.10x increase to 1.70x increase.}
\label{fig:good_sim_images}
\end{figure*}
\subsection{U-Net Feedforward Model}\label{unet}
The U-Net feedforward model is based on the network architecture defined by Ronneberger et al.~\cite{DBLP:journals/corr/RonnebergerFB15} and consists of skip connections which allow a direct connection between layers \begin{math}i\end{math} and \begin{math}n-i\end{math}, enabling the network to bypass the bottleneck associated with the downsampling layers in order to perform an identity operation. Similar to~\cite{pix2pix2017}, the encoder network consists of 8 convolution, batch normalization and ReLU layers where each convolution consists of a $4 \times 4$ filter and stride length of 2. The number of filters for the 8 layers in the encoder network are: (64, 128, 256, 512, 512, 512, 512, 512). The decoder network consists of 8 upsampling layers with the following number of filters: (512, 1024, 1024, 1024, 1024, 512, 256, 128).
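A minimal PyTorch sketch of this skip-connected encoder-decoder pattern, reduced to three levels for brevity (the actual network uses the eight-level filter counts listed above, plus dropout in the decoder), is given below:
\begin{verbatim}
import torch
import torch.nn as nn

def down(c_in, c_out):
    # 4x4 convolution with stride 2 halves the spatial resolution.
    return nn.Sequential(nn.Conv2d(c_in, c_out, 4, 2, 1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

def up(c_in, c_out):
    # Transposed convolution doubles the spatial resolution.
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 4, 2, 1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Three-level toy version of the U-Net feedforward model."""
    def __init__(self):
        super().__init__()
        self.d1, self.d2, self.d3 = down(1, 64), down(64, 128), down(128, 256)
        self.u3, self.u2 = up(256, 128), up(256, 64)  # inputs include skips
        self.u1 = nn.ConvTranspose2d(128, 1, 4, 2, 1)

    def forward(self, x):
        e1 = self.d1(x)
        e2 = self.d2(e1)
        e3 = self.d3(e2)
        y = self.u3(e3)
        y = self.u2(torch.cat([y, e2], dim=1))  # skip connection
        return torch.sigmoid(self.u1(torch.cat([y, e1], dim=1)))
\end{verbatim}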
\subsection{ResNet Feedforward Model}
The ResNet feedforward model is based on the work by Johnson et al.~\cite{DBLP:journals/corr/JohnsonAL16}, which consists of 2 convolution layers with stride 2, 9 residual blocks as defined by~\cite{DBLP:journals/corr/HeZRS15}, and two deconvolution layers with a stride of \( \frac{1}{2} \). A key reason for selecting this network is its ability to learn identity functions, which is central to image translation, as well as its success in image-to-image translation demonstrated by the CycleGAN network~\cite{DBLP:journals/corr/ZhuPIE17}.
\subsection{GAN Model}
The GAN network is based on the pix2pix architecture~\cite{pix2pix2017}, which has demonstrated impressive results in general-purpose image translation, including generating street scenes, building facades, and maps from aerial images. This network uses the U-Net feedforward model defined in Section~\ref{unet} and consists of a 6-layer discriminator network with filter sizes: (64, 128, 256, 512, 512, 512).
\section{SIMULATED DATA EXPERIMENTS}
Our approach to testing occupancy map prediction using the networks defined above
first involved generating a dataset and then performing qualitative and quantitative analysis of the predicted images compared to the ground truth.
\subsection{Data Collection}
A dataset of approximately 6000 images of occupancy map subsets was created by simulating a non-holonomic robot moving through a two-dimensional map with a planar LIDAR sensor in C++ with ROS and the OctoMap library \cite{hornung13auro}. Two maps, shown in Fig.~\ref{fig:simulated_paths}, were created in Solidworks with the path width varying between 3.5\,m to 10\,m. These were converted into OctoMap's binary tree format using binvox \cite{binvox, nooruddin03} followed by OctoMap's binvox2bt tool. The result is an occupancy map with all unoccupied space set as free. We require space outside of the walls, shown as grey in Fig.~\ref{fig:simulated_paths}, to be marked as unknown to provide a ground truth for our estimated maps. These ground truth maps were created by fully exploring the original occupancy maps.
The robot is modeled as a Dubins car, with a state vector $\mathbf{x} = [x, y, \theta]$ and inputs $\mathbf{u} = [v, \dot{\theta}]$ where ($x,y$) is the robot's position, $v$ is the velocity, and $\theta$ and $\dot{\theta}$ are the heading angle and angular velocity, respectively. For simplicity, the robot is constrained to move at a fixed forward velocity of 0.5\,m/s. A planar LIDAR sensor with a scanning area of 270$^\circ$ and range of 20\,m is used to simulate returns given the robot's current pose against the ground truth map. These simulated returns are used to create the ``estimated'' occupancy map. Path planning is done with nonlinear model-predictive control and direct transcription at 10\,Hz. At each time step, a subset of the maps (both the estimated and ground truth) is saved. A 5\,m by 5\,m square centered around the robot's pose was chosen with a resolution of 0.05\,m. At each time step, the robot's current state and action space are also logged. Occupancy maps are expanded over time, so our simulation performs a continuous trajectory and the data set is built consecutively instead of randomly sampling throughout a map. A total of six trajectories were simulated. Four paths were used for training data (5221 images) and two were used as a test set (1090 images). Ground truth datasets of the expanded occupancy maps were also generated. These expanded occupancy maps range from 1.10x to 2.00x expansion in increments of 0.10x, e.g., a 2.00x expansion results in a 10\,m by 10\,m square subset centered around the robot.
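The vehicle kinematics are simple enough to state explicitly; one Euler integration step might look as follows (the actual simulator is written in C++/ROS, so this is illustrative only):
\begin{verbatim}
import numpy as np

def dubins_step(state, u, dt):
    """One Euler step of the Dubins-car model.

    state = [x, y, theta]; u = [v, theta_dot], with v fixed at
    0.5 m/s in the data-collection runs.
    """
    x, y, theta = state
    v, theta_dot = u
    return np.array([x + v * np.cos(theta) * dt,
                     y + v * np.sin(theta) * dt,
                     theta + theta_dot * dt])
\end{verbatim}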
\subsection{Training Details}
We trained each variant of the neural network using the expanded ground truth occupancy maps from scratch for 200 epochs with a batch size of 1. A total of 15 training sessions were performed to evaluate each of the three neural network architectures across five expansion increases (1.10x, 1.30x, 1.50x, 1.70x, and 2.00x).
We use the Adam optimizer with an initial learning rate of 0.0002 and momentum parameters \begin{math}\beta_1 = 0.5, \beta_2 = 0.999\end{math}. In the feedforward models, L1 loss was used as proposed in PatchGan~\cite{pix2pix2017} and in the GAN model L1+discriminator loss was used. The decoder layers of the network used a dropout rate of 0.50 and weights were initialized from a Normal distribution ($\mu=0, \sigma=0.2$). All models were implemented using PyTorch~\cite{paszke2017automatic}.
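A condensed sketch of the training loop for the feedforward models is shown below (the data loader and device handling are placeholders; the GAN variant adds the discriminator loss):
\begin{verbatim}
import torch
import torch.nn as nn

def train(model, loader, device, epochs=200):
    """loader yields (input_map, expanded_ground_truth) pairs,
    batch size 1 as in the paper."""
    opt = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.5, 0.999))
    l1 = nn.L1Loss()
    model.to(device).train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = l1(model(x), y)  # feedforward models: pure L1 loss
            loss.backward()
            opt.step()
\end{verbatim}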
\begin{figure}[t]
\centering
\includegraphics[width=.75\columnwidth]{images/turtlebot_map_hardware.png}
\caption{(a) TurtleBot2 robot used in the physical experiments. The robot has a top-mounted LIDAR sensor used to compute occupancy maps for navigation in unknown environments. (b) Occupancy map created with the hardware from (a). Red arrows show the robot's path.}
\label{fig:turtlebot_map}
\end{figure}
\subsection{Simulation Results}
We evaluated the performance of each neural network architecture across a span of five increasing occupancy map predictions. Fig.~\ref{fig:good_sim_images} provides a snapshot of the qualitative assessment of the predicted images for each of the neural networks. This example was selected because it demonstrates that even with very little information, the U-Net feedforward model was able to accurately predict the presence of the surrounding obstacles while the other networks were unable to detect them. Table~\ref{table:sa} provides the structural similarity index metric (SSIM) for each of the networks. Based on the SSIM metric, it can be seen that the U-Net feedforward model outperforms the other networks at 1.10x and 1.30x expansion, confirming the qualitative assessment. The quality of the prediction generally decreases as the expansion percentage increases, and with expansions of 1.50x and above the three networks achieve similar performance.
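The SSIM values in Table~\ref{table:sa} can be reproduced schematically with an off-the-shelf implementation (the exact windowing settings used in the paper are not specified, so this is a sketch):
\begin{verbatim}
import numpy as np
from skimage.metrics import structural_similarity

def ssim_score(pred, truth):
    """SSIM between predicted and ground-truth occupancy maps,
    given as 2D float arrays scaled to [0, 1]."""
    return structural_similarity(pred, truth, data_range=1.0)
\end{verbatim}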
\begin{table}[h]
\centering
\caption{SSIM Analysis for Simulation Data}
\label{table:sa}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Expansion} & \textbf{unet\_ff} & \textbf{resnet\_ff} & \textbf{gan} \\ [0.05ex]
\hline\hline
1.10x & 0.899 & 0.861 & 0.818\\
\hline
1.30x & 0.818 & 0.780 & 0.790 \\
\hline
1.50x & 0.770 & 0.773 & 0.759 \\
\hline
1.70x & 0.760 & 0.752 & 0.736 \\
\hline
2.00x & 0.767 & 0.770 & 0.574 \\
\hline
\end{tabular}
\end{table}
\section{PHYSICAL EXPERIMENTS}
\begin{figure*}[t]
\centering
\includegraphics[width=.95\textwidth]{images/good_phys.png}
\caption{This figure describes the input data, the predicted images and the ground truth for each of the neural networks evaluated on the physical dataset across an expanding prediction window from 1.10x increase to 1.70x increase.}
\label{fig:good_phys_images}
\end{figure*}
Our next experiment focused on validating our approach with occupancy maps generated by a physical LIDAR sensor.
\subsection{Data Collection and Training Details}
In this experiment, we teleoperated a TurtleBot2 robot with a mounted Hokuyo UST-20LX LIDAR sensor (shown in Fig.~\ref{fig:turtlebot_map}(a)) around a building. The OctoMap library \cite{hornung13auro} along with a custom C++ implementation of a particle filter running at 20\,Hz was used for simultaneous localization and mapping. The final map was used as ground truth (shown in Fig.~\ref{fig:turtlebot_map}(b)). At each time step, a 5\,m by 5\,m square subset of both the ground truth and estimated maps, centered around the robot's current pose, was saved (100 images). Expanded ground truth occupancy maps were generated ranging from 1.10x to 2.00x in 0.10x increments.
Our objective was to evaluate whether training performed on a simulated dataset could be directly transferred to occupancy maps generated by a physical LIDAR sensor. For this reason, we opted not to fine-tune the networks using the physical dataset.
\subsection{Physical Results}
Fig.~\ref{fig:good_phys_images} represents sample predictions obtained by running the networks trained using simulation data on the occupancy maps generated by the physical sensor. Table~\ref{table:sa_phys} displays the SSIM metric across each of the networks. In the physical experiments, the results are less conclusive. As in the simulation experiments, the quality of the predictions generally decreases as the predicted distance increases; however, there was no noticeable difference across the three networks.
\section{DISCUSSION}
The ability to perform predictions is key to navigation. This capability is also motivated from the perspective of behavioral neuroscience and psychology. In particular, it has been found~\cite{buckner2010role} that certain neuronal structures point to mapping capabilities and may be involved in encoding predictive mapping events based on past experience. The net result is that neurons do not activate solely based on current visual input, but also based on a sequence of locations, so as to enable prediction (see~\cite{buckner2010role}). In this paper, our goal is to develop techniques that enable future predictions of occupied space for robotic navigation.
The main intuition behind our predictive approach is that knowledge of the geometry of existing occupied space can serve as a prior for generating predictions. Prior to deep learning, the best methods of generating predictions were through {\it explicit models}, however, modeling observations and experiences can be difficult if not impossible. Deep learning enables the ability to find hidden representations that encode prior knowledge by collecting datasets that represent experiences. In our work, we leveraged the power of deep learning to encode prior knowledge of likely spatial structures and used this representation to generate future predictions without an explicit model.
Based on the above experiments, the proposed approach is generally very stable, particularly when predicting occupancy maps representing 1.10x or 1.30x expansion increases. Considering U-Net's superior performance on the simulated data, we use it next to demonstrate the general robustness of our approach in Fig.~\ref{fig:random_pred}, where we display five randomly selected images from the test dataset. A promising benefit of our method is that with very little information, predictions can be extremely accurate, as shown in Fig.~\ref{fig:good_sim_images}. This is further evidenced by the supplementary video demonstrating accurate frame-by-frame prediction of a robot navigating a hallway in simulation. As expected, when the predicted area of the occupancy map increases, the results exhibit more uncertainty, as demonstrated by the 2.00x predictions in Fig.~\ref{fig:bad_pred}. While falling short of the exact ground truth, these examples still contain useful information beyond the observed input, which can be exploited by the planning algorithms.
Overall, when compared to the simulated data, the physical data performs worse quantitatively. This is likely due to the fact that the physical data exhibits more details that are hard to predict given the simulated training data, which does not have the same level of detail (e.g., chairs, boxes, people walking through the scene). Using augmentation methods may help address this issue.
Looking back at the physical data predictions in Fig.~\ref{fig:good_phys_images}, one notes that, at a high level, the predictions are not only informative but also qualitatively correct, as they all point to an upcoming T-like intersection. This suggests that from the perspective of the end goal of assessing navigational risks, selecting the navigation behavior, or simply deciding on if/when to decelerate, this high-level qualitative information is very useful.
We also note that in the physical data results in Table~\ref{table:sa_phys}, there is much less quantitative difference between the performance of the GAN and the fully convolutional models; in fact, the GAN seems to have a slight qualitative edge over the other methods, as it is able to predict a more detailed map than the other approaches.
As demonstrated in Fig.~\ref{fig:good_phys_images}, not only can our approach be used to predict occupied space, it also appears to have the beneficial effect of filtering out transient obstacles arising from noisy sensor readings.
While higher-resolution details might be desirable for collision avoidance, this is solved to a large extent by the current sensor measurements within the FOV. We argue that as we expand our temporal horizon, less spatial resolution is necessary in the prediction. In this sense it would be beneficial to use alternate metrics that take this fact into account. One way to achieve this is to compute SSIM at a coarser resolution for more distant future time instants, to characterize the ability of the prediction method to capture the future at different scales. This is left to future work.
Additional future work will also focus on improving the current methods for extending predictions and combining them with the stable results generated by the shorter horizon predictions.
\begin{figure}
\centering
\vspace{-2.5mm} \includegraphics[width=.95\columnwidth]{images/bad_pred.png}
\caption{Failure examples when predicting 2.00x increase with the U-Net FF architecture.}
\label{fig:bad_pred}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.95\columnwidth]{images/random_samples.png}
\caption{Random examples at 1.30x expansion using the U-Net FF architecture.}
\label{fig:random_pred}
\end{figure}
\begin{table}[h]
\centering
\caption{SSIM Analysis for Physical Data}
\label{table:sa_phys}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Expansion} & \textbf{unet\_ff} & \textbf{resnet\_ff} & \textbf{gan} \\ [0.05ex]
\hline\hline
1.10x & 0.511 & 0.523 & 0.489\\
\hline
1.30x & 0.517 & 0.534 & 0.524 \\
\hline
1.50x & 0.498 & 0.489 & 0.511 \\
\hline
1.70x & 0.486 & 0.504 & 0.486 \\
\hline
2.00x & 0.484 & 0.499 & 0.424 \\
\hline
\end{tabular}
\end{table}
\section{CONCLUSION}
Our long term objective is to develop risk-sensitive control algorithms capable of leveraging known obstacles in the environment as well as predicted obstacles. In this paper, we have laid the foundation to demonstrate deep networks can be used to make predictions of occupancy maps that extend beyond the FOV of the sensor. In our evaluation, we uncovered conditions where predictions were highly accurate and examples where the predicted results could be improved.
As future work, we plan to evaluate prediction mechanisms operating on raw depth data, combining visual and depth data, to develop an on-line learning policy and also to further develop risk-sensitive control policies for high speed navigation based on these predictions.
{\small
\bibliographystyle{ieee}
\section{INTRODUCTION}
It is well-known that in 1D systems the interaction between electrons
cannot be considered as a small perturbation and the system is
described as the Luttinger liquid (LL) that is an alternative to the
Fermi liquid for 1D electronic systems (for a review see
Ref.~\cite{Giamarchi,Voit}), and Landau's Fermi-liquid picture, in which low-energy excitations are single-electron quasiparticles that in many respects behave like non-interacting electrons, is not applicable. There are different realizations of 1D electronic systems demonstrating properties of the LL. Examples are semiconductor-based quantum wires, in which the dimensionality of the conduction electrons is reduced by dimensional quantization, and carbon nanotubes; such distinctive features of the LL as power-law suppression of tunneling into 1D systems and spin-charge separation have been confirmed experimentally, see e.g.\ Ref.~\cite{Cambridge}.
Electron-electron interaction greatly affects electronic transport in
1D systems. In particular, the back-scattering component of the
impurity potential in 1D systems with repulsive inter-electronic
interaction scales to infinity under renormalization group
transformations. Hence, even isolated impurities form effectively
large barriers and strongly suppress
conductance~\cite{KaneFisher,MatveevGlazman,FuruNag}.
On the other hand, the limit of strong interaction between electrons
in solids usually leads to Wigner crystallization. However, in 1D
systems the long-range order is destroyed by
fluctuations~\cite{LL}.
So, strictly speaking, 1D Wigner crystals do not exist, but the
density-density correlation functions of a 1D gas with Coulomb repulsion contain a $4k_F$ oscillating part which decays extremely slowly~\cite{Schulz}, like $e^{-c\sqrt{\ln x}}$, i.e.\ slower than any power law. As the period corresponding to the $4k_F$ oscillations is
exactly the average inter-electron spacing, such a system can be
considered as a 1D Wigner crystal with pseudo-long-range
order~\cite{Schulz}. In the case of short-range inter-electronic interaction (which takes place in gated quantum wires, where the long-range part of the Coulomb interaction is screened by electrons in the metallic gate) the $4k_F$ density correlations also decay slowly, as a power law with a small exponent.
Sliding of electronic crystals contributes to conductance, the most
studied case being quasi-1D CDW compounds~\cite{Gruner}. Defects pin the CDW, but when the driving electric field exceeds a threshold field the CDW starts to slide, resulting in non-linear conductance and ac generation at washboard frequencies corresponding to a shift of the CDW by one period~\cite{Gruner}. As long as the LL can be interpreted as a 1D form of the Wigner crystal, one can expect a similar dynamic regime of depinning, sliding and ac generation in a correlated 1D electron system as well. We show that such a regime does exist, at least in the quasiclassical limit, when quantum fluctuations at the
impurity site are suppressed by strong electron-electron interaction.
Such a scenario was addressed earlier in our letter~\cite{ARS} where
the dynamic regime of conduction accompanied by oscillations of
frequency $f = \bar I/e$ was predicted in a spinless LL.
Full I-V curves of a single-channel LL with a single impurity were
studied by means of the thermodynamic Bethe ansatz technique by Fendley et al.~\cite{FeLuSa}. Egger and Grabert~\cite{Grabert} calculated the I-V curves for the specific value of the interaction parameter $K_\rho =1/2$ using
the refermionization technique which makes the Hamiltonian quadratic
and, hence, solvable exactly. But no non-stationary regime was found.
The possibility of generating self-sustained current oscillations in a quantum wire embedded in a properly designed load circuit was considered in
Ref.~\cite{EgKoutouza}, but these oscillations are a consequence of
instability induced by S-shaped I-V curves, and their origin is
different from the mechanism discussed in the present work. We
suppose that the main difference between our approach and
Refs.~\cite{FeLuSa,Grabert,EgKoutouza} is that the equilibrium
distribution of incident particles (non-interacting fermions, kinks
and anti-kinks, etc.) was assumed in these papers. However, the distribution of the particles transmitted through the defect is not the equilibrium one, and the bosonic excitations of the LL are reflected from the leads back into the quantum wire even in the case of adiabatic contacts, since the reflection coefficient is $r = \frac{1-K_\rho}{1+K_\rho}$~\cite{SafiSchulz}. Hence the incident waves consist in part of particles reflected from the contact. So if the relaxation inside the conducting channel is weak, the distribution of the incident particles need not be the equilibrium one, and this applies equally to fermions derived from bosons after refermionization. Therefore, one needs to calculate the distribution
function of the incident particles, and we perform this by means of
boundary conditions which take into account relaxation processes
induced by coupling of the quantum wire to the Fermi liquid of the
current leads considered as a heat bath. These boundary conditions are
valid for non-ideal contacts, and they generalize the boundary
conditions by Egger and Grabert~\cite{Grabert} and the results of Safi
and Schulz~\cite{SafiSchulz,Safi} derived for expectation values and
ideal adiabatic contacts.
We think that the results of Refs.~\cite{FeLuSa,Grabert,EgKoutouza}
are applicable in the limit of conducting channels longer than the
damping length of excitations due to coupling of electrons inside the
wire to a dissipative bosonic bath (phonons, density fluctuations in a
metallic gate, and so on). We obtain the non-stationary regime of conduction for the practically important case of a quantum wire which is
shorter than the relaxation length, so that the relaxation is governed
by boundary conditions.
The structure of the paper is as follows. In Sec.~\ref{form} we
formulate the problem, derive boundary conditions at the contacts, and
derive equations of motion for the displacement field at the impurity
position. These equations resemble equations of motion of coupled
quantum pendulums. In Sec.~\ref{spinless} we use our equations to
study electronic transport in spinless LL. Using the Gaussian model to
account for fluctuations, we study I-V curves, analyze noise spectrum,
study non-Gaussian corrections and find that the Gaussian
approximation is justified in the limit of strong interaction between
electrons and large voltages. In Sec.~\ref{spinful} we consider the
spinful LL with strong enough interaction between electrons when
charge fluctuations at the defect position are small. However, spin
fluctuations are large and they are taken into account strictly by
means of refermionization method in spin sector valid in case of
spin-rotation invariant interaction ($K_\sigma =1$). In
Sec.~\ref{N-ideal} we show that non-adiabatic contacts induce
non-stationary effects similar to those induced by impurities. In
Sec.~\ref{concl} we formulate conclusions.
Below we set $e$, $\hbar$ and $k_B$ to unity, restoring dimensional
units in final expressions when necessary.
\section{GENERAL FORMULATION}\label{form}
\subsection{Problem formulation}
We consider a correlated 1D conductor with an impurity at $x=0$ and
connected to ideal Fermi-liquid reservoirs at $x= \pm L/2$. The
Hamiltonian of the system with impurity consists of two terms $ H=H_0+
H_{i}. $ The first one is the bosonised Tomonaga-Luttinger (TL)
Hamiltonian that maps the 1D system of interacting electrons to free
massless bosons described in terms of the displacement fields
$\hat\Phi_{\nu} (t,x)$ and the conjugated momentum density
$\hat\Pi_\nu (t,x) = \partial_x \hat\Theta_{\nu}/\pi$. Here $\nu =
\rho, \sigma$ denotes the charge and spin channels, respectively. The
standard TL Hamiltonian in the Fourier transformed form
reads~\cite{Giamarchi,Voit}
\begin{equation}
\hat H_0 = \frac{\pi v_{F}}{2} \sum_{\nu=\rho,\sigma} \int
\frac{dq}{2 \pi} \left\{\hat \Pi_\nu^2 +
\frac{1}{\pi^2 K_\nu^2} q^2 \hat \Phi_\nu ^2\right\}. \label{H0}
\end{equation}
Here the LL parameters $K_\nu$, playing the role of the stiffness
coefficients of the elastic string described by Hamiltonian
(\ref{H0}), are related to the electron-electron interaction
potential, and measure the strength of interaction between
electrons. In the spin-rotation invariant case considered in our
study, $K_\sigma =1$, $K_\rho(q)= 1/\sqrt{1+\frac{g(q)}{\pi v_F}}, $
where $g(q)$ is the Fourier transformed interaction potential. In case
of the short-range interaction the dependence of $g$ on wave-vector
$q$ is usually neglected. For repulsive interaction $K_\rho <1$. In
infinite 1D gas with long-range Coulomb interaction described by the
approximate form $V_C(x) = \frac{e^2}{\epsilon \sqrt{x^2 +d^2}}$,
where $\epsilon$ is a background dielectric constant and $d$ is the diameter of the quantum wire, one obtains $g(q) = \frac{2 e^2}{\epsilon} {\rm K_0} (|qd|)$~\cite{Schulz}. Thus,
\begin{equation}
K_\rho(q)=\frac{1}{\sqrt{1+\gamma {\rm K_0} (|qd|)}}, \quad \gamma = \frac{2 e^2}{\pi \hbar v_F \epsilon} \approx \frac{2}{137\pi} \left(\frac{c}{v_F }\right) \frac{1}{\epsilon} , \label{gamma}
\end{equation}
where $\gamma$ is a dimensionless parameter that measures the strength of the Coulomb repulsion between the electrons.
In case of the long-range interaction and finite length of the
conducting channel the Coulomb potential is modified by screening of
the interaction by current leads. The exact form of the screening
depends on the geometry of the system. We consider 3D metallic leads
forming sheets of a plane capacitor connected by the quantum
wire. Then the screening by the leads can be depicted in terms of the
image charges, and the interaction potential between charges located
at $x$ and $x'$ is described as
\begin{equation}
V(x,x') = \sum_{n=-\infty}^{\infty} \left[V_C (x-x'+2nL) - V_C (x+x'+2nL+L) \right],
\label{pot}
\end{equation}
where the term with $n=0$ describes the direct Coulomb interaction,
and other terms are induced by image charges. Its contribution to the
$\nu = \rho$ term in the Hamiltonian (\ref{H0}) in the coordinate
representation reads
\begin{equation}
\int dx dx'
\left\{\partial_x \hat\Phi_\rho (x) V(x,x')
\partial_{x'} \hat\Phi_\rho (x') \right\}
\end{equation}
Since the operator of the particle density is given by the expression $\hat\rho = -(\sqrt{2}/\pi)\partial_x\hat\Phi_\rho(x)$, this term has a rather transparent physical meaning.
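Numerically, the image-charge sum in (\ref{pot}) is easy to evaluate, since distant images contribute negligibly and the series converges; a minimal sketch (units with $e^2/\epsilon = 1$ and the truncation $n_{\max}$ are illustrative):
\begin{verbatim}
import numpy as np

def v_bare(x, d=1.0):
    """Bare potential V_C(x) = 1 / sqrt(x^2 + d^2) in units where
    e^2/eps = 1; d is the wire diameter."""
    return 1.0 / np.sqrt(x**2 + d**2)

def v_screened(x, xp, L, d=1.0, n_max=50):
    """Truncated image-charge sum of the screened potential V(x, x')."""
    n = np.arange(-n_max, n_max + 1)
    direct = v_bare(x - xp + 2 * n * L, d)
    image = v_bare(x + xp + 2 * n * L + L, d)
    return np.sum(direct - image)
\end{verbatim}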
Interaction with the impurity is described in terms of the phase
fields $\hat\Phi_\nu (t,x)$ at the impurity position $x=0$
\cite{Giamarchi,Voit}
\begin{equation}
\hat H_{i} = - \frac{W}{\pi} \cos{\sqrt{2} \hat\Phi_\rho (0)}\cos{\sqrt{2} \hat\Phi_\sigma (0)},
\label{impu}
\end{equation}
where the impurity strength $W$ is related to the back-scattering part
of the impurity potential. The forward scattering is not included
because it can be eliminated from the problem by redefinition of the
field $\hat\Phi_\rho$~\cite{Giamarchi}. The impurity Hamiltonian is
related to $2k_F$-components of electron density and in the Luttinger
model used here it does not contain higher harmonics, which are
present in more general models~\cite{Voit}.
Current in the system can be calculated in terms of $\hat\Phi_\rho$ by
means of thermodynamic averaging of the expressions for the operator
\begin{equation}
\hat I = \frac{\sqrt{2}}{\pi}\partial_t \hat \Phi_\rho. \label{i}
\end{equation}
The expectation value of the displacement field in (\ref{i}) can be
found from equation of motion for the Heisenberg operator
$\hat\Phi_\rho (t,x)$. Commuting $\hat\Phi_\rho$ with the Hamiltonian
we find for the case of short range interaction
\begin{equation}
\left( v_\rho^2 \partial^2_{x} - \partial^2_{t}\right)\hat\Phi_\rho (t,x) = \sqrt{2} \pi v_F W \sin \sqrt{2} \hat\Phi_{\rho } \cos{\sqrt{2} \hat\Phi_\sigma}\, \delta (x) ,
\label{phiop}
\end{equation}
where $v_\rho = v_F/K_{\rho}$ is the velocity of charge (plasmonic)
excitations. The equation of motion for the spin field has a similar form; it can be obtained from (\ref{phiop}) by interchanging the subscripts $\rho$ and $\sigma$.
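Written out explicitly, this interchange gives (with $v_\sigma = v_F/K_\sigma$ the velocity of spin excitations)
\begin{equation}
\left( v_\sigma^2 \partial^2_{x} - \partial^2_{t}\right)\hat\Phi_\sigma (t,x) = \sqrt{2} \pi v_F W \sin \sqrt{2} \hat\Phi_{\sigma } \cos{\sqrt{2} \hat\Phi_\rho}\, \delta (x) .
\end{equation}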
At the contacts we apply the boundary conditions which take into
account injection of electrons induced by external bias and relaxation
processes induced by coupling of the quantum wire to 2D or 3D Fermi
liquid in the current leads. The boundary conditions are considered in detail in the next subsection.
\subsection{Boundary conditions}
Boundary conditions for a single-mode (spinless or spin-polarized) wire contacting 2D or 3D leads were derived in Ref.~\cite{ArtAsSh}. Here we generalize this result to the spinful
case. In order to derive the boundary conditions we use the ideas of
the scattering approach (for a review see Ref.~\cite{Buttiker}).
We assume that electrons in the leads do not interact and that
longitudinal (along the $x$-axis) and transverse motions are
separable. Here we concentrate on the case of
contacts at $x=\pm L/2$ with an arbitrary transverse profile of the
potential. The longitudinal motion in the leads is characterized by
wave vector $k$, spin $s$ and energy $\varepsilon_l =
\frac{k^2}{2m}$. The transverse motion is described by energy
$\varepsilon_n$, the total energy being $\varepsilon = \varepsilon_l +
\varepsilon_n$, where $n$ is an index labeling transverse modes.
In the case of non-interacting electrons we match the electron field operators in the lead and in the wire and, using the independence of the annihilation operators $\hat c_{n,k}$ of the incident electrons from the properties of the contact, we derive boundary conditions for the lowest subband, which is responsible for the electronic transport in the wire. The detailed derivation is given in the Appendix.
It is convenient to express the boundary conditions in terms of physical quantities: the current $\hat j = v_F\left(\hat \psi_R^\dag \hat \psi_R - \hat \psi_L^\dag\hat \psi_L \right)$, the smooth part of charge density
perturbations $\hat \rho = \hat\psi_R^\dag \hat\psi_R +
\hat\psi_L^\dag\hat\psi_L$, and the $2k_F$-component of charge density
perturbations $\hat \rho_F = \hat \psi^\dag_{L} \hat \psi_R e^{2iq_F
x} + c.c.$, which is related to the Friedel oscillations, where
$\hat \psi_{R,L}$ are field operators for right and left moving
electrons in the wire. Then the boundary conditions at the left (right) contact read
\begin{equation}
\frac{v_F}{T}\hat \rho \pm \hat j + v_F f \hat \rho_F =
\frac{1}{V} \sum_{\mathbf{n},\mathbf{n'}}\hat c_{\mathbf{n'}}^+ \hat c_{\mathbf{n}} e^{i(\varepsilon_{\mathbf{n'}}-\varepsilon_{\mathbf{n}})t}.
\label{bc-rho}
\end{equation}
Here $T$ is a parameter that characterizes reflection from the contact, with $T=1$ corresponding to an adiabatic contact. The parameter $f$ describes the amplitude of the Friedel oscillations; it is a number of order unity if $T$ is not close to unity, and $f \simeq \sqrt{2(1-T)}$ if the contact is nearly adiabatic, $1-T\ll 1$. Thus the Friedel oscillations disappear if the contacts are ideal. These parameters are local in the sense that they depend only on the properties of the given contact and depend neither on the lead at the opposite end of the 1D channel nor on the presence of an impurity or electron-electron interaction, provided the latter vanishes in the leads. The explicit expressions for $T$ and $f$ are given in the Appendix.
In order to check the validity of conditions~(\ref{bc-rho}), we
considered a wire with non-interacting 1D electrons attached to
smoothly widening nearly adiabatic leads. We also assumed that there
might be a potential step of the height $U_0 \ll \varepsilon_F$ at the
interface. In this case we can find the solution directly, using the
quasiclassical approximation in the lead and matching the
quasiclassical solution outside the 1D conductor with the exact
solution inside the channel. We found that condition~(\ref{bc-rho}) is again fulfilled and yields the conductance $G=TG_0$, in agreement with the Landauer formula.
As we need the boundary conditions in the bosonic representation, we
have to bosonize (\ref{bc-rho}). Note that the LL theory is valid
provided that all energies are small in comparison with the Fermi
energy, while the amplitude of the term $v_F f \hat \rho_F$ which is
responsible for the Friedel oscillations is of the order of the Fermi
energy if $f$ is not small. Therefore, we restrict our study to nearly adiabatic contacts with $\sqrt{1-T} \ll 1$, and neglect
terms of the higher order in $f$. Transforming then in a standard way
the fermionic operators to charge and spin density
variables~\cite{Voit} we obtain the boundary conditions for bosonic
field $\hat \Phi_\rho$ at the left (right) contacts
\begin{eqnarray}
&&
v_F \partial_x \hat\Phi_\rho \mp \partial_t \hat\Phi_\rho + \sqrt{2}f\varepsilon_F \sin (\sqrt{2}\hat\Phi_\rho \mp k_FL)\cos\sqrt{2}\hat\Phi_\sigma = \hat P^{L,R}_\rho, \label{bc-operator1} \\
&&
v_F \partial_x \hat\Phi_\sigma \mp \partial_t
\hat\Phi_\sigma +
\sqrt{2}f\varepsilon_F \cos (\sqrt{2}\hat\Phi_\rho \mp k_FL) \sin \sqrt{2} \hat\Phi_\sigma
= \hat P^{L,R}_\sigma , \label{bc-operator-sigma}
\end{eqnarray}
where $\hat P_\nu^{L,R} = 2\pi v_F \hat N_\nu^{L,R}$, and $\hat N_\nu^{L,R}$ are the operators of the excess number of charge ($\nu=\rho$) and spin ($\nu=\sigma$) densities in the left ($L$) and right ($R$) leads, respectively.
and correlation functions of their fluctuating parts $\delta\hat
P^{L,R}_\nu = \hat P^{L,R}_\nu - \langle \hat P^{L,R}_\nu \rangle$ can
be calculated easily from the right-hand part of (\ref{bc-rho}). The
average of $P^{L,R}_\rho$ for the charge channel is proportional to the potentials $U^{L,R}_\rho$ applied to the left (right) contact, $\langle P^{L,R}_\rho \rangle = U^{L,R}_\rho/\sqrt{2}$. Similarly, the expectation values $\langle \hat N_\sigma^{L,R}\rangle$ are equal to the
excess spin densities in the leads, and $\langle \hat P_\sigma^{R} -
\hat P_\sigma^{L} \rangle = V_\sigma/\sqrt{2}$ where $V_\sigma$ is a
``spin bias''.
The correlation functions are identical for both channels and for both
contacts, while correlations between the left and right contacts and
between the charge and spin operators are absent. In the frequency
representation the correlation functions read
\begin{equation}
\langle \delta\hat P (\omega)\delta\hat P (\omega') \rangle = 4\pi^2 \omega N(\omega') \delta (\omega + \omega'),
\label{P-corr}
\end{equation}
where $N(\omega')$ is the Planck distribution function. The
fluctuating part of the boundary conditions takes into account that
the leads play the role of a heat bath, and it leads to the equilibrium
distribution functions of the excitations in the quantum wire.
If there is a metallic gate near the quantum wire we must take into
account screening by the gate. Following the approach of
Ref.~\cite{Grabert} we find that the screening by the gate results in
a modification of the factor in the first term of
(\ref{bc-operator1}). Then the boundary conditions for the case of
short-range interaction acquire the form
\begin{equation}
\frac{v_F}{K_\rho^2} \partial_x \hat\Phi_\rho \mp \partial_t \hat\Phi_\rho + \sqrt{2}f\varepsilon_F \sin (\sqrt{2}\hat\Phi_\rho \mp k_FL)\cos\sqrt{2}\hat\Phi_\sigma = \hat P^{L,R}_\rho . \label{bc-operator-rho}
\end{equation}
The modification of the factor before the spatial derivative can also be
illustrated by means of a simple model in which the factor
$K_\rho$ is equal to $1$ at the non-interacting lead, $x=-L/2-0$, and
reaches its value in the wire step-like at $x=-L/2+0$. Then we
integrate the equation of motion~(\ref{phiop}) from $x=-L/2-0$ to
$x=-L/2+0$ and obtain that $\hat \Phi_\rho$ is a continuous function
of $x$ but its spatial derivative satisfies
$$
\partial_x \hat \Phi_\rho(-L/2-0) = \frac{1}{K_\rho^2}\partial_x \hat
\Phi_\rho(-L/2+0),
$$
which explains the transition
from~(\ref{bc-operator1}) to~(\ref{bc-operator-rho}).
In the case of a wire adiabatically connected to ideal Fermi-liquid
reservoirs at $x= \pm L/2$, the boundary conditions
(\ref{bc-operator-sigma}) and (\ref{bc-operator-rho}) reduce to
\begin{equation}
\left ( \frac{v_F}{K_{\nu}^2}\partial_x \mp \partial_t \right )\hat\Phi_\nu (x{=} \pm L/2){=}\hat P_\nu^{L,R} , \qquad \nu = \rho, \sigma \label{bc-operator}
\end{equation}
in agreement with the results of Refs.~\cite{Grabert,SafiSchulz,Safi}.
It looks natural that in the case of a gated quantum wire the gate screens
the externally applied electric field, and the problem is described in
terms of boundary conditions, as was discussed in
Ref.~\cite{Grabert}. Of course, inside the wire there is also an
electric field induced by the non-uniform distribution of electrons, but
this field is taken into account by the interaction between
electrons. However, it is less clear whether one can describe the
driving voltage by boundary conditions when there is no gate (the case
of long-range interaction). Therefore, in the case of long-range Coulomb
interaction we considered two approaches. First, we inserted the
driving dc electric field into the Hamiltonian, so that the external
field appears in the equation of motion for the displacement field
$\hat\Phi_\rho(x,t)$. Second, we derived equations of motion for the
phase fields with the driving dc voltage taken into account by boundary
conditions. The equation of motion for the displacement field
$\hat\Phi_\rho(t)$ at the defect site turned out to be the same, so
that for dc voltage the two approaches are equivalent.
\subsection{Equations of motion of the displacement field at the
impurity site}\label{equations}
In this section we derive the equations of motion for the phases
$\hat\Phi_{\rho }$ and $\hat \Phi_{\sigma}$ at the impurity for a
wire with adiabatic contacts. Consider first the case of short-range
interaction. We solve the equation of motion (\ref{phiop}) for
$\hat\Phi_\nu (\omega,x)$ formally, using the Fourier transformation with
respect to time, and match the solutions at the impurity site using the
boundary conditions (\ref{bc-operator}). In this way we express the
operators $\hat\Phi_\nu (\omega,x)$ in terms of their values at the
impurity site, $x=0$, and after the inverse Fourier transformation obtain
the equations of motion for the displacement field at the impurity
site. The equations read
\begin{eqnarray}
&&
\partial_t \hat \Phi_{\rho} + \frac{W}{\sqrt{2}} Z \otimes \sin \sqrt{2}\hat \Phi_{\rho}\cos \sqrt{2}\hat \Phi_{\sigma}
= F \otimes \hat P_\rho ,
\label{em-rho}
\\
&&
\partial_t \hat \Phi_{\sigma} + \frac{W}{\sqrt{2}} \sin \sqrt{2}\hat \Phi_{\sigma}\cos \sqrt{2}\hat \Phi_{\rho}
= \hat P_\sigma \left(t-\frac{L}{2v_F} \right) .
\label{em-sigma}
\end{eqnarray}
Here $\otimes$ means convolution in time, $\hat P_\nu = \hat P_\nu^{R} -
\hat P_\nu^{L}$, and $Z(t)$ and $F(t)$ are defined by means of their Fourier
components
\begin{equation}
Z(\omega)= K_{\rho} \frac{1- iK_{\rho} \tan \omega t_L}{K_{\rho} -i \tan \omega t_L}, \quad F (\omega) = \frac{K_{\rho} }{2[K_{\rho} \cos \omega t_L -i \sin \omega t_L] },
\label{ZF}
\end{equation}
where $t_L = \frac{L K_\rho}{2v_F}$. The oscillatory dependence of
$Z(\omega)$ and $F(\omega)$ describes multiple reflections of the
bosonic excitations of the LL from the contacts. This statement can be
illustrated by the expression for $Z$ in the time representation
\begin{equation}
Z(t) = K_\rho \left[\delta (t)+ 2 \sum_{m=1}^\infty r^m \delta \left(t - m t_L \right) \right], \quad r=\frac{1-K_\rho}{1+K_\rho},
\label{z}
\end{equation}
where $r$ is the reflection coefficient of plasma excitations from the
contacts~\cite{SafiSchulz}.
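The multiple-reflection structure of (\ref{z}) can be made explicit by a short resummation of (\ref{ZF}). Expressing $\tan \omega t_L$ through $e^{\pm i\omega t_L}$, one finds
$$
Z(\omega) = K_\rho \frac{1 + r e^{2i\omega t_L}}{1 - r e^{2i\omega t_L}}
= K_\rho \left[ 1 + 2\sum_{m=1}^{\infty} r^m e^{2im\omega t_L} \right],
$$
so that each power of the reflection coefficient $r$ contributes one delayed $\delta$-pulse in the time representation, one pulse per round trip of a plasmon between the impurity and a contact.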
Consider now the case of long-range Coulomb interaction between the
electrons. Formally, the interaction potential in a system of finite
length (\ref{pot}) is symmetric with respect to the contacts and
periodic with period $2L$. Therefore, we can expand the field
operators in Fourier series and find a simple and easily soluble
equation of motion for the Fourier components. Then, using the boundary
conditions, we obtain equations of motion similar to
(\ref{em-rho}-\ref{em-sigma}) but with different memory functions $F$
and $Z$:
$$
Z(\omega) = \frac{i\omega R_+ - 2\omega^2(R_+^2 - R_-^2)}{1+2i\omega
R_+ },\, F(\omega) = \frac{i\omega R_- }{1+2i\omega R_+ }, \,
R_{\pm}(\omega) = \frac{v_F}{L}\sum_{k=-\infty}^{\infty} \frac{(\pm
1)^k}{\omega^2 - q_{2k}^2 v^2(q_{2k})} $$ with $ q_n = \frac{\pi
n}{L}$ and $v^2(q)=v_F^2[1+2\gamma {\rm K}_0 (|qd|)]$. The exact
analytical summation in $R_{\pm}$ is difficult, but the sums can be
calculated with logarithmic accuracy as
$
R_{+}(\omega) = \frac{K_\rho(q_\omega)}{2\omega \tan \frac{\omega L}{2 v_\omega}}, \quad R_{-}(\omega) = \frac{K_\rho(q_\omega)}{2\omega \sin \frac{\omega L}{2 v_\omega}},\quad K_\rho(q_\omega) = \frac{1}{ \sqrt{1+2\gamma {\rm K}_0 (|q_\omega d|)} },
$
where $q_\omega$ is a solution of the equation $\omega= q_\omega v(q_\omega)$. This approximation results in expressions for $Z$ and $F$ that coincide with (\ref{ZF}) but with $K_\rho(q_\omega)$ depending on frequency.
In the simplest case of a single-mode (spinless) LL the equation of motion for the phase at the impurity site reads
\begin{equation}
\partial_t \hat \Phi (t) + W_i Z \otimes \sin 2 \hat\Phi = F\otimes\hat P.
\label{phi0op}
\end{equation}
Equations (\ref{em-rho}-\ref{em-sigma}) and (\ref{phi0op}) resemble the equation of motion of an overdamped pendulum; therefore, one can expect that when the system is driven by a constant external bias the phase increases non-uniformly, which in our case means the presence of both dc and ac current. It is not easy to solve the non-linear operator equations in the general case, so we solve them in the limit of strong inter-electronic interaction, when fluctuations of the phase field $\hat \Phi_{\rho}$ are relatively small and can be described by a Gaussian approximation. Fluctuations in the spin channel are not small and are not Gaussian; however, they will be taken into account exactly by means of refermionization.
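The overdamped-pendulum analogy is easy to verify numerically. The following minimal sketch (Python, illustrative parameters, in the noiseless and memoryless limit $Z(t)\to \delta(t)$ of (\ref{phi0op}), with a reduced bias $v$ and the current taken as $I=(1/\pi)\partial_t\Phi$) integrates $\partial_t\Phi = v - W\sin 2\Phi$ and exhibits a dc current accompanied by oscillations at the washboard frequency $\omega_0 = 2\pi \bar I$:
\begin{verbatim}
# Minimal classical sketch of the overdamped washboard dynamics behind
# Eq. (phi0op): dPhi/dt = v - W*sin(2*Phi); v is a reduced bias, and
# v > W corresponds to voltages above the threshold V_T. No noise.
import numpy as np
from scipy.integrate import solve_ivp

W, v = 1.0, 1.2
sol = solve_ivp(lambda t, phi: v - W * np.sin(2 * phi), (0, 400), [0.0],
                max_step=0.01, dense_output=True)
t = np.linspace(100, 400, 8192)                   # discard the transient
current = np.gradient(sol.sol(t)[0], t) / np.pi   # I = (1/pi) dPhi/dt

# Expect I_dc = sqrt(v**2 - W**2)/pi and a spectral peak at 2*pi*I_dc.
freqs = 2 * np.pi * np.fft.rfftfreq(t.size, t[1] - t[0])
spec = np.abs(np.fft.rfft(current - current.mean()))
print("I_dc =", current.mean(), "omega_peak =", freqs[1:][spec[1:].argmax()])
\end{verbatim}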
\section{DYNAMIC REGIME OF CONDUCTION IN THE SPINLESS LUTTINGER LIQUID}\label{spinless}
\subsection{Gaussian approximation}
In this section we consider the technically simplest case of a single-mode LL with short-range interaction between electrons.
First, we represent the bosonic field operator at the impurity site as a sum of its expectation value and a fluctuating part, $\hat\Phi = \Phi + \hat\phi$, $\Phi = \langle \hat\Phi\rangle$. Then we perform a thermodynamic averaging of both sides of Eq.~(\ref{phi0op}) and obtain the equation for the expectation value $\Phi$ of the field operator at the impurity site
\begin{equation}
\partial_t \Phi (t) + W_i Z \otimes \langle \sin 2 \hat\Phi \rangle = F\otimes V .
\label{phi0}
\end{equation}
Equation (\ref{phi0}) is not a closed equation for $\Phi (t)$, since it contains the expectation value of $\sin 2 \hat\Phi (t)$, which depends both on the expectation value $\Phi$ and on the fluctuations $\hat\phi$ of the displacement field. Therefore, in order to calculate the expectation value we need to study the fluctuations. We obtain the equation of motion for the fluctuating part $\hat\phi$ of the displacement field by subtracting (\ref{phi0}) from (\ref{phi0op}). Then we simplify the problem by assuming that the fluctuations are Gaussian. Strictly speaking, the fluctuations are not Gaussian, and in the general case this is just a model assumption. However, we show below that this approach can be justified in the case of strong inter-electronic repulsion and in the limit of high voltages, where the Gaussian fluctuations dominate.
Thus we solve the problem by means of the self-consistent harmonic approximation~\cite{Giamarchi}, in which fluctuations are assumed to be Gaussian. In this approximation, we replace
\begin{equation}
\sin 2 \hat\phi \to 2 h \hat\phi, \quad h \equiv e^{-2 \langle \hat\phi^2 \rangle} ,
\label{scha}
\end{equation}
and instead of (\ref{phi0}) we obtain a simpler equation for the expectation value $\Phi (t)$
\begin{equation}
\partial_t \Phi (t) + W_i Z\otimes h \sin 2 \Phi = F \otimes V ,
\label{phi0s}
\end{equation}
and a linear equation for fluctuations
\begin{equation}
\partial_t \hat\phi (t) + 2 W_i Z \otimes h \cos2\Phi \, \hat\phi = F \otimes \delta\hat P .
\label{phi-flu}
\end{equation}
The coefficients of this equation depend both on the mean-square fluctuations $\langle \hat\phi^2 (t) \rangle$ and on the expectation value $\Phi$, so it must be solved self-consistently with (\ref{phi0s}).
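The exponential factor $h$ is simply the Debye--Waller-type average for a Gaussian field: using $\langle e^{2i\hat\phi} \rangle = e^{-2\langle \hat\phi^2 \rangle}$ for Gaussian $\hat\phi$, one gets
$$
\langle \sin 2\hat\Phi \rangle = {\rm Im}\, \left[ e^{2i\Phi} \langle e^{2i\hat\phi} \rangle \right] = e^{-2\langle \hat\phi^2 \rangle} \sin 2\Phi = h \sin 2\Phi ,
$$
which is how (\ref{phi0}) turns into (\ref{phi0s}).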
If the applied dc voltage is small enough, equations (\ref{phi0s}) and (\ref{phi-flu}) have stationary solutions for the phase $\Phi$ and for the mean-square fluctuations $\langle \hat\phi^2 \rangle$.
In the stationary case (\ref{phi0s}) reads
\begin{equation}
W_i h \sin 2 \Phi = V ,
\label{phi-st}
\end{equation}
and Fourier transformed (\ref{phi-flu}) reduces to the simple form
\begin{equation}
-i\omega \hat\phi (\omega) + 2 W_i h Z(\omega) \cos2\Phi \hat\phi (\omega) = F(\omega) \delta\hat P(\omega) .
\label{eqflu-st}
\end{equation}
This equation can be solved easily. Taking into account correlation functions given by (\ref{P-corr}), (\ref{eqflu-st}) and (\ref{ZF}) we can calculate mean square fluctuations
\begin{equation}
\langle \hat\phi^2 \rangle = \frac{K_\rho^2}{2} \int_{-\infty}^\infty \frac{\omega \coth\frac{\omega}{2T} d\omega}{(\omega^2 + W_c^2)[(1+K_\rho^2) + (1- K_\rho^2)\sin (\omega t_L - \alpha_\omega)]},
\label{flu-st}
\end{equation}
where
$
\alpha_\omega = \arctan \frac{W_c^2 - \omega^2}{2\omega W_c}, \quad W_c= 2W_i K_\rho h \cos 2\Phi.
$
Since $W_c$ depends on $\langle \hat\phi^2 \rangle$, (\ref{flu-st}) determines the self-consistency condition for $\langle \hat\phi^2 \rangle$. The result of the integration depends on the relation between $V_T$
and the temperature $T$. First, we consider the limit of zero
temperature. In a pure LL this integral would diverge
logarithmically both at high and at low frequencies. The
divergence at the upper limit in the TL formalism must be
cut off at a frequency $\Lambda$ of the order of the bandwidth
or the Fermi energy. The infrared divergence at low
frequencies is a distinctive feature of 1D
systems, and in the presence of an impurity it is cut off at a frequency related to the impurity potential.
In addition, the
denominator contains an oscillating factor induced by
reflections of the fluctuations from the contacts. If the length of
the quantum wire is large enough, the main
contribution to the integral is determined by frequencies
$\omega t_L \gg 1$; the oscillations then contribute little to the
integral, and we obtain
\begin{equation}
\langle \hat\phi^2 \rangle = \frac{K_\rho}{2(1-K_\rho)} \ln \frac{\Lambda}{2 W_i K_\rho \cos 2\Phi}.
\label{phi2}
\end{equation}
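This self-consistency is easy to check numerically: iterating the $T=0$, long-wire limit of (\ref{flu-st}), $\langle \hat\phi^2 \rangle = \frac{K_\rho}{2}\ln(\Lambda/W_c)$ with $W_c = 2W_i K_\rho e^{-2\langle \hat\phi^2 \rangle}\cos 2\Phi$, converges to the closed form (\ref{phi2}) for any $K_\rho < 1$. A short sketch with illustrative parameter values:
\begin{verbatim}
# Fixed-point check of the T = 0 self-consistency behind Eq. (phi2).
import numpy as np

K, W_i, Lam, cos2Phi = 0.5, 0.01, 1.0, 1.0    # Lam = ultraviolet cut-off
p2 = 0.0
for _ in range(200):
    W_c = 2 * W_i * K * np.exp(-2 * p2) * cos2Phi  # dressed impurity scale
    p2 = 0.5 * K * np.log(Lam / W_c)               # long-wire T=0 integral

closed = K / (2 * (1 - K)) * np.log(Lam / (2 * W_i * K * cos2Phi))
print(p2, closed)   # the two values coincide, reproducing Eq. (phi2)
\end{verbatim}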
Now, using (\ref{phi2}), we can calculate the maximum value
of the left-hand side of (\ref{phi-st}), which
determines the threshold voltage $V_T$ below which
static solutions for the mean phase $\Phi$ exist. We find
\begin{equation}
V_T = 2W_i \left( \frac{2W_i \sqrt{K_\rho^3} }{\Lambda}\right)^{\frac{K_\rho}{1-K_\rho}}\sqrt{1-K_\rho}.
\label{VT}
\end{equation}
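The structure of (\ref{VT}) follows from maximizing the left-hand side of (\ref{phi-st}) over $\Phi$, with $h$ taken from (\ref{phi2}); up to the overall numerical prefactor fixed by the memory kernels,
$$
V(\Phi) \propto W_i \left( \frac{2W_i K_\rho \cos 2\Phi}{\Lambda} \right)^{\frac{K_\rho}{1-K_\rho}} \sin 2\Phi , \qquad
\frac{dV}{d\Phi} = 0 \;\Rightarrow\; \cos 2\Phi = \sqrt{K_\rho}, \quad \sin 2\Phi = \sqrt{1-K_\rho},
$$
and substituting the extremal angle back reproduces the combination $\left(2W_i\sqrt{K_\rho^3}/\Lambda\right)^{K_\rho/(1-K_\rho)}\sqrt{1-K_\rho}$ in (\ref{VT}).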
We see that the threshold voltage at low temperatures is
determined by the impurity potential renormalized by quantum
fluctuations. In the case of inter-electronic repulsion, $K_\rho
< 1$, the mean-square fluctuations $\langle \hat\phi^2
\rangle$ and, hence, $V_T$ are finite, while in the
non-interacting system, when $K_\rho = 1$, the fluctuations
become infinite and $V_T$ is destroyed by quantum
fluctuations. Thus we find that the solution for $\Phi$ is
stationary at $V<V_T$, that is, the current cannot pass the
impurity. This result is a consequence of our approximation,
in which only Gaussian fluctuations were taken into account.
If we took into account fluctuations of solitonic type, for
which the phase increases by $\pi$ due to tunneling, we
would obtain a small tunneling current at $V < V_T$ described by the well-known power-law
I-V curves~\cite{Giamarchi}. Thus the current is small at $V < V_T$ and starts to increase rapidly at $V > V_T$.
In the case of finite temperatures the
self-consistency equation has solutions corresponding to
a finite value of the fluctuations only if $T < T_c \sim V_{T,0}
\equiv V_{T}(T=0, L=\infty)$, so there is a characteristic
temperature above which $V_T$ is destroyed by thermal
fluctuations and the impurity does not suppress electronic
transport.
If the quantum wire is short enough, $L \sim v_\rho/V_{T,0}$, we must not average (\ref{flu-st}) over the oscillations at $\omega t_L \sim 1$. At these frequencies $\langle \hat\phi^2(\omega) \rangle$ in (\ref{flu-st}) is proportional to $\omega^{-1}$ as before, but with a different factor. As a consequence, $V_T$ is suppressed in short wires, and impurities do not destroy the linear conduction when $L < L_c \sim v/V_{T,0}$. This happens due to the increase of fluctuations at the impurity site: the fluctuations are reflected back from the contacts, while the distance to the contacts becomes smaller than the correlation length of the fluctuations.
\subsection{I-V curves and noise spectrum at high voltages}
As noted already, it is difficult to obtain the I-V curves at low voltages accurately because of the time dependence of the mean-square value of the fluctuations. The problem is simplified at high voltages, $V \gg V_T$, when the mean-square value $\langle \hat\phi^2 \rangle$ becomes nearly constant, with a small oscillating component. In this case (\ref{phi0s}-\ref{phi-flu}) can be solved perturbatively, assuming that the oscillating parts of both the mean-square fluctuations $\langle \hat\phi^2 \rangle$ and the mean phase $\Phi$ are small.
In this subsection we consider the limit of a relatively long
conducting channel, $V_T t_L \gg 1$, but not too long, so
that the wire is short in comparison with the damping length
related to relaxation due to coupling to phonons, etc. In this case we have to use the exact form of
$Z(t)$ in the equation for the expectation value
(\ref{phi0s}), but we can keep only the first
delta-function in the kernel $Z(t)$ in the equation for fluctuations
(\ref{phi-flu}). In the time representation this means that
we take into account current pulses reflected from the
contacts, but we ignore correlations between fluctuations
shifted by the time $n t_L$ necessary for an excitation to
return to the impurity after multiple reflections
from the contacts. Then (\ref{phi-flu}) acquires a simple
form and can be solved easily
\begin{equation}\label{hp11}
\hat \phi (t) = \int_{-\infty}^{t} dt_1 \int_0^\infty dt_2\, F(t_1-t_2)\, \delta\hat P(t_2)\, e^{-\int_{t_1}^{t} W(t')\, dt' },
\end{equation}
where $W(t) = 2 K_\rho W_i h(t) \cos2\Phi (t)$. Using (\ref{hp11}) we can calculate the mean-square fluctuations $\langle \hat\phi^2 \rangle$. As we consider a long channel, we average, again, over the oscillatory factor in $F(t)$ and find
\begin{equation}
\langle \hat\phi^2 \rangle =
\frac{K_\rho}{4} \int_{-\infty}^{t} dt_1 \, dt_{3} \int d\omega \omega \coth\frac{\omega}{2T} e^{-\int_{t_1}^{t} W(t_2)dt_2 - \int_{t_3}^{t} W(t_2) dt_2-i\omega(t_1-t_3)}.
\label{cor}
\end{equation}
To solve this equation we need first to calculate $W(t)$, which is determined by the fluctuations. In order to do this we solve (\ref{phi0s}) and (\ref{cor}) for the fluctuations, seeking $\langle \hat\phi^2 \rangle$ in the form $\langle \hat\phi^2 \rangle = \langle \hat\phi^2 \rangle_t + c\cos \omega_0 t + s \sin \omega_0 t$, where $\omega_0 \equiv 2\pi \bar I$ and $\langle \cdots \rangle_t$ denotes averaging in time. We also assume that $c,s \ll 1$. Substituting this form into (\ref{cor}) and keeping only the leading terms, we obtain in the limit of low temperatures
\begin{equation}
\langle \hat\phi^2 \rangle = \frac{K_\rho}{2} \left[ \ln\frac{\Lambda}{b} - \frac{\pi W_0}{\omega_0} \cos \omega_0 t\langle \hat\phi^2 \rangle_t - \frac{2W_0}{\omega_0} \ln\frac{\omega_0}{b} \sin \omega_0 t \right],
\label{cor2}
\end{equation}
where $W_0= 2W_i K_\rho e^{-2\langle \hat \phi^2 \rangle_t},\; b= \langle W(t) \rangle_t = |c|W_0$.
Thus we have found that the main logarithmic contribution to $\langle \hat\phi^2 \rangle_t$ is determined by a relation similar to (\ref{flu-st}), valid in the case of small voltages, but with a different infrared cut-off frequency $b$, which is much smaller than $W_c$ in (\ref{flu-st}). From the self-consistency condition we find
\begin{equation}
c = - \frac{\pi K_\rho W_0}{2\omega_0}, \; s = \frac{2 c}{\pi} \ln \frac{2\omega_0^2}{\pi W_0^2}, \; W_0 = W_i K_\rho^{\frac{1+ K_\rho}{1 - 2 K_\rho}} \left(\frac{ \pi W_i^2}{2\Lambda V}\right)^{\frac{ K_\rho}{1 - 2 K_\rho}}.
\label{dV}
\end{equation}
Here we have expressed $W_0$ from (\ref{VT}) in terms of
$V_T$ at zero temperature in the limit of a long wire.
We see that at high voltages the solution with a finite
amplitude of the oscillations exists only at $K_\rho < 1/2$,
i.e., when the inter-electronic interaction is strong enough.
This result differs from that for the regime of small
voltages, where fluctuations do not destroy the dynamic regime
at any repulsion strength $K_\rho < 1$.
Now, using (\ref{cor2}), we can solve (\ref{phi0s}) in the limit of high voltages, $V \gg V_T$, and calculate the current. The total current calculated near the contact consists of a dc part, $\bar I = V G_0 + I_{nl}$, where $I_{nl}$ is a non-linear correction to Ohm's law, and of an ac part, $I_{ac} \sin \omega_0 t$, which oscillates with the frequency $\omega_0 = 2\pi \bar I/e\approx eV/\hbar$ (in dimensional units)
\begin{eqnarray}
&&
I_{ac} = \frac{\sqrt{2} G_0 W_0 }{\sqrt{1+K_\rho^2-(1-K_\rho^2)\cos\omega_0 t_L}},
\label{Iac} \\
&&
I_{nl} = - \frac{2G_0 W_0^2}{V} \left[\ln \frac{2V^2}{\pi W_0^2} + \frac{1}{(1+ K_\rho^2) - (1 - K_\rho^2)\cos \omega_0 t_L}\right].
\label{Inl}
\end{eqnarray}
The oscillating factors in these expressions are due to
reflections from the contacts of current pulses generated at
the impurity. The presence of such characteristic oscillations in the static I-V curves was first noted by Dolcini et al.~\cite{Dolcini}.
In the same approximation we can calculate the noise spectrum, and we find two maxima of the noise spectrum around the frequencies $\omega = \pm \omega_0$
\begin{equation}
\langle \delta \hat I(\omega) \delta \hat I(\omega') \rangle \approx \frac{\pi \Gamma (1-2 K_\rho) \sin \pi K_\rho G_0^2 V_T^{2(1-K_\rho)} \delta (\omega + \omega')}{2 (1-K_\rho)^{1-K_\rho} K_\rho^{3K_\rho}||\omega|- \omega_0|^{1-2K_\rho}} .
\label{noise}
\end{equation}
Note that the maxima are present under the same condition, $K_\rho < 1/2$, under which the solution with a finite amplitude of the oscillations at high voltages was found. According to (\ref{noise}), the integral noise power is of the order of $G_0^2 W_i^2$, which is much larger than the ac signal power $\sim I_{ac}^2$ at the frequency $\omega_0$.
In the case of long-range Coulomb interaction the correlation function can be found similarly, and at $\omega \gg v_F/L$ we find
$$
\langle \delta \hat I(\omega) \delta \hat I(\omega') \rangle \sim \frac{W_i^2}{8\gamma ||\omega|- \omega_0| \ln\frac{2v_F}{\omega d}}\left(\ln \frac{|\omega|||\omega|- \omega_0|}{W_i^2}\right)^{\frac{1}{4\pi\gamma}} \delta (\omega + \omega').
$$
\subsection{Validity of Gaussian approximation}\label{nonG}
Now we discuss the conditions under which the Gaussian model that we have used to describe the fluctuations can be justified quantitatively. Note that fluctuations of the displacement field $\hat \phi$ in a pure 1D system are Gaussian, because the TL Hamiltonian is quadratic, and the mean-square fluctuations are infinite, $\langle \hat\phi^2 \rangle = \infty$. An impurity makes the fluctuations at the impurity site finite, see (\ref{phi2}), but the fluctuations become non-Gaussian because of the cosine impurity term in the Hamiltonian. As the current passes the impurity, the impurity term oscillates, and the frequency of the oscillations increases with increasing voltage. This results in a decrease of the time-averaged impurity potential, making the impact of the impurity effectively smaller. Therefore one should expect that the relative contribution of the non-Gaussian part of the fluctuations must decrease in comparison with the Gaussian part. Then at voltages $V \gg V_T$ we can try to calculate the non-Gaussian contribution to the fluctuations perturbatively.
We separate two contributions to the fluctuating part of the phase, $\hat\phi = \hat\phi_G+\hat\phi_1$, where the first term is the Gaussian contribution, which satisfies the simplified equation (\ref{phi-flu}), while $\hat\phi$ satisfies the full equation (\ref{phi0op}). Considering the non-Gaussian part $\hat\phi_1$ as a small correction, we linearize (\ref{phi0op}) and obtain an equation for $\hat\phi_1$. Considering, again, zero temperature and a long conducting channel, $V_T t_L \gg 1$, when $Z(t) \approx K_\rho \delta (t)$, we derive the equation of motion for the third cumulant, $C_3(t) = \langle \hat \phi_1 (t) \hat\phi_G (0)^2 \rangle$ in the first approximation:
$$
\partial_t C_3(t) + W (t) C_3 (t) = 4 W_i K_\rho h(t) \sin 2 \Phi (t) \langle \{\phi_G(t) \phi_G(0)\}\rangle^2 .
$$
The solution of this equation has the form
$$
C_3(t) = -\int_{-\infty}^t dt_1 e^{ - \int_{t_1}^{t} W (t_2) dt_2 } 4 W_i K_\rho h(t_1) \sin 2 \Phi (t_1) \langle \{\phi_G(t_1) \phi_G(0)\}\rangle^2 .
$$
Calculating the integral and keeping the leading terms, we find
\begin{equation}
C_3(0) \approx
0.35 K_\rho\left[1 - K_\rho \ln \frac{4V^2}{\pi W_0^2} \right].
\label{c3}
\end{equation}
Similarly, we can calculate the fourth cumulant $C_4 = \langle \hat \phi_1 \hat\phi_G (0)^3 \rangle$. Comparing the non-Gaussian contributions with the Gaussian ones (\ref{cor2}), we find that the non-Gaussian contributions are relatively small, $C_3 \ll \langle \hat\phi_G^2 \rangle^{3/2}$ and $C_4 \ll \langle \hat\phi_G^2 \rangle^{2}$, at small $K_\rho$ and large voltages.
\section{DYNAMIC REGIME OF CONDUCTION IN THE SPINFUL LUTTINGER LIQUID}\label{spinful}
\subsection{Refermionization in the spin channel}\label{Re-spin}
In the spinful LL, similarly to the results of Sec.~\ref{spinless}, the Gaussian approximation for fluctuations in the charge channel
can be justified in the limits of strong interaction and high voltages. But in the spin
channel, fluctuations at the impurity site are always non-Gaussian. However, if the interaction is spin-rotation
invariant ($K_\sigma =1$) and the impurity is situated in the middle of the wire,
we can solve the
problem exactly using the refermionization method. This
method consists in introducing new fermionic variables for the
spin channel. The equations of motion for these variables
are linear and, hence, soluble. Refermionization was
used successfully to treat charge fluctuations in the
spinless case for the specific value of the interaction
parameter $K_\rho = 1/2$ \cite{Grabert,
Chamon_Freed_Wen} and to describe spin fluctuations in the
spinful case for $K_\sigma =1$ \cite{Matveev,ArtVaRe}. Following the
approach of Ref.~\cite{Grabert} we introduce the new phase
fields
\begin{equation}
\hat \phi_{\pm} (x) = \frac{1}{\sqrt{2}}\left[\hat \Phi_{\sigma} (x) + \hat \Theta_{\sigma}(x) \right] \pm \frac{1}{\sqrt{2}}\left[\hat \Phi_{\sigma} (-x) - \hat \Theta_{\sigma}(-x) \right].
\label{boso}
\end{equation}
The new fields are completely decoupled, and the impurity term couples only to the field $\hat \phi_+$. Then we introduce new fermion variables
\begin{equation}
\sqrt{\frac{1}{2\pi a}}e^{i\hat \phi_+} = \hat g \hat \psi, \quad \hat g = \hat c + \hat c^\dag, \label{e2}
\end{equation}
where $\hat g/\sqrt{2}$ is an auxiliary Majorana
fermion operator. We derive the equations of motion for the
Heisenberg operators $\hat \psi$ and find that they depend
on $x-v_F t$. The equations of motion for the operators $\hat
\psi_{1,2} (t) = \hat \psi (x= \mp 0, t)$ at the impurity
site and for $\hat g$ have the form
\begin{equation}
v_F (\hat \psi_2 - \hat \psi_1) = i \hat g f, \quad
\partial_t \hat g = i [f(\hat \psi_1 + \hat \psi_2) - f (\hat \psi^\dag_1+ \hat \psi^\dag_2)],
\label{dtg}
\end{equation}
where $f(t) = \sqrt{2\pi a} W \cos \sqrt{2}\hat \Phi_{\rho}.$
The density perturbations of the new fermions are related in the standard way to the gradient of the displacement field
\begin{equation}
\hat\psi^{+}_{x}\hat\psi_{x} - \langle \hat\psi^{+}_{x}\hat\psi_{x} \rangle_0 = \frac{1}{2\pi}\partial_x\hat\phi_{+}(x).
\label{ref-density}
\end{equation}
Consider, again, the limit of strong electron-electron interaction, when fluctuations in the charge channel are small, and represent the field operator at the impurity site
as a sum of its expectation value and a fluctuating part,
$\hat\Phi_\rho = \Phi_\rho + \hat\phi_\rho$, $\Phi_\rho =
\langle \hat\Phi_\rho\rangle$, taking into account the
fluctuations $\hat\phi_\rho$ in the linear
approximation. Then the commutators of $f$ at different times
are small, and we can ignore time-ordering and solve
(\ref{dtg}) for $\hat g(t)$:
$$
\hat g = 2i \int^t dt_1 [ f(t_1) \hat \psi_1 (t_1)- f(t_1) \hat \psi^\dag_1(t_1)] \exp{\left[-\frac{2}{v_F} \int^t_{t_1} f(t_2)^2 dt_2\right]}.
$$
Now, using (\ref{dtg}) and the anticommutator $\{\hat g (t),\hat \psi^\dag_1 (t)\} = \frac{if}{v_F}$, we can obtain the following expression for
$\cos \sqrt{2}\hat \Phi_{\sigma}$:
\begin{eqnarray}
&&
\cos \sqrt{2} \hat \Phi_\sigma (t) = 2 i \pi a W \int^t_{-\infty} \! dt_1 \cos \sqrt{2} \hat \Phi_\rho (t_1) e^{\left[-\frac{2}{v_F} \int^t_{t_1} f(t_2)^2 dt_2\right]}
\label{I2}
\\
&&
\times \left\{
[\hat \psi_1 (t_1)- \hat \psi^\dag_1(t_1)]\hat \psi_1 (t) + \hat \psi^\dag_1(t)[\hat \psi_1 (t_1)- \hat \psi^\dag_1(t_1)]
\right\}
.
\nonumber
\end{eqnarray}
We insert (\ref{I2}) into the equation of motion for the
charge phase (\ref{em-rho}). In the limit of small
fluctuations, averaging over the charge and spin
variables can be performed separately, since the fluctuations
in the spin and charge sectors are independent. The expectation
values of the fermionic densities in the averaged equation (\ref{I2}) can be related to the distribution function of the
new fermions by
\begin{equation}
\langle \hat \psi^\dag_1 (t_1)\hat \psi_1 (t_2) \rangle = \int \frac{d\varepsilon}{2\pi v_F} n (\varepsilon, t) e^{i\varepsilon (t_1-t_2)},
\label{n}
\end{equation}
where $t =(t_1 +t_2)/2$. The pairings $\langle \hat \psi_1 (t_1)\hat \psi_1 (t_2) \rangle =0$, because the operators with subscript 1
correspond to the incident spin excitations, which are not affected by the impurity:
the coefficient of reflection from the contact,
$r=\frac{1-K_\sigma}{1+K_\sigma}$, is equal to zero for
$K_\sigma=1$. Note that this differs from the
case of the charge channel considered in Ref.~\cite{Grabert}, because the charge excitations incident on
the impurity contain a fraction transmitted through the
impurity and then reflected from the contact.
Now we need to find the distribution function $n(\varepsilon, t)$. To do this we first subtract the boundary conditions (\ref{bc-operator}) at $x=-L/2$ and $x=L/2$ for the spin sector and obtain
\begin{equation}
v_F \partial_x \hat\phi_{+}\left(-\frac{L}{2}, t \right) = \hat P_\sigma .
\label{bc-refermion}
\end{equation}
We express the derivative $\partial_x\hat\phi_{+}$ using (\ref{ref-density}), take the expectation value and find the condition for the fermion density expressed in terms of the distribution function
\begin{equation}
\int \frac{d\varepsilon}{2\pi} [n (\varepsilon, t) - n_F (\varepsilon)] = V_\sigma (t).
\label{refermion-cond}
\end{equation}
Next we multiply equations (\ref{bc-refermion}) taken at different times $t_1$ and $t_2$ and calculate the expectation value. Reducing the products of four fermions to sums of products of pairs in the standard way and using (\ref{P-corr}), we end up with the kinetic equation
\begin{equation}
\int
n (\varepsilon-\omega, t)[1-n (\varepsilon , t)]d\varepsilon=\frac{\omega}{2}
\left(1+\coth\frac{\omega}{2T}\right). \label{ke}
\end{equation}
The solution of equations (\ref{refermion-cond}-\ref{ke}) has the form of the equilibrium function with the chemical potential equal to the spin bias
\begin{equation}
n (\varepsilon , t) =[1+e^\frac{\varepsilon - V_\sigma (t)}{T}]^{-1}.
\label{n-sol}
\end{equation}
Note that the distribution function has such a form because at $K_\sigma =1$ there are no reflections of excitations from the contacts. In the case of spinless electrons with $K_\rho =1/2$ we would obtain a kinetic equation different from (\ref{ke}), which does not have a solution in the form of the equilibrium distribution, because the particles incident on the impurity contain a fraction that has passed the impurity and was then reflected ($r = \frac{1-K_\rho}{1+K_\rho} = \frac{1}{3}$) from the contact. Therefore, the equilibrium form of the distribution function of the fermions assumed in Ref.~\cite{Grabert} needs a justification.
Using (\ref{n-sol}) in (\ref{n}), we insert (\ref{I2}) into (\ref{em-rho}) and perform the integration over energies. Then we find a closed equation for the charge phase
\begin{eqnarray}
&&
\partial_t \hat\Phi_{\rho} + \frac{w}{\sqrt{2}} Z \otimes \sin \sqrt{2} \hat\Phi_{\rho}(t) \int^{\infty}_{0} dt_1 \frac{T\cos\sqrt{2} \hat\Phi_{\rho}(t-t_1)}{\sinh \pi T t_1}
\label{I4}
\\
&&
\times e^{-2w \int^{t}_{t-t_1} \cos^2\sqrt{2} \hat\Phi_{\rho}(t_2) dt_2} \cos{V_\sigma \left(t-\frac{t_1}{2}\right)t_1} = F \otimes \hat P_\rho ,
\nonumber
\end{eqnarray}
where $w = 2\pi a W^2/v_F$ is the characteristic potential related to the impurity potential renormalized by spin fluctuations. This expression is exact in the limit of strong interaction, and now we discuss the conditions of validity of our approach, which assumes smallness of the fluctuations at the impurity site.
To estimate the fluctuations we simplify (\ref{I4}), taking into account the logarithmic divergence of the integral at $t_1 =0$. Then, with logarithmic accuracy, we
perform the integration neglecting the $t_1$ dependence of the regular part of the integrand and using the standard ultraviolet cut-off of the integration at $t_1 \sim 1/\Lambda$. This gives
\begin{equation}
\partial_t \hat\Phi_{\rho} + V_0 Z \otimes \sin 2\sqrt{2} \hat\Phi_{\rho}
= F \otimes \hat P_\rho, \quad V_0 = \frac{w}{2\pi\sqrt{2}} \ln\frac{\Lambda}{w}.
\label{em-fluc}
\end{equation}
This equation is similar to (\ref{phi0op}) for the single-mode LL and can be made identical to (\ref{phi0op}) by a change of notation. Therefore, for the case of short-range interaction we can use the results of Sec.~\ref{spinless}. We then find that in the limit of low voltages the fluctuations are small provided $K_\rho \ln \frac{\Lambda}{w} \ll 1$, while from (\ref{cor2}-\ref{dV}) we find that in the limit of large voltages the fluctuations are small under the condition $K_\rho \ln \frac{\Lambda \omega_0}{w^2} \ll 1$.
In the case of long-range interaction we solve (\ref{em-fluc}) in the linear approximation in the fluctuating part of the displacement field, $\hat \phi_\rho = \hat \Phi_\rho - \Phi_\rho$, $\Phi_\rho = \langle \hat \Phi_\rho \rangle$, and find
$$
\hat\phi_\rho = \frac{F(\omega) \delta \hat P_\rho}{-i\omega + C }, \quad C = 2\sqrt{2} V_0 \langle Z \otimes \cos 2\sqrt{2} \Phi_\rho \rangle_t,
$$
where $\langle \cdots \rangle_t$ means time-averaging. To calculate the constant $C$ we must solve (\ref{em-fluc}) for the expectation value $\Phi_\rho$. Here we assume that the temperature is low enough, $T \ll V_0$, and limit our estimates to the cases of small and large voltages.
In the limit of small voltages, following the study of the dynamics in Sec.~\ref{spinless}, we obtain, again with logarithmic accuracy,
$$
\langle \delta \hat\Phi_\rho^2 \rangle \approx \sqrt{ \frac{1}{8 \gamma} \ln\left(\frac{v_F \sqrt{\gamma}}{d V_0}\right) }.
$$
In the limit of large voltages the phase increases linearly $2\sqrt{2}\Phi \approx \omega_0 t$ with $\omega_0 = 2\pi f = 2\pi\bar I \approx 2V$, and with the logarithmic accuracy we find
$$
\langle \delta \hat\Phi_\rho^2 \rangle \approx \sqrt{ \frac{1}{8 \gamma} \ln\left(\frac{v_F \omega_0 \sqrt{\gamma}}{d V_0^2}\right)} .
$$
We then conclude that in the case of long-range Coulomb interaction the fluctuations of the displacement field at the impurity site in the charge channel are not large for values of the parameter $\gamma$ of the order of unity or larger, which is satisfied for typical values of the Fermi velocity, cf.~(\ref{gamma}).
\subsection{I-V curves and current oscillations}\label{IV-spin}
To calculate the current we must solve (\ref{I4}). In the quasiclassical limit we can neglect fluctuations and substitute for $\hat \Phi_\rho$ its expectation value $\Phi_\rho$. But it is not simple to find the solution analytically; therefore, we restrict our study to limiting cases.
The simplest case is the regime of current bias. For a time-independent spin bias $V_\sigma$ we obtain
\begin{eqnarray}
&&
V (t) = \frac{\omega_0}{2} + \frac{w}{\sqrt{2}} \int_0^\infty \! d\tilde t \, d t_1 \, Y(\tilde t) \sin \frac{\omega_0(t-\tilde t)}{2} \cos\frac{\omega_0(t-\tilde t - t_1)}{2}
\label{V}
\\
&&
\times \cos{V_\sigma t_1}
\frac{T }{\sinh \pi T t_1} \exp\left\{-w t_1 - z \;\cos \left[\frac{\omega_0 t_1}{2} + \omega_0( t-\tilde t)\right] \right\} ,
\nonumber
\end{eqnarray}
where $Y(\omega) = Z(\omega)/F(\omega)$, $z=\frac{2w}{\omega_0}\sin \frac{\omega_0 t_1}{2}$.
Performing the time averaging of (\ref{V}), we find the static I-V curves:
$$
V_{dc} = \frac{\omega_0}{2} + \frac{w}{4} \int^{\infty}_{0} d t_1 \frac{T \cos{V_\sigma t_1}}{\sinh \pi T t_1}
\left[ \sin\omega_0 t_1 \; I_1 \! \left( z \right) + \sin \frac{\omega_0 t_1}{2}\,I_0 \!\left( z \right) \right]e^{-wt_1} . \label{V2}
$$
This result is the same for both short-range and long-range interaction, which is not surprising, since we consider here the limit of strong interaction, and fluctuations in the charge channel are neglected. The general view of the I-V curves for different values of the spin bias is presented in Fig.~\ref{fig-IV}. In the limit of small current, $\omega_0 \ll w$, the second term dominates and $I \propto \sqrt{\frac{\omega_0}{w}}$.
In the opposite limit of large currents, $\omega_0 \gg w$, the results are similar for both voltage and current bias, and the asymptotic I-V curve is parallel to the Ohm's law line corresponding to the conductance quantum $2G_0 = e^2/(\pi \hbar)$, with the excess voltage $V_{exc} =\frac{w}{8}$.
\begin{figure}[!ht]
\vskip 0mm \centerline{ \psfig{figure=iv-inset3.eps,height=8cm
,angle=0} }
\caption{I-V curves at different temperatures $T$ and spin bias
$V_\sigma$. Voltage is measured in units of the characteristic
potential $w$, and current in units of $w/G_0$. Dotted line: Ohm's
law $I=2G_0 V$. Solid lines: $V_\sigma = 0$ for $T = 0, 0.2w,
1.0w$ from bottom to top. Dashed lines: $T=0$ for $V_\sigma =
0.1w, 0.3w, 0.5w$ from bottom to top. The initial part of the I-V
curves is shown in the inset.} \label{fig-IV}
\end{figure}
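The time-averaged relation (\ref{V2}) is straightforward to evaluate numerically. The following hedged sketch (illustrative units $w=1$, $V_\sigma=0$, $T\to 0$; scipy's exponentially scaled Bessel functions keep the integrand stable) reproduces curves of the type shown in Fig.~\ref{fig-IV}:
\begin{verbatim}
# Numerical sketch of the time-averaged I-V relation (V_sigma = 0, T -> 0).
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e, i1e    # I0, I1 times exp(-|z|)

w = 1.0

def V_dc(omega0):
    def integrand(t1):
        z = 2 * w / omega0 * np.sin(omega0 * t1 / 2)
        # T/sinh(pi T t1) -> 1/(pi t1) at T = 0; recombine exp factors
        damp = np.exp(np.abs(z) - w * t1) / (np.pi * t1)
        return damp * (np.sin(omega0 * t1) * i1e(z) +
                       np.sin(omega0 * t1 / 2) * i0e(z))
    val, _ = quad(integrand, 1e-9, 40.0, limit=800)
    return omega0 / 2 + w / 4 * val

for I in (0.1, 0.3, 1.0, 3.0):      # dc current, so that omega0 = 2*pi*I
    print(I, V_dc(2 * np.pi * I))   # approaches Ohm's law + V_exc = w/8
\end{verbatim}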
The time dependence of the voltage (\ref{V}) can be characterized by the amplitudes of the harmonics $n>0$.
At small currents, $\omega_0 \ll w$, the amplitudes of the harmonics decay slowly, approximately as $1/\sqrt{n}$.
In the limit of large currents, $\omega_0 \gg w$, the harmonics decay as a power law, and with logarithmic accuracy we obtain for $n>0$
$
V_{n} \approx \frac{w}{8\pi} |Y(n \omega_0)| \left(\frac{w}{2\omega_0}\right)^{n-1}\ln\frac{\Lambda}{\omega_0}.
$
Consider now the case of voltage bias, when the system is driven by an external voltage $V + V_1 \cos \omega t$, and
assume the limit of large voltages, $V \approx \omega_0 \gg w$, when the second term on the
left-hand side of (\ref{I4}) is a small perturbation.
The ac voltage modifies the I-V curves, and the most striking part of this modification is the
resonant steps analogous to the Shapiro
steps in Josephson junctions. In
contrast to Josephson junctions, these steps occur not at
constant voltage but at constant current, $I=ef$, as in the
regime of Coulomb blockade~\cite{AverinLi} and in the regime
of sliding CDW in linear-chain conductors~\cite{Gruner}. At
this current the frequency of the ac voltage is equal to the
frequency of the current oscillations in the wire. The width of
the step at $V \gg w$ and $V_1 \ll V$ can be calculated
straightforwardly using a perturbative approach. With
logarithmic accuracy we find
$$
V_{step}= \frac{V_1 w}{\pi V} |F(\omega_0)| \ln \frac{\Lambda}{\omega_0}.
$$
A non-zero dc spin bias induces a spin current which contains both dc and ac parts. The spin current can be calculated according to the relation
$
I_\sigma = \frac{\sqrt{2}}{\pi}\partial_t \langle \hat \Phi_\sigma \rangle,
$
where $\hat \Phi_\sigma$ can be found from the equation of motion (\ref{em-sigma}) using equations (\ref{e2}) and (\ref{n}).
In the limit of large voltages, $\omega_0 \gg w$, the spin current can be presented in a simple analytic form
$$
I_\sigma = \frac{V_\sigma}{2\pi}\left(1 + \frac{ w}{\pi \omega_0} \sin \omega_0 t \right).
$$
\section{NON-IDEAL CONTACT}\label{N-ideal}
As non-ideal contacts induce Friedel oscillations in the
quantum wire, one can expect that such contacts must induce
effects in transport similar to those in the
system with an impurity studied in the previous sections. This statement is
supported by the results of our letter~\cite{ArtAsSh}, where we studied the spinless LL with
two identical non-adiabatic contacts. However, the problem of
transport through a non-adiabatic contact to a quantum wire
with a spinful interacting electron gas was not solved. In
this section we consider electronic transport through a
clean quantum wire described as a spinful LL with one ideal
adiabatic contact and a second, non-ideal contact. The main
difficulty in solving this problem is, again, the large
fluctuations of the displacement field $\hat\Phi_\sigma$.
And, again, we solve this problem by means of
refermionization in the spin channel.
To study the role of non-ideal contacts we proceed similarly to the previous sections, solving the equations of motion for the displacement fields with boundary conditions.
Consider the boundary conditions for the spin channel with $K_\sigma=1$, with an ideal adiabatic contact at $x=L$ and a non-adiabatic contact at $x=0$. Then the boundary conditions read
\begin{eqnarray}
&&
(v_F \partial_x - \partial_t )\hat\Phi_\sigma (x=0) = \hat P^{L}_\sigma -\sqrt{2}f\varepsilon_F \sin \sqrt{2}\hat\Phi_\sigma \cos\sqrt{2}\hat\Phi_\rho \label{bc-1nidal} \\
&&
(v_F \partial_x + \partial_t)\hat\Phi_\sigma (x=L)
= \hat P^{R}_\sigma . \nonumber
\end{eqnarray}
As $\hat\Phi_\sigma (x,t)$ satisfies the equation of
motion
\begin{equation}
\left( v_F^2 \partial^2_{x} - \partial^2_{t}\right)\hat\Phi_\sigma (t,x) = 0,
\label{phiB}
\end{equation}
we can find the solution for $\hat\Phi_\sigma (x,t)$ in terms of its values at the contacts. Using then the boundary conditions (\ref{bc-1nidal}), we obtain the equation of motion for the displacement field at the non-ideal contact
\begin{equation}
\partial_t \hat \Phi_{\sigma} + \sqrt{2}f\varepsilon_F \sin \sqrt{2}\hat \Phi_{\sigma}\cos \sqrt{2}\hat \Phi_{\rho}
= \frac{1}{2} [\hat P^{L}_\sigma (t) - \hat P^{R}_\sigma (t-t_L)] .
\label{em-sigmaB}
\end{equation}
This equation resembles the equation of motion for the displacement field at the impurity site. We map the problem of the non-ideal contact onto the impurity problem in a LL with ideal contacts at $x= \pm L$ and an impurity characterized by the back-scattering matrix element $\tilde W$ at $x=0$. The equation of motion for the displacement field and the boundary conditions for such an impurity read
\begin{eqnarray}
&&
\left( v_F^2 \partial^2_{x} - \partial^2_{t}\right)\hat\Phi_\sigma (t,x) =\sqrt{2}v_F \tilde W \sin \sqrt{2}\hat \Phi_{\sigma}\cos \sqrt{2}\hat \Phi_{\rho} \delta (x), \label{fimp} \\
&&
(v_F \partial_x \mp \partial_t)\hat\Phi_\sigma (x= \mp L)
= \hat Q^{L,R} . \nonumber
\end{eqnarray}
Here we denote the external sources of fluctuations by $\hat Q$; later we will relate them to the source terms $\hat P$. The equation of motion for the phase at the impurity site $x=0$ for this model has the form
\begin{equation}
\partial_t \hat \Phi_{\sigma} + \frac{\tilde W}{\sqrt{2}} \sin \sqrt{2}\hat \Phi_{\sigma}\cos \sqrt{2}\hat \Phi_{\rho}
= \frac{1}{2} [\hat Q^{L} (t-t_L) - \hat Q^{R} (t-t_L)] .
\label{em-sigmaFI}
\end{equation}
Comparing now equations (\ref{em-sigmaB}) and (\ref{em-sigmaFI}) we find that equations of motion become identical if we
choose
$$
\tilde W = 2 f\varepsilon_F, \quad \hat Q^{L} (t) = \hat P^{L}_\sigma (t+t_L), \quad \hat Q^{R} (t) =\hat P^{R}_\sigma (t).
$$
Thus, with these substitutions, we can use the results for the spin channel obtained in Sec.~\ref{Re-spin} for a quantum wire with one non-ideal contact.
Now let us consider the charge channel. Following the method used in Sec.~\ref{equations}, we find the solution of the equation of motion for $\hat \Phi_\rho (x,\omega)$ satisfying the boundary conditions. In this way we obtain an expression for the displacement field which depends on the values of both $\hat \Phi_\rho$ and $\hat \Phi_\sigma$ at the boundary with the non-adiabatic contact, as both these fields are present in the non-linear term of the boundary condition (\ref{bc-operator-sigma}). Then, using this solution at $x=0$, we find a non-linear equation of motion for $\hat \Phi_\rho (x=0)$ which is similar to (\ref{em-rho}), but with a different memory function $Z$ and a different right-hand side containing the source terms $\hat P^{L,R}_\rho$ in a non-symmetric way:
$
\partial_t \hat \Phi_{\rho} + f\varepsilon_F Z \otimes \sin \sqrt{2}\hat\Phi_\sigma \cos\sqrt{2}\hat\Phi_\rho
= Z \otimes \hat P^L_{\rho} - F \otimes \hat P^R_{\rho}.
$
For the short-range interaction Fourier components of the memory functions read
$
Z(\omega)= K_{\rho} \frac{1- iK_{\rho} \tan 2\omega t_L}{2 K_{\rho} -i(1+K_\rho^2) \tan 2\omega t_L}, \; F (\omega) = \frac{K_{\rho} }{2K_{\rho} \cos 2\omega t_L -i(1+K_\rho^2) \sin 2\omega t_L}.
$
In the case of long-range electron-electron interaction we proceed as in Sec.~\ref{equations} and find similar relations for the memory functions, but with $K_{\rho}(q_\omega)$ depending on frequency (cf.~Sec.~\ref{equations}).
In the limit of strong interaction these equations give results similar to the case of an impurity.
Thus we find that the problem of electron transport in a quantum wire with one non-ideal and one ideal contact is mapped onto the impurity problem. All the results obtained in Sec.~\ref{spinful} can then be used for the case of non-ideal contacts if we replace the impurity potential $W$ by $f\varepsilon_F$, the amplitude of the Friedel oscillations induced by the non-ideal contact.
\section{CONCLUSIONS}\label{concl}
Using the approach based on the bosonized Tomonaga-Luttinger
Hamiltonian, we have studied electronic transport in 1D
conductors with a single isolated impurity
or with non-ideal contacts to leads of higher dimension, and
predicted a new dynamical regime of conduction in which the
dc current is supplemented by ac oscillations at the
washboard frequency $f = \bar I/e$.
As thermal fluctuations strongly reduce the effect of an impurity on the conductance at temperatures $T > T_0 \sim V_T$, and the effect is also destroyed by fluctuations
in relatively short wires, shorter than a length of the order of $v/V_T$, the dynamic regime predicted in our
work can be observed at low enough temperatures in
a relatively long quantum wire, the minimal length and maximal
temperature being related to the magnitude of the defect potential
and the strength of the inter-electronic repulsion.
The impurity potential $W$ can be of different origin and of
different strength. The value of $W$ can be quite small if the
defect is made artificially, say, by the potential of a point
contact. If the defect is induced by an impurity
atom in the conduction channel, the potential can be
quite large, of the order of the Fermi energy.
In semiconductor-based quantum wires with a shallow impurity,
the value of $W$ is expected to be of the order of a few millivolts. In
this case the range of frequencies of the generated ac signal
can be quite large, up to the practically important terahertz
region, depending on the material of the quantum wire and the
origin of the defect.
\section*{Acknowledgments}
We are grateful to S. V. Remizov for useful discussions and
helpful comments. The work was supported
by the Russian Foundation for Basic Research (RFBR)
and by the Fund for non-profit programs ``Dynasty''.
We describe the {{\textit{humor exploration}}} view of {{\textit{DeHumor}}} in a bottom-up manner (Fig.~\ref{fig:teaser}C).
First, we illustrate the extraction and encoding of computational humor features in the \textbf{sentences and contexts}.
Then, the visual summary of the \textbf{whole speech}, as well as interactive features of the system, will be explained in detail.
\subsection{Humor Feature Analysis and Encoding}
\label{subsec:humorFeatureExtraction}
We utilize computational humor features to guide and enhance users' reasoning about the styles of verbal humor.
First, we describe how the textual (Sec.~\ref{subsub:text}) and audio (Sec.~\ref{subsub:audio}) features in Tab.~\ref{table:featuretable} are defined, computed, and represented at the \textbf{sentence level} (\textit{\textbf{R2}}).
To emphasize their potential co-occurrence, we encode all the features with inline glyphs (\textit{\textbf{R1, R4}}).
To further facilitate the \textbf{context} analysis (\textit{\textbf{R3}}), we design tools for extracting and visualizing the relationship among humor build-ups (Sec.~\ref{subsubsec:context_analysis}).
\subsubsection{Language Features and Glyphs}
\label{subsub:text}
We compute and encode three types of semantic features at the sentence level (\textit{\textbf{R4}}): incongruity, sentiment, and phonetics.
For each feature, a meaningful threshold is used to identify important words or phrases in the sentence, which are annotated with intuitive glyphs. These thresholds can be interactively adjusted by users according to the feature distribution in Fig.~\ref{fig:teaser}A2.
\textbf{Incongruity.}
Contrasting incongruous concepts (e.g., ``clean desk'' and ``cluttered desk drawer''~\cite{mihalcea2005making}) is a classic way of achieving a comic effect.
The semantic incongruity of a sentence
can be modeled by the repetition and disconnection, or the relative semantic similarities between word pairs~\cite{yang2015humor}.
\textbf{Disconnection} captures the least semantically similar word pair in the sentence.
As shown in Fig.~\ref{fig:incongruity}A, a pair of dashed arrows (\includegraphics[height=\fontcharht\font`\B]{pictures/discon0.png}, \includegraphics[height=\fontcharht\font`\B]{pictures/discon1.png})
is placed above the words ``brother'' and ``prison'' to indicate their disconnection.
In contrast, \textbf{intra-sentence repetition} focuses on the most similar pair.
In Fig.~\ref{fig:incongruity}B, a pair of curved arrows
(\textcolor{repetition}{\reflectbox{\faUndo}}, \textcolor{repetition}{\rotatebox[origin=c]{360}{\faUndo}}) shows the repetition of the two ``cousin''s.
The arrows in a pair point to each other, showing the sequential order and positions of the corresponding word pair in the sentence.
We calculate semantic similarity using the cosine distance on GloVe~\cite{pennington2014glove} embeddings.
At the sentence level, we also annotate sentences that contain word pairs with strong disconnections (\textcolor{disconnection}{\faCircleONotch}) or repetitions (\textcolor{repetition}{\faRefresh}).
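Both scores can be sketched in a few lines of Python. Here a public GloVe checkpoint is loaded through gensim; this toolchain is an assumption for illustration, not necessarily the one used in our implementation:
\begin{verbatim}
# Minimal sketch of disconnection / intra-sentence repetition scoring,
# assuming pre-trained GloVe vectors loaded via gensim.
import itertools
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-100")

def incongruity_pairs(sentence):
    words = [w for w in sentence.lower().split() if w in glove]
    scored = [(glove.similarity(a, b), a, b)
              for a, b in itertools.combinations(words, 2)]
    # least similar pair = disconnection; most similar pair = repetition
    return min(scored), max(scored)

dis, rep = incongruity_pairs("my brother just got out of prison")
print("disconnection:", dis, "repetition:", rep)
\end{verbatim}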
\begin{figure}[!htb]
\centering
\vspace{-3mm}
\includegraphics[width=1\columnwidth]{pictures/incongruity.png}
\vspace{-5mm}
\caption{Examples of incongruity: (A) Disconnection (at the beginning of a speech) and
(B) Intra-sentence repetition.
}
\vspace{-3mm}
\label{fig:incongruity}
\end{figure}
\textbf{Sentiment.}
Expressing strong sentiment through polarized expressions (how emotionally positive or negative) and subjective statements (how personal) enables a speaker to empathize with the audience.
The \textbf{polarity} includes both the sentiment direction and the sentiment intensity.
We use vertical offsets to indicate words with strong polarity.
For example, ``stupid'' in Fig.~\ref{fig:sentiment} has a negative polarity and is therefore displayed with a downward vertical offset.
The \textbf{subjectivity}, on the other hand, is shown with brackets ``\textcolor{subjectivity}{\textbf{( )}}'' around a word.
As shown in Fig.~\ref{fig:sentiment}, ``stupid'' is associated with the speaker's subjective opinion.
We measure word-level and sentence-level polarity and subjectivity using the resource of word annotations and clues for sentiment in \cite{wilson2005recognizing}.
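For illustration only, word-level polarity and subjectivity can be probed with TextBlob's pattern lexicon (a stand-in, not the MPQA resource of \cite{wilson2005recognizing} that we rely on):
\begin{verbatim}
# Illustrative stand-in: polarity in [-1, 1], subjectivity in [0, 1].
from textblob import TextBlob

for word in ("stupid", "desk", "wonderful"):
    s = TextBlob(word).sentiment
    print(word, s.polarity, s.subjectivity)
\end{verbatim}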
\begin{figure}[!htb]
\vspace{-3mm}
\centering
\includegraphics[width=\columnwidth]{pictures/sentiment1.png}
\vspace{-7mm}
\caption{An example of sentiment expression.}
\vspace{-2mm}
\label{fig:sentiment}
\end{figure}
\textbf{Phonetic style.}
Phonetic style is often used to achieve a catchy verbal delivery, making the comic effect more memorable and engaging~\cite{mihalcea2005making}.
The most common techniques include (1) \textbf{alliteration chains}, which denote multiple words that begin with the same phones, and (2) \textbf{rhyme chains}, which include words ending with the same syllables.
We detect these chains within every sentence using the CMU Pronouncing Dictionary~\footnote{\url{http://www.speech.cs.cmu.edu/cgi-bin/cmudict}}, and visually underline the corresponding characters that are responsible for creating the chain.
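A compact sketch of both chains on top of the same CMU dictionary, via the pronouncing package (one convenient interface; the grouping keys are simplified):
\begin{verbatim}
# Sketch of alliteration / rhyme chain detection with the CMU dictionary.
import pronouncing

def phonetic_chains(sentence):
    allit, rhyme = {}, {}
    for w in sentence.lower().split():
        phones = pronouncing.phones_for_word(w)
        if not phones:
            continue                          # out-of-dictionary word
        allit.setdefault(phones[0].split()[0], []).append(w)
        rhyme.setdefault(pronouncing.rhyming_part(phones[0]), []).append(w)
    prune = lambda d: {k: v for k, v in d.items() if len(v) > 1}
    return prune(allit), prune(rhyme)

print(phonetic_chains("big brown bears bring back a wide ride"))
\end{verbatim}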
\begin{figure}[!htb]
\centering
\vspace{-3mm}
\includegraphics[width=\columnwidth]{pictures/phonetic.png}
\vspace{-7mm}
\caption{Examples of phonetic styles.
A: Alliteration.
B: Rhyme.}
\vspace{-5mm}
\label{fig:phonetic}
\end{figure}
\subsubsection{Audio Features and Glyphs}
\label{subsub:audio}
We extract and encode the following four representative audio features, which reveal the speaker's vocal delivery style.
Most of these features are captured by computing their relative significance within a sentence or paragraph.
For example, for the speed variation below, we define a word to be significantly faster if its speed is $N$ times the average speed in the sentence, or $M$ standard deviations above the average, with given thresholds $N$ and $M$ (defaulting to $1$ and $1.5$, respectively).
\textbf{Speed variation.} We compute the Syllables Per Minute (SPM) for each word,
as well as the average and standard deviation (SD) of the SPM for each sentence.
As shown in Fig.~\ref{fig:audiofeature}A, we mark the words that are significantly faster than the rest of the sentence as ``faster~(\textcolor{fast}{\faAngleDoubleRight})''.
Similarly, the words that are significantly slower are labeled as ``slower~(\textcolor{slow}{\faAngleDoubleLeft})''.
\textbf{Pause.} We calculate the time intervals between consecutive words. If an interval exceeds a threshold (defaulting to 0.5s), a dark blue rectangle is drawn in front of the corresponding word (e.g., ``So'' in Fig.~\ref{fig:audiofeature}B). The width of the rectangle encodes the pause length.
\textbf{Volume variation.}
We mark the words that are significantly louder or softer than the preceding word in the sentence.
They are labeled as ``louder (\textcolor{louder}{\faLevelUp})'' or
``softer (\textcolor{softer}{\faLevelDown})''.
\textbf{Pitch stress.}
Similar to the volume variation,
we derive the words that are significantly higher pitched or have more pitch variation based on the pitch contours, and encode them with ``stress (\textcolor{stress}{\underline{\textbf{U}}})''.
The thresholds above are set according to \cite{rubin2015capture, wang2020voicecoach}.
They are fine-tuned empirically by testing on audio samples of the speech data
and can be interactively adjusted in Fig.~\ref{fig:teaser}A.
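A condensed sketch of the speed and pause flags, assuming word-aligned timestamps and syllable counts from a forced aligner (the input schema below is hypothetical, and the disjunctive criterion is one reading of the definition above):
\begin{verbatim}
# Word-level "faster"/"slower"/pause flags; each word is a dict with
# keys "text", "start", "end" (seconds) and "syl" (syllable count).
import numpy as np

def delivery_flags(words, N=1.0, M=1.5, min_pause=0.5):
    spm = np.array([w["syl"] / (w["end"] - w["start"]) * 60 for w in words])
    mu, sd = spm.mean(), spm.std()
    out = []
    for i, w in enumerate(words):
        f = []
        if spm[i] > N * mu or spm[i] > mu + M * sd:
            f.append("faster")
        elif spm[i] < mu / N or spm[i] < mu - M * sd:
            f.append("slower")
        if i > 0 and w["start"] - words[i - 1]["end"] > min_pause:
            f.append("pause")              # gap longer than the threshold
        out.append((w["text"], f))
    return out
\end{verbatim}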
\begin{figure}[!htb]
\centering
\vspace{-3mm}
\includegraphics[width=\columnwidth]{pictures/audiofeature.png}
\vspace{-7mm}
\caption{Examples of vocal delivery styles. A: Speed variations. B: A combination of pause, and volume and speed variations.}
\label{fig:audiofeature}
\vspace{-2mm}
\end{figure}
\subsubsection{Humor Context Analysis and Linking}
\label{subsubsec:context_analysis}
To reveal the relationships among the build-ups of a punchline (\textit{\textbf{R3}}),
we extract and link similar concepts in the punchline context.
As mentioned in Sec.~\ref{subsec:designrequirements}, a speaker tends to repeat useful concepts to help prepare the audience for the upcoming punchline.
In the example below,
the core takeaway of the punchline is that Germany \emph{does not} have \example{fantastic food}.
The message becomes clear because of several repetitions in the preceding lines.
First, the speaker emphasizes his/her focus on the two countries by repeating (\example{the Italian community}, \example{Their people}, \example{Italy}) and (\example{Germany}) in several places.
Second, in Lines \#3 and \#5, the different modifiers \example{a} and \example{no} before the repeated \example{stigma for (being) evil} highlight the opposite reputations of Germany and Italy after WWII, and therefore build a natural comparison between the two.
The comparison is then carried on to the punchline, implying that German food is the opposite of Italian food.
\qbox{
Let me go after \gitaly{the Italian community}.\\
\gitaly{Their people} get off easy.\\
\ggermany{Germany} \gstigma{has a stigma} \gevil{for being evil}.\\
But if you check history, \gitaly{Italy} \ggermany{fought right alongside Germany} in WWII.\\
But we \gstigma{have no stigma} \gevil{for evil}, and do you guys know why?\\
It's because we have fantastic food.
}
With this example in mind, we first present an algorithm that captures such inter-sentence repetitions, and then describe the visual display.
\paragraph*{\textbf{Concept Grouping Algorithm}}
Concept grouping may sound trivial at first glance---naive string matching among different tokens may suffice if we assume concepts are always repeated in strictly identical forms.
However, in practice, we frequently observe context rephrasing.
For example, while the concept entity \example{Italy} is introduced as a modifier of its community in the first sentence, in Line \#2 it is only implicitly referred to by a pronoun.
Beyond entities, more concepts appear in the form of modifier segments (synonym adjectives, similar prepositions on different entities, etc.), just like \example{for being evil} and \example{for evil} in Lines \#3 and \#5, respectively.
Another intuitive method---grouping semantically similar full sentences---can relax the constraint of ``identical repetition'', but is likely to miss cases where only a small part of the sentences have overlapping concepts (e.g., \example{Germany} in Lines \#3 and \#4).
\begin{figure}[!htb]
\vspace{-0.1in}
\centering
\includegraphics[width=\columnwidth]{pictures/sen1_dep.pdf}
\vspace{-0.2in}
\caption{The dependency tree of Sentence \#1 in the Sec.~\ref{subsubsec:context_analysis} example.
}
\label{fig:parse_tree}
\vspace{-0.1in}
\end{figure}
To capture free-form concepts hidden within full sentences, our grouping method performs two crucial steps:
First, \textbf{to separate concepts from long sentences}, we induce subphrases by traversing the dependency tree of a given full sentence\footnote{With the NLP processing library SpaCy: \url{https://spacy.io/}}.
For example, with Line \#1 parsed into the tree in Fig.~\ref{fig:parse_tree}, we get verb phrases like \example{go after the Italian community} as well as noun phrases like \example{the Italian community}.
Second, \textbf{to merge the rephrasings}, we perform density-based clustering on the induced subphrases based on their semantic similarity:
we resolve coreferences between sentences (e.g., \example{Their people} in Line \#2 becomes \example{Italian people})\footnote{With \url{https://github.com/huggingface/neuralcoref}}.
Then, we transform phrases into feature vectors with a state-of-the-art universal sentence encoder~\cite{reimers-2019-sentence-bert} and compute the cosine similarity in the embedding space. \xingbo{This approach is effective for the semantic textual similarity (STS) task (with an accuracy score of 85\% on the STS Benchmark).}
Finally, because subphrases recognized through the parse tree overlap with each other, we reduce redundancy in the extracted repetitions by keeping the segment with the highest similarity to its cluster (e.g., in \example{go after the Italian community}, the first two words are considered unnecessary).
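To make the pipeline concrete, the listing below gives a minimal sketch of the grouping steps in Python. It relies on off-the-shelf components consistent with our toolchain (spaCy for dependency parsing, a Sentence-BERT encoder via the \texttt{sentence-transformers} package, and scikit-learn's DBSCAN); the model name, the subtree heuristic, and the distance threshold are illustrative assumptions rather than our exact configuration, and the coreference-resolution step (handled by \texttt{neuralcoref} in our pipeline) is omitted for brevity.
\begin{verbatim}
import spacy
from sentence_transformers import SentenceTransformer
from sklearn.cluster import DBSCAN

nlp = spacy.load("en_core_web_sm")
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def induce_subphrases(sentence):
    """Step 1: walk the dependency tree; every verb/noun head yields the
    subphrase spanned by its subtree (skipping single words and spans
    that cover nearly the whole sentence)."""
    doc = nlp(sentence)
    phrases = []
    for token in doc:
        if token.pos_ in ("VERB", "NOUN", "PROPN"):
            span = doc[token.left_edge.i : token.right_edge.i + 1]
            if 1 < len(span) < 8:
                phrases.append(span.text)
    return phrases

def group_concepts(sentences, eps=0.35):
    """Step 2: embed all subphrases and cluster them by cosine distance;
    clusters whose members span more than one sentence are reported as
    inter-sentence repetitions."""
    phrases, owners = [], []
    for i, sent in enumerate(sentences):
        for p in induce_subphrases(sent):
            phrases.append(p)
            owners.append(i)
    if not phrases:
        return []
    emb = encoder.encode(phrases, normalize_embeddings=True)
    labels = DBSCAN(eps=eps, min_samples=2, metric="cosine").fit_predict(emb)
    groups = {}
    for phrase, owner, label in zip(phrases, owners, labels):
        if label != -1:  # -1 marks noise points
            groups.setdefault(label, []).append((owner, phrase))
    # Keep only clusters whose members come from at least two sentences.
    return [g for g in groups.values() if len({o for o, _ in g}) > 1]
\end{verbatim}
Each resulting cluster is then pruned as described above by keeping, per sentence, the member segment with the highest similarity to its cluster, which implements the redundancy reduction step.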
\paragraph*{\textbf{Visualizing Contextual Repetitions}}
We design a context linking graph to display the extracted \xingbo{inter-sentence repetition} occurrences.
As shown in Fig.~\ref{fig:contextalternatives}C (a more complete version can be seen in Fig.~\ref{fig:comedy-cousin_case}),
the graph follows a three-stage design, such that it gradually reveals the concrete context information to the user and traverses from the context level (\textbf{\textit{R2}}) to the sentence level (\textbf{\textit{R3}}).
The graph first provides a \textbf{context distribution summary}, which shows \emph{how}
\xbRev{the sentences are}
connected to each other through repeated concepts.
A rounded gray rectangle represents a sentence, \xbRev{and its} \xingbo{horizontal length} encodes the sentence length.
We highlight the most important punchline with a denser gray color (\textbf{\textit{R4}}).
We connect rectangles with arc links if their corresponding sentences share repetitive concepts, and add thin lines below the rectangles to denote the presence of these concepts.
The combination of the links and the lines helps highlight different structures of humor build-up.
For example, Fig.~\ref{fig:contextalternatives}C implies that most repetitions occur in the first half of the context, especially the third sentence, with three concepts repeated elsewhere.
There is no link between the punchline and the context, suggesting that the punchline is disconnected from the previously mentioned ideas.
Then, it presents the concrete \textbf{repetitive concepts} to show \emph{what} concepts are used for building up the context, but still omits other details in the related sentences.
These concepts are sorted by the order in which they occur in the text segment, and each small dark rectangle marks the boundary of a sentence.
Finally, the repetitive concepts help locate \textbf{detailed sentence} contents and their associated humor features line by line, such that the abstract concepts can be integrated with the complete story.
\xbRev{We also design a set of interactions to enable the traversal of the context summary, repetitive concepts, and detailed sentences.}
Specifically, when users hover over a rectangle (sentence) of interest in the context summary,
all its connections and the corresponding groups of repetitive concepts will be highlighted in different colors.
Conversely, as users hover over a phrase in the repetitive concepts, its repeated concepts in other sentences and the corresponding links in the context summary will be highlighted.
To facilitate more insights into word usage and verbal delivery,
we support a quick reference to the original content of the sentence when users click in the context summary or on a specific phrase.
\begin{figure}[t]
\centering
\vspace{-3mm}
\includegraphics[width=\columnwidth]{pictures/alternative-design1.pdf}
\vspace{-6mm}
\caption{Alternative designs for context linking.
Compared to our current design (C), A and B are less space-effective and more cluttered.}
\label{fig:contextalternatives}
\vspace{-5mm}
\end{figure}
\textbf{Design alternatives.}
We discuss the trade-offs of alternative designs for the context summary and repetitive concepts in our iterations.
Initially, we attempted to combine the links with concrete concepts.
In Fig.~\ref{fig:contextalternatives}A, each tick in the horizontal axis marks the corresponding sentence.
Below the tick, repetitive concepts are listed vertically according to the order of their occurrences.
While this design does not require separating concepts from the links, this layout could easily exhaust
\xbRev{the available horizontal or vertical space when we have a long context or a large number of repeated concepts.}
It also sacrifices the temporal ordering of the concept occurrence and makes the concept exploration less intuitive.
We then tried to place repetitive concepts diagonally along the axis according to their occurrence ordering, and directly link the repeated concepts.
Because the notion of ``sentence separation'' becomes less apparent, we further add a repetition distribution on the top, such that users can count the repeated concepts.
The design (Fig.~\ref{fig:contextalternatives}B) saves space and recovers temporal information, but
\xbRev{causes}
a visual clutter issue.
Specifically, when the number of concepts increases, linking concepts---instead of their corresponding sentences like in Fig.~\ref{fig:contextalternatives}A---induces
additional overhead for distinguishing the intertwined links.
That said, the concept of overview-to-detail was favored in some preliminary discussions with end-users.
Thus, we reasoned that short links among sentence glyphs, with concepts overflowing across multiple lines, would create the least cognitive load and be the most space-efficient---which is exactly our design in Fig.~\ref{fig:contextalternatives}C.
\subsection{Augmented Time Matrix}
\label{subsec:augmented_timematrix}
Besides the sentence and context levels, we design an augmented time matrix that provides an overview of \xingbo{the distribution of humor occurrences and the related features of speech content and vocal delivery
at the \textbf{speech level} (\textit{\textbf{R1, R2}}).}
\xingbo{As shown in Fig.~\ref{fig:summaryalternatives}C, the barcode chart of the time matrix shows the humor distribution.}
The big gray rectangle shows the whole time matrix from top to bottom.
The darker horizontal lines in the time matrix indicate timestamps where the punchlines occurred.
Therefore, by definition of the humor snippet (Sec.~\ref{subsec:process}), the light gray area between two horizontal lines \xbRev{indicates} the context length between punchlines.
\xingbo{If the time intervals between punchlines are too small, the horizontal gray lines (i.e., punchlines) are merged into one thick line to reduce visual clutter.}
\xingbo{Besides, we organize the humor features, including the word usage, vocal delivery, and key concepts, around the matrix to summarize
their distribution for each punchline and across different punchlines.}
A bar chart is placed at the top to show the total occurrences of humor features in the punchlines, where each bar represents a feature, and the height of the bar indicates the feature frequency.
Then, for each punchline, a stacked bar is placed on the left at the same vertical position, where the dark green bar reveals the frequency of textual features, and the light green bar reveals the frequency of audio features.
Moreover, colored boxes on the punchlines (dark gray lines) imply the frequencies of humor features.
The darker the color, the higher the frequency.
To reveal the key concepts for humor snippets,
we extract keywords for each snippet using TextRank~\cite{mihalcea2004textrank},
and place them along the time matrix in temporal order.
A horizontal blue bar is overlaid to denote the frequency of the keywords.
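For reference, the keyword extraction step can be approximated with an off-the-shelf TextRank implementation; the short sketch below uses the \texttt{summa} package as one such implementation (an illustrative choice rather than our exact setup, and \texttt{snippet\_sentences} is a hypothetical variable holding the sentences of one humor snippet).
\begin{verbatim}
from summa import keywords

# Join the sentences of one humor snippet into a single passage.
snippet_text = " ".join(snippet_sentences)

# Rank words with TextRank and keep the top five as snippet keywords.
top_keywords = keywords.keywords(snippet_text, words=5, split=True)
\end{verbatim}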
\xingbo{Users can hover over a feature-of-interest (i.e., a bar at the top or a colored box in the matrix) or a keyword to highlight its occurrences across the whole speech in the time matrix.
When a user clicks a keyword or in the time matrix,
the system will
highlight the corresponding
punchline and its context in the context linking graph.}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{pictures/summary-alternatives2.pdf}
\vspace{-4mm}
\caption{\xingbo{Alternative designs for speech summary using an annotated barcode chart (A) and a matrix summary of humor features (B). Our current augmented time matrix design (C) combines (A) and (B) to summarize the timing of humor and the distribution of humor features. Users can hover over a feature or a keyword to highlight its occurrences.}}
\vspace{-4mm}
\label{fig:summaryalternatives}
\end{figure}
\textbf{Design alternatives.}
Initially, we considered separating the humor feature summary (Fig.~\ref{fig:summaryalternatives}B) from the content summary (Fig.~\ref{fig:summaryalternatives}A).
However, the sparse feature matrix takes up a large space and does not provide any context information about the punchlines. In particular, some of our end-users complained that it is hard to figure out where the corresponding punchlines are when exploring the feature summary.
Thus, in our current design, we integrate the feature summary into the timeline to enhance both the temporal and contextual information for humor features.
\subsection{Interactions}
Our system supports a rich set of interactions to ease the multi-level exploration of humor \textbf{\textit{(R5)}}.
\textbf{Details-on-demand through clicking.}
Once a user clicks a \xingbo{speech} of interest in the {\textit{{control panel}}},
the {{\textit{humor exploration}}} will be updated accordingly.
When the user clicks on a keyword or in the augmented time matrix,
the corresponding humor snippet will be scrolled to the top in the content exploration, and the corresponding sentence in the context linking graph and the transcript will be highlighted.
\textbf{Linked scrolling.}
When users scroll in the content exploration, the time range of the visible humor snippets will be highlighted in the augmented time matrix.
\textbf{Active query through searching, sorting, and filtering}.
Users can search a \xingbo{speech} or sort \xingbo{speeches} according to a specific criterion in the {\textit{{control panel}}}.
Also, they can apply filters in the {{\textit{humor focus}}} to find different styles of punchlines. Then, the corresponding humor snippets will be highlighted in the {{\textit{humor exploration}}} view.
\section{Design Process}
\label{sec:designprocess}
\begin{table*}
\caption{A summary of humor-related features that we identify from the qualitative and quantitative research of humor.}
\label{table:featuretable}
\vspace{-3mm}
\centering
\scalebox{0.8}{
\begin{tabular}{@{}cllll@{}}
\toprule
\multicolumn{1}{r}{} &
\textbf{Humor-related features} &
\textbf{Subcategory} &
\textbf{Description} &
\textbf{References} \\ \midrule
\multirow{8}{*}{Textual} & Content-related features & & Key concepts (e.g., situation) on which the humorous story is built & \cite{ahuja2018makes, mihalcea2005making, vuillemot2009s, yang2015humor} \\ \cmidrule(l){2-5}
& \multirow{2}{*}{Incongruity} & Disconnection & \xingbo{Semantic disconnection (e.g., contrast) between two content words in a sentence} & \cite{mihalcea2005making, yang2015humor, radev2015humor, yuan2008speaker} \\ \cmidrule(l){3-5}
&
&
\xingbo{Intra-sentence repetition} &
Repeating similar objects in a sentence & \cite{mihalcea2005making, radev2015humor, vuillemot2009s, yang2015humor, yuan2008speaker}
\\ \cmidrule(l){2-5}
&
\multirow{2}{*}{Human-centeredness} &
Polarity &
Positive/negative orientation of emotion & \cite{mihalcea2005making, yuan2008speaker, reyes2012humor, zhang2014recognizing, yang2015humor}
\\ \cmidrule(l){3-5}
&
&
Subjectivity &
Subjective/objective orientation & \cite{yuan2008speaker, yang2015humor, davis2008communication, castro2017crowd}
\\ \cmidrule(l){2-5}
& \multirow{2}{*}{Phonetic style} & Alliteration & Occurrences of the same letter or sound at the beginning of a group of words & \cite{yang2015humor, donahue-etal-2017-humorhawk, mihalcea2005making, yuan2008speaker, zhang2014recognizing} \\ \cmidrule(l){3-5}
&
&
Rhyme &
Repetition of similar sounds in the final stressed syllables of a group of words & \cite{donahue-etal-2017-humorhawk, yang2015humor, yuan2008speaker, zhang2014recognizing}
\\ \cmidrule(l){2-5}
& Build-ups & \xingbo{Inter-sentence repetition} & Concepts (e.g., a person) that have been previously mentioned before a punchline & \cite{bauman1986story, nash2014language, raskin2012semantic} \\ \midrule
\multirow{4}{*}{Audio} &
Volume &
Volume variation &
Variation in volume: softer and louder & \cite{attardo2011prosodic, bertero2016long, purandare2006humor, schuller2010interspeech}
\\ \cmidrule(l){2-5}
&
Pitch &
Stress &
Vocal stress by raising pitch & \cite{attardo2011prosodic, bertero2016long, purandare2006humor, pickering2009prosodic, schuller2010interspeech}
\\ \cmidrule(l){2-5}
&
Pause &
&
A temporary stop in speech & \cite{attardo2011prosodic, bertero2016long, purandare2006humor, schuller2010interspeech}
\\ \cmidrule(l){2-5}
&
Speed &
Speed variation &
Variation in speech rate: faster and slower & \cite{attardo2011prosodic, bertero2016long, purandare2006humor, schuller2010interspeech}
\\ \bottomrule
\end{tabular}
}
\vspace{-3mm}
\end{table*}
Our goal is to support an in-depth and systematic investigation into the humor composition and its vocal delivery in \xbRev{public speaking} (e.g., oral presentation).
Our main target \xbRev{users are people}
who have theoretical knowledge in humor and are motivated to study humorous speech (e.g., humor researchers and communication coaches).
We expect our system to alleviate their mental burden when analyzing unstructured humorous speech (i.e., texts and audio) in an organized and quantitative way.
To build a concrete understanding of humor,
we conducted literature reviews and informal user interviews to identify a set of textual and audio features that are both quantifiable and essential for humor analysis. Specifically, we first summarized features from existing literature and proposed a new computational method for extracting the context-related feature (i.e., \xingbo{inter-sentence} repetition)
to supplement the framework of computational humor.
Next, based on these feature candidates, we interviewed five humor researchers and two communication coaches who provided professional insights into humor study and helped validate our proposed features regarding their importance and helpfulness for humor analysis.
Meanwhile, during interviews, we inquired about their perspectives on the analytical aspects and challenges within humor analysis.
Based on their feedback, we distilled design requirements, which guided our initial system design.
The researchers (\textit{E1-E5}, including three postgraduates, one PhD graduate, and one university lecturer) study English/applied linguistics, English literature, and L2 learning. Three of them have published research papers on humor. They all have done humor research projects.
The communication coaches (\textit{C1, C2})
have five and ten years of communication skills training experience, respectively.
\xbRev{One part of their work is to} train speakers to deliver humorous speeches based on pre-selected humor examples or topics.
\subsection{Literature Review \& User Interviews}
\subsubsection{Humor features}
We borrowed \xingbo{the most common and essential quantifiable features}
of humor content creation and vocal delivery from the existing work mentioned in Sec.~\ref{subsec:relate_comp_humor}.
For the textual features, we organized and selected our features based on
the frameworks proposed by Yang et al.~\cite{yang2015humor} and Bali et al.~\cite{ahuja2018makes}, such that our features cover both semantic structures (e.g., incongruity and phonetic style) and content understanding (e.g., topic).
Similarly, for the audio features, we collected features and merged the related ones from different studies (e.g., tempo~\cite{schuller2010interspeech} vs. speech rate~\cite{pickering2009prosodic}, and energy~\cite{schuller2010interspeech} vs. volume~\cite{pickering2009prosodic}).
As a result, we covered four audio aspects: volume, pitch, pause, and speed.
The full list of features is in Table~\ref{table:featuretable}, and the computations are in Sec.~\ref{subsec:humorFeatureExtraction}.
While these features comprehensively summarize different aspects of one-liners, existing computational research rarely covers context features in humor cases with longer set-ups.
For example, \xingbo{\textbf{(inter-sentence) repetition}} is one of the essential comedic devices~\cite{goldstein1970repetition}.
Consider a simple example in~\cite{tannen2007talking}:
\example{\textit{A}: Rover (a dog) is being good.
\textit{B}: I know.
\textit{C}: He is being hungry.}
The repetition of the structure ``he is being'' makes the audience expect a similar response to A's.
However, the word ``hungry'' conflicts with the expectation, and the repetition enhances the dramatic effect of the twist.
To capture such patterns in the build-up of a humorous story, we introduce an algorithm to compute context-level ``\textit{repetition}''. The details of the computation are described in Sec.~\ref{subsubsec:context_analysis}.
\subsubsection{User interviews}
To validate the proposed humor feature sets and discover analytical needs for humor analysis,
we conducted independent interviews with the seven target users (\textit{E1-E5}, \textit{C1, C2}) mentioned earlier.
During interviews, we asked the participants to
(1) describe their general process/methodology of humor analysis,
(2) illustrate what aspects of humor in speeches they care about and how they rank our proposed features regarding the importance/difficulty for analysis, and
\xingbo{(3)} propose desired design requirements (e.g., functions) for a system that facilitates systematic humor analysis.
Their feedback is
reported as follows.
The design requirements are summarized in Sec.~\ref{subsec:designrequirements}.
\textbf{Whole-to-part analysis.}
\xbRev{According to the participants' feedback,}
they generally analyze the speech \emph{from the whole (e.g., speech topics) to the parts (e.g., \xingbo{word use}).}
They usually first search for humor examples from public speeches, TV shows, and books according to humor topics, genres, and comedians.
Then, they
focus on
the humorous pieces that can elicit laughter from the audience and investigate
the patterns of speech content and delivery in humor \xbRev{speeches}.
Specifically, the analysis follows the \textbf{language strata}~\cite{halliday2014introduction, martin1992english}---\textit{the context, semantics/pragmatics, lexemes (words and phrases), and phones}---from coarse to fine.
\begin{table}[]
\caption{\xbRevise{The average importance/difficulty rankings for the analytical aspects
(A smaller rank value means greater importance/difficulty)}.}
\label{tab:rank_question}
\vspace{-3mm}
\centering
\scalebox{0.85}{
\begin{tabular}{l|lllll}
\hline
& \textbf{Word usage} & \textbf{Vocal delivery} & \textbf{Build-ups} & \textbf{Timing} & \textbf{Topics} \\ \hline
\textbf{Importance} & 1.86 & 2.57 & 3.00 & 3.14 & 4.43 \\ \hline
\textbf{Difficulty} & 2.71 & 2.43 & 1.86 & 4.57 & 3.43 \\ \hline
\end{tabular}}
\vspace{-3mm}
\end{table}
\textbf{Analytical aspects and computational features.}
\xbRevise{As shown in Table~\ref{tab:rank_question}, \emph{word usage} (rank: 1.86) and \emph{vocal delivery} (rank: 2.57) with the \xbRev{highest} importance rankings
were considered essential for humor analysis.}
\xbRev{The participants} appraised the extraction of incongruous words, affective lexicons (i.e., sentiment and subjectivity), \xbRev{and} phonetic styles (i.e., alliteration and rhyme).
These features cover their typical analytical interests and provide quantitative and concrete evidence for language patterns of humor in semantics, lexemes, and phones.
\textit{E1} claimed that the incongruous words can reflect the unexpected conflicts and twists in punchlines, which are at the core of an influential humor theory---incongruity.
\textit{E3} added that the negative sentiment lexicons help study the styles of self-deprecating or self-enhancing humor.
The participants also thought the acoustic features---volume, pitch, pause, speed---can effectively
reflect the vocal characteristics of humor.
\xb{For example, smart pauses (e.g., comic timing) are effective for building up suspense.}
\textit{C1} said, ``I can use them (acoustic features) to tell whether a speaker is good at telling jokes or not.''
Besides, the two coaches attached much importance to the \emph{timing of humor}.
They considered it as a good starting point to learn humor in \xbRev{public speaking}.
\textit{C1} reasoned that
finding a proper place (e.g., speech opening) to insert humor
may be the easiest thing for ordinary people to learn, which can make a big impact on their speeches.
\textit{E4} suggested the timing should include the distribution and drift of topics (``\textit{What content it supports} and \textit{how the topics evolve}'').
\xbRevise{In terms of difficulty (Table~\ref{tab:rank_question}), \emph{humor build-ups} was regarded as the most difficult aspect with the \xbRev{top} rank (1.86).}
\xbRev{The participants} thought that
the cognitive load of backtracking and comprehending related concepts (e.g., background, characters, emotion) in humor build-ups can be heavy. The inter-sentence repetition extraction was considered reasonable and helpful. \textit{E4} said, ``\textit{It (the context repetition) connects the dots (of humor).}''
\textit{E3} specified, ``\textit{It is useful for revealing the humor structure and can help summarize the core idea of humor.}''
Still, some context-related humor characteristics proposed by the participants, such as the social background, culture, and humor genres (e.g., dark humor), are difficult or unreliable to quantify. Thus, they are left for future research.
\xb{Besides the humor features for word usage, speech content, and vocal delivery (Sec.~\ref{subsec:relate_comp_humor}),
we enrich the existing computational framework with inter-sentence repetitions for humor context analysis and the timing of humor based on the participants' feedback.}
Their computation and visualization are described in Sec.~\ref{subsubsec:context_analysis} and Sec.~\ref{subsec:augmented_timematrix} respectively.
\textbf{Desired functionality.}
Since there are few tools that
enable humor exploration and analysis in various speeches,
both coaches and researchers need to manually select
and digest speech examples of interest.
\xingbo{It is ineffective and challenging for them to analyze humor in terms of both speech content and vocal delivery.}
They valued our attempt to build an interactive tool that systematically organizes these computational \xbRev{multimodal} features and provides concrete examples \xingbo{to verify the existing humor theories or reveal new insights into humor.}
We distilled the corresponding design requirements in Sec.~\ref{subsec:designrequirements}.
\subsection{Design Requirements}
\label{subsec:designrequirements}
According to the whole-to-part analysis \xingbo{regarding the language strata},
our analytical system should support the hierarchical exploration of humor at different levels---speech level, context level, and sentence level.
We summarize the design requirements on these levels as follows.
\textbf{R1: Analyze text and audio simultaneously to reveal their correlations.} Our experts confirmed that both speech content and vocal delivery are considered necessary for a humorous effect.
\xingbo{It is difficult to capture both of them by watching the videos.}
Therefore, at each level, the system needs to present textual and audio features concurrently to help users reason about the effective use of words and voice.
\textbf{R2: Visualize a speech level overview that shows vocal and verbal styles of humor, as well as their distribution.}
At the \textbf{speech level}, the system should summarize
\xingbo{the timing of humor-related properties}---how frequently the humor is injected, under what condition (or what topic flow), into which part of the \xingbo{speech}, and with what verbal and vocal styles.
The visual summary of these properties
serves as guidance and should help users find specific humor snippets within a \xingbo{speech}.
For example, a communication coach might prioritize the very first humorous punchline (\emph{when}),
to show students how to provide an impressive opening (\emph{objective}) by making small talk or sharing personal stories (e.g., \example{My brother's in prison.}).
Besides, as suggested by the experts, the visual summary of the humor distribution
should be integrated with temporal information, along with the topic flow and verbal feature statistics.
\textbf{R3: Provide a context-level overview that shows build-up elements of humor, as well as their relationships.}
Once zoomed in to a specific snippet, \textbf{context level} exploration is necessary for evaluating how a humorous story is written (e.g., how the key concepts in the punchline are first introduced and how they connect the pieces of humor stories), as well as a summary of delivery skills that are frequently used to help convey the story.
Both researchers and communication coaches
\xbRev{viewed} the contextual analysis of humor build-ups as the most demanding.
Therefore, our system should primarily support users at this level.
\textbf{R4: Highlight the pairing of individual content words and humor-related verbal delivery units.}
We need to expose the co-occurrence between textual and audio features within each \textbf{individual sentence}, so as to
demonstrate the humor strategies with relevant concrete examples (e.g., words and utterances).
Within a snippet, the punchline is the most important sentence since it immediately triggers laughter.
\textbf{R5: Support intuitive interactions for helping users traverse among different levels, and reveal different levels of detail}.
For example,
our communication coaches suggested that the original audio and scripts of the speech should also be included in the system, in addition to a visual summary of humor.
The system should allow users to rapidly locate the segments of interest in the speech and playback the corresponding audio clips.
\section{Discussion}
Here, we discuss the lessons learned and system generality.
We also identify several limitations that need further research in future work, including extending humor features, alleviating algorithm inaccuracy, enhancing system scalability, and enabling personalized humor explorations.
\textbf{Lessons learned}.
We learned two important lessons during our system design and evaluation.
\textit{1) Social context is important for humor understanding.}
During the evaluation, experts pointed out that interpreting humor requires external knowledge of the social context. For example, understanding the humor in Sec.~\ref{sec:introduction} requires knowing the ``are you a cop'' trope in American movies and TV, and the one in Sec.~\ref{subsubsec:context_analysis} relates to WWII history.
\textit{2) Compact summary of multimodal features is helpful for multimedia analysis.}
Given the multimodality and heterogeneity of humor expression in speeches,
we present inline annotations of verbal and vocal features along the text. Experts confirmed the annotations help gain quick insights into speech content and vocal delivery of humor, as well as the relation between them.
We believe that the integration of visual representation and multimedia data facilitates intuitive multimedia content understanding.
\textbf{System generality}.
While {{\textit{DeHumor}}} supports the analysis of speech content and vocal delivery of humor based on audience laughter markers,
it can also be extended to evaluate public speaking skills based on other types of audience reaction (e.g., booing).
For example,
by highlighting the audio features of the speech sentence in a snippet that elicits booing or applause, we can further investigate effective voice modulation skills.
\xbRevise{In addition},
when there is no audience audio, the context linking graph can \xbRev{still} be used for text analysis. First, the text can be divided into snippets based on paragraphs or text segmentation algorithms. Then, the context linking graph can visualize contextual repetitions and narrative flows within a text snippet in various forms of literature (e.g., poetry and novels).
\textbf{Extending humor features}.
In this work,
we derived a set of significant textual and audio humor features to analyze the speech content and its vocal delivery.
The proposed features can be enriched and further improved to enhance the understanding of humor.
First, as suggested by our experts,
it is interesting to explore how features from other modalities
contribute to the delivery of humor. For example, facial landmarks~\cite{hasan2019ur, petridis2009joke, petridis2013mahnob} and head movements~\cite{petridis2009joke} have been mentioned in previous research. How to incorporate features from these modalities in the analysis is a challenging yet promising direction.
Second, the extraction and visualization of the repeated phrases for the humor build-up can be enhanced.
\xingbo{Currently, we focus on inter-sentence repetitions between punchlines for humor context analysis.
During the expert interviews, some participants discovered that interesting repetitions appear across different snippets and that incongruous concepts are distributed across different sentences. They wished to explore them.
Hence, we plan to extend the contextual repetition algorithm to extract semantically dissimilar phrases between sentences and to highlight repetitions in the whole speech.}
\xbRevise{Additionally}, there are other textual features for humor context analysis.
For example,
funny riddles are used by many comedians and public speakers to entertain and interact with the audience.
We plan to extend context-level features to facilitate
further study
of a humorous story.
\textbf{Alleviating inaccuracy of feature computation}.
Through case studies and expert interviews, we showed that the computation and visualization of humor features assisted users in reasoning about humor styles.
Inevitably, the imperfection of the algorithms will harm the effectiveness of {{\textit{DeHumor}}}.
To alleviate such issues, we will keep improving the feature extraction.
Specifically, we plan to label humor features in the sentences
and train advanced deep learning models (e.g.,
BERT~\cite{devlin2018bert}, GPT-3~\cite{brown2020language}) for more accurate computation.
Moreover, we will improve the current visualization by encoding model uncertainty. For example, we can give visual hints (e.g., opacity) about the models' accuracy to alert users when the models output features with low confidence scores.
\textbf{Enhancing system scalability}.
\xingbo{Our system divides a speech into snippets based on the laughter occurrences. When the transcript is too long with too many punchlines, the exploration of humor snippets will be not so effective. To mitigate such an issue, we can consider merging neighboring humor snippets based on the semantic similarity and temporal proximity.}
\xingbo{Moreover, with the increasing number of repetitions within a humor snippet,} the context linking graph may suffer from visual clutter among the links.
More advanced interaction techniques are needed to address such issues (e.g., allowing users to interactively reduce and control the visibility of different groups of links).
\textbf{Enabling personalized humor explorations.}
{{\textit{DeHumor}}} helps users narrow down to a video of interest according to the speech title, speaker, category, and laughter occurrences. In addition, it provides visual cues for users to find humor snippets based on textual and audio features.
As suggested by our communication coaches in expert interviews, supporting more complex user queries (e.g., styles of humor feature combinations) would enable a more personalized exploration of humorous content.
\xbRevise{\textbf{Improving system evaluation.} The current evaluation is conducted with only four expert users.
A long-term study with more domain experts can further validate the usability and effectiveness of {\textit{DeHumor}}{}, which is left as future work.}
\section{Conclusion}
In this work, we presented {{\textit{DeHumor}}}, a visual analytics system for exploring and analyzing humorous snippets in \xbRev{public speaking}. We first summarized humor-related features and design requirements based on literature review and user interviews. Then we developed a set of methods for presenting and decomposing multimodal features from a humorous speech.
Through case studies on stand-up comedy shows and TED Talks, as well as interviews with domain experts,
we demonstrated the usefulness and usability of {{\textit{DeHumor}}} in helping users explore and analyze speech content and vocal delivery of humor in speeches.
\xbRevise{In future work, we can improve the system usability by supporting humor query and humor style comparison.
We plan to integrate more contextual features and features from other modalities (e.g., facial expressions) into the system. We can also apply deep learning models to improve the feature extraction accuracy. Furthermore, we will conduct a long-term study with more experts to further evaluate the system usability and its effectiveness for humor analysis.}
\section{Evaluation}
\label{sec:evaluation}
\xbRevise{We demonstrate how {\textit{DeHumor}}{} helps users gain insights into \xbRev{the verbal content and vocal delivery of humor speeches} through two case studies and expert interviews. The experts include two humor researchers (\textit{E1}, \textit{E6}) and two communication coaches (\textit{C1}, \textit{C3}).
\textit{E1} and \textit{C1} have participated in the design process, while \textit{E6} and \textit{C3} were new to our system before the interviews.
Specifically,
\textit{E6} holds a master's degree in linguistics, and her research focuses on the pragmatics of humor. \textit{C3} has been a communication coach for four years. He is also a stand-up comedian and has performed at famous venues (e.g., Broadway).
During the interviews, the experts used {{\textit{DeHumor}}} to explore two humor datasets, which consist of 157 shows of \textit{Comedy Central Stand-up} and 1,876 \textit{TED Talks}.
The cases in Sec.~\ref{subsec:case1} and Sec.~\ref{subsec:case2} were found by
\textit{E1} and \textit{C1}, respectively. All the experts' feedback was
collected and
reported in Sec.~\ref{subsec:expert_interviews}.}
\xbRevise{To better illustrate the cases,
we highlight the important humor analysis steps: \textcolor{bleudefrance}{Context relationship analysis} for context exploration, \textcolor{bleudefrance}{Humor context} and \textcolor{bleudefrance}{Punchline} for humor description, and \textcolor{bleudefrance}{Feature analysis} for punchline analysis.}
\subsection{Case Study on Stand-up Comedy}
\label{subsec:case1}
In this case, \textit{E1} used {{\textit{DeHumor}}} to explore the ``stand-up comedy'' dataset and check how comedians effectively set up humor about funny incidents that happened in their lives. In particular, she was interested in the word usage of punchlines and would like to see how it helps create humorous effects.
First, \textit{E1} filtered the speeches by keywords ``personal experience'' in the {\textit{{control panel}}} (Fig.~\ref{fig:teaser}A1). Then, she felt interested in \xingbo{speeches} that frequently involve humorous moments. Thus, she sorted the speeches by the total occurrences of laughter and selected the first \xingbo{speech} named ``Apparently You Can’t Pretend You’re a Cop''\footnote{Comedy video url: \url{https://bit.ly/3u1lcZq}}.
\textbf{\textit{Case Context}}:
This case includes three of the speaker's personal experiences in the selected \xingbo{speech}.
(1) The speaker talked about his experience at a crime scene.
The people there wanted to check if he was a cop.
(2) The speaker told a story when he was a teacher. He was once threatened by a student during a fight.
(3) Following the previous story, the speaker described that after the fight, both he and the student claimed that they were ready to die.
\vspace{-1mm}
\subsubsection{Overall styles of verbal humor}
\textit{E1} wondered what the major characteristics of this speaker's humor strategies in his speech are. By observing the height of bars at the top of the augmented time matrix (Fig.~\ref{fig:teaser}C), she found that ``repetition (\textcolor{repetition}{\faRefresh})'' and ``disconnection (\textcolor{disconnection}{\faCircleONotch})'' frequently occur in the punchlines. She was curious about \textit{what words the speaker used to create incongruity and how he delivered them} (\textit{\textbf{R1, R4}}).
As she skimmed through the dark gray lines in the time matrix, she found clusters of punchlines that are close to each other across the whole speech. She wondered \textit{how the speaker set up humor within a short context} (\textit{\textbf{R2, R3}}). To answer the two questions, she adjusted the filters of context length and textual features in the \textit{humor focus} (Fig.~\ref{fig:teaser}B). The snippets that satisfied the conditions were highlighted with colored feature statistics in the augmented time matrix (Fig.~\ref{fig:teaser}C). Next, she explored them in detail.
\vspace{-1mm}
\subsubsection{Digging into humorous snippets}
She found that the first highlighted snippet appeared at the beginning of the speech (Fig.~\ref{fig:teaser}C1), where the keyword ``cop'' was spotted (\textit{\textbf{R2}}). As she hovered over the word, its other occurrences were also marked in dashed red lines in the time matrix, one of which fell into the current snippet of her interest.
Then, she clicked the dashed line to locate the corresponding snippet and its context linking graph to see the story development (\textit{\textbf{R1, R3, R5}}).
\textcolor{bleudefrance}{Context relationship analysis}
Through the links (Fig.~\ref{fig:teaser}C4-1) and repeated phrases
(\example{are you a cop}, \example{'m a cop}, \example{are you a cop})
in the context distribution summary of the graph (Fig.~\ref{fig:teaser}C4-2) \textbf{(\textit{R4})}, she guessed that the speaker was having a conversation with someone else about the ``cop'' identity. She followed the link connections among sentences and observed the corresponding content (Sentences \#0 to \#2 in Fig.~\ref{fig:teaser}C4-3) (\textit{\textbf{R3}})---\textcolor{bleudefrance}{Humor context} the speaker was asked by the people at a crime scene about whether he \xbRev{was} a cop. Since he is not an actual cop, he did not want to explicitly make an illegal claim that he \xbRev{was} a cop.
\textit{E1} navigated to the punchline (Sentence \#3 in Fig.~\ref{fig:teaser}C4-3) to see how the speaker responded to the question.
\textcolor{bleudefrance}{Punchline} She discovered that the speaker quoted a common trope for police in movies and TV (i.e., \example{I'm asking the f**king question!} in Fig.~\ref{fig:teaser}C5-3) and misled the people at the crime scene to believe that he \xbRev{was} a cop.
\textcolor{bleudefrance}{Feature analysis} Specifically,
\textit{E1} referred to the feature annotations in the sentence, finding that the speaker raised his voice (\textcolor{louder}{\faLevelUp}) on the first few words in the punchline (\example{And}, \example{I}).
Then the speaker paused a little bit (\textcolor{pause}{\faSquare}) before revealing the essence of the content---``I will ask the f**king (\textcolor{polarity}{\faArrowsV}, \textcolor{subjectivity}{\textbf{( )}}) questions''. Finally, he strengthened his annoyance at the previous question about his identity (\textcolor{louder}{\faLevelUp}) through the tone particle ``okay''.
\textit{E1} concluded that the ``cop'' repetitions in the context and his voice modulation in the punchline render the speaker's annoyance and enhance the humorous effect.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{pictures/comedy-cousin_case.pdf}
\vspace{-5mm}
\caption{The context linking graph of the first snippet in Fig.~\ref{fig:teaser}C2,
with Sentence \#2, the corresponding repeated phrases, and their links highlighted.}
\vspace{-5mm}
\label{fig:comedy-cousin_case}
\end{figure}
Then, \textit{E1} clicked on the second highlighted snippet (\textit{\textbf{R5}}) (Fig.~\ref{fig:teaser}C2).
\textcolor{bleudefrance}{Context relationship analysis}
She noticed that the third rectangle in the context summary (Fig.~\ref{fig:comedy-cousin_case}A) has the most bars attached below, suggesting it contains the most repetitive phrases. Then, she clicked on the rectangle and the corresponding repetitions were highlighted. By observing the red rectangles (\example{to the ground}) and purple rectangles (\example{going to kill you}, \example{kills}) of the repeated phrases (Fig.~\ref{fig:comedy-cousin_case}B),
she assumed there was a big fight. By following the links in the graph from beginning to end and exploring the content (Fig.~\ref{fig:comedy-cousin_case}A), \textcolor{bleudefrance}{Humor context} she grasped that the speaker wrestled a student to the ground during a fight. Then, the student threatened that his cousin would come and help him kill the speaker.
\textcolor{bleudefrance}{Punchline \& Features}
The speaker said he also had a cousin (\textcolor{repetition}{\faRefresh}) (Sentence \#4 in Fig.~\ref{fig:comedy-cousin_case}C) in response to the student, implying his cousin would also help him and kill the student if the student's cousin killed him (\textit{\textbf{R4}}).
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{pictures/comedy-ready-to-die_case.pdf}
\vspace{-5mm}
\caption{The context linking graph of the snippet in Fig.~\ref{fig:teaser}C3, with the repeated phrases from Sentence \#2 and their links being highlighted.}
\label{fig:comedy-read-to-die_case}
\vspace{-5mm}
\end{figure}
Then, \textit{E1} scrolled down until the third highlighted snippet (Fig.~\ref{fig:teaser}C3).
\textcolor{bleudefrance}{Humor context} She found that the speaker won the fight with the mentioned student. The student said that he was ``ready to die'' after losing the fight.
Then in the current snippet (Fig.~\ref{fig:comedy-read-to-die_case}), \textit{E1} tracked the colored repeated phrases (``ready to die'') (Fig.~\ref{fig:comedy-read-to-die_case}B) (\textit{\textbf{R3}}). She
found that the speaker responded with ``I'm also ready to die'' and explained in the punchline (Sentence \#2 Fig.~\ref{fig:comedy-read-to-die_case}C) (\textit{\textbf{R4}})---\textcolor{bleudefrance}{Punchline}
the speaker felt disappointed about arguing with the naughty student because he was paid extremely low wages at school to deal with such a big trouble maker (i.e., the student).
\textcolor{bleudefrance}{Feature analysis}
Specifically, the labeled word pair ``\emph{18000}'' (\includegraphics[height=\fontcharht\font`\B]{pictures/discon0.png}) and ``\emph{conversation}'' (\includegraphics[height=\fontcharht\font`\B]{pictures/discon1.png}) in the punchline (Sentence \#2 in Fig.~\ref{fig:comedy-read-to-die_case}C)
contrasts the low-paid job with the high effort of teaching the student.
In addition, the speaker even inserted pauses (\textcolor{pause}{\faSquare}) and raised his voice (\textcolor{louder}{\faLevelUp}) after \example{18000} to emphasize his complaints about his challenging but low-paid work.
As for takeaways of this exploration process, \textit{E1} concluded that the speaker set up conversation scenarios to narrate his interesting personal experiences. He is good at using \xingbo{contextual} repetitions to connect pieces of a story and using words to create incongruity. Moreover, he modulated his voice (e.g., using pauses and increasing volume) to express his emotion and strengthen the humor.
\begin{figure*}
\centering
\includegraphics[width=1.97\columnwidth]{pictures/Dehumor-TED.pdf}
\vspace{-3mm}
\caption{\xingbo{The case study on \textit{TED Talk}. After selecting the speech in (A), the user clicked on ``\textit{spam emails}'' in the augmented time matrix. Then the snippet in the speech opening (B1) and their context linking graphs (C1-4) are shown. Next, the user used the {{\textit{humor focus}}} (Fig.~\ref{fig:teaser}B) to find the snippet in the speech closing (B2) with rich humor delivery skills (long dark green bars to the left). Its context linking graph is shown in (C5).}}
\label{fig:ted_case}
\vspace{-4mm}
\end{figure*}
\subsection{Case Study on TED Talks}
\label{subsec:case2}
In the second case, the communication coach \textit{C1} explored humor skills in \textit{TED Talk} speeches that are related to ``technology''.
Since lots of his clients come from IT companies,
he expected that talks on technology would be suitable teaching examples for using humor in speeches.
In particular, he
focused more on the timing of humor and speakers' vocal delivery skills,
which were regarded as practical and effective
humor skills for students to follow and further improve their speeches.
Sorting the \xingbo{speeches} by the number of views in descending order, he discovered the most popular \xingbo{speech} named \emph{``This is what happens when you reply to spam email''\footnote{Ted Talk video url: \url{https://bit.ly/3eFqm6P}}}.
\textbf{\textit{Case Context}}:
This case includes two pieces of the speaker's experiences of replying to spam emails.
In the speech opening, the speaker introduced that he once received an email from a sender who had a strange name, and described how he replied to the email for fun.
To wrap up the speech, the speaker suggested that the audience reply to spam emails with a pseudonymous email address.
Initially, \textit{C1} noticed that in the barcode chart (Fig.~\ref{fig:ted_case}A),
the laughter has a dense concentration at the start and end of the timeline, which is often considered a pattern of a strong opening and closing. He clicked the \xingbo{speech} to see how the speaker delivered humor (\textit{\textbf{R1, R4}}) to entertain the audience at the start and the end (\textit{\textbf{R2}}).
\subsubsection{Speech opening}
He noticed that ``spam emails'' appears near the top of the augmented time matrix (Fig.~\ref{fig:ted_case}B1). He clicked the phrase and saw how the speaker introduced it.
\textcolor{bleudefrance}{Context relationship analysis \& Humor context}
From the highlighted phrases \example{those spam emails} and \example{through my spam filter} in the context linking graph (Fig.~\ref{fig:ted_case}C1-1) (\textit{\textbf{R2, R4}}), he inferred that the speaker received a spam email.
Then, he clicked on the highlighted phrases and confirmed his thought after reading the detailed sentences.
\textcolor{bleudefrance}{Punchline}
In the punchline, \textit{C1} found that the speaker introduced the identity of the spam email sender, who had a very strange name---``Solomon Odonkoh'' (Sentence \#2 of Fig.~\ref{fig:ted_case}C1-2).
\textcolor{bleudefrance}{Feature analysis}
Specifically, \textit{C1} observed that the speaker slowed down his speed on the word ``guy (\textcolor{slow}{\faAngleDoubleLeft})'' before revealing the spammer's name.
Moreover, the speaker paused (\textcolor{pause}{\faSquare}), and immediately added a short phrase ``I know'' (Sentence \#0 in Fig.~\ref{fig:ted_case}C2), which triggered another immediate laughter about the stranger for the second time.
\textit{C1} commented that the pause and speed variation motivated the audience to think about the spammer's name and identity.
Similarly, \textit{C1} also spotted a pause (\textcolor{pause}{\faSquare}) between the next two snippets (Fig.~\ref{fig:ted_case}C3, C4) (\textit{\textbf{R1, R4}}), so he explored them accordingly.
\textcolor{bleudefrance}{Humor context}
He discovered that after introducing the spammer, the speaker also considered deleting the email (Sentence \#3 in Fig.~\ref{fig:ted_case}C3-1). However, he decided not to. Instead, he did what \example{we've always wanted to do} (Sentence \#4 in Fig.~\ref{fig:ted_case}C3-1)---reply to this email.
\textcolor{bleudefrance}{Punchline}
Then, the speaker shared his funny responses to the email, starting with acknowledgment, \example{Solomon, you email intrigues me.} (Sentence \#0 in Fig.~\ref{fig:ted_case}C4).
\textcolor{bleudefrance}{Feature analysis}
\textit{C1} commented that this was a smart pause (\textcolor{pause}{\faSquare}) at the beginning that helped the audience digest the speaker's previous sentence.
Here the audience got a chance to connect \example{we've all always wanted to do} (the bottom of Fig.~\ref{fig:ted_case}C3) with their desire for replying to spam emails.
The pause aroused the audience's interests in the speaker's next move, which
enhanced the humorous effect of the speaker's unexpected acknowledgment of the spam email
(Sentence \#0 in Fig.~\ref{fig:ted_case}C4).
\subsubsection{Speech closing}
Then, \textit{C1} wanted to see more snippets at the end with rich delivery skills, especially pauses. Thus, \textit{C1} used the {{\textit{humor focus}}} to
find the highlighted snippets (\textit{\textbf{R2}}) (Fig.~\ref{fig:ted_case}B2).
For the first one, he referred to its context linking graph (Fig.~\ref{fig:ted_case}C5).
\textcolor{bleudefrance}{Humor context}
As he tracked the repetition links (Fig.~\ref{fig:ted_case}C5-1) from left to right, he realized that the speaker expressed a positive attitude towards replying to spam emails through the repeated phrases (\example{highly recommend}) (green rectangles in Fig.~\ref{fig:ted_case}C5-2) and their sentence contents. The speaker suggested using a
\example{pseudonymous email address} to do so and explained the reason in the punchline (Sentence \#9 in Fig.~\ref{fig:ted_case}C5-3). \textcolor{bleudefrance}{Punchline} He once used his own email address. The result was that the mailbox was flooded with \example{a thousand} useless advertisements about \example{penis enlargements}. Among them, he was only able to find the one legitimate message that he wanted (Sentence \#9 in Fig.~\ref{fig:ted_case}C5-3).
\textcolor{bleudefrance}{Feature analysis}
The speaker stressed the reason in the punchline by pauses (\textcolor{pause}{\faSquare}) and vocal stress (\textcolor{stress}{\faUnderline}) on keywords such as \example{thousand} and \example{only} (\textit{\textbf{R1, R4}}).
{\textit{C1}} commented that the speaker effectively used pauses to adapt the pace of his presentation to engage the audience, which is considered to be comic timing.
He added that pause is crucial to speech delivery, and most students do not realize how powerful it is. He emphasized that this speech is a good example for teaching students how to use pauses to deliver a strong opening and closing in their speeches.
\subsection{Expert Interviews}
\label{subsec:expert_interviews}
\xbRevise{We collected the experts' feedback from the individual interviews with the aforesaid experts (\textit{E1, E6, C1, and C3})}. Each interview lasted about one hour and was recorded with the participants' consent. First, we gave the participants a fifteen-minute tutorial outlining the humor features with concrete examples, as well as the visual designs and interactions of {{\textit{DeHumor}}}.
Then, participants were asked to explore the speeches introduced in
Secs.~\ref{subsec:case1} and \ref{subsec:case2} in a think-aloud manner for about forty minutes. For each speech, they \xbRev{were asked} to find and explore humor snippets with specific timing and features (\textit{\textbf{R2}})---snippets that contain (1) words-of-interest at speech opening and (2) humor-features-of-interest at speech closing.
Then, within each snippet, they were required to reason about what contributes to the humor. Specifically, they were assigned the following tasks:
\begin{compactenum}
\item To examine if our context linking graph effectively highlights related build-ups (\textit{\textbf{R3}}), we asked participants to summarize how the speaker builds up humor.
\item To evaluate our inline feature highlighting (\textit{\textbf{R1, R4}}), we asked participants to identify which part of the punchline contributes the most to laughter in terms of word usage and vocal delivery.
\item To validate extracted features (\textit{\textbf{R4}}), we asked participants to read the original text script, listen to the audio, and voice any features that are out of place.
\end{compactenum}
\xbRevise{We then collected post-study feedback on system designs, usefulness, usability, and suggestions for improvements.}
\subsubsection{Results}
\xingbo{Compared with manual browsing and digesting of raw humor speeches, all the participants confirmed the usefulness of computation ability and visualization in the system for assisting humor exploration and analysis.
\xb{\textbf{Concrete humor representations}}.
The participants appreciated that {{\textit{DeHumor}}} automatically segments a speech based on the audience laughter. They confirmed it helps them quickly focus on the highlights of humor.
The system also offers convenient and user-friendly functions for revealing humor patterns in both speech content and vocal delivery.
The context linking graph was generally considered useful for traversing and summarizing humor build-ups.
\textit{E1} said, ``\textit{The context summary (at the top) helps understand and track the backbones of the story.}''
They praised
the straightforward inline annotations of textual and audio features.
These annotations help the participants quickly identify the important word use, utterances, and their co-occurrence for creating humor; in contrast, it is challenging to capture these patterns by watching videos alone.
\textit{E6} mentioned that ``\textit{these annotations, especially the audio feature annotations, successfully guide my attention (to critical parts of the punchline).
They vividly capture the delivery patterns within the sentence. I can picture the speaker in front of me giving a speech!}''
\textbf{Analysis flow}.
\xbRevise{For each speech, the participants explored around nine minutes of the speech content for humor analysis.
They confirmed that multi-level humor exploration supported by {\textit{DeHumor}}{} aligns well with their general analysis workflow.
The most time-consuming task was humor context analysis, but all of the participants could finish it in about three minutes with {\textit{DeHumor}}{}. The punchline analysis took them about one minute, and the extracted features were validated within a minute.
Finally, the participants could elaborate on what contributes to the humor
in a snippet regarding the verbal content and vocal delivery.
During the exploration, the textual incongruity features of punchlines were frequently used to identify the essence of humor.
The pitch and pause were found most useful for revealing the key delivery patterns.
Also, the participants often relied on the verb and noun phrase repetitions
to gain an overview of the humor story development in a snippet.}
\textbf{Usage scenarios}.
The participants valued the interactive exploration experience with {{\textit{DeHumor}}} and were eager to use it in the future.
Coaches \textit{C1} and \textit{C3} believed it would be an excellent teaching tool for coaches to show their students how to impress the audience with concrete examples (e.g., where to pause).
\textit{E1} confirmed that {{\textit{DeHumor}}} provides a corpus with various humor examples and
enables rich interactions, which facilitates a systematic study of humor.
}
\xingbo{Despite the positive feedback above, our participants also identified several limitations of {{\textit{DeHumor}}} and provided some suggestions for improvement.
\textbf{Reliability of feature extraction.}
\xbRevise{Our participants found that the extraction of textual features, especially inter-sentence repetitions and incongruity, contains more errors than the extraction of audio features. For example, they did not \xbRev{find a strong} semantic disconnection between \textit{``f**king''} and \textit{``questions''} (Fig.~\ref{fig:teaser}C5).
However, they thought that the false positives of incongruity usually did not affect humor analysis very much, since they \xbRev{could} highlight some critical
content words in punchlines for digesting humor context.
In contrast, the errors of inter-sentence repetitions \xbRev{might negatively affect}
their exploration experience. For instance,
the phrases (\textit{``i got''}, \textit{``managed to''}, \textit{``quite sure''}) in Fig.~\ref{fig:ted_case}C1
were not regarded as repetitions, and their presence confused the participants. \textit{E6} thought showing meaningless inter-sentence repetitions is a little distracting. Thus,
\textit{E6} was a bit suspicious about
the effectiveness of the context summary in the context linking graph, and she tended to directly explore the sentence details.}
\textbf{Learning curve.}
\xbRevise{Though a fifteen-minute tutorial was provided,
the participants still needed our guidance to finish some required tasks at the very beginning (i.e., the first ten to fifteen minutes) of their exploration.
For example, the participants \xbRev{might} not remember all the visual encodings and interactions, and we \xbRev{further explained} them.
When we illustrated the system designs,
the participants found the augmented time matrix was the most complex view.
But
after several trials,
they could successfully utilize it to find snippets of interest for further humor analysis.}
They claimed it is worth the effort to learn the system features and were willing to use {{\textit{DeHumor}}} in their future research or work.
In addition, they have provided us with valuable suggestions. For example, \textit{E1} recommended adding a sentence comparison function to examine the nuances of vocal delivery or word usage in different sentences.
\textit{C1} commented that besides texts and voice, visualizing gestures and facial expressions can enhance the analysis of humor techniques.}
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{H}{umor}---the use of puns, turns of phrases, or humorous anecdotes---is a powerful communication skill for public speakers to connect, engage, and entertain their audiences.
A proper usage of humor can help induce shared amusement~\cite{mulholland2003handbook}, reduce social anxiety~\cite{wooten1996humor}, and boost persuasive power~\cite{davidson2003complete}.
Although \emph{identifying} humor (i.e., judging whether a joke is funny or not) comes naturally to us, \emph{becoming} humorous is challenging in practice, as it requires the integration of various humor skills.
To create humorous content (i.e., jokes), the speakers need to come up with intelligent gradual setups, as well as a sudden twist to subvert the audience's expectation~\cite{raskin2012semantic}.
Then, to effectively achieve the intended dramatic effect, speakers have to decorate the contents with appropriate acoustic delivery methods~\cite{jefferson1979technique,bauman1986story}
(e.g., pause, pitch).
Thorough understanding and learning of humor can only be achieved if we can decompose these building blocks and the interactions between them.
Prior work across multiple disciplines (psychology, philosophy, and linguistics) has qualitatively \xbRev{characterized} humor.
For example,
Plato described humor as an expression of superiority to others,
while
Schopenhauer~\cite{schopenhauer1891world} stated that humor comes from the realization of incongruous interpretations of a statement.
These abstract theories become the cornerstone of various computational features that capture phonetic~\cite{mihalcea2005making,yang2015humor},
stylistic~\cite{mihalcea2005making,mihalcea2006learning}, human-centered~\cite{yang2015humor, mihalcea2007characterizing,zhang2014recognizing}, and content-based~\cite{taylor2009computational,radev2015humor} humor characteristics.
While these features make it possible to quantify humor, analyzing concrete humor examples \xbRev{in speeches} remains challenging for two reasons.
First, presenting a laundry list of features \xbRev{for a humorous speech} can be overwhelming.
Not only do the features create a heavy perceptual burden, but the sophisticated interactions between different building blocks remain hidden in the large feature space.
Most of the analyses to date~\cite{mihalcea2005making, mihalcea2007characterizing,yang2015humor, pickering2009prosodic,attardo2011prosodic,attardo2011timing,attardo2013multimodality}
only focus on either humorous content or vocal delivery independently. However,
\xbRevise{both of them are needed simultaneously to understand humor in practice.}
A \xingbo{\textbf{punchline}---the most important sentence that triggers the audience response (e.g., laughter)---}may be mundane in isolation but hilarious when rendered with exaggerated volume and pitch.
In the example\footnote{The example link: \xingbo{\url{http://bit.ly/2MrVKf9}}} below,
patterns like acoustic stressing at the modal particles (i.e., getting louder at \example{okay} in \xbRev{Line \#5}) or pausing to emphasize turning points (i.e., pausing before \example{I} in Line \#4 to contrast with \example{they} in the preceding sentence) are only observable when we highlight their occurrences.
\qbox{So when I show up to a crime scene, \\
Somebody is always like, ``are you a cop?''\\
I don't wanna say I'm a cop cause it's against the law.\\
So they go, ``are you a cop?''\\
And I go, \textcolor{gray}{\texttt{[PAUSE]}} ``I'll ask the f**king questions, okay\textcolor{gray}{\texttt{[LOUDER]}}?''}
\vspace{-2mm}
Second, the already overwhelming feature definitions are still not comprehensive.
Existing research \xbRev{emphasizes} short jokes (e.g., one-liners) while overlooking those with longer-term set-ups, making it difficult to track the clues that help lead to the core punchline.
In the previous example,
the punchline in Line \#5 only becomes funny after Lines \#1 to \#4 \xbRev{provide} the essential context:
When attempting to enter a crime scene (\#1), the speaker was asked about the cop identity (\#2).
Not being an actual cop, he avoided explicitly making an illegal claim that he is one (\#3).
As a workaround, he quoted a common trope for police in movies and TV (\emph{I'm asking the f**king question!}, \#5), so as to mislead the \example{somebody} into believing that he is a cop.
This example demonstrates the importance of contexts for humor analysis, which motivates our study.
To better decompose humor examples into critical features, as well as intuitively present the mixtures of these features, we present a novel visual analytics system named {{\textit{DeHumor}}}.
{{\textit{DeHumor}}} \xingbo{aims to help domain experts (e.g., communication coaches and researchers) analyze the verbal content and vocal delivery of}
\xbRev{public speaking} containing many humorous punchlines (labeled with audience response markers like \texttt{[LAUGHTER]}).
We formulate the design requirements based on literature surveys and user interviews with five humor researchers as well as two communication coaches. We choose to interview these experts because they have theoretical knowledge in humor and need assistance on a systematic investigation of humor.
Accordingly, we design {{\textit{DeHumor}}} to support \xingbo{multi-level explorations of humorous texts and delivery in speeches.}
We aim to enable users to easily understand when and where humorous punchlines are inserted (speech-level), how one particular punchline relates to its preceding build-up sentences (context-level), and how the \xbRev{vocal delivery}
and the textual content are paired within each sentence (sentence-level).
In particular, to reveal the interactions between textual and audio features, we provide inline annotations of the features along with the raw \xingbo{transcripts}.
To highlight the build-ups, we introduce a context linking graph that can recognize relevant phrases and visually connect them \xingbo{with links}.
With case studies on stand-up comedy shows and TED talks, we show that {{\textit{DeHumor}}} can highlight various building blocks of humor examples.
Interviews with domain experts further confirm that {{\textit{DeHumor}}} is helpful for exploratory and in-depth analysis of humorous snippets.
In summary, the major contributions of our work are:
\begin{compactitem}
\item We design a visual analytics system to support interactive and multimodal analysis of humorous pieces and reveal humor strategies in speech content and voice.
\item We demonstrate the usability and effectiveness of {{\textit{DeHumor}}} through case studies and expert interviews with communication coaches and humor researchers.
\end{compactitem}
\section{Related Work}
This section reviews related research on humor theory and visualization for speech analysis.
\subsection{Computational Humor Features on Speech}
\label{subsec:relate_comp_humor}
Modeling humor features is beneficial and critical for automatic humor understanding. Prior work has modeled humor using both text and audio features.
For textual features,
Mihalcea and Strapparava~\cite{mihalcea2005making}
extracted stylistic features that characterize humorous texts, including alliteration, antonym, and adult slang.
Later, Mihalcea and Pulman~\cite{mihalcea2007characterizing}
extended feature sets with human centeredness and polarity orientation.
Kiddon and Brun~\cite{kiddon2011s}
measured erotica-level of nouns, adjectives, and verb phrases.
Zhang and Liu~\cite{zhang2014recognizing}
designed five categories of humor-related linguistic features, including morpho-syntactic features, lexico-semantic features, pragmatic features, and phonetic features\cite{yang2015humor,donahue-etal-2017-humorhawk}.
Content-based features (e.g., n-grams~\cite{taylor2004computationally, yan2017duluth}, lexical centrality~\cite{radev2015humor}, incongruity~\cite{yang2015humor}, and word associations~\cite{cattle2018recognizing}) were also widely experimented to study the patterns in humorous text content in previous work.
However, most of these features were not systematically derived but were defined empirically.
Yang et al.~\cite{yang2015humor} proposed a computational framework
to describe the latent semantic structures of humor, including incongruity structure, ambiguity theory, interpersonal effect, and phonetic style.
Bali et al.~\cite{ahuja2018makes} extracted three major characteristics across all humor types, which are mode, theme, and topic.
The mode (e.g., exaggeration) is dependent on situations of delivery. The theme relates to emotions behind the use of language. The topic covers the central elements of humor.
In our work, we integrate and extend the above frameworks for textual features to analyze humorous texts.
For audio features,
previous quantitative studies~\cite{pickering2009prosodic,attardo2011prosodic,attardo2011timing,attardo2013multimodality,purandare2006humor,bertero2016long} identified significant dimensions for joke-telling,
including volume, pitch, speech rate, and pause length.
Pickering et al.~\cite{pickering2009prosodic} found punchlines were produced with lower pitch in joke narrative.
Attardo and Pickering~\cite{attardo2011timing} investigated pauses around punchlines.
Purandare and Litman~\cite{purandare2006humor} used acoustic-prosodic features (i.e., pitch, energy, and tempo) and linguistic features to automatically recognize humor in TV sitcoms.
Bertero and Fung~\cite{bertero2016long}
modeled conversational humor by combining
speech utterances with a set of high level features (e.g., speaking rate).
Our work computes pitch, volume, speed, and pauses around punchlines and their contexts, to reveal acoustic patterns in the humor delivery.
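To make these feature computations concrete, the following sketch shows one way such prosodic quantities could be extracted per audio clip. It is only an illustration, not the implementation used in {{\textit{DeHumor}}}: the use of the \texttt{librosa} library, the silence threshold, and the pitch search range are our assumptions.
\begin{verbatim}
# Illustrative sketch (not DeHumor's actual pipeline): per-clip
# prosodic features with librosa; thresholds are assumptions.
import numpy as np
import librosa

def prosodic_features(wav_path):
    y, sr = librosa.load(wav_path, sr=16000)
    rms = librosa.feature.rms(y=y)[0]            # frame-wise volume
    f0, _, _ = librosa.pyin(y, fmin=75.0, fmax=400.0, sr=sr)
    frame_s = 512 / sr                           # default hop length
    silent = rms < 0.1 * rms.max()               # crude pause detector
    return {
        "mean_volume": float(rms.mean()),
        "mean_pitch_hz": float(np.nanmean(f0)),  # pyin gives NaN when unvoiced
        "pause_seconds": float(silent.sum() * frame_s),
        "voiced_seconds": float((~silent).sum() * frame_s),
    }
\end{verbatim}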
\subsection{Visualization for Speech Analysis}
Speech visualization is an important research topic in multimedia analysis.
It is applied in many domains, such as
public speaking training~\cite{wang2020voicecoach,rubin2015capture},
visualization for the hearing impaired~\cite{watanabe2000speech},
and emotion analysis~\cite{zeng2019emoco}.
\xingbo{While
some prior studies have visualized
the speaker/audience interactions and topic dynamics in multi-party speeches (e.g., debate, conversation)~\cite{rony2020claimviz, el2016contovi, debatevis2020}, our work focuses on analyzing verbal content (e.g., word use) and vocal delivery (e.g., voice modulation) of humor in public speaking.}
One of the main goals of visualizing speech data is to intuitively and effectively reveal the relationship between content and speaking voice.
The most straightforward way is to encode sequential features as bar charts or line charts and then draw them along the script~\cite{zeng2019emoco,
oktem2017prosograph}.
Oh~\cite{oh2010text} used a vertical timeline to summarize features of sections in songs.
However, directly overlaying features on the words can lead to a high cognitive load. Moreover, it does not explicitly demonstrate relationships between words.
Patel and Furr~\cite{patel2011readn}
\xbRev{proposed
a method to directly encode the prosodic features using text properties.}
It manipulates the vertical offset, opacity, and letter spacing of texts to represent pitch, intensity and audio duration, respectively.
Similarly, Wang et al.~\cite{wang2020voicecoach} and Rubin et al.~\cite{rubin2015capture} designed intuitive glyphs to represent prosodic features, which annotates speakers' vocal performance on the script.
Likewise, in our work, we design glyphs and adjust text styles to explicitly demonstrate the humor features and their relationships.
Besides, we aim to visualize the semantic relationship of texts in humor snippets to help understand textual humor.
Here, we summarize \xbRev{the prior studies that have} inspired our research.
Matrices~\cite{cao2016introduction,janicke2014visualizations} are widely adopted to visualize co-occurrence patterns in text documents.
Word clouds~\cite{vuillemot2009s} can also summarize word relations.
It is also common to use graphs~\cite{wattenberg2002arc,don2007discovering} and links~\cite{subavsic2008web,riehmann2015visual,sinclair1991corpus} to describe co-occurrence and repetitions.
However, it is challenging to directly apply these techniques in our work.
For example,
word clouds lose temporal information.
Matrices suffer from space inefficiencies, especially when they are sparse.
Arc diagrams alleviate the above issues by placing words in a line and visually connecting them.
Still, if the text is long, it is difficult to obtain an overview of the text relationship while keeping the temporal orders.
In this paper,
we extend the arc diagram with a multi-level context summary and rich interactions to support the effective identification of \xingbo{contextual repetitions} in a humor snippet.
\section{System Overview}
Motivated by the design requirements, we design
and implement
an interactive visual analytics system, {{\textit{DeHumor}}} (Fig.~\ref{fig:teaser}), to explore and analyze verbal humor in public speaking.
It combines
multimodal humor features with visualization to facilitate users with insights into writing and delivering humor at three levels: speech level, context level, and sentence level.
{{\textit{DeHumor}}} contains three major modules: a data processing module, an analytics module, and a visual interface.
The \textbf{processing module} \xingbo{extracts humor snippets with aligned audio and transcripts from raw speech videos}
\xbRev{to}
support \xbRev{multimodal} analysis at different granularities.
The \textbf{analytics module} computes \xbRev{multimodal} humor features from audio and text, which characterize abstract and complex humor behaviors quantitatively.
The \textbf{interface} visualizes the features extracted by the \textbf{analytics module} to support intuitive exploration.
Here, we describe
our processing module and then provide a brief overview of the interface.
We delay the feature extraction in the analytical module, as well as their visual encodings, to \autoref{subsec:humorFeatureExtraction}.
\subsection{Data Preprocessing}
\label{subsec:process}
\textbf{Data collection.}
Given a humorous speech, we collect four kinds of data from it:
(1) We collect the meta-information (e.g.,~title, speakers, and categories) for indexing and querying a specific \xingbo{speech}, so as to enhance the usability of {\textit{DeHumor}};
(2) We label humor occurrence within a \xingbo{speech} based on the audience behavior markers
(i.e., \texttt{[LAUGHTER]}) that are annotated in the transcripts;
Previous studies have verified that laughter can reliably indicate humor~\cite{morreall1986philosophy,nash2014language,bertero2016multimodal,hasan2019ur};
(3) We use the \emph{transcripts} for content analysis, and
(4) the \emph{audio sequences} for verbal delivery analysis.
For demonstration purpose, we collect two speech datasets from \emph{TED Talks}~\footnote{\url{https://www.ted.com/talks}} and \emph{Comedy Central Stand-up}~\footnote{\url{http://www.cc.com/shows/stand-up}}, which will be described in detail in \autoref{sec:evaluation}. Users can prepare speech datasets of similar structures for their interests.
\textbf{Preprocessing.}
We process the collected data such that (1) the text script and audio are aligned to support multimodal analysis, and (2) the full speech data is segmented into \emph{humor snippets} to support context and sentence-level analysis.
To achieve the alignment, we first detect each word's starting time and ending time in the transcript using P2FA~\cite{yuan2008speaker}.
Thereafter, we align the audio and text modality together at the word level.
As for humor snippet segmentation, we regard a sentence immediately before a laughter marker as a \textbf{punchline} (i.e., the most important sentence that triggers the audience response).
We treat all of the sentences between two punchlines as the \textbf{candidate context paragraph} for the second punchline.
The intuition is that all the information that occurs after a punchline is potentially useful for \xbRev{building up} the next punchline. More concrete context recognition (shown in Sec.~\ref{subsubsec:context_analysis}) should come from these candidate sentences.
As a result, we split the transcripts at laughter markers. Each resulting \textbf{humor snippet} contains exactly one punchline (i.e., the last sentence of the segment) and its contexts (all the preceding sentences).
The audio is clipped correspondingly through the starting and ending times of the sentences.
Eventually, we organize the raw speech data into aligned audio and transcripts per snippet, per sentence, and per word.
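As a sketch of this segmentation step (our illustration, not the system's actual code; the sentence data layout is assumed), the transcript can be split at laughter markers as follows:
\begin{verbatim}
# Illustrative sketch of humor-snippet segmentation. Each sentence is
# assumed to carry a flag marking whether a [LAUGHTER] annotation
# immediately follows it in the transcript.

def segment_snippets(sentences):
    """Split a transcript into snippets, each ending at a punchline."""
    snippets, current = [], []
    for s in sentences:
        current.append(s)
        if s["followed_by_laughter"]:        # this sentence is a punchline
            snippets.append({"context": current[:-1],
                             "punchline": current[-1]})
            current = []
    return snippets

talk = [
    {"text": "So when I show up to a crime scene,",
     "followed_by_laughter": False},
    {"text": "I'll ask the f**king questions, okay?",
     "followed_by_laughter": True},
]
print(segment_snippets(talk))
\end{verbatim}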
\subsection{Interface Overview}
The user interface follows an overview-to-detail flow.
In a collapsible {\textit{{control panel}}} (Fig.~\ref{fig:teaser}{A}), a user can use the metadata (name, views, etc.) and the temporal distribution of punchlines (visualized as a \textbf{bar code chart} (in Fig.~\ref{fig:teaser}A{1})) to find their \xingbo{speech-of-interest}, which will be loaded in the main component, {{\textit{humor exploration}}} view (Fig.~\ref{fig:teaser}C).
{{\textit{Humor exploration}}} visualizes the humor-related details of a speech at different granularity.
First, an augmented time matrix (on the left) summarizes the overall patterns of speech topics, humor insertion, and vocal delivery (\textit{\textbf{R1, R2}}).
With queries on the time matrix (\textit{\textbf{R5}}) or in the {{\textit{humor focus}}} (Fig.~\ref{fig:teaser}B), a user can locate a specific humor-snippet-of-interest into the context panel (on the right of Fig.~\ref{fig:teaser}{C}).
Within each snippet,
the user can examine the humor context (\textit{\textbf{R3}}) through the context linking graph, and understand the interactions between text and audio through the inline humor feature annotations among the transcripts (\textit{\textbf{R4}}).
Additional interactions on finding specific context links, related queries, etc. would further support users' exploration experience (\textit{\textbf{R5}}).
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
We would like to thank our industry collaborator, Own The Room Asia Limited, for offering valuable resources. We also thank
our domain experts and the anonymous reviewers for their insightful comments.
This project is partially funded by a grant from ITF UICP (Project No. UIT/142).
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\newpage
\bibliographystyle{IEEEtran}
\section{Introduction}
To prove by probabilistic methods that every $(n-1)$-dimensional section of the unit cube in $\R^n$ has volume at most $\sqrt 2$, Ball \cite{Ball:1986} made essential use of the inequality
\begin{equation}\label{ball's ineq}
\frac{1}{\pi}\int_{-\infty}^{\infty} \left(\frac{\sin^2 t}{t^2}\right)^pdt\leq \frac{1}{\sqrt p}, \quad p\geq 1,
\end{equation}
in which equality holds if and only if $p=1$.
As we will see in Theorem 2 below, the right side of (\ref{ball's ineq}) has the correct rate of decay, though the limit of the ratio of the left to the right side is $\displaystyle{\sqrt{\frac{3}{\pi}}}$ rather than $1$. With this in mind, we apply Ball's methods to obtain the following improved form of (\ref{ball's ineq}).
\begin{theorem}
Let
\begin{align*}
C(p):=
\left\{ \begin{array}{rcl}
\sqrt{\frac{\pi}{3}},&\quad 1\leq p \leq p_0\\
\\
1+\frac{1}{\sqrt{3 \pi}}\frac{\left(\sqrt 5/6\right)^{2p-1}}{\sqrt p-1/2 \sqrt p},&\quad p> p_0,
\end{array}\right.
\end{align*}
where
\begin{align*}
\frac{\left(\sqrt 5/6\right)^{2p_0-1}}{\sqrt p_0-1/2 \sqrt p_0}=\left(1-\sqrt{3/{\pi}}\right) \pi,
\end{align*}
so that $p_0=1.8414$ to four decimal places.
Then,
\begin{equation}\label{ineq theorem}
\frac{1}{\pi}\int_{-\infty}^{\infty} \left(\frac{\sin^2 t}{t^2}\right)^pdt\leq C(p) \frac{\sqrt{3/\pi}}{\sqrt p}\leq \frac{1}{\sqrt p}, \quad p\geq 1.
\end{equation}
The first two terms are equal if and only if $p=1$.
Further,
\begin{align}
\lim_{p\longrightarrow \infty}\frac{1}{\pi}\int_{-\infty}^{\infty} \left(\frac{\sin^2 t}{t^2}\right)^pdt/ \frac{\sqrt{3/\pi}}{\sqrt p}=1.
\end{align}
\end{theorem}
\section{Symmetric B-splines and the integral $\displaystyle{\int_{-\infty}^{\infty} \left(\frac{\sin^2 t}{t^2}\right)^pdt}$}
The symmetric $B$-splines, $\beta^n$, are defined inductively by
\begin{align*}
\beta^0(x):=\chi_{[-\frac 12, \frac 12]}(x) \quad \mbox{and} \quad \beta^n(x):=\int_{-1/2}^{1/2} \beta^{n-1}(x-y)
dy,
\end{align*}
$n=1,2,...$
Using known properties of these $B$-splines we obtain an asymptotic formula for our integral as $p\longrightarrow \infty$, namely
\begin{theorem}
\begin{equation}\label{lim theorem}
\frac{1}{\pi}\int_{-\infty}^{\infty} \left(\frac{\sin^2 t}{t^2}\right)^pdt \sim \frac{\sqrt{3/\pi}}{\sqrt p}, \quad \mbox{ as } \quad p\longrightarrow \infty.
\end{equation}
\end{theorem}
\begin{proof}
Suppose, to begin with, that $p \in \Z_+$, say $p=n$. Now,
\begin{align*}
\widehat{\beta^n}(t):=\int_{-\infty}^{\infty}\beta^n(s)e^{-2 \pi i t s}ds=\left(\frac{\sin \pi t}{\pi t}\right)^n,
\end{align*}
so Plancherel's theorem yields
\begin{align*}
\frac{1}{\pi}\int_{-\infty}^{\infty} \left(\frac{\sin^2 t}{t^2}\right)^ndt= \int_{- \infty}^{\infty}\left(\frac{\sin \pi t}{\pi t}\right)^{2n}dt=\int_{-\infty}^{\infty}|\beta^n(s)|^2 ds.
\end{align*}
Further, by [2, p. 89],
\begin{align*}
\int_{-\infty}^{\infty}{\beta^n}(s)^2\, ds=\int_{-\infty}^{\infty}\beta^n(s)\,\beta^n(-s)\, ds=\left(\beta^n * \beta^n\right)(0)=\beta^{2n}(0).
\end{align*}
Again, according to Theorem 1 in \cite{Unser:1992},
\begin{align*}
{\beta^{2n}}\left(\sqrt{\frac{2n+1}{12}}x\right) \sim \sqrt{\frac{6}{\pi(2n+1)}} \exp(-x^2/2),
\end{align*}
so in particular,
\begin{align*}
\beta^{2n}(0) \sim \sqrt{\frac{6}{\pi(2n+1)}} \sim \frac{\sqrt{3/ \pi}}{\sqrt{n}}, \quad \mbox{as} \quad n\longrightarrow \infty.
\end{align*}
Finally, $\displaystyle{\int_{-\infty}^{\infty}\left(\frac{\sin^2 t}{t^2}\right)^p dt}$ is a decreasing function of $p$, so one has
\begin{equation}\label{eq 4}
\frac{1}{\pi}\int_{-\infty}^{\infty}\left(\frac{\sin^2 t}{t^2}\right)^{[p]+1} dt\leq \frac{1}{\pi}\int_{-\infty}^{\infty}\left(\frac{\sin^2 t}{t^2}\right)^{p} dt\leq \frac{1}{\pi}\int_{-\infty}^{\infty}\left(\frac{\sin^2 t}{t^2}\right)^{[p]}dt
\end{equation}
and hence (\ref{lim theorem}), since the extreme terms in (\ref{eq 4}) are both asymptotically equal to $\displaystyle{\frac{\sqrt{3/ \pi}}{\sqrt{p}}}$.
\end{proof}
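As an informal numerical check of Theorem 2 (our own sketch, not part of the proof), one can approximate $\beta^{2n}(0)$ by repeated discrete convolution of the indicator of $[-\frac12,\frac12]$, treating $\beta^{2n}$ as a $2n$-fold convolution of the unit box as in the Plancherel argument above, and compare the result with $\sqrt{3/\pi}/\sqrt{n}$:
\begin{verbatim}
# Numerical sketch: the central value of the 2n-fold convolution of
# the box indicator on [-1/2, 1/2] should approach sqrt(3/pi)/sqrt(n).
import numpy as np

def beta_2n_at_zero(n, dx=1e-2):
    half = int(round(0.5 / dx))
    box = np.ones(2 * half + 1)        # indicator of [-1/2, 1/2]
    box[0] = box[-1] = 0.5             # trapezoidal endpoint weights
    f = box.copy()
    for _ in range(2 * n - 1):         # 2n box factors in total
        f = np.convolve(f, box) * dx   # Riemann-sum convolution
    return f[len(f) // 2]              # value at x = 0 (grid center)

for n in (2, 5, 10, 20):
    print(n, beta_2n_at_zero(n), np.sqrt(3 / np.pi) / np.sqrt(n))
\end{verbatim}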
\section{Proof of Theorem 1}
The following estimate was obtained by Ball in [1]:
\begin{align*}
\frac{1}{\pi}\int_{-6/\sqrt{5}}^{6/\sqrt{5}}\left(\frac{\sin^2 t}{t^2}\right)^p dt \leq \frac{\sqrt{3/ \pi}}{\sqrt{p}}.
\end{align*}
Also, we have
\begin{align*}
\frac{1}{\pi}\int_{|t|\geq 6/\sqrt{5}}\left(\frac{\sin^2 t}{t^2}\right)^p dt \leq \frac{2}{\pi}\int_{ 6/\sqrt{5}}^{\infty} t^{-2p} dt= \frac{1}{\pi}\frac{(\sqrt{5}/6)^{2p-1}}{p-\frac12}.
\end{align*}
Altogether, then,
\begin{align*}
\frac{1}{\pi}\int_{-\infty}^{\infty}\left(\frac{\sin^2 t}{t^2}\right)^p dt \leq \left(1+ \frac{1}{\sqrt{3 \pi}} \frac{(\sqrt 5/ 6)^{2p-1}}{\sqrt p- 1/2 \sqrt p}\right) \frac{\sqrt{3/ \pi}}{\sqrt {p}},
\end{align*}
with
\begin{align*}
1+ \frac{1}{\sqrt{3 \pi}} \frac{(\sqrt 5/ 6)^{2p-1}}{\sqrt p- 1/2 \sqrt p}\leq \sqrt{\pi/3}, \quad \mbox{ if and only if } \quad p\geq p_0.
\end{align*}
Finally,
\begin{align*}
\lim_{p\longrightarrow \infty}\frac{1}{\pi}\int_{-\infty}^{\infty} \left(\frac{\sin^2 t}{t^2}\right)^pdt/ \frac{\sqrt{3/\pi}}{\sqrt p}=1,
\end{align*}
in view of Theorem 2. $\hspace{12cm}\Box$
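The inequality (\ref{ineq theorem}) can also be checked numerically. The following sketch is our own sanity check with standard quadrature, not part of the argument; it uses the value $p_0=1.8414$ quoted above.
\begin{verbatim}
# Numerical sanity check of inequality (2): compare the integral with
# C(p)*sqrt(3/pi)/sqrt(p) and with 1/sqrt(p).
import numpy as np
from scipy.integrate import quad

P0 = 1.8414  # crossover value from Theorem 1

def lhs(p):
    # |np.sinc(t/pi)| equals |sin(t)/t| and avoids the 0/0 at t = 0.
    val, _ = quad(lambda t: np.abs(np.sinc(t / np.pi)) ** (2 * p),
                  -np.inf, np.inf, limit=500)
    return val / np.pi

def C(p):
    if p <= P0:
        return np.sqrt(np.pi / 3)
    tail = (np.sqrt(5) / 6) ** (2 * p - 1) \
           / (np.sqrt(p) - 1 / (2 * np.sqrt(p)))
    return 1 + tail / np.sqrt(3 * np.pi)

for p in (1.0, 1.5, 2.0, 5.0, 20.0):
    mid = C(p) * np.sqrt(3 / np.pi) / np.sqrt(p)
    print(p, lhs(p), mid, 1 / np.sqrt(p))
\end{verbatim}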
\section{Introduction}
The slowing down and Coulomb capture of the negative
particle $M^{-}$ ($M^{-}=\mu^{-},\pi^{-}, etc.$) in
hydrogen media lead to the formation of the
$M^{-}$-molecular complex the decay of which results in the exotic
atom formation in highly excited states with the principal
quantum number $n \sim \sqrt{\mu}$ where $\mu$ is the
reduced mass of the exotic atom. Their initial
$nl$-population and kinetic energy distribution of the exotic atom are defined
by the competition of different decay modes of this
complex. The further destiny of the exotic atom depends on
the kinetics of the processes occurring in the deexcitation
cascade. The experimental data are mainly appropriate to
the last stage of the atomic cascade, such as X-ray yields
and the products of the weak or strong interaction of the
exotic particle in the low angular momentum states with
hydrogen isotopes.
Hadronic hydrogen atoms are of special interest among exotic
atoms because they have the simplest structure and serve as a probe in investigations of various aspects of both exotic atom physics and elementary hadron-nucleon interactions at zero energy.
In particular, in order to analyze the precision spectroscopy experimental
data~\cite{1} the kinetic energy distribution must be taken
into account. The velocity of the exotic hydrogen atom plays an important
role due to the effect of the Stark transitions on the L X-ray yields and the Doppler broadening of the L lines owing to the
preceding Coulomb deexcitation transitions. The energy
release in the last process leads to an acceleration of
colliding partners. Thus, a reliable theoretical description of the processes in both low-lying and highly excited states is required for the detailed and proper analysis of these data.
In this paper we present the first step toward an {\em ab initio} quantum-mechanical treatment of non-reactive scattering
processes of the excited antiprotonic hydrogen atom in collisions with the
hydrogenic atom in the ground state:
\begin{itemize}
\item[-] elastic scattering
\begin{equation}
(ax)_{n l} + (be^-)_{1s} \to (ax)_{n l} + (be^-)_{1s};
\end{equation}
\item[-] Stark mixing
\begin{equation}
(ax)_{nl} + (be^-)_{1s} \to (ax)_{nl'} + (be^-)_{1s};
\end{equation}
\item[-] Coulomb deexcitation
\begin{equation}
(ax)_{nl} + (be^-)_{1s} \to (ax)_{n'l'} + (b e^-)_{1s}.
\end{equation}
\end{itemize}
Here $(a,b) = (p,d,t)$ are hydrogen isotopes and $x=K^-, \bar{p}$; $(n,l)$ and $1s$ are the principal and orbital quantum
numbers of the exotic and hydrogenic atom, respectively.
The processes (1) - (2) decelerate or accelerate the exotic atoms, while the Coulomb deexcitation (3) accelerates them, influencing their quantum number and energy distributions during the cascade. The last process has
attracted particular attention especially after the "hot"
$\pi p$ atoms with the kinetic energy up to 200 eV were
found experimentally\cite{2}. Due to the similarity of the general features of
the exotic atoms, the Coulomb deexcitation process must also be taken into account for the other exotic atoms.
Starting from the classical paper by Leon and Bethe~[3],
Stark transitions have been treated in the semiclassical
straight-line trajectory approximation (see [4] and
references therein). The first fully quantum-mechanical treatment
of the processes (1) - (2) based on the adiabatic description
was given in [5-8]. Recently\cite{9, 10} the elastic scattering
and Stark transitions (for $n=2-5$) have also been studied
in a close-coupling approach treating the interaction of
the exotic hydrogen atom with the hydrogenic one in the dipole
approximation with electron screening taken into account by the
model. As for higher exotic atom states $(n>5)$, the
semiclassical approach~\cite{10} is
used for the description of these processes.
Concerning the acceleration process (3) in the muonic and hadronic
hydrogen atoms, the parameterization based on the
calculations in the semiclassical model~\cite{11} (see also [13])
is used for the low-lying states ($n=3-7$) and the
results of the classical-trajectory Monte Carlo
model~\cite{12, 13} are used for higher exotic atom states
in the cascade calculations.
The main aim of this paper is to obtain the cross sections of the processes (1)-(3) for the excited antiprotonic atom, starting from low collision energies, in the framework of a fully quantum-mechanical approach. For this purpose the
unified treatment of the elastic scattering, Stark transitions
and Coulomb deexcitation within the close-coupling method has been used.
This approach has been recently applied for the study of the
differential and total cross sections of the elastic
scattering, Stark transitions and Coulomb deexcitation in
the collisions of the excited muonic~\cite{14} and pionic~\cite{15}
hydrogen atoms with the hydrogen ones. In the following Section
we briefly describe the close-coupling formalism.
The results of the close-coupling calculations concerning
the total cross sections of the processes (1)-(3) are presented
and discussed in Section III. Finally, summary and concluding
remarks are given in Section IV.
\section{Close-coupling approach}
\subsection{ Total wave function of the binary system
in terms of basis states}
The total wave function $\Psi (\boldsymbol{\rho},
\mathbf{r}, \mathbf{R})$ of the four-body system $(a
\bar{p} + b e^{-})$ satisfies the time independent
Schr\"{o}dinger equation with the Hamiltonian which after
separating the center of mass motion can be written as
\begin{equation}
H = -\frac{1}{2m}\Delta _{\mathbf{R}} +
h_{\bar{p}}(\boldsymbol{\rho}) + h_e(\mathbf{r})
+V(\mathbf{r},\boldsymbol{\rho},\mathbf{R})
\label{Ham}
\end{equation}
($m$ is the reduced mass of the system). Here we use the
set of Jacobi coordinates $(\mathbf{R},
\boldsymbol{\rho}, \mathbf{r})$:
\[\mathbf{R} =
\mathbf{R}_H - \mathbf{R}_{\bar{p}a},\quad
\boldsymbol{\rho} = \mathbf{r}_{\bar{p}} -
\mathbf{r}_a,\quad \mathbf{r}= \mathbf{r}_e
-\mathbf{r}_b,\] where $\mathbf{r}_a, \mathbf{r}_b,
\mathbf{r}_{\bar{p}}, \mathbf{r}_e$ are the radius-vectors of the nuclei, antiproton, and electron in the lab system and
$\mathbf{R}_H, \mathbf{R}_{\bar{p} a}$ are the center of
mass radius-vectors of hydrogen and exotic atoms,
respectively. The Hamiltonians $h_{\bar{p}}$ and $h_e $ of
the free exotic and hydrogen atoms, respectively, satisfy
the Schr\"{o}dinger equations
\begin{align}
h_{\bar{p}} \Phi_{nlm}(\boldsymbol{\rho}) &= \varepsilon_{nl}
\Phi_{nlm}(\boldsymbol{\rho}),
\\
h_e\varphi_{n_e l_e}(\mathbf{r}) &=
\epsilon_{n_e}\varphi_{n_e l_e}(\mathbf{r}),
\end{align}
where $\Phi_{nlm}(\boldsymbol{\rho})$ and
$\varphi_{n_e l_e}(\mathbf{r})$ are the hydrogen-like wave
functions of the exotic atom and hydrogen atom bound
states, $\varepsilon_{nl}$ and $\epsilon_{n_e}$ are the
corresponding eigenvalues. In the present study $\varepsilon_{nl}$ includes, beyond the standard non-relativistic two-body Coulomb value, the energy shifts due to the strong interaction, vacuum polarization, and finite size. It is worth noting that in order to treat the hadronic $ns$
states as normal asymptotic states in the scattering
problem we take into consideration only the real part of
the complex strong interaction energy shift.
The interaction potential
\begin{equation}
V(\mathbf{r},\boldsymbol{\rho},\mathbf{R})=V_{ab}+V_{\bar{p} b}+V_{ae}+V_{\bar{p} e} \label{}
\end{equation}
includes the two-body Coulomb interactions between the
particles from two colliding subsystems:
\begin{eqnarray}
V_{ab}=\frac{1}{r_{ab}}=
|\mathbf{R}+\nu \boldsymbol{\rho}-\nu_e \mathbf{r}|^{-1},
V_{\bar{p} b}=
-\frac{1}{r_{\bar{p} b}}=
-|\mathbf{R}-\xi\boldsymbol{\rho}-\nu_e \mathbf{r}|^{-1},
\\
V_{\bar{p} e}=
\frac{1}{r_{\bar{p} e}}=
|\mathbf{R}-\xi \boldsymbol{\rho}+\xi_e \mathbf{r}|^{-1},
V_{ae}=-\frac{1}{r_{ae}}=-|\mathbf{R}+\nu
\boldsymbol{\rho}+\xi_e \mathbf{r}|^{-1},
\end{eqnarray}
where the following notations are used:
\begin{eqnarray}
\nu = m_{\bar{p}}/(m_{\bar{p}} +m_a),\; \xi = m_a
/(m_{\bar{p}}+m_a),
\\ \nu_e = m_e /(m_e+m_b), \; \xi_e = m_b /(m_e +m_b),
\end{eqnarray}
($m_a, m_b, m_{\bar{p}}$ and $m_e$ are the masses of
hydrogen isotopes, antiproton and electron, respectively).
Atomic units (a.u.)
$\hbar=e=m_e m_b/(m_e+m_b)=1$ will be used throughout the
paper unless otherwise stated.\\
In this paper, as well as in the previous studies [11, 12-15],
we assume that the state of the target
electron is fixed during the collision. The electron
excitations can be taken into account in a straightforward
manner.
In a space-fixed coordinate frame we build the basis states
from the eigenvectors of the operators $ h_e, h_{\bar{p}},
\mathbf{l}^2, \mathbf{L}^2, \mathbf{J}^2, J_z$ and the
total parity $\pi$ with the corresponding eigenvalues
$\varepsilon_{1s}, \varepsilon_n, l(l+1), L(L+1), J(J+1),
M$ and $(-1)^{l+L}$, respectively:
\begin{equation}
|1s,n l, L:JM\rangle\equiv \frac{1}{\sqrt{4\pi}} R_{1s}(r)
R_{nl}(\rho) {\cal Y}_{lL}^{JM}(\hat {\b\rho}, \hat{\bf
R}), \label{}
\end{equation}
where
\begin{equation}
{\cal Y}_{l L}^{JM} (\hat {\b\rho}, \hat{\bf R})\equiv
\sum_{m \lambda}\langle l m L \lambda |JM \rangle
Y_{lm}(\hat{\b\rho}) Y_{L \lambda}(\mathbf{\hat R}).
\label{}
\end{equation}
Here the orbital angular momentum $\bf l$ of $(a\bar{p})_{nl}$
is coupled with the orbital momentum $\bf L$ of the
relative motion to give the total angular momentum, $\bf
{J=l + L}$. The explicit form of the radial hydrogen-like
wave functions $R_{nl}(\rho)$ will be given below.
Then, for the fixed values of $J, M, \pi = (-1)^{l+L}$ the
exact solution of the Schr\"{o}dinger equation
\begin{equation}
(E - H) \Psi_{E}^{JM\pi}({\bf r}, {\b\rho}, {\bf R}) = 0,
\label{}
\end{equation}
is expanded as follows
\begin{equation}
\Psi_{E}^{J M \pi}(\mathbf{r}, {\b\rho}, \mathbf{R}) =
\frac{1}{R} \sum_{nl L}G_{nlL}^{J \pi}(R)|1s,n
l,L:JM\rangle, \label{}
\end{equation}
where the $G_{nlL}^{J \pi}(R)$ are the radial functions of
the relative motion and the sum is restricted by the $(l,
L)$ values to satisfy the total parity conservation. This
expansion leads to the coupled radial scattering equations
\begin{equation}
\left(\frac{d^2}{dR^2} + k^2_{nl} -
\frac{L(L+1)}{R^2}\right)G^{J \pi}_{nlL}(R) = 2m
\sum_{n'l'L'}W^{J}_{n'l'L', nlL}(R)\,G^{J \pi
}_{n'l'L'}(R), \label{cce}
\end{equation}
where
$k^{2}_{nl}=2m(E_{cm}+\varepsilon_{n_il_i}-\varepsilon_{nl})$
specifies the channel wave numbers; $E_{cm}$ and $\varepsilon_{n_il_i}$ are the relative motion energy and the exotic atom binding energy in the entrance channel, respectively.
The radial functions $G_{E, nlL}^{J \pi}(R)$ satisfy the
usual plane-wave boundary conditions at $R\rightarrow 0$
\begin{equation}
G_{E, n'l'L'}^{J \pi}(0)=0 (\sim R^{L+1}) \label{}
\end{equation}
and at asymptotic distances ($R\rightarrow \infty$)
\begin{equation}
G_{E, n'l'L'}^{J \pi}(R)\Rightarrow \frac{1}{\sqrt
{k_f}}\{\delta_{if} \delta_{nn'} \delta_{ll'}
\delta_{LL'}e^{-i(k_{i}R-L\pi/2)}- S^J(nl, L\rightarrow
n'l', L')e^{i(k_{f}R-L'\pi/2)}\}, \label{}
\end{equation}
where $k_i$, $k_f$ are the wave numbers of initial and
final channels and $S^J(nl, L\rightarrow n'l', L')$ is the
scattering matrix in the total angular momentum
representation. Here and below the indices of the entrance
channel and target electron state are omitted for brevity.
\subsection{Potential matrix}
Here we present the derivation of the exact matrix of the
interaction potentials involved in the close-coupling
calculations. The interaction potential matrix
$W^{J}_{n'l'L', nlL}$ coupling the asymptotic initial $(n l
L; J)$ and final $(n' l' L'; J)$ channels is defined by
\begin{align}
W^{J}_{n'l'L,nlL}(R)&=\frac{1}{4\pi}\int{\rm d} {\bf
r}\,{\rm d}{\b \rho} \,{\rm d} \hat{\bf R} R^2_{1s}
(r)R_{nl} (\rho)R_{n'l'}(\rho) \nonumber\\ &\times {\cal
Y}^{JM}_{lL} (\hat{\b \rho},\hat{\bf R})\, V({\bf r},{\b
\rho},{\bf R})\, ({\cal Y}^{JM}_{l'L'})^{*} (\hat{\b
\rho},\hat{\bf R}),
\end{align}
where the radial hydrogen-like wave functions
are given explicitly by
\begin{equation}
R_{nl}(\rho)=N_{nl}\left(\frac{2\rho}{n
a}\right)^l\exp(-\rho/na)
\sum_{q=0}^{n-l-1}S_{q}(n,
l)\left(\frac{2\rho}{n a}\right)^q \label{}
\end{equation}
($a$ is the Bohr radius of the exotic atom in a.u.) with
\begin{equation}
N_{nl}=\left(\frac{2}{n a}\right)^{3/2}
\left[\frac{(n+l)!(n-l-1)!}{2n}\right]^{1/2}, \label{}
\end{equation}
and
\begin{equation}
S_{q}(n, l) = (-)^q \frac{1}{q!(n-l-1-q)!(2l+1+q)!}.
\label{}
\end{equation}
Averaging $V({\bf r},{\b \rho},{\bf R})$ over the $1s$ state of the hydrogen atom leads to
\begin{align}
V({\bf R},{\b \rho})&=\frac{1}{4\pi}\int_{0}^{\infty}{\rm
d} {\bf r} R^2_{1s} (r)V({\bf r},{\b \rho},{\bf R})=
\nonumber\\ &= \frac{1}{\xi_e}\{U_{\nu,\xi_e}({\bf
R},{\b \rho})-U_{-\xi,\xi_e}({\bf R},{\b \rho})\} -
\frac{1}{\nu_e}\{U_{\nu,\nu_e}({\bf R},{\b
\rho})-U_{-\xi,\xi_e}({\bf R},{\b \rho})\}.
\end{align}
Then we use the transformation
\begin{multline}
U_{\alpha,\beta}({\bf R}, {\b \rho})=(1+\frac{\beta}{|{\bf
R}+ \alpha\boldsymbol{\rho}|}) {\rm e}^{-\frac{2|{\bf
R}+\alpha\boldsymbol{\rho}|}{\beta}}\equiv
\lim_{x\to 1}\left(1-\frac{1}{2}\frac{\partial}{\partial x}\right)
\beta\frac{{\rm e}^{-\frac{2x|{\bf
R}+\alpha\boldsymbol{\rho}|}{\beta}}} {|{\bf
R}+\alpha\boldsymbol{\rho}|}.
\end{multline}
which allows us to apply the addition theorem for the spherical Bessel functions~\cite{16}
\begin{multline}
\frac{{\rm e}^{-\lambda |{\bf R}_1+{\bf r}_1|}}{|{\bf
R}_1+{\bf r}_1|}=
\frac{4\pi}{\sqrt{R_1r_1}}\sum_{t\tau}(-1)^t
Y^*_{t\tau}(\hat{\bf R}_1) Y_{t\tau}(\hat{\bf r}_1)\times
\\ \times \left\{ K_{t+1/2}(\lambda R_1)\,I_{t+1/2}(\lambda
r_1)\left |_{r_1 <R_1} + I_{t+1/2}(\lambda
R_1)\,K_{t+1/2}(\lambda r_1)\right |_{r_1 >R_1} \right \}
\end{multline}
($I_p(x)$ and $K_p(x)$ are the modified spherical Bessel
functions of the first and third kind). Furthermore, by substituting Eqs.~(20)-(25) into Eq.~(19) we can integrate over the angular variables $(\hat{\bf R}, \hat{\boldsymbol{\rho}})$.
Finally, applying
the angular momentum algebra and integrating over $\rho$, we obtain:
\begin{align}
W^{J}_{nlL,n'l'L'}(R)&=(-1)^{J+l+l'}i^{l'+L'-l-L}\sqrt{\hat{l}\hat{l'}\hat{L}\hat{L'}}
\sum_{t=0}^{t_m}(l0l'0|t0)(L0L'0|t0)
\left\{\begin{array}{lll}
l&l'&t\\L'&L&J\end{array}\right\}\times \nonumber \\
&\times \left\{\frac{1}{\xi_e}\left[ (-1)^t W_t(R,\nu
,\xi_e;nl,n'l') - W_t (R,\xi,\xi_e;nl,n'l')\right] \right
.- \nonumber \\ & - \left . \frac{1}{\nu_e}\left[ (-1)^t
W_t(R,\nu,\nu_e;nl,n'l') - W_t(R,\xi,\nu_e;nl,n'l')\right]
\right\}
\end{align}
($t_m$ is the maximum value of the allowed multipoles). Here the following notation is used:
\begin{align}
W_t(R,\alpha,\beta;n l,n'l')&={\cal N}_{nl,n'l'}
\sum_{m_1=0}^{n-l-1}S_{m_1}(n,l)\left(\frac{2n'}{n+n'}\right)^{m_1}
\sum_{m_2=0}^{n'-l'-1} S_{m_2}(n',
l')\left(\frac{2n}{n+n'}\right)^{m_2} \times \nonumber \\
\times
&\left\{H_t(x)J_1^{t,s}(x,\lambda(n,n',\alpha,\beta))-
h_t(x)J_2^{t,s}(x,\lambda(n,n',\alpha,\beta)) + \right .
\nonumber \\
&+F_t(x)J_3^{t,s}(x,\lambda(n,n',\alpha,\beta)) \left .
+f_t(x)J_4^{t,s}(x,\lambda(n,n',\alpha,\beta)) \right\},
\end{align}
where $x=2R/\beta $, $ s=l+l'+m_1+m_2 $, $ \hat{L}\equiv
2L+1$;
\begin{equation}
{\cal N}_{nl,n'l'} =
\frac{1}{n+n'}\!\left(\frac{2n'}{n+n'}\right)^{l+1}\!\left(\frac{2n}{n+n'}\right)^{l'+1}
\!\sqrt{(n+l)!(n-l-1)!(n'+l')!(n'-l'-1)!}; \label{}
\end{equation}
\begin{equation}
\lambda(n,n',\alpha,\beta)=\frac{2nn'}{n+n'}\frac{a\alpha}{\beta};
\label{}
\end{equation}
\begin{equation}
H_t(x)=(1-2t)h_t(x)+xh_{t+1}(x); \label{}
\end{equation}
\begin{equation} F_t(x)=(1-2t)f_t(x)-xf_{t+1}(x). \label{}
\end{equation}
The functions $h_t(x)$ and $f_t(x)$ are given by
\begin{equation}
h_t(x)\equiv\sqrt{\frac{2}{\pi x}}K_{t+1/2}(x) \label{}
\end{equation}
and
\begin{equation}
f_t(x)\equiv\sqrt{\frac{\pi}{2 x}}I_{t+1/2}(x). \label{}
\end{equation}
The radial integrals $J_i^{t,s}(x,\lambda)$ are defined as
follows:
\begin{equation}
J_1^{t,s}(x,\lambda)=\int_{0}^{x/\lambda}
y^{s+2}e^{-y}f_t(\lambda y)\,{\rm d}y,
\end{equation}
\begin{equation}
J_2^{t,s}(x,\lambda)=\lambda J_1^{t+1,s+1}(x,\lambda),
\end{equation}
\begin{equation}
J_3^{t,s}(x,\lambda)=\int_{x/\lambda}^{\infty}
y^{s+2}e^{-y}h_t(\lambda y)\,{\rm d}y,
\end{equation}
\begin{equation}
J_4^{t,s}(x,\lambda)=\lambda J_3^{t+1,s+1}(x,\lambda)
\end{equation}
and calculated analytically using the power series
for the modified Bessel functions.
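As a numerical illustration (our own sketch, independent of the analytic evaluation used in the calculations), the functions $h_t$, $f_t$ can be evaluated with standard special-function routines and the radial integrals $J_1$, $J_3$ by direct quadrature; $h_0$ and $f_0$ are checked against their closed forms $e^{-x}/x$ and $\sinh x/x$:
\begin{verbatim}
# Sketch: evaluate h_t, f_t of Eqs. (32)-(33) with scipy and the
# radial integrals J_1, J_3 of Eqs. (34) and (36) by quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.special import iv, kv

def h(t, x):
    return np.sqrt(2.0 / (np.pi * x)) * kv(t + 0.5, x)

def f(t, x):
    return np.sqrt(np.pi / (2.0 * x)) * iv(t + 0.5, x)

def J1(t, s, x, lam):
    val, _ = quad(lambda y: y**(s + 2) * np.exp(-y) * f(t, lam * y),
                  0.0, x / lam)
    return val

def J3(t, s, x, lam):
    val, _ = quad(lambda y: y**(s + 2) * np.exp(-y) * h(t, lam * y),
                  x / lam, np.inf)
    return val

x = 2.0
print(h(0, x), np.exp(-x) / x)    # h_0(x) = exp(-x)/x
print(f(0, x), np.sinh(x) / x)    # f_0(x) = sinh(x)/x
print(J1(1, 2, x, 0.7), J3(1, 2, x, 0.7))
\end{verbatim}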
\subsection {Cross sections}
The transition amplitude from the initial state $|nlm\rangle$ to the final state $|n'l'm'\rangle$ of the exotic atom can be defined by
\begin{align}
f(nlm\rightarrow n'l'm'|\mathbf{k}_i, \mathbf{k}_f)&= \frac{2\pi
i}{\sqrt{k_{i}k_{f}}}\sum_{JMLL'\lambda
\lambda'}i^{L'-L}\langle lmL\lambda |J M\rangle \langle
l'm'L'\lambda'|J M\rangle \times \nonumber \\ &\times
Y_{L\lambda}^{*}(\mathbf{\hat k_i})
Y_{L'\lambda'}(\mathbf{\hat k_f})T^J(nlL\rightarrow
n'l'L'). \label{}
\end{align}
Here, $k_i$ and $k_f$ are the center of mass relative momenta in the
initial and final channels; $\mathbf{\hat k_i}$ and $\mathbf{\hat k_f}$ are their unit vectors in the
space-fixed system, respectively, and, finally, the
transition matrix $T^J(nlL\rightarrow n'l'L')$ used here is
given by
\begin{equation}
T^J(nl, L\rightarrow n'l', L')=\delta_{nn'}\delta_{ll'}\delta_{LL'} - S^J(nl, L\rightarrow n'l', L'). \label{}
\end{equation}
In terms of the scattering amplitude (38) defined above, the differential and total cross sections of all the processes under consideration for the transition from the initial $(n l)$ state to the final $(n'l')$ state are defined as follows:
\\ differential cross sections
\begin{equation} \label{}
\frac{d\sigma_{nl \rightarrow n'l'}}{d\Omega}
=\frac{1}{2l+1}\frac{k_f}{k_i}\sum_{m m'}|f(nlm\rightarrow n'l'm'|\mathbf{k}_i, \mathbf{k}_f)|^2,
\end{equation}
partial cross sections
\begin{equation}
\sigma^J_{n l \rightarrow n'l'}( E) =
\frac{\pi}{k_{i}^2}\frac{2J+1}{2l+1} \sum_{L
L'}|T^J(nlL\rightarrow n'l'L')|^2, \label{}
\end{equation}
and the total cross sections for the $n l\rightarrow n'l'$
transition are obtained by summing the corresponding partial
cross sections over the total angular momentum $J$:
\begin{equation}
\sigma_{nl \to n'l'}(E) = \sum_{J}\sigma^J_{n l \rightarrow
n'l'}( E).
\end{equation}
Finally, the cross sections for the $n \rightarrow n'$ transitions, averaged over the initial orbital angular momentum $l$, are defined by summing over $l'$ and $l$ ($l = l'$ for the elastic scattering and $l \neq l'$ for the Stark transitions) with the statistical weight $(2l+1)/n^2$ in the case of the degenerate exotic atom states, and with the weight $(2l+1)/(n^2-1)$ in the case when the energy shift of the $ns$ state is taken into account:
\begin{equation}
\sigma_{n\to n'}(E) =
\frac{\pi}{k_{i}^2}\frac{1}{n^2}\sum_{l,\,l\,' \, L L'
J}(2J+1) |T^J(nlL\rightarrow n'l'L')|^2, \label{}
\end{equation}
and
\begin{equation}
\sigma_{n\to n'}^{l>0}(E) =
\frac{\pi}{k_{i}^2}\frac{1}{n^2-1}\sum_{l>0,\,l\,' \, L L'
J}(2J+1) |T^J(nlL\rightarrow n'l'L')|^2,
\label{}
\end{equation}
respectively.
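As a schematic illustration of how the $l$-averaged cross section (43) is assembled from the $T$-matrix elements (our own sketch; the data layout is an assumption, not that of the production code):
\begin{verbatim}
# Sketch: assemble the l-averaged cross section of Eq. (43) from
# T-matrix elements stored as T[(J, l, L, l', L')] at momentum k_i.
import numpy as np

def sigma_n_to_nprime(T, k_i, n):
    total = sum((2 * J + 1) * abs(amp) ** 2
                for (J, l, L, lp, Lp), amp in T.items())
    return np.pi / k_i**2 / n**2 * total

# Toy input: two partial-wave amplitudes for an n -> n' transition.
T = {(0, 1, 1, 0, 0): 0.20 + 0.10j,
     (1, 1, 0, 0, 1): 0.05 - 0.30j}
print(sigma_n_to_nprime(T, k_i=0.5, n=3))
\end{verbatim}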
\section{Results}
The close-coupling method described in the previous Section
has been used to obtain the total cross sections for the
collisions of the $\bar{p} p$ atoms in excited states with
hydrogen atoms. The present calculations had at least two goals:
first, to apply the fully quantum-mechanical approach for
the study of the processes (1) - (3) and, second, to clarify the effect of the energy shifts of the $ns$ states of the
antiprotonic hydrogen atom on the cross sections of these processes.
The coupled differential equations (16) are solved numerically
by the Numerov method with the standing-wave boundary
conditions involving the real $K$-matrix. The corresponding $T$-matrix is obtained from the $K$-matrix using the matrix relation $ T=-2iK(I- iK)^{-1}$, which makes $S=I-T$ unitary. In the calculations both the exact
interaction matrix and all the open channels with $n'\le n$ have
been taken into account. The closed channels are not
considered in the present study.
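For illustration, the $K\to T$ conversion amounts to a few lines of linear algebra. The following is our own sketch (the matrix values are toy numbers, and the sign convention is fixed by requiring $S=I-T$ to be unitary):
\begin{verbatim}
# Sketch of the K- to T-matrix conversion, T = -2iK(I - iK)^{-1},
# for a real symmetric standing-wave K-matrix; the last line checks
# that S = I - T is unitary.
import numpy as np

def k_to_t(K):
    I = np.eye(K.shape[0])
    return -2j * K @ np.linalg.inv(I - 1j * K)

K = np.array([[0.3, 0.1],
              [0.1, -0.2]])
T = k_to_t(K)
S = np.eye(2) - T
print(np.allclose(S @ S.conj().T, np.eye(2)))   # True: S is unitary
\end{verbatim}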
The close-coupling calculations have been carried out for
the relative collision energies $E_{\rm cm}$ from 0.05 up
to 50 eV and for the excited states with $n=3$--$14$. At
all energies the convergence of the partial wave expansion
was achieved and all the cross sections were calculated with an accuracy better than 0.1\%.
\begin{figure}[h]
\includegraphics[width=0.7\textwidth,keepaspectratio]{fig1.eps}
\caption{The total cross sections $\sigma_{nl\to nl'}$ for the
collisions of the $p\bar{p}$ atom in the $n=8$ state with
hydrogen atom at $E_{\rm cm}=1.4$~eV. The dashed and dotted
lines connect the points corresponding to the calculations both
with and without taking into account the $ns$-state energy shifts,
respectively. The dotted lines denote the results obtained
without including the $ns$-states into the basis set.}
\label{fig1}
\end{figure}
The results of the calculations are presented in Figures 1-4.
In Fig.~1 we present the calculated total cross sections of the $nl \to nl'$ transitions for $n=8$ at the kinetic energy $E_{\rm cm}=1.4$~eV, computed both with and without the $ns$-state energy shifts.
The following
measured world-average value~\cite{17} for the
spin-averaged shift
$$\epsilon_{1s} = (721 \pm 14) {\rm eV }$$
was used in the present calculations
and the energy shifts of the $ns$
states are defined by $\epsilon_{1s} /n^3$.
\begin{figure}[h!]
\includegraphics[width=0.7\textwidth,keepaspectratio]{fig2.eps}
\caption{The $l$-averaged cross sections of the elastic scattering for
the collisions of the $p\bar{p}$ atom with the hydrogenic atom for the
different values of the principal quantum number $n$}
\label{fig2}
\end{figure}
Contrary to the $(\pi^- p)$ atom, the energy shifts of the $ns$ states in the $(\bar{p}p)$ atom are repulsive; hence, the $nl \to ns$ transitions are closed below the corresponding threshold and, moreover, according to the present study (e.g., see Fig. 1), the Stark transitions both from the $ns$ states and to the $ns$ state at a given collision energy are strongly suppressed at kinetic energies above the threshold.
A similar effect is also observed for the elastic $np \to np$ transitions. The other transitions are practically unchanged at fixed energy. An analog of this effect can be modeled by excluding the $ns$ states from the basis set. At energies well above the threshold the effect is much weaker. Therefore, the strong interaction shift in antiprotonic hydrogen (a similar effect must also occur in the kaonic hydrogen atom) makes the role of Stark transitions in the absorption from the $ns$ states essentially different from that in pionic hydrogen atoms. The influence of the strong interaction shift is enhanced for the lower states and becomes less pronounced for the highly excited states of the antiprotonic atom.
\begin{figure}[h!]
\includegraphics[width=0.7\textwidth,keepaspectratio]{fig3.eps}
\caption{The $l$-averaged cross sections of the Stark transitions for
the collisions of the $p\bar{p}$ atom with the hydrogenic one. The results of
the calculations~\cite{10} in the semiclassical model are shown
for $n=8$ with triangles}
\label{fig3}
\end{figure}
In Figures 2 and 3 the energy dependence of the $l$-averaged total elastic and Stark cross sections is shown for different principal quantum number values from $n=3$ up to $n=12$.
Since the relative contribution of the $ns$ state to the $l$-averaged cross sections is small, the calculations both with and without the energy shift are practically indistinguishable at energies above the corresponding threshold.
So, in Figs. 2 and 3 the small energy region (below the corresponding
threshold) corresponds to the calculations
without the energy shift taken into account. In Fig. 3 the $l$-averaged cross
section for $n$=8 are compared with the calculations in the framework of the
semiclassical model~\cite{10}. On the whole, fair agreement is observed, but the semiclassical model results in a different energy dependence, especially at low collision energies, and gives smaller cross sections than those obtained in the present approach.
\begin{figure}[h]
\includegraphics[width=0.7\textwidth,keepaspectratio]{fig4.eps}
\caption{The $l$-averaged cross sections of Coulomb deexcitation with
$\Delta n=1,2,3$ for the collisions of the $p\bar{p}$ atom
($n=8$) with the hydrogenic atom. The dashed line shows the fit
used in cascade calculations~\cite{13} for the transitions with
$\Delta n =1$and based on the mass-scaling of the results~\cite{11} for the
muonic atom. The present calculations of the $l$-averaged elastic and
Stark cross sections are shown for comparison}
\label{fig4}
\end{figure}
The energy dependence of the Coulomb deexcitation cross sections obtained in the present study is illustrated in Fig. 4 for $n=8$ and the different values $\Delta n = 1$, 2, and 3. These cross sections have the following special features: their energy dependence is similar to, but steeper than, that of the elastic scattering and Stark transitions (see also Fig. 4), and the contribution of the transitions with $\Delta n > 1$ is comparable with the one for $\Delta n = 1$, amounting to about 50\%. The effect of the $ns$-state energy shifts on the $l$-averaged Coulomb deexcitation cross sections is small for the same reason as discussed above (the small statistical weight of the $ns$ state).
In Fig. 4 we also compare our results with those obtained in the semiclassical model
(we use the parameterization suggested in [13] which gives a fair description
of the Coulomb cross sections from [11]) for the $\Delta n$ =1 transition.
Satisfactory agreement is observed, but this agreement is rather accidental and does not hold for other $n$ values.
The distribution over the final states $n'$ is completely different from the semiclassical results [11], as illustrated in Fig.~4. The present calculations
predict that $\Delta n>1$ transitions give a substantial contribution to the
Coulomb deexcitation of the highly excited antiprotonic hydrogen atom, in agreement with our previous results for the muonic and pionic hydrogen~[14,15] atoms.
\section{Conclusion}
The unified treatment of the elastic scattering, Stark transitions and Coulomb deexcitation is presented within an {\em ab initio} quantum-mechanical approach, and for the first time the cross sections of these processes have been calculated for the highly excited antiprotonic hydrogen atom.
The influence of the energy shifts of the $ns$ states on
these processes has been studied.
We have found that strong interaction shifts
in the antiprotonic hydrogen atom lead to
substantial suppression of both the $ns \to nl'\neq 0$ and
$nl'\neq 0 \to ns$ transitions. At the same time, the cross sections of the elastic scattering and Stark transitions for the states with $l>2$ are practically unaffected.
The present study is the first step toward a reliable theoretical input for realistic kinetics calculations.
We are grateful to Prof. L.Ponomarev for the stimulating
interest and support of this investigation.
\section{Introduction}
Magnetars are a subclass of neutron stars (NSs) that have long spin periods (2--12 s) and large inferred dipole
magnetic fields ($\approx 10^{14-15} $ G). They are also characterized by unsteady spin down and variable X-ray emission.
Unlike ``normal'' rotation powered radio pulsars, magnetars' X-ray luminosity can exceed their spin-down power.
This luminosity is thought to be supplied by their decaying magnetic fields.
\xte, the first discovered transient magnetar, was found during its X-ray outburst in 2003 \citep{ibr04}.
Subsequently \xte \ became the first magnetar observed to produce emission at radio frequencies \citep{hal05,cam06}.
Currently, four of the twenty-three confirmed magnetars are known to be transient radio emitters \citep{ola14}.
The X-ray flux of \xte \ declined over the next four years, and it became radio quiet in late 2008 \citep{got07,ber09,cam06,cam15}.
A theoretical model of magnetar outbursts relevant to \xte \ has been developed by \cite{bel09}.
In Beloborodov's model, a twisted NS magnetosphere stores energy that is released in the X-ray outburst.
As the magnetosphere untwists, particles impact and heat the NS surface, resulting in observable thermal emission.
Beloborodov's model predicts an approximately exponential luminosity decay consistent with early observations of \xte.
Models of magnetar spectra relevant to \xte \ have been developed by \cite{guv07}, \cite{alb10}, and
\cite{ber11}.
Guver et al. calculated the effects of photon propagation through the NS atmosphere and magnetosphere, accounting for both quantum and general relativistic effects.
Guver et al. fit their model to \xte \ data and calculated a magnetospheric magnetic field strength close to the dipole spin-down value.
Bernardini et al. calculated the effects of light bending and relativistic beaming on the X-ray spectrum.
They fit this model to multiple epochs of data and calculated the viewing geometry of hotter regions on the surface of \xte.
Albano et al. used Monte Carlo simulations of the propagation of photons from the NS surface through the magnetosphere and produced synthetic pulse profiles to fit to observations.
Albano et al. found that a limited region of the surface of \xte \ was heated, and that this region's size and temperature were decreasing after its outburst.
In this paper we report on the long term X-ray evolution of \xte, with all available \xmm \ data including previously unpublished data from 2008 through 2014.
We also analyze a \chandra \ observation taken near the time when \xte \ became radio quiet in late 2008, looking for correlations between its X-ray and radio activity.
We explore several spectral models, and attempt to determine if \xte \ has returned to a ``quiescent'' state.
\section{Data Reduction and Analysis}
Table 1 summarizes the X-ray observations considered in this paper.
We mostly focus on the \xmm \ observations because of \xmm's broader energy coverage, larger effective area, and more stable effective area.
\begin{table}
\caption{Log of \xmm \ and \chandra \ Observations}
\begin{tabular}{lccc}
\hline \hline
ObsID & Date & Exposure & Net Counts \\
& (UT) & (ks) & (0.3 - 10 keV) \\
\hline
0161360301 & 2003 Sep 8 & 12.1 & 68958 \\
0152833201 & 2003 Oct 12 & 8.9 & 29268 \\
0161360501 & 2004 Mar 11 & 18.9 & 15485 \\
0164560601 & 2004 Sep 18 & 28.9 & 83341 \\
0301270501 & 2005 Mar 18 & 42.2 & 64978 \\
0301270401 & 2005 Sep 20 & 42.2 & 28726 \\
0301270301 & 2006 Mar 12 & 51.4 & 14104 \\
0406800601 & 2006 Sep 24 & 50.3 & 21107 \\
0406800701 & 2007 Mar 6 & 68.3 & 17347 \\
0504650201 & 2007 Sep 16 & 74.9 & 31397 \\
7594 (\chandra) & 2008 Mar 18 & 29.6 & 5786 \\
0552800201 & 2009 Mar 5 & 65.8 & 14022 \\
0605990201 & 2009 Sep 5 & 21.6 & 7746 \\
0605990301 & 2009 Sep 7 & 19.9 & 6738 \\
0605990401 & 2009 Sep 23 & 14.2 & 4666 \\
0605990501 & 2010 Apr 9 & 9.9 & 1485 \\
0605990601 & 2010 Sep 5 & 11.3 & 3655 \\
0671060101 & 2011 Apr 3 & 22.9 & 6975 \\
0671060201 & 2011 Sep 9 & 15.9 & 5206 \\
0691070301 & 2012 Sep 6 & 17.9 & 6335 \\
0691070401 & 2013 Mar 3 & 17.9 & 3170 \\
0720780201 & 2013 Sep 5 & 24.5 & 7857 \\
0720780301 & 2014 Mar 4 & 26.0 & 8200 \\
\hline
\end{tabular}
\end{table}
\subsection{Data Reduction}
All of the XMM-Newton data were reduced and extracted using the Science Analysis Software (SAS) version 13.5.
We only used data from the EPIC pn CCD because of its superior long term stability and throughput at low energies (which was particularly important in the later observations as the blackbody temperatures decreased).
All observations were performed in Imaging Large Window mode, with the exception of the 2003 September 8 and 2003 October 12 observations, which were performed in Small Window mode and Full Window mode respectively.
The XMM-Newton data were filtered to remove time intervals with high-energy flares, and events with FLAG $ = 0$ and PATTERN $\le4$ were selected from these good time intervals.
Circular source and background regions were created with radii of $45^{''}$, or sometimes as small as $30^{''}$ in order to avoid overlap with the edge of a detector chip.
The spectra were grouped with a minimum of 25 counts per bin, and such that the energy resolution was oversampled by no more than a factor of 3.
The \chandra \ observation of 2008 March 18 was reduced and extracted using the Chandra Interactive Analysis of Observations software (CIAO) version 4.6.
The observation was performed with the Advanced CCD Imaging Spectrometer spectral component (ACIS-S) in Very Faint Timed Exposure mode, with a custom subarray
of 100 rows to achieve a frame time of 0.3~s, sufficient to resolve the 5.54~s pulsations.
We followed the online CIAO thread\footnote{http://cxc.harvard.edu/ciao/threads/pointlike/} ``Extract Spectrum and Response Files for a Point-like Source''.
Circular source and background regions were created with radii of $8^{''}$.
The source spectrum was grouped with the default value of 15 counts per bin.
All spectral fitting of the \xmm \ and \chandra \ data was performed using XSPEC \citep{arn96}.
\subsection{Three-To-Two-Blackbody Model}
Following \cite{ber09}, we found that the first seven \xmm \ observations, from 2003 September 8 to 2006 March 12, are well described by a three blackbody model,
where the lowest temperature component (the ``cold'' region) is interpreted as emission from the whole NS surface.
The temperature and area of this cold component are therefore held constant across all epochs.
The hot temperature component is thought of as a small hot spot on the NS surface, and the warm temperature component is thought of as a warm annulus surrounding the hot spot.
This simplified model is a rough approximation to a hot spot with a large temperature gradient on the surface of the NS.
This temperature gradient is determined by the details of the heating and the NS surface thermal conductivity.
The central region is cooling faster than the cooler outer regions, and there is a gradual transition to times when the hot spot and annulus are well described by a single temperature.
Starting with the eighth XMM observation on 2006 September 24, and through the most recent observation on 2014 March 4, we find that the X-ray spectra are well modeled by a two temperature blackbody model.
The coldest temperature component is again interpreted as emission from the whole surface and is therefore held constant across all epochs (including the earlier observations from 2003 September 8 to 2006 March 12).
This model, in which the spectra of \xte \ evolve from being well described by three blackbodies at earlier times to two blackbodies at later times, is what we refer to here as the three-to-two-blackbody model.
This is distinct from the two blackbody model, described in the next subsection, in which the spectra are fitted by two blackbodies at all epochs, and it is not assumed that we can see emission from the whole stellar surface.
Essential features of the three-to-two-blackbody model are that hydrogen column density, cold blackbody temperature and cold blackbody normalization are held constant at all epochs.
The hot and warm blackbody components are allowed to vary independently at each epoch.
Another feature of the three-to-two-blackbody model is an absorption line at 1.2 keV.
The details of the fitting procedure are as follows.
We began by simultaneously fitting the last fifteen \xmm \ observations, from 2006 September 24 to 2014 March 4.
In this simultaneous fitting procedure, $N_{\rm H}$, $\textit{kT}_{\rm{cold}}$ and the cold temperature normalization are constrained to be equal at each epoch, and the other model parameters are varied independently at each epoch until a total chi-square minimum value is found.
The cold blackbody component is relatively dominant over the warm component from 2006 September 24 to 2014 March 4, when we are in the two blackbody regime of the entire three-to-two-blackbody model.
Therefore, at these later times, we obtained better (than at earlier times) constraints on the cold blackbody component normalization and temperature $\textit{kT}_{\rm{cold}}$, as well as the column density $N_{\rm H}$.
Consequently, these are the values that we chose to hold constant throughout the three-to-two-blackbody model.
We used the XSPEC model 'wabs', which uses abundances from \cite{mor83}, to calculate the effects of photoelectric absorption.
All spectral modeling was done in the 0.3 to 10 keV energy range.
The best fit value of the column density is $N_{\rm H}=(0.945\pm0.045)\times10^{22}$ cm$^{-2}$.
This is significantly larger than the value of $N_{\rm H}=(0.72\pm0.02)\times10^{22}$ cm$^{-2}$ found by \cite{ber09}.
This discrepancy is due to the additional new data that has now been modelled by the two blackbody regime of the three-to-two-blackbody model.
\cite{ber09} also found $kT_{\rm{cold}}=0.144\pm0.003$ keV, which is statistically consistent with our measurement of $kT_{\rm{cold}}=0.138\pm0.006$ keV.
For this two blackbody model simultaneous fit we obtained a reduced chi-square value of 1.15 for 804 degrees of freedom.
Next we simultaneously fit the first seven \xmm \ observations, from 2003 September 8 to 2006 March 12.
For this three blackbody model, we obtained a reduced chi-square value of 1.16 for 775 degrees of freedom.
An absorption feature at 1.2 keV was included in all of the fits described here.
Below we describe the significance of this feature and the details of how it was incorporated into the three-to-two-blackbody model.
The absorption feature at 1.2 keV was modeled with a Gaussian absorption line with the optical depth profile$$\tau(E)=\frac{d}{\sigma\sqrt{2\pi}}\ e^{-\frac{1}{2}\left(\frac{E-E_{\rm cyc}}{\sigma}\right)^2}.$$
This feature was also modeled by \cite{ber09}, and is likely produced by resonant proton scattering.
If resonant proton scattering is the origin of this spectral feature, then it provides a measurement of the magnetic field in which the scattering occurs.
The energy of such an absorption line is $E_{\rm{cyc}} = 0.63(1+z)^{-1}(B/10^{14}\,{\rm G})$ keV, where $(1+z)^{-1}= (1-2GM/Rc^{2})^{1/2}$ is the gravitational redshift.
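As a concrete illustration of these two relations, the following minimal sketch (ours, not the fitting code used here) evaluates the Gaussian optical depth and inverts the cyclotron relation for $B$, assuming a $1.4\,M_{\odot}$, 12 km star; the default $d$ and $\sigma$ are set to the late-epoch line parameters quoted below.
\begin{verbatim}
import numpy as np

# Illustrative sketch only (not the fitting code): evaluate the Gaussian
# optical-depth profile and invert E_cyc = 0.63 (1+z)^{-1} (B / 1e14 G) keV
# for B; mass and radius below are assumptions, not fit results.

def tau(E, d=0.07, sigma=0.13, E_cyc=1.2):
    """Gaussian optical depth tau(E); all energies in keV."""
    return d / (sigma * np.sqrt(2 * np.pi)) \
        * np.exp(-0.5 * ((E - E_cyc) / sigma) ** 2)

def B14_from_line(E_cyc=1.2, M_sun=1.4, R_km=12.0):
    """B in units of 1e14 G implied by a proton cyclotron line energy."""
    G, c = 6.674e-8, 2.998e10                          # cgs
    M, R = M_sun * 1.989e33, R_km * 1.0e5
    inv_1pz = np.sqrt(1.0 - 2.0 * G * M / (R * c**2))  # (1+z)^{-1}
    return E_cyc / (0.63 * inv_1pz)

print(B14_from_line())   # ~2.35, i.e. B ~ 2.4e14 G under these assumptions
\end{verbatim}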
From 2006 September 24 to 2014 March 4, we did not find significant changes in the absorption line among the different epochs.
We therefore held the parameters of this absorption line fixed in all of the later observations, from 2006 September 24 to 2014 March 4.
We also did not find significant changes in the absorption line among the different epochs from 2003 September 8 to 2006 March 12.
Since the line energy centroid did not differ with any statistical significance between earlier and later observations, we adopted a constant value of 1.2 keV across all epochs.
In summary, the line energy was held fixed at 1.2 keV at all epochs, but the line strength and width were allowed to differ between the two and three blackbody regimes.
The line width $\sigma$ was about 320 eV in the three blackbody regime, and about 130 eV in the two blackbody regime.
The strength of the line $d$ decreased from 0.28 in earlier observations to 0.07 in later observations.
We also searched for evidence of phase-variability of the absorption feature by fitting the peaks and troughs of the pulse profiles separately.
We did not find any statistically significant differences in the best fit Gaussian absorption feature parameters.
We also note that Bernardini et al. did not find it necessary to include an absorption line in the first three observations on 2003 September 8, 2003 October 12, and 2004 March 11.
However, with our higher adopted column density value, we found that an absorption feature was necessary in these observations as well.
To illustrate this, we show the line residuals of the 2003 September 8 observation in Figure \ref{fig:line_res}, where all the model parameters are held at the best fit values of Table 2, but the line strength is set to zero.
Similarly, in Figures \ref{fig:11_resid} and \ref{fig:22_resid} we show the absorption feature in the 2009 March 5 and 2014 March 4 observations.
Table 2 lists the results of the fits to the three-to-two-blackbody model.
Figures \ref{fig:xraylum}, \ref{fig:temps} and \ref{fig:areas} illustrate the evolution of the bolometric luminosity, temperatures and apparent emitting areas.
Because the absorption feature at 1.2 keV was characterized by a different strength and width in the two and three blackbody regimes, we set the strength of this component to zero when computing the fluxes.
This convention allows for a meaningful comparison of the fluxes as \xte \ moves from the three blackbody regime to the two blackbody regime.
The model parameters in the three blackbody regime and the model parameters in the two blackbody regime are technically components of distinct models.
For this reason we introduce new symbols $\textit{kT}_{\rm{warm-hot}}$, $\textit{F}_{\rm{warm-hot}}$, and $\textit{A}_{\rm{warm-hot}}$ in Table 2.
The three-to-two-blackbody model is an approximation to a gradual transition between these two regimes, and that is why there is a jump in the values of $kT_{\rm{warm}}$ to the values of $kT_{\rm{warm-hot}}$ when the model is switched.
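For reference, the bolometric luminosities in Table 2 follow from the blackbody law $L=\sigma_{\rm SB}AT^{4}$; the short sketch below (our own illustration, not the analysis pipeline) reproduces the order of magnitude of the cold-component luminosity.
\begin{verbatim}
import numpy as np

# Minimal sketch: bolometric blackbody luminosity L = sigma_SB * A * T^4
# from a fitted temperature kT (keV) and apparent emitting area A (cm^2).

SIGMA_SB = 5.6704e-5   # erg cm^-2 s^-1 K^-4
KEV_TO_K = 1.16045e7   # Kelvin per keV

def L_bol(kT_keV, area_cm2):
    T = kT_keV * KEV_TO_K
    return SIGMA_SB * area_cm2 * T**4

# Cold (whole-surface) component of Table 2: kT = 0.138 keV, A = 1.1e14 cm^2
print(L_bol(0.138, 1.1e14))   # ~4e34 erg/s, consistent with the text
\end{verbatim}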
\begin{figure}
\includegraphics[width=0.85\linewidth]{f1.pdf}
\caption{\label{fig:line_res}
Absorption line residuals in the 2003 September 8 observation, illustrating the presence of this feature even from the beginning of \xte's outburst.
All the model parameters are held at the best fit values of Table 2, but the line strength is set to zero. }
\end{figure}
\begin{figure}
\includegraphics[width=0.85\linewidth]{f2.pdf}
\caption{\label{fig:11_resid}
Absorption line residuals in the 2009 March 5 observation, illustrating that the line width has decreased since 2003 September 8.
All the model parameters are held at the best fit values of Table 2, but the line strength is set to zero.
}
\end{figure}
\begin{figure}
\includegraphics[width=0.85\linewidth]{f3.pdf}
\caption{\label{fig:22_resid}
Absorption line residuals in the most recent 2014 March 4 observation, illustrating that the absorption line has not changed since 2009 March 5.
All the model parameters are held at the best fit values of Table 2, but the line strength is set to zero.
}
\end{figure}
\begin{figure}
\includegraphics[width=1.1\linewidth]{f4.pdf}
\caption{\label{fig:xraylum}
X-ray luminosity of the components of the three-to-two-blackbody model.
These are bolometric luminosities calculated from the temperatures and apparent areas in Table 2, assuming a distance of 3.5 kpc.}
\end{figure}
\begin{figure}
\includegraphics[width=1.1\linewidth]{f5.pdf}
\caption{\label{fig:temps}
Temperatures of the components of the three-to-two-blackbody model.}
\end{figure}
\begin{figure}
\includegraphics[width=1.1\linewidth]{f6.pdf}
\caption{\label{fig:areas}
Areas of the components of the three-to-two-blackbody model, assuming
a distance of 3.5 kpc.
}
\end{figure}
With the most recent data putting more constraints on the cold blackbody component, thought to represent the whole surface of the NS, we tested this whole surface interpretation for consistency with a physically plausible value of the NS radius.
Figure \ref{fig:chi-square} shows the chi-square statistic on a grid of values of surface temperature and column density.
Since the NS surface temperature parameter is most correlated with the column density, this grid provides a good estimate of the uncertainty in the NS surface temperature, and therefore also the NS radius.
Since none of the other model parameters (e.g. the cold component normalization) were varied, this is an underestimate of the full NS radius confidence range.
Figure \ref{fig:radius} shows the inferred NS radius for all of the grid points at the top of the figure.
These radii values were calculated as follows:
At each grid point the free model parameters were re-fit, and the new best fit blackbody normalization was used to calculate an apparent radius assuming a distance of 3.5 kpc.
A distance of $3.5\pm0.5$~kpc was measured by \cite{min08}.
We then assumed a mass of 1.4 M$_{\odot}$ and applied the relativistic correction, $R_{\infty}=R\,(1-r_{\rm g}/R)^{-1/2}$, to compute the physical NS radius.
$R_{\infty}$ is the apparent NS radius at infinity and $r_g = 2GM/c^2$ is the Schwarzschild radius.
We find $R_{\infty}=28.9^{+6.7}_{-5.3}$ km and $R=26.6^{+6.7}_{-5.5}$ km.
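The conversion between the two radii can be inverted numerically; the sketch below (a minimal example assuming $M=1.4\,M_{\odot}$ and restricting to the physical branch $R>1.5\,r_g$, on which the mapping $R\to R_\infty$ is monotonic) recovers the quoted $R$ from $R_\infty$.
\begin{verbatim}
import numpy as np

# Minimal sketch, assuming M = 1.4 Msun: invert R_inf = R (1 - r_g/R)^(-1/2)
# by bisection on the physical branch R > 1.5 r_g.

G, c, M_SUN = 6.674e-8, 2.998e10, 1.989e33   # cgs

def physical_radius_km(R_inf_km, M=1.4 * M_SUN):
    r_g = 2.0 * G * M / c**2 / 1.0e5         # Schwarzschild radius in km
    lo, hi = 1.5 * r_g, R_inf_km             # f(lo) < R_inf < f(hi)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid / np.sqrt(1.0 - r_g / mid) > R_inf_km:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(physical_radius_km(28.9))   # ~26.6 km, as quoted above
\end{verbatim}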
\begin{figure}
\includegraphics[width=0.85\linewidth]{f7.pdf}
\caption{\label{fig:chi-square}
Chi-square grid for a range of column densities and temperatures of the whole NS surface. The
region in black is outside of the 90\% confidence range for three interesting parameters.}
\end{figure}
\begin{figure}
\includegraphics[width=0.85\linewidth]{f8.pdf}
\caption{\label{fig:radius}
Grid of NS radii \textit{R} (km) for the range of column densities and temperatures in Figure \ref{fig:chi-square}. We assume a distance of 3.5
kpc and a mass of $1.4\,M_{\odot}$. $D_{3.5}$ is the distance to \xte \ in units of 3.5 kpc. We account for the general relativistic correction between apparent and physical NS radius.}
\end{figure}
This is significantly different from the physically plausible range of about $9-13$~km, and we emphasize that we are not claiming that the best fit value $R$ quoted above is a measurement of the NS radius.
Rather, we interpret this large best fit radius as evidence that the whole surface of \xte \ is visible, and that the spectrum has some deviations from two pure blackbodies plus a Gaussian absorption feature.
In addition to deviations from simple, uniform temperature blackbodies, we list here several factors that may contribute to this discrepancy between the physical NS radius and the best fit value.
First, the best fit radius $R$ is highly sensitive to the shape and energy of the absorption feature.
For example, fixing the line energy at 1.3 keV decreases $R$ by 22\% while increasing the chi-square statistic to only 1.27.
Second, the physical NS radius measurement is proportional to the NS distance, and it is possible that \xte \ is at a distance closer to the low end of the range of 3.5 $\pm$ 0.5 kpc measured by \cite{min08}.
Third, the physical NS radius measurement decreases as the mass of \xte \ increases, and it is possible that \xte \ is more massive than the canonical value of $1.4\,M_{\odot}$.
NSs with masses up to $2.0\,M_{\odot}$ have been observed \citep{dem10}.
A magnetar mass has never been measured, and it might not be surprising if magnetars as a class are toward the upper end of the range of possible NS masses \citep{mer15}.
Fourth, as stated above, our estimate of the confidence range of this physical NS radius measurement is actually an underestimate of the true statistical uncertainty.
We also checked the 2008 March 18 Chandra observation for consistency with the two blackbody model.
This observation is notable because it was taken just months before \ \xte \ became radio quiet in late 2008 \citep{cam15}.
We held the hydrogen column density, absorption line, and cool temperature component fixed at the \xmm \ fit values and allowed the warm temperature component to vary.
We find that it is consistent with the rest of the later observations with a reduced chi-square fit statistic of 1.05 for 112 degrees of freedom, and a warm component temperature of 0.35 keV.
\begin{deluxetable}{l c c c c c c c c c c}
\tabletypesize{\tiny}
\setlength{\tabcolsep}{0.009in}
\tablecaption{Three-To-Two-Blackbody Model}
\tablehead{ \colhead{Date} & \colhead{$\textit{kT}_{\rm{hot}}$} & \colhead{$\textit{kT}_{\rm{warm}}$/} & \colhead{$\textit{kT}_{\rm{cold}}$} & \colhead{$\textit{F}_{\rm{hot}}$}
& \colhead{$\textit{F}_{\rm{warm}}$/} & \colhead{$\textit{F}_{\rm{cold}}$} & \colhead{$\textit{A}_{\rm{hot}}$} & \colhead{$\textit{A}_{\rm{warm}}$/} & \colhead{$\textit{A}_{\rm{cold}}$} & \colhead{$\textit{L}_{\rm{bol}}$} \\
&& \colhead{$\textit{kT}_{\rm{warm-hot}}$} &&& \colhead{$\textit{F}_{\rm{warm-hot}}$ } &&& \colhead{$\textit{A}_{\rm{warm-hot}}$} &&
\\
\colhead{(UT)} & \colhead{(keV)} & \colhead{(keV)} & \colhead{(keV)} & \colhead{(erg cm$^{-2}$ s$^{-1}$)}
& \colhead{(erg cm$^{-2}$ s$^{-1}$)} & \colhead{(erg cm$^{-2}$ s$^{-1}$)} & \colhead{(cm$^2$)} & \colhead{(cm$^2$)} & \colhead{(cm$^2$)} & \colhead{($10^{34}$erg s$^{-1}$)}
}
\startdata
2003/9/8 & 0.69$\pm$0.01 & 0.23$\pm$0.01 & 0.138$\pm$0.006 & $3.2 \times 10^{-11}$ & $8.1\times 10^{-12}$ & $5.8 \times 10^{-13}$ & $3.1 \times 10^{11}$ & $2.8\times10^{13}$ & $1.1\times 10^{14}$ & 19.5 \\
2003/10/12 & 0.71$\pm$0.02 & 0.25$\pm$0.01 & $^{\prime\prime}$ & $2.6\times 10^{-11}$ & $6.7 \times10^{-12}$ & $^{\prime\prime}$ & $2.2\times10^{11}$ & $1.8\times10^{13}$ & $^{\prime\prime}$ & 16.7 \\
2004/3/11 & 0.69$\pm$0.02 & 0.23$\pm$0.01 & $^{\prime\prime}$ & $1.6\times 10^{-11}$ & $4.2 \times10^{-12}$ & $^{\prime\prime}$ & $1.5\times10^{11}$ & $1.7\times10^{13}$ & $^{\prime\prime}$ & 12.7 \\
2004/9/18 & 0.67$\pm$0.01 & 0.22$\pm$0.01 & $^{\prime\prime}$ &$9.0\times 10^{-12}$ & $3.0 \times10^{-12}$ & $^{\prime\prime}$ & $9.8\times 10^{10}$ & $1.7 \times 10^{13}$ & $^{\prime\prime} $ & 10.3 \\
2005/3/18 & 0.61$\pm$0.01 & 0.22$\pm$0.01 & $^{\prime\prime}$ & $3.4\times 10^{-12}$ & $1.6 \times10^{-12}$ & $^{\prime\prime}$ & $5.8\times 10^{10}$ & $1.1 \times 10^{13}$ & $^{\prime\prime}$ & 7.3 \\
2005/9/20 & 0.57$\pm$0.03 & 0.21$\pm$0.01 & $^{\prime\prime}$ & $9.5\times 10^{-13}$ & $8.4 \times 10^{-13}$ & $^{\prime\prime}$ & $2.3\times 10^{10}$ & $6.4 \times 10^{12}$ & $^{\prime\prime}$ & 5.5\\
2006/3/12 & 0.51$\pm$0.05 & 0.20$\pm$0.02 & $^{\prime\prime}$ & $5.0\times 10^{-13}$ & $4.5 \times10^{-13}$ & $^{\prime\prime}$ & $2.2\times 10^{10}$ & $5.1 \times 10^{12}$ &$^{\prime\prime}$ & 4.9 \\
2006/9/24 & & 0.32$\pm$0.01 & $^{\prime\prime}$ & & $4.2 \times10^{-13}$ & $^{\prime\prime}$ & & $2.4 \times 10^{11}$ & $^{\prime\prime}$ & 3.9 \\
2007/3/6 & & 0.31$\pm$0.01 & $^{\prime\prime}$ & & $3.3 \times 10^{-13}$ & $^{\prime\prime}$ && $2.1 \times 10^{11}$ & $^{\prime\prime}$ & 3.8 \\
2007/9/16 & & 0.32$\pm$0.01 & $^{\prime\prime}$ & & $3.4 \times 10^{-13}$ & $^{\prime\prime}$ && $1.8 \times 10^{11}$ & $^{\prime\prime}$ & 3.8\\
2009/3/5 & & 0.33$\pm$0.02 & $^{\prime\prime}$ & & $2.5 \times 10^{-13}$ & $^{\prime\prime}$ & & $1.2 \times 10^{11}$ & $^{\prime\prime}$ & 3.8 \\
2009/9/5 & & 0.32$\pm$0.02 & $^{\prime\prime}$ & & $2.5 \times 10^{-13}$ &$^{\prime\prime}$ && $1.5 \times 10^{11}$ & $^{\prime\prime}$ & 3.8 \\
2009/9/7 & & 0.34$\pm$0.03 & $^{\prime\prime}$ & & $2.4 \times 10^{-13}$ & $^{\prime\prime}$ & & $9.7 \times 10^{10}$ & $^{\prime\prime}$ & 3.8 \\
2009/9/23 & & 0.33$\pm$0.03 & $^{\prime\prime}$ & & $2.3 \times 10^{-13}$ & $^{\prime\prime}$ && $1.0 \times 10^{11}$ & $^{\prime\prime}$ & 3.8\\
2010/4/9 & & 0.28$\pm$0.05 & $^{\prime\prime}$ & & $2.2 \times 10^{-13}$ &$^{\prime\prime}$ && $2.7 \times 10^{11}$ & $^{\prime\prime}$ & 3.8 \\
2010/9/5 & &0.30$\pm$0.03 & $^{\prime\prime}$ & & $2.4 \times 10^{-13}$ &$^{\prime\prime}$ && $1.8 \times 10^{11 }$ & $^{\prime\prime}$ & 3.8 \\
2011/4/3 & & 0.31$\pm$0.02 & $^{\prime\prime}$ & & $2.5 \times 10^{-13}$ & $^{\prime\prime}$ && $1.8 \times 10^{11}$ & $^{\prime\prime}$ & 3.8 \\
2011/9/9 & &0.34$\pm$0.04 & $^{\prime\prime}$ & & $2.1 \times 10^{-13}$ & $^{\prime\prime}$ && $8.4 \times 10^{10}$ & $^{\prime\prime}$ & 3.8 \\
2012/9/6 & & 0.30$\pm$0.03 & $^{\prime\prime}$ & & $2.3 \times 10^{-13}$ & $^{\prime\prime}$ && $1.9 \times 10^{11}$ & $^{\prime\prime}$ & 3.8 \\
2013/3/3 & & 0.30$\pm$0.04 & $^{\prime\prime}$ & &$2.2 \times 10^{-13}$ &$^{\prime\prime}$ && $1.8 \times 10^{11}$ & $^{\prime\prime}$ & 3.8 \\
2013/9/5 & & 0.29$\pm$0.02 & $^{\prime\prime}$ & & $2.2 \times 10^{-13}$ & $^{\prime\prime}$ && $2.3 \times 10^{11}$ & $^{\prime\prime}$ & 3.8 \\
2014/3/4 & & 0.29$\pm$0.02 & $^{\prime\prime}$ & & $2.3 \times 10^{-13}$ & $^{\prime\prime}$ && $2.2 \times 10^{11}$ & $^{\prime\prime}$ & 3.8 \\
\enddata
\tablecomments{
The column density was held fixed at $N_{\rm H}=0.945\times10^{22}$~cm$^{-2}$ for all observations.
All listed fluxes are the absorbed values and are computed with the strength of the 1.2 keV absorption feature set to zero.
Uncertainties in the hot and warm temperatures are 90\% for two interesting parameters.
They were estimated from $\chi^2$ contours with $kT_{\rm cold}$.
}
\end{deluxetable}
\subsection{Two Blackbody Model}
We also simultaneously fit a two blackbody model to the first seven observations, as originally considered by \cite{got07}.
In this model the two temperatures are thought of as a central hot spot on the surface of the NS, surrounded by a warm temperature annulus.
We also include a Gaussian absorption line at 1.2 keV.
We again found this model to be a good fit to the first seven observations, although with lower values of the hydrogen column density than in the three-to-two-blackbody model.
All of these results are consistent with the findings of \cite{got07}.
We found a reduced chi-square of 1.15 for 775 degrees of freedom.
In light of the most recent data indicating a larger column density than previously measured, we also attempted to fit a two blackbody model with a column density fixed at this new larger value to the earlier observations.
This resulted in a slightly worse fit with a reduced chi-square of 1.24 for 776 degrees of freedom.
Table 3 lists the results of the fits to the two blackbody model.
We then tried to fit this same two blackbody model (i.e. with the column density value fixed at $N_{\rm H}= 0.76 \times10^{22}$ cm$^{-2}$) to the latest observations and found it was a poor fit to the data with a reduced chi-square value of 1.5.
The lower value of the column density resulted in larger temperatures for the cool blackbody component, averaging around 0.16 keV.
This poor fit leads us to favor the larger $N_{\rm H}$ value and the three-to-two-blackbody model.
\begin{deluxetable}{l c c c c c c c}
\tabletypesize{\tiny}
\tablecaption{Two Blackbody Model}
\tablehead{ \colhead{Date} & \colhead{$\textit{kT}_{1}$} & \colhead{$\textit{kT}_{2}$} & \colhead{$\textit{F}_{1}$} & \colhead{$\textit{F}_{2}$} & \colhead{$\textit{A}_{1}$} & \colhead{$\textit{A}_{2}$} & \colhead{\textit{L}$_{\rm bol}$} \\
\colhead{(UT)} & \colhead{(keV)} & \colhead{(keV)} & \colhead{(erg cm$^{-2}$ s$^{-1}$)} & \colhead{(erg cm$^{-2}$ s$^{-1}$)} & \colhead{(cm$^2$)} & \colhead{(cm$^2$)} & \colhead{(erg s$^{-1}$)}
}
\startdata
2003 Sep 8 & 0.70 $\pm$ 0.01 & 0.25 $\pm$ 0.01 & 3$.3 \times 10^{-11}$ & $7.6 \times 10^{-12}$ & $1.2 \times 10^{13}$ & $2.7 \times 10^{11}$ & $1.2 \times 10^{35}$\\
2003 Oct 12 & 0.72 $\pm$ 0.02 & 0.27 $\pm$ 0.02 & $2.6 \times 10^{-11}$ & $6.7 \times 10^{-12}$ & $8.4 \times 10^{12}$ & $2.0 \times 10^{11}$ & $1.0 \times 10^{35}$ \\
2004 Mar 11 & 0.70 $\pm$ 0.02 & 0.25 $\pm$ 0.02 & $1.6 \times 10^{-11}$ & $4.3 \times 10^{-12}$ & $9.8 \times 10^{12}$ & $1.4 \times 10^{11}$ & $7.1 \times 10^{34}$ \\
2004 Sep 18 & 0.68 $\pm$ 0.01 & 0.23 $\pm$ 0.01 & $9.3 \times 10^{-12}$ & $3.2 \times 10^{-12}$ & $1.1 \times 10^{13}$ & $9.1 \times 10^{10}$ & $5.2 \times 10^{34}$ \\
2005 Mar 18 & 0.61 $\pm$ 0.01 & 0.21 $\pm$ 0.01 & $3.6 \times 10^{-12}$ & $1.9 \times 10^{-12}$ & $1.1 \times 10^{13}$ & $5.8 \times 10^{10}$ & $3.2 \times 10^{34}$ \\
2005 Sep 20 & 0.55 $\pm$ 0.02& 0.19 $\pm$ 0.01 & $1.1 \times 10^{-12}$ & $1.2 \times 10^{-12}$ & $1.4 \times 10^{13}$ & $2.8 \times 10^{10}$ & $2.2 \times 10^{34}$ \\
2006 Mar 12 & 0.50 $\pm$ 0.04 & 0.18 $\pm$ 0.01 & $5.5 \times 10^{-13}$ & $8.4 \times 10^{-13}$ & $1.7 \times 10^{13}$ & $2.4 \times 10^{10}$ & $1.9 \times 10^{34}$ \\
\enddata
\tablecomments{
The column density was held fixed at $N_{\rm H}=0.76\times10^{22}$~cm$^{-2}$ for all observations.
All listed fluxes are the absorbed values and are computed with the strength of the 1.2 keV absorption feature set to zero.
Uncertainties in the hot and warm temperatures are 90\% for two interesting parameters. They were estimated from $\chi^2$ contours of the hot versus warm temperature parameters.
}
\end{deluxetable}
\subsection{Comptonized Blackbody Model}
We explored the possibility that the deviation from a single blackbody spectrum is simply due to Compton scattering.
We attempted to fit a model where the source photons are Comptonized by relativistic electrons of small optical depth (as described in Rybicki and Lightman 1979, section 7.5) to the first seven observations.
The blackbody spectrum is comptonized such that it is characterized by the parameter $\alpha = -{\ln \tau_{es}}/{\ln A}$ where $\tau_{es}$ is the optical depth and $A$ is the mean energy amplification per scattering.
We allowed $\alpha$ to vary between observations, and fit each observation separately.
The fit to each individual observation was poor.
For example, the 2003 September 8 observation was one of the data sets best described by this model and had a reduced chi-square of 1.9 for 133 degrees of freedom.
We interpret this as evidence that the deviation from a simple blackbody spectrum is not dominated by Compton scattering.
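For clarity, the Comptonization parameter used in this model is a one-line expression; the sketch below (our illustration, with placeholder values rather than fitted quantities) shows the definition.
\begin{verbatim}
import numpy as np

# Illustration of alpha = -ln(tau_es) / ln(A); the values of tau_es and A
# below are placeholders, not fitted quantities.

def alpha(tau_es, A):
    return -np.log(tau_es) / np.log(A)

print(alpha(0.1, 10.0))   # = 1.0 for these example values
\end{verbatim}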
\subsection{Model-Independent Measurements of Spectral and Temporal Changes}
We also sought to confirm, in a model independent way, that \xte \ has reached a steady state.
In Figure \ref{fig:ratios}, we plot the count rates of successive observations, for all of the \xmm \ data.
The channel ranges were chosen to keep at least 500 counts per bin.
Due to the long term stability of the EPIC pn detector, we are confident that all these channels correspond to the energy ranges shown.
We are able to confirm that \xte's X-ray spectrum has reached a steady state.
Another model independent test of the X-ray stability of \xte \ is its energy dependent pulse profiles, which are shown in Figure \ref{fig:pp}.
The timing analysis used to compute these pulse profiles is presented in \cite{cam15}.
These pulse profiles consistently show more pulsed emission at higher energies, which is consistent with our model of the pulsed emission coming from a small warm spot on the NS surface.
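Pulsed fractions (indicated in each panel of Figure~\ref{fig:pp}) can be computed under several conventions; one common peak-to-peak convention is sketched below. This is an assumption for illustration, since the definition used is not restated here.
\begin{verbatim}
import numpy as np

# One common peak-to-peak convention, (max - min) / (max + min), applied to
# a background-subtracted folded profile; the profile values below are
# made up for illustration.

def pulsed_fraction(profile):
    p = np.asarray(profile, dtype=float)
    return (p.max() - p.min()) / (p.max() + p.min())

print(pulsed_fraction([80, 95, 130, 160, 140, 100]))   # -> 1/3
\end{verbatim}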
\begin{figure}
\centerline{\includegraphics[width=1.0\linewidth]{f9.pdf}}
\caption{\label{fig:ratios}
Changes in count rate from one observation to the next. \xte's X-ray spectrum has reached a steady state.
15.0* keV is a nominal upper limit. Most events detected in this range are at much lower energies.}
\end{figure}
\begin{figure}
\centerline{\includegraphics[height=1.0\textheight,keepaspectratio]{f10.pdf}}
\caption{\label{fig:pp}
Background subtracted normalized pulse profiles. Pulsed X-ray emission from a hot region on the NS surface is evidence of continued magnetar activity. Pulsed fraction is indicated in each panel. Profiles were phase shifted to alignment in the $1.5-5.0$ keV energy range.}
\end{figure}
\section{Discussion}
\subsection{Comparison with Previous Results}
Our findings on the previously analyzed data are consistent with the results of \cite{got07} and \cite{ber09}.
The lower total X-ray fluxes of the most recent observations allow us to better constrain the properties of the cooler blackbody component as well as the hydrogen column density.
We measure the column density to be about $50\%$ larger than previously reported.
There is an abundance of new X-ray data in this paper that was not available to \cite{got07} and \cite{ber09}.
It is this new data that has led to the higher measured column density.
We also note that the best fit whole surface temperature we have measured, 0.138 keV, is statistically consistent (within the 90\% uncertainty range) with the value in Bernardini et al. 2009 (0.144 keV).
The NS radius value quoted in Bernardini et al. is $R_{\infty}= 17.9^{+1.9}_{-1.5}$ km and assumes a distance of 3.5 kpc.
This is significantly smaller than our measurements of $R_{\infty}=28.9^{+6.7}_{-5.3}$ km and $R=26.6^{+6.7}_{-5.5}$ km.
This discrepancy is explained by the larger column density that is required by the new X-ray data we have from 2009 March 5 to 2014 March 4.
\subsection{Has \xte\ ``Returned to Quiescence''?}
Between the time of the 2008 March 18 \chandra \ observation and the 2009 March 5 \xmm \ observation \xte \ became radio quiet \citep{cam15}.
It is notable that the last significant ($\approx 20\%$) decrease in the warm/hot flux occurred in this same time interval.
This suggests a possible correlation between this magnetar's X-ray hot spot flux and its radio emission.
We find no other correlation between \xte's \ radio turn off and its X-ray behavior during the \chandra \ observation.
Beginning with the 2009 March 5 observation, the total X-ray flux
of \ \xte \ reached a constant minimum value.
The warm temperature component is also constant to within statistical uncertainties.
However, it is not clear that \ \xte \ is back in its pre-outburst state, i.e. that it has returned to quiescence.
The archival \textit{ROSAT} data is not of high enough quality to tightly constrain its previous surface temperature or the possible existence of an absorption feature.
Furthermore, Bernardini et al. (2009) found that both one and two blackbody models are good fits to the \textit{ROSAT} data.
Gotthelf et al. (2004) fit a single blackbody model to four archival \textit{ROSAT} observations, spanning 1990 September 3 to 1993 April 3, and calculated unabsorbed X-ray fluxes
ranging from $(5.5-8.3) \times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$ (in the $0.5-10$ keV range).
From 2009 March 5 to 2014 March 4 we measure unabsorbed X-ray fluxes ranging from $(7.4-7.8) \times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$ in the $0.5-10$ keV range, in accord with the \textit{ROSAT} measurements.
This evidence suggests that it is possible that \xte \ is back in its pre-outburst, ``quiescent'' state.
However, given the uncertainty of the details of its pre-outburst state, it is just as possible that \xte's magnetosphere is in a new configuration.
The hot spot on its surface and the 1.2 keV absorption line may even be new features.
\subsection{Comparison with Other Transient Magnetars}
Several other transient magnetars have been found since the discovery of \xte \ in 2003.
Here we review what is known about the later stages of these other transient magnetars.
First we note that there is no other transient magnetar whose whole surface temperature has been measured.
This is likely due to a combination of relatively large distances to and low surface temperatures of the other transient magnetars.
We also note that none of the blackbody components of the transient magnetars have been observed to completely fade away after an outburst.
(A caveat is that 3XMM J185246.6+003317 was only observed for several months after its 2008 outburst, and its current state is therefore unknown.)
Table 4 lists some of the properties of the known transient magnetars as well as the candidate transient magnetars PSR J1622$-$4950 and AX J1845$-$0258.
We list the most recently measured spin parameters as well as the most recently measured X-ray luminosity.
In the interest of comparing their current spin-down luminosities $\dot E$ to current bolometric X-ray luminosities, we quote the X-ray luminosities in the widest bands reported.
In addition to \xte, there are several other transient magnetars whose X-ray luminosity is likely magnetically powered several years post outburst.
They are the ``low magnetic field'' magnetar SGR 0418+5729, SGR 0501+4516, CXO 164710.2$-$455216, the Galactic center magnetar SGR J1745$-$2900, and Swift J1822.3$-$1606.
The Galactic center magnetar SGR J1745$-$2900 is particularly similar to \xte \ in that it produced radio emission and had a relatively slow decay as its blackbody temperature and hot spot area decreased.
There are also three transient magnetars whose spin-down power exceeds their X-ray luminosity.
They are 1E 1547.0$-$5408, SGR 1627$-$41, and Swift J1834.9$-$0846.
1E 1547.0$-$5408 is still much more luminous than its lowest pre-outburst state, and is therefore not in a true quiescent state \citep{ber09}.
SGR 1627$-$41 and Swift J1834.9$-$0846 are currently close to their pre-outburst fluxes (an upper bound in the case of Swift J1834.9$-$0846) and could be in true quiescent states \citep{an12,kar12,esp13}.
In some cases, there are observations of transient magnetars before their outburst.
As alluded to above, a pre-outburst observation of Swift J1834.9$-$0846 with Chandra could not even detect the source, and gave an upper limit on the $2-10$~keV luminosity of $1.7 \times 10^{31}$ erg~s$^{-1}$ \citep{kar12}.
This could be a pre-outburst quiescent state in which there is no magnetar activity.
SGR 1627$-$41 was observed between its 1998 and 2008 outbursts, and its luminosity is similar several years after each outburst \citep{an12}.
A non-detection of 3XMM J185246.6+003317 by Chandra in 2001 gives an upper bound of $4 \times 10^{32}$ erg~s$^{-1}$ \citep{zho14}.
SGR 0501+4516 was observed by ROSAT before its 2008 outburst and, like \xte, had a similar temperature and flux level post-outburst \citep{cam14}.
SGR 1833$-$0832 could not be detected by \xmm \ in 2006, four years before its 2010 outburst \citep{esp11}.
While many transient magnetars were fit to blackbody plus powerlaw models, some could be fit to pure blackbody models with temperatures comparable to \xte.
The low magnetic field magnetar SGR 0418+5729 has a very small hot spot of temperature 0.3 keV \citep{rea13}.
PSR J1622$-$4950 has a hot spot temperature of about 0.5 keV \citep{and12}.
SGR 1833$-$0832 is best modeled by a 1.2 keV blackbody \citep{esp11}.
The distance to SGR 1833$-$0832 is highly uncertain, so we don't know if its X-ray luminosity or spin-down power is dominant.
The blackbody components of transient magnetars that were modeled by a blackbody plus a power-law are all within about a factor of 2 of these temperature values.
AX J1845$-$0258 is included in this table even though its status as a transient magnetar is uncertain.
It has disappeared since its 1993 outburst and its period derivative has not been measured \citep{tor98,tam06}.
The other candidate transient magnetar, PSR J1622$-$4950, was discovered in radio and subsequently observed as an X-ray source.
Its fading X-ray emission suggests that this radio magnetar could have been observed just after an outburst in 2007 \citep{and12}.
\begin{deluxetable}{l l l c c l c c}
\tabletypesize{\tiny}
\tablewidth{0pt}
\setlength{\tabcolsep}{0.04in}
\tablecaption{Transient Magnetars}
\tablehead{ \colhead{Name} & \colhead{Outburst Obs. Date} & \colhead{Period} & \colhead{$\dot{P}$} & \colhead{$\dot{E}$} & \colhead{$L_{x}$} & \colhead{Radio?}
& \colhead{References}
\\
& \colhead{} & \colhead{(s)} & \colhead{($10^{-11}$ s s$^{-1}$)} & \colhead{($10^{33}$ erg s$^{-1}$)} & \colhead{(10$^{33}$ erg s$^{-1}$)}
}
\startdata
SGR 0418+5729 & 2009 June & 9.07838822(5) & 0.0004(1) & 0.00021 & $\approx0.006$ ($0.5-10$ keV) & No & 1\\
SGR 0501+4516 & 2008 Aug & 5.7620695(1) & 0.594(2) & 1.2 & $\approx9$ ($0.5-10$ keV) & No & 2 \\
1E 1547.0$-$5408 & 2008 Oct, 2009 Jan & 2.0721255(1) & 4.77 & 210 & $\approx25$ ($1-10$ keV) & Yes & 3,4 \\
PSR J1622$-$4950* & ... & 4.3261(1) & 1.7(1) & 8.3 & $\approx1$ ($0.3-10$ keV) & Yes & 5,6 \\
SGR 1627$-$41 & 1998 June, 2008 May & 2.594578(6) & 1.9(4) & 43 & $\approx$ 3 ($2-10$ keV) & No & 7,8 \\
CXO 164710.2$-$455216 & 2006 Sep & 10.610644(17) & $<0.04$ & $<0.013$ & $\approx$ 10 ($2-10$ keV) & No & 9 \\
SGR J1745$-$2900 & 2013 June & 3.76363824(13) & 1.385(15) & 10 & $\approx70$ ($1-10$ keV) & Yes & 10,11 \\
\xte & 2003 Sep & 5.540525412(3) & 2.79039(6) & $0.56-0.66$ & $\approx38$ ($0.3-10$ keV) & Yes & 12 \\
Swift J1822.3$-$1606 & 2011 July & 8.43772106(6) & 0.00214(21) & 0.0014 & $\approx0.1$ ($1-10$ keV) & No & 13 \\
SGR 1833$-$0832 & 2010 Mar & 7.5654084(4) & 0.35(3) & 0.32 & ... & No &14 \\
Swift J1834.9$-$0846 & 2011 Aug & 2.4823018(1) & 0.796(12) & 21 & $\approx0.057$ ($2-10$ keV) & No & 15,16 \\
AX J1845$-$0258* & 1993 & 6.97127(28) & ... & ... & ... & No & 17 \\
3XMM J185246.6+003317 & 2008 Sep & 11.55871346(6) & $<0.014$ & $<0.0036$ & $\approx3$ ($0.5-10$ keV) & No & 18 \\
\enddata
\tablecomments{
*PSR J1622$-$4950 is only a candidate transient magnetar, since no X-ray outbursts have yet been observed.
AX J1845$-$0258 has not been observed since its 1993 outburst, and no period derivative has been measured.
}
\tablerefs{
(1) \citealt{rea13};
(2) \citealt{cam14};
(3) \citealt{dib12};
(4) \citealt{kui12};
(5) \citealt{and12};
(6) \citealt{lev10};
(7) \citealt{an12};
(8) \citealt{esp09};
(9) \citealt{an13};
(10) \citealt{cot15};
(11) \citealt{kas14};
(12) \citealt{cam15};
(13) \citealt{sch14};
(14) \citealt{esp11};
(15) \citealt{esp13};
(16) \citealt{kar12};
(17) \citealt{tor98};
(18) \citealt{rea14}
}
\end{deluxetable}
\subsection{Comparison with Theory}
\cite{bel09} predicted that the hot spot on the surface of \xte \ would fade away.
In the special case of a narrow j-bundle with a uniform twist, Beloborodov predicts that the hot spot luminosity will decay with a timescale
$t_{ev}\approx15\,V_{9}^{-1}\,B_{14}\,R_{6}^{2}\,\Psi\,u_{*}$ years where $V_{9}$ is the voltage in units of 10$^{9}$~V, $B_{14}$ is the magnetic field in units of 10$^{14}$~G,
$R_6$ is the NS radius in units of 10$^{6}$~cm, $\Psi$ is the magnitude of the twist in units of radians, and $u_{*}$ is the angle subtended by the arc of
circumference of the hot spot.
The rate of decay is even faster than an exponential, since the time constant decreases as the twist angle $\Psi$ and the size of the hot spot $u_{*}$ decrease.
While this model is a good fit to the data at the beginning of \xte's outburst, this is not what has been observed in the spectra and pulse profiles from about 2007 onwards.
From 2009 March 5 through 2014 March 4 \xte\ has been in a steady state, yet the hot spot on its surface remains.
\cite{rea12} suggested that radio magnetars share the property that $L_{x} / \dot{E} < 1$, but that not all radio magnetars will necessarily satisfy this condition.
The observations of \xte \ presented here do not support this proposition.
Between 2007 and 2012 \xte \ reached a spin-down power of $\dot{E} = (5.6-6.6) \times 10^{32}$ erg s$^{-1}$ \citep{cam15},
while the luminosity of the cold component of \xte \ alone is $\sim 4 \times 10^{34}$ \textit{d}$^{2}_{3.5}$ erg s$^{-1}$.
\cite{sza15} explain magnetar radio emission with the partially screened gap model that has been developed to explain the radio emission of normal rotation powered pulsars.
In this scenario \xte's radio emission is also rotation powered, and radio emission is only possible if the polar cap is below the critical temperature for ion emission, and the polar cap luminosity is much less than
the spin-down power.
However, the polar cap luminosity of \xte \ increased by more than two orders of magnitude during its outburst, while the spin-down luminosity increased by a factor of 8 \citep{cam15}.
This data places a difficult constraint on how this model could explain the turn-on and turn-off of \xte's radio emission.
\section{Conclusions}
\xte \ was the first discovered transient magnetar and therefore provides the longest record of transient magnetar behavior.
A hot spot on the NS surface is evident even in the most recent observations.
The luminosity of this hot spot exceeds \xte's spin-down power, and is therefore an indicator of continued magnetar activity.
There is currently no detailed theoretical model that explains this persistent magnetar activity.
With the benefit of over ten years of \xmm \ observations, we can say with some confidence that we have detected emission from the whole surface of \xte.
Unfortunately, large systematic uncertainties plague a measurement of the NS radius.
We nevertheless have a good measurement of the surface temperature of a magnetar.
The radio emission from \xte \ during its outburst is similar to the radio emission of a subgroup of the known transient magnetars.
There is no detailed theoretical model that explains the turn-on and turn-off of \xte's radio emission, but the X-ray data presented in this paper may provide an important clue.
The flux from the hot-spot on the NS surface reached its lowest level just as the radio emission disappeared.
This suggests that this magnetar's radio emission is powered by magnetic field decay.
\acknowledgements
This investigation is based on observations obtained
with \xmm, an ESA science mission with instruments and contributions
directly funded by ESA Member States and NASA.
This research made use of data obtained from the
High Energy Astrophysics Science Archive Research Center (HEASARC),
provided by NASA's Goddard Space Flight Center. We acknowledge
support from NASA ADAP grant NNX15AE63G.
We thank Eric Gotthelf for valuable discussions and the anonymous referee
for several helpful comments.
\section*{Acknowledgements}
This research was supported by JSPS KAKENHI (Grant number: 18K11335).
\section{Conclusion}\label{s:concl}
Exploiting our previous work~\cite{takahashi2018jascome} on the development of the isogeometric boundary element method (IGBEM) for the 3D Helmholtz equation and the sensitivity analysis based on the adjoint variable method (AVM), we have newly proposed a shape optimisation system by integrating the IGBEM and AVM into nonlinear optimisation algorithms, viz. a primal-dual interior point method (IP) and the method of moving asymptotes (MMA) as well as the sequential least-squares quadratic programming (SLSQP), which are available from the open software Ipopt~\cite{ipopt} and NLopt~\cite{NLopt}, respectively. We have numerically verified the system in a (parametric) optimisation problem that has the exact solution. The system could find the optimal solutions successfully. Then, we applied the system to optimise three models that consider a reflector, resonator and bending-duct, which consist of multiple NURBS surfaces. We could maximise the objective function in every optimisation and found that the SLSQP was the best in the sense that it required the fewest evaluations of the objective function and its gradient, which is the most time-consuming part in the optimisation based on the IGBEM and AVM.
In the future, we are going to enhance our shape optimisation system so that it can directly deal with NURBS models that are generated by solid modellers such as Rhinoceros\footnote{Home page: \texttt{https://www.rhino3d.com/}.} and SMLib\footnote{Home page: \texttt{https://smlib.com/smlib/}.}. This task is technically straightforward but practically important for supplying initial shapes that consist of truly curved surfaces. In addition, we are planning to develop a similar shape optimisation software for electromagnetism in 3D from the present one, considering the application to metamaterials and plasmonics.
\section{Isogeometric BEM}\label{s:igbem}
We will overview the formulation of the IGBEM for the 3D Helmholtz equation, referring to our previous work~\cite{takahashi2018jascome}.
\subsection{Problem statement}\label{s:problem}
Let us consider a scattering problem of the time-harmonic acoustic wave in 3D. Specifically, we will solve the following exterior Neumann boundary value problem (BVP) in the infinite domain $\bbbr^3\setminus\overline{V}$:
\begin{subequations}
\begin{align}
&\text{Governing equation}: &&\triangle u + k^2 u = 0 && \text{in $\bbbr^3\setminus\overline{V}$}, \label{eq:helm3d}\\
&\text{Boundary condition}: &&\frac{\partial u}{\partial n}=0 && \text{on $S$},\label{eq:bc}\\
&\text{Radiation condition}: &&u(\bm{x})\rightarrow u^{\rm in}(\bm{x}) && \text{as $\abs{\bm{x}}\rightarrow\infty$},
\end{align}%
\label{eq:primary}%
\end{subequations}
where $u:\bm{x}\in\bbbr^3\to\bbbc$ denotes the total field or sound pressure, $u^{\rm in}$ denotes a given incident field, $V$ denotes one or more acoustically-hard scatterers in $\bbbr^3$, $S$ denotes the boundary $\partial V$, $\bm{n}$ denotes the unit outward normal to $S$ and $k$ denotes the prescribed wavenumber.
\subsection{Boundary integral equation}\label{s:bie}
We will solve the BVP in (\ref{eq:primary}) with the following standard boundary integral equation (BIE):
\begin{eqnarray}
C(\bm{x})u(\bm{x})+\int_S\frac{\partial G(\bm{x}-\bm{y})}{\partial n_y} u(\bm{y})\mathrm{d} S_y = u^{\rm in}(\bm{x})\quad\text{for $\bm{x}\in S$},
\label{eq:bie}
\end{eqnarray}
where $G$ denotes the fundamental solution of the 3D Helmholtz equation, that is,
\begin{eqnarray}
G(\bm{x}):=\frac{e^{\mathrm{i} k |\bm{x}|}}{4\pi|\bm{x}|}.
\label{eq:G}
\end{eqnarray}
Also, $C$ denotes the free term and is equal to $1/2$ if $S$ is smooth at $\bm{x}$. In this study, we utilise the equi-potential condition to yield
\begin{eqnarray}
C(\bm{x})=1-\int_S\frac{\partial\Gamma(\bm{x}-\bm{y})}{\partial n_y}\mathrm{d} S_y,
\label{eq:C}
\end{eqnarray}
where $\Gamma(\bm{x}):=\frac{1}{4\pi\abs{\bm{x}}}$ denotes the fundamental solution for the Laplace equation in 3D.
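For concreteness, the kernels entering (\ref{eq:bie}) and (\ref{eq:C}) can be evaluated directly; the sketch below (our own minimal transcription, not the solver code) implements $G$, $\partial G/\partial n_y$ and $\Gamma$.
\begin{verbatim}
import numpy as np

# Direct transcription of the kernels above; x, y, n are 3-vectors (numpy
# arrays) and k is the wavenumber.

def G(x, y, k):
    r = np.linalg.norm(x - y)
    return np.exp(1j * k * r) / (4.0 * np.pi * r)

def dG_dny(x, y, n, k):
    """Normal derivative dG(x - y)/dn_y for the unit outward normal n at y."""
    d = y - x
    r = np.linalg.norm(d)
    return np.exp(1j * k * r) * (1j * k * r - 1.0) * np.dot(d, n) \
        / (4.0 * np.pi * r**3)

def Gamma(x, y):
    """Laplace fundamental solution, used in the free term C(x)."""
    return 1.0 / (4.0 * np.pi * np.linalg.norm(x - y))
\end{verbatim}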
\subsection{Isogeometric analysis}\label{s:iga}
We will discretise the BIE in (\ref{eq:bie}) as well as the RHS of (\ref{eq:C}) under the concept of the isogeometric analysis (IGA). The IGA is a kind of isoparametric formulation that exploits the NURBS basis as both interpolation and shape functions, mainly in boundary and finite element methods.
First, we express a given boundary $S$, which is supposed to consist of one or more closed surfaces, by using multiple NURBS surfaces. Each NURBS surface, say $\Pi$, is parameterised with two curve parameters $s$ and $t$, where the domain of $s$ and $t$ can be $[0,1]$ without the loss of generality. Then, we can express any point $\bm{y}$ on $\Pi$ as the tensor product of NURBS basis as follows:
\begin{eqnarray}
\bm{y}(s,t)
=\frac{\displaystyle\sum_{k=0}^{{n_s}-1}\sum_{l=0}^{{n_t}-1}w_{kl}N_k^\ps(s)N_l^\pt(t)\bm{C}_{kl}}{\displaystyle\sum_{k'=0}^{{n_s}-1}\sum_{l'=0}^{{n_t}-1}w_{k'l'}N^\ps_{k'}(s)N^\pt_{l'}(t)}
=\sum_{k,l}\frac{w_{kl}N_{kl}(s,t)}{W(s,t)}\bm{C}_{kl},
\label{eq:y}
\end{eqnarray}
where $N_k^p$ denotes the $k$-th B-spline function of degree $p$ and $w_{kl}$ and $\bm{C}_{kl}$ denote the $(k,l)$-th weight and control points, respectively, which should be determined according to the shape of $S$. Also, for the sake of simplicity, we denote the product $N_k^\ps(s)N_l^\pt(t)$ by $N_{kl}(s,t)$ and the summation in the denominator by $W(s,t)$.
The two series of knots, which are denoted by $\{s_i\}_{i=0}^{{n_s}+\ps}$ and $\{t_j\}_{j=0}^{{n_t}+\pt}$, are non-decreasing in general. To guarantee that the outer control points, i.e. the control points $\bm{C}_{kl}$ whose index $k$ or $l$ is either $0$ or the largest one (i.e. ${n_s}-1$ or ${n_t}-1$), lie on the perimeter $\partial\Pi$ of the NURBS surface, we use the clamped knots, i.e.
\begin{eqnarray*}
s_i=
\begin{cases}
0 & i=0,\ldots,\ps\\
\frac{i-\ps}{{n_s}-\ps} & i=\ps+1,\ldots,{n_s}-1\\
1 & i={n_s},\ldots,{n_s}+\ps
\end{cases}
\end{eqnarray*}
for $s$ and the same for $t$.
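The clamped knot vector and the B-spline basis $N_k^p$ can be coded directly from these definitions; the following sketch (a naive Cox--de Boor recursion for illustration, not the optimised routine used in the solver) reproduces the partition of unity.
\begin{verbatim}
import numpy as np

# Sketch: clamped knots as defined above and the Cox-de Boor recursion for
# N_k^p; naive and recursive, for illustration only.

def clamped_knots(n, p):
    """Knots s_0..s_{n+p} for n basis functions of degree p on [0, 1]."""
    inner = [(i - p) / (n - p) for i in range(p + 1, n)]
    return np.array([0.0] * (p + 1) + inner + [1.0] * (p + 1))

def bspline(k, p, s, knots):
    """N_k^p(s); 0/0 terms are treated as 0, intervals are half-open."""
    if p == 0:
        return 1.0 if knots[k] <= s < knots[k + 1] else 0.0
    v = 0.0
    if knots[k + p] > knots[k]:
        v += (s - knots[k]) / (knots[k + p] - knots[k]) \
            * bspline(k, p - 1, s, knots)
    if knots[k + p + 1] > knots[k + 1]:
        v += (knots[k + p + 1] - s) / (knots[k + p + 1] - knots[k + 1]) \
            * bspline(k + 1, p - 1, s, knots)
    return v

ks = clamped_knots(n=6, p=3)   # 0,0,0,0, 1/3, 2/3, 1,1,1,1
print(sum(bspline(k, 3, 0.4, ks) for k in range(6)))   # 1.0 (partition of unity)
\end{verbatim}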
Similarly to the boundary point $\bm{y}$ in (\ref{eq:y}), we interpolate the boundary density $u$ on a surface $\Pi$ with the tensor product of the NURBS basis as follows:
\begin{eqnarray}
u(s,t)=\sum_{k,l}\frac{w_{kl}N_{kl}(s,t)}{W(s,t)}u_{kl},
\label{eq:u}
\end{eqnarray}
where coefficients $u_{kl}$ are the unknown variables to be determined from the BIE in (\ref{eq:bie}).
It should be noted that, since the knots are clamped, $u$ at a control point $\bm{C}_{kl}$ on the perimeter $\partial\Pi$ corresponds to $u_{kl}$ exactly; meanwhile, the other coefficients $u_{kl}$ do not generally correspond to $u$ at $\bm{C}_{kl}$.
The solution, that is, the Dirichlet data $u$ on $S$ must be continuous across the intersecting line between two adjacent NURBS surfaces. This continuity requirement can be satisfied by giving a unique unknown index, say $\nu$, to all the unknown coefficients associated with the underlying intersection. For example, let us consider the case that an outer control point $\bm{C}_{kl}$ on a NURBS surface $\Pi$ has the same position as an outer point $\bm{C}'_{k'l'}$ on another surface $\Pi'$, where we measure the geometrical distance of the two points $\bm{C}_{kl}$ and $\bm{C}'_{k'l'}$ to judge whether they share the same position. Then, we give a global unknown index $\nu$ to the two points $\bm{C}_{kl}$ and $\bm{C}'_{k'l'}$ as well as the corresponding unknown coefficients $u_{kl}$ and $u_{k'l'}'$. As a result, we can obtain a certain number $N$ that represents the number of (global) unknowns over $S$. By using the $N$ global unknowns and control points, denoted by $u_\nu$ and $\bm{C}_\nu$ respectively, we no longer use the local indices (i.e. $kl$ and $k'l'$) and can express any point $\bm{y}$ and the boundary value $u$ as follows:
\begin{eqnarray}
\bm{y}(s,t)=\sum_{\nu=1}^N R_\nu(s,t)\bm{C}_\nu,\quad u(s,t)=\sum_{\nu=1}^N R_\nu(s,t)u_\nu,
\label{eq:y_u}
\end{eqnarray}
where $R_\nu$ corresponds to the basis $\frac{w_{kl} N_{kl}}{W}$ for a certain NURBS surface.
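A point on a single patch is then a rational combination of these bases; the sketch below (reusing \texttt{bspline} and \texttt{clamped\_knots} from the previous sketch) evaluates (\ref{eq:y}) on one NURBS surface.
\begin{verbatim}
import numpy as np

# Evaluate (eq:y) on one patch; ctrl has shape (ns, nt, 3), w has shape
# (ns, nt); bspline/clamped_knots are taken from the sketch above.

def nurbs_point(s, t, ctrl, w, ks, kt, ps, pt):
    ns, nt = w.shape
    Ns = np.array([bspline(k, ps, s, ks) for k in range(ns)])
    Nt = np.array([bspline(l, pt, t, kt) for l in range(nt)])
    WN = w * np.outer(Ns, Nt)                  # w_{kl} N_{kl}(s, t)
    return np.tensordot(WN, ctrl, axes=([0, 1], [0, 1])) / WN.sum()
\end{verbatim}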
\subsection{Discretisation of the BIE}
By plugging (\ref{eq:y_u}) into the BIE in (\ref{eq:bie}), we can yield the following discretised BIE:
\begin{eqnarray}
C(\bm{x}(\hat{s},\hat{t}))\sum_{\nu=1}^N R_\nu(\hat{s},\hat{t})u_\nu
+\int_S \frac{\partial G}{\partial n_y}(\bm{x}(\hat{s},\hat{t}),\bm{y}(s,t))\sum_{\nu=1}^N R_\nu(s,t)\mathrm{d} S_y u_\nu
=u^{\rm in}(\bm{x}(\hat{s},\hat{t})).
\label{eq:bie2}
\end{eqnarray}
Here, a pair of parameters $(\hat{s},\hat{t})$ corresponds to a collocation point $\bm{x}$ on $S$ and each parameter is determined as the Greville abscissa~\cite{liu2017}. Similarly to the determination of the $N$ global unknowns ($u_\nu$), we regard repeated collocation points on an intersection as a unique collocation point. As a result, we can determine $N$ distinct collocation points on $S$, which are enough to solve (\ref{eq:bie2}). In this study, we use the LU decomposition to solve for the $N$ unknowns ($u_\nu$) from the set of $N$ discretised BIEs of (\ref{eq:bie2}).
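The Greville abscissae that define the collocation parameters are the degree-$p$ knot averages; a short sketch (ours, reusing \texttt{clamped\_knots} from above) is as follows.
\begin{verbatim}
import numpy as np

# Greville abscissae: xi_k = (s_{k+1} + ... + s_{k+p}) / p, one per basis
# function N_k^p; these give the collocation parameters.

def greville(knots, p):
    n = len(knots) - p - 1   # number of basis functions
    return np.array([knots[k + 1:k + p + 1].sum() / p for k in range(n)])

print(greville(clamped_knots(6, 3), 3))   # [0, 1/9, 1/3, 2/3, 8/9, 1]
\end{verbatim}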
Once the unknowns are obtained, we can compute $u$ at any point: we may use (\ref{eq:u}) for any point on $S$, while we may exploit the integral representation for any point in $\bbbr^3\setminus\overline{V}$. In addition, we can compute the derivatives of $u$ on $S$ by differentiating the NURBS functions in (\ref{eq:u}) with respect to $s$ and/or $t$. This is useful for computing the shape derivative (sensitivity) because it usually consists of the derivative(s) of $u$ on a surface, as seen in (\ref{eq:sd}).
Regarding the boundary integrals in (\ref{eq:bie2}), we apply the Lachat's method to the singular integrals and the hierarchical subdivision technique to the singular- and nearly-singular-integrals. The details are described in our previous paper~\cite{takahashi2018jascome}.
\subsection{Knot insertion}\label{s:ki}
As we will mention in Section~\ref{s:sopt}, we will optimise the shape of $S$ via the control points $\bm{C}_\nu$. If the number of control points involved in a target $S$ is large, the convergence of the optimisation would be slow. So, one may construct a surface with a small number $N$ of control points. However, this can lead to a less accurate solution in the IGA because the number of unknowns (degrees of freedom) is also $N$; recall (\ref{eq:y_u}). To resolve this issue, which is common in the IGA~\cite{hughes2005}, we may resort to the knot insertion, by which control points can be added anywhere without changing the shape of $S$. This technique is used when we analyse the BVP in (\ref{eq:primary}) as well as the adjoint one in (\ref{eq:adjoint}), which will be mentioned in Section~\ref{s:statement}.
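As a curve-level illustration of the knot insertion just mentioned, the sketch below (Boehm's algorithm in homogeneous coordinates, our simplified version; surface insertion applies it row- or column-wise) adds one control point while leaving the geometry unchanged.
\begin{verbatim}
import numpy as np

# Boehm knot insertion for one NURBS curve in homogeneous coordinates
# Pw[i] = (w*x, w*y, w*z, w); the geometry is unchanged while the number
# of control points (and unknowns) grows by one.

def insert_knot(knots, Pw, p, s_hat):
    m = np.searchsorted(knots, s_hat, side='right') - 1  # s_hat in [s_m, s_{m+1})
    Q = []
    for i in range(len(Pw) + 1):
        if i <= m - p:
            Q.append(Pw[i])
        elif i <= m:
            a = (s_hat - knots[i]) / (knots[i + p] - knots[i])
            Q.append(a * Pw[i] + (1.0 - a) * Pw[i - 1])
        else:
            Q.append(Pw[i - 1])
    return np.insert(knots, m + 1, s_hat), np.array(Q)
\end{verbatim}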
\section{Background and purpose}\label{s:intro}
Interaction of waves with materials in a specific shape/topology can bring exotic wave phenomena such as the extraordinary transmission through a sub-wavelength aperture~\cite{Ebbesen_1998} and the emergence of a collimated beam by corrugating the aperture~\cite{Lezec_2002, Christensen_2007, Zhou_2010, takahashi2014}. In particular, structures with certain periodic patterns, that is, metamaterials have been recently and intensively studied in science and engineering~\cite{liu2011metamaterials,wang2020tunable}.
Shape optimisation is useful in appropriately designing (meta)materials in a wave field of interest. To analyse the wave problem numerically, boundary element method (BEM) is suitable because it can deal with the infinite domain without any absorbing boundary condition, which is necessary for domain-type solvers such as finite element method (FEM) and finite difference method. In addition, boundary-only models that BEM handles fit in shape optimisation, which concerns the deformation of the surface (or boundary) of a target material rather than its inside.
The first study on shape optimisation using BEM was conducted by Soares et al. in 1984~\cite{soares1984}, which investigated linear-elastostatic problems in 2D. Since then, there are over 200 publications that are related to both shape optimisation and BEM, as shown in Figure~\ref{fig:publications}. The BEM-based shape optimisation is currently being promoted by a new type of BEM, that is, the isogeometric BEM (IGBEM), whose number of publications is also shown in the same figure. The IGBEM is characterised by employing the NURBS (including B-spline) function as both shape and interpolation functions, following the concept of isogeometric analysis (IGA)~\cite{hughes2005,cottrell2009}. In this case, one can design the shape of interest through the control points (CPs) associated with the NURBS surface(s). On the other hand, one needs to regard (a part of) the nodes of a boundary element mesh as the design variables in the case of the conventional BEMs, which are based on a piecewise polynomial basis. This can increase the number of design variables unnecessarily, in particular, when the shape of the boundary is complicated and, thus, the mesh is fine. In the IGA, the technique of knot insertion can readily resolve the dilemma between reducing the number of design variables and increasing the resolution of the boundary element analysis. This is the main advantage of the IGBEM over the conventional BEMs, although the formulation and implementation of the former are harder than those of the latter.
Another merit of the IGBEM is that we can easily compute the gradient of the sound pressure at any point (generally except on the boundary of a NURBS surface, where other surfaces are connected) on the surface of a scatterer. This is because the sound pressure is usually differentiable over a NURBS surface. This property of the IGBEM is useful when we compute the shape derivative of our interest (see (\ref{eq:sd})). On the contrary, the gradient can be discontinuous on the edges of the boundary element mesh in the conventional BEM.
\begin{figure}[H]
\centering
\input{publications.pgf}
\caption{Publications with the terms both ``shape optimization'' and ``boundary element method'' (coloured in blue) and those with the term ``isogeometric boundary element method'' (in red). The data was obtained from Web of Science on Apr 28, 2021.}
\label{fig:publications}
\end{figure}
So far, shape optimisations based on the IGBEM have been investigated in terms of potential problems (or steady-state heat problems)~\cite{yoon2015,kostas2015,gillebaart2016,kostas2017,kostas2018}, elastostatic problems~\cite{li2011,lian2016,lian2017,sun2018,li2019,sun2020}, including a 2D thermoelastic problem~\cite{yoon2020}, and the acoustic problems in concern~\cite{liu2017,takahashi2019ewco,ummidivarapu2020,shaaban2020,shaaban2020b,wang2020,chen2019}. In regard to 2D, Liu et al.~\cite{liu2017} performed a shape optimisation of a $\Gamma$-shaped sound barrier, where the direct differentiation method (DDM) was employed to compute the sensitivity of the objective function with respect to CPs. Takahashi et al.~\cite{takahashi2019ewco}, which is a prior research of the current work, optimised periodic and layered structures in terms of ultra-thin solar panels. They derived the shape derivatives with the adjoint variable method (AVM). Ummidivarapu et al.~\cite{ummidivarapu2020} introduced a teaching-learning-based optimisation algorithm, which is a gradient-free method, to design an acoustic horn. Similarly, Shaaban et al.~\cite{shaaban2020} performed a shape optimisation by exploiting the particle swarm optimisation (PSO) algorithm, which is also gradient-free. This was extended to the axi-symmetric problem by the same authors~\cite{shaaban2020b}. The shape optimisation by Wang et al.~\cite{wang2020} is similar to Liu et al.~\cite{liu2017} but used the AVM instead of the DDM. On the other hand, 3D acoustics was considered only by Chen et al.~\cite{chen2019}. They conducted a shape optimisation based on the DDM. Thus, their study can be regarded as a 3D version of \cite{liu2017}. They successfully maximised the sound pressure on the surface of a submarine or a vase.
Similarly to Chen et al.~\cite{chen2019}, the purpose of this study is to establish a shape optimisation system for 3D acoustic problems. In this system, a nonlinear optimisation algorithm integrates the corresponding IGBEM and AVM. These two ingredients were developed in the authors' previous research~\cite{takahashi2018jascome}, which proposed an accurate method to evaluate the singular and nearly-singular integrals associated with the isogeometric discretisation and, additionally, performed a shape-sensitivity analysis as an application.
The present work makes steady progress toward the shape optimisation by considering several optimisation algorithms implemented in the two software packages Ipopt~\cite{ipopt} and NLopt~\cite{NLopt}. Those algorithms are compared with respect to their performance in some numerical examples.
The rest of this paper is organised as follows: Section~\ref{s:igbem} overviews an IGBEM for the 3D Helmholtz equation in terms of exterior homogeneous Neumann problems, which was constructed in our previous work~\cite{takahashi2018jascome}. Section~\ref{s:sopt} formulates the shape optimisation on the basis of the IGBEM and the adjoint variable method and describes the reduction of the problem to a nonlinear optimisation problem. Section~\ref{s:num} validates the proposed shape optimisation system through a numerical example and then demonstrates the capability of the system for complicated problems. Finally, Section~\ref{s:concl} concludes the present study.
\section{Numerical examples}\label{s:num}
This section will provide some numerical examples with our shape optimisation software. In Section~\ref{s:validate}, we will validate the software through an optimisation problem, which can be analysed exactly. The problem is actually a parametric optimisation problem, but it allows us to test our entire software chain for the shape optimisation problem. After the validation, we will show the capability of the software through some more complicated examples in Sections~\ref{s:reflector}--\ref{s:bending}.
\subsection{Verification}\label{s:validate}
\nprounddigits{5}
\subsubsection{Problem configuration}
Let us consider a parametric optimisation problem. Specifically, we will find the radius, denoted by $a$, of a spherical scatterer (centred at the origin) that maximises the objective function $\mathcal{J}$ in (\ref{eq:J}), where $M=1$ and the corresponding observation point $\bm{z}_1$ is chosen as $(0, 0, 8.5)^{\rm T}$ (Figure~\ref{fig:test}): namely,
\begin{eqnarray}
\mathcal{J}(u;S):=\frac{\abs{u(\bm{z}_1)}^2}{2}\quad\text{where $\bm{z}_1=(0, 0, 8.5)^{\rm T}$.}
\label{eq:J_verification}
\end{eqnarray}
We give a planewave incident field $u^{\rm in}(\bm{x})=\mathrm{e}^{-\mathrm{i} kz}$, which propagates in the $-z$ direction, where the wavenumber $k$ is given as one.
Following the reference~\cite{cobb1988}, we create the surface $S$ of the spherical scatterer with six NURBS surfaces. Each surface is constructed with $5\times 5$ control points and the tensor product of the B-spline functions of degree 4, i.e. ${n_s}={n_t}=5$ and $\ps=\pt=4$. Correspondingly, the number $N$ of the (unique) control points is 98. As noted in Section~\ref{s:ki}, we can increase $N$ to improve the resolution of the boundary element solution by the knot insertion. We consider three cases of $N$, i.e. $N=866$, $2402$ and $3458$.
\begin{figure}[H]
\centering
\includegraphics[width=.35\textwidth]{sphere.eps}
\caption{Problem setting.}
\label{fig:test}
\end{figure}
Under the present configuration, the sound pressure $u$ at the observation point $\bm{z}_1$ can be written as a function of the radius $a$~\cite{bowman1987}, that is,
\begin{eqnarray}
u(\bm{z}_1) = \sum_{n=0}^{\infty} \mathrm{i}^n (2n + 1)\left(j_n(kr) - A'_n h^{(1)}_n(kr)\right)P_n (\cos(\theta-\pi)),
\label{eq:uanal}
\end{eqnarray}
where the spherical coordinates $\theta$ and $r$ corresponding to $\bm{z}_1$ are $0$ and $8.5$, respectively.
Also, $j_n$, $h^{(1)}_n$ and $P_n$ denote the spherical Bessel function of order $n$, the spherical Hankel function of the first kind and order $n$, and the Legendre polynomial of degree $n$, respectively. In addition, the coefficient $A'_n$ is defined as
\begin{eqnarray*}
A'_n := \frac{j'_n (ka)}{{h^{(1)}_n}'(ka)},
\end{eqnarray*}
where the prime represents differentiation with respect to the argument. It should be noted that (\ref{eq:uanal}) is valid when the observation point $\bm{z}_1$ is outside the sphere, that is, $a\in(0,8.5)$.
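For reference, the truncated series is straightforward to evaluate numerically. The following is a minimal Python sketch (the function names are ours, and SciPy's spherical Bessel routines are assumed); it reproduces the objective function $\mathcal{J}(a)$ used below:
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def u_at_z1(a, k=1.0, r=8.5, theta=0.0, nmax=60):
    # Truncated series for the total field at the observation point
    # outside a sound-hard sphere of radius a.
    n = np.arange(nmax + 1)
    jn_r = spherical_jn(n, k * r)
    hn_r = jn_r + 1j * spherical_yn(n, k * r)          # h_n^(1)(kr)
    An = (spherical_jn(n, k * a, derivative=True)
          / (spherical_jn(n, k * a, derivative=True)
             + 1j * spherical_yn(n, k * a, derivative=True)))
    Pn = eval_legendre(n, np.cos(theta - np.pi))
    return np.sum(1j ** n * (2 * n + 1) * (jn_r - An * hn_r) * Pn)

def J(a):
    # Objective function of the verification problem
    return 0.5 * abs(u_at_z1(a)) ** 2

print(J(2.25))   # close to the first local maximum, ~0.6238
\end{verbatim}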
From (\ref{eq:J_verification}) and (\ref{eq:uanal}), we can plot $\mathcal{J}$ against $a\in[1,8]$ as in Figure~\ref{fig:J_a}. When we restrict the lower and upper bounds of the design variable $a$ to $1$ and $7$, respectively, we find two local maxima, i.e.
\begin{eqnarray}
(a,\mathcal{J})=(\numprint{2.24991750058038420e+00},\ \numprint{0.623794337070834093}),\quad
(\numprint{5.37318093130602481e+00},\ \numprint{1.01260319466883364e+00}),
\label{eq:maxima}
\end{eqnarray}
which are computed by applying the Brent minimisation algorithm implemented in the GNU Scientific Library~\cite{GSL} to (\ref{eq:uanal}). If we can arrive at one of these local maxima from a certain initial radius, denoted by $a_0$, we can validate our optimisation software. In what follows, we consider two initial values, i.e. $a_0=3$ and $4$.
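For illustration, the same maxima can be reproduced with any bounded scalar optimiser; a SciPy analogue of the GSL Brent routine, reusing the function \texttt{J} from the sketch above and bracketing each maximum as read off Figure~\ref{fig:J_a}, might read:
\begin{verbatim}
from scipy.optimize import minimize_scalar

# Bounded Brent-type search for the two local maxima of J(a).
for lo, hi in [(1.0, 4.0), (4.0, 7.0)]:
    res = minimize_scalar(lambda a: -J(a), bounds=(lo, hi),
                          method='bounded')
    print("a = %.6f,  J = %.6f" % (res.x, -res.fun))
\end{verbatim}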
\begin{figure}[H]
\centering
\input{J.pgf}
\caption{The value of the objective function $\mathcal{J}$ in (\ref{eq:J_verification}) as a function of the radius $a$ of the spherical scatterer at the origin. The two points represent the local maxima in (\ref{eq:maxima}) when $a$ is restricted to $[1,7]$.}
\label{fig:J_a}
\end{figure}
The design variable of this problem is the radius $a$ only, while our shape optimisation method treats all the control points $\bm{C}_\nu$ as the design variables (recall Section~\ref{s:nonlinear}). To fill this gap, we modify a small part of the computer program by considering the relationship between the radius $a$ and the control points $\bm{C}_\nu$.
Specifically, when the radius is updated from $a$ to $\tilde{a}$, all the control points must be scaled by $\tilde{a}/a$ so that the surface $S$ can preserve its spherical shape. Therefore, the control points $\bm{C}_\nu$ must be updated to $\tilde{\bm{C}}_\nu$ so that
\begin{eqnarray}
\tilde{\bm{C}}_\nu = \frac{\tilde{a}}{a} \bm{C}_\nu
\label{eq:tildeC}
\end{eqnarray}
holds. Plugging this and the variation of the radius, i.e. $\delta a:=\tilde{a}-a$, into (\ref{eq:tildeJ}), we have
\begin{eqnarray}
\mathcal{J}(\tilde{u};\tilde{S}) = \mathcal{J}(u;S)+\sum_{\nu=0}^{N-1} \frac{\bm{s}_\nu(u;S)\cdot\bm{C}_\nu}{a}\delta a+O(\epsilon^2).
\end{eqnarray}
Clearly, the shape derivative of $\mathcal{J}$ with respect to the radius $a$, i.e. $\pp{\mathcal{J}}{a}$, is obtained as
\begin{eqnarray}
\pp{\mathcal{J}}{a}=\sum_{\nu=0}^{N-1} \frac{\bm{s}_\nu(u;S)\cdot\bm{C}_\nu}{a}.
\label{eq:djda}
\end{eqnarray}
Hence, when the new (perturbed) radius $\tilde{a}$ is determined by an optimisation algorithm, we first update the control points $\bm{C}_\nu$ to $\tilde{\bm{C}}_\nu$ according to (\ref{eq:tildeC}) and then compute $\pp{\mathcal{J}}{a}$ by (\ref{eq:djda}). This procedure is added to the user-defined routine to compute $\mathcal{J}$ and its gradient.
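A minimal sketch of this user-defined routine, assuming that the sensitivities $\bm{s}_\nu$ from the IGBEM/AVM analysis at the current shape are available as an array, is the following (the function name is ours):
\begin{verbatim}
import numpy as np

def radius_update_and_gradient(a_new, a, C, s):
    # C, s: arrays of shape (N, 3) holding the control points C_nu
    # and the sensitivities s_nu, respectively.
    C_new = (a_new / a) * C      # uniform scaling of the control points
    dJ_da = np.sum(s * C) / a    # sum_nu s_nu . C_nu / a
    return C_new, dJ_da
\end{verbatim}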
In this analysis, we let the convergence tolerance of IP, i.e. the parameter \texttt{tol} of Ipopt~\cite[Page 69]{ipopt}, be $10^{-3}$.
In regard to both MMA and SLSQP, we let the relative tolerance, which corresponds to the parameter \texttt{ftol\_rel} of NLopt, be $10^{-3}$.
\subsubsection{Results and discussions}
Table~\ref{tab:sphere} shows the computed optimal radius $a$ and the corresponding value of $\mathcal{J}$ for the two initial radii $a_0$ and three numbers $N$ of control points.
We can observe that every solver reached one of the local maxima.
There is no clear difference due to $N$, which means that the discretisation error of the IGBEM is almost negligible with the smallest $N$.
The columns of ``Eva.'' in Table~\ref{tab:sphere} show the number of evaluations of $\mathcal{J}$ (and, most likely, of its gradient at the same time) until convergence. In every combination of $a_0$ and $N$, the SLSQP required fewer evaluations than the others. Figure~\ref{fig:test_history} plots the value of $\mathcal{J}$ against the evaluation count in the case of $a_0=3$ and $N=866$.
The present results indicate that our formulation and its numerical implementation are valid.
\begin{table}[H]
\centering
\caption{Result of the optimised radius $a$ and the corresponding value of the objective function $\mathcal{J}$ in Section~\ref{s:validate}. Here, the heading ``Eva.'' stands for the number of evaluating $\mathcal{J}$.}
\label{tab:sphere}
\scriptsize
$a_0=3$\\
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
Algo. & \multicolumn{3}{c|}{IP} & \multicolumn{3}{c|}{MMA} & \multicolumn{3}{c|}{SLSQP}\\
\hline
$N$ & $a$ & $\mathcal{J}$ & Eva. & $a$ & $\mathcal{J}$ & Eva. & $a$ & $\mathcal{J}$ & Eva. \\
\hline
866 & \numprint{2.24998615900516929e+00} & \numprint{0.623794283354436985} & 23 & \numprint{2.24978909861472687e+00} & \numprint{0.623794279670119023} & 8 & \numprint{2.24274042762932035e+00} & \numprint{0.623778143104502414} & 5\\
2402 & \numprint{2.24998081149424500e+00} & \numprint{0.623794335188337157} & 23 & \numprint{2.24978606250787783e+00} & \numprint{0.623794331009669412} & 8 & \numprint{2.24273797877532655e+00} & \numprint{0.623778182868081066} & 5\\
3458 & \numprint{2.24998289952717334e+00} & \numprint{0.623794335575214687} & 23 & \numprint{2.24978730438999408e+00} & \numprint{0.623794331583398809} & 8 & \numprint{2.24273916078656965e+00} & \numprint{0.623778188638723918} & 5\\
\hline
\end{tabular}\\[\baselineskip]
%
$a_0=4$\\
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
Algo. & \multicolumn{3}{c|}{IP} & \multicolumn{3}{c|}{MMA} & \multicolumn{3}{c|}{SLSQP}\\
\hline
$N$ & $a$ & $\mathcal{J}$ & Eva. & $a$ & $\mathcal{J}$ & Eva. & $a$ & $\mathcal{J}$ & Eva. \\
\hline
866 & \numprint{5.37302035529867261e+00} & \numprint{1.01254399549004437e+00} & 13 & \numprint{5.37327557417074431e+00} & \numprint{1.01254397411240737e+00} & 10 & \numprint{5.37319113655478908e+00} & \numprint{1.01254399352844837e+00} & 9\\
2402 & \numprint{5.37295246736562149e+00} & \numprint{1.01260217818399623e+00} & 13 & \numprint{5.37315952650911122e+00} & \numprint{1.01260222197983096e+00} & 16 & \numprint{5.37312574673180787e+00} & \numprint{1.01260221984283727e+00} & 9\\
3458 & \numprint{5.37298806847122368e+00} & \numprint{1.01260290777094331e+00} & 13 & \numprint{5.37320910135166496e+00} & \numprint{1.01260293879015717e+00} & 10 & \numprint{5.37315980955157002e+00} & \numprint{1.01260293911576449e+00} & 9\\
\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\input{test_history.pgf}
\caption{History of the value of the objective function $\mathcal{J}$ in (\ref{eq:J_verification}) against the evaluation count in the case of the initial radius $a_0=3$ and $N=866$.}
\label{fig:test_history}
\end{figure}
\subsection{Example 1: Reflector}\label{s:reflector}
To demonstrate the capability of our shape optimisation framework, we will begin with a simple model of a cuboid ``reflector'', whose dimensions are $1\times 1\times 0.5$, as shown in Figure~\ref{fig:app12a_config}.
Regarding a planewave incident field $u^{\rm in}$ with the wavenumber of $k=3$ that propagates in the $-z$ direction, i.e. $u^{\rm in}(\bm{x})=e^{-\mathrm{i} k z}$, we try to maximise the objective function $\mathcal{J}$ in (\ref{eq:J}), where a single observation point $\bm{z}_1=(0.5,0.5,1.0)^{\rm T}$ in the illuminated side is considered.
\begin{figure}[H]
\centering
\includegraphics[width=.7\textwidth]{app12a_config_lowresolution.eps}
\caption{The initial shape of the reflector model in Section~\ref{s:reflector}. The point in grey represents the observation point $\bm{z}_1=(0.5,0.5,1.0)^{\rm T}$. The red points represent the CPs that are designed, while the blue ones represent the fixed CPs.}
\label{fig:app12a_config}
\end{figure}
The surface of the reflector consists of six NURBS surfaces (viz., top, bottom, left, right, front and back) with the NURBS functions of degree 2, i.e. $\ps=\pt=2$. They are shown in different colours in Figure~\ref{fig:app12a_config}. The number ${n_s}$, which denotes the number of control points along the local coordinate $s$, is given as $6$, $6$ and $3$ if $s$ is parallel to the $x$-, $y$- and $z$-axis, respectively, at the initial configuration. Similarly, we determine the value of ${n_t}$ for every NURBS surface. Each NURBS surface is clamped on its perimeter, as mentioned in Section~\ref{s:iga}. In this case, the number $N$ of unique CPs is 92, which are shown as points (in red or blue) on the surfaces in Figure~\ref{fig:app12a_config}. The number of CPs is increased to 548 by the knot insertion when we perform the isogeometric boundary element analysis.
In this example, we design only $4\times 4$ CPs, which are coloured in red in Figure~\ref{fig:app12a_config}, on the top surface excluding the perimeter. In addition, we allow each target CP to move vertically by at most 0.3, which guarantees that no target CP touches the others. Thus, the number of design variables is 16. To be specific, we first regard all the coordinates of the CPs as the design variables and, then, set the initial coordinate to both the lower and upper bounds for each coordinate that is not optimised.
Figure~\ref{fig:app12a_history} compares the history of the value of $\mathcal{J}$ for the three optimisation algorithms. All the algorithms converged to almost the same solution. Similarly to the previous example, the SLSQP required the fewest evaluations until convergence.
\begin{figure}[H]
\centering
\input{app12a_history.pgf}
\caption{History of the value of the objective function $\mathcal{J}$ in the reflector model (Section~\ref{s:reflector}).}
\label{fig:app12a_history}
\end{figure}
Figure~\ref{fig:app12a_u_abs_final} shows the distribution of the absolute value of the sound pressure, i.e. $|u|$, on the boundary for both the initial and optimised shapes. In addition, Figure~\ref{fig:app12a_uin_final} shows the distribution of $|u|$ on the middle cross section, i.e. $y=0.5$; the drawing range was selected as $-0.5\le x\le 1.5$ and $-0.5\le z\le 3.5$. The peak of $|u|$ is not at the observation point $\bm{z}_1$, but the created shape is reasonable in the sense that it looks like a parabolic antenna. It should be noted that the results in Figures~\ref{fig:app12a_u_abs_final} and \ref{fig:app12a_uin_final} are those of the SLSQP, but almost the same results were obtained by both IP and MMA.
\begin{figure}[H]
\centering
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{app12a_u_abs_0000_crop_by_gimp.eps}
&\includegraphics[width=.4\textwidth]{app12a_u_abs_final_crop_by_gimp.eps}\\
Initial shape & Optimised shape
\end{tabular}
\caption{Distribution of $|u|$ on the surface in the reflector model (Section~\ref{s:reflector}). The point (coloured in grey) is the observation point $\bm{z}_1=(0.5,0.5,1.0)^{\rm T}$.}
\label{fig:app12a_u_abs_final}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=.8\textwidth]{data/app12a_uin_final.eps}
\caption{Distribution of $|u|$ on the plane of $y=0.5$ in the optimised shape of the reflector model (Section~\ref{s:reflector}).}
\label{fig:app12a_uin_final}
\end{figure}
By considering the evaluation counts in Figure~\ref{fig:app12a_history} as well as Figure~\ref{fig:test_history}, we will use only the SLSQP in the following examples.
\subsection{Example 2: Resonator}\label{s:resonator}
As the second example, we attempt to catch a sound with a bowl. Specifically, as illustrated in Figure~\ref{fig:app7f_config} (left), we consider a cubic scatterer (whose dimensions are $3\times 3\times 3$) with a hollow ($1\times 1\times 2$). Then, we optimise the shape of the hollow so that we can increase the sound pressure inside it. We give two types of the incident fields, i.e. $u^{\rm in}(\bm{x})=e^{-\mathrm{i} k z}$ and $e^{-\mathrm{i} k x}$, which represent the planewave propagating in the $-z$ and $-x$ direction, respectively. Here, $k=3$ is assumed.
Regarding the objective function $\mathcal{J}$ in (\ref{eq:J}) to be maximised, we consider three observation points aligned vertically in the hollow, i.e. $\bm{z}_1=(1.5,1.5,1.4)$, $\bm{z}_2=(1.5,1.5,1.5)$ and $\bm{z}_3=(1.5,1.5,1.6)$, which are shown as three points (in grey) in Figure~\ref{fig:app7f_config} (right).
The bowl is modelled with $46$ NURBS surfaces of degree 2, which are distinguished by different colours as in Figure~\ref{fig:app7f_config} (left). The whole surface of the bowl includes $282$ CPs, which are displayed as the points (in blue) in the same figure. In addition, we increase the number $N$ of CPs from 282 to 1314 by the knot insertion in every boundary element analysis.
We optimise the shape of the hollow, preserving the initial square shape of both the top aperture and the bottom surface. To this end, we choose $32$ CPs on the side walls of the hollow; 20 of them on the back side are shown as red points in Figure~\ref{fig:app7f_config} (right). We allow each CP to move up to a certain value from its initial position. The value is selected as $0.15$, which is less than half of the minimum distance (i.e. $0.4$) between any two CPs at the initial configuration, so that no CP touches the others or any of the observation points.
\begin{figure}[H]
\centering
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{app7f_shape_init_crop_by_gimp.eps}
&\includegraphics[width=.4\textwidth]{app7f_shape_init_half_crop_by_gimp.eps}\\
Entire & Posterior half
\end{tabular}
\caption{The initial shape of the resonator model in Section~\ref{s:resonator}. The points in grey represent the three observation points $\bm{z}_1$, $\bm{z}_2$ and $\bm{z}_3$ in the hollow. The red points represent the CPs that are designed, while the blue ones represent the fixed CPs; the CPs on the left, back and bottom surfaces are also fixed.}
\label{fig:app7f_config}
\end{figure}
\nprounddigits{2}
Figure~\ref{fig:app7fg_history} shows the history of $\mathcal{J}$ for both incident fields. We could increase $\mathcal{J}$ in both cases. The value of $\mathcal{J}$ increased monotonically from \numprint{3.19436966151474544e-01} and then converged at \numprint{2.22377289810397105e+00} after seven counts in the case of the vertical ($-z$-direction) incidence, while it oscillated significantly but increased gradually from \numprint{4.12797175555163764e-02} to \numprint{4.60735738700995157e+01} after 60 counts in the case of the horizontal ($-x$-direction) incidence.
\begin{figure}[H]
\centering
\begin{tabular}{cc}
\begin{minipage}{.45\textwidth}
\input{app7f_history.pgf}
\end{minipage}
&
\begin{minipage}{.45\textwidth}
\input{app7g_history.pgf}
\end{minipage}
\end{tabular}
\caption{History of the value of the objective function $\mathcal{J}$ in the resonator model (Section~\ref{s:resonator}).}
\label{fig:app7fg_history}
\end{figure}
Figure~\ref{fig:app7_u_abs_y15_25} shows the initial and optimised (final) shapes of the hollow together with the distribution of $|u|$ on it. In both cases, $|u|$ around the observation points was relatively low at the initial configuration, but the sound pressure was actually strengthened inside the hollow after each optimisation. In addition, Figure~\ref{fig:app7fg_uin_final} shows the distribution of $|u|$ on the middle cross section of $y=1.5$.
\begin{figure}[H]
\begin{tabular}{cc}
Vertical incidence: Initial shape & Vertical incidence: Optimised shape\\
\includegraphics[width=.48\textwidth]{app7f_u_abs_0000_y15_25_crop_by_gimp_lowresolution.eps}
&\includegraphics[width=.48\textwidth]{app7f_u_abs_final_y15_25_crop_by_gimp_lowresolution.eps}\\
Horizontal incidence: Initial shape & Horizontal incidence: Optimised shape\\
\includegraphics[width=.48\textwidth]{app7g_u_abs_0000_y15_25_crop_by_gimp_lowresolution.eps}
&\includegraphics[width=.48\textwidth]{app7g_u_abs_final_y15_25_crop_by_gimp_lowresolution.eps}
\end{tabular}
\caption{Initial (left) and optimised (right) shapes of the hollow together with the distribution of $|u|$ in the case of the vertical and horizontal incidences (Section~\ref{s:resonator}). Here, the range of $y$ is from 1.5 to 2.5, which covers the back half of the hollow. The points in grey are the observation points.}
\label{fig:app7_u_abs_y15_25}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=.8\textwidth]{app7f_uin_final.eps}\\
Vertical incident planewave\\
\includegraphics[width=.8\textwidth]{app7g_uin_final.eps}\\
Horizontal incident planewave
\caption{Distribution of $|u|$ on the middle cross section of $y=1.5$ in the optimised shape in the case of the vertical and horizontal incidences (Section~\ref{s:resonator}). The points coloured in magenta represent the observation points.}
\label{fig:app7fg_uin_final}
\end{figure}
\subsection{Example 3: Bending duct}\label{s:bending}
As the final example, we consider a more complicated model, whose dimensions are $3\times 3 \times 5$, containing a bending duct. Figure~\ref{fig:app17_config} shows the cross section of $y=1.5$ to reveal the inside of the model. As illustrated in the figure, we consider $3\times 3$ observation points on the plane $x=-0.5$ near the exit of the duct. We design a part of the top and bottom surfaces of the duct through 30 control points, 20 of which are drawn as red points in the figure. More precisely, we optimise the $z$ coordinates of those CPs. Here, every coordinate can be changed from its initial value by up to 0.2, which takes account of the aforementioned consideration, that is, no CP collides with the others during the optimisation. The incident planewave is given from the $+x$ side and its wavenumber is selected as 1, 2, 3 or 5 for comparison.
As shown in Figure~\ref{fig:app17_history}, the optimisation was successfully terminated for every wavenumber $k$. The value of $\mathcal{J}$ was largest at the second largest wavenumber, $k=3$. This can be related to the standing wave in the vertical direction excited in the duct, which can be observed in Figure~\ref{fig:app17}, but we did not pursue the reason.
\begin{figure}[H]
\centering
\includegraphics[width=.55\textwidth]{app17a_config_crop_by_gimp.eps}
\caption{The back ($-y$) side of the bending-duct model at the initial configuration (Section~\ref{s:bending}). The points in grey on the plane $x=-0.5$ represent six of the nine observation points $\{\bm{z}_m\}_{m=1}^9$, which are close to the exit of the duct on the plane $x=0$. The red points represent 20 of the 30 CPs to be designed.}
\label{fig:app17_config}
\end{figure}
\begin{figure}[H]
\centering
\input{app17_history.pgf}
\caption{History of the value of the objective function $\mathcal{J}$ in the bending-duct model (Section~\ref{s:bending}).}
\label{fig:app17_history}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{ccc}
& Initial & Optimised\\
$k=1$
&\includegraphics[height=.22\textheight]{app17c_u_abs_0000_y15_25_GIMP.eps}
&\includegraphics[height=.22\textheight]{app17c_u_abs_final_y15_25_GIMP.eps}\\
$k=2$
&\includegraphics[height=.22\textheight]{app17b_u_abs_0000_y15_25_GIMP.eps}
&\includegraphics[height=.22\textheight]{app17b_u_abs_final_y15_25_GIMP.eps}\\
$k=3$
&\includegraphics[height=.22\textheight]{app17a_u_abs_0000_y15_25_GIMP.eps}
&\includegraphics[height=.22\textheight]{app17a_u_abs_final_y15_25_GIMP.eps}\\
$k=5$
&\includegraphics[height=.22\textheight]{app17d_u_abs_0000_y15_25_GIMP.eps}
&\includegraphics[height=.22\textheight]{app17d_u_abs_final_y15_25_GIMP.eps}
\end{tabular}
\caption{Shape and distribution of $|u|$ at the initial (left) and optimised (right) configurations of the bending-duct model (Section~\ref{s:bending}). Note that the range of $|u|$ varies according to the wavenumber $k$.}
\label{fig:app17}
\end{figure}
\section{Gradient-based shape optimisation}\label{s:sopt}
We will construct our shape optimisation method based on the IGBEM, adjoint variable method and nonlinear optimisation method. The present framework is a direct extension of the 2D case investigated in our previous paper~\cite{takahashi2019ewco}\footnote{The corresponding software is available at \texttt{https://sourceforge.net/projects/igbemsopt/}.}.
\subsection{Problem statement and shape derivative}\label{s:statement}
The present shape optimisation problem is to maximise or minimise a prescribed objective function $\mathcal{J}$ by changing the surface of the scatterer(s) $V$, i.e. the boundary $S$. Specifically, we define $\mathcal{J}$ as the sum of the squared moduli of the sound pressure $u$ at $M$ observation points $\{\bm{z}_i\}_{i=1}^M$, that is,
\begin{eqnarray}
\mathcal{J}(u;S):=\sum_{m=1}^M\frac{\abs{u(\bm{z}_m)}^2}{2},
\label{eq:J}
\end{eqnarray}
where $u$ is supposed to be a solution of the BVP in (\ref{eq:primary}) or the \textit{primary} problem in the context of the adjoint variable method.
To define the shape derivative (sensitivity), denoted by $\mathcal{S}$, of $\mathcal{J}$ in (\ref{eq:J}), we slightly move every point $\bm{y}$ on $S$ by $\epsilon\bm{V}(\bm{y})$, where $\epsilon$ is an infinitesimally small number and $\bm{V}$ denotes the direction of movement. Correspondingly, the boundary $S$ and the field $u$ are perturbed to $\tilde{S}$ and $\tilde{u}$, respectively. Then, $\mathcal{S}$ is defined as the coefficient of the term $\epsilon$ which is obtained by expanding the perturbed objective function $\mathcal{J}(\tilde{u}; \tilde{S})$ with respect to $\epsilon$. Therefore, we have
\begin{eqnarray}
\mathcal{J}(\tilde{u};\tilde{S})=\mathcal{J}(u;S)+\epsilon\mathcal{S}(u;S)+O(\epsilon^2).
\label{s:J_expand}
\end{eqnarray}
Here, as well-known (see \cite{feijoo2003} for example), $\mathcal{S}$ can be derived as follows:
\begin{eqnarray}
\mathcal{S}(u;S)=\mathop{\mathrm{Re}}\int_S\left(k^2\lambda^* u-\nabla \lambda^* \cdot \nabla u\right)\bm{V}\cdot\bm{n}\ \mathrm{d} S,
\label{eq:sd}
\end{eqnarray}
where $()^*$ denotes the complex conjugate and the adjoint field $\lambda$ is the solution of the following adjoint problem:
\begin{subequations}
\begin{align}
&\text{Governing equation}: &&\triangle \lambda(\bm{x}) + k^2 \lambda(\bm{x}) = -\sum_m u(\bm{x})\delta(\bm{x}-\bm{z}_m) && \text{in $V$},\label{eq:helm_adj}\\
&\text{Boundary condition}: &&\frac{\partial\lambda}{\partial n}=0 && \text{on $S$},\label{eq:bc_adj}\\
&\text{Radiation condition}: &&\lambda(\bm{x})\rightarrow 0 && \text{as $\abs{\bm{x}}\rightarrow\infty$}.\label{eq:radiation_adj}%
\end{align}%
\label{eq:adjoint}%
\end{subequations}
We can also solve the adjoint problem with the IGBEM mentioned in Section~\ref{s:igbem}. To this end, we may replace the incident field $u^{\rm in}(\bm{x})$ in (\ref{eq:bie}) with the following term:
\begin{eqnarray*}
-\int_V G(\bm{x}-\bm{y})\left(-\sum_m u(\bm{y})\delta(\bm{y}-\bm{z}_m)\right)\mathrm{d} V_y
= \sum_m G(\bm{x}-\bm{z}_m)u(\bm{z}_m),
\end{eqnarray*}
where $G$ is the fundamental solution given in (\ref{eq:G}).
\subsection{Discretisation of the shape derivative}
In the numerical analysis, the infinitesimal deformation (perturbation) $\epsilon\bm{V}$ must be finite. When we denote by $\tilde{\bm{y}}$ the point on the perturbed surface $\tilde{S}$ that corresponds to an arbitrary point $\bm{y}$ on $S$, which is expressed as (\ref{eq:y_u}) in the IGBEM, we can approximate $\epsilon\bm{V}(\bm{y})$ as follows:
\begin{eqnarray*}
\epsilon\bm{V}(\bm{y})\approx\tilde{\bm{y}}(s,t)-\bm{y}(s,t)
=\sum_{\nu=0}^{N-1} R_\nu(s,t)\delta\bm{C}_\nu,
\end{eqnarray*}
where $\delta\bm{C}_{\nu}$ denotes the variation of the control point $\bm{C}_\nu$, i.e.
\begin{eqnarray*}
\delta\bm{C}_{\nu}:=\tilde{\bm{C}}_{\nu}-\bm{C}_{\nu}.
\end{eqnarray*}
Then, $\mathcal{J}$ in (\ref{s:J_expand}) can be discretised as follows:
\begin{eqnarray}
\mathcal{J}(\tilde{u};\tilde{S})\approx \mathcal{J}(u;S)+\sum_{\nu=0}^{N-1}\bm{s}_\nu(u;S)\cdot\delta\bm{C}_\nu+O(\epsilon^2),
\label{eq:tildeJ}
\end{eqnarray}
where
\begin{eqnarray}
\bm{s}_\nu(u;S):=\mathop{\mathrm{Re}}\int_S\left( k^2\lambda^* u-\nabla \lambda^* \cdot \nabla u
\right) R_\nu \bm{n}\,\mathrm{d} S.
\label{eq:sd_disc}
\end{eqnarray}
The vector $\bm{s}_\nu$ stands for the sensitivity of $\mathcal{J}$ with respect to the control points $\bm{C}_\nu$, which are considered as the design variables in this study.
Because $\partial u/\partial n=0$ and $\partial\lambda/\partial n=0$ hold owing to the boundary conditions in (\ref{eq:bc}) and (\ref{eq:bc_adj}), respectively, the gradients of $u$ and $\lambda^*$ in (\ref{eq:sd_disc}) can be expressed as follows:
\begin{eqnarray*}
\nabla u = \frac{1}{J}\left(\pp{u}{s}\bm{t}\times\bm{n}+\pp{u}{t}\bm{n}\times\bm{s}\right),\quad
\nabla \lambda^* = \frac{1}{J}\left(\pp{\lambda^*}{s}\bm{t}\times\bm{n}+\pp{\lambda^*}{t}\bm{n}\times\bm{s}\right),
\end{eqnarray*}
where $\bm{s}:=\pp{\bm{y}}{s}$ and $\bm{t}:=\pp{\bm{y}}{t}$ denote the tangential vectors along $s$ and $t$ coordinates, respectively, and $J:=\norm{\bm{s}\times\bm{t}}$ is the Jacobian. It should be emphasised that the tangential derivatives $\pp{u}{s}$ and $\pp{u}{t}$ can be computed readily by differentiating the NURBS basis $R_\nu$. In addition, the gradients are continuous over the surface $S$ (except for the intersections among NURBS surfaces in general) if the degrees $\ps$ and $\pt$ of the NURBS basis are two or more.
Since there is no singularity in the integral in (\ref{eq:sd_disc}), we may evaluate the integral with the Gauss-Legendre quadrature formula.
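For completeness, the evaluation of the gradients from the tangential derivatives can be sketched in a few lines of NumPy (the names are ours; the parametric derivatives may be complex):
\begin{verbatim}
import numpy as np

def surface_gradient(du_ds, du_dt, ts, tt):
    # ts = dy/ds and tt = dy/dt are the (real) tangent vectors.
    # Implements grad u = (1/J)(u_s t x n + u_t n x s), J = |s x t|.
    nvec = np.cross(ts, tt)
    J = np.linalg.norm(nvec)
    nvec = nvec / J                      # unit normal
    return (du_ds * np.cross(tt, nvec)
            + du_dt * np.cross(nvec, ts)) / J

# Check on a flat patch: ts = e_x, tt = e_y gives (u_s, u_t, 0).
print(surface_gradient(1.0 + 2.0j, 3.0, [1, 0, 0], [0, 1, 0]))
\end{verbatim}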
\subsection{Reduction to nonlinear optimisation problem}\label{s:nonlinear}
The optimisation problem stated in Section~\ref{s:statement} forms a nonlinear optimisation problem. In general, the problem is to minimise the prescribed objective function $f:\bbbr^n\rightarrow\bbbr$ with respect to $n$ design variables $x\in\bbbr^n$ under $m$ inequality-constraints $g:\bbbr^n\rightarrow\bbbr^m$, i.e.
\begin{eqnarray}
g_{\rm L} \leq g(x)\leq g_{\rm U},
\end{eqnarray}
where $g_{\rm L,U}\in\bbbr^m$ denote the bounds of $g$. The design variables $x$ are usually bounded as
\begin{eqnarray}
x_{\rm L} \leq x\leq x_{\rm U},
\end{eqnarray}
where $x_{\rm L,U}\in\bbbr^n$ denote the bounds of $x$. Optionally, the gradient $\nabla f$ and the Hessian of $f$ are considered if they are readily computed.
In the present shape optimisation problem, we choose the $N$ control points $\bm{C}_\nu$ (where $\nu=0,\ldots,N-1$) as the design variables. Then, we may regard our objective function $\mathcal{J}$, design variables $\bm{C}_\nu$ and their gradients $\bm{s}_\nu$ in (\ref{eq:sd_disc}) as $f$, $(\bm{C}_0,\ldots,\bm{C}_{N-1})^{\rm T}\in\bbbr^{3N}$ and $(\bm{s}_0,\ldots,\bm{s}_{N-1})^{\rm T}\in\bbbr^{3N}$, respectively, where $3N$ corresponds to the number $n$ of design variables. In this study, we do not consider the Hessian of $\mathcal{J}$.
We utilise a primal-dual interior-point method with line searches based on Filter methods, which is implemented in Ipopt~\cite{ipopt,ipopt_wiki} and will be called IP hereafter. In the previous research~\cite{takahashi2019ewco}, the IP sometimes required a large number of backtracking line-search steps and, thus, many evaluations of $f$ as well as $\nabla f$. As a result, the computational time was sometimes enormous.
Hence, we consider different gradient-based optimisation methods. To this end, we exploit the software NLopt~\cite{NLopt}, which contains many optimisation methods. We use the MMA (method of moving asymptotes~\cite{svanberg2002class}) and SLSQP (sequential least-squares quadratic programming~\cite{kraft1994}) because they are general-purpose in the sense that they can handle nonlinear inequality constraints.
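For illustration, the following sketch shows how such a bound-constrained problem is posed through NLopt's Python interface; the stand-in objective replaces the IGBEM/AVM evaluation of $\mathcal{J}$ and its gradient, and the trick of freezing a coordinate by collapsing its bounds (used in Section~\ref{s:reflector}) is included. This is a usage sketch, not our production driver:
\begin{verbatim}
import numpy as np
import nlopt

def evaluate(x):
    # Stand-in for the acoustic solver: returns J(x) and its gradient.
    return -np.sum((x - 0.1) ** 2), -2.0 * (x - 0.1)

def objective(x, grad):
    J, dJ = evaluate(x)
    if grad.size > 0:
        grad[:] = dJ                 # NLopt expects the gradient in-place
    return J

x0 = np.zeros(12)                    # e.g. 4 control points in 3D
lb, ub = x0 - 0.3, x0 + 0.3          # per-coordinate move limits
fixed = np.arange(12) % 3 != 2       # say only z-coordinates are designed
lb[fixed] = ub[fixed] = x0[fixed]    # freeze a coordinate via equal bounds

opt = nlopt.opt(nlopt.LD_SLSQP, x0.size)
opt.set_max_objective(objective)
opt.set_lower_bounds(lb)
opt.set_upper_bounds(ub)
opt.set_ftol_rel(1e-3)               # the tolerance quoted in the text
x_best = opt.optimize(x0)
print(opt.last_optimum_value())
\end{verbatim}
Replacing \texttt{nlopt.LD\_SLSQP} with \texttt{nlopt.LD\_MMA} switches to the MMA without further changes.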
\section{Introduction}
The formation and early evolution of massive stars is difficult to
study and, as a result, still not fully understood. This is partly
because sites of massive star formation are typically situated at
greater distances than nearby sites of low mass star formation. In
addition, massive stars form rapidly, deep within their natal
clouds. These factors make detailed study of the small scale
environment of young massive stars challenging. Consequently, our
understanding of how the star formation process depends on mass is
incomplete. To address this issue, it is important to characterise the
circumstellar environment of massive young stars and contrast this
to the case of low mass star formation.
\smallskip
Most studies on the comparison between low and high mass star
formation have focused on Herbig Ae/Be stars. These objects are
pre-main-sequence objects identified by the presence of an infrared
excess and emission lines and have a mass of $\sim2-15$~$M_{\odot}$
\citep{Herbig1960,The1994}. Herbig Ae/Be (HAe/Be) stars span the
transition from low to high stellar masses. Since they are optically
visible and relatively luminous, HAe/Be stars offer an opportunity to
study the circumstellar geometry of young stellar objects (YSOs) at
intermediate and high luminosities. As a result, there have been many
studies of the circumstellar environment of HAe/Be stars \citep[see
e.g.][]{Meeus2001, Natta2001, Vink2002, Millan-Gabet2001,
Eisner2004,Acke2005,
Monnier2005,Kraus2008lines,Kraus2008,Weigelt2011}. An extensive
overview of the structure of the inner discs of Herbig Ae/Be stars is
presented in \citet{DullemondandMonnier2010}. Here, we focus on the
differences between the circumstellar environments of Herbig Ae (HAe)
and Herbig Be (HBe) stars.
\smallskip
Based on the analysis
of the infrared excesses of such objects, \citet{Meeus2001} suggest
that the discs of HAe/Be stars can be split into two Groups: I \&
II. It has been proposed that these two groups represent different
disc geometries. Group I objects, objects with prominent mid infrared
excesses, are thought to possess flared discs. Group II objects, which
have less strong excesses in the mid infrared, are thought to possess
flatter disc geometries. While the majority of HAe stars are
classified as Group I objects, HBe stars generally belong to Group II
\citep{Acke2005}. Whether this dichotomy is due to the more rapid
evolution of luminous YSOs, a consequence of the dependence of disc
geometry on the temperature of the central star or a combination of
these and other factors is not clear.
\smallskip
Another difference between the discs around HAe and HBe objects was
discovered with long baseline interferometry in the infrared. In the
past decade, the circumstellar environments of many HAe/Be objects
have been spatially resolved by interferometric observations in the
near infrared \citep[see e.g.][]{Millan-Gabet1999,Millan-Gabet2001,
Eisner2003,Eisner2004,Eisner2007,Eisner2009,Eisner2010,Monnier2005,Monnier2006,Malbet2007,
Kraus2008,Tannirkulam2008, Benisty2010,Renard2010,Kreplin2012}. It
has been noted that low and intermediate luminosity HAe and HBe
objects follow a tight correlation between their size in the $K-$band
and their luminosity. However, it has also been found that the most
luminous HBe objects appear undersized based on this relationship
\citep{Monnier2005}. It has been proposed that this could be due to
the presence of an optically thick inner disc that shields the outer
dust disc from stellar radiation, allowing it to exist closer to the
central star \citep{Monnier2002}. This inner disc may represent an
optically thick accretion disc, and this hypothesis is consistent with
broad-band, long baseline interferometry in the infrared
\citep{Monnier2005,Vinkovic2007}. Furthermore, such discs can provide
a significant contribution to the NIR excess of their host star. Since
this emission originates from within the dust sublimation radius, it
will also contribute to the object appearing undersized. This
hypothesis has been shown to be valid in a few cases
\citep[see][]{Kraus2008}. However, since only a few high luminosity
HBe stars have been studied with high resolution, the structure and
evolution of their discs is still not fully understood. Indeed,
it is still not clear whether or not all luminous YSOs are
undersized \citep[see e.g.][]{DullemondandMonnier2010}.
\smallskip
The issue is further complicated by the fact that it can be difficult
to determine the evolutionary status of luminous emission line
objects. Several objects that may be Herbig Be stars could also be
evolved objects \citep[see e.g.][]{Kraus2009}. The uncertain
evolutionary status of such objects makes it difficult to obtain an
overview of the circumstellar environment of luminous YSOs.
\smallskip
To address these issues, we observed the Herbig Be candidate HD 85567
using the VLTI and AMBER. HD 85567 (CPD $-$60$^{^{\circ}}$1510, Hen
3-331) is a luminous B[e] object of uncertain evolutionary status. It
is listed as Herbig Be star with spectral type B5Ve by
\citet{The1994}. The object was also listed as a HAeB[e] star, i.e. a
Herbig Be star that exhibits forbidden line emission, by
\citet{Lamers1998}. Based on the object's infrared SED,
\citet{Malfait1998} classify HD 85567 as an object with double dust
disk. However, the lack of a significant dip between the near and mid
infrared flux indicates that the disc of HD 85567 is relatively
un-evolved. Recently, \citet{Verhoeff2012} included HD 85567 in a
sample of HAeBe stars observed in the $N-$band. These authors classify
the SED of HD 85567 as that of a type II object using the scheme of
\citet{Meeus2001}. This could indicate the presence of an optically
thick inner disc that prevents the outer disc from flaring.
\smallskip
The numerous studies mentioned above classify HD 85567 as a relatively
luminous ($L_{\star}\sim15\,000~L_{\odot}$) YSO. However, there is an
alternative scenario. \citet{Miro2001} note that HD 85567 does not
exhibit a prominent far infrared excess. These authors suggest that
this could indicate that this object is not a YSO and that the
presence of warm dust in its environment might be attributed to mass
loss driven by binary interactions. However, this was only a
conjecture as these authors do not find direct evidence of a
companion. HD 85567 was later classified as a binary by
\citet{DB2006}. These authors present a clear photo-centre shift of 29
milli-arcsec to the south over the H$\alpha$ line. Assuming that the
flux ratio in the optical is unity and that only one component
exhibits H$\alpha$ emission, this implies a separation of around 60
milli-arcsec (mas), approximately 100~AU at 1.5~kpc. However, it is
likely the optical flux ratio is not unity and thus that the
separation is many times larger. Therefore, it is not clear if the
companion detected is close enough to interact with the primary, thus
causing the B[e] behaviour of this object. As a result, at present, it
is still not clear whether HD 85567 is a bona fide YSO or an
interacting B[e] binary.
\smallskip
High resolution observations have the potential to address this issue
by examining whether HD 85567 has an additional companion at a smaller
separation. Furthermore, high resolution observations also provide an
opportunity to probe the structure of the object's disc. With this in
mind, we observed HD 85567 with the VLTI and AMBER. This paper
presents the observations and is structured as follows. The
observations are presented in Sect. \ref{SECT:OBS_AND_DATA}. The
results are presented in Sect. \ref{SECT:RESULTS} and discussed in
Sect. \ref{SECT:DISC}. The paper is concluded in
Sect. \ref{SECT:CONC}.
\section{Observations and data reduction}
\label{SECT:OBS_AND_DATA}
HD 85567 was observed with the VLTI and AMBER in the $K-$band using the medium spectral resolution mode. This provides a spectral resolution of $R \sim 1500$ or $\Delta v \sim 200~\mathrm{km\,s^{-1}}$ and a wavelength coverage of $\sim$2.15--2.45~$\mu$m. Observations were conducted using the UT1-UT2-UT3 telescopes on two occasions and the UT2-UT3-UT4 telescopes on two additional occasions. In all cases, FINITO \citep{FINITO} was used to provide fringe tracking. The observations span a period of approximately 11~months. In all cases, observations of HD 85567 ($H = 6.7$, $K = 5.8$) were conducted between observations of the calibrator objects HD 85313 ($H = 5.3$, $K = 5.1$) and HD 84177 ($H = 5.4$, $K = 5.3$). The projected baselines are displayed in Fig. \ref{FIG:UV} and a log of the observations is presented in Table \ref{TAB:LOG}.
\smallskip
The data were reduced in the standard fashion for AMBER data using the JMMC amdlib package\footnote{Version 3.0.3, available at http://www.jmmc.fr/amberdrs} \citep[see][]{Tat-amdlib,Chelli2009}. A variety of selection rates were used to choose frames of the interferograms for processing. Accurate visibilities require a low selection rate, as low S/Ns can bias the results, while precise differential phases can benefit from relatively high selection rates. The phases are not biased in the same way and thus their precision can be increased by increasing the number of frames selected. Calibration of the data, visibilities and closure phases, was performed using a transfer function constructed from the observations of the calibrators. The transfer functions were constructed assuming that the two calibrators, HD 84177 and HD 85313, can be described as uniform discs with radii given by $0.435 \pm 0.031$ \& $0.449 \pm 0.032$~mas respectively \citep{VLTI_Cal_1,VLTI_Cal_2}.
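Schematically, this calibration amounts to dividing the raw squared visibilities by a transfer function estimated from the calibrator measurements, with the calibrators modelled as uniform discs of the quoted diameters. A minimal sketch (in Python; the names are ours) is:
\begin{verbatim}
import numpy as np
from scipy.special import j1

MAS = np.pi / 180.0 / 3600.0 / 1000.0        # one mas in radians

def v2_uniform_disc(diam_mas, base_m, lam_m):
    # Squared visibility of a uniform disc of angular diameter diam_mas
    x = np.pi * diam_mas * MAS * base_m / lam_m
    return (2.0 * j1(x) / x) ** 2

def calibrate(v2_sci_raw, v2_cal_raw, cal_diam_mas, base_m, lam_m):
    # Divide out the transfer function estimated from the calibrator
    transfer = v2_cal_raw / v2_uniform_disc(cal_diam_mas, base_m, lam_m)
    return v2_sci_raw / transfer

# HD 84177 modelled as a 0.87 mas (2 x 0.435 mas radius) uniform disc:
print(v2_uniform_disc(0.87, 90.0, 2.2e-6))   # ~0.93, nearly unresolved
\end{verbatim}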
\begin{center}
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{./figures/uv_plot.eps}
\caption{Projected baselines of the observations of HD 85567.\label{FIG:UV}}
\end{center}
\end{figure}
\end{center}
\begin{center}
\begin{table*}
\begin{center}
\caption{Details of the series of AMBER observations.\label{TAB:LOG}}
\begin{tabular}{l l l l l l l l }
\hline
\hline
Object & Seeing & Coherence time & DIT$^1$ & Telescopes & Baselines & PAs \\
&(\arcsec) & (ms) & (ms) & & (m) & (${\degr}$) \\
\hline
\multicolumn{5}{l}{\bf{2012/04/06}}\\
HD 84177 & 0.81 & 2.98 & 500.0 & UT1-UT2-UT3 & 50.0/43.3/92.6 & 23.8/37.7/30.3\\
HD 85567 & 0.78 & 3.10 & 500.0 & UT1-UT2-UT3 & 48.4/42.1/89.8 & 28.5/43.1/35.2\\
HD 85313 & 0.78 & 3.11 & 500.0 & UT1-UT2-UT3 & 48.0/41.5/88.7 & 32.1/47.3/39.1\\
\multicolumn{5}{l}{\bf{2012/05/06}}\\
HD 84177 & 0.88 & 5.60 & 500.0 & UT2-UT3-UT4& 41.8/89.0/61.3 & 226.7/84.2/108.8 \\
HD 85567 & 1.06 & 4.95 & 500.0 & UT2-UT3-UT4& 40.5/88.3/61.8 & 231.7/89.8/113.6\\
HD 85313 & 0.55 & 8.52 & 500.0 & UT2-UT3-UT4& 37.6/85.5/62.5 & 242.9/104.4/127.9\\
\multicolumn{5}{l}{\bf{2012/12/29}}\\
HD 84177 & 0.55 & 4.11 & 500.0 & UT2-UT3-UT4& 44.5/88.4/56.1 & 205.7/58.2/83.4\\
HD 85567 & 0.52 & 4.35 & 500.0 & UT2-UT3-UT4& 44.0/88.9/56.9 & 208.2/60.7/85.3\\
HD 85313 & 0.70 & 3.31 & 500.0 & UT2-UT3-UT4& 43.8/89.2/58.0 & 211.9/65.6/90.3\\
\multicolumn{5}{l}{\bf{2013/03/04}}\\
HD 84177 & 0.86 & 4.33 & 500.0 & UT1-UT2-UT3&50.0/43.3/92.6 & 23.7/37.6/30.1\\
HD 85567 & 0.55 & 5.80 & 500.0 & UT1-UT2-UT3 & 48.9/42.6/90.8 & 25.8/40.0/32.4\\
HD 85313 & 0.54 & 6.64 & 500.0 & UT1-UT2-UT3& 48.7/42.2/90.1 & 28.9/43.6/35.7\\
\hline
\end{tabular}
\tablefoot{$^1$DIT represents the Detector Integration Time.}
\end{center}
\end{table*}
\end{center}
Comparison of the transfer functions and observations of HD 85567
(shown in Fig. \ref{FIG:TRANS}) reveals an apparent change in the
appearance of the target. In two cases (2012/04/06 and 2013/03/04),
the visibilities of HD 85567 are the same as the calibrators,
indicating a compact source. In the other two cases (2012/05/06 and
2012/12/29), the visibilities of HD 85567 are noticeably lower than
the calibrators, indicating an extended source. If this behaviour were
real, this would indicate that the environment of HD 85567 was compact
at the beginning of our observations, became extended and then
returned to its initial appearance.
\smallskip
The simplest explanation of this behaviour is that HD 85567 has a
previously undetected binary companion and the period of the system
is of the order of approximately 1 year. However, the lack of a
strong closure phase signal is not consistent with this
scenario. Therefore, we explored the possibility that an
observational bias is affecting the visibilities (this is discussed
in App. \ref{SECT:APP_B}).
\smallskip
It was found that when the target visibilities are significantly lower
than those of the calibrators, there is a marked difference in the
distributions of the ratio of the fringe signal to noise (S/N) of the
target and calibrator. The S/N associated with the fringes is the S/N
of the coherent flux, and is an important quantity to consider in the
reduction of AMBER data \citep{Tatulli2007}. We surmise that the
difference between the target and calibrator observations on these
dates is a bias caused by the fringe tracking performance degrading
when observing the target. This is supported by the FINITO data
recorded by the RMNREC software. The degradation of FINITO's
performance when observing the target was likely due to two
reasons. Firstly, the target is a magnitude fainter than the
calibrators in the $H-$band where fringe tracking is
conducted. Secondly, the science observations were associated with
poor seeing (especially on 2012-05-06). It is surmised that on the
dates in question, poor fringe tracking resulted in an artificially
lower fringe contrast for the observations of HD 85567, when compared
to the calibrator observations. Consequently, only the observations
when the fringe S/N distributions of the target and calibrator
observations are similar can be calibrated. In principle, the
observations of 2012-04-06 and 2013-03-04 offer reliable
calibration. However, since the fringe S/Ns of the observations
conducted 2012-04-06 are relatively low, the rest of the paper focuses
exclusively on the observations obtained on the date 2013-03-04. These
data were taken after AMBER's performance was improved in January 2013
and thus both the target and calibrator observations exhibit
relatively high fringe S/Ns (see Fig. \ref{FIG:FRAME_SNR}).
\section{Results}
\label{SECT:RESULTS}
The interferometric observations of HD 85567 conducted on 2013-03-04
are presented in Fig. \ref{FIG_V2_CP}. The time-averaged closure phase
is close to zero. We conclude that there is no compelling evidence
that the environment of HD 85567 is asymmetric on the scales probed by
these observations. The calibrated visibilities are relatively high,
$\sim0.7-0.8$. This indicates that the environment of HD 85567 is only
marginally resolved. To determine the characteristic size of the
continuum emission region, the calibrated visibilities were fit with a
geometric ring model. This is discussed in Sect. \ref{SECT:VIS_FITS}.
\smallskip
The $K-$band spectrum of HD 85567 exhibits Br$\gamma$ and CO first
overtone bandhead emission. The differential visibilities and phases
across the Br$\gamma$ and CO overtone emission are presented in
Fig. \ref{FIG:DIFF_V2_AND_PHASE}. In both cases, no conspicuous
signature is observed. This suggests that the distributions of the
continuum, Br$\gamma$ and CO overtone emission are similar. However,
it is possible that a slight change occurs in the differential phases
associated with the longest baseline over the CO bandhead
emission. This is discussed in more detail in Sect. \ref{SECT:DIFF}.
\smallskip
\begin{center}
\begin{figure*}
\begin{center}
\begin{tabular}{l l}
\includegraphics[width=0.43\textwidth]{./figures/cp_plot.eps} &
\includegraphics[width=0.45\textwidth]{./figures/vsquared_plot_3.eps} \\
\end{tabular}
\caption{Time-averaged closure phase and squared visibility
observations of HD 85567. The panel on the left presents the
closure phases. The closure phase error bars shown represent
the mean error in the measurements. A frame
selection of 80 per cent was used. In the panel on the right,
we present the squared visibilities. A frame selection of 20
per cent was used. The errors represent the mean error in the
calibrated visibilities. The solid line is the visibility
profile of a ring with a radius of 0.69~mas, 1.0~AU at
1.5~kpc. The long-dashed line corresponds to a ring with a
radius of 0.56~mas (0.8~AU) with the addition of a background
that accounts for 5 per cent of the total
flux.\label{FIG_V2_CP}}
\end{center}
\end{figure*}
\end{center}
\begin{center}
\begin{figure*}
\begin{center}
\begin{tabular}{l l}
\includegraphics[width=0.35\textwidth]{/aux/pc20217a/hwwright/HD_85567_AMBER_mark_2/Run_B/089.C-0220-B/HD_85567/2013-03-04/20_percent/2013-03-04_3.0.3_OIDATA_AVG/diff_vsquared.eps} &
\includegraphics[width=0.35\textwidth]{/aux/pc20217a/hwwright/HD_85567_AMBER_mark_2/Run_B/089.C-0220-B/HD_85567/2013-03-04/20_percent/2013-03-04_3.0.3_OIDATA_AVG/diff_vsquared_2.eps} \\
\includegraphics[width=0.35\textwidth]{/aux/pc20217a/hwwright/HD_85567_AMBER_mark_2/Run_B/089.C-0220-B/HD_85567/2013-03-04/80_percent/2013-03-04_3.0.3_OIDATA_AVG/dp.eps} &
\includegraphics[width=0.35\textwidth]{/aux/pc20217a/hwwright/HD_85567_AMBER_mark_2/Run_B/089.C-0220-B/HD_85567/2013-03-04/80_percent/2013-03-04_3.0.3_OIDATA_AVG/dp_2.eps} \\
\end{tabular}
\caption{Time-averaged differential squared visibility (top) and differential phases (bottom) around the Br$\gamma$ line (left) and CO first overtone bandhead emission (right). Error bars shown represent statistical errors on the mean. A frame selection of 20 per cent was used for the visibilities and a selection rate of 80 per cent was employed to obtain the averaged phases. The individual AMBER files were merged before frame selection was conducted. The dashed line alongside the CO emission represents the spectrum before telluric correction. \label{FIG:DIFF_V2_AND_PHASE}}
\end{center}
\end{figure*}
\end{center}
\subsection{Ring model}
\label{SECT:VIS_FITS}
To fit the visibilities, the ratio of the infrared excess and
photospheric emission was determined. This was achieved by analysing
the SED of HD 85567, which was constructed by taking data from the
literature and using the VOSA utility \citep{VOSA}. The SED contains
data from 2MASS \citep{2MASS_ref}, AKARI
\citep{Ishihara2010,Murakami2007,Onaka2007}, DENIS \citep{DENIS}, IRAS
\citep{IRASPS,IRAS_2}, Tycho-2 \citep{TYCHO}, WISE
\citep{Wright2010,WISE}, \citet{deWinter2001}, \citet{Klare77},
\citep{Schild83}, \citet{Miro2001}, \citet{Mer1994} and
\citet{Verhoeff2012}. The data were de-reddened using the extinction
relationship of \citet{Cardelli1989} and values of $A_V = 1.1, R_V = 3.1$ (see
Table \ref{TAB:PAR}). The final SED is shown in Fig. \ref{FIG:SED}.
\smallskip
To determine the ratio of the stellar and circumstellar emission, we
compared the observed SED to the stellar flux expected for the
spectral type of HD 85567. The stellar parameters taken from the
literature, including spectral type and effective temperature, are
listed in Table \ref{TAB:PAR}. Assuming an effective temperature of
$T_{\mathrm{eff}} = 19\,000$~K, the ratio of the excess to stellar
emission in the $K-$band is 9.0.
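For reference, the de-reddening and excess-ratio computation can be sketched as follows; the snippet uses the infrared regime of the \citet{Cardelli1989} parameterisation (the full law covers other wavelength ranges), and the flux values are illustrative, chosen only so that the quoted ratio is reproduced:
\begin{verbatim}
import numpy as np

def A_lambda(lam_um, A_V=1.1, R_V=3.1):
    # Cardelli et al. (1989) extinction, infrared regime
    # (roughly 0.9--3.3 micron): a(x)=0.574 x^1.61, b(x)=-0.527 x^1.61
    x = 1.0 / lam_um
    return A_V * (0.574 - 0.527 / R_V) * x ** 1.61

def deredden(flux, lam_um, A_V=1.1):
    return flux * 10.0 ** (0.4 * A_lambda(lam_um, A_V))

print(A_lambda(2.19))             # K band: ~0.13 mag for A_V = 1.1
f_star = 1.0                      # photospheric K flux (arbitrary units)
f_obs = deredden(8.9, 2.19)       # observed K flux, same units
print((f_obs - f_star) / f_star)  # excess-to-star ratio, ~9.0
\end{verbatim}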
\begin{center}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{./figures/sed_plot.eps}
\caption{HD 85567's SED. The data were de-reddened using an
$A_V=1.1$, and are shown with the predicted SED for the parameters $T_{\rm{eff}}=19\,000$~K and $\log g=3.5$. \label{FIG:SED}}
\end{center}
\end{figure}
\end{center}
\begin{center}
\begin{table}
\caption{Adopted stellar parameters.\label{TAB:PAR}}
\begin{center}
\begin{tabular}{l l l}
\hline
\hline
Parameter & Value & Ref.\\
\hline
Spec. Typ. & B2& M01\\
$T_{\rm{eff}}$ & 19\,000~K& M01\\
$d$ & $1.5\pm0.5$~kpc& M01\\
$R_{\star}$ & $9\pm2$~$R_{\odot}$& V12\\
$A_{\rm{V}}$& $1.1\pm0.1$& V12 \\
$\log L_{\star}$& $4.17\pm0.16$~$L_{\odot}$& V12 \\
$M_{\star}$ &$12 \pm 2$~$M_{\odot}$ & V12\\
\hline
\end{tabular}
\tablefoot{M01: \citet{Miro2001}, V12: \citet{Verhoeff2012}}
\end{center}
\end{table}
\end{center}
The visibilities were then fit with a model of a ring, which was
assumed to be face-on as the data do not show PA-related V2 variations
indicative of an asymmetric object. The best fit ring radius was found
to be given by $r = 0.69_{-0.23}^{+0.20}$~mas, which resulted in a
minimum chi squared value of $\chi^2_{\mathrm{R}} = 3.59$ (using the
rms of the visibility measurements). It was found that the fit could
be improved by adding a resolved background component. The minimum
contribution from a totally resolved background flux that resulted in
a $\chi^2 < 2$ was determined to be approximately 5 per cent of the
total flux. The resulting best fit ring radius was $r =
0.56_{-0.20}^{+0.16}$~mas, which resulted in a minimum chi squared
value of $\chi^2_{\mathrm{R}} = 1.68$. The best fit visibility
distributions are displayed in the right panel of
Fig. \ref{FIG_V2_CP}.
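The model itself is simple enough to sketch explicitly. Assuming an unresolved star, a face-on thin ring and an optional fully resolved background, with the excess-to-star flux ratio of 9.0 derived above, the squared visibility and a least-squares fit for the ring radius can be written as (Python; the names are ours):
\begin{verbatim}
import numpy as np
from scipy.special import j0
from scipy.optimize import least_squares

MAS = np.pi / 180.0 / 3600.0 / 1000.0        # one mas in radians

def v2_model(theta_mas, b_over_lam, f_star=1.0, f_ring=9.0, f_bg=0.0):
    # Unresolved star (V=1) + thin ring of angular radius theta_mas
    # (V=J0) + fully resolved background (V=0) with flux fraction f_bg
    f_tot = (f_star + f_ring) / (1.0 - f_bg)
    v = (f_star
         + f_ring * j0(2 * np.pi * theta_mas * MAS * b_over_lam)) / f_tot
    return v ** 2

def fit_ring(b_over_lam, v2_obs, f_bg=0.0):
    resid = lambda p: v2_model(p[0], b_over_lam, f_bg=f_bg) - v2_obs
    return least_squares(resid, x0=[0.7],
                         bounds=([0.01], [5.0])).x[0]

# Synthetic check: data generated from a 0.69 mas ring are recovered
b_over_lam = np.linspace(43.0, 91.0, 6) / 2.2e-6
print(fit_ring(b_over_lam, v2_model(0.69, b_over_lam)))   # ~0.69
\end{verbatim}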
\subsection{Differential visibilities and phases}
\label{SECT:DIFF}
\smallskip
The differential visibilities across the Br$\gamma$ line and CO bandhead emission are presented in Fig. \ref{FIG:DIFF_V2_AND_PHASE}. No clear change in the visibilities is observed across the Br$\gamma$ line. There are some suggestions of an increase in the visibilities over the line, indicating a compact line emitting region. However, these are not considered significant given the lack of consistency of the position of these increases with respect to the line centre. Since baselines with similar lengths and position angles (UT1-UT2 and UT2-UT3) exhibit different changes in visibility in the approximate region of the Br$\gamma$ line, the features observed are considered artifacts. The differential visibilities over the CO bandhead emission exhibit several artificial changes across telluric absorption lines. These make it challenging to detect changes in the visibilities across the CO bandhead emission.
\smallskip
The differential phases across the Br$\gamma$ and CO bandhead emission
are also presented in Fig. \ref{FIG:DIFF_V2_AND_PHASE}. In the case of
one baseline (UT1-UT2), it appears that there is a change in phase
across the Br$\gamma$ line. The behaviour of the phase variation with
wavelength, i.e. a negative change on the blue side of the line and a
positive change over the red side, is similar to that expected in the
case of line emission originating in a rotating medium. However, since
the phases associated with the similar UT2-UT3 baseline (49 and 43~m
at PAs of $26$ and $40^{\circ}$ respectively) do not exhibit this
behaviour, it is suggested that the phase signal discussed is also an
artifact. In general, no prominent offset is observed in the
differential phases across the CO bandhead. However, since the
observed spectrum features several CO overtone transitions, we
could increase the precision of the differential phase
observations by co-adding the data across the individual
transitions. This was done using the data associated with the longest
baseline (UT3-UT1, 91~m) as these observations access the smallest
scales. The results are discussed in the following section
(\ref{SECT:COADD}).
\subsubsection{Photo-centre offset over the CO bandhead emission}
\label{SECT:COADD}
To increase the precision of the differential phase observations
obtained with the UT3-UT1 baseline, the phases across the first 3
bandhead transitions were averaged. The photo-centre offset associated
with the resultant differential phase signal was calculated using $p = -\frac{\phi}{2\pi}\cdot\frac{\lambda}{B}$, where $B$ is the baseline length and $p$ represents the projection of the 2D photo-centre along the orientation of the baseline. The result is shown in Fig. \ref{FIG:CO_PHOTO}. The observations
are consistent with a small offset corresponding to approximately
10~$\mu$as occurring over the bandhead profile, while offsets
larger than approximately 10~$\mu$as can be excluded. Whether this can
be used to constrain the location of the CO bandhead emission was then
explored using the model developed in \citet{MeCO,me_hd_327083_2}.
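This conversion is simple to evaluate directly (a sketch; the adopted wavelength of 2.3~$\mu$m and the example phase are illustrative assumptions):
\begin{verbatim}
import numpy as np

RAD2UAS = 180.0 / np.pi * 3600.0e6   # radians -> micro-arcsec

def photocentre_uas(phi_deg, baseline_m=91.0, lam=2.3e-6):
    # p = -(phi / 2 pi) * (lambda / B), in micro-arcsec
    return -np.deg2rad(phi_deg) / (2.0 * np.pi) * lam / baseline_m * RAD2UAS

# on the 91 m baseline at 2.3 microns, an offset of ~10 micro-arcsec
# corresponds to a differential phase of only ~0.7 degrees
print(photocentre_uas(-0.7))   # ~ +10.1
\end{verbatim}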
\smallskip
To reduce the running time of the model, it is assumed that the
average photo-centre shift associated with the first three CO overtone
transitions ($2\rightarrow0$, $3\rightarrow1$ and $4\rightarrow2$)
could be modelled as the shift over the first bandhead
($2\rightarrow0$). This is a simplification but ultimately, the
emission of the different bandheads will originate from the same
location. Based on the excitation requirements of the different
transitions, this approach will slightly over-estimate the average
offset. However, given the 0.5~kpc uncertainty in the distance to HD
85567, this was not considered significant.
\smallskip
To calculate the photo-centre offsets associated with CO bandhead
emission from a circumstellar disc, we used the model presented in
\citet{me_hd_327083_2} and the stellar parameters presented in Table
\ref{TAB:PAR}. The source of the CO emission was represented by a
Keplerian disc with power laws describing the radial dependence of the
excitation temperature and surface number density. The exponents of
the respective power laws were set to $p = -0.5$ and $q =
-1.5$. Finally, the inclination was set to $i = 35^{\circ}$, which
is based on fits to the CO bandhead emission presented
in \citet{Ilee_phd}. It is noted that this value is relatively uncertain as it was derived from a model fit to spectra of moderate, rather than high, spectral resolution. Nonetheless, it serves as a representative value and is sufficient for our purposes. Once the images of the disc at various wavelengths had been calculated, the associated offset was determined from the photo-centre of each image.
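The last step amounts to taking the intensity-weighted first moment of each synthetic image, along these lines (a schematic fragment; the array and coordinate names are placeholders):
\begin{verbatim}
import numpy as np

def photocentre_2d(img, x_mas, y_mas):
    # intensity-weighted mean position of a synthetic disc image;
    # x_mas and y_mas are coordinate grids with the shape of img
    w = img.sum()
    return (x_mas * img).sum() / w, (y_mas * img).sum() / w
\end{verbatim}
The projection of this 2D photo-centre along the baseline orientation is then compared with the offsets derived from the differential phases.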
\smallskip
We calculated the photo-centre offsets for two
models. The first has a relatively small inner radius,
5~$R_{\star}$, and a compact outer radius of 1~AU, as predicted by
the scenario of an optically thick gas disc interior to the dust
sublimation radius. The second model featured a larger inner radius,
10~$R_{\star}$, and a more extended outer radius of 4~AU. This outer
radius corresponds to the scenario of an optically thin inner disc
and a dust sublimation radius that reproduces the size luminosity
relationship of intermediate and low luminosity objects.
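For reference, the angular scales quoted in Fig. \ref{FIG:CO_PHOTO} follow from the small-angle relation (1~AU at 1~kpc subtends 1~mas):
\begin{verbatim}
def au_to_mas(r_au, d_kpc=1.5):
    # 1 AU at 1 kpc subtends 1 mas (small-angle approximation)
    return r_au / d_kpc

for r_au in (0.2, 1.0, 4.0):   # model radii in AU
    print(r_au, "AU ->", round(au_to_mas(r_au), 2), "mas")
# 0.2 AU -> 0.13 mas, 1.0 AU -> 0.67 mas, 4.0 AU -> 2.67 mas
\end{verbatim}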
\smallskip
The model photo-centre offsets are displayed in
Fig. \ref{FIG:CO_PHOTO}. Admittedly, the significance of the slight
offset observed is low: the tentatively identified signature is only
approximately 4 times the continuum rms. However, it is evident that
the data are consistent with the offset associated with the smaller
disc. Furthermore, the data favour the smaller disc over the larger
disc as the more extended disc results in an offset that is larger
than that observed.
\begin{center}
\begin{figure}
\begin{center}
\includegraphics[width=0.25\textwidth]{./figures/dp_4.eps}
\caption{Photo-centre offsets associated with the CO bandhead emission. This figure was constructed by co-adding the time averaged differential phase observations obtained on the date 2013-03-04 using the UT1-UT3 baseline across the first 3 CO bandheads. The uncertainty in the differential phase is represented by the rms of the continuum measurements. The smaller model offset (solid line) corresponds to an inner disc radius of 5 stellar radii (0.2~AU) and an outer radius of 1.0~AU (0.7~mas). The larger offset (dashed line) was calculated for an inner radius of 10 stellar radii (0.4~AU) and an outer radius of 4.0~AU (2.7~mas, sizes calculated assuming a distance of 1.5~kpc).\label{FIG:CO_PHOTO}}
\end{center}
\end{figure}
\end{center}
\section{Discussion}
\label{SECT:DISC}
This paper presents new VLTI/AMBER observations of the B[e] star HD
85567. Two scenarios have been proposed to explain the B[e] behaviour
of this object. One scenario that explains the object's infrared
excess and line emission is that it is a YSO with a circumstellar
accretion disc. The alternative scenario is that HD 85567 is an
interacting binary with circumstellar material that has been deposited
through mass loss driven by binary interactions. Here, we discuss our
findings in the context of these two scenarios. We also briefly discuss the
structure of HD 85567's circumstellar material and consider how this
is evolving.
\smallskip
We note that our moderate spectral resolution observations reveal that
HD 85567 exhibits $^{12}$CO bandhead emission, but not $^{13}$CO
bandhead emission. In principle, the fact that the circumstellar
material of HD 85567 is not significantly enriched in $^{13}$CO
favours the YSO scenario \citep{Kraus2009}. However, we note that
while the spectrum excludes ratios of $^{12}$CO/$^{13}$CO below approximately 15, this is not sufficient to place strong constraints
on the evolutionary status of HD 85567 \citep{Kraus2009}.
\smallskip
The closure phase observations provide an additional means to
investigate the interacting binary hypothesis. HD 85567 has already
been shown to be a binary, although the estimated minimum separation
is $\sim$100~AU \citep[and likely many times this,][]{DB2006}. Since
this companion may be too distant to induce mass loss from the
primary, we used our high resolution observations to investigate the
hypothesis that HD 85567 has an additional, closer companion within
the field of view of the UT telescopes (60~mas, $\sim$100~AU). Since
no closure phase signature is detected, the observations do not reveal
an additional close binary companion. For completeness, we note that
the $u,v$ coverage of the observations discussed is relatively
linear. In principle, a companion could escape detection if it was
aligned perpendicularly to the projected baselines. However, the
additional closure phases associated with the nights of degraded
FINITO performance are also consistent with zero, and thus
indicate a symmetric source. This is a robust result as a bias in
visibilities will not affect closure phase
measurements. Therefore, the data support the conclusion that HD
85567 does not appear to have a close binary companion, although a
faint companion could still escape detection. We now investigate
whether the data are consistent with the hypothesis that HD 85567 is a
YSO.
\smallskip
The observed visibilities are relatively high and can be reasonably
reproduced using a point source and a ring model. We report that the
apparent radius of the $K-$band continuum emitting region of HD 85567
is $r = 0.56_{-0.20}^{+0.16}$~mas ($\sim$$0.8\pm0.3$~AU). Based on the
luminosity of HD 85567 and the predicted dust sublimation radius when
the inner disc is optically thin, the expected ring radius is
4.2~AU. The measured radius is therefore considerably smaller than expected based on
the size luminosity relationship exhibited by YSOs of low and
intermediate luminosity \citep{Monnier2005}. This is a robust result
as it is most likely independent of a possible bias in the calibrated
visibilities due to the use of FINITO. As discussed previously, FINITO
can bias the target visibilities to low values. Therefore, if the data
are biased, the true size of HD 85567 may be smaller, but not
larger. Furthermore, HD 85567 appears undersized even when allowing
for the uncertainties in its distance and luminosity (both
approximately 30 percent). The undersized appearance of HD 85567 is
similar to the case of luminous YSOs. For example, the Herbig Be star
V1685 Cyg has a luminosity of 21$\,$400~$L_{\odot}$ and a $K-$band
ring fit radius of $2.15^{+0.23}_{-0.18}$~AU, making it undersized by
nearly 3~AU \citep{Monnier2005}. This was also reported to be the case
for the early B type Herbig Be star MWC 297
\citep{Weigelt2011}. Therefore, the size of HD 85567 supports the
hypothesis that this object is also a YSO.
\smallskip
It has been proposed that the reason for the small sizes of luminous
YSOs is that their inner discs are optically thick, shielding the inner
rim of the dust disc from stellar radiation. This can allow the dust
sublimation radius to be located closer to the central star than would
otherwise be the case. The optically thick inner gas disc is associated
with active accretion discs interior to the dust sublimation radius
\citep{Eisner2004,Monnier2005}. Here we explore whether this scenario
is applicable to HD 85567. By considering the combined effect of
stellar irradiation and viscous heating, \citet{Millan-Gabet2001}
present the temperature of an accretion disc as a function of
radius. The equations used are the following:
\begin{equation}
T(r) = (T_{\mathrm{rep}}^4 + T_{\mathrm{acc}}^4)^{\frac{1}{4}}
\end{equation}
in combination with
\begin{equation}
T_{\mathrm{rep}} = T_{\star}\left(\frac{1}{3}\right)^{\frac{1}{4}}\left(\frac{R_{\star}}{r}\right)^{\frac{3}{4}}
\end{equation}
and
\begin{equation}
T_{\mathrm{acc}} = \left(\frac{3GM_{\star}\dot{M}_{\mathrm{acc}}}{8\pi\sigma r^{3}}\right)^{\frac{1}{4}}
\end{equation}
where $\sigma$ is the Stefan-Boltzmann constant, $G$ is the gravitational constant and $\dot{M}_{\mathrm{acc}}$ is the accretion rate. These equations can be used to crudely estimate the expected size of accretion discs by determining the radius where the temperature falls to 1500~K, i.e. the approximate dust sublimation radius.
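As a check, solving $T(r) = 1500$~K with the parameters of Table \ref{TAB:PAR} takes only a few lines (a sketch in cgs units; the bracketing interval is arbitrary and, as in the equations above, the inner boundary term of the standard viscous-disc solution is neglected):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

G, SIGMA = 6.674e-8, 5.670e-5   # cgs
RSUN, MSUN = 6.957e10, 1.989e33
AU, YR = 1.496e13, 3.156e7

def disc_T(r, Tstar=19e3, Rstar=9 * RSUN,
           Mstar=12 * MSUN, Macc=1e-6 * MSUN / YR):
    Trep = Tstar * (1.0 / 3.0) ** 0.25 * (Rstar / r) ** 0.75
    Tacc = (3.0 * G * Mstar * Macc
            / (8.0 * np.pi * SIGMA * r ** 3)) ** 0.25
    return (Trep ** 4 + Tacc ** 4) ** 0.25

# radius at which the disc cools to ~1500 K (dust sublimation)
r_sub = brentq(lambda r: disc_T(r) - 1500.0, 1e11, 1e15)
print(r_sub / AU)   # ~0.9 AU
\end{verbatim}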
\smallskip
It has been estimated that HD 85567 accretes material at a rate of
approximately $1 \times 10^{-6}~M_{\odot}\,{\mathrm{yr^{-1}}}$
\citep[based on the object's Br$\gamma$ emission,][]{Ilee_phd}. This
should be sufficient to ensure an optically thick inner disc
\citep[see e.g.][]{Weigelt2011}. Using this accretion rate and the
parameters in Table \ref{TAB:PAR}, we obtain a predicted dust
sublimation radius of 0.9~AU. This is consistent with the best fitting
ring radius of $0.8 \pm 0.3$~AU. Therefore, it is certainly plausible
that the size of HD 85567 in the $K-$band reflects the presence of an
optically thick disc interior to the dust sublimation radius. This is
supported by the finding that a gaseous disc 1~AU in size is
consistent with the differential phase observations over the CO
bandhead emission. Gaseous discs with radii in excess of 4~AU, the
location of the dust sublimation radius in the case of an optically
thin inner disc, do not reproduce the data well.
\smallskip
We conclude that the observations are consistent with the
hypothesis that HD 85567 is a YSO while they do not support the
interacting, evolved binary scenario. We find that HD 85567 appears
undersized according to the size luminosity relationship of YSOs and
demonstrate that this could be due to the presence of an optically
thick gaseous disc interior to the dust sublimation radius. Finally,
we note that the presence of an optically thick inner disc and the
absence of a far infrared excess suggest that HD 85567 is
photo-evaporating its disc from the outside, supporting the hypothesis
that this is the fate of discs around Herbig Be stars
\citep{Alonso-Albi2009,Verhoeff2012}.
\section{Conclusion}
\label{SECT:CONC}
This paper presents new VLTI/AMBER observations of the enigmatic B[e]
object HD 85567. Here we reiterate the salient results.
\smallskip
The object's environment appears compact and symmetric on scales of a
few to 100~AU. This does not support the hypothesis that the object is an
evolved, interacting binary. The apparent radius of HD 85567's
environment in the $K-$band is found to be $r =
0.56_{-0.20}^{+0.16}$~mas ($\sim$$0.8\pm0.3$~AU). This makes the
object undersized according to the size luminosity relationship based
on YSOs of low and intermediate luminosity. This has previously been
found to be the case for luminous YSOs and thus the size of HD 85567 is
consistent with the hypothesis that it is a YSO.
\smallskip
We then investigate why HD 85567 appears undersized according to the
size luminosity relationship of YSOs. The size of the $K-$band
emitting region is congruous with the predicted location of the dust
sublimation radius assuming an accretion disc that is optically thick in the
inner regions. Furthermore, the differential phase observations over
the CO bandhead are also consistent with a compact ($r \sim 1$~AU)
gaseous disc interior to the dust disc. More extended discs do not
reproduce the data as well.
\smallskip
To conclude, the data support the hypothesis that HD 85567 appears
undersized according to the YSO size luminosity relationship due to
the presence of an optically thick gaseous disc interior to the dust
sublimation radius. This indicates that HD 85567 is indeed a YSO. If
this is the case, the gaseous inner disc may be identified as an
accretion disc. The presence of an optically thick inner disc and the
absence of a far infrared excess suggest that HD 85567 is
photo-evaporating its disc from the outer edge.
\begin{acknowledgements}
HEW acknowledges the financial support of the Max Planck
Society. This research has made use of the \texttt{AMBER data
reduction package} of the Jean-Marie Mariotti Center. This
publication also makes use of VOSA, developed under the Spanish
Virtual Observatory project supported from the Spanish MICINN
through grant AyA2008-02156.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Cygnus X-1 (Cyg X-1) is probably the most widely monitored microquasar (MQ) in the Galaxy. The binary system, located at $1.86~\rm{kpc}$ from Earth (Reid et al. 2011), is composed of a high-mass star (spectral type O9.7 Iab and mass
\mbox{$\sim 20~M_{\odot}$}) and a black hole of \mbox{$14.8~M_{\odot}$} (Orosz et al. 2011).
A very complete broadband spectral energy distribution (SED) is available for Cyg X-1 in the hard state (for a compilation of the data see Zdziarski et al. 2014), including gamma-ray detections
and upper limits at GeV energies and above (Albert et al. 2007, Malyshev et al. 2013). The origin of the soft gamma rays ($\sim$ MeV), in particular, is still unknown:
there is no agreement about whether they originate in the jets or somewhere else in the accretion flow. This is one of the issues we assess in this work.
Jets in Cyg X-1 have been resolved in the radio band (Stirling et al. 2001). The outflow is extremely collimated and mildly relativistic. The extension and geometry of the radio emission region may provide complementary, useful information about the conditions in the jets, such as the size and location of the acceleration region of relativistic particles, and the magnetic field.
Finally, while polarization data at low energies have been long available for Cyg X-1 (see Russell et al. 2013 for a compilation), high levels of polarization in the X rays/soft gamma rays have been measured recently for the first time (Laurent et al. 2011, Rodr\'iguez et al. 2015). Polarization studies of the jet radiation can help settle the issue of the origin of the MeV tail.
In this work, we combine these three different sources of data (non-thermal SED, radio images and polarization measurements) to obtain information about the conditions in the jets of Cyg X-1. In Sections \ref{radiative} and \ref{maps}, we briefly review the radiative model - developed in detail in Pepe et al. (2015) - and its application to the generation of synthetic radio maps. In Section \ref{polarization}, we
present our preliminary results for the degree of polarization of the jet synchrotron emission. Finally, in Section \ref{Conclusions} we discuss our conclusions and perspectives for future work.
\section{Radiative processes}
\label{radiative}
In this section we describe the modelling of the radiative output of Cyg X-1. The reader is referred to Pepe et al. (2015) for details. We adopt a conical
shape for the jet (see Fig. \ref{fig:sketch}). The jet base is located at a distance $z_0 = 1.1\times10^8$~cm from the compact object. Relativistic particles are injected in a region that
starts at $z_{\rm{acc}}= 2.2\times10^8$~cm and extends up to $z_{\rm{max}}= 8.6\times10^{11}$~cm. The jet ends (for computing purposes) at $z_{\rm{end}}= 1.0\times10^{15}$~cm. The magnetic field at the base, $B_0 = 5.0\times10^7$~G, is estimated from equipartition between the magnetic and kinetic energies and it decays as $B(z) = B_0 (z_0/z)$. Given the total power of the jet $L_{\rm{jet}}$, a power
\begin{equation}
L_{\rm{rel}} = q_{\rm{rel}} L_{\rm{jet}} \qquad q_{\rm{rel}} = 0.1
\end{equation}
\noindent is transferred to the relativistic particles in the acceleration region, which, in turn, is distributed between electrons and protons as
\begin{equation}
L_{\rm{p}} = a L_{\rm{e}} \qquad a = 0.07.
\end{equation}
\begin{figure}
\begin{center}
\hspace{0.25cm}
\includegraphics[width = 0.5\textwidth, keepaspectratio]{./Pepe_f1.eps}
\caption{Basic sketch of the binary and the jet (not to scale). }
\label{fig:sketch}
\end{center}
\end{figure}
Relativistic particles are injected in the jet according to the injection function
\begin{equation}
Q(E,z) = Q_0\, E^{-\Gamma}\, \exp{\left[-E/E_{\rm{max}}(z)\right]}.
\end{equation}
\noindent Here, $Q_0$ is a normalization constant obtained from the total power injected in each particle species and $\Gamma$ is the spectral index. The injection function is different from zero only in
the region $z_{\rm acc}\leq z \leq z_{\rm max}$ and for $E\geq E_{\rm{min}}$. The cutoff energy $E_{\rm{max}}$ is calculated equating the total particle energy loss rate and the acceleration
rate (e.g. Aharonian 2004); we adopt $E_{\rm{min}} = 120\, m_0 c^2$, where $m_0$ is the rest mass of the particles.
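A direct transcription of this injection function reads as follows (a sketch; \texttt{Emax\_fn} stands for the position-dependent cutoff obtained from the loss/acceleration balance, and $Q_0$ is fixed by the powers $L_{\rm{e}}$ and $L_{\rm{p}}$ given above):
\begin{verbatim}
import numpy as np

def injection(E, z, Q0, Gamma, Emax_fn, Emin,
              z_acc=2.2e8, z_max=8.6e11):
    # Q(E, z) = Q0 E^-Gamma exp(-E / Emax(z)) inside the
    # acceleration region and above Emin; zero elsewhere
    inside = (z >= z_acc) & (z <= z_max) & (E >= Emin)
    q = Q0 * E ** (-Gamma) * np.exp(-E / Emax_fn(z))
    return np.where(inside, q, 0.0)
\end{verbatim}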
Radiative cooling is calculated for all particles. Leptons cool via synchrotron, relativistic Bremsstrahlung and
inverse Compton. For this last process we consider three different photon targets: electron synchrotron radiation (SSC), the radiation field of the companion star (IC-Star) and the X-ray photons from the accretion disk (IC-Disk). Protons cool via synchrotron, proton-proton ($pp$) and proton-photon ($p\gamma$) interactions. In the
case of $pp$ collisions the targets are the thermal protons in the jet and in the stellar wind ($pp$-Star), while the photons for $p\gamma$ interactions are those of the radiation field of the companion star. In the case of electrons, synchrotron losses are dominant until almost the end of the jet, while in the case of protons adiabatic losses govern the cooling nearly all along the jet.
Once the cooling rates and the injection functions are calculated, we solve the steady-state transport equation for the particle distributions $N(E,z)$,
\begin{equation}
v_{\rm{conv}} \frac{\partial N}{\partial z} + \frac{\partial}{\partial E} \left( \left.\frac{dE}{dt}\right|_{\rm{tot}} N \right) = Q(E,z),
\label{eq:transport}
\end{equation}
\noindent for both protons and electrons. The main feature of this equation is that it accounts for the transport of particles with a convection velocity on the order of the jet bulk velocity, $v_{\rm{conv}} \approx v_{\rm{jet}}$.
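Equation (\ref{eq:transport}) can be integrated by marching along $z$ with an upwind discretisation in energy, since the losses advect particles towards lower $E$. The following sketch illustrates the idea (schematic: the step sizes must satisfy the usual CFL-type stability condition, and only synchrotron losses are included in the example loss term):
\begin{verbatim}
import numpy as np

SIGMA_T, C, MEC2 = 6.652e-25, 3.0e10, 8.187e-7   # cgs

def edot_syn(E, B):
    # synchrotron losses: -(4/3) sigma_T c U_B (E / me c^2)^2
    u_B = B ** 2 / (8.0 * np.pi)
    return -(4.0 / 3.0) * SIGMA_T * C * u_B * (E / MEC2) ** 2

def solve_transport(E, z, Q, edot, v_conv):
    # explicit march of  v dN/dz + d(edot N)/dE = Q  along z;
    # upwind in E because edot < 0 (information flows from high E)
    N = np.zeros((len(z), len(E)))
    for k in range(len(z) - 1):
        F = edot(E, z[k]) * N[k]
        dFdE = np.zeros_like(F)
        dFdE[:-1] = (F[1:] - F[:-1]) / (E[1:] - E[:-1])
        N[k + 1] = N[k] + (z[k + 1] - z[k]) / v_conv * (Q(E, z[k]) - dFdE)
    return N

# e.g. edot = lambda E, z: edot_syn(E, 5.0e7 * 1.1e8 / z)  # B(z) = B0 z0/z
\end{verbatim}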
The resulting electron and proton distributions are shown in Fig. \ref{fig:distribution}. Energetic protons can
be found well outside the acceleration region, but electrons cool almost immediately after they leave the acceleration region.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width = 0.48\textwidth, keepaspectratio]{./Pepe_f2.eps}
\includegraphics[width = 0.48\textwidth, keepaspectratio]{./Pepe_f3.eps}
\caption{Steady-state distribution of electrons (left) and protons (right). }
\label{fig:distribution}
\end{center}
\end{figure}
Once the particle distributions are known, we obtain the
specific luminosity in the observer reference frame (in $\rm{erg~s^{-1}~sr^{-1}}$) at photon energy $E_\gamma$ as
\begin{equation}
L_\gamma (E_\gamma) = E_\gamma\, \int_{V} q_\gamma\, dV ,
\end{equation}
\noindent where $V$ is the volume of the emission region and $q_\gamma$ the volume
emissivity. In Fig. \ref{fig:SED} we show our best-fit SED as well as the broadband data for Cyg X-1. In this model, the non-thermal emission from radio wavelengths to the MeV tail is well described as synchrotron radiation from electrons in the jet. Note that all the emission above 10~GeV is exclusively
of hadronic origin. Furthermore, it is very close to the detection limits of MAGIC and CTA. If this emission were detected, it would be an indicator of the presence of protons in the outflows of Cyg X-1.\footnote{So far, heavy nuclei have been detected in the jets of only two MQs: SS~433 and 4U~1630$-$47.}
\begin{figure*}[htp]
\centering
\includegraphics[width = 0.95\textwidth, keepaspectratio]{./Pepe_f4.eps}
\caption{Best-fit spectral energy distribution for Cygnus X-1. Down-pointing arrows indicate upper limits. The data are not simultaneous. See Pepe et al. (2015) for details on the sources of the data.}
\label{fig:SED}
\end{figure*}
\section{Radio maps}
\label{maps}
In this section we describe the procedure and results of our modelling of the radio emission region. We integrate the volume emissivity $q_\gamma$
along the line of sight and then convolve it with a bidimensional Gaussian function of full width at half maximum (FWHM) of $2.25\times 0.86$ mas$^2$ to mimic the effect of an array
with a beam as in Fig. 3 of Stirling et al. (2001); the chosen separation between pointings was one beam radius in each direction. The result is shown in Fig. \ref{fig:mapas}. The flux levels are comparable to those measured by Stirling et al. (2001); the extension of the emitting region, however, is smaller.
This may be an indication that our modelling of the acceleration region (size and/or position) and/or the magnetic field should be revised.
\begin{figure*}
\centering
\includegraphics[width = 0.48 \textwidth, keepaspectratio]{./Pepe_f5.eps}
\caption{Image of the jet at $8.4$ GHz after convolution with a Gaussian beam of $2.25\times 0.86$ mas$^2$. The origin of coordinates is chosen to coincide with the position of the flux maximum. Contours are spaced in factors of $\sqrt{2}$; the lowest contour corresponds to 0.1~mJy~beam$^{-1}$.}
\label{fig:mapas}
\end{figure*}
\section{Synchrotron polarization}
\label{polarization}
Polarization depends directly on the magnetic field strength and configuration. Hence, we study the polarization of the emitted radiation as a means of testing our description of the magnetic field in the jet. We particularly focus on the polarization of the MeV radiation in order to compare our results with the recent measurements in that energy range. We follow Korchakov \& Sirovatskii (1962) for the calculation of the degree of polarization. We
compute the Stokes parameters from first principles, i.e., for completely general shapes of the magnetic field and particle distributions. We explore two different, simple geometries for the magnetic field; see Fig. \ref{fig:magneticField}. In both scenarios the magnetic field intensity decays with the coordinate $z$ as stated in Section \ref{radiative}. Our calculations indicate levels of polarization $\rho_{{B_{z}}} \sim 80\%$ and
$\rho_{{B_{\phi}}} \sim 75\%$, comparable to those reported by Laurent et al. (2011) and Rodriguez et al. (2013).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width = 0.18\textwidth, trim=0 0 0 0, clip]{./Pepe_f6.eps}
\includegraphics[width = 0.18\textwidth, trim=0 0 0 0, clip]{./Pepe_f7.eps}
\caption{Magnetic field geometries explored: decaying field in the vertical direction (left, $B_z$) and toroidal magnetic field (right, $B_{\phi}$). The $z$ direction is parallel to the jet axis. }
\label{fig:magneticField}
\end{center}
\end{figure}
\section{Conclusions}
\label{Conclusions}
In this paper we present our latest results for the modelling of the broadband emission of Cyg X-1. Our model indicates that the most energetic emission of this source is dominated by hadronic processes and that the MeV tail has a leptonic origin in the jets. We also obtain a flux of synchrotron radio emission consistent with
observations. However, the compactness of the synthetic radio source indicates that our description of the magnetic field and/or the acceleration zones needs to be improved. Our first investigations of the polarization of the radiation in the MeV band show that very simple field geometries can account for the observed level of polarization. In future works, we expect to further exploit the radio and polarization measurements to improve our modelling of the jets. In this regard, we will improve our description of the jet magnetic field by considering more realistic geometries and explore other configurations for the particle (re-)acceleration zones.
\section{Introduction}
There are several good reasons for considering a geometrization of quantum mechanics,
as it has been beautifully illustrated in a paper by Ashtekar and Schilling \cite{Ashtekar:1997ud}; consider also the partial list of papers \cite{Heslot:1985,
Rowe:1980,
Cantoni:1975,
Cantoni:1977a,
Cantoni:1977b,
Cantoni:1980,
Cantoni:1985,
Cirelli:1983,
Cirelli:1984,
Abbati:1984,
Provost:1980,
Gibbson:1992,
Brody:2001,
Gosson:2001,
Carinena:2000,
Marmo:2002,
Marmo:2006z,
ClementeGallardo:2007,
Carinena:2007ws,
Grabowski:2000zk,
Grabowski:2005my}, where a geometric formulation of quantum mechanics has been developed. Perhaps, the most appealing reason is provided by the opportunity of making available the whole experience of `classical' methods in the study of quantum mechanical problems. Here we shall focus on some recently established results of the geometrization program of quantum mechanics concerning the study of particular problems of quantum information theory \cite{Aniello:09,Volkert:2010iop,Volkert:2010,Aniello:10, Facchi:2010}.\\
To be more specific, let us comment on what we mean by geometrization of quantum mechanics: To replace the usual Hilbert space picture with a description in terms of Hilbert manifolds, together with all natural implications of this alternative description.\\ In this respect, this proposal is very similar to the transition from special to general relativity: Space-time is considered to be a Lorentzian manifold and the properties of the Minkowski space-time are transferred to the tangent space at each point of the space-time manifold. In particular, we go from the scalar product $\eta_{\mu\nu}X^{\mu}X^{\nu}$ to the Lorentzian metric tensor field $\eta_{\mu\nu}dx^{\mu}\otimes dx^{\nu}$, which is further generalized to non-flat space-time manifolds in the form $\eta_{\mu\nu}\theta^{\mu}\otimes\theta^{\nu}$, where $\{\theta^{\mu}\}$ are general 1-forms which carry the information on the non-vanishing of the curvature tensor.\\
Similarly, in the geometrization of quantum mechanics we go from the scalar product $\braket{\psi}{\psi}$ on the Hilbert space $\mathcal{H}$ to the Hermitian tensor field on the Hilbert manifold, written as $\braket{d\psi}{d\psi}$. This would be the associated covariant (0, 2)-tensor field.\\
If we consider as starting carrier space not $\mathcal{H}$ itself but its dual $\mathcal{H}^*$ --- say not ket-vectors but bra-vectors, in Dirac's notation --- we will obtain a (2,0)-tensor field, i.e., a contra-variant tensor field. Once we consider these replacements, algebraic structures will be associated with tensorial structures, and we have to take into account that there will be no more invertible linear transformations but just diffeomorphisms. The linear structure will emerge only at the level of the tangent space and will `reappear' on the manifold carrier space as a choice of each observer \cite{Ercolessi:2007zt}.\\
We must stress that manifold descriptions appear in a natural way already in the standard approach in terms of Hilbert spaces when, due to the probabilistic interpretation of quantum mechanics, we realize that pure states are not vectors in $\mathcal{H}$ but, rather, equivalence classes of vectors, i.e., \emph{rays}. The set of rays, say $\mathcal{R}(\mathcal{H})$, is the complex projective space associated with $\mathcal{H}$. It is not linear and carries a manifold structure with the tangent space at each point $[\psi]$ as `model space'. This space may be identified with the Hilbert subspace of vectors orthogonal to $\psi$.
Other examples of `natural manifolds of quantum states' are provided by the set of density states which do not allow for linear combinations but only convex combinations. They contain submanifolds of density states with fixed rank.\\
The best known example of a manifold of quantum states is provided by the coherent states or any generalized variant \cite{Perelomov:1971,Gilmore:1972,Onofri:1974}, including also non-linear coherent states \cite{Man'ko:1996xv, Aniello:2000}. As is well known, these manifolds of quantum states allow us to describe many properties of the system we are considering by means of finite dimensional smooth manifolds.\\
In this contribution, we start by reviewing, in section \ref{geo of H}, the geometrical formulation of the Hilbert space picture. We shall focus attention on the identification of tensor fields on submanifolds in terms of a natural pull-back procedure as considered in \cite{Aniello:08}. This procedure is applied, in section \ref{app}, by taking into account the pull-back on locally unitarily related quantum states. We then discuss some of its direct consequences for entanglement characterization according to \cite{Aniello:09,Volkert:2010}. In this regard, we relate these tensor fields to the concept of invariant operator valued tensor fields (IOVTs) on Lie groups \cite{Aniello:10}, which naturally admit applications also in the general case of mixed quantum states. In section \ref{QIMetrics}, we review a recently considered connection between the pull-back of the Fubini-Study-metric and a quantum version of the Fisher information metric \cite{Facchi:2010}. We conclude, in section \ref{outlook}, by outlining a relation between IOVTs and the Fisher quantum information metric.
\section{Geometrical Formulation of the Hilbert Space Picture}\label{geo of H}
Consider a separable complex Hilbert space $\mathcal{H}$. A geometrization of this space may be described in two steps as follows. First, by replacing the complex vector space structure with a real manifold $\mathcal{H}^{\mathbb{R}}$, and second, by identifying tensor fields on the latter manifold which are associated with all additional structures being defined on the `initial' Hilbert space, provided by the complex structure, a Hermitian inner product $\braket{\cdot}{\,\cdot}$, Hermitian operators and associated symmetric and anti-symmetric products. Moreover, we shall focus on geometric structures on $\mathcal{H}^{\mathbb{R}}$ defined as pull-back structures from the associated projective Hilbert space of complex rays $\mathcal{R}(\mathcal{H})$.
\\
In what follows, our statements should be considered to be always mathematically well defined whenever the Hilbert space we intend to geometrize is finite dimensional. Indeed, the basic ideas coming along the geometric approach in the finite dimensional case are fundamental for approaching the infinite dimensional case. The additional technicalities which may be required in the latter case will be discussed here by underlining them within specific examples rather than by focusing on general claims (For the manifold point of view for infinite dimensional vector spaces see \cite{Chernoff:1974,Schmid:1987,Lang:1994}).
\subsection{From Hermitian operators to real-valued functions}
Let us start with the identification of tensor fields of order zero. Given a Hermitian operator $A\in u^*(\mathcal{H})$ defined on a Hilbert space $\mathcal{H}$, we shall find a real symmetric function
\begin{equation} f_A(\psi) := \braket{\psi}{A\psi},\quad \psi\in \mathcal{H}\end{equation}
on $\mathcal{H}$ and on $\mathcal{H}^{\mathbb{R}}$ respectively. These functions decompose into \emph{elementary quadratic functions}
\begin{equation} f_{P_j}(\psi) = \braket{\psi}{P_j\psi},\quad \psi\in \mathcal{H}\end{equation}
on $\mathcal{H}^{\mathbb{R}}$ by virtue of a spectral decomposition
\begin{equation} A= \sum_j\lambda_j P_j\end{equation}
associated with a family of projectors
$P_j :=\ket{e_j}\bra{e_j}$ and an orthonormal basis $\{\ket{e_j}\}_{j\in I}$ on $\mathcal{H}$. This may be illustrated by taking into account \emph{coordinate functions}
\begin{equation} \braket{e_j}{\psi}:= z^j(\psi) \,.\end{equation}
yielding
\begin{equation} f_{A}(\psi) = \sum_j \lambda_j f_{P_j}(\psi)=\sum_j \lambda_j|z^j|^2(\psi).\end{equation}
In this regard we may recover the eigenvalues and
eigenvectors of the operators at the level of a related function
\begin{equation} e_A(\psi):=\frac{f_A(\psi)}{\braket{\psi}{ \psi}}\end{equation}
on the punctured Hilbert space $\mathcal{H}_0:=\mathcal{H}-\{0\}$. It is simple to see that eigenvectors are critical points $\psi_*$ of the function $e_A$, i.e.
\begin{equation}
de_A(\psi_*)=0 \text{ iff $\psi_*$ is an eigenvector of $A$.}\end{equation}
Hence,
\begin{equation} e_A(\psi_*) \text{ is an eigenvalue of $A$}.\end{equation}
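As a simple illustration, let $\mathcal{H}=\mathbb{C}^2$ and let $A$ be diagonal in the basis $\{\ket{e_1},\ket{e_2}\}$ with eigenvalues $\pm 1$; then
\begin{equation} e_A(\psi) = \frac{|z^1|^2(\psi)-|z^2|^2(\psi)}{|z^1|^2(\psi)+|z^2|^2(\psi)},\end{equation}
which takes values in $[-1,1]$, and its only critical points in $\mathcal{H}_0$ are the vectors proportional to $\ket{e_1}$ or $\ket{e_2}$, where $e_A$ attains the eigenvalues $+1$ and $-1$ respectively.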
By virtue of the momentum map
\begin{equation} \mu: \mathcal{H}_0 \> u^*(\mathcal{H}), \quad \ket{\psi} \mapsto \rho_{\psi}:= \frac{\ket{\psi}\bra{\psi}}{\braket{\psi}{\psi}}\label{m-map}\end{equation}
we note that \begin{equation} e_A(\psi)= \rho_{\psi}(A),\quad \rho_{\psi}\in D^1(\mathcal{H})\end{equation}
identifies a pull-back function from the set $D^1(\mathcal{H})$ of normalized rank-1 projectors which are in 1-to-1 correspondence with pure physical states in $\mathcal{R}(\mathcal{H})$. Hence, $e_A$ is the pull-back of a function which lives on $\mathcal{R}(\mathcal{H})$.
\subsection{The Fubini-Study metric seen from the Hilbert space}\label{fb-metric}
At this point we shall underline that the momentum map $\mu$, as written within the commutative diagram
\begin{equation*}
\begin{CD}
\mathcal{H}_0 @>\mu>>u^*(\mathcal{H})\\
@V\pi VV@A\iota AA\\
\mathcal{R}(\mathcal{H}) @>\cong>>D^1(\mathcal{H})
\end{CD}
\end{equation*}
provides a fundamental tool for pulling back, in a \emph{computable} way, any covariant structure defined on $D^1(\mathcal{H}) \cong \mathcal{R}(\mathcal{H})$
to the `initial' punctured Hilbert space $\mathcal{H}_0$. For this purpose, we may consider, for a given Hermitian operator $A$, the operator-valued differential $dA$ with respect to a real parametrization of $u^*(\mathcal{H})$, and define the $(0,2)$-tensor field
\begin{equation} \mathrm{Tr}(A dA \otimes dA).\end{equation}
The differential calculus on a submanifold $\mathcal{M}\subset u^*(\mathcal{H})$ may then be inherited from the `ambient space' $u^*(\mathcal{H})$ together with this covariant structure. In particular, for $\mathcal{M}\cong \mathcal{R}(\mathcal{H})$ we find, by taking into account the momentum map (\ref{m-map}),
\begin{equation} \mathrm{Tr}(\rho_{\psi}d\rho_{\psi}\otimes d\rho_{\psi})
=\frac{\tbraket{d\psi}{d\psi}}{\braket{\psi}{\psi}}- \frac{\braket{\psi}{d\psi}}{\braket{\psi}{\psi}}\otimes \frac{\braket{d\psi}{\psi}}{\braket{\psi}{\psi}}\,,\end{equation}
as momentum-map induced pull-back tensor field on the associated punctured Hilbert space $\mathcal{H}_0$ \cite{Aniello:09}. Moreover, this tensor field turns out to be the pull-back of the Fubini-Study metric tensor field from the space of rays $\mathcal{R}(\mathcal{H})$. Here we shall note that $\ket{d\psi}$ defines a $\mathcal{H}$-vector-valued 1-form which provides `classical' complex-valued 1-forms according to $\braket{e_j}{d\psi} \equiv dz_j$, as we shall explain in more detail in the next section.
\subsection{From Hermitian inner products to classical tensor fields}
By introducing an orthonormal basis $\{\ket{e_j}\}_{j\in J}$, we may define coordinate functions on $\mathcal{H}$ by setting
\begin{equation} \braket{e_j}{\psi} = z^j(\psi),\end{equation}
which we'll write in the following simply as $z^j$. Correspondingly, for the dual basis $\{\bra{e_j}\}$ we find coordinate functions \begin{equation} \braket{\psi}{e_j}= \bar{z}_j(\psi^*)\end{equation}
defined on the dual space $\mathcal{H}^*$. By using the inner product we can identify in the finite dimensional case $\mathcal{H}$ and $\mathcal{H}^*$. This provides two possibilities: The scalar product $\braket{\psi}{\psi}$ gives rise to a covariant Hermitian (0, 2)-metric tensor on $\mathcal{H}$
\begin{equation} \braket{d\psi}{d\psi} =\sum_j \braket{d\psi}{e_j}\braket{e_j}{d\psi}= d\bar{z}_j\otimes dz^j,\end{equation}
where we have used $d\braket{e_j}{\psi}=\braket{e_j}{d\psi}$, i.e., the chosen basis is not `varied',
or to a contra-variant (2,0) tensor
\begin{equation} \braket{\frac{\partial}{\partial \psi}}{\frac{\partial}{\partial \psi}} = \frac{\partial}{\partial \bar{z}_j}\otimes \frac{\partial}{\partial z^j}\end{equation}
on $\mathcal{H}^*$.
\\\\
\emph{Remark:} Specifically, we assume that an orthonormal basis has been selected once and it does not depend on the base point.
\\\\
By introducing real coordinates, say
\begin{equation} z^j({\psi}) = x^j(\psi)+iy^j(\psi)\end{equation}
one finds
\begin{eqnarray}
\braket{d\psi}{d\psi} = (dx_j \otimes dx^j + dy_j\otimes dy^j)+i(dx_j\otimes dy^j - dy_j\otimes dx^j).
\end{eqnarray}
Thus the Hermitian tensor decomposes into an Euclidean metric (more generally a Riemannian tensor) and a symplectic form.\\
Similarly, on $\mathcal{H}^*$ we may consider
\begin{eqnarray}
\braket{\frac{\partial}{\partial \psi}}{\frac{\partial}{\partial \psi}}= \bigg(\frac{\partial}{\partial x_j} \otimes \frac{\partial}{\partial x^j} + \frac{\partial}{\partial y_j} \otimes \frac{\partial}{\partial y^j}\bigg)+i\bigg( \frac{\partial}{\partial y_j} \otimes \frac{\partial}{\partial x^j}-\frac{\partial}{\partial x_j} \otimes \frac{\partial}{\partial y^j}\bigg).
\end{eqnarray}
This tensor field, in contravariant form, may be also considered as a bi-differential operator, i.e., we may define a binary bilinear product on real smooth functions by setting
\begin{equation} ((f, g)) = \bigg( \frac{\partial f}{\partial x_j}+i\frac{\partial f}{\partial y_j}\bigg)\cdot \bigg( \frac{\partial g}{\partial x^j}-i\frac{\partial g}{\partial y^j}\bigg)\end{equation}
which decomposes into a symmetric bracket
\begin{equation} (f, g) = \frac{\partial f}{\partial x_j}\frac{\partial g}{\partial x^j} + \frac{\partial f}{\partial y_j}\frac{\partial g}{\partial y^j}\end{equation}
and a skew-symmetric bracket
\begin{equation} \{f, g\} = \frac{\partial f}{\partial y_j}\frac{\partial g}{\partial x^j} - \frac{\partial f}{\partial x_j}\frac{\partial g}{\partial y^j}.\end{equation}
This last bracket defines a Poisson bracket on smooth functions defined on $\mathcal{H}$.\\
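For instance, evaluating this bracket on the coordinate functions themselves gives
\begin{equation} \{y^k, x^l\} = \delta^{kl}, \qquad \{x^k, x^l\} = \{y^k, y^l\} = 0,\end{equation}
so that the pairs $(x^j, y^j)$ behave as canonically conjugate coordinates on $\mathcal{H}^{\mathbb{R}}$.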
Summarizing, we can replace our original Hilbert space with a Hilbert manifold, i.e. an even dimensional real manifold on which we have tensor fields in covariant form
\begin{equation} G = dx_j\otimes dx^j + dy_j\otimes dy^j\end{equation}
\begin{equation} \Omega = dy_j\otimes dx^j - dx_j\otimes dy^j,\end{equation}
or tensor fields in contravariant form
\begin{equation} G^{-1} = \frac{\partial}{\partial x_j} \otimes \frac{\partial}{\partial x^j} + \frac{\partial}{\partial y_j} \otimes \frac{\partial}{\partial y^j}\end{equation}
\begin{equation} \Omega^{-1} = \frac{\partial}{\partial y_j} \otimes \frac{\partial}{\partial x^j}-\frac{\partial}{\partial x_j} \otimes \frac{\partial}{\partial y^j},\end{equation}
along with a complex structure tensor field
\begin{equation} J = dx^j\otimes \frac{\partial}{\partial y_j}-dy^j\otimes \frac{\partial}{\partial x^j}.\end{equation}
The contravariant tensor fields, considered as bi-differential operators define a symmetric product and a skew symmetric product on real smooth functions. The skew-symmetric product actually defines a Poisson bracket. In particular, for functions
\begin{equation} f_{A}(\psi) = \braket{\psi}{A\psi},\quad \psi\in \mathcal{H},\end{equation}
associated with Hermitian operators $A$, we shall end up with the relations
\begin{equation} f_{[A, B]_+} \equiv G^{-1}(df_A, df_B),\end{equation}
\begin{equation} f_{[A, B]_-} \equiv \Omega^{-1}(df_A, df_B),\end{equation}
which replaces symmetric and anti-symmetric operator products $[A, B]_{\pm}$ by symmetric and anti-symmetric tensor fields respectively. Hence, via these tensor fields we may identify symmetric and Poisson brackets on the set of quadratic functions according to
\begin{equation} f_{[A, B]_+} = (f_A, f_B),\end{equation}
\begin{equation} f_{[A, B]_-} = \{f_A, f_B\},\end{equation}
which synthesize to a star-product
\begin{equation} ((f_A, f_B)) = (f_A, f_B)+i\{f_A, f_B\}=:f_A\star f_B \end{equation}
and therefore turn the set of quadratic functions into a $C^*$-algebra. In this way we may encode the original non-commutative structure on operators in terms of `classical', i.e.\,Riemannian and symplectic tensor fields according to
\begin{equation} f_A\star f_B=f_{A\cdot B}(\psi) = (G^{-1}+i\Omega^{-1})(df_A(\psi), df_B(\psi)).\end{equation}
To take into account the geometry of the set of physical (pure) states, we need to modify $G^{-1}$ and
$\Omega^{-1}$ by a conformal factor to turn them into projectable tensor fields on $\mathcal{R}(\mathcal{H})$. The projection is generated at the infinitesimal level by the real and imaginary parts of the action
of $\mathbb{C}_0$ on $\mathcal{H}_0$ given by the dilation vector field $\Delta$ and the
$U(1)$-phase rotation generating vector field $\Gamma:=J(\Delta)$ respectively. In this way we shall identify
\begin{equation} \widetilde{G}(\psi)=\braket{\psi}{\psi}G^{-1}-(\Delta\otimes \Delta+\Gamma\otimes \Gamma)\end{equation}
\begin{equation} \widetilde{\Omega}(\psi)=\braket{\psi}{\psi}\Omega^{-1}-(\Delta\otimes \Gamma-\Gamma\otimes \Delta),\end{equation}
as projectable structures \cite{Chruscinski:2008}. They establish a Lie-Jordan algebra structure on the space of
real valued functions whose Hamiltonian vector fields are also Killing vector
fields for the projected metric $\tilde G$. In this regard one finds that
a generic function on $\mathcal{R}(\mathcal{H})$ defines a quantum evolution, via the associated Hamiltonian vector field, if and only if the vector field is a derivation for the Riemann-Jordan product \cite{Cirelli:1991, Marmo:2006z}.
\\
The geometric formulation of the Hilbert space picture reviewed here so far can be summarized at this point by a `dictionary' as follows \cite{Heslot:1985,
Rowe:1980,
Cantoni:1975,
Cantoni:1977a,
Cantoni:1977b,
Cantoni:1980,
Cantoni:1985,
Cirelli:1983,
Cirelli:1984,
Abbati:1984,
Provost:1980,
Ashtekar:1997ud,
Gibbson:1992,
Brody:2001,
Gosson:2001,
Carinena:2000,
Marmo:2002,
Marmo:2006z,
ClementeGallardo:2007,
Carinena:2007ws,
Grabowski:2000zk,
Grabowski:2005my}.
\begin{center}
\begin{tabular}{|c|l|}
\hline
\textsc{Standard QM} & \textsc{Geometric QM}\\
\hline
\hline
Complex vector space & Real manifold with a complex structure\\
Hermitian inner product $\langle\cdot ,\cdot \rangle$ & Hermitian tensor field\\
Real part of $\langle\cdot ,\cdot \rangle$ & Riemannian tensor field \\
Imaginary part of $\langle\cdot ,\cdot \rangle$ & Symplectic tensor field \\
Hermitian operator $A$ & Real-valued function $e_A(\psi):=\frac{\langle\psi , A\psi \rangle}{\langle\psi ,\psi\rangle}$ \\
Eigenvectors of $A$ & Critical points of $e_A(\psi)$\\
Eigenvalues of $A$ & Values of $e_A$ at critical points\\
Commutator & Poisson bracket\\
Anti-commutator & Symmetric bracket\\
Quantum evolution & Hamiltonian Killing vector field\\
\hline
\end{tabular}
\end{center}
\subsection{Pull-back structures on submanifolds of $\mathcal{H}$}\label{pb from H}
One interesting aspect for the current applications of the geometric formulation of quantum mechanics is the possibility to induce tensor fields in covariant form on a given submanifold via a pull-back procedure \cite{Aniello:08, Aniello:09,Volkert:2010}. In particular one finds
\begin{The}\label{BP-Th} Let $\{\theta_j\}_{j\in J}$ be a basis of left-invariant 1-forms on a Lie group $\mathcal{G}$, and let
$\{X_j\}_{j\in J}$ be a dual basis of left-invariant vector fields, and let $iR$ be the infinitesimal representation of $U:\mathcal{G}\rightarrow U(\mathcal{H})$, inducing for $\ket{\psi}\in S(\mathcal{H})$ a map
$$f_{\mathcal{G}}:\mathcal{G}\rightarrow \mathcal{H},$$ $$f_{\mathcal{G}}(g):=U(g)\ket{\psi},$$
and let $$\sum_{j=1}^N d\bar{z}^{j}\otimes dz^{j}=\sum_{j=1}^N\underbrace{dx^{j}\odot dx^{j}+dy^{j}\odot dy^{j}}_{:=G}+i\underbrace{(dx^{j}\wedge dy^{j})}_{:=\Omega }$$
be an invariant Hermitian tensor field on $\mathcal{H}\cong \mathbb{C}^N\cong \mathbb{R}^{2N}$. Then
$$f_{\mathcal{G}}^*(\sum_{j=1}^N d\bar{z}^{j}\otimes dz^{j})=\rho^{\psi}(R(X_j)R(X_k)) \theta^{j} \otimes \theta^{k}:=T_{\mathcal{G}}^{\rho^{\psi}} $$
for $\rho^{\psi} := \frac{\ket{\psi}\bra{\psi}}{\braket{\psi}{\psi}} \in D^1(\mathcal{H})$.
\end{The}
As a direct consequence, we shall identify this (degenerate) pull-back tensor field with the pull-back of a non-degenerate tensor field which lives on a homogeneous space $\mathcal{G}/\mathcal{G}_0$. The latter admits a smooth embedding via the unitary action of the Lie group as orbit manifold $\mathcal{O}$ in the Hilbert space and establishes therefore a pull-back of the Hermitian structure both on the orbit $\mathcal{O}$ and the homogeneous space $\mathcal{G}/\mathcal{G}_0$. Hence, the computation of the pull-back on the orbit reduces to the computation of the pull-back on the Lie group, as indicated here in the commutative diagram below.
\begin{equation*}
\begin{CD}
\mathcal{G} @>f_{\mathcal{G}}>>\mathcal{H}\\
@V\pi VV@A\iota AA\\
\mathcal{G}/\mathcal{G}_0 @>\cong>>\mathcal{O},
\end{CD}
\end{equation*}
where $\pi$ denotes the canonical projection of $\mathcal{G}$ onto $\mathcal{G}/\mathcal{G}_0$ and $\iota$ defines the inclusion map of the orbit $\mathcal{O}$ on $\mathcal{H}$.\\
Taking into account in this regard the space of pure states provided by the projective Hilbert space $\mathcal{R}(\mathcal{H})$, it becomes appropriate to consider
the covariant tensor field
\begin{equation} \frac{d\bar{z}^{j}\otimes dz^{j}}{\sum_j |z^{j}|^2}-\frac{z^{j}d\bar{z}^{j}\otimes \bar{z}^{k}dz^{k}}{(\sum_j |z^{j}|^2)^2}\label{ProjectiveHT}\end{equation}
on $\mathcal{H}_0$ which has been identified in section \ref{fb-metric} as the pull-back tensor of the Fubini-Study metric from $\mathcal{R}(\mathcal{H})\cong \mathbb{C} P^{n}$ to $\mathcal{H}_0\cong \mathbb{C}^{n+1}_0$. Here the pull-back on the Lie group reads
\begin{equation} \big(\rho^{\psi}(R(X_j)R(X_k))- \rho^{\psi}(R(X_j))\rho^{\psi}(R(X_k))\big)\theta^j\otimes \theta^k.\label{ProjectivePBTonG}\end{equation}
The embedding of the Lie group and its corresponding orbit is related to the co-adjoint action map on all group elements modulo U(1)-representations $U(h)=e^{i\phi(h)}$
\begin{equation} f_{\mathcal{G}}^{U(1)}: \mathcal{G}/U(1) \rightarrow \mathcal{R}(\mathcal{H}), \quad g\mapsto U(g)\rho U(g)^{\dagger},\quad \rho\in \mathcal{R}(\mathcal{H}).\end{equation}
Let us underline again that the structure (\ref{ProjectivePBTonG}) is defined \emph{on the Lie group via a pull-back tensor field from the Hilbert space} even though it contains the full information of the (non-degenerate) tensor field on the corresponding co-adjoint orbit $\mathcal{O}$ which is embedded in the projective Hilbert space. The additional $U(1)$-degeneracy is here captured in a corresponding enlarged isotropy group $\mathcal{G}_0^{U(1)}$ according to the commutative diagram below.
\begin{equation*}
\begin{CD}
\mathcal{G} @>f_{\mathcal{G}}>>S(\mathcal{H})\\
@VU(1) VV@VU(1) VV\\
\mathcal{G}/U(1) @>f_{\mathcal{G}}^{U(1)}>>\mathcal{R}(\mathcal{H})\\
@V\pi VV@A\iota AA\\
\mathcal{G}/\mathcal{G}_0^{U(1)} @>\cong>>\mathcal{O}
\end{CD}
\end{equation*}
This approach therefore provides an `algorithmic' procedure to find a geometric description of coherent state manifolds, as defined in \cite{Perelomov:1971,Gilmore:1972,Onofri:1974}. Indeed, the associated orbits in our approach turn out to be more general than those given by coherent states, whenever we also allow reducible representations, as typically occurs in composite Hilbert spaces.
\section{Some Applications:
Composite Systems, Entanglement and Separability}\label{app}
\subsection{Separable and maximal entangled pure states}
By considering the representation
\begin{align}
\mathcal{G}\equiv U(n)\times U(n) \rightarrow & U(n^2)\notag\\
g\equiv (g_A, g_B) \mapsto & U(g)\equiv g_A\otimes g_B =( g_A\otimes \mathds{1})(\mathds{1}\otimes g_{B})\end{align}
infinitesimally generated by generalized Pauli matrices tensored with the identity on a subsystem
\begin{align}
\mbox{Lie}(\mathcal{G}) \equiv u(n)\oplus u(n) \rightarrow & u(n^2)\notag\\
X_j \mapsto & iR(X_j) \equiv
\begin{cases}
i\sigma_j\otimes \mathds{1} &\text{ for } 1 \le j \le n^2\\
\mathds{1}\otimes i\sigma_{j-n^2} &\text{ for } n^2+1 \le j \le 2n^2,
\end{cases}\end{align}
one finds according to theorem \ref{BP-Th} a pull-back tensor field on the Lie group
$$f_{\mathcal{G}}^*(\delta_{jk}d\bar{z}^{j}\otimes dz^{k}) = \rho^{\psi}(R(X_j)R(X_k)) \theta^{j} \otimes \theta^{k}$$
$$=\underbrace{\rho^{\psi}([R(X_j)R(X_k)]_+) \theta^{j} \odot \theta^{k}}_{=f_{\mathcal{G}}^*G}+i\underbrace{\rho^{\psi}([R(X_j)R(X_k)]_-) \theta^{j} \wedge \theta^{k}}_{=f_{\mathcal{G}}^*\Omega}$$
which decomposes for all $\rho^{\psi} \in D^1(\mathbb{C}^n\otimes \mathbb{C}^n)$ into a Riemannian and a symplectic coefficient matrix
\begin{equation} (T^{\rho^{\psi}}_{jk})= \left(\begin{array}{cc}(A^{\rho_A}_{(jk)}) & (C^{\rho^{\psi}}_{jk}) \\(C^{\rho^{\psi}}_{jk}) & (B^{\rho_B}_{(jk)}) \end{array}\right)+i \left(\begin{array}{cc}(A^{\rho_A}_{[jk]}) & 0 \\0 & (B^{\rho_B}_{[jk]}) \end{array}\right),\notag\end{equation}
\begin{align}
A^{\rho_A}_{(jk)}= & \evp{[\sigma_j,\sigma_k]_+\otimes \mathds{1}}=\rho_A([\sigma_j,\sigma_k]_+)\\\notag\\
A^{\rho_A}_{[jk]}= & \evp{[\sigma_j,\sigma_k]_-\otimes \mathds{1}} =\rho_A([\sigma_j,\sigma_k]_-)\\\notag\\
C^{\rho^{\psi}}_{jk}= & \evp{\sigma_j\otimes \sigma_{k-n^2}}.\end{align}
In contrast to the Riemannian part, we observe that the symplectic part splits in general into two symplectic structures associated with the subsystems. Hence, the symplectic structure behaves in analogy to classical composite systems. This suggests the following definition and associated theorem \cite{Aniello:10}:
\begin{Def} $\rho^{\psi} \in D^1(\mathbb{C}^n\otimes \mathbb{C}^n)$ is called \textit{maximally entangled}
if $$f_{U(n)\times U(n)}^*\Omega=0.$$
\end{Def}
Based on this definition, we find
\begin{The} $\rho^{\psi} \in D^1(\mathbb{C}^n\otimes \mathbb{C}^n)$ is maximally entangled iff the reduced state is maximally mixed.
\end{The}
Hence, this theorem recovers the definition \cite{Donald:2002} which provides the von Neumann entropy as the unique measure of entanglement for pure bi-partite states.\\
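As a simple consistency check, consider the Bell state $\ket{\psi}=\frac{1}{\sqrt{2}}(\ket{00}+\ket{11})$ in $\mathbb{C}^2\otimes\mathbb{C}^2$: here $\rho_A=\rho_B=\frac{1}{2}\mathds{1}$, so that
\begin{equation} A^{\rho_A}_{[jk]} = \tfrac{1}{2}\mathrm{Tr}([\sigma_j,\sigma_k]_-) = 0 = B^{\rho_B}_{[jk]},\end{equation}
and therefore $f^*_{U(2)\times U(2)}\Omega = 0$, in agreement with the definition.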
On the other extreme, we find for separable states a factorization of the Riemannian coefficient sub-matrix $C$ into reduced density states according to
$$ \rho^{\psi} \mbox{ is \textit{separable} }\Leftrightarrow C^{\rho^{\psi}}_{jk} = \rho^{\psi}(\sigma_j\otimes \sigma_{k-n^2}) = \rho_A(\sigma_j)\rho_B(\sigma_{k-n^2}).$$
In contrast, if we take the pull-back tensor field
\begin{equation} \rho^{\psi}(R(X_j)R(X_k)) \theta^{j} \otimes \theta^{k}- \rho^{\psi}(R(X_j))\rho^{\psi}(R(X_k)) \theta^{j} \otimes \theta^{k}:=\mathcal{T}^{\rho^{\psi}}_{\mathcal{G}} \end{equation}
provided by the Fubini-Study metric from the projective Hilbert space we find the modified coefficient sub-matrix
\begin{equation} \mathcal{C}^{\rho^{\psi}}_{jk} := \rho^{\psi}(\sigma_j\otimes \sigma_{k-n^2}) - \rho_A(\sigma_j)\rho_B(\sigma_{k-n^2}),\end{equation}
and therefore a splitting condition
$$ \rho^{\psi} \mbox{ is \textit{separable} }
\Leftrightarrow \mathcal{T}^{\rho^{\psi}}_{U(n)\times U(n)}=\mathcal{T}^{\rho_A\otimes \rho_B}_{U(n)\times U(n)} = \mathcal{T}^{\rho_A}_{U(n)} \oplus \mathcal{T}^{\rho_B}_{U(n)}.$$
Hence, we may detect separable states, as those provided by a Segre-embedding
\begin{equation} \mathcal{R}(\mathcal{H}_A)\times \mathcal{R}(\mathcal{H}_B) \hookrightarrow \mathcal{R}(\mathcal{H}_A\otimes \mathcal{H}_B)\end{equation}
seen from the Hilbert space by the condition $\mathcal{C}^{\rho^{\psi}}_{jk}=0$.
\subsection{Quantitative statements}
For approaching in this setting quantitative statements we may consider invariant functions
$$f(\psi) := \braket{\mathcal{T}^{\rho^{\psi}}_{U(n)\times U(n)}}{\mathcal{T}^{\rho^{\psi}}_{U(n)\times U(n)}}$$
under local unitary transformations provided by a Hermitian inner product on invariant tensor fields on $U(n)\times U(n)$ associated with the pullback of the Fubini-Study metric seen from the Hilbert space. More specifically, we find
$$\mathcal{T}^{\rho^{\psi}}_{\mathcal{G}}=\mathcal{T}^{\rho^{\psi}}_{j_1j_2}\theta^{j_1}\otimes \theta^{j_2}$$
$$\braket{\mathcal{T}^{\rho^{\psi}}_{\mathcal{G}}}{\mathcal{T}^{\rho^{\psi}}_{\mathcal{G}}}:=(\mathcal{T}^{\rho^{\psi}}_{j_1j_2})^*\mathcal{T}^{\rho^{\psi}}_{k_1k_2}\braket{\theta^{j_1}\otimes \theta^{j_2}}{\theta^{k_1}\otimes \theta^{k_2}}.$$
With $\braket{\theta^j}{\theta^k}= \delta^{jk}$ this gives rise to
$$\braket{\mathcal{T}^{\rho^{\psi}}_{\mathcal{G}}}{\mathcal{T}^{\rho^{\psi}}_{\mathcal{G}}}=\sum_{j_1,j_2}|\mathcal{T}^{\rho^{\psi}}_{j_1j_2}|^2:=\|\mathcal{T}^{\rho^{\psi}}_{_{j_1j_2}} \|^2_2.$$
In particular, we may consider an inner product on the symmetric part
$$\|\mathcal{T}^{\rho^{\psi}}_{_{(jk)}} \|^2_2=\|\mathcal{A}^{\rho_A}_{_{(jk)}} \|^2_2+\|\mathcal{B}^{\rho_B}_{_{(jk)}} \|^2_2+2\|\mathcal{C}^{\rho^{\psi}} \|^2_2$$
which implies an entanglement monotone candidate \emph{which evades the explicit computation of Schmidt coefficients} (compare also \cite{Schlienz:1995,Man'ko:2002ti}). In particular, we find \cite{Aniello:09,Volkert:2010}
\begin{The} Let $\rho^{\psi}\in D^1(\mathbb{C}^n\otimes \mathbb{C}^n)$ and let $\rho_A, \rho_B\in D(\mathbb{C}^n)$ be the reduced density states of $\rho^{\psi}$. Then
$$\frac{1}{n^2}\|\mathcal{C}^{\rho^{\psi}} \|_2 = \|\rho^{\psi}-\rho_A\otimes \rho_B\|_2.$$
\end{The}
\subsection{Mixed states entanglement and invariant operator valued tensor fields}
So far we modeled an entanglement characterization \emph{algorithm} based on invariant tensor fields on the Lie group $\mathcal{G}=U(n)\times U(n)$, which `replaces' functions on Schmidt-coefficients by functions on tensor-coefficients:\begin{center}
\begin{tikzpicture}[node distance=3.5cm,auto,>=latex']
\node [int, pin={[init]above:$\rho^{\psi}$}] (a) {$\mathcal{T}^{\rho^{\psi}}_{\mathcal{G}}$};
\node (b) [left of=a,node distance=2cm, coordinate] {$ $};
\node (c) [right of=a] {$f(\psi) :=\sum_{j_1j_2}|\mathcal{T}^{\rho^{\psi}}_{j_1j_2}|^2$};
\node [coordinate] (end) [right of=c, node distance=2cm]{};
\path[->] (b) edge node {$(R(X_j))$} (a);
\path[->] (a) edge node {$ $} (c);
\end{tikzpicture}\end{center}
Similar to the case of pure states, we shall also identify entanglement monotone candidates in the generalized regime of mixed states by functions
\begin{equation} f:D(\mathcal{H}_A\otimes \mathcal{H}_B) \rightarrow \mathbb{R}_+ \end{equation}
which are invariant under the local unitary group of transformations $U(\mathcal{H}_A)\times U(\mathcal{H}_B)$ \cite{Vidal:2000}. With this necessary requirement, we propose in the following entanglement monotone candidates by taking into account constant functions on local unitary orbits of entangled quantum states, arising from invariant operator valued tensor fields (IOVTs) on $U(\mathcal{H}_A)\times U(\mathcal{H}_B)$, as considered recently on general matrix Lie groups $\mathcal{G}$ \cite{Aniello:10}. Let us review the basic construction.\\
Given a unitary representation
\begin{equation} U: \mathcal{G}\rightarrow U(\mathcal{H}),\end{equation}
we may identify an anti-Hermitian operator-valued left-invariant 1-form
\begin{equation}-U(g)^{-1}dU(g)\equiv iR(X_j)\theta^j\end{equation}
on $\mathcal{G}$, where the operator $iR(X_j)$ is associated with the representation of the Lie algebra $\text{Lie}(\mathcal{G})$. In this way, we may construct higher order invariant operator valued tensor fields
\begin{equation} -U(g)^{-1}dU(g) \otimes U(g)^{-1}dU(g)=R(X_j)R(X_k)\theta^j\otimes \theta^k,\end{equation}
on $\mathcal{G}$ by taking into account the representation as being equivalently defined by means of the representation of the enveloping algebra of the Lie algebra in the operator algebra $\mathcal{A} :=$End$(\mathcal{H})$. More specifically, any element $X_j\otimes X_k$ in the enveloping algebra becomes associated with a product
\begin{equation} R(X_j)R(X_k)\in \mathcal{A}:=\mbox{End}(\mathcal{H}),\end{equation}
where $\mathcal{A}$ may denote the vector space of a $C^*$-algebra. At this point, we may evaluate each one of these products by means of dual elements \begin{equation} \rho\in \mathcal{A}^*,\end{equation}
according to
\begin{equation} \rho(R(X_j)R(X_k)) \equiv \mathrm{Tr}(\rho\, R(X_j)R(X_k))\in \mathbb{C},\end{equation}
yielding a complex-valued tensor field
\begin{equation} \rho(R(X_j)R(X_k))\theta^j\otimes \theta^k\label{rho tensor}\end{equation}
on the group manifold. By taking the k-th product of invariant operator-valued left-invariant 1-forms
\begin{equation} -U(g)^{-1}dU(g) \otimes U(g)^{-1}dU(g)\otimes...\otimes U(g)^{-1}dU(g),\end{equation}
we shall find a representation R-dependent IOVT of order $k$
\begin{equation} \theta_R := \bigg(\prod_{a=1}^k R(X_{i_a})\bigg) \bigotimes_{a=1}^k\theta^{i_a}\notag\end{equation}
on a Lie group $\mathcal{G}=U(n)\times U(n)$. After evaluating it with a \emph{mixed} quantum state
\begin{equation} \theta_R \mapsto \rho(\theta_R):=\theta^{\rho}_R =\rho\bigg(\prod_{a=1}^k R(X_{i_a})\bigg) \bigotimes_{a=1}^k\theta^{i_a}\notag\end{equation}
one may again consider invariant functions via an inner product $\braket{\theta^{\rho}_R}{\theta^{\rho}_R}$. In particular, for $k=n=2$, we recover in this way the purity and the concurrence related measures involving a spin-flip transformed state $\tilde{\rho}$ by considering inner product combinations of symmetric and anti-symmetric tensor fields
\begin{equation} G^{\rho}_R :=\rho([R(X_j),R(X_k)]_+),\quad \Omega^{\rho}_R :=\rho([R(X_j),R(X_k)]_-),\end{equation}
according to
\begin{equation} \frac{1}{8}\big(\braket{G^{\rho}_R}{G^{\rho}_R}+ (-1)^s\braket{\Omega^{\rho}_R}{\Omega^{\rho}_R}\big) -\frac{1}{2}= \begin{cases}
\mbox{Tr}(\rho^2) &\text{ for } s=0\\
\mbox{Tr}(\rho\tilde{\rho}) &\text{ for } s=1.
\end{cases}\end{equation}
In more general terms, one may introduce $R$-\emph{classes} of entanglement monotone candidates by taking into account polynomials
$$f_k^{R}(\rho):=\sum_n a_n\braket{\theta^{\rho}_R}{\theta^{\rho}_R}^n, \quad \theta^{\rho}_R :=\rho\bigg(\prod_{a=1}^k R(X_{i_a})\bigg) \bigotimes_{a=1}^k\theta^{i_a}. $$
The case
\begin{equation} \tilde{R}(X_j)= R(X_j)- \rho(R(X_j))\mathds{1},\label{NL-op}\end{equation}
recovers for IOVTs of order $k=2$ a class of separability criteria associated with covariance matrices (CMs) \cite{Gittsovich:2008} by means of a \emph{CM-tensor field} \begin{equation}\theta^{\rho}_{\tilde{R}}=\big(\rho(R(X_j)R(X_k))- \rho(R(X_j))\rho(R(X_k))\big)\theta^j\otimes \theta^k.\end{equation}
An open problem in the field of CM-criteria is provided by the question of how to find an extension to quantitative statements \cite{Gittsovich:2008}. A possible approach could be provided here by taking into account a $\tilde{R}$-\emph{class} of entanglement monotone candidates by considering $$f_2^{\tilde{R}}(\rho)= \sum_n a_n\braket{\theta^{\rho}_{\tilde{R}}}{\theta^{\rho}_{\tilde{R}}}^n.$$
To give an example, we consider the function
\begin{equation} f_2^{\tilde{R}}(\rho)\equiv \braket{\theta^{\rho}_{\tilde{R}}}{\theta^{\rho}_{\tilde{R}}}\end{equation}
applied to a family of 2-parameter states on a composite Hilbert space of two qubits given by
\begin{equation} \rho_{x, \alpha_0}:= x\ket{\alpha_0} \bra{\alpha_0} +(1-x)\rho^*, \quad \ket{\alpha_0}:=\cos(\alpha_0)\ket{11}+\sin(\alpha_0)\ket{00}\end{equation}
and find a possible approximation to the concurrence measure\\
\begin{equation}\mbox{max}[\sqrt{\lambda_4}-\sqrt{\lambda_3}-\sqrt{\lambda_2}-\sqrt{\lambda_1},\, 0], \quad\lambda_j\in\mbox{Spec}(\rho\widetilde{\rho}) \text{ in increasing order}.\end{equation}
Both functions are plotted in figure \ref{fig1}.
\begin{figure}[htp]
\centerline{\includegraphics[scale=0.50]{D.pdf}\,\,\includegraphics[scale=0.50]{C.pdf}}
\vspace*{8pt}
\caption{The function $f_2^{\tilde{R}}(\rho)$ gives rise to a possible approximation (left) to the concurrence measure (right) applied to a family of 2-parameter states on a composite Hilbert space of two qubits. \label{fig1}}
\end{figure}
\section{From quantum to classical information}\label{QIMetrics}
In the previous section we considered invariant operator valued tensor fields (IOVTs) on Lie groups to tackle the problem of entanglement quantification in composite quantum systems. As a source for performing quantum computation, quantum communication and other types of quantum information processes, we may ask how the resulting entanglement monotone candidates which we have discussed so far are related to known quantum information measures, in analogy to the von Neumann entropy
$$S(\rho)=-\mathrm{Tr}(\rho \log \rho),$$
which establishes a unique entanglement measure for pure states, when applied to corresponding reduced density states \cite{Donald:2002}. In this regard we may consider the \emph{quantum relative entropy}
$$S(\rho||\rho')=S(\rho)-\mathrm{Tr}(\rho \log \rho'),$$
which introduces the notion of a distance between quantum states. In particular, it defines a distance which is monotone under completely positive maps $\Phi$ \cite{Bengtsson:2006},
$$S(\Phi \rho||\Phi \rho')\leqslant S(\rho||\rho').$$
More generally \cite{Gibilisco:2007}, a completely positive map-monotone metric on the space of quantum states $D(\mathcal{H})$ may establish the notion of a \emph{quantum Fisher information metric}. It is of general interest to understand under which conditions the \emph{classical} Fisher information metric can be recovered from a given quantum information metric. In contrast to the latter, we shall note that the classical Fisher metric is uniquely defined as a Markov map-monotone metric on the space of classical probability distributions.\\
A frequently used quantum Fisher information metric on a real submanifold of quantum states
\begin{equation} \rho_{\theta}\in \mathcal{N}\subset D(\mathcal{H}),\label{theta-sm rho}\end{equation}
parametrized by $\theta\in \mathbb{R}^{\dim(\mathcal{N})}$, is given by
\begin{equation} I(\theta):= \mathrm{Tr}(\rho_{\theta}d_{l}\rho_{\theta}\otimes d_{l}\rho_{\theta})\label{QIFM}\end{equation}
with the implicitly defined logarithmic differential $d_{l}$ related to an operator-valued 1-form
\begin{equation} d\rho_{\theta}=\frac{1}{2}(\rho_{\theta} d_{l}\rho_{\theta}+d_{l}\rho_{\theta}\rho_{\theta}),\end{equation}
where the `ordinary' differential $d\rho_{\theta}$ is considered in respect to the parameters $\theta$.\\
In the case of pure states $\rho=\rho^2$, one finds
\begin{equation} d\rho^2_{\theta}=\rho_{\theta} d\rho_{\theta} +d\rho_{\theta} \rho_{\theta} =d\rho_{\theta},\end{equation}
and therefore $d_l \rho_{\theta} = 2d\rho_{\theta}$. In conclusion,
\begin{equation} I(\theta)= 4\mathrm{Tr}(\rho_{\theta}d\rho_{\theta}\otimes d\rho_{\theta})\quad \mbox{if $\rho_{\theta}$ is pure.}\end{equation}
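As an elementary example within the above conventions, consider a pure qubit $\rho_{\theta}=\frac{1}{2}(\mathds{1}+\hat{n}(\theta)\cdot\vec{\sigma})$ with $|\hat{n}|=1$. Using $\mathrm{Tr}(\sigma_a\sigma_b)=2\delta_{ab}$ and $\mathrm{Tr}(\sigma_a\sigma_b\sigma_c)=2i\epsilon_{abc}$, a direct computation gives
\begin{equation} I(\theta) = \delta_{ab}\, dn^a\otimes dn^b + i\,\epsilon_{cab}\, n^c\, dn^a\otimes dn^b,\end{equation}
whose symmetric part is the round metric $d\hat{n}\cdot d\hat{n}$ on the unit Bloch sphere, i.e.\,the familiar quantum Fisher information metric of a pure qubit.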
By taking into account the pull-back induced by the momentum map on the associated Hilbert space according to section \ref{fb-metric},
one may identify a submanifold $\mathcal{M}\subset \mathcal{H}_0$ of Hilbert space vectors $\ket{\psi_{\theta}}\in \mathcal{M}$, such that the restriction of the momentum map
\begin{equation} \mu|_{\mathcal{M}}:\mathcal{M} \rightarrow u^*(\mathcal{H}),\quad \ket{\psi_{\theta}} \mapsto\mu(\ket{\psi_{\theta}})=\frac{\ket{\psi_{\theta}}\bra{\psi_{\theta}}}{\braket{\psi_{\theta}}{\psi_{\theta}}}:=\rho^{\psi}_{\theta}\end{equation}
on this submanifold gives rise to a pullback tensor field
$$ K:=\mathrm{Tr}(\rho^{\psi}_{\theta}d\rho^{\psi}_{\theta}\otimes d\rho^{\psi}_{\theta})$$
\begin{equation}=\frac{\tbraket{d\psi_{\theta}}{d\psi_{\theta}}}{\braket{\psi_{\theta}}{\psi_{\theta}}}- \frac{\braket{\psi_{\theta}}{d\psi_{\theta}}}{\braket{\psi_{\theta}}{\psi_{\theta}}}\otimes \frac{\braket{d\psi_{\theta}}{\psi_{\theta}}}{\braket{\psi_{\theta}}{\psi_{\theta}}}\,,\label{PHT}\end{equation}
on $\mathcal{M}\subset \mathcal{H}_0$. Hence, we have the pull-back of the Fubini-Study metric tensor field from the space of rays $\mathcal{R}(\mathcal{H})$ to $\mathcal{N}$, seen from a submanifold $\mathcal{M}$ in the Hilbert space. As a consequence, and in accordance with \cite{Facchi:2010}, the pull-back tensor field $K$ on a submanifold $\mathcal{M}$ of quantum state vectors
\begin{equation} \psi(x,\theta) \equiv p(x, \theta)^{1/2} e^{iW(x, \theta)}\in L^2(\mathbb{R}^n),\quad x\in \mathbb{R}^n,\quad \theta\in \mathbb{R}^{\dim(\mathcal{N})}\label{theta-sm},\end{equation}
is related to the quantum information metric $I(\theta)$ in
(\ref{QIFM}) if $\rho_{\theta}$ is a pure state.\\
To illustrate the pull-back, we define for any given tensor field $T(x,\theta)$ of order $r$ (including functions for order $r=0$) the generalized expectation value integral
\begin{equation} \mathbb{E}_p(T):= \int_{\mathbb{R}^n} p(x,\theta) T(x,\theta) dx,\label{deriv1} \end{equation}
which `traces out' the $x$-dependence of the tensor field $T$. A straightforward computation (see \cite{Facchi:2010}) yields then the identification of a pull-back tensor field
\begin{equation} K= G +i\Omega,\end{equation}
on the submanifold $\mathcal{M}$ which is decomposed into a symmetric tensor field
\begin{equation} G:= \mathbb{E}_p((d\ln p)^{\otimes 2}) +\mathbb{E}_p(dW ^{\otimes 2})- \mathbb{E}_p(dW)^2\label{f1}\end{equation}
and an antisymmetric tensor field
\begin{equation} \Omega := \mathbb{E}_p(d\ln p\wedge dW).\end{equation}
Moreover, by taking into account in the symmetric part a further decomposition
\begin{equation} G\equiv \mathcal{F} + \mbox{Cov}(dW),\end{equation}
one recovers the \emph{Classical Fisher Information metric}
\begin{equation} \mathcal{F} := \mathbb{E}_p((d\ln p)^{\otimes 2})\end{equation}
and a \emph{phase-covariance matrix tensor field}
\begin{equation} \mbox{Cov}(dW):=\mathbb{E}_p(dW ^{\otimes 2})- \mathbb{E}_p(dW)^2.\label{f2}\end{equation}
The parts of the pull-back tensor field containing the phase $W$ in differential form may therefore be identified, for pure states, as the non-classical counterpart of the classical Fisher metric within the quantum information metric. As a matter of fact,
the quantum information metric collapses to the \emph{classical Fisher information metric} for \begin{equation} dW=0,\end{equation}
i.e.\,if the phase is constant.
\section{Conclusions and Outlook}\label{outlook}
For a given embedding of the manifold $\mathcal{M}$ into the Hilbert space $\mathcal{H}$,
\begin{equation} f:\mathcal{M}\rightarrow \mathcal{H},\end{equation}
we have seen that, for pure states $\rho\in D^1(\mathcal{H})$, up to a constant factor,
\begin{equation} I(\theta) = f^*_{\mathcal{M}}\mathrm{Tr}(\rho\, d\rho\otimes d\rho),\end{equation}
while, for IOVTs associated with the realization (\ref{NL-op}) and evaluated on pure states, we have
\begin{equation} \rho(\mbox{IOVT}) = f^*_{\mathcal{G}}\mathrm{Tr}(\rho\, d\rho\otimes d\rho).\end{equation}
But then we have
\begin{equation} I(\theta) = \rho(\mbox{IOVT}) \mbox{ if } \mathcal{M}=\mathcal{G}.\end{equation}
Hence, pure quantum state-evaluated IOVTs are directly related to a quantum information metric, if the pull-back of $\mathrm{Tr}(\rho\, d\rho\otimes d\rho)$ is made on a Lie group and associated $\mathcal{G}$-orbits respectively.\\
To identify pull-back tensor fields from $\mathcal{R}(\mathcal{H})$ associated with mixed quantum states,
we shall consider the pull-back on $$\mathcal{G}\equiv U(n)\times U(n)\times U(n)\times.. \times U(n)$$ inducing reduced density state dependencies and associated tensor field splittings on multi-partite systems $\mathcal{H}\cong (\mathbb{C}^n)^{\otimes N>2}$.
As in the case of pure states, we believe that a connection to the IOVT-construction on Lie groups of general linear transformations (see \cite{Grabowski:2000zk}) should reduce the computational effort in concrete applications involving strata of mixed states with fixed rank. But also for more general submanifolds of quantum states, we may deal with the idea of computing quantum information distances on the level of a Hilbert space, rather than on the convex set of density states, by taking into account quantum state purification procedures \cite{Man'ko:2000ti, Man'ko:2002ti}. In this way the advantage of dealing with probability amplitudes rather than with probability densities \cite{Facchi:2010} may be generalized from the regime of pure to mixed states. Besides the possible computational advantages related to density state purification, we shall also underline physical motivations for taking into account pure rather than mixed states as fundamental physical states \cite{Penrose:2004}. Hence, by virtue of the latter point of view, we may in general put the geometry of the projective Hilbert space at the first place, even though we are dealing with the generalized regime of mixed states.
\section*{Acknowledgments}
This work was supported by the National Institute of Nuclear Physics (INFN).
\section{Introduction}
\subsection{Symmetric probes}
Probes were introduced by McDuff~\cite{McD11} to prove that some toric fibres are displaceable. Probes are rational segments in the moment polytope of a toric manifold which hit the boundary integrally transversely in one point. The latter condition implies that one can perform symplectic reduction on a probe and obtain a two-disk as reduced space, see also the exposition in~\cite{AbrMac13}. Toric fibres map to circles in the reduced space, where displaceability questions are easy to settle since they boil down to area arguments.
Symmetric probes are rational segments in which \emph{both} endpoints hit the boundary of the moment polytope integrally transversely. They were introduced in a follow-up paper to~\cite{McD11} by Abreu--Borman--McDuff~\cite{AbrBorMcD14} to settle some more subtle displaceability questions. Here, we use them to a different end. The reduced space associated to a symmetric probe is a two-sphere and the quotient map takes toric fibres to orbits of the standard~$S^1$-action on the two-sphere. Observe that -- except for the equator -- orbits of this circle action in~$S^2$ appear in pairs which are Hamiltonian isotopic. Our main observation is that, since Hamiltonian isotopies in reduced spaces can be lifted, this proves that toric fibres corresponding to such pairs of circles are Hamiltonian isotopic, as well. This is illustrated in Figure~\ref{fig:1}.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\node[inner sep=0pt] at (0,0)
{\includegraphics[trim={1.5cm 6.5cm 4cm 2cm},clip,scale=0.75] {fig1.pdf}};
\node at (-2.4,0.55){$x$};
\node at (-2.9,-1.5){$x'$};
\node at (-4,1.3){$\Delta$};
\node at (4.5,1.5){$S^2$};
\node at (-2.7,-0.5){$\sigma$};
\node at (1.3,0.64){$S^1_x$};
\node at (1.35,-1.675){$S^1_{x'}$};
\end{tikzpicture}
\caption{A symmetric probe~$\sigma$ in a moment polytope~$\Delta$ with points~$x,x'$ at equal distance to the boundary. The toric fibres~$T(x),T(x')$ map to the circles~$S^1_x,S^1_{x'} \subset S^2$ under symplectic reduction.}
\label{fig:1}
\end{center}
\end{figure}
To state this formally, let us introduce some notation. Let~$(X^{2n},\omega)$ be a (not necessarily compact) toric symplectic manifold with moment map~$\mu \colon X \rightarrow \mathbb{R}^n$ and moment polytope~$\mu(X) = \Delta$. For~$x \in \operatorname{int} \Delta$ the set~$T(x) = \mu^{-1}(x)$ is a Lagrangian torus, called a \emph{toric fibre}. A \emph{symmetric probe}~$\sigma \subset \Delta$ is a rational segment intersecting~$\partial \Delta$ integrally transversely in the interior of two facets, see also~\cite[Definition 2.2.3]{AbrBorMcD14}. An intersection of a rational line and a rational hyperplane is called \emph{integrally transverse} if their union contains a~$\mathbb{Z}$-basis of~$\mathbb{Z}^n$. See also Definition~\ref{def:symmprobe} and the surrounding discussion or~\cite[\S 2.1]{McD11} for more details.
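To illustrate the transversality condition in dimension two: a segment with primitive direction~$v=(1,1)$ hits a horizontal facet, spanned by~$w=(1,0)$, integrally transversely, since~$\{v,w\}$ is a~$\mathbb{Z}$-basis of~$\mathbb{Z}^2$; a segment with direction~$v=(1,2)$ does not, since the corresponding determinant equals~$-2$. Equivalently, integral transversality at a facet with primitive conormal~$\nu$ amounts to~$\langle \nu, v \rangle = \pm 1$.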
\begin{TheoremA}
\label{thm:mainA}
Let~$(X,\omega)$ be a toric symplectic manifold and let~$\sigma \subset \Delta$ be a symmetric probe in its moment polytope. Furthermore, let~$x,x' \in \sigma$ be two points at equal distance to the boundary~$\partial \Delta$. Then~$T(x)$ and~$T(x')$ are Hamiltonian isotopic.
\end{TheoremA}
\subsection{Classification of toric fibres}
\label{ssec:introclass}
Deciding which two given Lagrangians~$L$ and~$L'$ in $(X,\omega)$ can be mapped to one another by a symplectomorphism or by a Hamiltonian diffeormorphism is a central question in symplectic geometry. In many situations, it is quite hopeless to give a full classification -- even constructing examples of Lagrangians that are not equivalent to known ones (so-called \emph{exotic} Lagrangians) is an active area of research where many questions are open, see for example~\cite{Aur15, CheSch10, Via17, Via16}. In this paper we care about the following classification question of Lagrangian submanifolds.
\begin{question}
\label{qu:class}
In a toric symplectic manifold~$(X,\omega)$, give a classification of toric fibres up to Hamiltonian diffeomorphisms of the ambient space.
\end{question}
\begin{remark}
One can ask the same questions for symplectomorphisms of the ambient space. In this paper we focus on the case of Hamiltonian diffeomorphisms. See also Remark~\ref{rk:sympclass}.
\end{remark}
Although Question~\ref{qu:class} is a much less ambitious question than a full classification of all Lagrangian tori (since we exclude exotic tori a priori) of~$X$, it is open except for a few special cases and surprisingly absent from the literature. To our knowledge, it has only been answered for~$\mathbb{C}^n$ (where toric fibres are simply product tori) by Chekanov~\cite[Theorem A]{Che96} and for~$\mathbb{C} P^2$ by Shelukhin--Tonkonog--Vianna~\cite[Proposition 7.1]{SheTonVia19}.
Let us make some conventions. From now on, we call~$T(x)$ and~$T(x')$ \emph{equivalent} and write~$T(x) \cong T(x')$ if they can be mapped to one another by a Hamiltonian diffeomorphism of the ambient space. Furthermore, let
\begin{equation}
{\mathfrak{H}}_x = \{ x' \in \operatorname{int} \Delta \, \vert \, T(x) \cong T(x') \},
\end{equation}
the set of toric fibres equivalent to~$T(x)$. A first guess may be that~${\mathfrak{H}}_x = \{x\}$, since the zero-section in~$T^*T^n$ is non-displaceable (see~\cite[\S 11.3]{McDSal17} and the references therein) and thus this is true if we restrict our attention to Hamiltonian diffeomorphisms supported in a Weinstein neighbourhood of~$T(x)$. However, a glance at~$S^2$ shows that this guess is wrong, since one can use the topology of the ambient space to obtain non-trivial equivalences of toric fibres. More generally, by Theorem~\ref{thm:mainA}, symmetric probes (and their concatenations) can be used to construct equivalences of toric fibres up to Hamiltonian diffeomorphisms. Let us also point out that symmetric probes are abundant in arbitrary toric manifolds -- at least close to the boundary of the moment polytope, see \S\ref{ssec:arbitrary}. We conjecture that the method of constructing equivalent toric fibres by symmetric probes gives a complete anwer to the classification question.
\begin{conjecture}
\label{conj:main}
Two toric fibres~$T(x),T(x') \subset X$ are equivalent if and only if they are equivalent by a sequence of symmetric probes.
\end{conjecture}
In Section~\ref{sec:examples}, we verify this conjecture for~$\mathbb{C}^n$ and~$\mathbb{C} P^2$ (where the classification was previously known), for~$\mathbb{C} \times S^2, \mathbb{C}^2 \times T^*S^1, T^*S^1 \times S^2$ and for monotone~$S^2 \times S^2$ (where we classify toric fibres). The classification of toric fibres in non-monotone~$S^2 \times S^2$ is more intricate and is given in~\cite{BreKim23}.
On the side of obstructions to Hamiltonian equivalence, we prove the following.
\begin{TheoremA}
\label{thm:mainB}
If toric fibres~$T(x),T(x') \subset X$ of a compact toric manifold~$X$ are Hamiltonian isotopic, then the following three invariants agree
\begin{equation}
\label{eq:chekinvariants}
d(x) = d(x'), \quad
\#_d(x) = \#_d(x'), \quad
\Gamma(x) = \Gamma(x').
\end{equation}
\end{TheoremA}
The invariant~$d(x) \in \mathbb{R}$ is the \emph{integral affine distance} of~$x$ to the boundary of the moment polytope. The invariant~$\#_d(x) \in \mathbb{N}_{\geqslant 1}$ is the number of facets of~$\Delta$ realizing the minimal distance~$d(x)$. Both of these invariants are \emph{hard} in the symplectic sense. The last invariant is the subgroup
\begin{equation}
\Gamma(x) = \mathbb{Z}\langle \ell_1(x)-d(x), \ldots, \ell_N(x)-d(x) \rangle \subset \mathbb{R}
\end{equation}
and it is soft. Here~$\ell_i(x)$ denotes the integral affine distance of~$x$ to the~$i$-th facet of~$\Delta$. Since these invariants are derived from Chekanov's invariants~\cite[Theorem A]{Che96} of product tori in~$\mathbb{R}^{2n} = \mathbb{C}^n$, we call them \emph{Chekanov invariants}.
Let us outline the proof of Theorem~\ref{thm:mainB}. Suppose~$T(x), T(x') \subset X$ are Hamiltonian isotopic fibres. By a construction going back to Delzant~\cite{Del88}, we can view~$X$ as a symplectic quotient of~$\mathbb{C}^N$, where~$N$ is the number of facets of~$\Delta$. The preimages of the tori~$T(x),T(x')$ under the symplectic quotient map are the product tori~$T(\ell(x)), T(\ell(x')) \subset \mathbb{C}^N$, where~$\ell = (\ell_1, \ldots, \ell_N)$. The Hamiltonian isotopy mapping~$T(x)$ to~$T(x')$ lifts to a Hamiltonian isotopy of~$\mathbb{C}^N$ mapping~$T(\ell(x))$ to~$T(\ell(x'))$. This means that Chekanov's invariants for product tori have to agree on~$T(\ell(x))$ and~$T(\ell(x'))$, which yields the statement. To our knowledge, this \emph{lifting trick} first appeared in~\cite{AbrMac13} to prove non-displaceability of certain fibres and it was also heavily used in~\cite{Bre20}. It is not obvious to us how to prove Theorem~\ref{thm:mainB} directly, i.e.\ without using the lifting trick. The first two invariants are clearly related to the area and the number of non-trivial Maslov two $J$-holomorphic disks of minimal area with boundary on the corresponding tori, respectively. It is not obvious how to pursue this due to the lack of monotonicity, although an approach in the spirit of~\cite{SheTonVia19} may be promising, especially in dimension four, see~\cite[\S 5.6]{SheTonVia19}.
The invariants in Theorem~\ref{thm:mainB} are not complete, even in very simple examples such as~$\mathbb{C} P^2$, see Example~\ref{ex:cp2notcomplete}. We suspect that the first two invariants are all there is in terms of hard obstructions, but that the soft invariant~$\Gamma(\cdot)$ is far from optimal -- this is the case in all examples where we know the classification.
\subsection{Examples}
\label{ssec:introex}
Let us give some examples of symmetric probes. In dimension two, there are not many toric spaces. The main examples are~$T^*S^1 = S^1 \times \mathbb{R}$ equipped with the standard exact symplectic form and moment map given by projection to the~$\mathbb{R}$-coordinate;~$\mathbb{C} = \mathbb{R}^2$ equipped with the standard symplectic form and moment map~$z \mapsto \pi \vert z \vert^2$, and~$S^2$ equipped with the height function. We normalize the symplectic form~$\omega_{S^2}$ such that~$\int_{S^2} \omega_{S^2} = 2$ meaning that the corresponding moment polytope is~$[-1,1]$. In the two-dimensional setting, symmetric probes are not interesting and the classification of toric fibres boils down to simple area arguments. However, some four-dimensional products of the above examples (equipped with the product symplectic and toric structures) already contain non-trivial probes.
In~$\mathbb{C}^2$, there is one non-trivial probe direction, namely~$(1,-1)$, which can be used to show that~$T(x,y) \cong T(y,x)$; this also follows from the fact that all elements in~$\operatorname{U}(2)$ can be realized by Hamiltonian diffeomorphisms. These are all possible equivalences in~$\mathbb{C}^2$, as was shown by Chekanov~\cite{Che96}. In~$T^*S^1 \times S^2$, all directions~$(k,1)$ for~$k \in \mathbb{Z}$ give symmetric probes, see Figure~\ref{fig:2}. This proves that~$T(x,y)$ is Hamiltonian isotopic to all~$T(x+2ky,\pm y)$. Note that this also follows from a suspension argument due to Polterovich, see~\cite[Example 6.3.C]{Pol01} and the discussion in~\cite[\S1.3]{MakSmi19}. The example~$\mathbb{C} \times S^2$ is obtained from the previous one by a vertical symplectic cut and we will see in~\S\ref{ssec:excs2} that there are slightly more equivalences between toric fibres. In~$\mathbb{C} P^2$ and monotone~$S^2 \times S^2$, it is easy to see that symmetric probes realize all equivalences of toric fibres coming from symmetries of the moment polytope. In all of these examples, the method by probes is sharp and the classification of toric fibres is discussed in detail in Section~\ref{sec:examples}.
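To illustrate how such probe arguments work in practice, let us spell out the~$\mathbb{C}^2$ case (an elementary computation, included for the reader's convenience). The moment polytope is~$\Delta = \mathbb{R}^2_{\geqslant 0}$ and the probe through~$(x,y)$ in direction~$v = (1,-1)$ is the segment
\begin{equation}
\sigma = \{ (x+t, y-t) \;\vert\; -x \leqslant t \leqslant y \} \subset \Delta,
\end{equation}
which meets both facets integrally transversely since~$\{(1,-1),(1,0)\}$ and~$\{(1,-1),(0,1)\}$ are bases of~$\mathbb{Z}^2$. The point~$(x,y)$ corresponds to the parameter~$t = 0$ and~$(y,x)$ to~$t = y - x$, and both lie at distances~$x$ and~$y$ from the two endpoints of~$\sigma$, whence~$T(x,y) \cong T(y,x)$ by Theorem~\ref{thm:mainA}.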
\begin{figure}
\begin{center}
\begin{tikzpicture}
\node[inner sep=0pt] at (0,0)
{\includegraphics[trim={3cm 6.5cm 1cm 3cm},clip,scale=0.75] {fig2.pdf}};
\node at (-2.2,0.6){$(x,y)$};
\end{tikzpicture}
\caption{The set~${\mathfrak{H}}_{(x,y)}$ for~$T(x,y) \subset T^*S^1 \times S^2$ and some symmetric probes.}
\label{fig:2}
\end{center}
\end{figure}
In dimensions~$\geqslant 6$, the situation is qualitatively different from the above examples. Indeed, the set~${\mathfrak{H}}_x$ has accumulation points in~$\Delta$ for many~$x \in \operatorname{int} \Delta$, see Corollary~\ref{cor:accumulation}. This already occurs in the case of~$\mathbb{C}^3$, treated by Chekanov~\cite{Che96}, see also Theorem~\ref{thm:chekanov}. In essence, this is due to the existence of a symmetric probe in direction~$(1,1,-1)$ (or coordinate permutations thereof), see Figure~\ref{fig:5}. In~\S\ref{ssec:chekrev}, we show that one can recover Chekanov's classification using symmetric probes. The property that~${\mathfrak{H}}_x$ has accumulation points is not exclusive to dimension six and above. In fact, in the forthcoming~\cite{BreKim23}, we show that this occurs in~$S^2 \times S^2$ equipped with any non-monotone symplectic form.
In light of this, it would be very interesting to characterize the toric manifolds having the property that there exists~$x \in \operatorname{int} \Delta$ with~${\mathfrak{H}}_x$ not discrete.
\subsection{Hamiltonian monodromy of toric fibres} Let~$x \in \sigma \subset \Delta$ be the midpoint of a symmetric probe. The corresponding toric fibre~$T(x)$ projects to the equator of the sphere obtained as a reduced space and thus we do not get any equivalence with another toric fibre by the above method. However, we still get information about~$T(x)$. Indeed, there is a Hamiltonian isotopy of the reduced sphere mapping the equator to itself while reversing its orientation. By lifting such a Hamiltonian isotopy, we obtain a Hamiltonian isotopy mapping~$T(x)$ to itself with non-trivial homological monodromy, meaning that it induces a non-trivial map in~$\operatorname{Aut} H_1(T(x);\mathbb{Z})$. An explicit formula for this monodromy map in terms of data related to the symmetric probe~$\sigma$ is given in~\eqref{eq:basicinvolution}.
\begin{definition}
\label{def:hmg}
Let~$L \subset (X,\omega)$ be a compact Lagrangian submanifold. The \emph{Hamiltonian monodromy group} is given by
\begin{equation}
\label{eq:hmg}
{\mathcal H}_L = \{ (\phi\vert_L)_* \in \operatorname{Aut} H_1(L;\mathbb{Z}) \; \vert \; \phi \in \operatorname{Ham}(X,\omega),\, \phi(L) = L \}.
\end{equation}
\end{definition}
The analogous monodromy group for symplectomorphisms was computed by Chekanov for product tori and Chekanov tori in~\cite[Theorem 4.5]{Che96} and, in that case, the Hamiltonian monodromy group actually agrees with it. To our knowledge this is the first occurrence of this kind of question in the literature. See also Yau~\cite{Yau09} for related results and Hu--Lalonde--Leclercq~\cite{HuLalLec11} which establishes that weakly exact Lagrangian manifolds have trivial Hamiltonian monodromy group. See Porcelli~\cite{Por22} for recent progress in the same direction. Another recent work is Augustynowicz--Smith--Wornbard~\cite{AugSmiWor22} which makes significant progress in case~$L$ is a monotone Lagrangian torus and provides an excellent overview of the topic in its introduction.
Let~$\xi_1,\ldots,\xi_N \in \mathbb{Z}^n$ be the set of inward pointing primitive normal vectors to the facets of~$\Delta$, as in~\eqref{eq:polytope}. The vectors~$\xi_i$ naturally determine homology classes~$\xi_i \in H_1(T(x))$ for every toric fibre~$T(x)$. See for example the discussion surrounding~\eqref{eq:lambda}. Let~${\mathcal D}(x)$ be the subset of those normal vectors realizing the minimal integral affine distance of~$x$ to facets,
\begin{equation}
{\mathcal D}(x) = \{\xi_i \, \vert \, \ell_i(x) = d(x) \}.
\end{equation}
We call elements of this subset \emph{distinguished classes}. Note that~$\# {\mathcal D}(x) = \#_d(x)$. The following is an obstruction result for Hamiltonian monodromy of toric fibres.
\begin{TheoremA}
\label{thm:mainC}
Let~$T(x) \subset X$ be a toric fibre in a compact toric manifold. Every element in the Hamiltonian monodromy group~${\mathcal H}_{T(x)}$ acts by a permutation on the set~${\mathcal D}(x)$ of distinguished classes.
\end{TheoremA}
This theorem again follows from Chekanov's work~\cite[Theorem 4.5]{Che96} together with the lifting trick discussed in~\S\ref{ssec:introclass}. In fact, we get a stronger statement, see Theorem~\ref{thm:ambientmonodromy}. The number~$\#_d(x)$ of distinguished classes is maximal if~$T(x)$ is monotone, since all integral affine distances are equal in that case. In fact, in the monotone case, we recover~\cite[Theorem 2]{AugSmiWor22} for Hamiltonian diffeomorphisms, see Corollary~\ref{cor:smith}. Note that Theorem~\ref{thm:mainC} does not require monotonicity.
In terms of examples, we give a complete description of~${\mathcal H}_L$ for all toric fibres in~$S^2 \times S^2, \mathbb{C} P^2, \mathbb{C} \times S^2, \mathbb{C}^2 \times T^*S^1$ and~$T^*S^1 \times S^2$, and show that all Hamiltonian monodromy elements can be realized by symmetric probes as outlined above.
\subsection{Outline}
In Section~\ref{sec:toric}, we review the relevant toric geometry and in particular we discuss \emph{toric reduction}, a version of symplectic reduction which is compatible with the toric structure and on which we rely to prove Theorems~\ref{thm:mainA},~\ref{thm:mainB} and~\ref{thm:mainC}. Section~\ref{sec:symmprobes} is the heart of this paper, where we discuss symmetric probes and prove Theorem~\ref{thm:mainA}. In Section~\ref{sec:chekanovinv}, we discuss obstructions to the equivalence of toric fibres and prove Theorem~\ref{thm:mainB}. Furthermore, we discuss obstructions to which Hamiltonian monodromy can be realized for toric fibres. Section~\ref{sec:examples} is dedicated to examples and serves to illustrate the results of the previous sections.
\subsection*{Acknowledgements}
We cordially thank Jonny Evans, Joontae Kim and Felix Schlenk for many useful discussions. We are grateful to Jack Smith for generously sharing his insights on Hamiltonian monodromy and offering explanations about~\cite{AugSmiWor22}. This work was started at Université de Neuchâtel, partially supported by SNF grant 200020-144432/1, and continued at Tel Aviv University, partially supported by the Israel Science Foundation grant 1102/20 and by the ERC Starting Grant 757585.
\section{Some toric symplectic geometry}
\label{sec:toric}
In this section, we review toric geometry with special emphasis on a certain type of symplectic reduction, which we call \emph{toric reduction}. Toric reduction generalizes probes as well as Delzant's construction of toric symplectic manifolds, both of which heavily feature in this paper.
\subsection{Toric manifolds}
A symplectic manifold~$(X^{2n},\omega)$ together with a moment map~$\mu \colon X \rightarrow {\mathfrak{t}}^*$ is called \emph{toric} if~$\mu$ generates an effective Hamiltonian action of the~$n$-torus~$T^n$. By~${\mathfrak{t}}^*$ we denote the dual of the Lie algebra~${\mathfrak{t}}$ of~$T^n$. Choosing an identification~$T^n \cong \mathbb{R}^n/\mathbb{Z}^n$ induces an identification~${\mathfrak{t}}^* \cong \mathbb{R}^n$ and, depending on context, we will use both the invariant way and the coordinate-dependent way of seeing things. Note that some symplectic manifolds admit distinct toric structures and hence we are really concerned with the triple~$(X,\omega,\mu)$ when we say \emph{toric manifold} although we may just write~$X$ or~$(X,\omega)$ for simplicity.
A classical result by Delzant~\cite{Del88} states that if~$X$ is \emph{compact} toric\footnote{Many authors include compactness in the definition of \emph{toric}, but we do not.}, then the image~$\Delta = \mu(X)$ is a so-called \emph{Delzant polytope}, and that Delzant polytopes (up to integral affine transformations) classify toric manifolds up to equivariant symplectomorphism. There are many classical references for toric manifolds, e.g.~\cite{Aud04, Can03, Gui94}, and we refer to these for details. We revisit part of Delzant's result in~\S\ref{ssec:delzant}.
Due to Delzant's theorem, the moment polytope associated to a toric manifold~$X$ is a crucial object of study. We view it as
\begin{equation}
\label{eq:polytope}
\Delta = \{ x \in {\mathfrak{t}}^* \;\vert\; \ell_i(x) \geqslant 0 \}, \quad
\ell_i(x) = \langle x , \xi_i \rangle + \lambda_i.
\end{equation}
Here, we view the vectors~$\xi_i$ in~${\mathfrak{t}}$ and~$\langle \cdot, \cdot \rangle$ denotes the natural pairing of~${\mathfrak{t}}$ and its dual. Note that~${\mathfrak{t}}$ contains a natural lattice~$\Lambda$ obtained as the kernel of the exponential map~$\exp \colon {\mathfrak{t}} \rightarrow T^n$. Similarly, the dual~${\mathfrak{t}}^*$ contains the dual lattice~$\Lambda^*$. If we choose a basis, we can identify~$\Lambda \cong \mathbb{Z}^n$ and dually~$\Lambda^* \cong \mathbb{Z}^n$. Again, depending on context, we use both the invariant viewpoint and the coordinate-dependent one. Furthermore, since~$\Delta$ is rational (with respect to~$\Lambda^*$), we can choose the vectors~$\xi_i$ to be primitive in~$\Lambda$.
\begin{definition}
A vector~$v \in \Lambda$ in a lattice~$\Lambda$ is called \emph{primitive} if~$\alpha v \notin \Lambda$ for all~$0<\alpha<1$.
\end{definition}
Together with~\eqref{eq:polytope}, this condition uniquely determines~$\xi_i$ and~$\lambda_i$ in terms of~$\Delta$ and vice-versa. As we have mentioned above,~$\Delta$ is a \emph{Delzant polytope}, meaning that at every vertex the vectors~$\xi_i$ determining the facets meeting at that vertex form a basis of the lattice~$\Lambda$ over the integers. There is a natural symmetry group acting on~$\Delta \subset {\mathfrak{t}}^*$ without changing the toric manifold determined by~$\Delta$.
\begin{definition}
The \emph{integral affine transformations} of~$({\mathfrak{t}}^*,\Lambda^*)\cong(\mathbb{R}^n,\mathbb{Z}^n)$ are the elements in the group
\begin{equation}
\operatorname{Aut} \Lambda^* \ltimes {\mathfrak{t}}^* \cong \operatorname{GL}(n;\mathbb{Z}) \ltimes \mathbb{R}^n.
\end{equation}
\end{definition}
The elements in~$\operatorname{Aut} \Lambda^* \cong \operatorname{GL}(n;\mathbb{Z})$ correspond to base changes in the torus~$T^n$, whereas the translation part~${\mathfrak{t}}^* \cong \mathbb{R}^n$ corresponds to adding constant elements to the moment map. Neither of these transformations changes the Hamiltonian~$T^n$-action.
\subsection{Toric reduction}
\label{ssec:toricred}
In this paragraph we are interested in symplectic reduction with respect to subtori of a toric~$T^n$-action. We call symplectic reduction of this type \emph{toric reduction}. The symplectic quotient of this operation inherits a toric structure with moment polytope obtained by intersecting~$\Delta$ with an affine rational subspace in~${\mathfrak{t}}^*$. Roughly speaking, toric reductions are in bijection with inclusions (which are compatible in the sense of Definition~\ref{def:redadm}) of the moment polytope of the reduced space into the moment polytope of the initial space. Although we could not find a precise statement of sufficient generality in the literature, this idea is hardly new -- see for example~\cite{AbrMac13}. In fact, as we will discuss in~\S\ref{ssec:delzant}, the Delzant construction and McDuff's probes are special cases of Theorem~\ref{thm:toricred}. What may be new is the precise formulation we give in Definition~\ref{def:redadm} of the conditions for this reduction to yield a smooth symplectic quotient in terms of the geometry of~$\Delta$.
Let~$X$ be a toric manifold and~$\Delta$ its moment polytope. Note that symplectic reduction with respect to the full~$T^n$-action is pointless. Indeed, the reduced spaces are zero-dimensional. However, it is quite fruitful to perform symplectic reduction with respect to a subtorus~$K \subset T^n$. Dually, we may look at affine rational subspaces~$V \subset \mathbb{R}^n \cong {\mathfrak{t}}^*$. Indeed, to any affine rational subspace~$V$ we can associate its complementary torus
\begin{equation}
\label{eq:comptorus}
K_V = \exp(V^0), \quad
V^0 = \{ \xi \in {\mathfrak{t}} \,\vert\, \langle x - x', \xi \rangle = 0, \; x,x' \in V \} \subset {\mathfrak{t}},
\end{equation}
and vice-versa. Rationality of~$V$ is equivalent to the compactness of~$K_V$. The subspace~$V$ is a level set of the natural projection~${\mathfrak{t}}^* \rightarrow \operatorname{Lie}(K_V)^*$, meaning that the moment map~$\mu_{K_V} \colon X \rightarrow \operatorname{Lie}(K_V)^*$ generating the induced~$K_V$-action on~$X$ has level set~$\mu^{-1}(\Delta \cap V)$ for some suitable level. Thinking in terms of~$V$ instead of~$K_V$ or~$\mu_{K_V}$ has the advantage that both the subtorus and the level at which we wish to carry out reduction are fixed by a choice of~$V$. Furthermore, one can easily read off from the integral affine geometry of the pair~$(\Delta,V)$ whether the action of~$K_V$ on~$\mu^{-1}(\Delta \cap V)$ is free (and hence the reduction admissible). Obviously, this is not always the case, since~$V$ may contain a vertex of~$\Delta$ for example.
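As a simple illustration of~\eqref{eq:comptorus} (our own toy example): for the horizontal line~$V = \{x \in \mathbb{R}^2 \,\vert\, x_2 = c\}$ we obtain
\begin{equation}
V^0 = \operatorname{span}_{\mathbb{R}}\{e_2\} \subset {\mathfrak{t}}, \quad
K_V = \exp(V^0) = \{1\} \times S^1 \subset T^2,
\end{equation}
and~$\mu_{K_V}$ is the second component of the moment map, whose level set at~$c$ is precisely~$\mu^{-1}(\Delta \cap V)$.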
\begin{definition}
\label{def:redadm}
Let~$\Delta$ be a Delzant polytope and let~$V$ be an affine rational subspace. We call the pair~$(\Delta,V)$ \emph{reduction-admissible} if, for every face~$F \subset \Delta$ intersecting~$V$, the union of (the linear part of)~$F$ and (the linear part of)~$V$ contains a basis of the lattice~$\Lambda^*$.
\end{definition}
Analogously we call a polytope~$\Delta' \subset \Delta$ \emph{reduction-admissible} if it is obtained as the intersection~$\Delta' = \Delta \cap V$ of~$\Delta$ with a reduction-admissible~$V$. Note that one only needs to check reduction-admissibility at the faces~$F$ of the smallest dimension for which~$V \cap F$ is non-empty, i.e.\ at the vertices of the polytope~$\Delta'$.
\begin{theorem}[Toric reduction]
\label{thm:toricred}
Let~$\Delta \subset \mathbb{R}^n$ be a Delzant polytope and~$V \subset \mathbb{R}^n$ an affine rational subspace such that the pair~$(\Delta,V)$ is reduction-admissible. Then the action of~$K_V = \exp(V^0)$ on~$Z = \mu^{-1}(\Delta \cap V)$ is free and the reduced space~$X' = Z/K_V$ is itself toric with moment polytope~$\Delta' = \Delta \cap V$.
\end{theorem}
\noindent {\it Proof. \;}
Let~$e_1^*,\ldots,e_n^* \in \mathbb{R}^n = {\mathfrak{t}}^*$ be the standard basis. Reduction-admissibility implies that, up to applying an integral affine transformation, we may assume that
\begin{equation}
V = \operatorname{span}_{\mathbb{R}}\{e_1^*,\ldots,e_i^*\}, \quad
F = \operatorname{span}_{\mathbb{R}}\{e_j^*,\ldots,e_n^*\}, \quad j \leqslant i + 1.
\end{equation}
In this normal form, we have~$V^0 = \operatorname{span}_{\mathbb{R}}\{e_{i+1},\ldots,e_n\}$ and hence~$K_V = \{1\} \times T^{n-i}$. This subtorus acts freely on~$\mu^{-1}(F)$. Since this holds for any face~$F$ intersecting~$V$, the action of~$K_V$ is free and thus symplectic reduction is admissible.
The quotient manifold carries a residual~$T^n/K_V$-action. It is effective, since the~$T^n$-action on~$X$ is. Since~$\mu$ is invariant under the~$T^n$-action, it is in particular invariant under the induced~$K_V$-action and thus its restriction to~$Z = \mu^{-1}(\Delta \cap V)$ factors through the quotient by~$K_V$ and has image~$\Delta' = \Delta \cap V$. It is not hard to check that the map obtained in this way is a moment map generating the~$T^n/K_V$-action on the quotient. For dimensional reasons, the resulting action is toric.
\hspace*{\fill} $\Box$\\
Let~$M = T^n/K_V$ be the torus acting by the residual action. Note that, by definition,~$\Delta'$ is contained in~${\mathfrak{t}}^*$ instead of~$\operatorname{Lie}(M)^* = {\mathfrak{m}}^*$. However, one can pick an identification of~$({\mathfrak{m}}^*, \Lambda_{M}^*)$ with~$(V,\Lambda^* \cap V)$ and, up to an element in the integral affine transformations of~$({\mathfrak{m}}^*,\Lambda_M^*)$, this yields a well-defined polytope~$\Delta' \subset {\mathfrak{m}}^*$. Conversely, given an integral affine embedding
\begin{equation}
\label{eq:iota}
\iota \colon ({\mathfrak{m}}^*,\Lambda_M^*) \hookrightarrow ({\mathfrak{t}}^*, \Lambda^*), \quad \iota(\Delta') = \iota({\mathfrak{m}}^*) \cap \Delta
\end{equation}
such that~$(\Delta,\iota({\mathfrak{m}}^*))$ is reduction-admissible, there is a symplectic reduction from~$X$ to~$X'$. To summarize, there is a short exact sequence of tori,
\begin{equation}
\label{eq:sestori}
0 \rightarrow K_V \hookrightarrow T^n \stackrel{\Xi}{\rightarrow} M \rightarrow 0,
\end{equation}
where~$T^n$ acts on~$X$ and~$M$ acts on the reduced space~$X'$ such that the reduction map~$p \colon Z \rightarrow X'$ is equivariant with respect to the~$T^n$- and~$M$-actions meaning that
\begin{equation}
\label{eq:pequivariance}
p(t.x)=\Xi(t).p(x), \quad t \in T^n, \; x \in Z.
\end{equation}
In particular, orbits are mapped to orbits under toric reduction. This will be used in~\S\ref{ssec:toricfibres}.
\subsection{Delzant construction}
\label{ssec:delzant}
The Delzant construction gives a recipe for constructing a toric manifold~$(X,\omega,\mu)$ from a compact Delzant polytope~$\Delta$. We review it here, since it will be used in Section~\ref{sec:chekanovinv}, and refer to~\cite{Gui94} for details. Actually, the Delzant construction is a special case of toric reduction as discussed in~\S\ref{ssec:toricred} where~$X$ is obtained as a symplectic quotient of some~$\mathbb{C}^N$ equipped with its standard toric structure.
Let~$\Delta \subset {\mathfrak{t}}^*$ be a Delzant polytope with~$N$ facets. Since~$\Delta$ is compact, we have~$N > n$. Let~$(\mathbb{C}^N,\omega_0)$ be the standard symplectic vector space equipped with the moment map
\begin{equation}
\label{eq:stdmm}
\mu_0 \colon \mathbb{C}^N \rightarrow ({\mathfrak{t}}^N)^* \cong \mathbb{R}^N, \quad
(z_1,\ldots,z_N) \mapsto (\pi\vert z_1 \vert^2, \ldots, \pi\vert z_N \vert^2),
\end{equation}
which generates the standard~$T^N$-action on~$\mathbb{C}^N$ by rotation in the factors. Its image is the positive orthant~$\mathbb{R}^{N}_{\geqslant 0}$. Instead of starting with the subtorus~$K \subset T^N$ by which to reduce, we start by defining an inclusion
\begin{equation}
\label{eq:ell}
\ell \colon {\mathfrak{t}}^* \hookrightarrow \mathbb{R}^N, \quad
x \mapsto (\ell_1(x),\ldots,\ell_N(x)),
\end{equation}
which maps~$\Delta$ to~$\mathbb{R}^N_{\geqslant 0}$. The components~$\ell_i$ defined in~\eqref{eq:polytope} are the functions measuring the integral affine distance of a given point to the facets of~$\Delta$. The map~$\ell$ is an integral affine embedding as in~\eqref{eq:iota} and the subtorus~$K$ by which we reduce is given by~$K = \exp (\operatorname {im} \ell)^0 \subset T^N$. Using the Delzant condition on~$\Delta$, it is easy to check that the inclusion~$\ell(\Delta) \subset \mathbb{R}^N_{\geqslant 0}$ is reduction-admissible in the sense of Definition~\ref{def:redadm}. Thus the toric symplectic manifold~$(X,\omega,\mu)$ is obtained as the symplectic quotient~$X = \mu_0^{-1}(\ell(\Delta)) / K$.
Let us illustrate this by a simple example.
\begin{example}[Complex projective plane]
\label{ex:cp2}
Let~$\Delta \subset {\mathfrak{t}}^* = \mathbb{R}^2$ be the simplex defined by
\begin{equation}
\ell_1(x) = x_1 + 1, \quad
\ell_2(x) = x_2 + 1, \quad
\ell_3(x) = - x_1 - x_2 +1.
\end{equation}
This simplex is Delzant and since~$N=3$, we will obtain~$X$ as a symplectic reduced space of~$\mathbb{C}^3$. The map~$\ell$ is depicted in Figure~\ref{fig:3}. The orthogonal complement~$(\operatorname {im} \ell)^{\perp}$ is spanned by~$(1,1,1)$ and thus~$K=\{(t,t,t) \,\vert\, t \in S^1\} \subset T^3$ and~$\mu_K(z) = \pi(\vert z_1 \vert^2 + \vert z_2 \vert^2 + \vert z_3 \vert^2)$. We conclude that the symplectic reduction~$\mu_K^{-1}(3) = S^5(3) \rightarrow \mathbb{C} P^2$ corresponds to the Hopf fibration map. The symplectic form one obtains by this procedure is the Fubini--Study form~$\omega_{\mathbb{C} P^2}$ with normalization~$\int_{\mathbb{C} P^1}\omega_{\mathbb{C} P^2} = 3$.
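As a quick cross-check of the level at which the reduction is performed (our computation), note that the components of~$\ell$ sum to a constant,
\begin{equation}
\ell_1(x) + \ell_2(x) + \ell_3(x) = (x_1 + 1) + (x_2 + 1) + (-x_1 - x_2 + 1) = 3,
\end{equation}
so~$\ell({\mathfrak{t}}^*)$ lies in the affine plane~$\{y_1 + y_2 + y_3 = 3\}$ and~$\mu_K \equiv 3$ on~$\mu_0^{-1}(\ell(\Delta)) \subset \mu_K^{-1}(3)$.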
\end{example}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\node[inner sep=0pt] at (0,0)
{\includegraphics[trim={3cm 5cm 5.5cm 2.5cm},clip,scale=0.75] {fig3.pdf}};
\node at (-3.8,2){$\Delta $};
\node at (1.25,2){$\ell(\Delta)$};
\end{tikzpicture}
\caption{The idea of the Delzant construction in the case of~$X = \mathbb{C} P^2$. The complement of~$\operatorname {im} \ell$ generates the circle action by which the symplectic reduction is performed.}
\label{fig:3}
\end{center}
\end{figure}
\subsection{Toric fibres}
\label{ssec:toricfibres}
Every toric manifold~$X^{2n}$ contains an~$n$-parametric family of Lagrangian tori called \emph{toric fibres}.
\begin{definition}
Let~$x \in \operatorname{int}\Delta$ be a point in the interior of a toric moment polytope. The corresponding preimage~$T(x)=\mu^{-1}(x)$ is called a \emph{toric fibre}.
\end{definition}
\begin{example}
\label{ex:prodtori}
{\rm (Product tori)
The toric fibres of the standard toric structure~\eqref{eq:stdmm} are \emph{product tori}~$\mu_0^{-1}(a_1,\ldots,a_N) = S^1(a_1)\times \ldots \times S^1(a_N) \subset \mathbb{C}^N$, where~$a_i > 0$. Here,~$S^1(a) \subset \mathbb{C}$ denotes the circle bounding a disk of area~$a$.
}
\end{example}
Toric fibres are orbits of the~$T^n$-action with trivial stabilizer. This means that the torus action gives a canonical identification~$T^n \cong T(x)$ and
\begin{equation}
\label{eq:lambda}
\Lambda
= \ker(\exp \colon {\mathfrak{t}} \rightarrow T^n)
= \pi_1(T(x))
= H_1(T(x);\mathbb{Z}).
\end{equation}
Let us now discuss what happens to toric fibres under toric reduction. In general, let~$p \colon Z \rightarrow X$ be the quotient map of a symplectic reduction. If~$L \subset X$ is Lagrangian, then~$p^{-1}(L)$ is Lagrangian as well and we call it the \emph{lift} of~$L$. Conversely, any Lagrangian contained in~$Z$ is automatically invariant under the group action and projects to a Lagrangian in the reduced space. Adopting our notation from \S\ref{ssec:toricred}, let~$X'$ be a quotient obtained from~$X$ by toric reduction and let~$\iota(\Delta') \subset \Delta$ be the inclusion of the corresponding moment polytopes. Furthermore, we denote the reduction map by~$p \colon Z \rightarrow X'$ and the toric fibres in~$X$ by~$T(\cdot)$ and those in~$X'$ by~$T'(\cdot)$.
\begin{proposition}
\label{prop:toricfibresred}
In the above notation, we have the following correspondence of toric fibres in~$X$ and~$X'$,
\begin{equation}
p^{-1}(T'(x)) = T(\iota(x)) \subset X, \quad
x \in \operatorname{int} \Delta'.
\end{equation}
\end{proposition}
\noindent {\it Proof. \;}
This follows directly from the definition of the moment map~$\mu'$ on the quotient~$X'$.
\hspace*{\fill} $\Box$\\
In later sections, we will heavily use the second relative homotopy/homology groups of toric fibres, which is why we will discuss them here. Recall from~\eqref{eq:polytope} that the vectors~$\xi_i \in \Lambda$ are defined as orthogonal vectors to the facets of~$\Delta$. We prove the following well-known fact using the Delzant construction together with Proposition~\ref{prop:toricfibresred}.
\begin{proposition}
\label{prop:pi2}
Let~$(X,T(x))$ be a pair of a toric symplectic manifold and a toric fibre. Then~$\pi_2(X,T(x)) \cong \mathbb{Z}^N$, where~$N$ is the number of facets of~$\Delta$. Furthermore, there is a canonical basis~$D_1,\ldots,D_N \in \pi_2(X,T(x))$ bounding the classes~$\partial D_i = \xi_i \in \Lambda = \pi_1(T(x))$.
\end{proposition}
\noindent {\it Proof. \;}
By the Delzant construction and Proposition~\ref{prop:toricfibresred}, the toric fibre~$T(x)$ lifts to a product torus~$T(\ell(x)) \subset \mathbb{C}^N$ under the reduction map~$p \colon Z \rightarrow X$, where~$N$ is the number of facets of~$\Delta$. Let~$\tilde{D}_1,\ldots,\tilde{D}_N$ be the obvious basis of~$\pi_2(\mathbb{C}^N,T(\ell(x)))$. Note that these can be chosen to lie in~$Z \subset \mathbb{C}^N$ since the image of~$Z$ under the moment map~$\mu_0$ is equal to~$\ell(\Delta)$, the image of the moment polytope under the embedding~$\ell$ from~\eqref{eq:ell}. Furthermore, reduction maps induce isomorphisms of relative homotopy groups, see for example the proof of~\cite[Proposition 3.2]{Smi19}. This shows that~$\pi_2(\mathbb{C}^N,T(\ell(x)))$ and~$\pi_2(X,T(x))$ are isomorphic, and we denote the image of~$\tilde{D}_i$ under the isomorphism by~$D_i$. In order to compute the boundary operator~$\partial$, consider the commutative diagram
\begin{equation}
\label{eq:homotopycd}
\begin{tikzcd}
\pi_2(\mathbb{C}^N,T(\ell(x))) \arrow{r}{\partial'} \arrow{d}{p_*}
& \pi_1(T(\ell(x))) \arrow{d}{p_*} \\
\pi_2(X,T(x)) \arrow{r}{\partial}
& \pi_1(T(x)).
\end{tikzcd}
\end{equation}
The boundary operator~$\partial'$ is an isomorphism mapping~$\tilde{D}_i$ to the $i$-th standard basis vector~$e_i$ and therefore it suffices to understand~$p_*$ on the fundamental group. Recall from the discussion surrounding~\eqref{eq:sestori} that~$p$ is equivariant in the sense that~$p(t.z)=\Xi(t).p(z)$ for all~$t \in T^N$ and~$z \in \mathbb{C}^N$. In the special case of the Delzant construction, one can easily check that~$\Xi_*(e_i)=\xi_i$ and thus this proves the last claim. \hspace*{\fill} $\Box$\\
The homotopy long exact sequence for the pair~$(X,T(x))$ gives a short exact sequence,
\begin{equation}
\label{eq:seshomotopy}
0 \rightarrow \pi_2(X) \rightarrow \pi_2(X,T(x)) \rightarrow \pi_1(T(x)) \rightarrow 0.
\end{equation}
Indeed, the higher homotopy groups of the torus vanish and toric manifolds are simply-connected. In homology (with integer coefficients) we obtain the same short exact sequence,
\begin{equation}
\label{eq:seshomology}
0 \rightarrow H_2(X) \rightarrow H_2(X,T(x)) \rightarrow H_1(T(x)) \rightarrow 0.
\end{equation}
Indeed, the maps~$H_*(T(x)) \rightarrow H_*(X)$ are zero, since there is a contractible subset~$\Omega \subset X$ such that~$T(x) \subset \Omega \subset X$. Take for example~$\Omega = \mu^{-1}(\operatorname{int} \Delta \cup U)$, where~$U$ is a small neighbourhood of a vertex of~$\Delta$. There are obvious identifications of the respective groups in~\eqref{eq:seshomotopy} and~\eqref{eq:seshomology} which commute with the maps of these short exact sequences and thus we use homology and homotopy groups interchangeably.
Note that this discussion yields a very effective way to read off~$\pi_2(X) = H_2(X)$ from the moment polytope of a toric manifold. It is the kernel of~$\partial$, i.e.\ the lattice of integral relations among the vectors~$\xi_1,\ldots,\xi_N$ orthogonal to the facets of~$\Delta$. This in turn has a nice geometric interpretation in terms of the singular fibration structure of the moment map~$\mu \colon X \rightarrow \Delta$. Indeed, when moving from the interior of the moment polytope to the interior of a facet~$F_i$, the circle~$S^1(\xi_i) \subset T^n$ collapses, where by~$S^1(\xi_i)$ we denote the circle generated by the vector~$\xi_i \in \Lambda \subset {\mathfrak{t}}$ orthogonal to the facet~$F_i$. The canonical basis~$D_1,\ldots,D_N \in \pi_2(X,T(x))$ corresponds to the disks coming from these collapsing circles, which explains~$\partial D_i = \xi_i$. Furthermore, let~$\sum_i a_i D_i$ be an integral combination of such disks with~$\sum_i a_i \xi_i = 0$. The latter condition means that the corresponding concatenation of curves representing the~$\xi_i$ bounds in the fibre~$T(x)$, so that the disks can be glued to define a homotopy class in~$\pi_2(X)$, which illustrates~$\ker \partial = \pi_2(X)$.
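Let us carry this out for~$X = \mathbb{C} P^2$ with the moment polytope of Example~\ref{ex:cp2} (a worked example of ours). The normal vectors are~$\xi_1 = e_1$,~$\xi_2 = e_2$ and~$\xi_3 = -e_1 - e_2$, and
\begin{equation}
a_1 \xi_1 + a_2 \xi_2 + a_3 \xi_3 = (a_1 - a_3)e_1 + (a_2 - a_3)e_2 = 0
\quad \Longleftrightarrow \quad
a_1 = a_2 = a_3,
\end{equation}
so that~$\pi_2(\mathbb{C} P^2) = \ker \partial = \mathbb{Z}(D_1 + D_2 + D_3) \cong \mathbb{Z}$, the generator being the class of a line, of symplectic area~$\ell_1(x) + \ell_2(x) + \ell_3(x) = 3$.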
\section{Symmetric probes}
\label{sec:symmprobes}
Symmetric probes were first defined in~\cite{AbrBorMcD14}, where they were used to a different end. Let~$(X,\omega,\mu)$ be a toric symplectic manifold with moment polytope~$\Delta$.
\begin{definition}
\label{def:symmprobe}
A \emph{symmetric probe}~$\sigma \subset \Delta$ is a reduction-admissible line segment, see Definition~\ref{def:redadm}.
\end{definition}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\node[inner sep=0pt] at (0,0)
{\includegraphics[trim={3.5cm 5cm 8cm 2.8cm},clip,scale=0.75] {fig4-1.pdf}};
\node at (-3,2){$\Delta$};
\node at (-2,1){$\xi'$};
\node at (-2.15,-1){$\xi$};
\node at (1.55,0.9){$F'$};
\node at (0.6,-1.95){$F$};
\node at (-0.4,-1){$v$};
\node at (0.2,0.5){$\sigma$};
\end{tikzpicture}
\caption{A symmetric probe~$\sigma \subset \Delta$ and the surrounding notation.}
\label{fig:4}
\end{center}
\end{figure}
Let us unpack this definition and introduce some notation. By~$l \subset {\mathfrak{t}}^*$ we denote the line containing the symmetric probe~$\sigma$, by~$v \in \Lambda^*$ a primitive directional vector of~$l$ and by~$F$ and~$F'$ the facets of~$\Delta$ which~$\sigma$ intersects. We choose~$F,F'$ so that~$v$ points away from~$F$ and towards~$F'$. See Figure~\ref{fig:4} for an illustration of the set-up. Note that symmetric probes indeed do intersect facets, and not lower dimensional faces. Definition~\ref{def:symmprobe} implies that there is a basis of~$\Lambda^*$ contained in the unions~$l \cup F$ and~$l \cup F'$, respectively. This means that, locally, all intersections of symmetric probes with a facet are equivalent under integral affine transformations. After choosing a basis, we can work in~$(\mathbb{R}^n,\mathbb{Z}^n)$ and assume that
\begin{equation}
\label{eq:inttransnf}
v = e_n^*, \quad
F = \operatorname{span}_{\mathbb{R}}\{ e_1^*, \ldots , e_{n-1}^* \}.
\end{equation}
This follows from the fact that~$\operatorname{GL}(n;\mathbb{Z})$ acts transitively on the set of bases of~$\mathbb{Z}^n$. We take~\eqref{eq:inttransnf} to be the normal form of an intersection of a symmetric probe with a facet. McDuff~\cite{McD11} calls these intersections \emph{integrally transverse} and we refer to her paper for a detailed discussion of this notion. In the above notation we have~$\langle v,\xi \rangle = -\langle v,\xi' \rangle = 1$, where~$\xi,\xi' \in \Lambda$ are the normal vectors to~$F$ and $F'$, respectively. By the normal form~\eqref{eq:inttransnf}, it follows that we can assume~$\xi = e_n$, which implies that~$\xi' = \sum_{i=1}^{n-1} k_ie_i - e_n $. The numbers~$k_1,\ldots,k_{n-1} \in \mathbb{Z}$ completely determine the toric structure of a neighbourhood of the symmetric probe~$\sigma$ and they are topological invariants of the torus bundle coming from the reduction map~$\mu^{-1}(\sigma) \rightarrow S^2$ appearing in the proof of Theorem~\ref{thm:symmprobe}.
\begin{theorem}
\label{thm:symmprobe}
Let~$\sigma \subset \Delta$ be a symmetric probe and~$x,y \in \sigma$ be a pair of points lying at equal distance to the boundary of~$\sigma$. Then the toric fibres~$T(x)$ and~$T(y)$ are Hamiltonian isotopic by a Hamiltonian isotopy inducing the map
\begin{equation}
\label{eq:basicinvolution}
\Phi_{\sigma} \colon H_1(T(x)) \rightarrow H_1(T(y)), \quad
a \mapsto a + \langle v , a \rangle (\xi' - \xi)
\end{equation}
on the first homology of the toric fibres.
\end{theorem}
In particular, this proves Theorem~\ref{thm:mainA}. In~\eqref{eq:basicinvolution}, we have used the identification~$\Lambda = H_1(T(x)) = H_1(T(y))$ induced by the torus action. The map~$\Phi_{\sigma}$ is an involution and its~$(+1)$-eigenspace is~$(n-1)$-dimensional and given by the complement~$\sigma^0=v^0 \subset {\mathfrak{t}} = \Lambda \otimes \mathbb{R}$. Its~$(-1)$-eigenspace is spanned by~$\xi' - \xi$. Note also that~$\Phi_{\sigma}$ is uniquely determined by~$\sigma \subset \Delta$ and more precisely by an arbitrarily small neighbourhood of~$\sigma$ in~$\Delta$. Indeed, if we exchange~$\xi$ and~$\xi'$, then~$v$ changes its sign by our convention.
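Let us evaluate~\eqref{eq:basicinvolution} for the probe in~$\mathbb{C}^2$ discussed in~\S\ref{ssec:introex} (our running example). In the conventions of Figure~\ref{fig:4}, we may take~$v = e_1^* - e_2^*$,~$\xi = e_1$ and~$\xi' = e_2$, so that, for~$a = (a_1,a_2) \in \Lambda \cong \mathbb{Z}^2$,
\begin{equation}
\Phi_{\sigma}(a) = a + (a_1 - a_2)(e_2 - e_1) = (a_2, a_1),
\end{equation}
i.e.\ $\Phi_{\sigma}$ exchanges the two factors of~$H_1(T(x))$, as expected from the realization of~$T(x,y) \cong T(y,x)$ by unitary maps.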
In case~$x=y$, we obtain an interesting corollary about the Hamiltonian monodromy group (see Definition~\ref{def:hmg}) of the corresponding toric fibre.
\begin{corollary}
\label{cor:symmprobemon}
Let~$x$ be the midpoint of a symmetric probe~$\sigma$. Then the Hamiltonian monodromy group~${\mathcal H}_{T(x)}$ contains the element~$\Phi_{\sigma}$.
\end{corollary}
\proofof{Theorem~\ref{thm:symmprobe}}
Since~$\sigma \subset \Delta$ is reduction-admissible, we can perform toric reduction by Theorem~\ref{thm:toricred}. The reduced space is a copy of~$S^2$ with a standard symplectic form of total area equal to the integral affine length of~$\sigma$. Under the reduction, the fibres~$T(x), T(y)$ are mapped to a pair of circles~$S^1_x, S^1_y \subset S^2$ which are orbits of the residual Hamiltonian circle action on~$S^2$, see Proposition~\ref{prop:toricfibresred}. Since~$x,y$ are at equal distance to the boundary of~$\sigma$, the circles~$S_x^1,S^1_y$ bound disks of the same area and thus can be exchanged by a Hamiltonian isotopy~$\varphi$ on~$S^2$. Lift this Hamiltonian isotopy from~$S^2$ to~$X$ by lifting its Hamiltonian function by the reduction map to~$\mu^{-1}(\sigma)$ and extending it (for example by cut-off) to the total space. See for example~\cite[Lemma 3.1]{AbrMac13} or~\cite[Lemma 3.1]{Bre20} for details on lifting Hamiltonian isotopies.
Let us now compute the map induced by~$\varphi$ on~$\Lambda$. We work with homotopy groups here, but the problem is exactly the same in homology by the discussion in~\S\ref{ssec:toricfibres}. Let~$d_x,d_x' \in \pi_2(S^2,S^1_x)$ and~$d_y,d_y' \in \pi_2(S^2,S^1_y)$ be the generators of the relative homotopy groups such that~$d_x,d_y$ contain the south pole,~$d_x',d_y'$ contain the north pole, and~$d_x + d_x' = d_y + d_y' = [S^2]$ for a chosen orientation on~$S^2$. The map~$\varphi_*$ induced by the Hamiltonian isotopy~$\varphi$ on relative homotopy groups satisfies~$\varphi_*d_x = d_y'$ and~$\varphi_*d_y = d_x'$. Furthermore, the map~$\Phi_{\sigma}$ is uniquely determined by the properties
\begin{equation}
\label{eq:phisigma}
\Phi_{\sigma}(\xi) = \xi', \quad
\Phi_{\sigma}\vert_{v^0} = \operatorname{id}_{v^0},
\end{equation}
where~$v^0 \subset \Lambda$ denotes the elements on which~$v \in \Lambda^*$ vanishes. Indeed,~$\xi$ is transverse to~$v^0$ since~$\langle v, \xi \rangle = 1$. We show that the lift of~$\varphi$ satisfies~\eqref{eq:phisigma}, which proves the claim. The second property in~\eqref{eq:phisigma} follows from the~$K_{\sigma}$-equivariance of the lift of~$\varphi$, where~$K_{\sigma} = \exp \sigma^0$ is the complementary torus of the probe~$\sigma$. Indeed,~$K_{\sigma} \subset T^n$ is the subtorus with respect to which the symplectic reduction~$\mu^{-1}(\sigma) \rightarrow S^2$ is carried out, see also~\eqref{eq:comptorus} and the proof of Theorem~\ref{thm:toricred}, and thus any Hamiltonian isotopy lifted from the reduced space is equivariant with respect to this group action. For the first property in~\eqref{eq:phisigma}, note that the map~$p_* \colon \pi_2(\mu^{-1}(\sigma),T(x)) \rightarrow \pi_2(S^2,S^1_{x})$ induced by symplectic reduction is an isomorphism. See for example the proof of~\cite[Proposition 3.2]{Smi19}. Therefore~$\pi_2(\mu^{-1}(\sigma),T(x))$ is generated by~$D_x,D'_x$ with~$p_*(D_x) = d_x$ and~$p_*(D'_x) = d'_x$, and similarly for~$T(y)$. This allows us to conclude that the lift of~$\varphi$ maps~$D_x$ to~$D_y'$ and~$D_y$ to~$D_x'$. Since~$\partial_x D_x = \partial_y D_y = \xi$ and~$\partial_x D_x' = \partial_y D_y' = \xi'$, this finishes the proof.
\hspace*{\fill} $\Box$\\
Note that we have actually computed the map induced on relative second homology,
\begin{equation}
H_2(X,T(x)) \rightarrow H_2(X,T(y)), \quad
b \mapsto b + \langle v , \partial b \rangle (D' - D ),
\end{equation}
where~$D, D'$ denote the homology classes of the canonical basis in Proposition~\ref{prop:pi2} corresponding to~$F,F'$, respectively. Note also that the lift of~$\varphi$ in the proof of Theorem~\ref{thm:symmprobe} depends on the extension of the Hamiltonian function to~$X$ and is thus not uniquely defined by~$\varphi$.
\begin{remark}
By choosing a suitable cut-off of the lifted Hamiltonian function in the proof of Theorem~\ref{thm:symmprobe}, one can choose the Hamiltonian isotopy to be supported in an arbitrarily small neighbourhood of~$\sigma \subset \Delta$.
\end{remark}
\section{Chekanov invariants}
\label{sec:chekanovinv}
The main idea of this section is to use the Delzant construction to lift toric fibres of certain toric manifolds to product tori in some~$\mathbb{C}^N$ via Proposition~\ref{prop:toricfibresred} and to make use of the various results on product tori in~\cite{Che96}. In particular, this yields strong obstructions to the equivalence of toric fibres (Theorem~\ref{thm:mainB}) and their Hamiltonian monodromy (Theorem~\ref{thm:mainC}). As we shall discuss, similar results can be obtained \emph{by hand} (i.e.\ avoiding the lifting trick) via displacement energy and versal deformations, which comes in handy in case~$X$ cannot be seen as a toric reduction of~$\mathbb{C}^N$. However, we note that the approach by hand runs into the question of determining the displacement energy of toric fibres, which turns out to be very subtle in general, see for example the papers~\cite{AbrBorMcD14, McD11} for detailed discussions of the (qualitative) question of displaceability and~\cite[Section 3]{Bre20} for the quantitative question about displacement energy. In case~$X$ can be seen as a toric reduction of~$\mathbb{C}^N$, this question can be completely avoided by the lifting trick.
\begin{definition}
\label{def:redtype}
A toric symplectic manifold~$X$ is called \emph{of reduction type} if it can be obtained as a toric reduction of some~$\mathbb{C}^N$.
\end{definition}
By the Delzant construction in~\S\ref{ssec:delzant}, all compact toric manifolds are of reduction type. The space~$X = \mathbb{C} \times S^2$, which will be discussed in~\S\ref{ssec:excs2}, is an example of a non-compact space which is of reduction type.
Before moving to the Chekanov invariants, let us point out the following.
\begin{remark}[Classification up to symplectomorphisms]
\label{rk:sympclass}
We focus on equivalence of toric fibres up to Hamiltonian diffeomorphisms. One may ask an analogue of Question~\ref{qu:class} for the group of symplectomorphisms. Note that a toric symplectic manifold~$X$ is simply connected whenever its moment polytope has at least one vertex, meaning that the distinction between the two classification questions is, at best, a question about connected components of~$\operatorname{Symp}(X,\omega)$. In fact, both classifications agree in all simply connected examples we consider in Section~\ref{sec:examples} of this paper. This is not always true, as the following example illustrates. Let~$X$ be the space obtained from~$\mathbb{C} P^2$ by three small toric blow-ups of the same size~$\varepsilon > 0$ at the vertices of the original moment triangle. The resulting symplectic manifold is toric and its moment polytope is a hexagon with three long and three short edges. Near each of the short edges, there is a non-displaceable toric fibre, as was proved in~\cite[\S 5.5]{Cho08}. In particular, these three non-displaceable fibres are not equivalent under Hamiltonian diffeomorphisms. However they are symplectomorphic. Indeed, their base points can be permuted by integral affine symmetries of the moment polytope and such symmetries lift to symplectomorphisms of the corresponding toric manifold, see for example~\cite[Lemma 4.3]{BreKimMoo19} for a proof of this well-known fact.
In particular, Conjecture~\ref{conj:main} is false for equivalence up to symplectomorphisms.
\end{remark}
\subsection{Equivalence of toric fibres}
\label{ssec:chekanovinv}
As we have seen in Example~\ref{ex:prodtori}, the product tori
\begin{equation}
T(a)
= T(a_1,\ldots,a_N)
= S^1(a_1) \times \ldots \times S^1(a_N) \subset \mathbb{C}^N
\end{equation}
are a special case of toric fibres. Chekanov has given a classification of product tori\footnote{Chekanov calls these tori \emph{elementary tori}.} up to symplectomorphism in~\cite[Theorem A]{Che96}. A complete set of invariants is given by
\begin{eqnarray}
d(a) &=& \min\{a_1,\ldots,a_N\}, \\
\#_d(a) &=& \#\{ i \in \{1,\ldots,N \} \,\vert\, a_i = d(a) \}, \\
\Gamma(a) &=& \mathbb{Z}\langle a_1 - d(a), \ldots, a_N - d(a) \rangle,
\end{eqnarray}
where we write~$a = (a_1,\ldots,a_N) \in \mathbb{R}_{>0}^N$. The first invariant is a positive real number and corresponds to the displacement energy\footnote{In the original paper, Chekanov uses the first Ekeland--Hofer capacity instead.}~$d(a) = e(\mathbb{C}^N,T(a))$. The second invariant is a positive integer less than or equal to~$N$ (with equality if~$T(a)$ is monotone), which comes from versal deformations and displacement energy. As it turns out, versal deformations of product tori are given as the minimum of~$\#_d(a)$ linear functionals, and they contain no other information beyond this number. The third invariant~$\Gamma(a) \subset \mathbb{R}$ is a subgroup of~$\mathbb{R}$ generated by~$N - \#_d(a)$ elements and is a purely \emph{soft} invariant. In fact, it is the set of symplectic areas of disks with vanishing Maslov class~$m(\cdot)$. Note that in the case of~$\mathbb{C}^N$, the symplectic form has a primitive~$\lambda$ and thus we can express~$\Gamma(a)$ as
\begin{equation}
\Gamma(a) = \left\{ \left. \int_{\gamma} \lambda \in \mathbb{R} \;\right\vert \; \gamma \in H_1(T(a)), \; m(\gamma) = 0 \right\}.
\end{equation}
This invariant can be more explicitly expressed as~$\Gamma(a) = \mathbb{Z}\langle a_1 - d(a), \ldots , a_N - d(a) \rangle$.
\begin{theorem}[Chekanov]
\label{thm:chekanov}
The product tori~$T(a)$ and~$T(a')$ are symplectomorphic in~$\mathbb{C}^N$ if and only if
\begin{equation}
d(a) = d(a'), \quad
\#_d(a) = \#_d(a'), \quad
\Gamma(a) = \Gamma(a').
\end{equation}
\end{theorem}
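Two quick illustrations (ours, but immediate from the statement): in~$\mathbb{C}^2$, the product tori~$T(1,2)$ and~$T(1,3)$ satisfy~$d = 1$ and~$\#_d = 1$, but
\begin{equation}
\Gamma(1,2) = \mathbb{Z} \neq 2\mathbb{Z} = \Gamma(1,3),
\end{equation}
so they are not symplectomorphic. In~$\mathbb{C}^3$, on the other hand,
\begin{equation}
\Gamma(1,2,3) = \mathbb{Z}\langle 1,2 \rangle = \mathbb{Z} = \mathbb{Z}\langle 1,3 \rangle = \Gamma(1,2,4),
\end{equation}
while~$d = 1$ and~$\#_d = 1$ in both cases, so~$T(1,2,3)$ and~$T(1,2,4)$ are symplectomorphic although their area vectors do not agree up to permutation. This is the phenomenon behind the accumulation points of~${\mathfrak{H}}_x$ mentioned in~\S\ref{ssec:introex}.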
Let us now get back to the case of toric fibres and prove Theorem~\ref{thm:mainB}. Recall from~\S\ref{ssec:introclass} that the \emph{Chekanov invariants} of a toric fibre~$T(x)\subset X$ are defined in terms of the integral affine distances~$\ell(x) = (\ell_1(x),\ldots,\ell_N(x))$ of the point~$x$ to the facets of~$\Delta$,
\begin{eqnarray}
d(x) &=& \min\{\ell_1(x),\ldots,\ell_N(x)\}, \\
\#_d(x) &=& \#\{ i \in \{1,\ldots,N \} \,\vert\, \ell_i(x) = d(x) \}, \\
\Gamma(x) &=& \mathbb{Z}\langle \ell_1(x) - d(x), \ldots, \ell_N(x) - d(x) \rangle.
\end{eqnarray}
\proofof{Theorem~\ref{thm:mainB}}
We prove the result for all toric manifolds of reduction type, see Definition~\ref{def:redtype}. Let~$X$ be a toric manifold of reduction type and~$T(x),T(x') \subset X$ be toric fibres which are equivalent under Hamiltonian isotopies. Recall from~\S\ref{ssec:delzant} that we may view~$X$ as a toric reduction of~$\mathbb{C}^N$, where the inclusion map of the moment polytope~$\Delta$ of~$X$ into~$\mathbb{R}^N$ is given by the map~$\ell(x) = (\ell_1(x),\ldots,\ell_N(x))$. Furthermore, the toric fibre~$T(x)$ lifts to the product torus~$T(\ell(x))$ in~$\mathbb{C}^N$ by Proposition~\ref{prop:toricfibresred} and similarly for~$T(x')$. Since~$T(x)\cong T(x')$, we obtain that~$T(\ell(x))\cong T(\ell(x'))$. Indeed, Hamiltonian isotopies can be lifted through symplectic reductions by lifting the corresponding Hamiltonian function and extending it to~$\mathbb{C}^N$ by cut-off. It is easy to see that any such lift will map the lift of~$T(x)$ to the lift of~$T(x')$. Theorem~\ref{thm:mainB} now follows from Theorem~\ref{thm:chekanov}.
\hspace*{\fill} $\Box$\\
The Chekanov invariants are not complete, as the following example illustrates.
\begin{example}
\label{ex:cp2notcomplete}
Let~$\mathbb{C} P^2$ be the complex projective plane equipped with the toric structure described in Example~\ref{ex:cp2} and with moment polytope~$\Delta$, and set
\begin{equation}
x = \left( -\frac{5}{10} , -\frac{2}{10} \right), \quad
x' = \left( -\frac{5}{10} , \frac{1}{10} \right) \in \Delta.
\end{equation}
Since $\ell(x)=(1+x_1,1+x_2,1-x_1-x_2)$, we obtain
\begin{equation}
\ell(x) = \left( \frac{5}{10} , \frac{8}{10}, \frac{17}{10} \right), \quad
\ell(x') = \left( \frac{5}{10} , \frac{11}{10}, \frac{14}{10} \right) \in \mathbb{R}^3_{\geqslant 0}.
\end{equation}
By the classification of toric fibres in~$\mathbb{C} P^2$ from~\cite[Proposition 7.1]{SheTonVia19}, see also~\S\ref{ssec:cp2}, the fibres~$T(x)$ and~$T(x')$ are not Hamiltonian isotopic. However, their Chekanov invariants agree. Indeed, we find
\begin{equation}
d(x)=d(x')=\frac{1}{2}, \quad
\#_d(x)=\#_d(x')=1, \quad
\Gamma(x)=\Gamma(x')=\mathbb{Z}\left\langle \frac{3}{10} \right\rangle.
\end{equation}
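To spell out the last equality (an elementary check): we have
\begin{equation}
\Gamma(x) = \mathbb{Z}\left\langle \frac{3}{10}, \frac{12}{10} \right\rangle, \quad
\Gamma(x') = \mathbb{Z}\left\langle \frac{6}{10}, \frac{9}{10} \right\rangle,
\end{equation}
and both subgroups coincide with~$\mathbb{Z}\langle \frac{3}{10} \rangle$, since~$\frac{12}{10} = 4 \cdot \frac{3}{10}$, and since~$\frac{3}{10} = \frac{9}{10} - \frac{6}{10}$ with~$\frac{6}{10}, \frac{9}{10}$ both multiples of~$\frac{3}{10}$.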
\end{example}
\subsection{Hamiltonian monodromy}
\label{ssec:hammon}
Let~$\phi \in \operatorname{Ham}(X,\omega)$ be a Hamiltonian diffeomorphism of a toric manifold~$(X,\omega)$ mapping a toric fibre~$T(x)$ to a toric fibre~$T(x')$. Then one can consider the map induced on relative second homology,
\begin{equation}
\phi_* \colon H_2(X,T(x)) \rightarrow H_2(X,T(x')).
\end{equation}
We call this map \emph{ambient monodromy}. In the same vein as in~\S\ref{ssec:chekanovinv}, we derive obstructions to which maps~$\phi_*$ can be obtained in this way by using the Delzant construction to lift Hamiltonian isotopies. Note that by setting~$x = x'$ and by projecting to the first homology (see~\eqref{eq:seshomology}), we can extract information about the Hamiltonian monodromy question as a special case.
The key result by Chekanov is~\cite[Theorem 4.5]{Che96}.
\begin{theorem}[Chekanov]
\label{thm:chekanov2}
Let~$T(a),T(a') \subset \mathbb{C}^N$ be product tori. An isomorphism
\begin{equation}
\Phi \colon H_1(T(a)) \rightarrow H_1(T(a'))
\end{equation}
can be realized as~$(\phi\vert_{T(a)})_* = \Phi$ by a symplectomorphism~$\phi \in \operatorname{Symp}(\mathbb{C}^N,\omega_0)$ mapping~$T(a)$ to~$T(a')$ if and only if the following conditions hold
\begin{equation}
\Phi(\mathcal{D}(a)) = \mathcal{D}(a'), \quad
\Phi^* m_{T(a')} = m_{T(a)}, \quad
\Phi^* \sigma_{T(a')} = \sigma_{T(a)}.
\end{equation}
\end{theorem}
Here~$m_{T(a)} \in H^1(T(a);\mathbb{Z})$ and~$\sigma_{T(a)} \in H^1(T(a);\mathbb{R})$ are the Maslov class and the symplectic area class, respectively. By~${\mathcal D}(a) \subset H_1(T(a))$ we denote the set of \emph{distinguished classes}. In the standard basis~$e_1,\ldots,e_N \in H_1(T(a))$ the basis vector~$e_i$ is called a \emph{distinguished class} if the corresponding component in~$a = (a_1,\ldots,a_N)$ is minimal, i.e.\ if~$a_i = d(a)$.
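For example (an illustration of ours, immediate from Theorem~\ref{thm:chekanov2}): for~$a = a' = (1,1,2)$ we have~${\mathcal D}(a) = \{e_1,e_2\}$, and the isomorphism determined by
\begin{equation}
\Phi(e_1) = e_2, \quad
\Phi(e_2) = e_1, \quad
\Phi(e_3) = e_3 + e_1 - e_2
\end{equation}
preserves~${\mathcal D}(a)$, the Maslov class (all three images have Maslov index two) and the area class (symplectic areas~$1, 1$ and~$2 + 1 - 1 = 2$), and hence is realized by a symplectomorphism mapping~$T(1,1,2)$ to itself. Replacing~$e_1 - e_2$ by~$k(e_1 - e_2)$ for~$k \in \mathbb{Z}$ already yields infinitely many such monodromy elements.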
Let us now move to toric fibres. Recall from Proposition~\ref{prop:pi2} that for any toric fibre~$T(x) \subset X$, the relative second homology~$H_2(X,T(x))$ has a canonical basis~$D_1,\ldots,D_N$, where~$D_i$ corresponds to the~$i$-th facet of the moment polytope~$\Delta$ of~$X$.
\begin{definition}
Let~$T(x)\subset X$ be a toric fibre. The \emph{distinguished classes} of~$T(x)$ are the elements of the set
\begin{equation}
{\mathcal D}(x) = \{D_i \, \vert \, \ell_i(x) = d(x)\} \subset H_2(X,T(x)),
\end{equation}
i.e.\ elements of the canonical basis for which the distance of~$x \in \operatorname{int} \Delta$ to the corresponding facet of~$\Delta$ is minimal.
\end{definition}
Recall that there is a canonical inclusion~$H_2(X) \subset H_2(X,T(x))$, meaning that there is a distinguished subspace which is independent of the choice of~$x$. We prove the following.
\begin{theorem}
\label{thm:ambientmonodromy}
Let~$T(x),T(x') \subset X$ be toric fibres in a compact toric manifold~$X$ such that there exists a Hamiltonian diffeomorphism~$\phi \in \operatorname{Ham}(X,\omega)$ mapping~$T(x)$ to~$T(x')$. Then the induced map
\begin{equation}
\phi_* \colon H_2(X,T(x)) \rightarrow H_2(X,T(x'))
\end{equation}
on relative homology groups satisfies
\begin{equation}
\label{eq:exmonodromyconstraints}
\phi_*({\mathcal D}(x)) = {\mathcal D}(x'), \quad
\phi^*m_{T(x')} = m_{T(x)}, \quad
\phi^*\sigma_{T(x')} = \sigma_{T(x)}, \quad
\phi_*\vert_{H_2(X)} = \operatorname{id}.
\end{equation}
\end{theorem}
\noindent {\it Proof. \;}
The second and third identity in~\eqref{eq:exmonodromyconstraints} are general facts about the Maslov and the symplectic area class. The last identity is straightforward since Hamiltonian diffeomorphisms are isotopic to the identity on~$X$ and hence the map~$\phi_* \colon H_*(X) \rightarrow H_*(X)$ is the identity. For the first identity in~\eqref{eq:exmonodromyconstraints}, we again use the Delzant construction together with lifting the Hamiltonian isotopy. The following groups are canonically isomorphic,
\begin{equation}
\label{eq:canisom}
H_2(X,T(x))
\cong H_2(\mathbb{C}^N,T(\ell(x)))
\cong H_1(T(\ell(x))),
\end{equation}
see the proof of Proposition~\ref{prop:pi2}, where this is proved for the corresponding (relative) homotopy groups. Thus the map~$H_1(T(\ell(x))) \rightarrow H_1(T(\ell(x')))$ induced by the lifted Hamiltonian diffeomorphism is conjugate to~$\phi_*$ by the canonical isomorphism~\eqref{eq:canisom}. It is easy to see that the distinguished classes~${\mathcal D}(x)$ of~$T(x)$ are by definition mapped under~\eqref{eq:canisom} to the distinguished classes~${\mathcal D}(\ell(x))$ of the product torus~$T(\ell(x))$ and thus the first identity in~\eqref{eq:exmonodromyconstraints} follows from Theorem~\ref{thm:chekanov2}.
\hspace*{\fill} $\Box$\\
It seems reasonable to guess that these constraints are sufficient. More precisely, note that there is a canonical identification~$H_2(X,T(x)) = H_2(X,T(x'))$ for any two points~$x,x' \in \operatorname{int} \Delta$. Then we conjecture the following.
\begin{conjecture}
An isomorphism~$\Phi \in \operatorname{Aut} H_2(X,T(x))$ can be realized as ambient monodromy of a Hamiltonian diffeomorphism mapping~$T(x)$ to~$T(x')$ if and only if the identities in~\eqref{eq:exmonodromyconstraints} hold.
\end{conjecture}
We show that this conjecture holds in all examples discussed in Section~\ref{sec:examples}. In fact, we use the ambient monodromy and Theorem~\ref{thm:ambientmonodromy} to classify toric fibres and determine the Hamiltonian monodromy groups in these examples. The area class~$\sigma_{T(x)}$ determines~$x$ and hence proving this conjecture gives, in particular, an answer to Question~\ref{qu:class}.
Let us now move to the ordinary Hamiltonian monodromy group of toric fibres, see Definition~\ref{def:hmg}. To derive information about~${\mathcal H}_{T(x)}$ from Theorem~\ref{thm:ambientmonodromy}, fix~$x = x'$ and let~$\phi \in \operatorname{Ham}(X,\omega)$ be a Hamiltonian diffeomorphism such that~$\phi(T(x)) = T(x)$. Note that the ambient monodromy~$\phi_*$ determines the map~$(\phi\vert_{T(x)})_* \in \operatorname{Aut} H_1(T(x))$ by the short exact sequence~\eqref{eq:seshomology}.\medskip
\proofof{Theorem~\ref{thm:mainC}}
Any element in the Hamiltonian monodromy group~${\mathcal H}_{T(x)}$ comes from an ambient monodromy element~$\phi_*$ by~\eqref{eq:seshomology}, and hence the theorem follows directly from Theorem~\ref{thm:ambientmonodromy}, where the set of distinguished classes in~$H_1(T(x))$ is given by
\begin{equation}
\label{eq:disth1}
\partial {\mathcal D}(x)
= \{\xi_i \, \vert \, \ell_i(x) = d(x)\}
\subset H_1(T(x)),
\end{equation}
where~$\xi_i \in \Lambda \cong H_1(T(x))$ is the primitive defining vector of the~$i$-th facet of~$\Delta$. Indeed, recall from Proposition~\ref{prop:pi2} that the boundary of a canonical basis element~$D_i$ is~$\xi_i$.
\hspace*{\fill} $\Box$\\
It follows from Theorem~\ref{thm:mainC} that if the distinguished classes span the lattice~$H_1(T(x))$, then~${\mathcal H}_{T(x)}$ is a subgroup of the group of permutations on~$\#_d(x)$ elements. In particular, the Hamiltonian monodromy group is finite in this case. See also~\cite[Theorem 1]{AugSmiWor22}. In contrast, we shall see that the Hamiltonian monodromy group is infinite in some examples, see~\S\ref{ssec:excs2}, \S\ref{ssec:c2ts1} and~\S\ref{ssec:ts1s2}. The number~$\#_d(x)$ is maximal in case~$T(x)$ is the monotone toric fibre of a (monotone) toric manifold~$X$. In that case, we obtain the obstructive statement of~\cite[Theorem 2]{AugSmiWor22} for the group of Hamiltonian diffeomorphisms as a special case of Theorem~\ref{thm:ambientmonodromy}.
\begin{corollary}
\label{cor:smith}
Let~$T(x) \subset X$ be a monotone toric fibre. Then any element in~${\mathcal H}_{T(x)}$ acts as a permutation on the set~$\{\xi_1, \ldots, \xi_N\}$ of defining vectors of the polytope and the corresponding ambient monodromy acts as the identity on~$H_2(X)$.
\end{corollary}
\subsection{Displacement energy and versal deformations of toric fibres}
\label{ssec:displacementen}
In this subsection, we discuss obstructions for the equivalence of toric fibres and their Hamiltonian monodromy relying on versal deformations instead of the lifting trick employed in the proofs of Theorems~\ref{thm:mainB} and \ref{thm:ambientmonodromy}. This comes in handy in cases where~$X$ cannot be seen as a toric reduction of some~$\mathbb{C}^N$, and we will use these methods in~\S\ref{ssec:c2ts1} and~\S\ref{ssec:ts1s2}. Note that the direct approach by versal deformations has the drawback that it requires a computation of the displacement energy of toric fibres, at least on an open dense subset. See Assumption~\ref{ass:detoric}.
Let us briefly discuss displacement energy and versal deformations. We refer to~\cite{Che96} and especially~\cite{CheSch10, CheSch16} for more details. The displacement energy of a compact subset~$A \subset (X,\omega)$ is defined as the infimum of the Hofer norm taken over all Hamiltonian isotopies displacing~$A$ from itself,
\begin{equation}
e(X,A)
=
\inf\{\Vert H \Vert \, \vert \, \phi_1^H(A) \cap A = \varnothing \},
\end{equation}
and by convention~$e(X,A) = \infty$ if the infimum is taken over the empty set. The displacement energy is a symplectic invariant and we will use it only in case~$A$ is a Lagrangian.
For a compact Lagrangian~$L \subset X$, Chekanov introduced a way to strengthen a given symplectic invariant by looking at the invariant on Lagrangian neighbours of~$L$. This is called \emph{versal deformation} of~$L$. Perturbing~$L$ in a Weinstein neighbourhood, we find that nearby Lagrangians correspond to graphs of closed one-forms on~$L$. Furthermore, we can associate to every such perturbation an element in~$H^1(L;\mathbb{R})$, by taking its (Lagrangian) flux. Two such perturbations are Hamiltonian isotopic (with support in the Weinstein neighbourhood of~$L$) if and only if they map to the same element in~$H^1(L;\mathbb{R})$. Thus we obtain a continuous bijection between locally supported Hamiltonian isotopy classes of Lagrangian neighbours of~$L$ and a neighbourhood of the origin of~$H^1(L;\mathbb{R})$. As the flux description suggests, this correspondence is independent of the chosen Weinstein neighbourhood.
We may post-compose any symplectic invariant with the map from~$U \subset H^1(L;\mathbb{R})$ to classes of nearby Lagrangians. Here, we use displacement energy to obtain a function~$U \rightarrow \mathbb{R} \cup \{\infty\}$. By taking its germ, we obtain,
\begin{equation}
\label{eq:degerm}
{\mathcal E}_L \colon H^1(L;\mathbb{R}) \rightarrow \mathbb{R} \cup \{ \infty \}.
\end{equation}
\begin{definition}
We call the function~\eqref{eq:degerm} the \emph{displacement energy germ} of~$L \subset X$.
\end{definition}
The displacement energy germ is a symplectic invariant in the sense that if~$\phi \in \operatorname{Symp}(X,\omega)$, then
\begin{equation}
\label{eq:vdinv}
{\mathcal E}_L \circ \phi\vert_L^* = {\mathcal E}_{\phi(L)},
\end{equation}
where~$\phi\vert_L^*$ is the transpose of the isomorphism~$(\phi\vert_L)_* \colon H_1(L) \rightarrow H_1(\phi(L))$. In particular, this can be used to derive obstructions to Hamiltonian monodromy.
\begin{proposition}
\label{prop:vdmonodormy}
Let~$L \subset X$ be a compact Lagrangian submanifold. If~$\Phi \in {\mathcal H}_L$ is an element in the Hamiltonian monodromy group, then~${\mathcal E}_L \circ \Phi^* = {\mathcal E}_L$.
\end{proposition}
Let us discuss this in more detail in the special case where~$L= T(x) \subset X$ is a toric fibre of a toric manifold~$(X,\omega)$. Coming up with a versal deformation of toric fibres is straightforward. Indeed, a versal deformation of~$T(x)$ is obtained by varying the base point,~$a \mapsto T(x+a)$ for small enough~$a$, where we identify~$H^1(T(x);\mathbb{R}) \cong {\mathfrak{t}}^*$ as usual via the~$T^n$-action. Thus the crucial point in computing~${\mathcal E}_{T(x)}$ is finding the displacement energy of toric fibres~$e(X,T(x))$ as a function of~$x \in \operatorname{int} \Delta$. Let us make the following assumption.
\begin{assumption}
\label{ass:detoric}
On an open and dense subset of the moment polytope~$\Delta$, we assume that
\begin{equation}
\label{eq:detoric}
e(X,T(x))
= d(x)
= \min \{ \ell_1(x), \ldots, \ell_N(x) \}.
\end{equation}
\end{assumption}
Here,~$d(\cdot)$ denotes the integral affine distance to the boundary of~$\Delta$ as in~\eqref{eq:ell}. Recall that the functionals~$\ell_i(\cdot) = \langle \cdot , \xi_i \rangle + \lambda_i$ measure the integral affine distance of~$x$ to the~$i$-th facet of~$\Delta$. Let~$f,g$ be two functions defined on a vector space~$V$. Since equalities on open and dense subsets will come up quite often and are in fact sufficient for our purposes, we write~$f \simeq g$ if~$f$ and~$g$ agree on an open and dense subset of~$V$.
Let us briefly discuss why Assumption~\ref{ass:detoric} is reasonable. First, we note that the inequality~$e(X,T(x)) \geqslant d(x)$ holds whenever~$X$ is compact toric, and more generally, whenever~$X$ can be seen as the toric reduction of some~$\mathbb{C}^N$. This follows again from toric reduction and the lifting trick, see also~\cite[\S 3.2]{Bre20}. Indeed, if~$T(x) \subset X$ can be displaced with energy~$e$, then so can the corresponding product torus~$T(\ell(x)) \subset \mathbb{C}^N$ obtained by Proposition~\ref{prop:toricfibresred}. The displacement energy of the latter is precisely given by~$d(x)= \min \{ \ell_1(x), \ldots, \ell_N(x) \}$. Although this inequality may fail to be sharp (for example for non-displaceable tori), in all the examples we know of, it fails only on the complement of an open dense subset, meaning that Assumption~\ref{ass:detoric} still holds. Furthermore, the assumption holds for all compact \emph{monotone} toric symplectic manifolds of dimension~$\leqslant 18$ as was checked computationally. The monotone case in arbitrary dimension is related to the so-called Ewald conjecture. See~\cite{McD11} or~\cite[\S 3.4]{Bre20} for a detailed discussion. The following proposition is~\cite[Proposition 4.3]{Bre20}.
\begin{proposition}
\label{prop:vdtoricfibres}
Under Assumption~\ref{ass:detoric}, the displacement energy germ of~$T(x)$ is given by
\begin{equation}
{\mathcal E}_{T(x)}(a)
\simeq \min_{i \in I(x)} \{ \ell_i(x+a) \},
\end{equation}
where~$I(x) \subset \{1,\ldots,N\}$ is the subset of indices for which~$\ell_i(x)$ is minimal.
\end{proposition}
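For instance, for a product torus~$T(x) \subset \mathbb{C}^n$ one has~$\ell_i(x) = x_i$, and Assumption~\ref{ass:detoric} holds since the displacement energy of product tori is given by~$e(\mathbb{C}^n,T(x)) = \min\{x_1,\ldots,x_n\}$, as recalled above. In this case the proposition reads
\begin{equation}
{\mathcal E}_{T(x)}(a)
\simeq \min_{i \in I(x)} \{ x_i + a_i \},
\end{equation}
where~$I(x)$ is the set of indices realizing~$\min_i x_i$.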
Under Assumption~\ref{ass:detoric}, we can prove the symplectically hard part of Theorem~\ref{thm:mainB} and a weaker form of the hard part of Theorem~\ref{thm:ambientmonodromy}, where ambient monodromy is replaced by the map induced on first homology.
\begin{theorem}
\label{thm:alternative}
Let~$X$ be a toric manifold for which Assumption~\ref{ass:detoric} holds. Let~$\phi$ be a Hamiltonian diffeomorphism mapping a toric fibre~$T(x)$ to a toric fibre~$T(x')$. Then we have
\begin{equation}
\label{eq:alt}
d(x) = d(x'), \quad
\#_d(x) = \#_d(x').
\end{equation}
Furthermore, the map~$(\phi\vert_{T(x)})_* \colon H_1(T(x)) \rightarrow H_1(T(x'))$ acts by a permutation on distinguished classes,~$(\phi\vert_{T(x)})_*{\mathcal D}(x) = {\mathcal D}(x')$.
\end{theorem}
\noindent {\it Proof. \;}
Let~$U \subset \operatorname{int} \Delta$ be an open dense subset such that~\eqref{eq:detoric} holds for all~$x \in U$. For~$x,x' \in U$, we have~$d(x) = d(x')$, since the displacement energy is invariant under Hamiltonian diffeomorphisms. If~$x \notin U$ or~$x' \notin U$, use Proposition~\ref{prop:vdtoricfibres} to see that~$\min_{i \in I(x)} \{ \ell_i(x+a) \} \simeq \min_{i \in I(x')} \{ \ell_i(x'+a) \}$. Thus these two continuous functions of~$a$ are actually equal near~$a = 0$, and they yield~$d(x)$ and~$d(x')$, respectively, when evaluated at~$a =0$. The second invariance property in~\eqref{eq:alt} similarly follows from~\eqref{eq:vdinv} and Proposition~\ref{prop:vdtoricfibres} by noting that~$\#_d(x) = \#I(x)$. The claim about~$(\phi\vert_{T(x)})_*$ follows from~\eqref{eq:vdinv} and Proposition~\ref{prop:vdtoricfibres}. Indeed, recall that the distinguished classes of~$T(x)$ are the vectors~$\xi_i$ for which the corresponding~$\ell_i$ is minimal, see~\eqref{eq:disth1}.
\hspace*{\fill} $\Box$\\
To illustrate that the methods of this paragraph can be applied to a broader set of examples than toric fibres, we include the following example.
\begin{example}[Vianna tori in~$\mathbb{C} P^2$]
\label{ex:vianna}
Using Proposition~\ref{prop:vdmonodormy}, one can show that all Vianna tori in~$\mathbb{C} P^2$, except for the first and the second one, have trivial Hamiltonian monodromy groups. The Vianna tori in~$\mathbb{C} P^2$ form a countable family of monotone Lagrangian tori which are not pairwise symplectomorphic. They are in bijection with so-called \emph{Markov triples}, i.e.\ triples of natural numbers solving the \emph{Markov equation}. We refer to~\cite{Via16} for a detailed description. We denote the Vianna torus corresponding to a Markov triple~$(a,b,c)$ by~$T(a,b,c) \subset \mathbb{C} P^2$. This torus appears as a monotone fibre of an \emph{almost toric fibration} of~$\mathbb{C} P^2$ with base diagram given by a certain triangle~$\Delta_{a,b,c}$. On an open and dense subset of a neighbourhood of the origin in~$H^1(T(a,b,c);\mathbb{R})$, the displacement energy germ~${\mathcal E}_{T(a,b,c)}$ has level sets given by scalings of~$\partial\Delta_{a,b,c}$. This means that the versal deformation \emph{sees} the corresponding almost toric base diagram, and thus the integral affine equivalence class of~$\Delta_{a,b,c}$ is an invariant of~$T(a,b,c)$. In particular, this can be used to distinguish the Vianna tori as was noted by Chekanov--Schlenk in private communications. For a proof of this claim, see the forthcoming paper~\cite{BreMikSch21}.
Using Proposition~\ref{prop:vdmonodormy}, we note that a necessary condition for~$T(a,b,c)$ to admit non-trivial monodromy is that the corresponding almost toric base diagram admits some integral affine symmetry. Such a symmetry can only exist if at least two vertices are of the same integral affine type, i.e.\ if the same Markov number appears at least twice in the same triple. This is only the case for~$(1,1,1)$ and~$(1,1,2)$. The former is the Clifford torus which has Hamiltonian monodromy group isomorphic to the dihedral group~$D_6$ and the latter is the first non-trivial Vianna torus~$T(1,1,2)$ having monodromy group isomorphic to~$\mathbb{Z}_2$. For all other Vianna tori, we obtain~${\mathcal H}_{T(a,b,c)} = \{1\}$. In particular, the Hamiltonian monodromy group does not contain enough information to distinguish Vianna tori.
\end{example}
\section{Examples}
\label{sec:examples}
In Subsections~\S\ref{ssec:s2s2}--\ref{ssec:ts1s2}, we classify toric fibres and determine their Hamiltonian monodromy in some examples. With the exception of~$\mathbb{C}^2 \times T^*S^1$, our examples are four-dimensional. This comes from the fact that the classification question in dimensions~$\geqslant 6$ is qualitatively very different -- provided the moment polytope has at least one vertex. Indeed, in that case, there are toric fibres~$T(x)$ for which~${\mathfrak{H}}_x$ has accumulation points, see Corollary~\ref{cor:accumulation}.
The proofs of the results of this section all follow the same pattern. Equivalences and monodromy elements are constructed by symmetric probes. The main ingredients for the obstructive side are Theorems~\ref{thm:mainB} and~\ref{thm:ambientmonodromy} applied to the ambient monodromy map~$\phi_* \colon H_2(X,T(x)) \rightarrow H_2(X,T(x'))$ induced by a Hamiltonian diffeomorphism~$\phi$. The conceptual reason why constraints on ambient monodromy give constraints on equivalences of toric fibres is the observation that the symplectic area class of a toric fibre determines~$x \in \Delta$. These methods probably apply to most four-dimensional toric manifolds, with the computational complexity increasing with the number of edges of the moment polytope. The examples we chose are diverse in the sense that~$S^2 \times S^2$ and~$\mathbb{C} P^2$ are compact toric and thus the Delzant construction can be used directly;~$\mathbb{C} \times S^2$ is non-compact, but still a toric reduction of~$\mathbb{C}^3$; the spaces~$\mathbb{C}^2 \times T^*S^1$ and~$T^*S^1 \times S^2$ are non-compact and cannot be seen as toric reductions of any~$\mathbb{C}^N$. However, the latter is a toric reduction of the former. In the case of the former, we apply the direct methods from~\S\ref{ssec:displacementen}. Note also that the spaces~$\mathbb{C}^2 \times T^*S^1$ and~$T^*S^1 \times S^2$ are not simply-connected, whence the classification up to symplectomorphisms is drastically different from the classification up to Hamiltonian diffeomorphisms.
In~\S\ref{ssec:chekrev}, we revisit Chekanov's classification result and prove Conjecture~\ref{conj:main} for~$\mathbb{C}^n$. In~\S\ref{ssec:arbitrary}, we collect some remarks on how to construct symmetric probes in arbitrary toric manifolds.
Let us point out that all monodromy results for \emph{monotone} toric fibres in this section also follow from the methods developed in~\cite{AugSmiWor22}.
\subsection{The case of monotone $X = S^2 \times S^2$}
\label{ssec:s2s2}
Let~$S^2 \times S^2$ be equipped with the monotone product symplectic structure~$\omega = \omega_{S^2} \oplus \omega_{S^2}$, where~$\omega_{S^2}$ is the area form with normalization~$\int_{S^2} \omega_{S^2} = 2$. Then the corresponding moment polytope is given by the square~$\Delta = [-1,1] \times [-1,1]$. There are probes with four different directional vectors. The probes with~$v = e_1^*, e_2^*$ are admissible everywhere in the interior of the polytope. The probes with~$v = e_1^* + e_2^*$ and~$v = e_1^* - e_2^*$ are admissible everywhere except for the two main diagonals of the square.
Note that the equivalences of toric fibres generated by these probes can also be read off from the symmetries of~$\Delta = [-1,1] \times [-1,1]$. Let us turn to the classification of toric fibres.
\begin{proposition}
\label{prop:s2s2class}
The classification of toric fibres of monotone~$S^2 \times S^2$ is given by
\begin{equation}
{\mathfrak{H}}_x = \{ (\pm x_1 , \pm x_2) , (\pm x_2, \pm x_1) \}, \quad x = (x_1,x_2) \in \operatorname{int} \Delta.
\end{equation}
\end{proposition}
Note that the sets~${\mathfrak{H}}_x$ contain eight elements if~$\vert x_1 \vert \neq \vert x_2 \vert$ and both are non-zero, four elements if~$\vert x_1 \vert = \vert x_2 \vert \neq 0$ or if exactly one of the~$x_i$ is zero, and one element (the monotone fibre) if~$x_1 = x_2 = 0$. See Figure~\ref{fig:6}.
\smallskip
\begin{figure}
\begin{center}
\begin{tikzpicture}
\node[inner sep=0pt] at (0,0)
{\includegraphics[trim={5cm 6.5cm 7cm 0.8cm},clip,scale=0.8] {fig6.pdf}};
\end{tikzpicture}
\caption{Some symmetric probes in the monotone~$S^2\times S^2$. Points of the same colour denote equivalent fibres.}
\label{fig:6}
\end{center}
\end{figure}
\proofof{Proposition~\ref{prop:s2s2class}}
The constructive side follows either from the symmetric probes listed above or from the symmetries of~$\Delta$. For the obstructions, we will use Chekanov's invariants as expressed in Theorems~\ref{thm:mainB} and~\ref{thm:ambientmonodromy}. Let~$T(x)=T(x_1,x_2)$. By Chekanov's first invariant from Theorem~\ref{thm:mainB}, we can restrict our attention to the set~${\mathfrak{D}}_x$ of points lying at distance~$d(x)$ to the boundary of~$\Delta$. This set is the boundary of a square of size~$2(1-d(x))$ and it is stratified by the second Chekanov invariant. Indeed, we have~$\#_d(x) = 2$ whenever~$x$ is a vertex of~${\mathfrak{D}}_x$ and~$\#_d(x) = 1$ elsewhere on~${\mathfrak{D}}_x$. This means that we are left with proving the result on the interior of the four edges of~${\mathfrak{D}}_x$. Using the symmetries of~${\mathfrak{D}}_x \subset \Delta$, we can restrict our attention to the segment~$[0,x_2) \times \{x_2\}$ since this is a fundamental domain for~${\mathfrak{D}}_x$ under the symmetries.
\textbf{Claim:}
If~$T(x_1,x_2) \cong T(x_1',x_2)$ for some~$x_1, x'_1 \in [0,x_2)$ then~$x_1 = x_1'$.
\noindent
We use Theorem~\ref{thm:ambientmonodromy} to prove the claim. Suppose there is~$\phi \in \operatorname{Ham}(X,\omega)$ mapping~$T(x_1,x_2)$ to~$T(x_1',x_2)$. This induces
\begin{equation}
\phi_* \colon H_2(X,T(x_1,x_2)) \rightarrow H_2(X,T(x_1',x_2)).
\end{equation}
Let~$D_1,D_2,D_3,D_4$ be the canonical basis of~$H_2(X,T(x_1,x_2))$ as in Proposition~\ref{prop:pi2} where~$D_1$ is the disk corresponding to the facet~$\{1\} \times [-1,1]$ and the remaining ones are ordered in the anti-clockwise direction. Let~$D_1',D_2',D_3',D_4'$ be the corresponding basis elements for~$H_2(X,T(x_1',x_2))$. The distinguished classes are~${\mathcal D}(x_1,x_2) =\{D_2\}$ and~${\mathcal D}(x_1',x_2) = \{D_2'\}$, meaning that Theorem~\ref{thm:ambientmonodromy} yields~$\phi_* D_2 = D_2'$. Set
\begin{equation}
\phi_* D_1 = a_1D_1' + a_2D_2' + a_3D_3' + a_4D_4', \quad
a_i \in \mathbb{Z}.
\end{equation}
Since the Maslov class is preserved, we obtain~$a_1 + a_2 + a_3 + a_4 = 1$, meaning that~$a_4 = 1 - a_1 - a_2 - a_3$.
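Here we use that each of the canonical basis disks has Maslov index two, so that~$2 = \mu(\phi_* D_1) = 2(a_1 + a_2 + a_3 + a_4)$.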
Since the induced map~$(\phi\vert_{T(x_1,x_2)})_*$ is invertible, we deduce that~$\det(\partial\phi_*D_1,\partial\phi_*D_2) = \pm 1$ which yields~$a_1 - a_3 = \pm 1$. Preservation of symplectic area,~$\int_{D_1} \omega = \int_{\phi_*D_1} \omega$ yields
\begin{equation}
1 - x_1 = a_1(1-x_1') + a_2(1-x_2) + a_3(1+x_1') + a_4(1+x_2),
\end{equation}
since the areas of the disks~$D_i'$ are just given by the distances of~$(x_1',x_2)$ to the respective facets of~$\Delta$.
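Let us also spell out the determinant condition used above. Since opposite facets of the square have opposite defining vectors, we have~$\partial D_3' = -\partial D_1'$ and~$\partial D_4' = -\partial D_2'$, and hence
\begin{equation}
\partial \phi_* D_1 = (a_1 - a_3) \, \partial D_1' + (a_2 - a_4) \, \partial D_2', \qquad
\partial \phi_* D_2 = \partial D_2',
\end{equation}
so that the condition~$\det(\partial\phi_*D_1,\partial\phi_*D_2) = \pm 1$ indeed reads~$a_1 - a_3 = \pm 1$.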
In case~$a_1 - a_3 = +1$, we use the above relations on the~$a_i$ to find~$a_3 = a_1 - 1$ and~$a_4 = 2 - 2a_1 - a_2$ and thus,
\begin{equation}
\label{eq:xdiff1}
x_1 - x'_1 = 2x_2(a_1 + a_2 - 1) \in 2x_2 \mathbb{Z}.
\end{equation}
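Explicitly, the right-hand side of the area relation collects to~$1 - x_1'(a_1 - a_3) - x_2(a_2 - a_4) = 1 - x_1' - 2x_2(a_1 + a_2 - 1)$, which gives~\eqref{eq:xdiff1}.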
Since~$\vert x_1 - x'_1 \vert < x_2$, we conclude~$x_1 = x_1'$. In case~$a_1 - a_3 = -1$, we find by the same reasoning,
\begin{equation}
\label{eq:xdiff2}
x_1 + x_1' = 2x_2(a_1 + a_2) \in 2x_2 \mathbb{Z}.
\end{equation}
Since~$0 \leqslant x_1 + x_1' < 2x_2$, we deduce~$x_1 + x_1' = 0$ and hence~$x_1 = x_1' = 0$.
\hspace*{\fill} $\Box$\\
\begin{proposition}
\label{prop:s2s2mon}
Let~$0 \leqslant x_1 \leqslant x_2$. Then the Hamiltonian monodromy group of the toric fibre~$T(x_1,x_2)$ in the monotone~$S^2 \times S^2$ is given by
\begin{equation}
{\mathcal H}_{T(x_1,x_2)} =
\begin{cases}
\left \langle
\begin{pmatrix}
-1 & 0 \\
0 & 1
\end{pmatrix}
\right \rangle \cong \mathbb{Z}_2,
& x_1 = 0, x_2 \neq 0; \\[3ex]
\left \langle
\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix}
\right \rangle \cong \mathbb{Z}_2,
& x_1 = x_2 \neq 0; \\[3ex]
\left \langle
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix},
\begin{pmatrix}
-1 & 0 \\
0 & 1
\end{pmatrix}
\right \rangle \cong \mathbb{Z}_2 \times \mathbb{Z}_2,
& x_1 = x_2 = 0; \\[3ex]
\end{cases}
\end{equation}
and by~${\mathcal H}_{T(x_1,x_2)} = \{1\}$ in all other cases.
\end{proposition}
Note that any other toric fibre is Hamiltonian isotopic to a fibre with~$0 \leqslant x_1 \leqslant x_2$, meaning that its Hamiltonian monodromy group is conjugate to one of the above. Thus the only isomorphism types of groups which appear are~$\mathbb{Z}_2, \mathbb{Z}_2 \times \mathbb{Z}_2$ and the trivial group. The astute reader may have wondered why~${\mathcal H}_{T(0,0)}$ is not the full symmetry group of~$\Delta = [-1,1] \times [-1,1]$. This comes from the fact that some of these symmetries act non-trivially on~$H_2(S^2 \times S^2)$ (by exchanging the obvious generators) and thus they can be realized by symplectomorphisms, but not by Hamiltonian diffeomorphisms. We refer to~\cite{AugSmiWor22}, where the monodromy group generated by symplectomorphisms is determined for monotone toric fibres.
\smallskip
\proofof{Proposition~\ref{prop:s2s2mon}}
Again, the construction side can be obtained by symmetric probes and Theorem~\ref{thm:symmprobe}. For the obstruction side, let~$T(x_1,x_2)$ be a toric fibre of~$S^2\times S^2$ and~$\phi \in \operatorname{Ham}(S^2 \times S^2)$ a Hamiltonian diffeomorphism mapping this fibre to itself. We again analyse the map~$\phi_* \in \operatorname{Aut} H_2(X,T(x_1,x_2))$ and use the fact that~$\phi_*$ determines~$(\phi\vert_{T(x_1,x_2)})_*$.
Let us start with the case of the monotone fibre~$T(0,0) \subset S^2 \times S^2$, which is also a special case of~\cite[Theorem 2]{AugSmiWor22}. In this case, the distinguished classes are~${\mathcal D}(0,0) = \{D_1,D_2,D_3,D_4\}$. Therefore Theorem~\ref{thm:ambientmonodromy} implies that the ambient monodromy is a permutation of these classes. Since~$D_1 + D_3, D_2 + D_4 \in H_2(X)$, these two classes must be preserved under~$\phi_*$, which implies the claim. In the case~$x_1 = x_2 \neq 0$, the distinguished classes are~${\mathcal D}(x_1,x_1) = \{D_1,D_2\}$ and hence only permutations of~$D_1$ and~$D_2$ are permitted by Theorem~\ref{thm:ambientmonodromy}. Now let~$0 \leqslant x_1 < x_2$. Then the set of distinguished classes is~${\mathcal D}(x_1,x_2) = \{D_2\}$ and hence the ambient monodromy map takes~$D_2$ to~$D_2$. We set~$x_1 = x_1'$ in~\eqref{eq:xdiff1} and~\eqref{eq:xdiff2}. In the first case, we find that~$a_1 + a_2 = 1$, and a computation using the expressions for~$a_3$ and~$a_4$ from the proof of Proposition~\ref{prop:s2s2class} yields that the monodromy is trivial. In the second case, we find that~$x_1 = 0$ and~$a_1 + a_2 = 0$, and a similar computation shows that the monodromy maps~$e_1 \mapsto -e_1$. We conclude that the monodromy group is trivial whenever~$x_1 \neq 0$ and that it is generated by the map~$e_1 \mapsto -e_1, e_2 \mapsto e_2$ in case~$x_1 = 0$.
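Let us spell out these computations. As in the proof of Proposition~\ref{prop:s2s2class}, we have
\begin{equation}
\partial \phi_* D_1 = (a_1 - a_3) \, \partial D_1 + (a_2 - a_4) \, \partial D_2.
\end{equation}
In the first case,~$a_3 = a_1 - 1$ and~$a_4 = 2 - 2a_1 - a_2 = 1 - a_1 = a_2$, so that~$\partial\phi_*D_1 = \partial D_1$ and, together with~$\phi_*D_2 = D_2$, the monodromy is trivial. In the second case,~$a_3 = a_1 + 1$ and~$a_4 = 1 - a_1 - a_2 - a_3 = -a_1 = a_2$, so that~$\partial\phi_*D_1 = -\partial D_1$, which is the map~$e_1 \mapsto -e_1$,~$e_2 \mapsto e_2$.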
\hspace*{\fill} $\Box$\\
\begin{remark}
We point out that in case~$S^2 \times S^2$ is equipped with a non-monotone symplectic form, the classification as well as the Hamiltonian monodromy is drastically different. Indeed, some equivalence classes~${\mathfrak{H}}_x$ of fibres have accumulation points in~$\Delta$ and some fibres have infinite Hamiltonian monodromy groups. We refer to~\cite{BreKim23} for details.
\end{remark}
\subsection{The case of~$X = \mathbb{C} P^2$}
\label{ssec:cp2}
Let~$\mathbb{C} P^2$ be equipped with the symplectic form and moment polytope as in Example~\ref{ex:cp2}. We give the classification of toric fibres and the Hamiltonian monodromy groups without proof since the proofs are the same as for~$S^2 \times S^2$. The classification of toric fibres was first given in~\cite[Proposition 7.1]{SheTonVia19}. Note that all equivalences and Hamiltonian monodromies in the case of the monotone~$S^2 \times S^2$ are induced by symmetries of the moment polytope. The same holds in the case of~$\mathbb{C} P^2$.
\begin{proposition}
Toric fibres~$T(x),T(y) \subset \mathbb{C} P^2$ are equivalent if and only if~$x$ can be mapped to~$y$ by an integral symmetry of~$\Delta$. Similarly, the Hamiltonian monodromy group~${\mathcal H}_{T(x)}$ consists of transformations induced by integral symmetries of~$\Delta$ fixing the point~$x$.
\end{proposition}
\subsection{The case of~$X = \mathbb{C} \times S^2$}
\label{ssec:excs2}
Let~$\mathbb{C} \times S^2$ be equipped with the symplectic form~$\omega = \omega_{\mathbb{C}} \oplus \omega_{S^2}$. We normalize the moment map such that its moment polytope is given by~$\Delta = \mathbb{R}_{\geqslant -1} \times [-1,1]$. There are symmetric probes with directional vector~$e_2^* + k e_1^*$ for every~$k \in \mathbb{Z}$. The probe with~$k=0$ is admissible everywhere. For~$k = \pm 1$, the probes are admissible everywhere except when they hit a vertex of~$\Delta$. The symmetric probes with~$k \notin \{-1,0,1\}$ are admissible whenever they hit both facets~$\mathbb{R}_{\geqslant -1} \times \{1\}$ and~$\mathbb{R}_{\geqslant -1} \times \{-1\}$. As we shall see, the latter types of symmetric probes are redundant, as all results can be proven using only those with~$k \in \{-1,0,1\}$.
\begin{proposition}
\label{prop:classcs2}
The classification of fibres in~$(X,\omega)$ is as follows. For~$x= (x_1,0) \in \operatorname{int}\Delta$ with~$x_1 \geqslant 0$, we have~${\mathfrak{H}}_x = \{x\}$, i.e.\ the corresponding toric fibre is not equivalent to any other fibre. For~$x = (x_1,\pm x_1)$ with~$x_1 < 0$, we have~${\mathfrak{H}}_x = \{(x_1,x_1),(x_1,-x_1)\}$. For~$x = (0,x_2)$ with~$x_2 > 0$, we have
\begin{equation}
{\mathfrak{H}}_x = \{ (2nx_2, \pm x_2) \, \vert \, n \in \mathbb{N} \} \cup \{(-x_2,0)\}.
\end{equation}
For~$x = (x_1,\pm x_1)$ with~$x_1 > 0$, we have
\begin{equation}
{\mathfrak{H}}_x = \{ ((2n+1)x_1, \pm x_1) \, \vert \, n \in \mathbb{N} \}.
\end{equation}
For~$x = (x_1,x_2)$ with~$0 < x_1 < x_2$, we have
\begin{equation}
{\mathfrak{H}}_x = \{ (\pm x_1 + 2nx_2, \pm x_2) \, \vert \, n \in \mathbb{N} \} \cup \{(-x_2,\pm x_1)\}.
\end{equation}
All~$y \in \operatorname{int}\Delta$ are in one of the above~${\mathfrak{H}}_x$.
\end{proposition}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\node[inner sep=0pt] at (0,0)
{\includegraphics[trim={3cm 6cm 2.5cm 1cm},clip,scale=0.8] {fig7.pdf}};
\end{tikzpicture}
\caption{Some symmetric probes in~$\mathbb{C} \times S^2$. Points of the same colour denote equivalent fibres.}
\label{fig:7}
\end{center}
\end{figure}
\noindent {\it Proof. \;}
For the construction of the equivalences, we use concatenations of the probes with directional vectors~$e_2^* + k e_1^*$ for~$k \in \{-1,0,1\}$, as we discuss below on a case by case basis. For the obstruction side, note that we cannot directly apply Theorems~\ref{thm:mainB} and~\ref{thm:ambientmonodromy}, since~$X$ is non-compact. However,~$X$ is of reduction type, see Definition~\ref{def:redtype}. Indeed, the toric reduction of~$\mathbb{C}^2$ to~$S^2$ coming from the Delzant construction yields a toric reduction of~$\mathbb{C} \times \mathbb{C}^2$ to~$\mathbb{C} \times S^2$. Therefore, the results of Theorems~\ref{thm:mainB} and~\ref{thm:ambientmonodromy} still apply.
By the first Chekanov invariant from Theorem~\ref{thm:mainB}, the polytope~$\Delta$ decomposes into subsets~${\mathfrak{D}}_x$ of constant distance~$0 < d(x) \leqslant 1$ to the boundary~$\partial\Delta$. First, let~$0 < d(x) < 1$. The toric fibres of the type~$(x_1,\pm x_1)$ with~$x_1 < 0$ are the only ones having~$\#_d = 2$, which distinguishes them from all others. Note that for any other~$x \in \Delta$ with~$d(x) < 1$, the torus~$T(x)$ is equivalent by symmetric probes to exactly one torus on the segment~$[0,x_2] \times \{x_2\}$ with~$x_2 = 1-d(x)$. It is easy to see that by probes with directional vectors~$e_2^* + e_1^*$ and~$e_2^* - e_1^*$, any fibre is equivalent to one on the segment~$[-x_2,x_2] \times \{x_2\}$. Now note that fibres on this segment come in equivalent pairs as can be seen by noting that~$(x_1,x_2)$ is equivalent to~$(-x_2,-x_1)$ which is equivalent (by a vertical probe) to~$(-x_2,x_1)$ which in turn is equivalent to~$(-x_1,x_2)$. Thus the problem of classifying fibres with~$d(x)<1$ boils down to classifying fibres on the segment~$[0,x_2] \times \{x_2\}$.
\textbf{Claim:} If $T(x_1,x_2) \cong T(x_1',x_2)$ for~$x_1,x_1' \in [0,x_2]$ and~$x_2 = 1 - d(x)$, then~$x_1 = x_1'$.
\noindent
To prove the claim, we follow the same strategy as in the proof of Proposition~\ref{prop:s2s2class}. Suppose there is~$\phi \in \operatorname{Ham}(X,\omega)$ mapping~$T(x_1,x_2)$ to~$T(x_1',x_2)$ and let~$\phi_*$ be the ambient monodromy induced by this map. Let~$D_1,D_2,D_3$ be the canonical basis of~$H_2(X,T(x_1,x_2))$ where~$D_1$ is the disk corresponding to the facet~$\{-1\} \times [-1,1]$ and the remaining ones ordered in the anti-clockwise direction. Let~$D_1',D_2',D_3' \in H_2(X,T(x_1',x_2))$ be the disks obtained by the same convention. The distinguished classes are~${\mathcal D}(x_1,x_2) = \{D_3\}$ and~${\mathcal D}(x_1',x_2) = \{D_3'\}$ meaning that~$\phi_*D_3 = D_3'$. Set
\begin{equation}
\label{eq:phiD}
\phi_* D_1 = a_1D_1' + a_2D_2' + a_3D_3', \quad
a_i \in \mathbb{Z}.
\end{equation}
By the invariance of the Maslov class, we obtain~$a_1 + a_2 + a_3 = 1$. Since the induced map~$(\phi\vert_{T(x_1,x_2)})_*$ is invertible, we deduce that~$\det(\partial\phi_*D_1,\partial\phi_*D_3) = \pm 1$ which yields~$a_1 = \pm 1$. Preservation of symplectic area,~$\int_{D_1} \omega = \int_{\phi_*D_1} \omega$ yields
\begin{equation}
1 + x_1 = a_1(1 + x_1') + a_2(1 + x_2) + a_3(1 - x_2).
\end{equation}
In the case~$a_1 = -1$, we find
\begin{equation}
\label{eq:xdiff3}
x_1 + x_1' = 2x_2(a_2 - 1),
\end{equation}
from which we deduce that~$x_1 = x_1'$. Indeed,~$0 \leqslant x_1 + x_1' \leqslant 2x_2$ leaves only the values~$0$ and~$2x_2$, both of which force~$x_1 = x_1'$. Similarly, for~$a_1 = 1$, we find
\begin{equation}
\label{eq:xdiff4}
x_1 - x_1' = 2x_2a_2,
\end{equation}
which implies the same conclusion and thus proves the claim.
Let us now turn to the case~$d(x)=1$, i.e.\ tori of the form~$T(x_1,0)$ with~$x_1 \geqslant 0$. Note that~$T(0,0)$ is the only monotone fibre and thus not equivalent to any other fibre. We will show that the same remains true for~$x_1 > 0$. Indeed, if~$T(x_1,0)$ and~$T(x_1',0)$ were equivalent, then the same arguments as above apply to the ambient monodromy~$\phi_*$ except that now~${\mathcal D}(x_1,0) = \{D_2,D_3\}$. Equations~\eqref{eq:xdiff3} and~\eqref{eq:xdiff4} for~$x_2 = 0$ imply the claim. Equivalently, one can use~\cite[Theorem 1.1]{FOOO13} to find that~$e(X,T(x_1,0)) = 1 + x_1$, meaning that these fibres are also distinguished by their displacement energy.
\hspace*{\fill} $\Box$\\
\begin{proposition}
Let~$(x_1,x_2) \in \operatorname{int} \Delta$. The Hamiltonian monodromy group of the toric fibre~$T(x_1,x_2) \subset \mathbb{C} \times S^2$ is given by
\begin{equation}
\label{eq:moncs2}
{\mathcal H}_{T(x_1,x_2)} =
\begin{cases}
\left \langle
\begin{pmatrix}
0 & -1 \\
-1 & 0
\end{pmatrix}
\right \rangle \cong \mathbb{Z}_2,
& x_1 = - x_2, x_2 > 0; \\[3ex]
\left \langle
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\right \rangle \cong \mathbb{Z}_2,
& x_1 \leqslant 0 , x_2 = 0; \\[3ex]
\left \langle
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix},
\begin{pmatrix}
1 & 0 \\
2 & -1
\end{pmatrix}
\right \rangle \cong \mathbb{Z}_2 \ltimes \mathbb{Z},
& x_1 > 0, x_2 = 0;
\end{cases}
\end{equation}
and is given by a group conjugated to the above in case~$T(x_1,x_2)$ is equivalent to one of the above cases according to Proposition~\ref{prop:classcs2}, and by~${\mathcal H}_{T(x_1,x_2)} = \{1\}$ in all other cases.
\end{proposition}
\noindent {\it Proof. \;}
In case~$x_1 = - x_2, x_2 > 0$, we have~${\mathcal D}(x_1,x_2) = \{D_1,D_3\}$ and thus any monodromy matrix has to permute~$\partial D_1 = e_1$ and~$\partial D_3 = -e_2$. This permutation is realized by a symmetric probe with directional vector~$e_1^* + e_2^*$. For the case~$x_1 \leqslant 0 , x_2 = 0$, note that the vertical symmetric probe realizes the claimed monodromy. In terms of obstructions, note that~${\mathcal D}(0,0) = \{D_1,D_2,D_3\}$, which yields the claim for~$T(0,0)$. In case~$x_1 < 0$, note that~$T(x_1,0)$ is Hamiltonian isotopic to~$T(0,x_1)$, to which we can apply the equations~\eqref{eq:xdiff3} and~\eqref{eq:xdiff4} to find that the only possible monodromy for~$T(0,x_1)$ is~$e_1 \mapsto -e_1, e_2 \mapsto e_2$. Under the conjugation induced by the equivalence of~$T(x_1,0)$ and~$T(0,x_1)$, this yields the answer. Now let~$x_1 \in (0,x_2]$ and~$x_2 > 0$. Then the Hamiltonian monodromy group is trivial by equations~\eqref{eq:xdiff3} and~\eqref{eq:xdiff4}. Now let us turn to the case of the infinite monodromy groups for the fibres~$T(x_1,0)$ with~$x_1 > 0$. The two generators given in~\eqref{eq:moncs2} correspond to the vertical probe and the probe with directional vector~$-e_1^* + e_2^*$. Since~${\mathcal D}(x_1,0) = \{D_2,D_3\}$, we distinguish the cases~$D_2 \mapsto D_2, D_3 \mapsto D_3$ and~$D_2 \mapsto D_3, D_3 \mapsto D_2$. Let us first restrict our attention to the former case. We use~\eqref{eq:phiD} with~$D_i = D_i'$. Recall that~$a_1 = \pm 1$. Since~\eqref{eq:xdiff3} cannot be satisfied for~$x_1 = x_1', x_2 = 0$, we deduce that~$a_1 = 1$, and hence~$a_3 = -a_2$ by the Maslov class relation~$a_1 + a_2 + a_3 = 1$. Using~$\partial D_3 = -\partial D_2$, a computation shows that
\begin{equation}
\label{eq:monocs2}
(\phi\vert_{T(x_1,0)})_* e_1
= \partial \phi_* D_1
= e_1 + 2a_2 e_2,
\end{equation}
which proves the claim in case~$\det(\phi\vert_{T(x_1,0)})_* = 1$ (this corresponds to the powers of the product of the two generators given in~\eqref{eq:moncs2}). The case~$D_2 \mapsto D_3, D_3 \mapsto D_2$ is completely analogous.
\hspace*{\fill} $\Box$\\
\subsection{The case of~$X = \mathbb{C}^2 \times T^*S^1$}
\label{ssec:c2ts1}
Let~$X = \mathbb{C}^2 \times T^*S^1$ be equipped with the exact symplectic form~$\omega = d\lambda = \omega_{\mathbb{C}^2} \oplus \omega_{T^*S^1} = d\lambda_{\mathbb{C}^2} \oplus d\lambda_{T^*S^1}$ and the product toric structure with moment polytope~$\Delta = \mathbb{R}_{\geqslant 0}^2 \times \mathbb{R}$. Note that~$X$ is not a toric reduction of~$\mathbb{C}^N$ and hence we cannot apply the techniques from~\S\ref{ssec:chekanovinv} and~\S\ref{ssec:hammon} as in the previous examples. Instead, we rely on~\S\ref{ssec:displacementen}, meaning that we need to compute the displacement energy of toric fibres.
\begin{lemma}
\label{lem:c2ts1}
The displacement energy of toric fibres is given by
\begin{equation}
e(X,T(x_1,x_2,x_3))
=
\min\{x_1,x_2\}.
\end{equation}
In particular, equality~\eqref{eq:detoric} (and thus also Assumption~\ref{ass:detoric}) holds for all toric fibres in~$X$.
\end{lemma}
\noindent {\it Proof. \;}
The upper bound is obvious, either by using probes or by using the fact that $e(\mathbb{C}^2,T(x_1,x_2)) = \min\{x_1,x_2\}$. For the lower bound, we use Chekanov's theorem~\cite{Che98}, which we briefly recall here. Let~$L \subset X$ be a compact Lagrangian submanifold of a tame symplectic manifold and let~$J \in {\mathcal J}(X,\omega)$ be a tame almost complex structure. Furthermore, denote by~$\sigma(X,L;J)$ the infimum of symplectic areas over all non-constant~$J$-holomorphic disks with boundary on~$L$. If no such disk exists, set~$\sigma(X,L;J) = \infty$; otherwise the infimum is strictly positive and attained, by Gromov compactness. The quantity~$\sigma(X;J)$ is defined similarly for~$J$-holomorphic spheres in~$X$. Then Chekanov's theorem gives the lower bound,
\begin{equation}
e(X,L) \geqslant \min \{ \sigma(X;J) , \sigma(X,L;J) \}.
\end{equation}
Now let~$X = \mathbb{C}^2 \times T^*S^1$ and~$L = T(x_1,x_2,x_3)$ a toric fibre. Note that~$X$ is aspherical, and thus~$\sigma(X;J)= \infty$. Let~$J_0 \in {\mathcal J}(X,\omega)$ be the complex structure obtained from the identification~$X = \mathbb{C}^2 \times \mathbb{C}^{\times}$. There are two obvious families of~$J_0$-holomorphic disks,
\begin{eqnarray}
u_{\alpha_1,\alpha_2}(z) &=& \left(\sqrt{\frac{x_1}{\pi}}\, z,\sqrt{\frac{x_2}{\pi}}e^{i\alpha_1} , e^{x_3 + i\alpha_2} \right), \\
v_{\alpha_1,\alpha_2}(z) &=& \left(\sqrt{\frac{x_1}{\pi}}e^{i\alpha_1}, \sqrt{\frac{x_2}{\pi}}\, z , e^{x_3 + i\alpha_2} \right),\quad \alpha_1, \alpha_2 \in S^1.
\end{eqnarray}
These disks have area~$\int u_{\alpha_1,\alpha_2}^*\omega = x_1$ and~$\int v_{\alpha_1,\alpha_2}^*\omega = x_2$ for all~$\alpha_1, \alpha_2 \in S^1$. We show that the smaller of these two areas realizes the minimum~$\sigma(X,L;J_0)$. For a similar argument, see~\cite[Lemma 4]{Aur15}. Now let~$u \colon (D,\partial D) \rightarrow (X,L)$ be a non-trivial~$J_0$-holomorphic disk. First note that the map~$p_2 \circ u$, where~$p_2 \colon X \rightarrow T^*S^1 \cong \mathbb{C}^*$ is the projection, is constant. This follows from the maximum principle. Indeed, by the maximum principle this map takes values in the unit disk. Since~$0$ is not contained in its image, we can post-compose it with~$z \mapsto \frac{1}{z}$ to see that it actually takes values in the unit circle. Since it is holomorphic, it is actually constant. By considering~$p_1 \circ u$, where~$p_1 \colon (X,J_0) \rightarrow (\mathbb{C}^2,i \oplus i)$ is the natural projection, it is sufficient to understand holomorphic disks in~$\mathbb{C}^2$ with boundary on the product torus~$T(x_1,x_2)$. The group~$\pi_2(\mathbb{C}^2,T(x_1,x_2))$ is generated by the two coordinate disks~$D_1,D_2$. We have~$[p_1 \circ u] = k_1D_1 + k_2D_2$, where~$k_1,k_2$ are the algebraic intersection numbers with the coordinate axes. By positivity of intersections, we deduce~$k_1,k_2 \geqslant 0$. Since~$\int_{D_1} \omega_{\mathbb{C}^2} = x_1$ and~$\int_{D_2} \omega_{\mathbb{C}^2} = x_2$, we obtain
\begin{equation}
\sigma(X,L;J_0)
\geqslant \min\{ k_1x_1 + k_2x_2 \,\vert\, k_1,k_2 \geqslant 0,\, (k_1,k_2) \neq (0,0) \}
= \min\{x_1,x_2\}.
\end{equation}
This minimum is realized by the disks~$u_{\alpha_1,\alpha_2}$ or $v_{\alpha_1,\alpha_2}$.
\hspace*{\fill} $\Box$\\
Using Theorem~\ref{thm:alternative}, we can now classify toric fibres and determine their Hamiltonian monodromy groups.
\begin{proposition}
\label{prop:classc2ts1}
The classification of toric fibres~$T(x) = T(x_1,x_2,x_3)$ in~$X = \mathbb{C}^2 \times T^*S^1$ is given by
\begin{equation}
{\mathfrak{H}}_x = \{(x_1,x_2,x_3 + k(x_2 - x_1)) , (x_2,x_1,x_3 + k(x_2 - x_1)) \,\vert\, k \in \mathbb{Z}\}.
\end{equation}
Furthermore, all Hamiltonian monodromy groups are trivial except when~$x_1 = x_2$, in which case
\begin{equation}
\label{eq:c2ts1mon}
{\mathcal H}_{T(x_1,x_1,x_3)}
=
\left\langle
\begin{pmatrix}
1 & 0 & 1 \\
0 & 1 & -1 \\
0 & 0 & 1
\end{pmatrix} ,
\begin{pmatrix}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1
\end{pmatrix}
\right\rangle.
\end{equation}
\end{proposition}
\noindent {\it Proof. \;}
The equivalences are easy to construct using probes with direction~$e_1^* - e_2^* + k e_3^*$ with~$k \in \mathbb{Z}$. Let~$\phi$ be a Hamiltonian diffeomorphism mapping~$T(x)$ to~$T(x')$. First note that the long exact sequence for relative homology looks quite different from the compact toric case. Indeed, we obtain
\begin{equation}
0
\rightarrow H_2(X,T(x))
\rightarrow H_1(T(x))
\rightarrow H_1(X)
\rightarrow 0,
\end{equation}
and~$H_1(T(x)) \cong H_2(X,T(x)) \oplus H_1(X) \cong \mathbb{Z}^2 \oplus \mathbb{Z}$. Then the induced map~$(\phi\vert_{T(x)})_*$ on the first homology is of the form
\begin{equation}
(\phi\vert_{T(x)})_*
=
\begin{pmatrix}
a_1 & a_2 & b_1 \\
a_3 & a_4 & b_2 \\
0 & 0 & 1
\end{pmatrix} , \quad a_i,b_j \in \mathbb{Z}.
\end{equation}
This follows from the fact that~$\phi$ induces the identity on~$H_1(X)$. First suppose that~$x_1 = x_2$. Then, by Theorem~\ref{thm:alternative} and Lemma~\ref{lem:c2ts1}, we obtain~$x_1' = x_2' = x_1$ and~$(\phi\vert_{T(x)})_*$ either swaps the first two coordinates or acts by the identity on them. By preservation of the Maslov index, we obtain~$b_2 = -b_1$. Note that all monodromies of these tori can be realized by symmetric probes of direction~$e_1^* - e_2^* + k e_3^*$, proving~\eqref{eq:c2ts1mon}. To prove that~$x_3 = x_3'$, compute
\begin{equation}
\label{eq:c2ts1}
x_3
= \int_{e_3} \lambda \vert_{T(x)}
= \int_{(\phi\vert_{T(x)})_*e_3} \lambda \vert_{T(x')}
= b_1 (x_1' - x_2') + x_3',
\end{equation}
proving the claim. In the case~$x_1 \neq x_2$, suppose without loss of generality that~$x_1 < x_2$ and~$x_1' < x_2'$. By Theorem~\ref{thm:alternative} and Lemma~\ref{lem:c2ts1}, we obtain~$x_1 = x_1'$,~$a_1 = 1$,~$a_3 = 0$, and either~$a_4 = 1,\, a_2 = 0$ or~$a_4 = -1,\, a_2 = -1$. The latter case is actually impossible. Indeed, if~$a_4 = -1, a_2 = -1$, then
\begin{equation}
x_2
= \int_{e_2} \lambda \vert_{T(x)}
= \int_{(\phi\vert_{T(x)})_*e_2} \lambda \vert_{T(x')}
= 2 x'_1 - x'_2,
\end{equation}
contradicting~$x_1 < x_2, x_1' < x_2'$. The rest of the proof is as in the case~$x_1 = x_2$; in particular, the claim~$b_1 = 0$ for the monodromy statement follows from~\eqref{eq:c2ts1} by setting~$x = x'$.
\hspace*{\fill} $\Box$\\
\subsection{The case of~$X = T^*S^1 \times S^2$}
\label{ssec:ts1s2}
Let~$X = T^*S^1 \times S^2$ be equipped with the product symplectic form~$\omega = \omega_{T^*S^1} \oplus \omega_{S^2}$. The moment polytope is~$\Delta = \mathbb{R} \times [-1,1]$. Note that~$X$ is not a toric reduction of any~$\mathbb{C}^N$, but it is a toric reduction of~$\mathbb{C}^2 \times T^*S^1$, meaning that we can use Proposition~\ref{prop:classc2ts1} together with the lifting trick.
\begin{proposition}
The classification of toric fibres~$T(x) = T(x_1,x_2)$ in~$X = T^*S^1 \times S^2$ is given by
\begin{equation}
{\mathfrak{H}}_x
=
\{ (x_1 + 2k x_2,\pm x_2) \, \vert \, k \in \mathbb{Z} \}.
\end{equation}
Furthermore, all Hamiltonian monodromy groups are trivial, except for~$x_2 = 0$, in which case,
\begin{equation}
{\mathcal H}_{T(x_1,0)}
=
\left\{ \left.
\begin{pmatrix}
1 & 0 \\
2k & \pm 1
\end{pmatrix}
\right\vert
k \in \mathbb{Z}
\right\}.
\end{equation}
\end{proposition}
\noindent {\it Proof. \;}
Again, all constructions immediately follow from symmetric probes. For the obstructions, we view~$X$ as a toric reduction of~$\mathbb{C}^2 \times T^*S^1$. Perform toric reduction on~$\mathbb{C}^2 \times T^*S^1$ with respect to the plane~$V = \{ x_1 + x_2 = 1 \} \subset \mathbb{R}^3$ to obtain~$X$. This corresponds to the Hamiltonian~$H = \pi \vert z_1 \vert^2 + \pi \vert z_2 \vert^2$. The classification of toric fibres follows immediately from Proposition~\ref{prop:classc2ts1}, Proposition~\ref{prop:toricfibresred} and the lifting trick, as in the proof of Theorem~\ref{thm:mainB}. The obstructions to monodromy similarly follow from Proposition~\ref{prop:classc2ts1} and the lifting trick, as in the proof of Theorem~\ref{thm:ambientmonodromy}. \hspace*{\fill} $\Box$\\
\subsection{Chekanov's classification revisited}
\label{ssec:chekrev}
In this subsection, we prove Conjecture~\ref{conj:main} in the case of~$\mathbb{C}^n$. The classification of product tori goes back to Chekanov, see Theorem~\ref{thm:chekanov}, meaning that we only need to prove that all equivalences of toric fibres can be realized by iterated symmetric probes.
\begin{theorem}
\label{thm:cnsymmprobes}
Product tori~$T(x),T(y) \subset \mathbb{C}^n$ are Hamiltonian isotopic if and only if they are equivalent by a sequence of symmetric probes.
\end{theorem}
Before proving this result, let us revisit Chekanov's classification. In~$\mathbb{C}^2$, it states that,
\begin{equation}
{\mathfrak{H}}_x = \{(x_1,x_2),(x_2,x_1)\}, \quad (x_1,x_2) \in \mathbb{R}^2_{> 0}.
\end{equation}
In~$\mathbb{C}^n$ with~$n \geqslant 3$, however, the situation is much richer. Note for example that all tori~$T(1,2,k)$ with~$k \in \mathbb{N}_{\geqslant 2}$ are Hamiltonian isotopic, since their Chekanov invariants agree. The set~${\mathfrak{H}}_x$ even has accumulation points in many cases, see Corollary~\ref{cor:accumulation}. To discuss this further, we slightly reformulate Chekanov's invariants. Since coordinate permutations can be realized by Hamiltonian isotopies, we may assume that~$T(x)$ is given in the form
\begin{equation}
\label{eq:productnormal}
T(x) = T(\underbrace{\underline{x},\dots,\underline{x}}_{\#_d(x)},\underline{x}+\widehat{x}_1 ,\dots,\underline{x}+\widehat{x}_s),
\end{equation}
for~$\widehat{x}_i > 0$ and~$s = n - \#_d(x)$. We call~$\widehat{x} = (\widehat{x}_1, \ldots, \widehat{x}_s) \in \mathbb{R}_{>0}^s$ the \emph{reduced vector} associated to~$x$.
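For instance, for~$x = (1,1,3,4) \in \mathbb{R}^4_{>0}$ the minimal entry is~$\underline{x} = d(x) = 1$, attained~$\#_d(x) = 2$ times, so that~$s = 2$ and~$\widehat{x} = (2,3)$.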
Theorem~\ref{thm:chekanov} can be reformulated as follows.
\begin{corollary}
\label{cor:chekref}
Product tori~$T(x),T(y) \subset \mathbb{C}^n$ are Hamiltonian isotopic if and only if~$d(x) = d(y)$,~$\#_d(x) = \#_d(y)$ and there is~$M \in \operatorname{GL}(s;\mathbb{Z})$ such that~$M\widehat{x} = \widehat{y}$.
\end{corollary}
\noindent {\it Proof. \;}
Note that the~$\widehat{x}_i$ are exactly the non-trivial generators of the lattice~$\Gamma(x)$,
\begin{equation}
\Gamma(x)
= \mathbb{Z}\langle \widehat{x} \rangle
= \{ k_1 \widehat{x}_1 + \ldots + k_s \widehat{x}_s \, \vert \, k_i \in \mathbb{Z} \} \subset \mathbb{R}.
\end{equation}
Furthermore,~$\mathbb{Z}\langle \widehat{x} \rangle$ is a complete invariant for~$\operatorname{GL}(s;\mathbb{Z})$-orbits. See for example Cabrer--Mundici~\cite[Proposition 1]{CabMun16}.
\hspace*{\fill} $\Box$\\
This allows us to gain a good qualitative understanding of~${\mathfrak{H}}_x$.
\begin{corollary}
The inclusion
\begin{equation}
\label{eq:fhinclusion}
{\mathfrak{H}}_x \subset \{y \in \mathbb{R}^n_{>0} \, \vert \, d(y) = d(x), \, \#_d(y) = \#_d(x)\}
\end{equation}
is dense if and only if~$\operatorname{rank} \Gamma(x) \geqslant 2$.
\end{corollary}
\noindent {\it Proof. \;}
Let~$\widehat{x} \in \mathbb{R}_{>0}^s$ be the reduced vector as in~\eqref{eq:productnormal}. It follows from Corollary~\ref{cor:chekref} that the inclusion~\eqref{eq:fhinclusion} is dense if and only if the~$\operatorname{GL}(s;\mathbb{Z})$-orbit of~$\widehat{x}$ is dense in~$\mathbb{R}_{>0}^s$. The latter is equivalent to~$\operatorname{rank} \Gamma(x) \geqslant 2$ by a classical theorem of Dani~\cite[Theorem 17]{Dan75}, see also~\cite{CabMun16}.
\hspace*{\fill} $\Box$\\
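For instance, for~$x = (1,2,1+\sqrt{2}) \in \mathbb{R}^3_{>0}$ we have~$\widehat{x} = (1,\sqrt{2})$ and hence~$\operatorname{rank} \Gamma(x) = 2$, so that~${\mathfrak{H}}_x$ is dense in the set~$\{y \in \mathbb{R}^3_{>0} \, \vert \, d(y) = 1, \, \#_d(y) = 1\}$.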
Let us now have a look at the discrete case, i.e.\ the case where~$\operatorname{rank}(\Gamma(x))=1$. In that case, the reduced vector~$\widehat{x} \in \mathbb{R}^s_{>0}$ is a real multiple of a lattice vector,~$\widehat{x} = \operatorname{\ell_{int}}(\widehat{x}) k$ with~$k \in \mathbb{Z}^s$ a primitive vector and where~$\operatorname{\ell_{int}}(\widehat{x}) > 0$ denotes the integral affine length.
\begin{corollary}
\label{cor:rankone}
The product tori~$T(x),T(y)\subset \mathbb{C}^n$ with~$d(x)=d(y)$,~$\#_d(x)=\#_d(y)$ and~$\operatorname{rank}(\Gamma(x))=\operatorname{rank}(\Gamma(y)) = 1$ are Hamiltonian isotopic if and only if their reduced vectors have the same integral affine length,
$\operatorname{\ell_{int}}(\widehat{x})=\operatorname{\ell_{int}}(\widehat{y})$.
\end{corollary}
\noindent {\it Proof. \;}
Write~$\widehat{x} = \operatorname{\ell_{int}}(\widehat{x}) k$,~$\widehat{y} = \operatorname{\ell_{int}}(\widehat{y}) k'$ and note that~$\operatorname{GL}(s;\mathbb{Z})$ acts transitively on the set of primitive lattice vectors and preserves integral affine length. Thus the claim follows from Corollary~\ref{cor:chekref}.
\hspace*{\fill} $\Box$\\
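For instance, the tori~$T(1,2,k)$ with~$k \in \mathbb{N}_{\geqslant 2}$ mentioned at the beginning of this subsection have reduced vectors~$\widehat{x} = (1,k-1)$, which are primitive and hence all of integral affine length one; Corollary~\ref{cor:rankone} thus recovers the fact that these tori are pairwise Hamiltonian isotopic.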
Let us now turn to the proof of Theorem~\ref{thm:cnsymmprobes}. The following lemma is key.
\begin{lemma}
\label{lem:elswap}
Let~$x=(x_1,x_2,x_3) \in \mathbb{R}^3_{> 0}$ with~$x_1 < x_2,x_3$. Then there is a symmetric probe showing
\begin{equation}
\label{eq:elswap}
T(x_1,x_2,x_3) \cong T(x_3,x_2 + x_3 - x_1, x_1) \subset \mathbb{C}^3.
\end{equation}
\end{lemma}
\noindent {\it Proof. \;}
The directional vector~$\eta = e_1^* +e_2^* - e_3^*$ defines a probe~$\sigma = \mathbb{R}^3_{\geqslant 0} \cap \{ x + t\eta \, \vert \, t \in \mathbb{R} \}$ which realizes the equivalence~\eqref{eq:elswap}, see Figure~\ref{fig:5}. Indeed, since~$x_1 < x_2$, the probe intersects the boundary~$\partial \mathbb{R}^3_{\geqslant 0}$ in the points~$(0,x_2-x_1,x_3+x_1)$ and~$(x_1+x_3,x_2+x_3,0)$ which lie in the interior of the facets~$\{y_1 = 0\}$ and~$\{y_3 = 0\}$ respectively. Since~$\langle \eta , e_1 \rangle = 1$ and~$\langle \eta , e_3 \rangle = -1$, both intersections are integrally transverse and hence the probe is admissible. Furthermore, since~$x_1 \leqslant x_2,x_3$, the points~$(x_1,x_2,x_3)$ and~$(x_3,x_2 + x_3 - x_1, x_1)$ both lie at integral distance~$x_1$ to the boundary and hence the corresponding fibres are Hamiltonian isotopic.
\hspace*{\fill} $\Box$\\
\begin{figure}
\begin{center}
\begin{tikzpicture}
\node[inner sep=0pt] at (0,0)
{\includegraphics[trim={4cm 3cm 6cm 1.5cm},clip,scale=0.6] {fig5.pdf}};
\node at (0.25,-1.5){$(x_1,x_2,x_3)$};
\node at (-0.45,0.65){$(x_3,x_2,x_1)$};
\node at (2.8,0.65){$(x_3,x_2+x_3-x_1,x_1)$};
\node at (-3.3,2.9){$y_1$};
\node at (2.75,-1.2){$y_2$};
\node at (-1.5,-3){$y_3$};
\end{tikzpicture}
\caption{Two symmetric probes in~$\mathbb{C}^3$: one realizes a coordinate permutation, the other is used in the proof of Lemma~\ref{lem:elswap}.}
\label{fig:5}
\end{center}
\end{figure}
\proofof{Theorem~\ref{thm:cnsymmprobes}}
Let~$T(x),T(y) \subset \mathbb{C}^n$ be product tori whose Chekanov invariants agree. First note that coordinate permutations
\begin{equation}
(x_1,\ldots,x_i,\ldots,x_j,\ldots,x_n) \mapsto
(x_1,\ldots,x_j,\ldots,x_i,\ldots,x_n)
\end{equation}
can be realized by symmetric probes. Therefore, we may assume that both tori are given in the normal forms
\begin{equation}
x=(\underline{x}, \ldots, \underline{x}, \underline{x} + \widehat{x}_1, \ldots, \underline{x} + \widehat{x}_s ), \quad
y=(\underline{x}, \ldots, \underline{x}, \underline{x} + \widehat{y}_1, \ldots, \underline{x} + \widehat{y}_s ),
\end{equation}
where the reduced vectors~$\widehat{x},\widehat{y} \in \mathbb{R}^s_{>0}$ are~$\operatorname{GL}(s;\mathbb{Z})$-equivalent by Corollary~\ref{cor:chekref}. We thus need to prove that all~$\operatorname{GL}(s;\mathbb{Z})$-transformations on the reduced vectors can be realized by symmetric probes. The group~$\operatorname{GL}(s;\mathbb{Z})$ is generated by coordinate permutations together with the transformation
\begin{equation} \label{eq:generator}
(\widehat{x}_1,\widehat{x}_2,\ldots,\widehat{x}_s) \mapsto (\widehat{x}_1+\widehat{x}_2,\widehat{x}_2, \ldots, \widehat{x}_s),
\end{equation}
see for example~\cite{Tro62}. Coordinate permutations can be realized by symmetric probes as we have just seen. By Lemma~\ref{lem:elswap}, the generator~\eqref{eq:generator} can also be realized by a symmetric probe. Indeed, note that the reduced vectors associated to the tori~$T(x_1,x_2,x_3) \subset \mathbb{C}^3$ and~$T(x_3,x_2 + x_3 - x_1, x_1) \subset \mathbb{C}^3$ are given by~$(x_2-x_1,x_3-x_1) \in \mathbb{R}^2_{>0}$ and~$(x_2+x_3-2x_1, x_3 - x_1)\in \mathbb{R}^2_{>0}$ which corresponds to~$(\widehat{x}_1,\widehat{x}_2) \mapsto (\widehat{x}_1+\widehat{x}_2, \widehat{x}_2)$ in terms of the reduced vectors. Hence~\eqref{eq:generator} can be realized by a symmetric probe lying in an appropriately chosen coordinate subspace~$\mathbb{C}^3 \subset \mathbb{C}^n$. This proves the claim of the theorem.
\hspace*{\fill} $\Box$\\
\begin{remark}
\label{rk:classquant}
Let us briefly discuss a quantitative version of Theorem~\ref{thm:cnsymmprobes}. More specifically, for every~$\varepsilon > 0$, we can find a Hamiltonian isotopy which has support in the ball~$B^6(x_1 + x_2 + 2x_3 + \varepsilon)$ realizing the equivalence in Lemma~\ref{lem:elswap}. Indeed, note that the closure of~$B^6(x_1 + x_2 + 2x_3)$ is the smallest closed ball containing the symmetric probe~$\sigma$ in the proof of Lemma~\ref{lem:elswap}, and the support of the Hamiltonian isotopy can be chosen to lie in an arbitrarily small neighbourhood of~$\sigma$. This yields a simple proof of~\cite[Lemma 4.1]{CheSch16}. Furthermore, by the same argument as in the proof of~\cite[Theorem 1.1 (ii)]{CheSch16}, this remark implies that for
\begin{equation}
\label{eq:balladmissible}
\sum_{i=1}^n x_i + d(x) < a, \quad
\sum_{i=1}^n y_i + d(y) < a
\end{equation}
the product tori~$T(x),T(y)$ are Hamiltonian isotopic by iterated symmetric probes \emph{inside the ball}~$B^{2n}(a)$. In other words, given a ball~$B^{2n}(a)$, there is a region~${\mathcal R}(a) \subset B^{2n}(a)$, defined in terms of its (open) moment polytope by
\begin{equation}
{\mathcal R}(a) = \mu_0^{-1}\left\{ x \in \mathbb{R}^n_{\geqslant 0} \, \left\vert \, \sum_{i=1}^n x_i + d(x) < a \right.\right\}
\end{equation}
in which the classification of product tori in~$B^{2n}(a)$ coincides with the classification of product tori in all of~$\mathbb{C}^n$ and all symmetric probes producing the equivalences are contained in~$\mu_0({\mathcal R}(a))$. Together with Chekanov's classification theorem~\ref{thm:chekanov}, this shows Corollary~\ref{cor:accumulation}, which also follows from the methods in~\cite{CheSch16}. Let us point out that one cannot drop~\eqref{eq:balladmissible} as was shown in~\cite[Theorem 1.2]{CheSch16}. A reasonable guess would be that in the complement of~${\mathcal R}(a)$, only coordinate permutations are allowed, since these are the only symmetric probes admissible in that region. However, we do not know the classification of product tori in the ball outside of~${\mathcal R}(a)$.
\end{remark}
\begin{corollary}
\label{cor:accumulation}
Let~$X$ be a toric manifold of dimension~$\geqslant 6$ whose moment polytope has at least one vertex. Then~$X$ contains toric fibres such that~${\mathfrak{H}}_x$ has accumulation points.
\end{corollary}
\subsection{In arbitrary toric manifolds}
\label{ssec:arbitrary}
The goal of this subsection is to illustrate that there are many symmetric probes in arbitrary toric manifolds. We focus on constructions near the boundary of the moment polytope~$\Delta$ using normal forms of Delzant polytopes. For example, each vertex~$v \in \Delta$ yields an equivariant symplectic ball embedding~$B^{2n}(a) \rightarrow X$, for each~$a$ smaller than the integral length of the shortest edge adjacent to~$v$. Note that this ball embedding is unique up to coordinate permutations. Let us denote the corresponding subset by~$B_v(a) \subset X$. Furthermore, denote the image of a product torus~$T(x) \subset B^{2n}(a)$ under the equivariant embedding by~$T_v(x) \subset B_v(a)$. From Remark~\ref{rk:classquant}, we deduce that there is a region~${\mathcal R}_v(a) \subset B_v(a)$ in which the same probes are admissible as in~$\mathbb{C}^n$. Let us show the following result about the classification of toric fibres close to vertices.
\begin{proposition}
\label{prop:vertexclass}
Let~$B^{2n}_v(a)$ and~$B^{2n}_{v'}(a)$ be balls at vertices~$v,v' \in \Delta$ with
\begin{equation}
0 < a < \min\{ \operatorname{\ell_{int}}(e) \, \vert \, e \text{ edge of } \Delta \}.
\end{equation}
Then~$T_v(x) \cong T_{v'}(x)$. In particular, we have~$T_v(x) \cong T_v(y)$ if and only if~$T_{v'}(x) \cong T_{v'}(y)$.
\end{proposition}
This means that in small enough neighbourhoods of vertices, the classification problem of toric fibres does not depend on the choice of vertex. \medskip
\proofof{Proposition~\ref{prop:vertexclass}}
The main idea of the proof is to use the edges of~$\Delta$ to construct symmetric probes exchanging a pair of toric fibres sitting close to the vertices at the endpoints of the given edge.
Let~$e \subset \Delta$ be an edge of the moment polytope with directional vector~$v_e \in \Lambda^*$. Let~$F,F' \subset \Delta$ be the two facets of~$\Delta$ adjacent to the endpoints of~$e$ but not containing~$e$. We note that the Delzant condition at the vertices adjacent to~$e$ implies that probes with directional vector~$v_e$ intersect~$F$ and~$F'$ integrally transversely. This can be easily seen by using the corresponding normal form mapping~$e$ to the span of~$e_n^*$ and~$F$ (or~$F'$) to the span of~$e_1^*, \ldots, e_{n-1}^*$. In other words, we obtain symmetric probes parallel to~$e$, as long as their two endpoints lie on~$F$ and~$F'$. If~$a < \min\{ \operatorname{\ell_{int}}(e) \, \vert \, e \text{ edge of } \Delta \}$, then every~$T_v(x) \subset B_v(a)$ can be accessed by such a symmetric probe. Any two vertices~$v,v' \in \Delta$ can be linked by a chain of edges and this proves the claim, up to performing coordinate permutations in one of the two ball embeddings (which can always be realized by symmetric probes).
\hspace*{\fill} $\Box$\\
The technique used in the previous proof can be generalized to any symmetric probe in a face~$\Delta'$ of a Delzant polytope. Recall that any face of a Delzant polytope is itself Delzant.
\begin{proposition}
Let~$\Delta' \subset \Delta$ be a face and~$\sigma' \subset \Delta'$ a symmetric probe therein. Then there is a neighbourhood~$U$ of~$\sigma'$ such that any parallel translate of~$\sigma'$ in~$U \cap \operatorname{int} \Delta$ is a symmetric probe.
\end{proposition}
\noindent {\it Proof. \;}
Let~$\sigma' \subset \Delta'$ be a symmetric probe with endpoints on the facets~$f,f' \subset \Delta'$. We can write~$f = \Delta' \cap F$ and similarly~$f' = \Delta' \cap F'$ for~$F,F'$ facets of~$\Delta$. Let us now show that any parallel translate of~$\sigma'$ with endpoints on~$F$ and~$F'$ is an admissible symmetric probe in~$\Delta$. At the face~$f = \Delta' \cap F$, we can choose a normal form such that~$\Delta'$ spans the coordinate subspace spanned by~$e_1^*,\ldots,e_k^*$ and~$F$ the one spanned by~$e_2^*, \ldots, e_n^*$. Integral transversality of the intersection of~$\sigma'$ and~$f$ implies that~$v_{\sigma'},e_2^*,\ldots,e_k^*$ is a lattice basis for the sublattice spanned by~$e_1^*,\ldots,e_k^*$. Here, we have denoted the directional vector of the symmetric probe~$\sigma'$ by~$v_{\sigma'}$. This implies that~$v_{\sigma'},e_2^*,\ldots,e_n^*$ is a lattice basis of the full lattice in the ambient space, proving integral transversality.
\hspace*{\fill} $\Box$\\
\newpage
\bibliographystyle{abbrv}
macromolecule by a mechanical model that consists of two identical
beads connected by a spring. The solvent in which the beads are
suspended is assumed to be Newtonian, and attention is restricted to
flows which have a homogeneous velocity field, {\it i.e.}\ of the form $
\bv = \bv_0 + \bk (t) \cdot \br $, where $\bv_0$ is a constant vector,
$\bk(t)$ is a traceless tensor, and $ \br$ is the position vector with
respect to a laboratory-fixed frame of reference. The instantaneous
positions of the beads are specified by the bead position vectors $ \br_1$
and $ \br_2$.
This paper examines the consequence of introducing an excluded volume
interaction between the beads of the dumbbell, so that the total
potential experienced by the beads, $ \phi $, is the sum of the spring
potential $S$, and the excluded volume potential $E$. The force on bead
$\nu$ due to this potential, $\bFnu$, is then given by
$ \bFnu = - \, {\partial \phi / \partial \br_{\nu} }$.
For Hookean springs, the spring potential is given by $S
= ({1 / 2})\, H Q^2$, where $H$ is the spring constant. Two forms of
the excluded volume potential are considered in this work, namely, the
narrow Gaussian potential, and Fixman's quadratic
potential. These are discussed in greater detail subsequently. Here,
we summarise the governing equations that are valid for an
arbitrary choice of the excluded volume potential $E$.
For homogeneous flows, the configurational distribution function $
\psi(\bQ, t )$ depends only on the internal configuration of the
dumbbell, specified by the bead-connector vector $ \bQ = \br_{2} -
\br_1 $, and not on the center of mass. The quantity $ \psi( \bQ, t )\,
d \bQ $ is then the probability that at time $t$ the dumbbell has a
configuration in the range $\bQ$ to $\bQ + d\bQ$. Using the framework
of polymer kinetic theory~\cite{bird2} one can show that the
distribution function $ \psi( \bQ, t )$, in the presence of excluded
volume, satisfies the following {\it diffusion equation},
\begin{equation} {\partial \psi \over \partial t} = - \, {\partial
\over \partial \bQ} \cdot \biggl\lbrace (\bk \cdot \bQ) \, \psi - {2
\over \zeta} \, \psi \, \dphi - {2 k_B T \over \zeta} \, {\partial
\psi \over \partial \bQ } \biggr\rbrace \label{diff} \end{equation}
where, $\zeta$ is the bead friction coefficient ({\it ie.,} $\zeta=6 \pi
\eta_s a$ for spherical beads with radius $a$, in a solvent with
viscosity $\eta_s$), $ k_B$ is Boltzmann's constant, and $T$ is the
absolute temperature.
The stress tensor, $\btau$, in a polymer solution is considered to be
given by the sum of two contributions, $\btau=\btau^s + \btau^p$, where
$\btau^s$ is the contribution from the solvent, and $\btau^p$ is the
polymer contribution. Since the solvent is assumed to be Newtonian,
$\btau^s= - \, \eta_s \, {\dot \bgam}$, where ${\dot \bgam}$ is the
rate of strain tensor, ${\dot \bgam} = (\nabla \bv) (t) + (\nabla
\bv)^\dagger (t)$. The rheological properties of a dilute polymer
solution may thus be obtained by calculating the polymer contribution
to the stress tensor, $\btau^p$. For a dumbbell model in the presence
of excluded volume, it is given by the Kramers
expression~\cite{bird2},
\begin{equation}
\btau^p=- n \, \avel \, \bQ \, \dphi\, \aver + nk_BT \, \bu
\label{kram}
\end{equation}
Here, $n$ is the number density of polymers, and angular brackets represent
averaging with respect to the configurational distribution function
$\psi (\bQ, t)$.
For both the excluded volume potentials considered here, it will turn
out that calculation of the second moment $\mom$ is necessary in order
to evaluate the average in equation~(\ref{kram}). A time evolution
equation for the second moment can be obtained by multiplying the
diffusion equation~(\ref{diff}) by $\bQ \bQ$ and integrating over all
configurations, \begin{equation} {d \over dt} \mom = \bk \cdot \mom +
\mom \cdot \bk^T + {4 k_B T \over \zeta}\, \bu - {2\over \zeta} \,
\left[ \, \avel \bQ \dphi \aver + \avel \dphi \bQ \aver \, \right]
\label{secmom} \end{equation}
It proves convenient for subsequent calculations to introduce the
following dimensionless variables,
\begin{equation}
t^* = {t \over
\lambda_H}, \quad \bQ^* = \sqrt{{ H \over k_B T}} \, \bQ, \quad \bk^* =
\lambda_H \bk, \quad \phi^* = S^* + E^*
\label{nondimvar}
\end{equation}
where, $ \lambda_H = (\zeta / 4H)$ is the familiar time
constant, $ S^* = S / k_B T = (1 / 2) \, {\bQ^*}^2 $ and $E^* = E / k_B
T $ are the non-dimensional Hookean spring potential and the
non-dimensional excluded volume potential, respectively. Note that
$\theta$-solvent values of a typical time scale ($ \lambda_H$)
and a typical length scale ($\sqrt{k_B T / H}$) are used for the purpose
of non-dimensionalisation, regardless of the form of the excluded
volume potential.
The intramolecular force is expected to be in the direction of the
bead-connector vector. Therefore, it can be written in terms of
non-dimensional variables as, \begin{equation} {\partial \phi^* \over
\partial \bQ^*} = H^*(Q^*) \, \bQ^* \label{evforce} \end{equation}
where $H^*(Q^*)$ is an arbitrary function of the magnitude of $\bQ^*$.
As a result, the diffusion equation~(\ref{diff}),
in terms of non-dimensional variables, is given by,
\begin{equation} {\partial \psi \over \partial t^*} = - \, {\partial
\over \partial \bQ^*} \cdot \biggl\lbrace \bk^* \cdot \bQ^* - {1 \over
2} \, H^*(Q^*) \, \bQ^* \biggr\rbrace \, \psi \, + \, {1 \over 2} \,
{\partial \over \partial \bQ^*} \cdot {\partial \psi \over \partial
\bQ^* } \label{nondimdiff} \end{equation} while, the Kramers
expression~(\ref{kram}) assumes the form, \begin{equation} {\btau^p
\over nk_BT} = - \avel H^* (Q^*) \, \bQ^* \bQ^* \aver + \bu
\label{nondimkram} \end{equation} and the second moment
equation~(\ref{secmom}) becomes, \begin{equation} {d \over dt^*} \avel
\bQ^* \bQ^* \aver = \bk^* \cdot \avel \bQ^* \bQ^* \aver + \avel \bQ^*
\bQ^* \aver \cdot {\bk^*}^T - \avel H^* (Q^*) \, \bQ^* \bQ^* \aver +
\bu \label{nondimsecmom} \end{equation} Equation~(\ref{nondimsecmom})
is in general not a closed equation for the second moments since it
depends on the form of the function $H^*(Q^*)$ ({\it ie.} on the choice
of excluded volume potential).
On examining equations~(\ref{nondimkram}) and~(\ref{nondimsecmom}), it
is straightforward to see that the second moment
equation~(\ref{nondimsecmom}) is nothing but the Giesekus expression
for the stress tensor, \begin{equation} {\btau^p \over nk_BT} = {d
\over dt^*} \avel \bQ^* \bQ^* \aver - \bk^* \cdot \avel \bQ^* \bQ^*
\aver - \avel \bQ^* \bQ^* \aver \cdot {\bk^*}^T \label{Giesekus}
\end{equation}
While the equations derived in this section are valid for arbitrary
homogeneous flows, in this paper we confine attention to the prediction of
rheological properties in simple shear flows, defined in the
section below.
\section{Simple Shear Flows}
\subsection{Steady simple shear flow}
Steady simple shear flows are described by a tensor $\bk$ which
has the following matrix representation in the laboratory-fixed
coordinate system, \begin{equation} \bk={\dot \gamma} \, \pmatrix{ 0 &
1 & 0 \cr 0 & 0 & 0 \cr 0 & 0 & 0 \cr } \label{ssf1} \end{equation}
where ${\dot \gamma}$ is the constant shear rate.
The three independent material functions used to characterize such
flows are the viscosity, $\eta_p$, and the first and second normal
stress difference coefficients, $\Psi_1 \,\,{\rm and}\,\, \Psi_2$,
respectively. These functions are defined by the following relations
\cite{bird1}, \begin{equation} \tau_{xy}^p = - {\dot \gamma}\, \eta_p
\, ; \quad \quad \tau_{xx}^p- \tau_{yy}^p = - {\dot \gamma^2}\, \Psi_1
\, ; \quad \quad \tau_{yy}^p- \tau_{zz}^p = - {\dot \gamma^2}\, \Psi_2
\label{sfvis} \end{equation}
\subsection{Small amplitude oscillatory shear flow} A transient
experiment that is used often to characterise polymer solutions is
small amplitude oscillatory shear flow, where the tensor $\bk(t)$ is
given by, \begin{equation} \bk(t)={\dot \gamma}_0 \, \cos \, \omega t
\pmatrix{ 0 & 1 & 0 \cr 0 & 0 & 0 \cr 0 & 0 & 0 \cr } \label{usf3}
\end{equation} Here, ${\dot \gamma_0}$ is the amplitude, and $\omega$
is the frequency of oscillations in the plane of flow. The polymer
contribution to the shear stress, $\tau_{yx}^p$, depends on time
through the relation \cite{bird1}, \begin{equation} \tau_{yx}^p=-
\eta^\prime(\omega)\, {\dot \gamma}_0 \, \cos \, \omega t -
\eta^{\prime\prime}(\omega)\, {\dot \gamma}_0 \, \sin \, \omega t
\label{usf4} \end{equation} where $\eta^\prime$ and $
\eta^{\prime\prime}$ are the material functions characterising
oscillatory shear flow. They are represented in a combined form as the
complex viscosity, $\eta^* =\eta^\prime - i \,\eta^{\prime\prime}$.
In the linear viscoelastic flow regime, the stress tensor is described
by the linear constitutive relation,
\begin{equation}
\btau^p (t) = - \,
\int_{- \infty}^t d\!s \, G(t-s)\, {\dot \bgam}(s)
\label{usf5}
\end{equation}
where $G(t)$ is the relaxation modulus. As a result, for
oscillatory shear flows with a small amplitude ${\dot \gamma_0}$,
expressions for the real and imaginary parts of the complex viscosity
can be found in terms of the relaxation modulus from the expression,
\begin{equation} \eta^*= \int_0^\infty G(s)\, e^{-i \omega s} \, d\!s
\label{usf6} \end{equation}
Note that the zero shear rate viscosity $\eta_{p,0}$ and the zero shear
rate first normal stress difference $\Psi_{1,0}$, which are linear
viscoelastic properties, can be obtained from the complex viscosity in
the limit of vanishing frequency, \begin{equation} \eta_{p,0} =
\lim_{\omega\to 0} \, \eta^{\prime} (\omega) \, ; \quad \quad
\Psi_{1,0} = \lim_{\omega\to 0} {2 \, \eta^{\prime\prime} (\omega)
\over \omega} \label{usf8} \end{equation}
\section{Retarded Motion Expansion}
A retarded motion expansion for the stress tensor, derived previously
for the FENE dumbbell model~\cite{bird2}, can be adapted to the present
instance by recognising that the FENE spring potential (as indeed any
choice of excluded volume potential) is but a particular
example of a connector force potential between the beads of the
dumbbell. In this section, we briefly summarise the development of a
retarded motion expansion for an arbitrary choice of the excluded volume
potential. Details of the derivation may be found in
Bird~et al.\ \cite{bird2}.
We seek a solution of the diffusion equation~(\ref{nondimdiff}) whereby
the configurational distribution function, $\psi (\bQ, t)$, can be
written as a product of an equilibrium contribution and a flow
contribution,
\begin{equation}
\psi (\bQ, t) = \psi_{\rm eq} (\bQ) \,
\phi_{\rm fl} \, (\bQ, t)
\label{prod}
\end{equation}
The governing equation for the flow contribution
$\phi_{\rm fl} \, (\bQ,t)$,
\begin{equation}
{\partial \phi_{\rm fl} \over \partial t^*} = - \,
\biggl\lbrace \, {\partial \phi_{\rm fl} \over \partial \bQ^*} -
\phi_{\rm fl} \, {\partial \phi^* \over \partial \bQ^*} \,
\biggr\rbrace \cdot \bk^* \cdot \bQ^* + {1 \over 2} \, \biggl\lbrace
{\partial \over \partial \bQ^*} \cdot {\partial \phi_{\rm fl} \over
\partial \bQ^* } -
{\partial \phi_{\rm fl} \over \partial \bQ^*} \cdot {\partial \phi^*
\over \partial \bQ^* } \, \biggr\rbrace
\label{flowdiff}
\end{equation}
can be obtained by substituting equation~(\ref{prod}) into the
diffusion equation~(\ref{nondimdiff}), and exploiting the fact that the
equilibrium distribution function is given by, \begin{equation}
\psi_{\rm eq} (\bQ) = {\cal N_{\rm eq}} \, e^{- \phi^*} \label{eqdist}
\end{equation} where ${\cal N_{\rm eq}}$ is the normalisation
constant.
Regardless of the form of the excluded volume potential, at steady state,
an exact solution to equation~(\ref{flowdiff}) can be found
for all homogeneous {\it potential} flows~\cite{bird2}. For more
general homogeneous flows, however, it is necessary to seek a perturbative
solution. The flow contribution, $\phi_{\rm fl} \, (\bQ,t)$, is
assumed to be expandable in a power series in the velocity gradients,
\begin{equation}
\phi_{\rm fl} \, (\bQ,t) = 1+ \phi_1 + \phi_2 + \phi_3 + \ldots
\label{flowdistexp}
\end{equation}
where $\phi_k$ is of order $k$ in the velocity gradient.
Partial differential equations governing each of the $\phi_k$ may be
obtained by substituting equation~(\ref{flowdistexp}) into
equation~(\ref{flowdiff}) and equating terms of like order.
Following the procedure suggested in Bird~et al.\ \cite{bird2},
one can judiciously guess the specific
forms for the functions $\phi_k$ by noting that each of these functions
must have certain properties.
The form of the function $\phi_{1}$ which satisfies these requirements is,
\begin{equation}
\phi_{1} = {1\over 2} \, \bQ^* \cdot {\dot \bgam} \cdot \bQ^*
\label{phi1}
\end{equation}
while the form of $\phi_{2}$ can be guessed to be,
\begin{equation}
\phi_{2} = {1 \over
8} \, (\bQ^* \cdot {\dot \bgam} \cdot \bQ^*)^2 - {1 \over 60} \, \avel
\, {\bQ^*}^4 \, \aver_{\rm eq} \, {\rm tr} \, ( {\dot \bgam} \cdot
{\dot \bgam} ) + A^* (Q^*) \, \bQ^* \cdot {\dot \bgam} \cdot \bomega
\cdot \bQ^*
\label{phi2}
\end{equation}
where, $\avel \quad \aver_{\rm eq}$ denotes an average with respect to
$ \psi_{\rm eq} (\bQ) $,
$\bomega$ is the vorticity tensor, defined here as
$\bomega= \bk^* - {\bk^*}^T $, and the scalar function $A^*(Q^*)$
obeys the following second order differential equation,
\begin{equation}
{d^2 \, A^* \over d \, {Q^*}^2} + \biggl( \, {6 \over Q^*} -
H^*(Q^*) \, Q^* \, \biggr) \, {d \, A^* \over d \, {Q^*}} - 2 \, A^*
(Q^*) \, H^*(Q^*) = 1
\label{Aode}
\end{equation}
It is difficult to suggest
boundary conditions for equation~(\ref{Aode}) other than to say that
the solution must be such that $ \psi (\bQ, t)$ is bounded. In the
case of FENE springs, it is possible to explicitly obtain the
particular solution of a similar second order differential equation.
It is clear from equation~(\ref{Giesekus}) that at steady state, the
stress tensor can be found once $\avel \bQ^* \bQ^* \aver$ is known.
The second moment $\avel \bQ^* \bQ^* \aver$ can be found correct to
second order in velocity gradients by using the power series
expansion~(\ref{flowdistexp}) for $\phi_{\rm fl} \, (\bQ,t)$,
and the specific forms for $\phi_{1}$ and $\phi_{2}$
in equations~(\ref{phi1}) and (\ref{phi2}), respectively.
This leads to the following expression for
the stress tensor, correct to third order in velocity gradients,
\begin{eqnarray}
- {\btau^p \over nk_BT} &=& {\lambda_H \over 3} \,
\biggl( {H \over k_{\rm B} T} \biggr) \avel Q^2 \aver_{\rm eq}
\{ {\dot \bgam} \} + {\lambda_H^2 \over 30} \, \biggl( {H \over k_{\rm
B} T} \biggr)^2 \avel Q^4 \aver_{\rm eq} \,
\bigl\lbrace 2 {\dot \bgam}^2 - ( {\dot \bgam} \cdot \bomega -
\bomega \cdot {\dot \bgam} ) \bigr\rbrace \nonumber \\
&+& {\lambda_H^3 \over 105} \,
\biggl( {H \over k_{\rm B} T} \biggr)^3 \, \avel Q^6 \aver_{\rm eq} \,
\{ \, {3 \over 4} \, {\rm tr} \, ( {\dot \bgam} \cdot {\dot \bgam} ) \,
{\dot \bgam} - {1 \over 2} \, ( {\dot \bgam}^2 \cdot
\bomega - \bomega \cdot {\dot \bgam}^2 ) \bigr\rbrace \nonumber \\
&-& {\lambda_H^3 \over 180} \, \biggl( {H \over k_{\rm B} T} \biggr)^3
\, \avel Q^4 \aver_{\rm eq} \, \avel Q^2 \aver_{\rm eq} \,
\{ {\rm tr} \, ( {\dot \bgam} \cdot {\dot \bgam} ) \,
{\dot \bgam} \} \nonumber \\ &+&
{\lambda_H^3 \over 30} \, \biggl( {H \over k_{\rm B} T} \biggr)^2 \,
\avel Q^4 \, A^* \aver_{\rm eq} \, \bigl\lbrace \, ( {\dot \bgam}^2
\cdot \bomega - \bomega \cdot {\dot \bgam}^2 ) + \bomega \cdot ( {\dot
\bgam} \cdot \bomega - \bomega \cdot {\dot \bgam} ) \nonumber \\
&-& ({\dot \bgam} \cdot \bomega - \bomega \cdot {\dot \bgam} ) \cdot \bomega
\, \bigr\rbrace + \ldots
\label{retmot}
\end{eqnarray}
The Cayley-Hamilton theorem has been used to eliminate the term ${\dot
\bgam}^3$ in equation~(\ref{retmot}), and an isotropic term that does
not affect the rheological properties has been dropped.
The stress tensor in simple shear flow, for small values of
the non-dimensional shear rate $\lambda_H \, {\dot \gamma}$,
can be found by substituting equation~(\ref{ssf1}) for the rate
of strain tensor, in equation~(\ref{retmot}). Using the
definitions of the viscometric functions in equation~(\ref{sfvis}), the
following power series expansions are obtained,
\begin{eqnarray}
{\eta_{p} \over \lambda_H n k_{\rm B} T } &=& {1 \over 3} \, \biggl( {H
\over k_{\rm B} T} \biggr) \, \avel Q^2 \aver_{\rm eq} + \biggl\lbrace
\, {2 \over 15} \,\biggl( {H \over k_{\rm B} T} \biggr)^2 \, \avel
Q^4 \, A^* \aver_{\rm eq} + {1 \over 70} \, \biggl(
{H \over k_{\rm B} T} \biggr)^3 \, \avel Q^6
\aver_{\rm eq} \nonumber \\
&-& {1 \over 90} \, \biggl( {H \over k_{\rm B} T}
\biggr)^3 \, \avel Q^4 \aver_{\rm eq} \, \avel Q^2 \aver_{\rm eq} \,
\biggr\rbrace \, (\lambda_H \, {\dot \gamma})^2 + \ldots
\label{etap} \\
{\Psi_{1} \over \lambda_H^2 n k_{\rm B}
T } &=& {2 \over 15} \, \biggl( {H \over k_{\rm B} T} \biggr)^2 \,
\avel Q^4 \aver_{\rm eq} + \ldots
\label{Psi1}
\end{eqnarray}
Clearly, equation~(\ref{Psi1}) indicates that one must expand to higher
orders in velocity gradients before the shear rate dependence of the
first normal stress difference can be obtained. Zero shear rate
properties, however, can be obtained from equations~(\ref{etap})
and~(\ref{Psi1}).
\section{The Narrow Gaussian Potential}
In the static theory of polymer solutions it is common to represent
the dimensionless excluded volume potential, between two
points on the polymer chain separated by a non-dimensional
distance $\bQ^*$, with the Dirac delta function,
\begin{equation}
E^*\,(\bQ^*) = (2 \pi )^{3 / 2} z \,\, \delta \, (\bQ^*)
\label{delpot}
\end{equation}
where, $ z = v \, ( \, {H
/ 2 \pi k_B T} \,)^{3 / 2} $ is a non-dimensional parameter which
represents the strength of the excluded volume interaction,
and in which $v$---which has the dimensions of
volume---is called the `excluded volume parameter'~\cite{doi}.
The parameter $z$ is frequently used in theories that
incorporate excluded volume,
as it is considered to be the appropriate parameter to be used in
perturbation expansions.
As mentioned earlier, excluded volume interactions
are taken into account in this work by means of a narrow Gaussian
potential~\cite{ottbk}. The narrow Gaussian potential has the
following form in terms of non-dimensional variables,
\begin{equation}
E^*\,(\bQ^*)= {z \over
\mu^3 } \,\, \exp \, \biggl(\, - {1 \over 2} \, {{Q^*}^2 \over \mu^2}\,
\biggr)
\label{nondimnagpot}
\end{equation}
It is clear from equation~(\ref{nondimnagpot}) that the non-dimensional
parameter $\mu$ controls the extent of the excluded volume interaction,
and as $\mu \to 0$, the narrow Gaussian potential tends to the
$\delta$--potential. The narrow Gaussian potential,
as mentioned earlier, serves as a means of regularising the
singular $\delta$--potential and consequently permits the evaluation of
results obtained with a $\delta$--potential.
With excluded volume interactions described by the narrow Gaussian
potential, the function $H^* (Q^*)$, which appears in
equation~(\ref{evforce}) for the non-dimensional force between the
beads of the dumbbell, is given by,
\begin{equation}
H^* (Q^*) \equiv H_G^*(Q^*)
= 1 - \left( {z \over \mu^5} \right) \, \exp \,
\left[ \, - \, {{Q^*}^2 \over 2 \mu^2}\, \right]
\label{nondimevforce}
\end{equation}
The complex form of this function implies that the diffusion
equation~(\ref{nondimdiff}) cannot be solved analytically to
obtain the non-equilibrium configurational distribution function
$\psi(\bQ, t )$. Furthermore, the time evolution equation for the
second moments~(\ref{nondimsecmom}) is not a closed equation for the
second moments since it involves the higher order moment $\avel H_G^*
(Q^*) \, \bQ^* \bQ^* \aver$ on the right hand side. As a result,
perturbative methods, non-perturbative approximation procedures or
numerical schemes must be used in order to obtain the material
functions predicted by the narrow Gaussian potential.
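For use in the numerical schemes described below, the potential and the force-law function are straightforward to code; a minimal sketch in Python (the function names are ours, not standard):
\begin{verbatim}
import numpy as np

def E_star(Q, z, mu):
    # non-dimensional narrow Gaussian potential defined above
    return (z / mu**3) * np.exp(-0.5 * Q**2 / mu**2)

def H_star_G(Q, z, mu):
    # force-law function H_G*(Q*); the non-dimensional connector
    # force is H_G*(Q*) Q* (Hookean spring plus excluded volume)
    return 1.0 - (z / mu**5) * np.exp(-0.5 * Q**2 / mu**2)
\end{verbatim}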
\subsection{Zero shear rate properties}
The viscosity and the first normal stress
difference predicted by the narrow Gaussian potential at low shear rates
can be obtained from the equations~(\ref{etap}) and
(\ref{Psi1}), respectively, once the equilibrium averages that occur in
these expressions are evaluated. For the narrow Gaussian potential, the
equilibrium distribution function is given by equation~(\ref{eqdist}),
with the excluded volume contribution to the non-dimensional potential
$\phi^*$ given by equation~(\ref{nondimnagpot}). We denote the various
non-dimensional moments of the equilibrium distribution for
a narrow Gaussian potential by,
\begin{equation}
q_m \equiv \, \biggl( {H \over k_{\rm B} T}
\biggr)^m \, \avel Q^{2m} \aver_{\rm eq} \, ; \quad m=1,2,3, \ldots
\end{equation}
In order to obtain the viscosity at non-zero
shear rates, it is necessary to find the function $A^*$ that
satisfies the second order differential equation~(\ref{Aode}) [with
$H^* (Q^*) = H^*_G(Q^*)$]. Unlike in the case of an FENE dumbbell, it
has not been possible to obtain the particular solution to
equation~(\ref{Aode}). As a result, attention is confined here
to obtaining the
zero shear rate predictions of the narrow Gaussian potential,
\begin{eqnarray}
{\eta_{p,0} \over \lambda_H n k_{\rm B} T } &=& {1 \over 3} \, q_1
\label{etap02} \\
{\Psi_{1,0} \over \lambda_H^2 n k_{\rm B} T }
&=& {2 \over 15} \, q_2
\label{Psi102}
\end{eqnarray}
for which only the moments $q_1$ and $q_2$ are required.
Alternative methods will be used in sections~5.2 to 5.4 to
obtain the shear rate dependence of the viscometric functions.
The non-dimensional equilibrium moments $q_1$ and $q_2$ can be obtained
exactly, as will be shown in section~5.1.1 below.
The need to calculate equilibrium moments is also frequently
encountered in static theories of polymer solutions.
In these theories, as mentioned earlier, it is common to represent
excluded volume interactions with a delta-function excluded volume
potential, and furthermore, to obtain universal predictions
by considering the limit of long
chains. The Hamiltonian then typically has two singular
objects---making it impossible to evaluate equilibrium moments exactly.
The most successful approach so far towards approximately evaluating
these moments has been to develop a perturbation expansion in
the parameter $z$, and to use renormalisation group methods to
refine the results of the perturbation calculation~\cite{doi,declos}.
An alternative and simpler non-perturbative
route is the uniform expansion model~\cite{doi}.
The use of the narrow Gaussian potential---albeit in the simple
context of Hookean dumbbells---provides an opportunity to compare
the results of these approximate models with
the exact solution. The rigorous solution is discussed in
section~5.1.1 below, while the perturbation expansion and
the uniform expansion models are discussed in
sections~5.1.2 and~5.1.3, respectively.
\subsubsection{Exact solution}
It is straightforward to show that
the moments $q_m$ are given by the ratio of two integrals,
$ q_m = (I_m / I_0)$, where,
\begin{equation}
I_j \equiv \int_0^\infty \,
{Q^*}^{2j+2} \, \exp \, \bigl\lbrace \, - \, (1/2) \, {Q^*}^2 -
E^* \, \bigr\rbrace \, d Q^* \, ; \quad j=0,1,2,\ldots
\label{sj}
\end{equation}
In the limit of $\mu \to 0$ and $\mu \to \infty$, these integrals
can be evaluated analytically.
Consider the quantities,
$$ p_j (Q^*)\equiv {Q^*}^{2j+2} \, \exp \, \lbrace
\, - \, {1 \over 2 } \, {Q^*}^2 - E^* \, \rbrace$$
which are the integrands for the integrals $I_j$, and
$$R_j (Q^*) \equiv {Q^*}^{2j+2} \, \exp \, \lbrace
\, - \, {1 \over 2 } \, {Q^*}^2 \, \rbrace$$
Now, $p_j(0) = R_j(0) = 0$, for all values of $\mu$.
At any non-zero value of $Q^*$, it is clear from
equation~(\ref{nondimnagpot}) that,
$$p_j (Q^*) \to R_j (Q^*) \quad {\rm as} \quad \mu \to 0
\quad {\rm or} \quad \infty $$
In other words, for all values of $Q^*$, the quantities $p_j (Q^*)$ tend
{\it pointwise} to $R_j (Q^*) $ as $\mu$ tends to zero or to infinity.
Furthermore, for all values of $\mu$,
it can be shown that $p_j (Q^*)$ are bounded functions
of $Q^*$ on $[0,\infty[$.
It then follows from a theorem of the calculus~\cite{buck} that,
$$ q_m \to { \int_0^\infty \, {Q^*}^{2m+2} \, \exp \,
\bigl\lbrace \, - \, {1 \over 2 } \, {Q^*}^2 \, \bigr\rbrace \, d Q^*
\over \int_0^\infty \, {Q^*}^{2} \, \exp \, \bigl\lbrace \, -
\, {1 \over 2 } \, {Q^*}^2 \, \bigr\rbrace \, d Q^* }
\quad {\rm as } \quad \mu \to 0 \quad {\rm or} \quad \infty $$
As a result, the asymptotic values of $q_m$ for $\mu \to
0$ and $\mu \to \infty$ are found to be {\it independent} of $z$,
and are equal to the $\theta$-solvent values,
$$ q_1=3 \, ; \quad q_2=15 \, ; \quad q_3=105 \, ; \, \ldots$$
This implies---from equations~(\ref{etap02}) and~(\ref{Psi102})---that
the use of a delta-function potential to represent excluded volume
interactions leads to the prediction of zero
shear rate properties in good solvents which are identical to those in
$\theta$-solvents.
Away from these limiting values of $\mu$, {\em i.e}.\ at non-zero
finite values of $\mu$, the integrals $I_j$
can be found by numerical quadrature. Here they have been evaluated
using a routine given in Numerical Recipes~\cite{numrec} for the
integration of an exponentially decaying integrand. Discussion of
zero shear rate property predictions in this case
is taken up in section~7.
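As an illustration of this quadrature, a minimal sketch using an adaptive routine from SciPy (rather than the Numerical Recipes routine used for the results reported here) is given below; it evaluates the moments $q_m = I_m / I_0$ and the zero shear rate properties of equations~(\ref{etap02}) and~(\ref{Psi102}).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def q_m(m, z, mu):
    # equilibrium moment q_m = I_m / I_0 for the narrow Gaussian potential
    def integrand(Q, j):
        E = (z / mu**3) * np.exp(-0.5 * Q**2 / mu**2)
        return Q**(2 * j + 2) * np.exp(-0.5 * Q**2 - E)
    I_m = quad(integrand, 0.0, np.inf, args=(m,))[0]
    I_0 = quad(integrand, 0.0, np.inf, args=(0,))[0]
    return I_m / I_0

z, mu = 3.0, 1.0
eta_p0 = q_m(1, z, mu) / 3.0         # eta_{p,0} / (lambda_H n k_B T)
Psi_10 = 2.0 * q_m(2, z, mu) / 15.0  # Psi_{1,0} / (lambda_H^2 n k_B T)
\end{verbatim}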
\subsubsection{Perturbation expansion }
Static theories for polymer solutions indicate that accounting
for excluded volume interactions with a delta function potential
leads to the prediction of
a {\em swelling} ({\em i.e}.\ an increase in the mean square end-to-end
distance) of the polymer chain, which is in close agreement
with experimental observations~\cite{doi,declos}.
In the case of the Hookean dumbbell, however, we have seen above that
the use of a delta function potential to account for the presence
of excluded volume (which corresponds to the limit $\mu \to 0$),
does not lead to any change in the prediction of equilibrium moments
when compared to the $\theta$-solvent case.
It is worthwhile therefore to examine the nature of the perturbation
expansion in $z$, and to compare it with the results of the exact calculation.
Upon expanding $e^{-E^*}$ in a power series, the integral $I_j$ has the form,
\begin{equation}
I_j \equiv \int_0^\infty \, d Q^* \,
\sum_{n=0}^{\infty} \, u_n (Q^*) \, ; \quad j=0,1,2,\ldots
\label{sjexp}
\end{equation}
where,
\begin{equation}
u_n (Q^*) = {(-1)^n \over n! } \, {Q^*}^{2j+2} \,
e^{- {1 \over 2} \, {Q^*}^2} \, \left( \, E^*(Q^*)\, \right)^n
\label{un}
\end{equation}
In order to carry out a term by term integration of the functional
series $\sum_{n=0}^{\infty} u_n (Q^*)$ in equation~(\ref{sjexp}),
it is necessary for the series to be {\it uniformly
convergent} on $[0,\infty[$. For all values of $z$, and $\mu \neq 0$,
uniform convergence can be established with the help of the Weierstrass
$M$ test~\cite{buck}.
Therefore, a term by term integration in
equation~(\ref{sjexp}) can be carried out, {\em except} when $\mu =0$.
Assuming that $\mu \neq 0$, and performing the
integration in equation~(\ref{sjexp}), one obtains,
\begin{equation}
I_j = 2^{j+{1 \over 2}} \, \Gamma \biggl(j+{3 \over 2} \biggr)
\, \sum_{n=0}^{\infty} \,
{ (-1)^n \over n! } \biggl({z \over \mu^3} \biggr)^n \,
{ \mu^{2j+3} \over (n+\mu^2)^{j+{3 \over 2}}}
; \quad j=0,1,2,\ldots
\label{ijper}
\end{equation}
Consider the moment $q_1$, which is undoubtedly the most interesting
physical moment. Using the perturbation expansion for $I_j$,
$q_1$ is given by the ratio, $q_1 = 3 \, (S_1 / S_0)$,
where, $S_0$ and $S_1$ are defined by,
\begin{eqnarray}
S_0 &=& \sum_{n=0}^{\infty} \,
{ (-1)^n \over n! } \biggl({z \over \mu^3} \biggr)^n \,
{ \mu^{3} \over (n+\mu^2)^{3 \over 2}} \nonumber \\
S_1 &=& \sum_{n=0}^{\infty} \,
{ (-1)^n \over n! } \biggl({z \over \mu^3} \biggr)^n \,
{ \mu^{5} \over (n+\mu^2)^{5 \over 2}}
\label{s0s1}
\end{eqnarray}
If only the first order perturbation term is retained, one obtains,
\begin{equation}
q_1 = 3 \, \left( 1+{ z \over (1 + \mu^2)^{5/2} } \right)
\label{equimom}
\end{equation}
Curiously, the limit $\mu \to 0$ can be carried out in this case. It
leads to a result which is in line with static theories, which indicate a
finite non-vanishing effect due to the presence of
$\delta$-potential excluded volume interactions. However, such a
limit cannot be carried out if higher order terms are
retained, since both the sums $S_0$ and $S_1$ diverge as
$\mu \to 0$. In static theories, higher order perturbation
expansions are obtained by {\em dropping} divergent terms, as they
are postulated not to matter. These divergent terms arise from products of
the $\delta$-potential for the {\em same} pair of interacting beads. In
dumbbells, these are the only kind of beads present. As a result,
there is no meaningful or mathematically consistent way of going
beyond first order perturbation theory for dumbbells in the limit
$\mu \to 0$.
Note that both the series, $S_0$ and $S_1$,
are alternating series. Furthermore, for given values of $z$ and $\mu$, the
terms decrease monotonically for large enough values of $n$. It follows then,
from the Leibniz criterion for alternating series~\cite{arfken}, that
both $S_0$ and $S_1$ converge for all values of $z$ and $\mu \neq 0$.
This suggests that, even though it is not possible to switch the
integral and summation in equation~(\ref{sjexp}) for $\mu = 0$,
the value of $q_1$ at $\mu = 0$ can be found by setting it equal
to the limit of $q_1(\mu)$ as $\mu \to 0$. We shall see
in section~7 that such a limiting process is infeasible since it
becomes numerically impossible to evaluate the sums~(\ref{s0s1})
for small enough values of $\mu$.
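The numerical difficulty is easy to reproduce. The following sketch evaluates the truncated sums $S_0$ and $S_1$, and hence $q_1 = 3\,(S_1/S_0)$, directly; truncation after fifty terms is an arbitrary illustrative choice.
\begin{verbatim}
from math import factorial

def q1_series(z, mu, n_terms=50):
    # q_1 = 3 S_1/S_0 from the term-by-term integrated series (mu != 0).
    # For small mu the terms of these alternating series first grow
    # enormously before decaying, so the truncated, finite-precision
    # sums become meaningless -- the breakdown noted above.
    S0 = sum((-1)**n / factorial(n) * (z / mu**3)**n
             * mu**3 / (n + mu**2)**1.5 for n in range(n_terms))
    S1 = sum((-1)**n / factorial(n) * (z / mu**3)**n
             * mu**5 / (n + mu**2)**2.5 for n in range(n_terms))
    return 3.0 * S1 / S0

print(q1_series(0.1, 1.0))    # well behaved
print(q1_series(0.1, 0.05))   # terms still growing at n = 50: meaningless
\end{verbatim}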
\subsubsection{Uniform expansion model}
The uniform expansion model seeks to approximate the average $\avel X
\aver_{\rm eq}$ of any quantity $X(\bQ)$, with $\avel X \aver_{\rm
eq}^\prime$, where $\avel \quad \aver_{\rm eq}^\prime $ denotes an
average with the Gaussian equilibrium distribution function,
\begin{equation} \psi_{\rm eq}^\prime (\bQ) = {\cal N_{\rm eq}^\prime}
\, \exp \, \bigl\lbrace \, - \, {3 \over 2 {b^\prime}^2 } \, Q^2 \,
\bigr\rbrace \label{uemgau} \end{equation} with ${\cal N_{\rm
eq}^\prime} = [ \, 3 / 2 \pi {b^\prime}^2 \, ]^{3 / 2}$. The aim is to
find the parameter $b^\prime$ that leads to the best possible
approximation. As may be expected, this depends on the quantity
$X(\bQ)$ that is averaged. The motivation behind the uniform expansion
model, and details regarding the calculation of $b^\prime$ are given in
appendix A. Since the equilibrium distribution function in the absence
of excluded volume is Gaussian, this approximation assumes that the
equilibrium distribution remains Gaussian upon the incorporation of
excluded volume, albeit with an increased mean square end-to-end distance.
If we define $u_m$, such that $u_m \equiv \, ( {H / k_{\rm B} T} )^m
\, \avel Q^{2 m} \aver_{\rm eq}^\prime$, then clearly $u_m$ is the
uniform expansion model approximation for $q_m $. The uniform
expansion model predictions of the zero shear
rate properties are given by
(\ref{etap02}) and~(\ref{Psi102}), with $q_m $ replaced by $u_m$.
Results of material properties obtained by numerical quadrature, and
by the uniform expansion model, are discussed in section~7.
\subsection{Brownian dynamics simulations}
Development of the retarded motion expansion has proved useful in
obtaining exact expressions for the zero shear rate properties. At
non-zero shear rates, exact predictions of the viscometric functions
can be obtained by solving the Ito stochastic differential
equation,
\begin{equation}
d \bQ^* = \left[ \, \bk^* - {1 \over 2} \, H^*(Q^*) \, \bu \, \right]
\cdot \bQ^* \, dt + d \bW
\label{ito}
\end{equation}
which corresponds to the non-dimensional diffusion
equation~(\ref{nondimdiff}), and in which $\bW$ is a three-dimensional
Wiener process.
For the narrow Gaussian potential, since $H^* (Q^*) = H^*_G(Q^*)$,
equation~(\ref{ito}) is non-linear. As a result, it cannot be
solved analytically.
Two different Brownian dynamics simulation algorithms have been
adopted here for the numerical solution of equation~(\ref{ito}).
Both schemes use a second order
predictor-corrector algorithm with time-step
extrapolation~\cite{ottbk}. The first scheme obtains
steady-state expectations by the simulation of a single long
trajectory, and is based on the assumption of ergodicity~\cite{ottbk}.
It has been used to obtain results at equilibrium, and for large values
of the shear rate. A second algorithm---which employs
a variance reduction procedure---has been used at low values of the
shear rate, since the variance for the viscometric functions is found
to be relatively large at these shear rates. Reduction in the variance
is obtained by following a scheme suggested by Wagner and
\"Ottinger~\cite{wagott}. The scheme---which constructs an ensemble of
trajectories over several relaxation times, from start-up of
flow to steady-state---essentially consists
of subtracting the rheological properties obtained from a parallel
equilibrium simulation. While
this does not change the average values of the properties,
it significantly reduces the fluctuations, since the fluctuations
are virtually the same at zero and small shear rates. The results of these
simulation algorithms are discussed in section~7.
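For concreteness, a stripped-down version of the first scheme is sketched below; it uses a plain Euler-Maruyama step rather than the second order predictor-corrector with time-step extrapolation that was actually employed, and is intended only to show the structure of the calculation.
\begin{verbatim}
import numpy as np

def bd_viscosity(z, mu, shear, dt=0.01, n_steps=2_000_000, seed=0):
    # Single-trajectory integration of the Ito SDE above in steady
    # shear flow; shear is the non-dimensional rate lambda_H*gamma_dot.
    rng = np.random.default_rng(seed)
    Q = rng.standard_normal(3)         # theta-solvent-like initial state
    acc, count = 0.0, 0
    for step in range(n_steps):
        H = 1.0 - (z / mu**5) * np.exp(-0.5 * (Q @ Q) / mu**2)
        drift = np.array([shear * Q[1], 0.0, 0.0]) - 0.5 * H * Q
        Q = Q + dt * drift + np.sqrt(dt) * rng.standard_normal(3)
        if step > n_steps // 10:       # discard the initial transient
            acc += H * Q[0] * Q[1]
            count += 1
    # Kramers expression: tau_xy/(n k_B T) = -<H_G* Qx* Qy*>, hence
    # eta_p/(lambda_H n k_B T) = <H_G* Qx* Qy*> / shear
    return acc / count / shear
\end{verbatim}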
\subsection{The Gaussian approximation}
The main obstacle (in the configuration space of the dumbbell) to
obtaining the rheological properties predicted by a narrow Gaussian
potential is that the second moment equation is not a closed equation.
A closure problem has also been encountered earlier in treatments of
the phenomenon of hydrodynamic interaction and internal viscosity,
where it has been shown that an accurate approximation can be obtained
by assuming that the non-equilibrium configurational distribution
function is a Gaussian distribution~\cite{zylkadb, zylkaga,
prakbk, schiebiv, wedgeiv}. In this section,
a similar systematic
approximation procedure for the treatment of excluded volume
interactions described by a narrow Gaussian potential is introduced.
The assumption that $\psi(\bQ,t)$ is a Gaussian distribution,
\begin{equation} \psi(\bQ,t) = {1 \over (2 \pi )^{3 / 2}} \, {1 \over
{\sqrt{ {\rm det} \mom}}} \,\, \exp \, \biggl\lbrace \, - {1 \over 2}
\, \bQ \cdot \mom^{-1} \cdot \bQ \, \biggr\rbrace \label{gausdist}
\end{equation} makes the second moment equation~(\ref{nondimsecmom}) a
closed equation, since the higher order moment $\avel H_G^* (Q^*) \,
\bQ^* \bQ^* \aver$ can be expressed in terms of the second moment. On
performing this reduction, it can be shown that the Gaussian
approximation leads to the following closed second moment equation,
\begin{equation}
{d \, \over dt^* } \moms = \bk^* \cdot \moms + \moms
\cdot {\bk^*}^T - \moms + {z \over {{\sqrt {{\rm det} \,
\lbrack \, {\moms} + \, \mu^2 \, {\bu } \rbrack }}}} \, \bPi + \bu
\label{gasecmom}
\end{equation}
where, $$ \bPi = \bigl\lbrack
\moms + \mu^2 \, {\bu} \, \bigr\rbrack^{\, -1} \cdot \moms$$
It is also straightforward to show that on introducing the Gaussian
approximation, the Giesekus expression for the stress tensor has the
form,
\begin{equation}
{\btau^p \over n k_B T } = - \moms + {z \over
{\sqrt {{\rm det} \, \lbrack \, {\moms} + \, \mu^2 \, {\bu } \rbrack
}}} \, \bPi + \, \bu
\label{gakram}
\end{equation}
The steady state viscometric functions [defined by
equations~(\ref{sfvis})] can therefore be found once
equation~(\ref{gasecmom}) is solved for $\moms$.
It is worth examining the nature of the polymer contribution to the
stress tensor. In the limit $\mu \to 0$, $\bPi$ reduces to $\bu$, and
as a result, the presence of excluded volume only makes an indirect
contribution through its influence on the second moment $\moms$. This
follows from the fact that an isotropic contribution to the stress
tensor makes no difference to the rheological properties of the polymer
solution. On the other hand, for non-zero values of $\mu$, the
rheological properties are also affected directly by excluded volume.
Under the Gaussian approximation, linear viscoelastic properties
can be obtained by deriving a first order codeformational memory
integral expansion for the stress tensor.
The tensor $\moms$ is expanded, in terms of deviations from its
isotropic equilibrium solution, up to first order in the velocity gradient,
\begin{equation}
\moms= \alpha^2 \, ( \bu + \beps + \ldots \, )
\label{linvisexp}
\end{equation}
where, the parameter $\alpha$
(commonly called the swelling ratio) is defined by,
\begin{equation}
\alpha^2= {\avel \, Q^2 \, \aver_{\rm eq} \over \avel \, Q^2 \,
\aver_{0,\rm eq}}
\label{alpha}
\end{equation}
Here, $\avel \, Q^2 \,
\aver_{0,\rm eq} = ({3 k_B T / H})$ is the mean square end-to-end
distance in the absence of excluded volume. Clearly, $\alpha$
represents the {\it equilibrium} swelling of the polymer chain caused
by the presence of excluded volume. $\alpha$ is not an
independent parameter since at equilibrium the second moment
equation~(\ref{gasecmom}) reduces to the following consistency
relationship between $z$, $\mu$ and $\alpha$,
\begin{equation}
z = (1 - \alpha^{-2} \, ) \, \lbrack \,
\alpha^2 + \mu^2 \, \rbrack^{5 \over 2}
\label{equirel}
\end{equation}
The well known scaling relation for the end-to-end
distance with the number of monomers $N$, namely,
$\sqrt{ \avel \, Q^2 \, \aver_{\rm eq}} \sim N^{3/5}$,
may be obtained from equations~(\ref{alpha}) and
(\ref{equirel}) in the limit of large $N$ by noting that---since excluded
volume is a pairwise interaction---$z$ must scale as $\sqrt{N}$ for
dumbbells~\cite{larson}.
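In practice, the consistency relation~(\ref{equirel}) is solved numerically for $\alpha$ at given $z$ and $\mu$; a minimal sketch using Brent's method from SciPy (the bracket is a safe but otherwise arbitrary choice, and $z>0$ is assumed):
\begin{verbatim}
from scipy.optimize import brentq

def alpha_from_z(z, mu):
    # swelling ratio alpha >= 1 from z = (1 - alpha^-2)(alpha^2+mu^2)^(5/2);
    # the right hand side increases monotonically in alpha, so a single
    # bracketed root exists for z > 0
    f = lambda a: (1.0 - a**-2) * (a**2 + mu**2)**2.5 - z
    return brentq(f, 1.0 + 1e-12, 10.0 * (1.0 + z))
\end{verbatim}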
Substituting the expansion~(\ref{linvisexp}) into the evolution
equation~(\ref{gasecmom}) leads to, \begin{equation} {d \over dt^*} \,
\beps= \bk^* + {\bk^*}^T - {1 \over \tau^*} \, \beps \label{lvdif}
\end{equation} where, \begin{equation} \tau^* = \biggl[ 1 - {z \,
\mu^2
\over ( \, \alpha^2 + \mu^2 \, )^{7/2}} \biggr]^{\, -1} \label{lvtau}
\end{equation} Furthermore, the stress tensor~(\ref{gakram}) up to first
order in the velocity gradient (without the rheologically unimportant
isotropic contribution) is given by, \begin{equation} {\btau^p \over n
k_B T } = - {\cal H } \, \beps \label{lvkram} \end{equation} where,
\begin{equation} {\cal H} = \alpha^2 \, ({\tau^*})^{\, -1} \label{lvH}
\end{equation}
Upon integrating equation~(\ref{lvdif}), which is a first order
ordinary differential equation for $\beps$, and substituting the result
into equation~(\ref{lvkram}), the following codeformational integral
expansion is obtained,
\begin{equation}
\btau^p (t) = - \, \int_{- \infty}^t d\!s \,
n k_B T \, \tilde{G}(t-s)\, {\bgam}_{[1]}(t,s)
\label{lvmemexp}
\end{equation}
where ${\bgam}_{[1]}$ is the
codeformational rate-of-strain tensor~\cite{bird1} and the memory
function $\tilde{G}(t)$ is given by \begin{equation} \tilde{G}(t)=
{\cal H} \, e^{- (t / \lambda_H \tau^*) } \label{lvmemfn}
\end{equation} The product $\lambda_H {\tau^*}$ is usually interpreted
as a relaxation time, and ${\cal H}$ as a relaxation weight. Clearly,
the incorporation of excluded volume effects increases the relaxation time
in a good solvent relative to a theta solvent by a factor of $\tau^*$.
The memory function $\tilde{G}(t)$ can now be used to derive the linear
viscoelastic material properties. Substituting $ G (t) = n k_B T
\tilde{G}(t)$, into equation~(\ref{usf6}), leads to, \begin{equation}
{\eta^\prime (\omega) \over \lambda_H n k_B T} = {\tau^* \, {\cal H}
\over 1 + (\lambda_H \tau^* \, \omega)^2} \quad;\quad
{\eta^{\prime\prime} (\omega) \over \lambda_H^2 nk_B T } = {\omega \,
{\tau^*}^2 \, {\cal H} \over 1 + (\lambda_H \tau^* \, \omega)^2}
\label{etaprime} \end{equation} Upon taking the limit of $\omega \to 0$
we obtain, \begin{equation} {\eta_{p,0} \over \lambda_H n k_B T }=
\tau^* \, {\cal H } \quad; \quad {\Psi_{1,0} \over \lambda_H^2 n k_B T
} = 2 \, {\tau^*}^2 \, {\cal H} \label{lvetap0} \end{equation}
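These expressions are simple to evaluate; a minimal sketch, reusing the hypothetical alpha_from_z() from the sketch above, with the frequency non-dimensionalised as $\lambda_H \, \omega$ and both quantities normalised by $\lambda_H n k_B T$:
\begin{verbatim}
def linear_viscoelastic(z, mu, omega):
    # eta'(omega) and eta''(omega) of the Gaussian approximation
    a2 = alpha_from_z(z, mu)**2
    tau = 1.0 / (1.0 - z * mu**2 / (a2 + mu**2)**3.5)   # tau*
    H = a2 / tau                                        # relaxation weight
    denom = 1.0 + (tau * omega)**2
    return tau * H / denom, omega * tau**2 * H / denom
\end{verbatim}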
At moderate to large values of the shear rate, it is not possible to
obtain analytical expressions for the shear rate dependence of the
viscometric functions, and consequently a numerical procedure is
required. Since the second moment $\moms$ shares the symmetry of
the flow field in simple shear flow, its Cartesian components can be
denoted by,
\begin{equation}
\moms=\pmatrix{ s_1 & s_4 & 0 \cr
\noalign{\vskip3pt} s_4 & s_2 & 0 \cr \noalign{\vskip3pt} 0 & 0 & s_3
\cr }
\label{momscompsf}
\end{equation}
Upon substituting equation~(\ref{momscompsf}) into
equation~(\ref{gasecmom}), a system of four first order ordinary
differential equations for the quantities $s_j, \, j=1, \ldots, 4$ is
obtained. Steady state viscometric functions, as functions of shear
rate, can then be found by numerically integrating these equations with
respect to time (using a simple Euler scheme) until steady state is
reached. Results obtained by this procedure are discussed in
section~7.
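A minimal sketch of this procedure, integrating the full tensor equation~(\ref{gasecmom}) with an Euler scheme and extracting the viscosity from the stress tensor~(\ref{gakram}), is given below (a non-zero shear rate is assumed; the step size and integration time are illustrative).
\begin{verbatim}
import numpy as np

def gaussian_approx_steady(z, mu, shear, dt=1e-3, t_end=50.0):
    # Euler integration of the closed second moment equation to steady
    # state in simple shear flow; shear = lambda_H * gamma_dot > 0.
    k = np.zeros((3, 3)); k[0, 1] = shear
    I = np.eye(3)
    s = I.copy()                      # start from the theta-solvent value
    for _ in range(int(t_end / dt)):
        A = s + mu**2 * I
        Pi = np.linalg.solve(A, s)    # [sigma + mu^2 1]^(-1) . sigma
        s = s + dt * (k @ s + s @ k.T - s
                      + (z / np.sqrt(np.linalg.det(A))) * Pi + I)
    A = s + mu**2 * I
    tau = -s + (z / np.sqrt(np.linalg.det(A))) * np.linalg.solve(A, s) + I
    return s, -tau[0, 1] / shear      # sigma and eta_p/(lambda_H n k_B T)
\end{verbatim}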
\subsection{First order perturbation expansion in $z$}
The influence of excluded volume effects on the universal shear rate
dependence of viscometric functions has
been studied, as mentioned earlier, by using renormalisation group
methods~\cite{ottrg, zylkarg}. The renormalisation group theory
approach is essentially a method for refining the results of a
low order perturbation expansion in $z$, by introducing higher order
interaction effects so as to remove the ambiguous definition of
the bead size. Results of this approach, based on a
$\delta$-function excluded volume potential, indicate that the
presence of excluded volume has a non-trivial influence on the
shear rate dependence of the viscometric functions. We have seen earlier
in this work that as far as equilibrium swelling and zero shear
rate properties are
concerned, the use of a $\delta$-function excluded volume potential
leads to trivial results. In this section, a first order perturbation
expansion in $z$---with a narrow Gaussian excluded
volume potential---is constructed, in order to compare its predictions
of shear rate dependence with those obtained with Brownian dynamics
simulations, and with the Gaussian approximation. The dependence of
the predictions on the width parameter $\mu$ is of particular interest.
A first order perturbation expansion can be constructed, following the
procedure suggested in references~\cite{ottrabrg, ottrg}, from the
second moment equation~(\ref{nondimsecmom}). We assume that the
configurational distribution function $\psi$ can be written as
$\psi_\theta + \psi_z$, where $\psi_\theta$ is the distribution
function in the absence of excluded volume, i.e. in a
$\theta$-solvent, and $\psi_z$ is the correction to first order
in the strength of the excluded volume interaction. The averages
performed with these contributions will be denoted
by $\avel \cdots \aver_\theta$ and $\avel \cdots \aver_z$, respectively.
On equating terms of equal order, equation~(\ref{nondimsecmom}) can
be rewritten as two equations, namely, a zeroth order second moment
equation and a first order second
moment equation. The zeroth order equation---which is linear
in the moment $\avel \bQ^* \bQ^* \aver_\theta$---is the well
known second moment equation for Hookean dumbbells in a
$\theta$-solvent~\cite{bird2}. It has the analytical
solution~\cite{bird2},
\begin{equation}
\avel \bQ^* \bQ^* \aver_\theta =
\bu - \, \int_{- \infty}^{t^*} d\!s^* \,
e^{-(t^*-s^*) } \, {\bgam}_{[0]}(t^*, s^*)
\label{qqtheta}
\end{equation}
where, ${\bgam}_{[0]}$ is the codeformational relative strain
tensor~\cite{bird1}. The first order second moment equation
has the form,
\begin{equation}
{d \over dt^*} \avel \bQ^* \bQ^* \aver_z =
\bk^* \cdot \avel \bQ^* \bQ^* \aver_z
+ \avel \bQ^* \bQ^* \aver_z \cdot {\bk^*}^T -
\avel \bQ^* \bQ^* \aver_z + {z \over \mu^5} \,
\avel e^{- ({Q^*}^2 / 2 \, \mu^2) } \, \bQ^* \bQ^* \aver_\theta
\label{secmomz}
\end{equation}
The $\theta$-solvent distribution function $\psi_\theta$ is a
Gaussian~\cite{bird2}, and consequently, the complex moment
on the right hand side of equation~(\ref{secmomz}) can be
reduced to a function of $ \avel \bQ^* \bQ^* \aver_\theta$.
The following equation is obtained on performing this reduction,
\begin{equation}
{d \, \over dt^* } \avel \bQ^* \bQ^* \aver_z =
\bk^* \cdot \avel \bQ^* \bQ^* \aver_z + \avel \bQ^* \bQ^* \aver_z
\cdot {\bk^*}^T - \avel \bQ^* \bQ^* \aver_z + { {\mbox {\boldmath $Y$}} }
\label{redsecmomz}
\end{equation}
where,
$$ { {\mbox {\boldmath $Y$}} } = {z \over {{\sqrt {{\rm det} \,
\lbrack \, {\avel \bQ^* \bQ^* \aver_\theta} + \,
\mu^2 \, {\bu } \rbrack }}}} \, \bigl\lbrack
\avel \bQ^* \bQ^* \aver_\theta + \mu^2 \, {\bu} \, \bigr\rbrack^{\, -1}
\cdot \avel \bQ^* \bQ^* \aver_\theta $$
Clearly, equation~(\ref{redsecmomz}) could have also been derived
by expanding the second moment equation for the Gaussian
approximation~(\ref{gasecmom}) to first order in $z$.
It follows, therefore, that the Gaussian
approximation is exact to first order in $z$.
This is also the situation in the case of the Gaussian approximation
introduced for the treatment of hydrodynamic interaction effects,
where it was found to be exact to first order in the
strength of hydrodynamic interaction, $h^*$~\cite{ottrabrg}.
Equation~(\ref{redsecmomz}) is a system of linear inhomogeneous ordinary
differential equations, whose solution is,
\begin{equation}
\avel \bQ^* \bQ^* \aver_z = \int_{- \infty}^{t^*} d\!s^* \,
e^{-(t^*-s^*) } \, { {\mbox {\boldmath $E$}} }(t^*, s^*) \cdot { {\mbox {\boldmath $Y$}} } \cdot { {\mbox {\boldmath $E$}} }{^T}(t^*, s^*)
\label{qqz}
\end{equation}
where, $ { {\mbox {\boldmath $E$}} }$ is the displacement gradient tensor~\cite{bird1}.
The expression for the stress tensor~(\ref{nondimkram}) can also be
expanded to first order in $z$. After reduction of complex moments
to second moments, the stress tensor depends only on
the second moments $\avel \bQ^* \bQ^* \aver_\theta $ and
$\avel \bQ^* \bQ^* \aver_z$. Equations~(\ref{qqtheta}) and~(\ref{qqz})
may then be used to derive the following first order perturbation theory
expression for the stress tensor in arbitrary homogeneous flows,
\begin{equation}
{\btau^p \over nk_BT} = { {\mbox {\boldmath $Y$}} } + \int_{- \infty}^{t^*} d\!s^* \,
e^{-(t^*-s^*) } \, \left( {\bgam}_{[0]}(t^*, s^*)
- { {\mbox {\boldmath $E$}} }(t^*, s^*) \cdot { {\mbox {\boldmath $Y$}} } \cdot { {\mbox {\boldmath $E$}} }{^T}(t^*, s^*) \right)
\label{pertau}
\end{equation}
Note that $ { {\mbox {\boldmath $Y$}} }$, the direct contribution to the stress tensor,
is isotropic only in the limit $\mu \to 0$.
The form in steady shear flow, of the tensors ${\bgam}_{[0]}$ and $ { {\mbox {\boldmath $E$}} }$,
has been tabulated in reference~\cite{bird1}. Using the expression for
the stress tensor~(\ref{pertau}), and the definition of the viscometric
functions~(\ref{sfvis}), the following first order perturbation theory
results for the viscometric functions are obtained,
\begin{eqnarray}
{\eta_{p} \over \lambda_H n k_{\rm B} T } &=& 1 +
\left( {1 + \mu^2 + \lambda_H^2 {\dot \gamma}^2 \over
\sqrt{1 + \mu^2 } \, \Delta^{3/2} } \right) \, z
\label{etaper} \\
{\Psi_{1} \over \lambda_H^2 n k_{\rm B} T } &=&
2 + 2 \left( {1 + 2 \, \mu^2 + \lambda_H^2 {\dot \gamma}^2 \over
\sqrt{1 + \mu^2 } \, \Delta^{3/2} } \right) \, z
\label{Psi1per}
\end{eqnarray}
where, $\Delta=(1 + \mu^2)[1 + \mu^2 + 2 \lambda_H^2
{\dot \gamma}^2] - \lambda_H^2 {\dot \gamma}^2$. In the limit
of $\lambda_H {\dot \gamma}$ going to zero, the expression
for the viscosity~(\ref{etaper}) reduces to the expression derived
earlier in section~5.1---using the retarded motion expansion
and the equilibrium perturbation expansion---for the zero shear rate
viscosity. One can also show that ${\rm tr} \avel \bQ^* \bQ^*
\aver $ reduces to the equilibrium moment $q_1$ [see
equation~(\ref{equimom})], in the limit $\lambda_H {\dot \gamma}
\to 0$.
The first order perturbation results are compared with
simulation results, and with results of the Gaussian
approximation, in section~7.
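For reference, the first order expressions~(\ref{etaper}) and~(\ref{Psi1per}) can be evaluated directly; a minimal sketch, where shear denotes $\lambda_H \, {\dot \gamma}$ and both quantities carry the normalisations used above:
\begin{verbatim}
def perturbation_viscometric(z, mu, shear):
    # first order perturbation results for eta_p and Psi_1
    g2 = shear**2
    Delta = (1 + mu**2) * (1 + mu**2 + 2 * g2) - g2
    root = (1 + mu**2)**0.5 * Delta**1.5
    eta = 1 + z * (1 + mu**2 + g2) / root            # eta_p /(lam   n kT)
    Psi1 = 2 + 2 * z * (1 + 2 * mu**2 + g2) / root   # Psi_1 /(lam^2 n kT)
    return eta, Psi1
\end{verbatim}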
\section{Fixman's Theory}
Many years ago, in seminal work, Fixman~\cite{fix}
considered the simultaneous inclusion of hydrodynamic interaction and
excluded volume in bead-spring chain models for dilute polymer
solutions. In order to render the problem solvable, Fixman introduced a
number of approximations. Since we are only concerned with excluded
volume in the context of Hookean dumbbells in this work, we shall only
consider those approximations which are relevant in this context. The
introduction of the quadratic potential, the governing equations of
Fixman's theory, and the calculation of material functions predicted by
the theory in simple shear flow are considered in this section.
\subsection{The quadratic potential}
With regard to excluded volume, the most crucial approximation of
Fixman~\cite{fix} is the replacement of the delta function potential with
a quadratic potential. By adopting a Boson operator formulation of the
governing equations, Fixman has shown that the delta
function potential~(\ref{delpot}) may be represented by,
\begin{equation}
E^* \, (\bQ^*) = {1 \over 2}\, \bQ^* \cdot {\bGF^* } \cdot \bQ^*
\label{bosonE}
\end{equation}
where ${\bGF^*}$ is a symmetric function of various configuration
dependent quantities introduced in the Boson operator formalism. From
this expression it is clear that a quadratic potential for the excluded
volume may be obtained by replacing the fluctuating quantity $\bGF^*$
with an average. Fixman obtains a quadratic potential
by replacing $\bGF^*$ with a configuration dependent average, as described
below.
As a result of replacing $\bGF^*$ with its average, one can show
from equation~(\ref{bosonE}) that,
\begin{equation}
\avel \, \bGF^* \, \aver = \avel \, {\partial \over \partial \bQ^*}
\,{\partial E^* \over \partial \bQ^*}\, \aver
\label{avgG}
\end{equation}
In other words, for any given potential $E^*$, one can find
$\avel \, \bGF^* \, \aver $
provided that the non-equilibrium distribution function $\psi (\bQ,t)$
with which to carry out the average on the right hand side of
equation~(\ref{avgG}) is known.
It turns out that $\psi (\bQ,t)$, in
the presence of a quadratic excluded volume potential and with
consistently averaged hydrodynamic interaction~\cite{ottca1, prakbk},
is a Gaussian distribution. Evaluation of the average
for a $\delta$--potential~(\ref{delpot}), with a Gaussian
distribution~(\ref{gausdist}), leads to the following
non-dimensional quadratic potential,
\begin{equation}
E^* \,(\bQ^*)=- {1 \over 2} \, { z \over {\sqrt{ {\rm det}
\moms}}} \,\, \bQ^* \cdot \moms^{-1} \cdot \bQ^*
\label{modquadpot}
\end{equation}
It is straightforward to show that the potential~(\ref{modquadpot})
leads to an unphysical {\it non-central} excluded volume force
between the beads. Fixman, perhaps for this reason, introduces a
further approximation which consists of replacing the above
potential with the following simpler form,
\begin{equation}
E^* \,(\bQ^*)=- {1 \over 2} \, {z \over \alpha^2}
\, {{Q^*}^2 \over \, {\sqrt{ {\rm det} \moms}}}
\label{quadpot}
\end{equation}
where, $\alpha$ is defined as before by equation~(\ref{alpha}).
However, in this case, $\alpha$ obeys the consistency relation,
\begin{equation}
z = {(\, \alpha^2 - 1 )\, \alpha^3 }
\label{fixalp}
\end{equation}
Interestingly enough, equation~(\ref{equirel}) reduces to
equation~(\ref{fixalp}) in the limit $\mu \to 0$ (which corresponds
to a $\delta$--potential).
It is appropriate here to note
that while Fixman has presented all his arguments for bead-spring chain
models, the form of Fixman's potential for dumbbells given above can be
found in the book by Larson~\cite{larson}.
The consequences of adopting the quadratic excluded volume
potential~(\ref{quadpot}) are briefly discussed in the
following section. It is worthwhile to
point out here that Fixman's original formulation of the problem
was not in terms of the Gaussian distributions and second moments
discussed below. Rather, his attempt was to
directly solve the diffusion equation~(\ref{diff}) for the configurational
distribution function, and then carry out the average in
equation~(\ref{kram}) to obtain
the rheological properties. It is, however, possible to discuss his
approach within the framework developed subsequently by
{\" O}ttinger~\cite{ottca1} and this is the procedure that
is adopted here. A graphic exposition of Fixman's algorithm is
given in reference~\cite{larson}.
\subsection{The governing equations}
With excluded volume interactions described by the quadratic
potential~(\ref{quadpot}), the diffusion equation~(\ref{diff}) becomes
linear in the bead-connector vector. As a result, the diffusion
equation is exactly satisfied by a Gaussian
distribution~(\ref{gausdist}). In Fixman's theory therefore, a
tractable model is obtained not by approximating the distribution
function, as in the case of the Gaussian approximation,
but by introducing the quadratic potential~(\ref{quadpot}).
While $\psi (\bQ,t)$ is a Gaussian both in the Gaussian approximation
and in Fixman's theory, the second moment which completely determines
these distributions is different in the two cases. In Fixman's theory,
the non-dimensional second moment $\moms$ is governed by the equation,
\begin{equation}
{d \over dt^*} \avel \bQ^* \bQ^* \aver = \bk^* \cdot
\avel \bQ^* \bQ^* \aver + \avel \bQ^* \bQ^* \aver \cdot {\bk^*}^T -
\left[ \, 1 - {z \over \alpha^2 {\sqrt{ {\rm
det} \avel \bQ^* \bQ^* \aver}}} \, \right] \, \avel \bQ^* \bQ^* \aver + \bu
\label{fixsecmom}
\end{equation}
Note that the second moment equation reduces to equation~(\ref{fixalp})
at equilibrium.
The Giesekus expression for the stress tensor has
the form, \begin{equation} {\btau^p \over n k_B T } =
- \left[ \, 1 - {z \over \alpha^2 {\sqrt{ {\rm
det} \avel \bQ^* \bQ^* \aver}}} \, \right] \, \moms +
\bu \label{fixkram} \end{equation} As a result, the stress tensor in
Fixman's theory, for any flow situation, may be obtained once
equation~(\ref{fixsecmom}) is solved for $\moms$.
It is clear from equation~(\ref{fixkram}) that in Fixman's theory, as
for $\mu > 0$ in the Gaussian approximation, rheological properties are
affected both directly and indirectly by the presence of excluded
volume.
As will be shown in the section below, at {\it steady state} in simple
shear flow, the problem of solving the governing equation for the second
moments~(\ref{fixsecmom}) reduces to one of solving a single nonlinear
algebraic equation. In
the linear viscoelastic limit, however, analytical expressions for the
various properties can be derived in the same manner as described
earlier for the Gaussian approximation. Indeed, it can be shown that
the linear viscoelastic properties are given by
equations~(\ref{etaprime}) and~(\ref{lvetap0}), where the quantities
${\cal H}$ and $\tau^* $ are now given by, \begin{equation} {\cal H} =
1 \quad;\quad \tau^* = \alpha^2 \label{fixlvspk} \end{equation}
In steady simple shear flow, substituting equation~(\ref{momscompsf})
for $\moms$ and equation~(\ref{ssf1}) for $\bk$, into the second moment
equation~(\ref{fixsecmom}), leads to the following equations for the
components of $\moms$, $$s_1 = ( 1 + 2 \, \lambda_H^2 {\dot \gamma}^2
\, s_2^2 \, ) \, s_2 \quad;\quad s_3 = s_2 \quad;\quad s_4 = \lambda_H
\, {\dot \gamma} \, s_2^2 $$ where, $s_2$ must satisfy the nonlinear
algebraic equation, \begin{equation} s_2^{3 / 2} - s_2^{1 / 2} =
{\alpha(\alpha^2 -1) \over \sqrt{ 1 + \lambda_H^2 {\dot \gamma}^2 \,
s_2^2 } } \label{sfs2} \end{equation}
The normalised viscometric functions can be found by using the
definitions~(\ref{sfvis}), and the results above for the zero shear
rate properties, \begin{equation} {\eta_p \over \eta_{p,0}} = {s_2
\over \alpha^2} \quad;\quad {\Psi_1 \over \Psi_{1,0}} = {s_2^2 \over
\alpha^4} \quad;\quad \Psi_2 = 0 \label{fixetar} \end{equation}
The nonlinear algebraic equation~(\ref{sfs2}) is solved here with a
Newton-Raphson scheme. The material functions predicted by Fixman's
theory are compared with the predictions of the narrow Gaussian
potential in the section below.
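A minimal sketch of this calculation is given below; $\alpha$ is first obtained from equation~(\ref{fixalp}) by bisection (a simple, robust choice), after which $s_2$ follows by Newton-Raphson. The initial guess $s_2 = \alpha^2$ is the exact zero shear rate solution of equation~(\ref{sfs2}).
\begin{verbatim}
def fixman_viscometric(z, shear, tol=1e-12):
    # alpha from z = (alpha^2 - 1) alpha^3, by bisection
    lo, hi = 1.0, 2.0
    while (hi**2 - 1.0) * hi**3 < z:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (mid**2 - 1.0) * mid**3 < z:
            lo = mid
        else:
            hi = mid
    alpha = 0.5 * (lo + hi)
    # Newton-Raphson for s_2 in the nonlinear algebraic equation above,
    # with shear = lambda_H * gamma_dot
    c = alpha * (alpha**2 - 1.0)
    s = alpha**2
    for _ in range(100):
        w = (1.0 + shear**2 * s**2)**0.5
        f = s**1.5 - s**0.5 - c / w
        fp = 1.5 * s**0.5 - 0.5 / s**0.5 + c * shear**2 * s / w**3
        s -= f / fp
        if abs(f / fp) < tol:
            break
    # normalised viscometric functions: eta_p/eta_{p,0}, Psi_1/Psi_{1,0}
    return s / alpha**2, (s / alpha**2)**2
\end{verbatim}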
\section{Results and Discussion}
The predictions of rheological properties in simple shear flow, by the
various theories for the excluded volume effect, are compared in this
section. Predictions in the limit of zero shear rate are first
considered below, and those at finite non-zero shear rates
subsequently.
\subsection{Zero shear rate properties}
\begin{figure}[!htb] \centerline{ \epsfxsize=4in \epsfbox{fig1.ps}}
\caption{ \footnotesize Non-dimensional zero shear rate viscosity
versus the extent of excluded volume interaction $\mu$, for two values
of the strength of the interaction $z$. The continuous lines are exact
predictions obtained by numerical quadrature,
the triangles and circles are results of
Brownian dynamics simulations, the dashed and the dot-dashed lines
are the approximate predictions of the Gaussian approximation, and the
first order perturbation theory, respectively, and the filled squares
are the predictions of Fixman's theory. The error bars in the Brownian
dynamics simulations cannot be resolved within the line thickness.}
\label{fig1} \end{figure}
Figure~{\ref{fig1}} is a plot of $(\eta_{p,0}/ \lambda_H \, nk_BT) $
versus $\mu$ for $z=3$ and $z=100$. The continuous curves are exact
predictions obtained by numerical quadrature, while the triangles and
circles are exact results of Brownian dynamics simulations
carried out at equilibrium (without variance reduction)
[see equation~(\ref{etap02})]. The dashed lines are the
predictions of the Gaussian approximation for the narrow Gaussian
potential [{\em i.e}.\ equation~(\ref{lvetap0}), with $\tau^*$ and
${\cal H}$ given by equations~(\ref{lvtau}) and~(\ref{lvH}),
respectively], the dot-dashed curve is the prediction of the
first order perturbation theory [{\em i.e}.\ equation~(\ref{etaper}) in
the limit $\lambda_H {\dot \gamma} \to 0$], and the filled squares
are the predictions of Fixman's theory [{\em i.e}.\
equation~(\ref{lvetap0}), with $\tau^*$ and ${\cal H}$ given by
equations~(\ref{fixlvspk})]. The parameter $\mu$ does not
enter into Fixman's theory; however, these values are plotted
at $\mu=0$, since the quadratic potential is used in
Fixman's theory as an approximation for the $\delta$-potential.
The first feature to be noticed in figure~{\ref{fig1}} is the
reassuring closeness of the exact results obtained by using the
retarded motion expansion and by Brownian dynamics simulations. The
retarded motion expansion provides a means of validating the results of
Brownian dynamics simulations.
In the limit $\mu \to 0$, and for large values of $\mu$, the continuous
curves and Brownian dynamics simulations reveal that, as expected, the
exact predictions of the narrow Gaussian potential tend to the
$z$-independent $\theta$-solvent value, $(\eta_{p,0}/ \lambda_H \, nk_BT)
= 1$. This implies, as pointed out earlier, that the use of a
$\delta$-function potential to represent excluded volume interactions
does not lead to any change in the zero shear rate viscosity
prediction. On the other hand, figure~{\ref{fig1}} seems to suggest
that a finite range of excluded volume interaction is required to cause
a change from the $\theta$-solvent value. Away from these limits, at
non-zero values of $\mu$, the narrow Gaussian potential predicts an
increase in the value of zero shear rate viscosity. The existence of
shear thinning in good solvents can be attributed to this increase.
This follows from the fact that at high shear rates, as the effect of
the excluded volume interaction diminishes, the viscosity is expected
to return to its $\theta$-solvent value. We shall see later that this
expectation is indeed justified.
The dashed lines in figure~{\ref{fig1}} indicate that in the limit of
zero shear rate, for a given value of $z$, the Gaussian approximation
is reasonably accurate above a certain value of $\mu$. This limiting
value of $\mu$ appears to be smaller for smaller values of $z$. A
similar behavior is also observed with regard to the prediction of the
zero shear rate first normal stress difference (see
figure~{\ref{fig2}}). Thus it appears that the exact configurational
distribution function $\psi (\bQ, t)$ becomes increasingly non-Gaussian
as the narrow Gaussian potential becomes narrower, and as the strength
of the excluded volume interaction becomes larger.
For the large values of $z$ considered in figure~{\ref{fig1}},
results obtained with the first order perturbation expansion in $z$
cannot be expected to be accurate. It is clear from the dot-dashed lines
that the perturbation expansion results deviate significantly
from exact results for small values of $\mu$. However, they become
increasingly accurate as $\mu$ increases, for a given value of $z$.
This can be understood
by considering equation~(\ref{etaper}), which indicates that in
the limit $\lambda_H {\dot \gamma} \to 0$, the
first order correction to the $\theta$-solvent value increases as
$z$ increases, but decreases as $\mu$ increases.
As $\mu$ increases from zero, values of the zero shear rate viscosity
predicted by the Gaussian approximation approach the exact values
more rapidly than the predictions of the first order perturbation
theory, for a given value of $z$. The Gaussian approximation is a
non-perturbative approximation---however, it was shown in section~5.4
to be exact to first order in $z$. One way to understand this is
to consider the Gaussian approximation to consist of an infinite
number of higher order terms, whose nature is unknown. In this sense,
it is an {\em uncontrolled} approximation, which remains accurate at values
of $z$ and $\mu$, where the first order perturbation expansion becomes
inaccurate. As will be seen shortly, these remarks apply also to the
results obtained at finite shear rate.
The difference in the prediction of the zero shear rate viscosity by
Fixman's theory and by the narrow Gaussian potential is evident in
figure~{\ref{fig1}}. It is also worth noting that, although the
relaxation weight and the relaxation time are different in Fixman's
theory and in the Gaussian approximation for $\mu=0$, they lead to the
same prediction of the zero shear rate viscosity. This is, however, not
true for the zero shear rate first normal stress difference. The ratio
$U_{\Psi \eta}$, defined by~\cite{ottbk},
\begin{equation}
U_{\Psi \eta} = {n k_B T \Psi_{1} \over \eta_{p}^2 }
\label{upsieta}
\end{equation}
is equal, in the zero shear rate limit, to $(2 / \alpha^2)$ in the
Gaussian approximation, while it has a constant value of 2 in
Fixman's theory.
\begin{figure}[!htb] \centerline{ \epsfxsize=4in \epsfbox{fig2.ps}}
\caption{ \footnotesize Non-dimensional zero shear rate first normal
stress difference versus $\mu$, for two values of $z$. The continuous
lines are exact predictions obtained by numerical quadrature,
the triangles and circles are results of Brownian dynamics
simulations, the dashed lines are approximate predictions using the
Gaussian approximation, and the dotted lines are approximate
predictions using the uniform expansion model. The error bars
in the Brownian dynamics simulations cannot be resolved within
the line thickness. } \label{fig2}
\end{figure}
Both the uniform expansion model and the Gaussian approximation use
Gaussian distributions in order to evaluate averages. However, the
uniform expansion model uses different Gaussian distributions for
different equilibrium averages, such that the best approximation is
obtained. While this does not lead to any difference in the prediction
of the zero shear rate viscosity by the two approximations,
figure~{\ref{fig2}} reveals that, at small enough values of $\mu$,
there is a significant difference in the prediction of the zero shear
rate first normal stress difference. Clearly, the uniform expansion
model continues to be a reasonable approximation for values of $\mu$ at
which the Gaussian approximation is no longer accurate. However, even
the uniform expansion model leads to a poor approximation at
sufficiently small values of $\mu$.
\begin{figure}[!htb]
\centerline{\epsfxsize=4in \epsfbox{fig3.ps}}
\caption{ \footnotesize
The sum $S_0$ versus $\mu$, for different numbers of summed terms
$k$ in the perturbation expansion [see equation~(\ref{s0s1})].
The exact results are obtained by numerical quadrature.}
\label{fig3}
\end{figure}
In the discussion of the equilibrium perturbation expansion in
section~5.1.2, it was pointed out that, in principle, the
equilibrium moment $q_1$ can be obtained for any non-zero value
of $\mu$, provided that enough numbers of terms in the series for the
quantities $S_0$ and $S_1$, are summed [see equations~(\ref{s0s1})].
Figure~\ref{fig3} displays the sum $S_0$, for different numbers of summed
terms in the perturbation expansion, as a function
of $\mu$, for $z = 3.0$. Exact results are obtained by noting
that $S_0 = \sqrt{2 / \pi} \, I_0$, and evaluating $I_0$ by numerical
quadrature. Clearly, more terms of the expansion are required
for convergence as $\mu$ decreases. The terms
of the series, which alternate in sign, keep increasing rapidly in
magnitude until the term index $n$ exceeds approximately $(z/\mu^3)$, before they
begin to decrease. Therefore, as $z$ increases, or as $\mu$
decreases, more and more terms are required for the sum to converge.
Above a threshold value of $(z/\mu^3)$, however, it becomes impossible to
evaluate $S_0$, since the round-off errors due to the summation of large
numbers make the perturbation expansion numerically useless. A
similar problem is also encountered while evaluating the sum $S_1$.
In short, the hope of extrapolating finite $\mu$ results to the limit $\mu=0$
cannot be realised. Therefore, in the case of Hookean dumbbells, one
cannot obtain equilibrium moments for a delta function excluded volume
potential by using a narrow Gaussian potential and considering a
perturbation expansion in the limit $\mu \to 0$.
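The round-off problem described above is easy to reproduce in floating
point arithmetic. The following minimal Python sketch sums an alternating
model series with terms $t_j=(-z/\mu^3)^j/j!$, chosen purely for
illustration (the actual terms of $S_0$ and $S_1$ are not reproduced here,
but they grow and alternate in the same manner); the exact sum of the
model series is $e^{-z/\mu^3}$, so the loss of accuracy for small $\mu$
is plainly visible.
\begin{verbatim}
import math

def naive_alternating_sum(z, mu, jmax=400):
    """Naive float summation of the model series sum_j (-z/mu**3)**j / j!;
    the terms grow in magnitude until j ~ z/mu**3 before decaying."""
    x = z / mu**3
    s, t = 0.0, 1.0
    for j in range(jmax):
        s += t
        t *= -x / (j + 1)
    return s, math.exp(-x)              # naive sum versus the exact value

for mu in (1.0, 0.5, 0.35):             # decreasing mu at fixed z = 3
    approx, exact = naive_alternating_sum(3.0, mu)
    print(f"mu = {mu}: naive {approx: .3e}   exact {exact: .3e}")
\end{verbatim}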
\subsection{Steady state viscometric functions}
The results of Brownian dynamics simulations
(without variance reduction) displayed in
figure~\ref{fig4} reveal that the dependence of the viscosity and the
first normal stress difference on $\mu$, at a value of the
non-dimensional shear rate $\lambda_H {\dot \gamma} = 0.3$, is similar
in shape to the dependence observed in the limit of zero shear rate. At
small and large values of $\mu$ the material functions tend to the
$\theta$-solvent value, and exhibit a maximum at some value in between.
Since, even at this non-zero value of shear rate, it appears that the
viscosity and the first normal stress difference remain at their
$\theta$-solvent values for $\mu =0$, it implies that the use of a
$\delta$-potential to represent excluded volume interactions would not
predict any shear thinning. On the other hand, as we shall see
subsequently, a quadratic potential does predict substantial shear
thinning. At $\lambda_H {\dot \gamma} = 0.3$, the Gaussian
approximation seems to be accurate above roughly the same values of
$\mu$ as were observed in the limit of zero shear rate.
\begin{figure}[!htb] \centerline{ \epsfxsize=4in \epsfbox{fig4a.ps}}
\centerline{ \epsfxsize=4in \epsfbox{fig4b.ps}} \caption{\footnotesize
Non-dimensional viscosity and first normal stress difference versus
$\mu$ for two values of $z$, at a non-dimensional shear rate $\lambda_H
{\dot \gamma} = 0.3$. The squares and circles are results of Brownian
dynamics simulations, and the dashed and continuous lines are
the predictions of the Gaussian approximation, for $z=3$ and $z=100$,
respectively. The error bars
in the Brownian dynamics simulations cannot be resolved within
the line thickness.} \label{fig4}
\end{figure}
Figures~\ref{fig5} and~\ref{fig6} are plots of non-dimensional
viscosity and first normal stress difference versus the non-dimensional
shear rate $\lambda_H {\dot \gamma}$. Figure~\ref{fig5} displays the
dependence of these viscometric functions
on the parameter $\mu$, for a fixed value of $z = 0.1$,
while figure~\ref{fig6} displays the dependence on the parameter $z$,
for a fixed value of $\mu=2.5$. The prediction of shear thinning for
non-zero values of $\mu$ is apparent, and our earlier expectation
in this direction is justified. In particular, the predictions of
Brownian dynamics simulations, the Gaussian approximation, and the
first order perturbation theory tend to $\theta$-solvent values at
high shear rates.
Shear thinning, which seems physically meaningful,
is also predicted, as can be seen from figure~\ref{fig5},
by both the Gaussian approximation and the first order perturbation
theory for $\mu=0$. This corresponds to a $\delta$-function excluded
volume potential, and the predicted shear thinning is clearly an artifact of the perturbation
expansion, since rigorous calculations indicate a trivial result. It remains
to be seen if the situation is different in the limit of long chains.
For small enough values of $z$, and large enough values of
$\mu$, the results of the Gaussian approximation, and the first order
perturbation expansion, agree exceedingly well with the exact results of
Brownian dynamics simulations. Indeed, as $\lambda_H {\dot \gamma}$
increases for fixed values of $z$ and $\mu$, both the Gaussian
approximation, and the first order perturbation expansion become
increasingly accurate. In particular, if both the approximations
are accurate at zero shear rate, they continue to remain accurate at
non-zero shear rates. This can be understood in the case of the
first order perturbation expansion by considering
equations~(\ref{etaper}) and~(\ref{Psi1per}). Clearly, the departure from
the $\theta$-solvent values decreases as $\lambda_H {\dot \gamma}$
increases. This is in line with the intuitive expectation of
decreasing excluded volume interactions with increasing shear rate.
For a fixed value of the shear rate $\lambda_H {\dot \gamma}$,
as $z$ increases, or $\mu$ decreases, the predictions of the
Gaussian approximation and the first order perturbation expansion
become increasingly inaccurate,
with the first order perturbation expansion breaking down before
the Gaussian approximation. This can be expected, since the
Gaussian approximation---being exact to first order in $z$---is
at least as accurate as the first order perturbation expansion.
\begin{figure}[!htb]
\centerline{\epsfxsize=4in \epsfbox{fig5a.ps}}
\centerline{ \epsfxsize=4in \epsfbox{fig5b.ps}}
\caption{ \footnotesize
Non-dimensional viscosity and first normal stress difference versus
non-dimensional shear rate $\lambda_H {\dot \gamma}$,
for three values of $\mu$ at $z = 0.1$.
The circles are results of Brownian dynamics simulations,
the dashed lines are the predictions of the Gaussian approximation,
and the triangles are the predictions of the first order
perturbation expansion. In the viscosity plot, the error bars in
the Brownian dynamics simulations are smaller than the size of the
symbols. }
\label{fig5} \end{figure}
\begin{figure}[!htb]
\centerline{\epsfxsize=4in \epsfbox{fig6a.ps}}
\centerline{ \epsfxsize=4in \epsfbox{fig6b.ps}}
\caption{ \footnotesize
Non-dimensional viscosity and first normal stress difference versus
$\lambda_H {\dot \gamma}$ for three values of $z$, at $\mu = 2.5$.
The symbols are as indicated in the caption to figure~\ref{fig5}.
The error bars in the Brownian dynamics simulations are smaller
than the size of the symbols.}
\label{fig6} \end{figure}
The two different Brownian dynamics simulation algorithms mentioned
in section~5.2 were used to obtain the data in figures~\ref{fig5}
and~\ref{fig6}. While the algorithm with variance reduction
was used for shear rates up to $\lambda_H {\dot \gamma} = 0.1$, the
algorithm without variance reduction
was used at higher shear rates. With regard to the results obtained by
variance reduction, it was found that the variance was typically reduced
by a factor of five to ten by the parallel equilibrium simulation
subtraction procedure. While the magnitude of the reduced variance
was relatively independent of shear rate for the viscosity, it decreased
with increasing shear rate for the first normal stress difference.
The time to reach steady-state, from start-up of flow, was roughly ten
relaxation times for $z=0.1$, $z=3$ and $z=30$, and roughly fifteen
relaxation times for $z=100$. Rheological properties in the two parallel
simulations remained correlated during the time required to reach
steady-state, and as a result the present technique proved adequate
for the purpose of variance reduction.
We have seen earlier---in figure~\ref{fig1}---that both Fixman's theory, and
the Gaussian approximation for $\mu=0$, lead to identical values for the
zero shear rate viscosity. This coincidence is, however, restricted to
the limit of zero shear rate. At non-zero shear rates, as can be seen
from figure~\ref{fig7}, there is considerable divergence between the
predictions of the two theories.
\begin{figure}[!htb] \centerline{ \epsfxsize=4in \epsfbox{fig7.ps}}
\caption{ \footnotesize Non-dimensional viscosity versus
non-dimensional shear rate for $z=100$. The continuous line is the
prediction of Fixman's theory, and the dashed line is the prediction of
the Gaussian approximation for $\mu=0$. } \label{fig7} \end{figure}
Fixman's theory for dumbbells, though differing considerably
from the narrow Gaussian potential in terms of its predictions of
rheological properties, has the appealing aspect that it captures some of
the {\it universal} features of the behavior of good solvents---which
can only be expected from bead-spring chain theories in the limit of a
large number of beads. We have seen this universal behavior earlier
in the correct prediction of the end-to-end distance scaling, and in
the parameter free nature of the ratio $U_{\Psi \eta}$.
Figure~\ref{fig8} displays the
prediction by Fixman's theory of the reduced variable $(\eta_p /
\eta_{p,0})$ versus the non-dimensional shear rate $\beta = \lambda_p
{\dot \gamma},$ where $\lambda_p = ( \, \lbrack \eta \rbrack_0 \, M \,
\eta_s / N_{\rm A} \, k_{\rm B}\, T \, ) $ is a characteristic
relaxation time. Here, $\lbrack \eta \rbrack_0 $ is the zero shear rate
intrinsic viscosity, $M$ is the molecular weight and $ N_{\rm A}$ is
Avogadro's number. For dilute solutions one can show that $\beta =
\eta_{p,0} \, {\dot \gamma} / n \, k_{\rm B}\, T$. The figure clearly
reveals that as $z \to \infty$, the curves for different values of $z$
overlap. Therefore, in this respect also, Fixman's theory mimics the
universal behavior expected of long chains. The source of this behavior
can be understood by examining equation~(\ref{sfs2}). In the limit of
$\alpha \gg 1$, one can show that $s_2 = [\alpha^6 \, (\lambda_H {\dot
\gamma})^{-2}]^{1/5}$. As a result, $(\eta_p / \eta_{p,0}) = [\alpha^2
\, \lambda_H {\dot \gamma}]^{-2/5}$ ---leading to the observed
scaling. At small values of $\lambda_H {\dot \gamma}$, $(\eta_p /
\eta_{p,0}) \to 1$. One can therefore construct the analytical
expression, \begin{equation} {\eta_p \over \eta_{p,0}} = (1 + \alpha^4
\lambda_H^2 {\dot \gamma}^2)^{-1/5} \label{anal} \end{equation} and
expect it to be accurate at very small and large values of $\beta$. The
dot-dashed curve in figure~\ref{fig8} shows that equation~(\ref{anal})
is accurate over a fairly wide range of $\beta$.
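This limiting behavior is readily checked numerically; the following
minimal Python sketch (ours; bisection is used in place of Newton--Raphson
purely for robustness) solves equation~(\ref{sfs2}) and compares the
resulting reduced viscosity with equation~(\ref{anal}).
\begin{verbatim}
def s2_fixman(alpha, x, iters=200):
    """Bisection solve of eq. (sfs2); the root lies in (0, alpha**2]."""
    A = alpha * (alpha**2 - 1.0)
    lo, hi = 1e-12, alpha**2
    for _ in range(iters):
        s = 0.5 * (lo + hi)
        g = s**1.5 - s**0.5 - A / (1.0 + (x * s) ** 2) ** 0.5
        if g > 0.0:
            hi = s
        else:
            lo = s
    return 0.5 * (lo + hi)

alpha = 30.0                        # alpha >> 1 probes the universal regime
for x in (0.01, 0.1, 1.0, 10.0):    # x = lambda_H * gamma_dot
    numeric = s2_fixman(alpha, x) / alpha**2
    closed = (1.0 + alpha**4 * x * x) ** (-0.2)    # equation (anal)
    print(f"x = {x}: numerical {numeric:.4f}   eq. (anal) {closed:.4f}")
\end{verbatim}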
\begin{figure}[!htb] \centerline{ \epsfxsize=4in \epsfbox{fig8.ps}}
\caption{ \footnotesize Reduced viscosity versus reduced shear rate
predicted by Fixman's theory for large values of $z$. The continuous
line and circles are obtained numerically, while the dot-dashed line is
obtained with the analytical expression~(\ref{anal}). } \label{fig8}
\end{figure}
The universal behavior discussed above is not exhibited by the Gaussian
approximation. The reason for this can be easily understood in the
limit $\mu \to 0$, where the second moment equation~(\ref{gasecmom})
reduces at steady-state to a non-linear algebraic equation. Indeed, one
can show that for $\alpha \gg 1$, $(\eta_p / \eta_{p,0}) = [1+
\lambda_H^2 {\dot \gamma}^2]^{-1/5}$. As a result, the normalised
material functions do not collapse onto a single curve when plotted
versus $\beta$. It is, however, not realistic to expect a dumbbell
model to exhibit universal features, and the real verification of
universal behavior requires the development of a theory for long
bead-spring chains.
The non-dimensional ratio $U_{\Psi \eta}$ [see equation~(\ref{upsieta})]
has a constant value of two, independent of shear rate, for
Hookean dumbbells in $\theta$-solvents, and in Fixman's theory
for good solvents; in the latter case this follows from
equation~(\ref{fixetar}), which implies $\Psi_1/\Psi_{1,0}=(\eta_p/\eta_{p,0})^2$,
so that $U_{\Psi \eta}$ retains its zero shear rate value at all
shear rates. Figure~\ref{fig9} displays the predictions of
$U_{\Psi \eta}$ by the narrow Gaussian potential, obtained by
Brownian dynamics simulations, the Gaussian approximation, and
the following first order perturbation expansion (which can be
derived from equations~(\ref{etaper}) and~(\ref{Psi1per})),
\begin{equation}
U_{\Psi \eta} = 2 - 2 \, z \, {1 +
\lambda_H^2 {\dot \gamma}^2 \over
\sqrt{1 + \mu^2 } \, \Delta^{3/2} }
\label{upsietaper}
\end{equation}
where, $\Delta$ has been defined below equation~(\ref{Psi1per}).
Since a logarithmic scale has been chosen for the shear rate axis,
it is difficult to represent the zero shear rate value of
$U_{\Psi \eta}$. However, since it is very nearly constant at low
values of shear rate, the zero shear rate value is represented in
figure~\ref{fig9} by the filled circles on the $y$-axis. All the data in
figure~\ref{fig9} have the same trend of remaining nearly constant at
low shear rates, and approaching asymptotically the value of two at
high shear rates. In the case of the first order perturbation
expansion, this can be understood by considering
equation~(\ref{upsietaper}) in the limit $\lambda_H {\dot \gamma}
\to \infty$.
\begin{figure}[!htb]
\centerline{ \epsfxsize=4in \epsfbox{fig9.ps}}
\caption{ \footnotesize The ratio $U_{\Psi \eta}$ [see
equation~(\ref{upsieta})] versus $\lambda_H {\dot \gamma}$.
The circles are results of Brownian dynamics simulations,
the dashed lines are the predictions of the Gaussian approximation,
and the dotted lines are the predictions of the first order
perturbation expansion. The filled circles on the $y$-axis represent
zero shear rate values of $U_{\Psi \eta}$ obtained by equilibrium
simulations. The error bars in the Brownian dynamics simulations
are smaller than the size of the symbols. }
\label{fig9}
\end{figure}
\begin{figure}[!htb]
\centerline{\epsfxsize=4in \epsfbox{fig10.ps}}
\caption{ \footnotesize
Non-dimensional mean squared end-to-end vector versus
non-dimensional shear rate $\lambda_H {\dot \gamma}$,
for two values of $z$ at $\mu = 1$.
The circles and triangles are results of Brownian dynamics simulations,
the dotted and dashed lines are the predictions of the first order
perturbation expansion, and the continuous line is the analytical
solution for a theta solvent. The error bars in
the Brownian dynamics simulations are smaller than the size of the
symbols. }
\label{fig10}
\end{figure}
The dependence of the mean squared end-to-end distance
$\avel {Q^*}^{2} \aver$ on
$\lambda_H {\dot \gamma}$ is revealed in figure~\ref{fig10}.
The circles and triangles are the results of Brownian dynamics simulations
obtained with the algorithm without variance reduction. The dotted and
dashed lines are plots of the following expression,
\begin{equation}
\avel {Q^*}^{2} \aver = 3 + 2 \, \lambda_H^2 {\dot \gamma}^2
+ {z \over \sqrt{1 + \mu^2 } \, \Delta^{3/2} }
\left[ \, 3(1+\mu^2) + (5+6\mu^2) \, \lambda_H^2 {\dot \gamma}^2
+2 \, \lambda_H^4 {\dot \gamma}^4 \right]
\end{equation}
obtained from the first order perturbation expansion, and
the continuous line is the well known result for a theta
solvent~\cite{bird2},
$\avel {Q^*}^{2} \aver = 3 + 2 \, \lambda_H^2 {\dot \gamma}^2$.
Interestingly, the effect of excluded volume interactions on swelling
increases with increasing shear rate. In the case of the
first order perturbation expansion---which appears to be accurate for
$z=3$, but not for $z=30$---this can be understood by
noting, in the expression above, that the term representing the
correction to the theta solvent result due to the presence of excluded
volume increases as the shear rate increases.
\section{Conclusions}
The use of a narrow Gaussian potential, to describe the
excluded volume interactions between the beads of a Hookean
dumbbell model, leads to the prediction of swelling and
shear thinning for relatively
small non-zero values of the extent of interaction $\mu$. This
is essentially caused by an increase in the magnitude of
the equilibrium moments relative to their $\theta$-solvent values.
A delta function description of the excluded volume potential,
on the other hand, is found to predict neither swelling nor
shear thinning for Hookean dumbbells.
For a given strength of the excluded volume interaction $z$, the
Gaussian approximation is found to be reasonably accurate for values of
$\mu$ larger than some threshold value. The behavior of the Gaussian
approximation can be understood by comparing its predictions with
those of a first order perturbation expansion in $z$, since it
is shown here to be exact to first order in $z$. The perturbation
expansion reveals that the departure of the viscometric functions
from their $\theta$-solvent values increases with increasing $z$,
but decreases with increasing $\mu$, and increasing shear rate
$\lambda_H {\dot \gamma}$.
The use of a quadratic potential in Fixman's theory leads to the
prediction of viscometric functions which are considerably
different from those of the narrow Gaussian potential.
However, Fixman's theory for dumbbells reproduces a number of universal
features observed in good solvents.
\vskip15pt
\noindent
{\bf \large Acknowledgement}
\vskip15pt
Support for this work through a
grant III. 5(5)/98-ET from the Department of Science and Technology,
India, to J. Ravi Prakash is acknowledged. Part of this work was carried out
while JRP was a participant in the research programme
{\em Jamming and Rheology} at the Institute for Theoretical Physics,
University of California, Santa Barbara, USA. JRP would also like to
thank the non-linear dynamics group at IIT Madras for providing
the use of their computational facility.
\section{Introduction}\label{s1}
Let $0<k\in\mathbb{Z}$, let $n=2k+1$ and let $O_k$ be the $k${\it -odd graph} \cite[p.~206]{GR} considered as the graph whose vertices are the $k$-subsets of the cyclic group $\mathbb{Z}_n=\{0,1,\ldots, 2k\}$ with an edge $uv$ for each two vertices $u,v$ of $O_k$ if and only if $u\cap v=\emptyset$.
We recur to a natural {\it edge-supplementary 1-arc factorization} $\mathbb{A}_k$ of $O_k$, meaning that the two oppositely oriented arcs (1-arcs \cite[p.~59]{GR}) of each edge of $O_k$ are assigned by $\mathbb{A}_k$ {\it colors} $a,b\in\{0,\ldots,k\}=[0,k]$ such that $a+b=k$ (so $a,b$ are said to be $k$-{\it supplementary} or {\it supplementary in} $k$), in such a way that the arcs departing from each vertex are in one-to-one correspondence with $[0,k]$.
To define the claimed edge-supplementary 1-arc factorization $\mathbb{A}_k$ of $O_k$, we consider the partition of $V(O_k)$ into $\mathbb{Z}_n$-{\it classes}, that is the cyclic equivalence classes mod $n$.
To get these $\mathbb{Z}_n$-classes, we take each vertex $u$ of $O_k$ expressed as the characteristic vector of the subset $u\subset\mathbb{Z}_n$ it represents.
Each such characteristic vector yields a binary string, called a {\it bitstring}, which is a sequence of digits 0 and 1, called 0-{\it bits} and 1-{\it bits}, respectively.
The number of bits (resp. 1-bits) of a bitstring $u$ is said to be its {\it length} (resp. its {\it weight}). Each $u\in V(O_k)$ can be seen as a bitstring of length $n$, so we say it is an $n$-bitstring.
\begin{example}\label{o1} In $O_1$, the subsets $\{i\}$ of $\mathbb{Z}_3$, ($i=0,1,2$) are denoted $100,010,001$, respectively. \end{example}
We also consider the vertices of $O_k$ as corresponding polynomials mod $x^n+1$ in the ring $\mathbb{Z}[x]$
\cite{D2,D1}, namely in Example~\ref{o1}: $x^0$, $x^1$ and $x^2$ mod $x^3 +1$.
The $\mathbb{Z}_n$-classes are obtained by successive multiplication of such polynomials by $x$ mod $x^n+1$. The resulting equivalence relation defines a quotient graph of $O_k$ whose vertices are those $\mathbb{Z}_n$-classes.
In Example~\ref{o1}, $O_1$ has just one such equivalence class, and $O_2$ has two such equivalence classes.
Theorem~\ref{bij}, below, asserts that there is a bijection between $\mathbb{Z}_n$-classes and Dyck words of length $2k$ as defined in its preceding Remark~\ref{Dyck} (namely, with the roles of 0- and 1-bits exchanged with respect to the Dyck words of \cite{Hcs}).
\section{Main purposes}\label{uni}
Consider the $k$-th Catalan number $C_k=\frac{(2k)!}{k!(k+1)!}$ \cite[\seqnum{A000108}]{oeis}.
To unify a presentation of odd graphs, we introduce the sequence \cite[\seqnum{A239903}]{oeis}, composed of {\it restricted-growth strings} ({\it \!RGS}) \cite[p.~325]{Arndt}.
The first $C_k$ terms of such a sequence, call it $S_{(\infty)}$, form the set $S_{(u)}$ in \cite[p.~224]{Stanley}, which is equivalent to the set $S_{(i)}$ of Dyck paths from $(0,0)$ to $(2k,0)$ \cite[p.~221]{Stanley} by \cite[p.~219, ex.~6.19]{Stanley}.
Theorem~\ref{bij} shows that the set $S_{(i)}$ is constituted by the $\mathbb{Z}_n$-classes of $V(O_k)$.
That will allow us to determine $\mathbb{A}_k$ (Remark~\ref{r2}), and a version through $\mathbb{A}_k$ of:
\begin{enumerate}
\item[\bf(i)] the uniform 2-factors of $O_k$ in \cite{u2f} (see Theorem~\ref{L5}) and the Hamilton cycles of $O_k$ for $k>2$ in \cite{Hcs} (see Theorem~\ref{L6});
\item[\bf(ii)] the double covering graph $M_k$ of $O_k$, known as the {\it middle-levels graph} of the {\it Boolean lattice} $B_n$, induced by the levels $L_k$ and $L_{k+1}$ of $B_n$, where $L_k$ and $L_{k+1}$ are formed by the $n$-bitstrings of weight $k$ and $k+1$, respectively;
\item[\bf(iii)] the explicit {\it modular} 1-factorization of the graphs $M_k$ \cite{DKS}, with factor colors in $[1,k+1]$ obtained from the color set $[0,k]$ in Section~\ref{modular} by uniformly adding 1; such a modular 1-factorization of $M_k$ contrasts with the {\it lexical} 1-factorization of $M_k$ \cite{KT} (see Remark~\ref{u}); in fact, T. M\"utze et al. found two different approaches to Hamilton cycles in $M_k$: via the lexical 1-factorization of \cite{gmn,M} and via the modular 1-factorization of \cite{Hcs}, via \cite{u2f} (see also Corollary~\ref{the-end}).
\end{enumerate}
A reinterpretation of the Hamilton cycle treatment of $M_k$ \cite{gmn,M} via RGS's in \cite{D2,D1} is modified through Section~\ref{verL} below to present relevant properties of the uniform 2-factors and Hamilton cycles of $O_k$ and $M_k$ \cite{u2f,Hcs}, cited above in items (i) and (iii).
\section{Germs of restricted growth strings}\label{germs}
We take the sequence ${S_{(\infty)}}$ to be as follows: ${S_{(\infty)}}=(\beta(0),\beta(1),\beta(2),\ldots,\beta(17),\ldots)=$
$$(0,1,10,11,12,100,101,110,111,112,120,121,122,123,1000,1001,1010,1011,\ldots),$$
where the lengths of contiguous pairs $(\beta(i-1),\beta(i))$ in $S_{(\infty)}$ are constant unless $i=C_k$ for some $k>1$. If $i=C_k$ for some $k>1$, then
$\beta(i-1)=\beta(C_k-1)=12\cdots k$ and $\beta(i)=\beta(C_k)=10^k=10\cdots 0$, (with $0^k$ meaning $0\cdots 0$, $k$ times).
Each RGS $\beta=\beta(m)$ is transformed, for every $k\in\mathbb{Z}$ such that $k>{\rm length}(\beta)$, into a $(k-1)$-string $\alpha=\alpha(\beta,k)=a_{k-1}a_{k-2}\cdots a_2a_1$ called a $k$-{\it germ}
by prefixing $k-1-{\rm length}(\beta)$ zeros to such $\beta$.
Concretely,
a $k$-{\it germ} $\alpha$, where $k>1$, is a $(k-1)$-string $\alpha=a_{k-1}a_{k-2}\cdots a_2a_1$ such that:\\
{\bf(1)} the first position of $\alpha$ contains an entry $a_{k-1}\in\{0,1\}$; \\
{\bf(2)} given $1<i<k$,
the entry $a_{i-1}$ satisfies $0\le a_{i-1}\le a_i+1$.\\ \\
Every $k$-germ $a_{k-1}a_{k-2}\cdots a_2a_1$ determines a $(k+1)$-germ $0a_{k-1}a_{k-2}\cdots$ $a_2a_1$.\\
We say that a {\it non-null RGS} is obtained by stripping a $k$-germ $\alpha=a_{k-1}a_{k-2}\cdots a_1\ne 00\cdots 0$ of all its null entries to the left of its first entry equal to 1; such a non-null RGS is still denoted by the same Greek letter $\alpha$. We say that the {\it null RGS} $\alpha=0$ corresponds to all null $k$-germs $\alpha$, for $0<k\in\mathbb{Z}$, and we use the same notations $\alpha=\alpha(m)$ and $\beta=\beta(m)$ to denote both a $k$-germ and its associated RGS.
The $k$-germs are ordered as follows. Given two $k$-germs
$\alpha=a_{k-1}\cdots a_2a_1$ and $\beta=b_{k-1}\cdots b_2b_1$ ($\alpha\ne \beta$), we say that $\alpha<\beta$ whenever either\\
{\bf (i)} $a_{k-1} < b_{k-1}$ or \\
{\bf (ii)} $a_j=b_j$, for $k-1\le j\le i+1$, and $a_i < b_i$, for some $k-1>i\ge 1$.
The resulting order on $k$-germs $\alpha(m)$ ($m<C_k$), corresponding bijectively (via the assignment $m\rightarrow\alpha(m)$) to the natural order on $m$, yields a listing that we call the {\it natural ($k$-germ) enumeration}.
Note that there are exactly $C_k$ $k$-germs $\alpha=\alpha(m)<10^k$, $\forall k>0$.
We consider also the {\it empty RGS}, denoted $\alpha=\phi$, that yields for $k=1$ the only {\it empty} $k$-germ $\alpha=0^{k-1}=0^{1-1}=\phi$, using the same notation $\phi$ both for the empty RGS and the empty 1-germ and extending this way the general notation $\alpha=0^{k-1}$ ($k>1$) to every $k>0$.
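The natural enumeration is straightforward to generate mechanically. The
following minimal Python sketch (the function names are ours, and digit
entries are assumed smaller than 10 for the string rendering) produces the
$k$-germs in the natural enumeration, checks their number against $C_k$,
and recovers the first $C_4=14$ terms of $S_{(\infty)}$.
\begin{verbatim}
from math import comb

def k_germs(k):
    """All k-germs a_{k-1}...a_1 as tuples (leftmost entry = a_{k-1}),
    generated in the natural enumeration; assumes k > 1."""
    germs = [()]
    for _ in range(k - 1):          # a_{k-1} <= 1; then a_{i-1} <= a_i + 1
        germs = [g + (a,) for g in germs
                 for a in range((g[-1] if g else 0) + 2)]
    return germs

def rgs(germ):
    """The non-null RGS of a k-germ: strip zeros left of the leading 1."""
    s = "".join(map(str, germ)).lstrip("0")
    return s if s else "0"

for k in (2, 3, 4, 5):
    assert len(k_germs(k)) == comb(2 * k, k) // (k + 1)   # C_k germs
print([rgs(g) for g in k_germs(4)])
# ['0','1','10','11','12','100','101','110','111','112',
#  '120','121','122','123']   -- the first C_4 = 14 terms of A239903
\end{verbatim}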
\section{Ordered trees of germs and associated Dyck words}\label{nat}
\begin{theorem}\label{thm1}\cite{D2,D1} {\bf(A)} Let $1\le i <k$. The $k$-germs are the nodes of an ordered tree ${\mathcal T}_k$ rooted at $0^{k-1}$ such that each $k$-germ $\alpha\ne0^{k-1}$ whose first nonzero entry, read from $a_1$ leftward, is $a_i$ has parent $\beta(\alpha)=b_{k-1}\cdots b_1\!<\alpha$ in ${\mathcal T}_k$ with $b_i= a_i-1$ and $a_j=b_j$, for each $j\ne i$ in $[1,k-1]$. {\bf(B)}
To each $k$-germ $\alpha=a_{k-1}\cdots a_1$ ($k>1$) or $\alpha=\phi$ ($k=1$) corresponds an $n$-string $F(\alpha)$ whose entries are the numbers $0,1,\ldots,k$, once each, and $k$ ``$=$"-signs. In particular, $F(0^{k-1})=``012\cdots(k-2)(k-1)k=\cdots ="$. Moreover,
if $\alpha\ne 0^{k-1}$ then $F(\alpha)$ arises from $F(\beta)=F(\beta(\alpha))$ as follows, where $i$ is as in {\rm(A)}:
Let $W^i$ and $Z^i$ be the initial and terminal substrings of length $i$ in $F(\beta)$. Let $\gamma>0$ be the first entry of $U=F(\beta)\setminus(W^i\cup Z^i)$. Since $U$ contains the number entry $\gamma+1$ of $F(\beta)$, $U$ splits as $U=X|Y$, where $Y$ starts at $\gamma+1$. We take $F(\alpha)=``W^i|Y|X|Z^i"$, a re-concatenation of $F(\beta)=``W^i|X|Y|Z^i"$. Furthermore, if $b'\in[0,k]$ is next on its right in $F(\alpha)$ to a number $b\in[0,k)$, then $b'>b$.
Also, $W^i$ is an ascending number $i$-substring, $Z^i$ is formed by $i$ ``$=$"-signs, and $``k="$ is a substring of $F(\alpha)$, but $``=k"$ is not.
\end{theorem}
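The recursion of Theorem~\ref{thm1} can be implemented verbatim; in the
following minimal Python sketch (ours), a $k$-germ is a tuple
$(a_{k-1},\ldots,a_1)$ and the ``$=$"-signs are rendered as the string
\verb|"="|. For instance, the $2$-germ $1$ yields $F=``02{=}1{=}"$.
\begin{verbatim}
def F(alpha):
    """F(alpha) of part (B) of the theorem above; alpha is a k-germ as a
    tuple (a_{k-1}, ..., a_1), possibly empty (the case k = 1)."""
    k = len(alpha) + 1
    if not any(alpha):
        return list(range(k + 1)) + ["="] * k   # "01...k=...="
    i = next(j for j in range(1, k) if alpha[k - 1 - j])  # first nonzero a_i
    parent = list(alpha)
    parent[k - 1 - i] -= 1                      # b_i = a_i - 1
    Fb = F(tuple(parent))
    W, U, Z = Fb[:i], Fb[i:-i], Fb[-i:]
    gamma = U[0]                                # a number entry > 0
    cut = U.index(gamma + 1)                    # Y starts at gamma + 1
    X, Y = U[:cut], U[cut:]
    return W + Y + X + Z                        # "W|Y|X|Z"

print(F((1,)))       # k = 2: [0, 2, '=', 1, '='], that is, "02=1="
print(F((0, 0, 0)))  # k = 4: [0, 1, 2, 3, 4, '=', '=', '=', '=']
\end{verbatim}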
\begin{remark}\label{u} We digress to say that $k$-edge ordered trees \cite{D2} that appear in \cite[p.~221, item~(e)]{Stanley} as ``plane trees with $k+1$ vertices'' and in \cite{gmn} as ``ordered rooted trees", represent Dyck paths of length $2k$ (see Remark~\ref{Dyck}). These trees are
equivalent to $k$-strings $0b_{k-1}\cdots b_1$ that were called $k$-{\it RGS}'s in \cite{D2} and that are tailored from the RGS's in Section~\ref{uni} via \cite[p.~224, items~(r)-(u)]{Stanley} in a different way from that of the $k$-germs in Section~\ref{germs}. An equivalence of our $k$-germs of Section~\ref{germs} and those $k$-RGS's was presented in \cite{D2} via their distinct relation to the $k$-edge ordered trees (whose purpose in \cite{gmn,M} was using their plane rotations toward Hamilton cycles in $M_k$, not related to the odd-graph approach of \cite{Hcs}, discussed in Section~\ref{verL}).\end{remark}
\begin{example}
By representing ${\mathcal T}_k$ with each node $\beta$ having its children $\alpha$ enclosed between parentheses following $\beta$ and separating siblings with commas, we can write: $${\mathcal T}_4=000(001,010(011(012)),100(101,110(111(112)),120(121(122(123))))).$$\end{example}
\begin{example}
Fig.~\ref{fig1} shows the tree ${\mathcal T}_k$ ($k=1,2,3,4$), with its root $0^{k-1}$ represented in a box containing from left to right: first, the order $ord(0^{k-1})=0$ of $0^{k-1}$, then $0^{k-1}$ and then $F(0^{k-1})$. Each internal or leaf node $\alpha$ of ${\mathcal T}_k$ is represented by a box of two levels: the top level contains from left to right the order $ord(\beta(\alpha))$ of $\beta(\alpha)$, then $\beta(\alpha)$ and then $F(\beta(\alpha))$; the lower level
contains from left to right the order $ord(\alpha)$ of $\alpha$, then $\alpha$ and then $F(\alpha)$. In these presentations of $\beta(\alpha)$ and $\alpha$, the entries $b_i$ and $a_i$ (as in Theorem~\ref{thm1}(A)) are colored red and the remaining entries black. In the boxes of $F(\beta(\alpha))=``W^i|X|Y|Z^i"$ and $F(\alpha)=``W^i|Y|X|Z^i"$ in the figure, $X$ and $Y$ are colored blue and red, respectively, while $W^i$ and $Z^i$ are left black. In addition, the edge leading from $\beta(\alpha)$ to $\alpha$ is labeled with the subindex $i$.
\end{example}
\begin{figure}[htp]
\includegraphics[scale=0.66]{4cases.eps}
\caption{The trees ${\mathcal T}_k$ ($k=1,2,3,4)$ exemplifying Theorem~\ref{thm1}}
\label{fig1}
\end{figure}
\begin{remark}\label{alfalfa}
For each $k$-germ $\alpha$ ($k>1$), we define the bitstring form $f(\alpha)$ of $F(\alpha)$ by replacing each number entry of $F(\alpha)$ by a 0-bit and each ``$=$"-sign by a 1-bit. (0-bits and 1-bits here correspond respectively to the 1-bits and 0-bits of \cite{Hcs}). Such $f(\alpha)$ is an $n$-bitstring of weight $k$ whose support $supp(f(\alpha))$ is in $V(O_k)$. So, we consider both
$F(\alpha)$ and the characteristic vector $f(\alpha)$ of $supp(f(\alpha))$ to represent the vertex $supp(f(\alpha))$ of $O_k$.
\end{remark}
\begin{example}\label{PLC}
We can recover $F(\alpha)$ from $f(\alpha)$, exemplified for $k=1,2,3$ in Fig.~\ref{fig2}: for each one of the $1+2+5=8$ cases in the figure, we grow iteratively a piecewise-linear curve $PLC(\alpha)$ that starts at the shown origin O in the Cartesian plane by replacing successively the 0-bits and 1-bits of $f(\alpha)$, from left to right, by {\it up-steps} and {\it down-steps}, namely diagonal segments $(x,y)(x+1,y+1)$ and $(x,y)(x+1,y-1)$, respectively. To each down-step of $PLC(\alpha)$, we assign the ``$=$"-sign. To the up-steps of $PLC(\alpha)$, we assign the integers in the interval $[0,k]$ in {\it decreasing order} from top to bottom, subject to assigning them from left to right (in that order) between each two contiguous horizontal levels. Then, by reading and successively writing the number entries and ``$=$"-signs assigned to the steps of $PLC(\alpha)$ from left to right, the $n$-tuple $F(\alpha)$ is obtained.
Fig.~\ref{fig2} is provided, underneath each instance, with the corresponding $k$-germ $\alpha$ followed by $F(\alpha)$ and its (underlined) order of presentation via Theorem~\ref{thm1}. We assume that all elements of $V(O_k)$ are represented by means of such piecewise-linear curves, for each fixed integer $k>0$.
\end{example}
\begin{figure}[htp]
\includegraphics[scale=0.671]{equals.eps}
\caption{Recovering $F(\alpha)$ from $f(\alpha)$: $PLC(\alpha)$ for triples (($\alpha)$ $[F(\alpha)]$, $\underline{ord(\alpha)})$, $k=1,2,3$}
\label{fig2}
\end{figure}
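The assignment of numbers to up-steps is likewise mechanical; the
following minimal Python sketch (ours) recovers $F(\alpha)$ from the
bitstring $f(\alpha)$ exactly as in Example~\ref{PLC}.
\begin{verbatim}
def F_from_f(bits):
    """F(alpha) from the n-bitstring f(alpha) (a sequence of 0's and 1's):
    0-bits are up-steps, 1-bits are down-steps; k, k-1, ..., 0 are assigned
    to up-steps from the top level down, left to right within a level."""
    ups, y = [], 0
    for pos, b in enumerate(bits):
        if b == 0:
            ups.append((-y, pos))     # negate the level: top level first
            y += 1
        else:
            y -= 1
    out = ["="] * len(bits)
    for rank, (_, pos) in enumerate(sorted(ups)):
        out[pos] = len(ups) - 1 - rank    # k on the topmost up-step, ...
    return out

print(F_from_f([0, 0, 1, 0, 1]))      # [0, 2, '=', 1, '='], i.e. "02=1="
\end{verbatim}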
\begin{remark}\label{Dyck}
Let $0<k\in\mathbb{Z}$ and let $\alpha$ be a $k$-germ. The curve $PLC(\alpha)$ (Example~\ref{PLC} and Fig.~\ref{fig2}) yields a {\it Dyck path} $DP(\alpha)$ by the removal of its first up-step $(0,0)(1,1)$ and a change of coordinates from $(1,1)$ to $(0,0)$. Such Dyck path $DP(\alpha)$ represents a corresponding {\it Dyck word} $DW(\alpha)=``0\cdots 1"$ of length $2k$; this is a particular case for $\ell=k$ of a {\it Dyck word of length} $2\ell$ ($0<\ell\in\mathbb{Z}$), defined as a $2\ell$-bitstring
of weight $\ell$ such that in every prefix the number of 0-bits is at least the number of 1-bits
(differing from the Dyck words of \cite{Hcs} in which, on the contrary, the number of 1-bits is at least equal to the number of 0-bits).
The concept of {\it empty Dyck word} $\epsilon$ also makes sense here and is used for example in Section~\ref{verL}, display~(\ref{!}).
The Dyck paths $DP(\alpha)$ corresponding to the curves $PLC(\alpha)$ in Fig.~\ref{fig2} are represented in the lower-left quarter of Fig.~\ref{fig3}, with notation specified in Examples~\ref{rr2}--~\ref{rrr2}.
\end{remark}
\begin{theorem}\label{bij}
There exists a bijection $\lambda$ from the $\mathbb{Z}_n$-classes of $V(O_k)$ onto the Dyck words of length $2k$, these words conforming the set $S_{(i)}$ of 6.19$(i)$ in \cite[p.~221]{Stanley}.
In fact, each $\mathbb{Z}_n$-class $\Gamma$ of $V(O_k)$ has a Dyck word $f(\alpha)$ of length $2k$ as sole representative. The other $n$-tuples in $\Gamma$ are obtained by translations $f(\alpha).j$ mod $n$ of $f(\alpha)$, where $j\in[0,2k]$ is the position of the null entry in $f(\alpha).j$.
Also, $f(\alpha)$ may be interpreted as its corresponding $F(\alpha)$ and the other $n$-tuples $f(\alpha).j$ above may be interpreted as the corresponding translations $F(\alpha).j$ mod $n$.
\end{theorem}
\begin{proof}
Since there are just $C_k$ Dyck words of length $2k$ corresponding bijectively to the $n$-tuples $F(\alpha)$, or their binary versions $f(\alpha)$, as well as $C_k$ $\mathbb{Z}_n$-classes $\Gamma$ in $V(O_k)$, the correspondence from the $\mathbb{Z}_n$-classes $\Gamma$ onto such Dyck words is a bijection $\lambda$ such that $\lambda(\Gamma)=f(\alpha)$, as claimed, where we may write $\Gamma=\Gamma_\alpha=\lambda^{-1}(f(\alpha))$.
\end{proof}
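For small $k$, Theorem~\ref{bij} can be checked exhaustively; the
following minimal Python sketch (ours) groups $V(O_k)$ into
$\mathbb{Z}_n$-classes and verifies that each class contains exactly one
bitstring consisting of a 0-bit followed by a Dyck word, and that the
classes are $C_k$ in number.
\begin{verbatim}
from itertools import combinations
from math import comb

def is_rep(bits):
    """True iff bits is a 0-bit followed by a Dyck word (0-bits dominate
    every prefix)."""
    if bits[0] != 0:
        return False
    d = 0
    for b in bits[1:]:
        d += 1 if b == 0 else -1
        if d < 0:
            return False
    return True

def class_count(k):
    n = 2 * k + 1
    classes = set()
    for s in combinations(range(n), k):
        v = tuple(1 if i in s else 0 for i in range(n))
        reps = [r for r in (v[i:] + v[:i] for i in range(n)) if is_rep(r)]
        assert len(reps) == 1             # a unique Dyck representative
        classes.add(reps[0])
    return len(classes)

for k in (1, 2, 3, 4, 5):
    assert class_count(k) == comb(2 * k, k) // (k + 1)   # C_k classes
\end{verbatim}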
\begin{figure}[htp]
\includegraphics[scale=0.38]{alverre.eps}
\caption{Illustration for Examples~\ref{Toc} and~\ref{Inc} and Remark~\ref{r3}}
\label{fig4}
\end{figure}
\begin{example}\label{Toc} In this and subsequent examples, we express integers in their hexadecimal form (e.g., $a=10,b=11$, etc.). To clarify concepts, let us determine the $k$-germ $\alpha_1$ (here $n=21$, so $k=10$) corresponding to the bitstring $f(\alpha_1)=000011010001111010011$.
We proceed by determining $PLC(\alpha_1)$ (as indicated in Example~\ref{PLC}), drawn in the upper right of Fig.~\ref{fig4}, where the black hexadecimal number entries and ``$=$"-signs form from left to right the $n$-string $F(\alpha_1)$, while the red symbols
are the first twenty positive hexadecimal numbers, that appear in that order in the expression $h_0(\alpha)=h_0(\alpha_1)$
of Remark~\ref{r3} and Example~\ref{Inc} (Section~\ref{rev}, below).
To associate the $k$-germ $\alpha_1$ to the $n$-string $F(\alpha_1)$, we built a list $\mathbb{L}(\alpha_1)$ shown on the upper left of Fig.~\ref{fig4}.
The first lines of $\mathbb{L}(\alpha_1)$ contain data concerning the path $P(\alpha_1)$ from $\alpha_1$ to the root $0^{9}=\alpha_{16}$ in ${\mathcal T}_{10}$, namely, from left to right: $F(\alpha_i)$, $\alpha_i$, $ord(\alpha_i)-ord(\alpha_{i+1})$ and $ord(\alpha_i)$, for $i=1,2,\ldots,15$. The first sublist $\mathbb{L}'(\alpha_1)$ in $\mathbb{L}(\alpha_1)$, composed successively by $F(\alpha_1),\ldots,F(\alpha_{16})$, shows each of the 21-strings $F(\alpha_j)$, ($j=1,\ldots,15$), as a concatenation $``W^{i_j}|X|Y|Z^{i_j}"$, where $i_j$ is the first index $i$ in $F(\alpha_j)=c_0c_1\cdots c_{20}$ such that $c_i>i$, with blue $X$, red $Y$, and black for both $W^{i_j}$ and $Z^{i_j}$, showing
in the following line the 21-string
$F(\alpha_{j+1})=``W^{i_j}|Y|X|Z^{i_j}"$, just under $F(\alpha_j)$. To the right of $\mathbb{L}'(\alpha_1)$ and starting at the red $\alpha_{16}=0^{k-1}$ in line 16, we went up and built a sublist $\mathbb{L}''(\alpha_1)$ by reconstructing each $\alpha_j=a_{k-1}\cdots a_1$, setting in red the terminal substring $a_{i_j}\cdots a_1$ and in black the initial substring $a_{k-1}\cdots a_{{i_j}+1}$. To the right of $\mathbb{L}''(\alpha_1)$, we constructed an accompanying blue sublist $\mathbb{L}'''(\alpha_1)$ formed by Catalan numbers taken as increments that determine the corresponding orders of the vertices in $P(\alpha_1)$. These orders, appearing in the final sublist $\mathbb{L}''''(\alpha_1)$, are obtained as the partial sums of Catalan numbers. This leads to $ord(\alpha_1)=6821<16796=C_{10}=|V({\mathcal T}_{10})|$.\end{example}
\section{Edge-supplementary arc factorization}\label{modular}
\begin{remark}\label{r2} Each arc $(u,v)$ of $O_k$ ($u,v\in V(O_k)$) is represented by translations
$F(\alpha_u).j_u$ and $F(\alpha_v).j_v$ mod $n$ of the $n$-strings $F(\alpha_u)$ and $F(\alpha_v)$.
Looking upon $u$ and $v$ as $u=F(\alpha_u).j_u$ and $v=F(\alpha_v).j_v$ and comparing, we see that, apart from a specific position $i\in[0,2k]$ holding a number entry in both $u$ and $v$, all other number entries of one of them correspond to sign-``$=$" entries of the other one, and vice versa. Moreover, the entries $u_i$ of $u$ and $v_i$ of $v$ satisfy $u_i+v_i=k$, so they are said to be $k$-{\it supplementary}. Then, the edge-supplementary 1-arc factorization $\mathbb{A}_k$ of $O_k$ claimed in Section~\ref{s1} is given by the values of those entries $u_i$ and $v_i$ taken as colors of the arcs $(u,v)$ and $(v,u)$, respectively, for all pairs of adjacent vertices $u$ and $v$ of $O_k$. \end{remark}
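For small $k$, the factorization $\mathbb{A}_k$ can be verified
exhaustively; the following Python sketch (ours) reuses \texttt{is\_rep}
and \texttt{F\_from\_f} from the previous sketches to attach an $F$-string
to every vertex, and then checks that each edge carries a unique shared
number position with $k$-supplementary values, and that the $k+1$ arcs
leaving a vertex carry the $k+1$ colors of $[0,k]$, once each.
\begin{verbatim}
from itertools import combinations

def arc_colors(k):
    """Exhaustive check of the 1-arc factorization of O_k (small k);
    returns the map vertex -> F-string."""
    n = 2 * k + 1
    verts = [tuple(1 if i in s else 0 for i in range(n))
             for s in combinations(range(n), k)]
    Fmap = {}
    for v in verts:
        r = next(i for i in range(n) if is_rep(v[i:] + v[:i]))
        Frep = F_from_f(v[r:] + v[:r])         # F of the class representative
        Fmap[v] = Frep[n - r:] + Frep[:n - r]  # rotate back to v
    for u in verts:
        colors = set()
        for v in verts:
            if any(a and b for a, b in zip(u, v)):
                continue                       # u and v intersect: no edge
            common = [i for i in range(n)
                      if Fmap[u][i] != "=" and Fmap[v][i] != "="]
            assert len(common) == 1            # unique shared number position
            cu, cv = Fmap[u][common[0]], Fmap[v][common[0]]
            assert cu + cv == k                # k-supplementary arc colors
            colors.add(cu)
        assert colors == set(range(k + 1))     # one arc of each color at u
    return Fmap
\end{verbatim}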
\begin{figure}[htp]
\includegraphics[scale=0.71]{eran01y2.eps}
\caption{Illustration for Remark~\ref{r2} and Examples~\ref{rr2} and~\ref{rrr2}}
\label{fig3}
\end{figure}
\begin{example}\label{rr2}
Remark~\ref{r2} is illustrated for $k=1,2$ in Fig.~\ref{fig3}, where the $n$-tuples $F(\alpha)$ are shown as the initial lines of corresponding vertical lists $L(\alpha)$ in which arcs $(u,v)$ of $O_k$ are shown as ordered pairs $(F(\alpha_u).j_u,F(\alpha_v).j_v)$ disposed in contiguous lines, except for all arcs from bottom lines, taken as $n$-tuples $F(\alpha_u).j_u$, to corresponding top lines, taken as associated $n$-tuples $F(\alpha_v).0$, thus closing the lists $L(\alpha)$ into oriented cycles $C(\alpha)$. In each such pair $(F(\alpha_u).j_u,F(\alpha_v).j_v)$, the $i$-entries $u_i$ and $v_i$ of Remark~\ref{r2} are colored respectively blue in $F(\alpha_u).j_u$ and red in $F(\alpha_v).j_v$ (the other entries in black) with the exception of the bottom and top $n$-tuples in each list: these are also adjacent, with the entry $u_i=u_0$ holding blue value $k$ on the bottom $n$-tuple $u$ and the entry $v_i=v_0$ holding red value 0 on the top $n$-tuple $v=F(\alpha_v)=F(\alpha_v)_0$. The position $i$ of the blue entry $u_i$ in each line of the lists is cited underlined ($``\underline{i}"$) to the right of its $n$-tuple $u$; the vertex $u\in O_k$ represented in such a line is still cited to the right of its $``\underline{i}"$ as $``ord(\alpha).j_u"$, where
$ord(\alpha)$ refers to the $\mathbb{Z}_n$-class $\Gamma_\alpha$ (so denoted in the proof of Theorem~\ref{bij}) of $u$ in $O_k$. Such vertical lists are used in
Section~\ref{verL} (Fig.~\ref{fig6},~\ref{fig5},~\ref{fig7}) to yield Hamilton cycles of $O_k$, for $k>2$; ($k=2$ is excluded; in fact, $O_2$ is the hypohamiltonian Petersen graph).
\end{example}
\begin{example}\label{rrr2} Fig.~\ref{fig3} contains one oriented 3-cycle for $O_1$ and two oriented 5-cycles for $O_2$, each such cycle headed by two lines: a first line reading ``$ord(\alpha)$:$F(\alpha);\pi(\alpha)$", with ``$F(\alpha)$'' as the first line of the cycle and ``$\pi(\alpha)$" (to be further specified in Remark~\ref{r3}) formed by the different entries at which a blue-to-red $k$-supplementation takes place in the cycle; the second line contains the (underlined) positions 0 to $2k$ of the vertices (as $n$-tuples) in the cycle, followed by ``$O_k$". The arcs of $O_k$ receive colors in the set $[0,k]$ so that the edge between each two adjacent vertices in those cycles has its two composing arcs bearing $k$-{\it supplementary} colors $b$ (for blue) and $r$ (for red), meaning that $b,r\in[0,k]$ are such that $b+r=k$.
To the immediate right of each of these three cycles, for the lists $L(\epsilon),L(0),L(1)$ of respective lengths 3, 5, 5, vertical lists $L^M(\epsilon), L^M(0), L^M(1)$ are also represented (occupying two contiguous columns each), closing into corresponding cycles $C^M(\epsilon), C^M(0), C^M(1)$ of respective double lengths 6, 10, 10, obtained by replacing the ``$=$"-signs by ``$>$"-signs and ``$<$"-signs uniformly on alternate lines.
These cycles can be interpreted as cycles in the middle levels graphs $M_1,M_2$, obtained by reading the subsequent lines in the concatenation of two subsequent columns as follows: from left to right if they bear ``$>$'' signs, and from right to left if they bear ``$<$'' signs. In addition, the graphs $O_1$, $O_2$, $M_1$, $M_2$ are represented in Fig.~\ref{fig3} in thick trace for the edges containing the arcs of the oriented cycles $C(\alpha)$; recalling $\mathbb{A}_k$ and $\mathbb{E}_k$ from Section~\ref{s1}, each
vertex (resp. edge) of $O_1$, $O_2$ is denoted by the support of its corresponding bitstring $f(\alpha)$
(resp. denoted centrally by its underlined color in $\mathcal{E}_k$ and marginally by its blue-red arc-color pair in $\mathbb{A}_k$). In $M_1$, $M_2$, a plus or minus sign precedes each such support indicating respectively a vertex in level $L_k$ or in level $L_{k+1}$ of $B_n$; if in $L_{k+1}$, as the complement $\overline{f(\alpha,<)}$ of the right-to-left reading $f(\alpha,<)$ of the bitstring $f(\alpha)=f(\alpha,>)$; if in $L_k$, as $f(\alpha)=f(\alpha,>)$ itself. The resulting readings of $n$-tuples of $M_1$, $M_2$ inherit the mentioned arc colors for $O_1$, $O_2$, corresponding to the {\it modular matchings} of \cite{DKS}, only that the colors in \cite{DKS} are in $[1,k+1]$ with supplementary sum $k+1$ while our colors are in $[0,k]$ with supplementary sum $k$.\end{example}
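The lifting just described can be checked in the same exhaustive spirit;
the following Python sketch (ours) assumes the map returned by
\texttt{arc\_colors} above and encodes a vertex of level $L_{k+1}$ as the
complement of the corresponding vertex of $O_k$, so that the color classes
of the lifted edges are perfect matchings of $M_k$.
\begin{verbatim}
def modular_matchings(k, Fmap):
    """Lift the arc colors of O_k to M_k: an arc (u,v) of color c yields
    the M_k edge joining u (in L_k) to the complement of v (in L_{k+1});
    u is contained in that complement, since u and v are disjoint."""
    n = 2 * k + 1
    matchings = {c: set() for c in range(k + 1)}
    for u, Fu in Fmap.items():
        for v, Fv in Fmap.items():
            if any(a and b for a, b in zip(u, v)):
                continue                        # not an edge of O_k
            i = next(j for j in range(n)
                     if Fu[j] != "=" and Fv[j] != "=")
            vbar = tuple(1 - b for b in v)      # the L_{k+1} endpoint
            matchings[Fu[i]].add((u, vbar))
    for m in matchings.values():                # each color class is a
        assert len(m) == len(Fmap)              # ... perfect matching:
        assert len({e[1] for e in m}) == len(m) # all endpoints distinct
    return matchings

modular_matchings(2, arc_colors(2))   # three perfect matchings of M_2
\end{verbatim}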
\section{String reversals in properly nested parentheses}\label{rev}
Let $\gamma$ be a sequence of properly-nested parentheses, as presented in the Dyck language of \cite[p.~204]{Stanley}. Insert an integer of $[1,2k]$ immediately to the right (resp. left) of each left (resp. right) parenthesis of $\gamma$ and separate with a comma each two so-inserted integers not separated by parentheses. The resulting expression $(\gamma)$ will be said to be a {\it tightly parenthesized permutation} ({\it TPP}) if each inserted integer appears just once in $(\gamma)$. We write $(\gamma)=(w)$, where $w$ is the integer sequence obtained from $(\gamma)$ by removing its parentheses and commas.
\begin{remark}\label{r3}
We define subsequently the $2k$-tuple $\pi(\alpha)$ mentioned in Example~\ref{rrr2} and illustrated in the lower-left quarter of Fig.~\ref{fig3} as an inverse $2n$-permutation. Consider the Dyck path $DP(\alpha)$ obtained from each curve $PLC(\alpha)$ by the removal of its first up-step and change of coordinates from $(1,1)$ to $(0,0)$. Assign an integer $p(i)$ from 1 to $2k$ to each step $i$ of such $DP(\alpha)$, as follows. Let $\alpha$ be a $k$-germ.
Consider the $n$-bitstring $f(\alpha)$ and let $f'(\alpha)$ be the $2k$-bitstring obtained from $f(\alpha)$ by removing its first entry. Set parentheses and/or commas between each two entries of $f'(\alpha)$ so that the substrings ``$01$", ``$10$'', ``$00$" and ``$11$" are transformed into ``$0,1$", ``$1)(0$", ``$0(0$" and ``$1)1$", respectively. Set a final parenthesis so that the last ``$1$" in $f'(\alpha)$ appears as ``$1)$". The resulting parenthesized version of $f'(\alpha)$ is denoted $g(\alpha)$.
Then, replace the bits in $g(\alpha)$ from left to right by the successive integers from 1 to $|g(\alpha)|$, keeping all pre-inserted parentheses and commas in $g(\alpha)$ fixed in their places. This yields a parenthesized numbering version $h_0(\alpha)$ of $g(\alpha)$.
To the right of each left parenthesis ``(" in $h_0(\alpha)$, we take the closest right parenthesis ``)" enclosing an expression, call it $w$, from which the removal of its internal commas and open/close parentheses (resp. removal of its number entries) produces a number substring $w'$ lifting to a Dyck subword of $\alpha$ \cite{Hcs} (resp. an irreducible TPP).
We perform a recursive step ${\mathcal RS}$ consisting in transforming $w'$ into its reverse substring, call it $w''$, then resetting $w''$ in place of $w'$ in $(w)$, with the delimiting parentheses of $w$ in place and with the old commas and open/close parentheses kept also in place. Let us denote the resulting expression by ${\mathcal RS}(w)$. Observe that for some $t\ge 1$, $h_0(\alpha)$ is a concatenation $(w_1)|(w_2)|\cdots|(w_t)$ of $t$ TPP's $(w_1)$, $(w_2)$, $\ldots$, $(w_t)$, (eventually just only one TPP, $(w_1)=(w_t)$). Applying ${\mathcal RS}$ inside $h_0(\alpha)=(w_1)|(w_2)|\cdots|(w_t)$ ($t$ times) yields $h_1(\alpha)={\mathcal RS}(w_1)|{\mathcal RS}(w_2)|\cdots|{\mathcal RS}(w_t)$. Next, applying recursively ${\mathcal RS}$, for $i=1,2,\ldots,t$, to the concatenations of TPP's inside each ${\mathcal RS}(w_i)$, while keeping commas and open/close parentheses in place, yields a string $h_2(w)$, then a string $h_3(w),$ etc, ending eventually when we process all the innermost TPP's (of the form $(w)=(a,b)$, with $a,b\in[1,2k]$, where $b=a\pm 1$). This recursion produces a final $h_s(w)$, for some $s>0$. Disregarding the parentheses and commas in $h_s(w)$ yields a $2k$-string $g'(\alpha)$ that insures the claimed assignment $i\rightarrow p(i)$, for $i=1,\ldots,2k$, by making correspond the positions $i=1,\ldots,2k$ of $g'(\alpha)$ to the actual entries of $g'(\alpha)$.
Our desired $2k$-permutation $\pi(\alpha)$ is the inverse permutation of $p(\alpha)=(p(1)p(2)\cdots p(2k))$ \cite{Hcs}. In our approach, the entries of $\pi(\alpha)$ must be taken from right to left as instructions for the positions $i$ they indicate to be taken as the places where the $k$-supplementations must occur ($0<i<n$).
\end{remark}
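The recursion ${\mathcal RS}$ is easily mechanized. In the following
minimal Python sketch (ours), $g(\alpha)$ is parsed into nested lists,
each application of ${\mathcal RS}$ reverses the flat number string of a
TPP while keeping its parenthesization in place, and $p(\alpha)$ and
$\pi(\alpha)$ are then read off; integers are printed in decimal, so that
the symbols $a,\ldots,k$ of the example below read $10,\ldots,20$.
\begin{verbatim}
def tokens_of(fp):
    """g(alpha) from f'(alpha): parentheses/commas between the bits."""
    out = ["(", fp[0]]
    glue = {(0, 1): [","], (1, 0): [")", "("], (0, 0): ["("], (1, 1): [")"]}
    for a, b in zip(fp, fp[1:]):
        out += glue[(a, b)] + [b]
    return out + [")"]               # final parenthesis after the last 1

def parse(tokens):
    """Nested lists of ints; commas are separators only."""
    stack, cur = [], []
    for t in tokens:
        if t == "(":
            stack.append(cur); cur = []
        elif t == ")":
            done, cur = cur, stack.pop(); cur.append(done)
        elif t != ",":
            cur.append(t)
    return cur                       # the list of top-level TPP contents

def flat(node):
    return [x for t in node for x in (flat(t) if isinstance(t, list) else [t])]

def rs(node):
    """Reverse the flat number string of a TPP, keep the parenthesization,
    then recurse into the inner TPPs (the step RS described above)."""
    it = iter(flat(node)[::-1])
    refill = lambda nd: [refill(t) if isinstance(t, list) else next(it)
                         for t in nd]
    return [rs(t) if isinstance(t, list) else t for t in refill(node)]

def p_and_pi(f):
    """p(alpha) and its inverse pi(alpha) for the n-bitstring f(alpha)."""
    toks, c = tokens_of(f[1:]), iter(range(1, len(f)))
    toks = [next(c) if t in (0, 1) else t for t in toks]   # number the bits
    p = flat([rs(g) for g in parse(toks)])
    pi = [0] * len(p)
    for pos, val in enumerate(p, 1):
        pi[val - 1] = pos
    return p, pi

f = [int(b) for b in "000011010001111010011"]
print(p_and_pi(f)[1])
# [14, 8, 12, 10, 11, 9, 13, 6, 7, 2, 4, 3, 5, 1, 16, 15, 20, 18, 19, 17]
\end{verbatim}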
\begin{example}\label{Inc} As an example of the procedure in Remark~\ref{r3} and in continuation to Examples~\ref{Toc}, \ref{rr2} and~\ref{rrr2}, note that the middle right of Fig.~\ref{fig4}, just under the upper right representation of the curve $PLC(\alpha_1)$, contains a list, call it $\ell$, whose first line is $g(\alpha_1)$ as in Remark~\ref{r3}, with $\alpha_1$ as in Example~\ref{Toc}, and whose second line is $h_0(\alpha_1)$. In this $h_0(\alpha_1)$, the red substrings $w_1=1(2(3,4)5)(6,7)(8(9(a,b)c)d)e$, $w_2=f,g$ and $w_3=h(i,j)k$, are to be reverted according to the first instances of ${\mathcal RS}(w)$ in Remark~\ref{r3}. This yields the third line $h_1(\alpha_1)$ in the list $\ell$. In $h_1(\alpha_1)$, the red substrings are to be reverted according to the next instances of ${\mathcal RS}(w)$, and so on. In the end, the sixth line $h_4(\alpha_1)$ of $\ell$ is $h_4(\alpha_1)=p(\alpha_1)=(14,10,12,11,13,8,9,2,6,4,5,3,7,1,16,15,20,18,19,17)$.
Its inverse is $\pi(\alpha_1)=p^{-1}(\alpha_1)=(14,8,12,10,11,9,13,6,7,2,4,3,5,1,16,15,20,18,19,17)$. The lower left of Fig.~\ref{fig4} contains a representation of $PLC(\alpha_1)$ with indication of the entries of $p(\alpha_1)$ in black over the corresponding up-steps and down-steps, and the same red symbols as in the upper right of the figure, for comparison.
\end{example}
\section{Uniform 2-factors and Hamilton cycles}\label{verL}
\begin{remark}\label{Vl} Let $k>1$. A vertical list as in Example~\ref{rrr2} and Fig.~\ref{fig3} can be formed for each $k$-germ $\alpha$. In fact, there are $C_k$ such lists $L(\alpha)=(L_0(\alpha),L_1(\alpha),\ldots,L_{2k}(\alpha))^t$, where $t$ stands for transpose, each $L(\alpha)$ representing in $O_k$ an oriented $n$-path $P(\alpha)$ whose end-vertices $L_0(\alpha)=F(\alpha)$ and $L_{2k}(\alpha)$ are adjacent in $O_k$, thus completing an oriented cycle $C(\alpha)$ in $O_k$ by the addition of the arc $(L_{2k}(\alpha),L_0(\alpha))$. Those paths $P(\alpha)$ arose in \cite[Theorem 4]{u2f} and \cite[Lemma 4]{Hcs}, in the latter case leading to Hamilton cycles in $O_k$ and $M_k$.
\end{remark}
\begin{example}\label{joder}
Fig.~\ref{fig6} for $k=3$, and Fig.~\ref{fig7} for $k=4$, show some of those lists $L(\alpha)$ assembled in groups of three or four lists. Fig.~\ref{fig6} for $k=3$ shows two such groups, gathering the lists for the triples $\tau_0=(\alpha_0,\alpha_1,\alpha_2)$ and $\tau_1=(\alpha_0,\alpha_3,\alpha_4)$, each list $L(\alpha_i)$ headed on its top by the subindex $i$ of $\alpha_i$. In each such $L(\alpha_i)$, there are just two contiguous lines in red (against the blackness of the remaining lines) except for their initial entries, in black, and the unique blue vertical pair of number entries that are $k$-supplementary (Remark~\ref{r2}).
These colored line pairs $FT(\alpha_i)$ are examples of {\it flippable tuples}, so named in \cite{Hcs}. For each of $j=0$ and $1$, the three $FT(\alpha_i)$ with $\alpha_i\in\tau_j$ are combined into a 6-cycle $F_6C(\tau_j)$ in $O_3$, which for $\kappa=2k=6$ is an example of a {\it flipping $\kappa$-cycle}, as in \cite{Hcs}, that we denote $F_\kappa C(\tau_j)$; such flipping 6-cycle $F_6C(\tau_j)$ is shown in the middle left of the respective upper-left part and upper-right part, respectively, in
Fig.~\ref{fig6}, sided each on its right and below by its three participating lists $L(\alpha_i)$. Above such flipping 6-cycle $F_6C(\tau_j)$ ($j=0,1)$, a triple of Dyck words of length 6 headed each by the subindex $i\in\{0,1,2\}$ or $i\in\{0,3,4\}$ of the corresponding $\alpha_i$ is shown in red except for a blue entry.
This blue entry corresponds to the position of the blue $k$-supplementary number entries in the two contiguous red-blue substrings of the corresponding flippable tuples
$FT(\alpha_i)$.\end{example}
\begin{figure}[htp]
\includegraphics[scale=0.689]{mk3-2tri.eps}
\caption{Illustration for Remark~\ref{Vl} and Example~\ref{ex6}}
\label{fig6}
\end{figure}
\begin{remark} In general, that is for any $k>1$, red-blue flippable bitstrings as those $6=2\times 3$ in Example~\ref{joder} were shown to exist in \cite[Section 3]{Hcs}. They were shown to form part of the family of bitstrings of \cite[display (4)]{Hcs} corresponding to displays~(\ref{!}) and~(\ref{!!}) in Remark~\ref{r4} below. Moreover, they were used in constructing Hamilton cycles in \cite{Hcs}.\end{remark}
\begin{example}
For $k=3$, Fig.~\ref{fig6} still contains, on the right of each of the two cases of $\tau_j$ in Example~\ref{joder}, the symmetric difference of the corresponding flipping 6-cycle $F_6C(\tau_j)$ and the union of the three 7-cycles $C(\alpha_i)$, for each $i$ in $(0,1,2)$ or $(0,3,4)$, yielding a 21-cycle in each case.
The two resulting 21-cycles are then recombined into a Hamilton cycle of $O_3$, shown on the lower part of Fig.~\ref{fig6} as a list sectioned from left to right into five sublists. To the left of these five sublists, there is a drawing of a hypergraph as in Remark~\ref{r4} below. \end{example}
\begin{figure}[htp]
\hspace*{1.1cm}
\includegraphics[scale=1.05]{troca.eps}
\caption{Illustration for Remark~\ref{uu} and Example~\ref{ex7}}
\label{fig5}
\end{figure}
\begin{remark}\label{uu}
Each $n$-tuple $F(\alpha)$ gives rise to a modified $n$-tuple $\underline{F}(\alpha)$ formed by the number entries $j\in[0,k]$ of $F(\alpha)$ set in the same positions they have in $F(\alpha)$ together with $k$ underlined number entries $\underline{j}$ in place of the ``$=$"-signs, where $j\in[1,k]$ (or $\underline{j}\in\{\underline{1},\ldots,\underline{k}\}$), in a fashion determined by the fact that a nonempty Dyck word is expressible uniquely as a string $0u1v=0_u^vu1_u^vv$ (modified from the form $1u0v$ appearing in \cite[p.~1260]{Hcs}), where $u$ and $v$ are (possibly empty) Dyck words.
Each number entry $j\in[0,k]$ in $F(\alpha)$ corresponds to the starting entry $0_u^v$ of a Dyck word $0_u^vu1_u^vv$ in $f(\alpha)$, with its $1_u^v$ represented in $F(\alpha)$ by an ``$=$"-sign. Its $\underline{F}(\alpha)$ has each number entry $j$ $(\ne\underline{j})$ in its same position as in $F(\alpha)$, with a corresponding entry $0_u^v$ of a Dyck word $0_u^vu1_u^vv$ in $f(\alpha)$.
Moreover, $\underline{F}(\alpha)$ has each ``$=$"-sign of $F(\alpha)$ replaced by a corresponding underlined integer $\underline{j}$ in the position of an accompanying $1_u^v$. As an example, the lower right of Fig.~\ref{fig4} contains, under a line repeating the first line $g(\alpha_1)$ of the list $\ell$ from Example~\ref{Inc} in the figure, a corresponding line with the 0-bits and 1-bits of $g(\alpha_1)$ replaced by the respective number entries $j$ and $\underline{j}$ of $\underline{F}(\alpha_1)$.
For $k=4$, Fig.~\ref{fig5} contains vertical lists $\underline{L}(\alpha)=(\underline{L}_0(\alpha),\underline{L}_1(\alpha),\ldots,\underline{L}_{2k}(\alpha))^t$ similar to the lists $L(\alpha)$ but corresponding instead to the $n$-strings $\underline{F}(\alpha)=\underline{L}_0(\alpha)$, where $\alpha$ runs over the total of 14 $4$-germs. The only non-black entries are those corresponding to the 4-supplementary vertical blue-red pairs realizing the adjacency of each pair of contiguous lines, including the pair formed by the initial blue ``4" in the last line $\underline{L}_8(\alpha)$ and the initial red 0 in the first line $\underline{L}_0(\alpha)$ in each list. Note that all the first columns of the 14 lists form the same column vector, with transpose row vector $(0,\underline{4},1,\underline{3},2,\underline{2},3,\underline{1},4)$.\end{remark}
\begin{remark}\label{April1st} The sole representative $f(\alpha)$ of a $\mathbb{Z}_n$-class of $V(O_k)$, as in Theorem~\ref{bij}, may be interpreted not only as the $n$-tuple $F(\alpha)$ but also as the corresponding $\underline{F}(\alpha)$, so that the other $n$-tuples of that class may be interpreted as its translations mod $n$. Note that the lines of each $L(\alpha)$ and the lines of its associated $\underline{L}(\alpha)$ are translations mod $n$ of respective $n$-tuples $F(\alpha_\iota)$ and $\underline{F}(\alpha_\iota)$ that depend on the orders $\iota\in[0,2k]$ of such lines. These facts are used in the following statement, where the subindex $j$ is $j=2k-\iota$ in relation to the subindex $\iota$.
\end{remark}
\begin{theorem}\label{L5} For each $k$-germ $\alpha$:
\begin{enumerate}
\item[\bf(i)] $L(\alpha)$ is generated, with initial current $n$-tuple $L_0(\alpha)=F(\alpha)$, by transforming iteratively
for $j=2k,2k-1,\ldots,2,1$, the current $n$-tuple $L_{2k-j}(\alpha)$ into the uniquely feasible next $n$-tuple, $L_{2k-j+1}(\alpha)$,
by $k$-supplementing its $\pi(j)$-th entry and exchanging its $k$ remaining number entries with its $k$ ``$=$"-sign entries;
\item[\bf(ii)] the first column of $\underline{L}(\alpha)$ has transpose row vector $(0,\underline{k},1,\underline{k-1},2,\underline{k-2},\ldots,k-2,\underline{2},k-1,\underline{1},k)$, obtained by alternating the entries of the vectors $(0,1,2,\ldots,k-1,k)$ and $(\underline{k},\underline{k-1},\ldots,\underline{2},\underline{1})$; moreover, $k\underline{k}$ and $\underline{1}0$ are substrings mod $n$ of each $\underline{L}_j(\alpha)$.
\end{enumerate} The resulting lists $L(\alpha)$, $\underline{L}(\alpha)$ yield a uniform 2-factor of $O_k$ formed by $C_k$ $n$-cycles \cite{u2f,Hcs}.
\end{theorem}
\begin{proof} Item (i) is an adaptation of \cite[Lemma 5]{Hcs} to the $k$-germ setting via our previous Remarks~\ref{alfalfa}, \ref{Dyck}, \ref{r2}, \ref{r3}, \ref{uu}, \ref{April1st} and the following argument. We note that the Dyck path of length $2k$ defined in Remark~\ref{u} above corresponds to the Dyck paths with $2k$ steps and 0 flaws of \cite{u2f}, presented in each list $L(\alpha)$ as $L_0(\alpha)$. In the same way, $L_2(\alpha),L_4(\alpha),\ldots,L_{2k}(\alpha)$ correspond respectively to the Dyck paths with $2k$ steps and $1,2,\ldots,k$ flaws of \cite{u2f}, obtained in our cases again as in Remark~\ref{r3} by the removal of its first up-step and change of coordinates from $(1,1)$ to $(0,0)$. In fact, passing from each $L_{2i}(\alpha)$ to its subsequent $L_{2i+1}(\alpha)$ corresponds to applying the function $g$ defined between Figure 4 and Theorem 2 in \cite{u2f}, and passing from $L_{2i+1}(\alpha)$ to $L_{2i+2}(\alpha)$ corresponds to applying the function $h$ composing the mapping $f=h\circ g$ of \cite[Theorem 2]{u2f}.
For item (ii), note that the $n$-tuples $\underline{L}_j(\alpha)$ having a common initial entry in $[0,k]\cup[\underline{1},\underline{k}]$ are at the same height $j$ in all vertical lists $\underline{L}(\alpha)$
so that the entries of the first column $(a_0,\underline{b}_0,a_1,\underline{b}_1,\ldots,a_{k-1},\underline{b}_{k-1},a_k)^t$ of each such $\underline{L}(\alpha)$ satisfy both $a_i+b_i=k$ and $b_i+a_{i+1}=k+1$, for $i\in[0,k-1]$.
Thus, the alternating first-entry column in each vertical list characterizes and controls the formation of the claimed uniform 2-factor.
\end{proof}
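For instance, for $k=3$ (so $n=7$), the alternating pattern in item (ii) gives the first-column transpose row vector $(0,\underline{3},1,\underline{2},2,\underline{1},3)$, where indeed $a_i+b_i=3$ and $b_i+a_{i+1}=4$ throughout, in agreement with the $k=4$ vector $(0,\underline{4},1,\underline{3},2,\underline{2},3,\underline{1},4)$ noted in Remark~\ref{uu}.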
\begin{remark}\label{r4}
Consider the following Dyck-word triples and quadruple:
\begin{eqnarray}\label{!}\begin{array}{lllllr}
S_1(w)&=&\{\xi_{1(w)}^1=0w001\bar{1}1,&\xi_{1(w)}^2=0w\bar{0}1101,&\xi_{1(w)}^3=0w0101\bar{1}&\},\\
S_2&=& \{\xi_2^1\hspace{4.4mm}=0\bar{0}110011,&\xi_2^2\hspace{4.4mm}=0010011\bar{1},&\xi_2^3\hspace{4.4mm}=00010\bar{1}11&\},\\
S_3&=& \{\xi_3^1\hspace{4.4mm}=00011\bar{1},&\xi_3^2\hspace{4.4mm}=0100\bar{1}1,&\xi_3^3\hspace{4.4mm}=\bar{0}10101&\},\\
S_4&=&\{\xi_4^1\hspace{4.4mm}=00011\bar{1},&\xi_4^2\hspace{4.4mm}=0010\bar{1}1,&\xi_4^3\hspace{4.4mm}=01\bar{0}011,&\xi_4^4=\bar{0}10101\},
\end{array}\end{eqnarray}
\noindent where $w$ is any (possibly empty) Dyck word. Consider also the sets $\underline{S}_1(w)$, $\underline{S}_2$, $\underline{S}_3$, $\underline{S}_4$ obtained respectively from $S_1(w)$, $S_2$, $S_3$, $S_4$ by having their component Dyck paths
$\underline{\xi}_{1(w)}^j$, $\underline{\xi}_2^j$, $\underline{\xi}_3^j$, $\underline{\xi}_4^j$
defined as the complements of the reversed strings of the corresponding Dyck paths
$\xi_{1(w)}^j$, $\xi_2^j$, $\xi_3^j$, $\xi_4^j$.
Note that each Dyck word in the subsets of display~(\ref{!}) has an entry with a bar on top. By denoting
\begin{eqnarray}\label{denot}\xi_i^j=x_sx_{s-1}\cdots x_2x_1x_0\end{eqnarray} for an adequate $s$ in each case, the barred positions in~(\ref{!}) are the targets of the following correspondence $\Phi$:
\begin{eqnarray}\label{!!}\begin{array}{llll}
\Phi(\xi_{1(w)}^1)=1,&\Phi(\xi_{1(w)}^2)=4,&\Phi(\xi_{1(w)}^3)=0,&\\
\Phi(\xi_2^1)\hspace{4.4mm}=6,&\Phi(\xi_2^2)\hspace{4.4mm}=0,&\Phi(\xi_2^3)\hspace{4.4mm}=2,&\\
\Phi(\xi_3^1)\hspace{4.4mm}=0,&\Phi(\xi_3^2)\hspace{4.4mm}=1,&\Phi(\xi_3^3)\hspace{4.4mm}=5,&\\
\Phi(\xi_4^1)\hspace{4.4mm}=0,&\Phi(\xi_4^2)\hspace{4.4mm}=1,&\Phi(\xi_4^3)\hspace{4.4mm}=3,&\Phi(\xi_4^4)=5.\\
\end{array}\end{eqnarray}
The correspondence $\Phi$ is extended over the Dyck words $\underline{\xi}_{1(w)}^j$, $\underline{\xi}_2^j$, $\underline{\xi}_3^j$, $\underline{\xi}_4^j$ with their barred positions taken reversed with respect to the corresponding barred positions in $\xi_{1(w)}^j$, $\xi_2^j$, $\xi_3^j$, $\xi_4^j$, respectively.
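For instance, under these conventions, $\underline{\xi}_3^2$ is obtained from $\xi_3^2=0100\bar{1}1$ by reversing (giving $110010$) and complementing (giving $001101$), while the barred position $\Phi(\xi_3^2)=1$ reverses to position $5-1=4$; hence $\underline{\xi}_3^2=0\bar{0}1101$ and $\Phi(\underline{\xi}_3^2)=4$.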
Recall the ordered tree ${\mathcal T}_k$ from Theorem~\ref{thm1}. Adapting from \cite{Hcs}, we define a hypergraph $H_k$ with $V(H_k)=V({\mathcal T}_k)$ and having as hyperedges the subsets $\{\alpha^j;j\in\{1,2,3\}\}\subset V(H_k)$ and $\{\alpha^j;j\in\{1,2,3,4\}\}\subset V(H_k)$ whose member $k$-germs $\alpha^j$ have associated bitstrings $f(\alpha^j)$, for $j=1,2,3\mbox{ or }j=1,2,3,4$, containing respective Dyck words in $\{\xi_{1(w)}^j$, $\xi_2^j$, $\xi_3^j$, $\xi_4^j$, $\underline{\xi}_{1(w)}^j$, $\underline{\xi}_2^j$, $\underline{\xi}_3^j$, $\underline{\xi}_4^j\}$ in the same 6 or 8 fixed positions $x_i$ (for specific indices $i\in\{0,1,\ldots,s\}$ in~(\ref{denot})) and forming respective subsets
$\{\xi_{1(w)}^j;j=1,2,3\}$,
$\{\underline{\xi}_{1(w)}^j;j=1,2,3\}$,
$\{\xi_4^j;j=1,2,3,4\}$,
$\{\underline{\xi}_4^j;j=1,2,3,4\}$,
$\{\xi_i^j;j=1,2,3\}$ and
$\{\underline{\xi}_i^j;j=1,2,3\}$, for both $i=2$ and $i=3$.
\end{remark}
\begin{example}\label{ex6} For $k=3$,
$S_1$ and $S_3$ are shown heading the upper-left and upper-right sides of Fig.~\ref{fig6}, respectively, with strings $\xi_{1(w)}^j$ or $\xi_i^j$ ($j=1,2,3$) having their constituent entries in red except for the barred entry, in blue. For $S_1$ concretely, $f(\alpha_0)$,
$f(\alpha_1)$ and $f(\alpha_2)$, represented by the respective subindices 0, 1 and 2, are shown stacked in the upper-left of the figure, those subindices indicating (each via a colon) respectively the Dyck words $\xi_1^1$, $\xi_1^2$ and $\xi_1^3$ with their entries in red except for the entries in positions $\Phi(\xi_1^1)=1$, $\Phi(\xi_1^2)=4$ and $\Phi(\xi_1^3)=0$, which are blue. A similar observation can be made on the right side of the figure for $S_3$. The corresponding hypergraph $H_3$ contains the subhypergraph $H_3'$ depicted
on the lower-left of the figure. This is used to construct the shown Hamilton cycle. The hyperedges of $H'_3$ are denoted by the triples of subindices $i$ of their composing 3-germs $\alpha_i$. So, the hyperedges of $H'_3$ are taken to be $(0,1,2)$ and $(0,3,4)$. This type of notation is used in Example~\ref{ex7} as well.
\end{example}
\begin{figure}[htp]
\hspace*{1.5cm}
\includegraphics[scale=1.77]{changing.eps}
\caption{Illustration for Remark~\ref{Vl} and Example~\ref{ex7}}
\label{fig7}
\end{figure}
\begin{example}\label{ex7} For $k=4$, let us represent each $k$-germ $\alpha_i$ by its respective order $i=ord(\alpha_i)$.
In a manner similar to that of Example~\ref{ex6}, Fig.~\ref{fig7} shows on its lower-left corner a depiction of a subhypergraph $H'_4$ of $H_4$ with the hyperedges $$(0,2,a), (8,7,5), (7,6,a), (1,4,6), (1,9,d), (3,b,c,d).$$
The respective triples and quadruple of Dyck words $\xi_{1(w)}^j$ or $\underline{\xi}_{1(w)}^j$ or $\xi_i^j$ or $\underline{\xi}_i^j$ ($j=1,2,3$ or $j=1,2,3,4$) may be expressed as follows by replacing the Greek letters $\xi$ by the values of the correspondence $\Phi$:
$$
(\underline{5}_3^1,\underline{4}_3^2,\underline{0}_3^3),
(6_2^1,0_2^2,2_2^3),
(1_{1(01)}^1,4_{1(01)}^2,0_{1(01)}^3),
(1_{1(\epsilon)}^1,4_{1(\epsilon)}^2,0_{1(\epsilon)}^3),
(0_3^1,1_3^2,5_3^3),
(0_4^1,1_4^2,3_4^3,5_4^4),$$
where we can also write $(\underline{5}_3^1,\underline{4}_3^2,\underline{0}_3^3)=\underline{(0_3^1,1_3^2,5_3^3)}$.
From top to bottom in Fig.~\ref{fig7}, excluding the said depiction of $H'_4$, the vertical lists corresponding to the composing $4$-germs of those six hyperedges are presented side by side, in a fashion similar to that of Fig.~\ref{fig6}, except that the first line in each such vertical list has its
corresponding substring $\xi$ (a member of one of the sets presented in display~(\ref{!})) in red except for its blue entry
$\Phi(\xi)$, to stress their roles in the respective $L(\alpha)$ and $FT(\alpha)$. The pairs $FT(\alpha)$ in each such $L(\alpha)$ (just partly superposed with the first line of $L(\alpha_c)$ in the hyperedge $(3,b,c,d)$) allow the composition of five flipping $6$-cycles and a final flipping $8$-cycle, presented to the right of each triple or quadruple of vertical lists, which in turn allow the integration, via symmetric differences, of a Hamilton cycle comprising all the vertices in the cycles provided by the vertical lists. Below those $6$- or $8$-cycles, the corresponding red-blue substrings $\xi_i^j$ appear separated by a hyphen in each case from the associated (multicolored) first lines.
\end{example}
\begin{remark}
We represent $H_k$ as a simple graph $\psi(H_k)$ with $V(\psi(H_k))=V(H_k)$ by replacing each hyperedge $e$ of $H_k$ by the clique $K(e)=K(V(e))$ so that $\psi(H_k[e])=K(e)$, these replacements being the only source of cliques of $\psi(H_k)$. A {\it tree} $T$ of $H_k$ is a subhypergraph of $H_k$ such that: {\bf(a)} $\psi(T)$ is a connected union of cliques $K(V(e))$; {\bf(b)} for each cycle $C$ of $\psi(H_k)$, there exists a unique clique $K(V(e))$ such that $C$ is a subgraph of $K(e)$. A {\it spanning tree} $T$ of $H_k$ is a tree of $H_k$ with $V(T)=V(H_k)$. Clearly, the subhypergraphs $H'_k$ of $H_k$ depicted in Figs.~\ref{fig6} and~\ref{fig7} for $k=3$ and 4 are corresponding spanning trees.
A subset $G$ of hyperedges of $H_k$ is said to be {\it conflict-free} \cite{Hcs} if: {\bf(a)} any two hyperedges of $G$ have at most one vertex in common; {\bf(b)} for any two hyperedges $g,g'$ of $G$ with a vertex in common, the corresponding images by $\Phi$ (as in display~(\ref{!!})) in $g$ and $g'$ are distinct.
\end{remark}
\begin{theorem}\label{L6} (\cite{Hcs}) A conflict-free spanning tree of $H_k$ yields a Hamilton cycle of $O_k$, for every $k\ge 3$. Moreover, distinct conflict-free spanning trees of $H_k$ yield distinct Hamilton cycles of $O_k$, for every $k\ge 6$.
\end{theorem}
\begin{proof} Let $D_\ell$ be the set of all Dyck words of length $2\ell$, and let
\begin{eqnarray}\label{rec1}\begin{array}{|l|l|l|}\hline
E_2=\{0101\} \!&\! E_3=S_4 \!&\! E_\ell=01D_{\ell-1}, \forall \ell>3\\
F_2=\{0011\} \!&\! F_3=D_3\setminus E_3=\{001101\} \!&\!F_\ell=D_\ell\setminus 01D_{\ell-1}, \forall \ell>3\\\hline
\end{array}\end{eqnarray}
In particular, $0101(01)^{\ell-2}\in E_\ell$ and $0011(01)^{\ell-2}\in F_\ell$. Now, let
\begin{eqnarray}\label{rec2}\begin{array}{|l|l|l|l|}\hline
{\mathcal E}_2=\emptyset & \mathcal{E}_3=\{S_4\} & \mathcal{T}_3=\{S_1(\epsilon),S_3\}&\mathcal{E}_\ell=01\mathcal{T}_{\ell-1}, \forall \ell>3\\
{\mathcal F}_2=\emptyset & \mathcal{F}_3=\emptyset & \mathcal{F}_4=\{S_1(01),S_2,0\underline{S}_31,S_1(\epsilon)01\}&\\\hline
\end{array}\end{eqnarray}
We will set $\mathcal{F}_\ell$ as a function of $\mathcal{E}_2,\ldots,\mathcal{E}_{\ell-1},\mathcal{F}_2,\ldots,\mathcal{F}_{\ell-1}$ and $\mathcal{T}_{\ell-2}$, as follows:
For $1<j\le\ell$, let $F_\ell^j=\cup_{i=2}^j\{0\underline{u}1v;u\in D_{i-1},v\in D_{\ell-i}\}$. Since $F_\ell=F_\ell^\ell$, the following statement implies the existence of a spanning tree of $H_\ell[F_\ell]$.
\begin{lemma} For every $1<j\le\ell$, there exists a spanning tree $\mathcal{F}_\ell^j$ of $H_\ell[F_\ell^j]$.
\end{lemma}
\begin{proof}
Lemma 7 of \cite{Hcs} asserts that if $\tau$ is a flippable tuple and $u,v$ are Dyck words, then: {\bf(i)} $u\tau v$ is a flippable tuple if $|u|$ is even; {\bf(ii)} $u\underline{\tau}v$ is a flippable tuple if $|u|$ is odd. Lemma 8 of \cite{Hcs} asserts that the triples and quadruple in (\ref{!}) are flippable tuples. Using these two lemmas, \cite{Hcs} defines
$\Psi$ as the set of all such flippable tuples $u\tau v$ and $u\underline{\tau}v$.
Moreover, \cite{Hcs} defines $\Psi_2=\emptyset$ and $\Psi_\ell=\Psi\cap D_\ell$, for $\ell>2$.
Since $F_\ell^2=0011D_{\ell-2}$, we let $\mathcal{F}_\ell^2=0011\mathcal{T}_{\ell-2}$. Assume $2<j\le\ell$. Since $D_{j-1}=E_{j-1}\cup F_{j-1}$ is a disjoint union, there exists the following partition:
\begin{eqnarray}\label{quelio}F_\ell^j=F_\ell^{j-1}\cup_{v\in D_{\ell-j} }(0\underline{D}_{j-1}1v)=F_\ell^{j-1}\cup_{v\in D_{\ell-j} }((0\underline{E}_{j-1}1v)\cup(0\underline{F}_{j-1}1v)).\end{eqnarray}
For every $v\in D_{\ell-j}$, consider $\tau(v)=S_1((01)^{j-3})v\in\Psi_\ell$, whose elements are:
\begin{eqnarray}\label{rec3}\begin{array}{|c|c|c|}\hline
0(01)^{j-3}001\bar{1}1v\in 0\underline{F}_{j-1}1v & 0(01)^{j-3}0101\bar{1}v\in 0\underline{E}_{j-1}1v & 0(01)^{j-3}\bar{0}1101v\in F_\ell^{j-1}\\\hline\end{array}\end{eqnarray}
Now, we let
\begin{eqnarray}\label{formula}\mathcal{F}_\ell^j=\mathcal{F}_\ell^{j-1}\cup(\cup_{v\in D_{\ell-j}}(\{\tau(v)\}\cup (0\underline{\mathcal{E}}_{j-1}1v)\cup (0\underline{\mathcal{F}}_{j-1}1v))),\end{eqnarray} which is a spanning tree of $H_\ell[F_\ell^j]$. \end{proof}
Now, consider $\tau=S_3(01)^{\ell-3}\in\Psi_\ell$, whose elements are:
\begin{eqnarray}\label{rec4}\begin{array}{|c|c|c|}\hline
00011\bar{1}(01)^{\ell-3}\in F_\ell\ (\ell>3) & 0100\bar{1}1(01)^{\ell-3}\in 01F_{\ell-1} & \bar{0}10101(01)^{\ell-3}\in 01E_{\ell-1}\\\hline\end{array}\end{eqnarray}
The sets $F_\ell$, $01E_{\ell-1}$ and $01F_{\ell-1}$ form a partition of $D_\ell$. We take the spanning trees of the subhypergraphs induced by these three sets and connect them into a single spanning tree using the triple $\tau$, that is:
\begin{eqnarray}\label{rec5}\mathcal{T}_\ell=\mathcal{F}_\ell\cup\{\tau\}\cup 01\mathcal{E}_{\ell-1}\cup 01\mathcal{F}_{\ell-1},\end{eqnarray} which is a spanning tree of $H_\ell$.\end{proof}
\begin{example} Example~\ref{ex6} uses $\mathcal{T}_3$ in display~(\ref{rec2}), with $S_1(\epsilon)=012$ and $S_3=034$ yielding the hypergraph $\mathcal{T}_3$ depicted in the lower left of Fig.~\ref{fig6}. Example~\ref{ex7} uses ${\mathcal T}_\ell$ in display~(\ref{rec5}) for $\ell=4$, $\mathcal{F}_4$ and $\mathcal{E}_3$ in display~(\ref{rec2}) and $\tau$ in display~(\ref{rec4}), with $S_1(01)=67a$, $S_2=875$, $0\underline{S}_31=02a$ and $S_1(\epsilon)01=146$, these four triples being the elements of $\mathcal{F}_4$; $01S_4=3bcd$, this one being the only element of $01\mathcal{E}_3$ (while $\mathcal{F}_3=\emptyset$); and $\tau=19d$, yielding the hypergraph $\mathcal{T}_4$ depicted at the lower left corner of Fig.~\ref{fig7}.\end{example}
\begin{corollary}\label{the-end} To each Hamilton cycle in $O_k$ produced by Theorem~\ref{L6} corresponds a Hamilton cycle in $M_k$.
\end{corollary}
\begin{proof}
For each vertical list $L(\alpha)$ provided by Theorem~\ref{L5}, let $L^M(\alpha)$ be a vertical list as exemplified in Fig.~\ref{fig3} and Example~\ref{rrr2}, which is obtained from $L(\alpha)$ by replacing its ``$=$"-signs by: {\bf(a)} ``$>$"-signs (meaning left-to-right string-reading) for the strings $L_{2j}(\alpha)$ ($j\in[0,k]$) of $L(\alpha)$ and {\bf(b)} ``$<$"-signs (meaning right-to-left string-reading) for the strings $L_{2j+1}(\alpha)$ ($j\in[0,k-1]$) of $L(\alpha)$. Then, Theorem~\ref{L6} can be adapted to produce Hamilton cycles in $M_k$ by repeating the argument in its proof with the lists $L(\alpha)$ replaced by the lists $L^M(\alpha)$, since they have locally similar behaviors; the cycles provided by the lists $L^M(\alpha)$ are twice as long as those provided by the corresponding lists $L(\alpha)$, so the said local behavior happens twice, around opposite short subpaths. Combining Dyck-word triples and quadruples as in display~(\ref{!}) into adequate liftings, in the lists $L^M(\alpha)$, of the parts of the lists $L(\alpha)$ in which the necessary symmetric differences take place to produce the Hamilton cycles in $O_k$ then produces corresponding Hamilton cycles in $M_k$.
\end{proof}
We are currently witnessing the explosive growth of technologies that focus on processing the large amounts of data available in the biomedical sciences. In close parallel, machine learning has been gaining traction in an effort toward analyzing and making sense of said biomedical data. However, effectively using machine learning tools often requires deep knowledge and expertise of both machine learning techniques and the application domain. For example, to effectively apply machine learning to a genome-wide association study (GWAS)~\cite{bird2007perceptions, cordell2009detecting}, the practitioner must understand the complex trait being studied (e.g., a particular disease such as prostate cancer), the research surrounding the underlying genetics of the trait, as well as the numerous steps in the machine learning process that are necessary for a successful analysis (e.g., data preprocessing, feature engineering, model selection, etc.). If we can provide off-the-shelf tools that reduce the barrier to entry for using machine learning by non-experts, then such tools could prove beneficial to researchers working in the biomedical sciences. Mapping statistical inferences and models from genetic data analysis to underlying biological processes is an important goal of the field of computational genomics~\cite{ma2002functional}.
In recent years, evolutionary computation (EC) has been proven successful in automating a variety of tasks, and even outperformed several hand-designed solutions in human vs. machine competitions~\cite{hornby2011computer,fredericks2013exploring,forrest2009genetic,spector2008genetic}. As such, we believe there is considerable promise in using EC to automate the analysis of biomedical data. Last year, we introduced the Tree-Based Pipeline Optimization Tool (TPOT)~\cite{Olson2016EvoBio,Olson2016GPTP}, which seeks to automate the process of designing machine learning pipelines using genetic programming (GP)~\cite{banzhaf1998genetic}. We found that TPOT often outperforms a standard machine learning analysis, all the while requiring no {\em a priori} knowledge about the problem it is solving~\cite{OlsonGECCO2016,Olson2016JMLR}. Here, we report on our attempts to specialize TPOT for human genetics research.
Human genetics research poses a unique data analysis challenge due to the effects of non-additive gene-gene interactions (i.e., epistasis) and the large number of genes that must be simultaneously considered as possible predictors of a complex trait~\cite{moore2010bioinformatics}. As a result, simple linear models of complex traits often predict little about the trait, and it is typically impossible to perform an exhaustive combinatorial search of every possible genetic model including two or more genes. For this reason, many researchers leverage {\em a priori} expert knowledge to intelligently reduce and guide the search space when performing a combinatorial search of possible genetic models~\cite{moore2006exploiting}.
In this paper we introduce TPOT-MDR, which uses GP to automate the study of complex diseases in GWAS. TPOT-MDR automatically designs sequences of common operations from genetic analysis studies, such as data filtering and Multifactor Dimensionality Reduction (MDR)~\cite{ritchie2001multifactor, hahn2003multifactor, moore2002new, cho2004multifactor, moore2006flexible, moore2015epistasis}, with the goal of producing a model that best predicts the outcome of a complex trait based solely on their genetics. Furthermore, we enable TPOT-MDR to leverage {\em a priori} expert knowledge through an Expert Knowledge Filter (EKF), which performs feature selection on the GWAS datasets using information from the expert knowledge source.
To demonstrate TPOT-MDR's capabilities, we compare TPOT-MDR to state-of-the-art machine learning methods on a combination of simulated and real-world GWAS datasets. These datasets are all supervised classification datasets with a focus on human disease as the outcome. We find that TPOT-MDR performs significantly better than the state-of-the-art machine learning methods on the GWAS datasets, especially when it is provided the EKF as an optional feature selector. We further analyze the resulting TPOT-MDR model on a real-world GWAS dataset to highlight the interpretability of TPOT-MDR models, which is a feature that is typically lacking in machine learning models. Finally, we release TPOT-MDR as an open source Python software package to be freely used in human genetics research.
\section{Related Work}
For automated machine learning in general, approaches have mainly focused on optimizing subsets of a machine learning pipeline~\cite{hutter2015beyond}, which is otherwise known as hyperparameter optimization. One readily accessible approach is grid search, which applies brute force search within a search space of all possible model parameters to find the best model configuration. Relatively recently, randomized search~\cite{bergstra2012random} and Bayesian optimization~\cite{snoek2012practical} techniques have entered the fray and have offered more intelligently derived solutions---by adaptively choosing new configurations to train---to the hyperparameter optimization task. Much more recently, a novel bandit-based approach to hyperparameter optimization has outperformed state-of-the-art Bayesian optimization algorithms by 5x to more than an order of magnitude for various deep learning and kernel-based learning problems~\cite{li2016hyperband}. Although TPOT-MDR is an automated machine learning approach, it is specialized for bioinformatics problems rather than general machine learning.
Narrowing the focus to automated machine learning in bioinformatics, the literature is far more sparse. One such example is~\cite{franken2012inferring}, in which metabolomics data are analyzed using a modified Bayesian optimization algorithm integrated with the classification algorithms provided in WEKA, a suite of machine learning software written in Java. The Bayesian optimization provided feature subset selection, which filtered irrelevant and redundant features from the datasets to achieve dimensionality reduction. These techniques led to an improvement in classification accuracy.
Genetic programming and evolutionary computation methods have also been successfully applied to bioinformatics studies, such as~\cite{Moore2013,urbanowicz2013role}, but they do not focus on designing and tuning a series of standard data analysis operations for a specific dataset. As such, although they are related techniques, they do not fall into the automated machine learning domain.
\section{Methods}
In this section, we briefly review TPOT~\cite{Olson2016EvoBio,OlsonGECCO2016,Olson2016JMLR,Olson2016GPTP} and describe the new pipeline operators that were implemented for TPOT-MDR. Afterwards, we describe the datasets used to evaluate TPOT-MDR and compare it to the state-of-the-art machine learning methods.
\subsection{TPOT Review}
\label{sec:tpot-review}
TPOT uses an evolutionary algorithm to automatically design and optimize a series of standard machine learning operations (i.e., a pipeline) that maximize the final classifier's accuracy on a supervised classification dataset. It achieves this task using a combination of genetic programming (GP)~\cite{banzhaf1998genetic} and Pareto optimization (specifically, NSGA2~\cite{Deb2002}), which optimizes over the trade-off between the number of operations in the pipeline and the accuracy achieved by the pipeline.
TPOT implements four main types of pipeline operators: (1) preprocessors, (2) decomposition, (3) feature selection, and (4) models. All the pipeline operators make use of existing implementations in the Python scikit-learn library~\cite{pedregosa2011scikit}. Preprocessors consist of two scaling operators to scale the features and an operator that generates new features via polynomial combinations of numerical features. Decomposition consists of a variant of principal component analysis (\texttt{RandomizedPCA}). Feature selection implements various strategies that serve to filter down the features by some criteria, such as the linear correlation between the feature and the outcome. Models consist of supervised machine learning models, such as tree-based methods, probabilistic and non-probabilistic models, and k-nearest neighbors.
TPOT combines all the operators described above and assembles machine learning pipelines from them. When a pipeline is evaluated, the entire dataset is passed through the pipeline operations in a sequential manner---scaling the data, performing feature selection, generating predictions from the features, etc.---until the final pipeline operation is reached. Once the dataset has fully traversed the pipeline, the final predictions are used to evaluate the overall classification accuracy of the pipeline. This accuracy score is used as part of the pipeline's fitness criteria in the GP algorithm.
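As a purely illustrative sketch (not drawn from TPOT's code base), the following scikit-learn snippet shows the kind of pipeline TPOT assembles and scores; the operator choices, parameter values, and the feature matrix \texttt{X} and label vector \texttt{y} are assumptions made for the example.
\begin{verbatim}
# Illustrative TPOT-style pipeline; operators and parameters are
# assumptions, not TPOT's actual output for any dataset.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

pipeline = Pipeline([
    ('scale', StandardScaler()),               # preprocessor
    ('select', SelectKBest(f_classif, k=10)),  # feature selection
    ('model', RandomForestClassifier()),       # final classifier
])

# The dataset traverses the operators in sequence; the resulting
# cross-validated accuracy serves as part of the pipeline's fitness.
scores = cross_val_score(pipeline, X, y, cv=10)
print(scores.mean())
\end{verbatim}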
To automatically generate and optimize these machine learning pipelines, TPOT uses a GP algorithm as implemented in DEAP~\cite{fortin2012deap}, which is a Python package for evolutionary algorithms. Oftentimes, GP builds trees of mathematical functions that seek to optimize toward a specified criteria. In TPOT, GP is used to optimize the number and order of pipeline operators as well as each operator's parameters. TPOT follows a standard GP process for 100 generations: random initialization of the initial population (default population size of 100), evaluation of the population on a supervised classification dataset, selection of the most fit individuals on the Pareto front via NSGA2, and variation through uniform mutation (90\% of all individuals per generation) and one-point crossover (5\% of all individuals per generation). For more information on the TPOT optimization process, see~\cite{OlsonGECCO2016}.
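Schematically, and setting aside the DEAP specifics, the optimization loop can be sketched as follows; \texttt{random\_pipeline}, \texttt{evaluate}, \texttt{nsga2\_select}, \texttt{mutate}, and \texttt{crossover} are hypothetical stand-ins for the corresponding TPOT/DEAP machinery.
\begin{verbatim}
# Schematic GP loop; all helper functions are hypothetical
# stand-ins for the corresponding TPOT/DEAP machinery.
import random

population = [random_pipeline() for _ in range(100)]
for generation in range(100):
    scored = [(evaluate(p), p) for p in population]  # accuracy + size
    parents = nsga2_select(scored, k=100)            # Pareto selection
    offspring = []
    for p in parents:
        r = random.random()
        if r < 0.90:                           # 90% uniform mutation
            offspring.append(mutate(p))
        elif r < 0.95:                         # 5% one-point crossover
            offspring.append(crossover(p, random.choice(parents)))
        else:
            offspring.append(p)
    population = offspring
\end{verbatim}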
\subsection{TPOT-MDR}
\begin{figure*}
\includegraphics[width=\textwidth]{figures/tpot-mdr-pipeline-example.pdf}
\centering
\caption{Example TPOT-MDR pipeline. Each circle represents an operation on the dataset, and each arrow represents the passing of the processed dataset to another operation.}
\label{fig:tpot-mdr-example}
\end{figure*}
TPOT-MDR is a specialized version of TPOT that focuses on genetic analysis studies. It features two new operators that are commonly used in genetic analyses of human disease: (1) Multifactor Dimensionality Reduction (MDR) and (2) an Expert Knowledge Filter (EKF).
MDR is a machine learning method for detecting statistical patterns of epistasis by manipulating the feature space of the dataset to more easily identify interactions within the data~\cite{ritchie2001multifactor, hahn2003multifactor, moore2006flexible, moore2015epistasis}. To summarize, MDR is a constructive induction algorithm that combines two or more features to create a single feature that captures the interaction effects among the features. This constructed feature can be fed back into the dataset as a new feature or used as the final prediction on the dataset.
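To make the construction step concrete, here is a rough sketch of MDR's genotype-cell labeling, assuming genotypes coded 0/1/2 and a binary outcome; the high/low-risk threshold convention is an assumption, as implementations vary.
\begin{verbatim}
# Rough sketch of MDR's constructive induction step; the
# high/low-risk threshold convention is an assumption.
import numpy as np
from collections import defaultdict

def mdr_feature(snp_a, snp_b, y):
    """Map each (genotype_a, genotype_b) cell to high/low risk."""
    counts = defaultdict(lambda: [0, 0])   # cell -> [controls, cases]
    for a, b, label in zip(snp_a, snp_b, y):
        counts[(a, b)][label] += 1
    overall = np.mean(y)                   # baseline case proportion
    risk = {cell: 1 if cases > overall * (cases + controls) else 0
            for cell, (controls, cases) in counts.items()}
    return np.array([risk[(a, b)] for a, b in zip(snp_a, snp_b)])
\end{verbatim}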
The motivation behind adding the EKF operator was that, oftentimes, {\em a priori} expert knowledge about a biomedical dataset exists: Perhaps the dataset has been analyzed and annotated in previous studies, a database exists with relevant information about the genes in a dataset, or statistical expert knowledge can be derived from the dataset before the study~\cite{moore2010bioinformatics}. This {\em a priori} expert knowledge can be leveraged to guide the TPOT-MDR search algorithm in deciding what genes to include in the final genetic model.
The EKF operator selects an expert knowledge source from the sources provided and selects the \texttt{N} best features according to the expert knowledge source (where \texttt{N} is constrained to [1, 5]). Since the EKF operator is parameterized to select both the expert knowledge source and the number of top features to retain, TPOT-MDR optimizes (1) whether and where in the pipeline to include the EKF and (2) the parameters of the EKF. Multiple EKF operators can be included in a TPOT-MDR pipeline, as shown in Figure~\ref{fig:tpot-mdr-example}.
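Functionally, the EKF can be thought of as the following filter, where \texttt{scores} is a hypothetical dictionary mapping each expert knowledge source to a precomputed per-feature importance array.
\begin{verbatim}
# Sketch of the Expert Knowledge Filter; `scores` is an assumed
# container for the precomputed expert knowledge rankings.
import numpy as np

def expert_knowledge_filter(X, scores, source='MultiSURF', n=3):
    assert 1 <= n <= 5                 # N is constrained to [1, 5]
    top = np.argsort(scores[source])[::-1][:n]
    return X[:, top]                   # retain the n best features
\end{verbatim}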
Other than the MDR and EKF operators, the only other operators included in TPOT-MDR are a standard univariate feature selection method (\texttt{SelectKBest} in scikit-learn~\cite{pedregosa2011scikit}, with an evolvable number of features to retain, \texttt{N}, where \texttt{N} is constrained to [1, 5]) and a \texttt{CombineDFs} operator that combines two feature sets together into a single feature set. These operators can be chained together to form a series of operations acting on a GWAS dataset, as depicted in Figure~\ref{fig:tpot-mdr-example}. Except for the different operator set, the TPOT-MDR optimization process works the same as the original TPOT algorithm as described in Section~\ref{sec:tpot-review}, and was run with a population size of 300 for 300 generations with a per-individual mutation rate of 90\% and per-individual crossover rate of 5\%.
\subsection{Datasets}
We performed an analysis of TPOT-MDR on both simulated datasets and a real world GWAS dataset. The simulated datasets were generated using GAMETES~\cite{urbanowicz2012gametes}, an open source software package designed to generate GWAS datasets with pure epistatic interactions between the features. We simulated 16 different datasets with specific properties to test the scalability of TPOT-MDR. The simulated datasets included 10, 100, 1,000, or 5,000 single-nucleotide polymorphism (SNP) features, each with 2 predictive features and the remaining features generated randomly using an allele frequency between 0.05 and 0.5. Further, we generated datasets with heritabilities (i.e., noise) of 0.05, 0.1, 0.2, or 0.4, where lower heritability entails more noise in the dataset. Notably, all of the GAMETES datasets had a sample size of 2,000 to ensure a reasonably large dataset size.
By scaling the GAMETES dataset feature spaces from 10 to 5,000, we sought to evaluate how well TPOT-MDR could handle increasingly large numbers of non-predictive features. Similarly, by simulating increasing amounts of noise in the dataset, we sought to evaluate how much noise TPOT-MDR could handle before it failed to detect and model the predictive features. As such, this simulated benchmark provides a detailed view of the strengths and limitations of TPOT-MDR in the GWAS domain.
To validate TPOT-MDR on a real-world dataset, we used a nationally available genetic dataset of 2,286 men of European descent (488 non-aggressive and 687 aggressive cases, 1,111 controls) collected through the Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial, a randomized, well-designed, multi-center investigation sponsored and coordinated by the National Cancer Institute (NCI) and their Cancer Genetic Markers of Susceptibility (CGEMS) program. In this study, we focus on prostate cancer aggressiveness as the endpoint, where the prostate cancer is considered aggressive if it was assigned a Gleason score $\geq$ 7 and was in tumor stages III/IV. Between 1993 and 2001, the PLCO Trial recruited men ages 55--74 years to evaluate the effect of screening on disease-specific mortality, relative to standard care. All participants signed informed consent documents approved by both the NCI and local institutional review boards. Access to clinical and background data collected through examinations and questionnaires was approved for use by the PLCO. Men were included in the current analysis if they had a baseline PSA measurement before October 1, 2003, completed a baseline questionnaire, returned at least one Annual Study Update (ASU), and had available SNP profile data through the CGEMS data portal\footnote{http://cgems.cancer.gov}. Prior to this study, the CGEMS dataset was filtered to the 219 SNPs associated with biological pathways relevant to aggressive prostate cancer~\cite{Lavender2012Interaction}. We call this dataset the ``CGEMS Prostate Cancer GWAS dataset.''
For all experiments, we used four different statistical expert knowledge sources as input to the EKF operator: the ReliefF~\cite{kononenko1997overcoming}, SURF~\cite{greene2009spatially}, SURF*~\cite{greene2010informative}, and MultiSURF~\cite{granizo2013multiple} algorithms. These algorithms evaluated the entire dataset prior to the experiments and assigned numerical feature importance scores to each feature, which is an indication of how predictive each feature is of the outcome. These numerical scores were provided to the TPOT-MDR EKF operator, and were used to rank the features when filtering the datasets. We computed the statistical expert knowledge sources for all 16 GAMETES datasets and the CGEMS Prostate Cancer GWAS dataset, resulting in 68 unique expert knowledge sources (4 for each experiment).
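Assuming the open source \texttt{skrebate} package, which implements these algorithms, the precomputation can be sketched as follows; the parameter values shown are illustrative.
\begin{verbatim}
# Precomputing the four expert knowledge sources; assumes the
# `skrebate` package, with illustrative parameter values.
from skrebate import ReliefF, SURF, SURFstar, MultiSURF

sources = {'ReliefF': ReliefF(n_neighbors=100), 'SURF': SURF(),
           'SURF*': SURFstar(), 'MultiSURF': MultiSURF()}
scores = {}
for name, algorithm in sources.items():
    algorithm.fit(X, y)            # X: SNP genotypes, y: case status
    scores[name] = algorithm.feature_importances_
\end{verbatim}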
\subsection{Evaluating TPOT-MDR}
\label{sec:evaluating-tpot-mdr}
We ran four different sets of experiments on the datasets: (1) Extreme Gradient Boosting (XGBoost)\footnote{XGBoost parameters: 500 trees, learning rate 0.0001, and 10 maximum tree depth}~\cite{chen2016xgboost}, (2) Logistic Regression\footnote{The logistic regression regularization parameter was tuned via 10-fold cross validation}~\cite{MachineLearningBook}, (3) TPOT-MDR without the EKF, and (4) TPOT-MDR with the EKF. In Section~\ref{sec:results}, we refer to these experiments as \texttt{XGBoost}, \texttt{Logistic Regression}, \texttt{TPOT (MDR only)}, and \texttt{TPOT (MDR + EKF)}, respectively. For the GAMETES datasets, we additionally compared the four experiments to the baseline of an MDR model constructed with the two known predictive SNP features (called \texttt{MDR (Predictive SNPs)}), which will achieve the maximum possible classification accuracy for the GAMETES datasets without overfitting on the noisy features.
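For reference, the two baseline configurations correspond roughly to the following, assuming the \texttt{xgboost} and scikit-learn Python packages:
\begin{verbatim}
# Baseline configurations matching the footnoted parameters.
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegressionCV

xgb = XGBClassifier(n_estimators=500, learning_rate=0.0001,
                    max_depth=10)
logit = LogisticRegressionCV(cv=10)  # regularization tuned via 10-fold CV
\end{verbatim}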
We chose to compare TPOT-MDR to the XGBoost classifier because XGBoost has been established as a widely popular and successful tree-based classifier in the machine learning community, particularly in the Kaggle\footnote{http://www.kaggle.com} machine learning competitions. Further, we compared TPOT-MDR to a logistic regression to demonstrate the capabilities of a standard linear model on GWAS datasets, which will essentially detect only linear associations between the features and the outcome. Finally, we ran TPOT-MDR without the EKF to demonstrate whether the EKF was important for the TPOT-MDR optimization process.
For every dataset and experiment, we performed 30 replicate runs with unique random number seeds (where applicable). This allowed us to evaluate and explore the limits of TPOT-MDR's modeling capabilities on a broad range of GWAS datasets, and demonstrate how it performs in comparison to state-of-the-art machine learning methods. In all cases, the accuracy scores reported are averaged balanced accuracy scores from 10-fold cross-validation, where the balanced accuracy metric is a normalized version of accuracy that accounts for class imbalance by calculating accuracy on a per-class basis then averaging the per-class accuracies~\cite{Velez2007,urbanowicz2015exstracs}. With balanced accuracy, a score of 50\% is equivalent to random guessing, even with imbalanced datasets.
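A minimal sketch of the balanced accuracy computation, per-class accuracy averaged over the classes, is:
\begin{verbatim}
# Balanced accuracy: per-class accuracy averaged over classes, so
# that 50% equals random guessing even on imbalanced binary data.
import numpy as np

def balanced_accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.mean(y_pred[y_true == c] == c)
                 for c in np.unique(y_true)]
    return np.mean(per_class)
\end{verbatim}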
\section{Results}
\label{sec:results}
\begin{figure*}
\includegraphics[width=\textwidth]{figures/tpot-gametes-comparison-annotated.pdf}
\centering
\caption{Comparison of results on the simulated GAMETES GWAS datasets. Each box plot shows the distribution of averaged 10-fold balanced accuracies for each experiment, where the notches indicate the 95\% confidence interval. A 50\% balanced accuracy is equivalent to random guessing. Each panel within the figure corresponds to differing levels of heritability (i.e., dataset noise) and numbers of features in the simulated datasets, ranging from the easiest dataset on the top right (high heritability, small numbers of features) to the hardest dataset on the bottom left (low heritability, large numbers of features).\\\\Since some of the experiments had little variance in scores, some box plots are too small to determine their color. For clarity, the box plots represent the following experiments, in order from left to right: TPOT (MDR only), XGBoost, Logistic Regression, TPOT (MDR + EKF), and MDR (Predictive SNPs). These experiments are described in Section~\ref{sec:evaluating-tpot-mdr}.}
\label{fig:gametes-comparison}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.4\textwidth]{figures/cgems-comparison.pdf}
\centering
\caption{Comparison of results on the CGEMS prostate cancer GWAS dataset. Each box plot shows the distribution of averaged 10-fold balanced accuracies for each experiment, where the notches indicate the 95\% confidence interval. A 50\% balanced accuracy is equivalent to random guessing.}
\label{fig:cgems-comparison}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{figures/cgems-mdr-grid.pdf}
\centering
\caption{Classification grid for the best MDR model that TPOT-MDR discovered for the CGEMS prostate cancer GWAS dataset. Each of the three grids correspond to one state of the \texttt{PRKCQ\_rs574512} SNP, whereas the cells within each grid correspond to one combination of states between the \texttt{AKT3\_rs12031994} and \texttt{DIABLO\_rs12870} SNPs. Thus, for example, the light grey upper right cell in the leftmost grid corresponds to \texttt{PRKCQ\_rs574512} = 0, \texttt{AKT3\_rs12031994} = 2, and \texttt{DIABLO\_rs12870} = 0.\\\\Dark grey bars and cells indicate aggressive cases (i.e., at risk of aggressive prostate cancer), whereas light grey bars and cells indicate non-aggressive cases (i.e., lower risk of aggressive prostate cancer). The numbers at the top of each bar indicate the number of aggressive and non-aggressive cases that fall within each cell when the entire CGEMS dataset is sorted into the MDR classification grid. If no data points fall into a cell, the cell is left blank.}
\label{fig:cgems-mdr-grid}
\end{figure*}
\subsection{GAMETES Simulated Datasets}
As shown in Figure~\ref{fig:gametes-comparison}, TPOT-MDR without the EKF rarely finds the best genetic model because it only has a univariate feature selector at its disposal. In contrast, TPOT-MDR with the EKF always discovers the best genetic model except when there are thousands of features and high noise. Even in the cases where TPOT-MDR with the EKF fails to find the best genetic model, it still discovers better genetic models than the other methods in this study.
For a baseline, we compared TPOT-MDR to a tuned logistic regression and XGBoost, as described in Section~\ref{sec:evaluating-tpot-mdr}. Figure~\ref{fig:gametes-comparison} shows that logistic regression consistently fails to find a good model and barely performs better than chance in even the easiest GAMETES datasets. This finding demonstrates a key flaw in using linear models for GWAS: Linear models will not detect higher-order interactions within the dataset unless the interactions are explicitly modeled. Similarly, XGBoost can sometimes find a good model for GWAS datasets if the dataset is heavily filtered beforehand (e.g., to 10s of features), but rapidly degrades in performance as more noisy features are added to the dataset.
\subsection{CGEMS Prostate Cancer Dataset}
The CGEMS prostate cancer GWAS dataset has 219 SNPs and 1,175 samples, and likely falls into the ``lower heritability'' spectrum of the GAMETES datasets. Thus, we would expect to see roughly similar performance on the CGEMS dataset as we saw in the GAMETES datasets with 100 features and 0.1 or 0.05 heritability in Figure~\ref{fig:gametes-comparison}.
As predicted, Figure~\ref{fig:cgems-comparison} shows that XGBoost and logistic regression fail to discover the higher-order interactions within the real-world CGEMS dataset. In contrast, TPOT-MDR with and without the EKF managed to consistently find predictive genetic models for the CGEMS dataset. In particular, TPOT-MDR with the EKF found the best genetic models, largely because the expert knowledge sources (ReliefF, SURF, etc.) contained information about the higher-order interactions between the SNPs that TPOT-MDR was able to harness.
To better understand the genetic models that TPOT-MDR discovered, we analyzed the final model from the highest-scoring TPOT-MDR experiment and visualized the pattern of interactions from the MDR model in Figure~\ref{fig:cgems-mdr-grid}. We see patterns suggestive of statistical epistasis within the model, for example, in the leftmost grid a patient's aggressive (dark grey cells) or non-aggressive (light grey cells) status can only be determined by a combination of \texttt{AKT3\_rs12031994} and \texttt{DIABLO\_rs12870}. Similarly, the pattern of aggressive vs. non-aggressive status between \texttt{AKT3\_rs12031994} and \texttt{DIABLO\_rs12870} varies depending on the state of the third SNP, \texttt{PRKCQ\_rs574512}, which suggests a statistical three-way epistatic interaction between the SNPs. If there were no higher-order interactions between the SNPs, then we would expect a patient's aggressive vs. non-aggressive status to vary independently between the SNPs, i.e., we would expect to see horizontal and vertical bands of aggressive or non-aggressive status within the grids. As previous studies have suggested links between these SNPs and aggressive prostate cancer~\cite{Lavender2012Interaction}, we can use these TPOT-MDR findings to further elucidate the SNPs' higher-order interactions and involvement in the development of aggressive prostate cancer in men of European descent.
\section{Discussion}
In this paper, we introduced a new method and tool, TPOT-MDR, for automating the analysis of complex diseases in genome-wide association studies (GWAS). We developed this tool to aid bioinformaticians so they can more efficiently process and analyze the ever-growing databases of biomedical data. To that end, TPOT-MDR is designed to optimize a series of machine learning operations that are commonly used in biomedical studies, such as filtering the features using expert knowledge sources, combining information from different expert knowledge sources, and modeling the higher-order interactions of the features using Multifactor Dimensionality Reduction (MDR) to predict a patient's outcome. Before, bioinformaticians would typically perform and refine these operations by hand, whereas now TPOT-MDR can relieve the bioinformatician of these tedious duties so they can focus on more challenging tasks.
Even though this paper focuses on the application of TPOT-MDR to GWAS datasets, we note that TPOT-MDR is a general machine learning tool that will work with any dataset that has categorical features and a binary outcome. TPOT-MDR has been released as a free, open source Python tool and is available on GitHub\footnote{https://github.com/rhiever/tpot/tree/tpot-mdr}.
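A hypothetical usage sketch, assuming the \texttt{tpot-mdr} branch mirrors the main TPOT interface (the parameter names follow TPOT proper, and \texttt{X\_train}, \texttt{y\_train}, \texttt{X\_test}, \texttt{y\_test} are assumed given):
\begin{verbatim}
# Hypothetical usage sketch; assumes the tpot-mdr branch mirrors
# the main TPOT interface.
from tpot import TPOTClassifier

tpot_mdr = TPOTClassifier(generations=300, population_size=300,
                          scoring='balanced_accuracy', verbosity=2)
tpot_mdr.fit(X_train, y_train)        # optimize the MDR/EKF pipeline
print(tpot_mdr.score(X_test, y_test))
tpot_mdr.export('tpot_mdr_pipeline.py')
\end{verbatim}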
In Section~\ref{sec:results}, we evaluated TPOT-MDR on a series of simulated and real-world GWAS datasets and found that TPOT-MDR outperforms linear models and XGBoost across all of the datasets (Figures~\ref{fig:gametes-comparison} and~\ref{fig:cgems-comparison}). These findings are important for several reasons. For one, we demonstrated that simple linear models are ill-suited for the analysis of GWAS datasets owing to their inability to model higher-order interactions within the dataset. We also demonstrated that state-of-the-art tree-based machine learning methods---typically thought to be effective at modeling higher-order feature interactions---are similarly ill-suited for modeling GWAS datasets with large numbers of features. Finally, we highlighted the importance of harnessing {\em a priori} expert knowledge to filter GWAS datasets prior to the modeling step, which could aid state-of-the-art machine learning algorithms such as XGBoost in eliminating extraneous features.
Although the results in Section~\ref{sec:results} suggest that TPOT-MDR is superior to the compared methods on every dataset we used, there are some drawbacks to TPOT-MDR that must be considered. For one, linear models and XGBoost are orders of magnitude faster to train and evaluate than TPOT-MDR. As TPOT-MDR uses genetic programming to optimize the series of filtering and modeling operations on the dataset, a single TPOT-MDR run took roughly 3 hours on the CGEMS dataset, whereas XGBoost and logistic regression each took less than a minute. Given that many GWAS datasets often have thousands to hundreds of thousands of SNP features (compared to the 219 in CGEMS), TPOT-MDR will require more work to improve its run time scalability to larger GWAS datasets. Furthermore, TPOT-MDR is highly dependent on its expert knowledge sources. In these experiments, we used expert knowledge sources that specialize in detecting higher-order epistatic interactions, which proved to be critical in both the simulated and real world datasets. If TPOT-MDR is provided with less informative expert knowledge sources, then it will likely perform worse, which we can observe in Figures~\ref{fig:gametes-comparison} and~\ref{fig:cgems-comparison} (TPOT-MDR without EKF vs. TPOT-MDR with EKF).
As shown in Figure~\ref{fig:gametes-comparison}, XGBoost can sometimes model higher-order interactions when the dataset is heavily filtered beforehand. However, the resulting XGBoost model is not nearly as interpretable as with TPOT-MDR. TPOT-MDR produces a model that we can inspect to study the pattern of feature interactions within the dataset (Figure~\ref{fig:cgems-mdr-grid}), whereas XGBoost provides only a complex ensemble of decision trees. This is an important consideration when building machine learning tools for bioinformatics: More often than not, bioinformaticians do not need a black box model that achieves high prediction accuracy on a real-world dataset. Instead, bioinformaticians seek to build a model that can be used as a microscope for understanding the underlying biology of the system they are modeling. In this regard, the models generated by TPOT-MDR can be invaluable for elucidating the higher-order interactions that are often present in complex biological systems.
In conclusion, TPOT-MDR is a promising step forward in using evolutionary algorithms to automate the design of machine learning workflows for bioinformaticians. We believe that evolutionary algorithms (EAs) are poised to excel in the automated machine learning domain, and specialized tools such as TPOT-MDR highlight the strengths of EAs by showing how easily EA solution representations can be adapted to a particular domain.
\section{Acknowledgements}
We thank the Penn Medicine Academic Computing Services for the use of their computing resources. This work was supported by National Institutes of Health grant AI116794.
\newpage
\bibliographystyle{ACM-Reference-Format}